Food for thought: Fixing an over-engineered environment
-
@dafyre said in Food for thought: Fixing an over-engineered environment:
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
For a virtualized SQL server and IIS, does it make sense to have separate VHDs for the OS / application and the actual database files / virtual folders?
These should not be lumped together, these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
For SQL Server, it likely still makes sense to have a VHD for the OS and one for the data and one for the logs.
@scottalanmiller on the SQL Server -- Any reason to not use a single larger VHD and partition it as opposed to multiple VHDs?
Yes. Partitions are blind to Hyper-V so if you do that you lose power.
-
I meant lumped together in your thinking and approach. You have many VMs, but you're treating those two like they might have special storage needs. One is your most storage-intensive, the other is your least. Why treat the two of them as special and not, for example, your other database server?
I see what you're saying.
These should not be lumped together, these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
There's about 500 GB of data that's stored on that IIS server which is used in some way by our application (working on figuring out exactly what/how). IIS has a virtual directory that points to it. Should that data live on the one VHD that's for the IIS VM?
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
I meant lumped together in your thinking and approach. You have many VMs, but you're treating those two like they might have special storage needs. One is your most storage-intensive, the other is your least. Why treat the two of them as special and not, for example, your other database server?
I see what you're saying.
These should not be lumped together, these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
There's about 500 GB of data that's stored on that IIS server which is used in some way by our application (working on figuring out exactly what/how). IIS has a virtual directory that points to it. Should that data live on the one VHD that's for the IIS VM?
Why is IIS storing stuff?
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
I meant lumped together in your thinking and approach. You have many VMs, but you're treating those two like they might have special storage needs. One is your most storage-intensive, the other is your least. Why treat the two of them as special and not, for example, your other database server?
I see what you're saying.
These should not be lumped together, these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
There's about 500 GB of data that's stored on that IIS server which is used in some way by our application (working on figuring out exactly what/how). IIS has a virtual directory that points to it. Should that data live on the one VHD that's for the IIS VM?
Yeah, this is how our old EHR was - all the scanned-in documents were stored on the IIS server. After 7 years we had 800 GB of documents there - it was a nightmare.
-
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
I meant lumped together in your thinking and approach. You have many VMs, but you're treating those two like they might have special storage needs. One is your most storage-intensive, the other is your least. Why treat the two of them as special and not, for example, your other database server?
I see what you're saying.
These should not be lumped together, these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
There's about 500 GB of data that's stored on that IIS server which is used in some way by our application (working on figuring out exactly what/how). IIS has a virtual directory that points to it. Should that data live on the one VHD that's for the IIS VM?
Why is IIS storing stuff?
Here's where my ignorance of terms is probably going to shine. It's probably our application (which is on the IIS server) storing stuff, not IIS itself. I'm working on getting information from the folks who built this in the first place to figure out what's going on.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
I meant lumped together in your thinking and approach. You have many VMs, but you're treating those two like they might have special storage needs. One is your most storage-intensive, the other is your least. Why treat the two of them as special and not, for example, your other database server?
I see what you're saying.
These should not be lumped together, these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
There's about 500 GB of data that's stored on that IIS server which is used in some way by our application (working on figuring out exactly what/how). IIS has a virtual directory that points to it. Should that data live on the one VHD that's for the IIS VM?
Why is IIS storing stuff?
Here's where my ignorance of terms is probably going to shine. It's probably our application (which is on the IIS server) storing stuff, not IIS itself. I'm working on getting information from the folks who built this in the first place to figure out what's going on.
Your app, I’m sure, runs in IIS. But generally, though there are exceptions, you don’t want it storing things on the app server. That’s what the database is for.
-
@dashrender said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
I meant lumped together in your thinking and approach. You have many VMs, but you're treating those two like they might have special storage needs. One is your most storage-intensive, the other is your least. Why treat the two of them as special and not, for example, your other database server?
I see what you're saying.
These should not be lumped together, these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
There's about 500 GB of data that's stored on that IIS server which is used in some way by our application (working on figuring out exactly what/how). IIS has a virtual directory that points to it. Should that data live on the one VHD that's for the IIS VM?
Yeah, this is how our old EHR was - all the scanned-in documents were stored on the IIS server. After 7 years we had 800 GB of documents there - it was a nightmare.
Cheesy
-
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
I meant lumped together in your thinking and approach. You have many VMs, but you're treating those two like they might have special storage needs. One is your most storage-intensive, the other is your least. Why treat the two of them as special and not, for example, your other database server?
I see what you're saying.
These should not be lumped together, these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
There's about 500 GB of data that's stored on that IIS server which is used in some way by our application (working on figuring out exactly what/how). IIS has a virtual directory that points to it. Should that data live on the one VHD that's for the IIS VM?
Why is IIS storing stuff?
Here's where my ignorance of terms is probably going to shine. It's probably our application (which is on the IIS server) storing stuff, not IIS itself. I'm working on getting information from the folks who built this in the first place to figure out what's going on.
Your app, I’m sure, runs in IIS. But generally, though there are exceptions, you don’t want it storing things on the app server. That’s what the database is for.
Makes 100% sense. Unfortunately, that's not going to be fixed. At least not in my lifetime :(.
-
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@dashrender said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
I meant lumped together in your thinking and approach. You have many VMs, but you're treating those two like they might have special storage needs. One is your most storage-intensive, the other is your least. Why treat the two of them as special and not, for example, your other database server?
I see what you're saying.
These should not be lumped together, these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
There's about 500 GB of data that's stored on that IIS server which is used in some way by our application (working on figuring out exactly what/how). IIS has a virtual directory that points to it. Should that data live on the one VHD that's for the IIS VM?
Yeah, this is how our old EHR was - all the scanned-in documents were stored on the IIS server. After 7 years we had 800 GB of documents there - it was a nightmare.
Cheesy
They were so completely and utterly unprepared for the amount of documentation we would have; it was ridiculous!
-
EDIT: Design is now deprecated
Current idea.
- Take the SSDs (Intel S3500) out of Servers 1 and 3, and add them to Server 2. Put these into a RAID 5, providing 1.5 TB of storage.
- Keep Winchester disks in Server 1 (either keep the RAID 10 or go to RAID 6).
- Put the four Intel S3700 SSDs into Server 3.
Server 1 serves block storage to the VM in Server 2 that would be running [the backup software]
Server 2 (with its massive RAM and better processor) becomes a Hyper-V hypervisor
Server 3 currently doesn't have a purpose.
I remove the Dell PowerConnect 24-port switch, as it doesn't seem to serve one, and replace the Cisco ASA with an EdgeRouter Pro.
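As a sanity check on the capacity figure above, here is a quick sketch of the usable-capacity math for the RAID levels being considered. The drive count and size (four 480 GB S3500s) are my assumption to match the quoted 1.5 TB; adjust to the actual hardware:

```python
def raid5_usable(drives: int, size_gb: int) -> int:
    """RAID 5 spends one drive's worth of space on parity: usable = (N - 1) drives."""
    return (drives - 1) * size_gb

def raid6_usable(drives: int, size_gb: int) -> int:
    """RAID 6 spends two drives' worth of space on parity: usable = (N - 2) drives."""
    return (drives - 2) * size_gb

def raid10_usable(drives: int, size_gb: int) -> int:
    """RAID 10 mirrors every drive: usable = half the raw capacity."""
    return (drives // 2) * size_gb

# Four 480 GB SSDs in RAID 5 -> 1440 GB, roughly the 1.5 TB quoted above.
print(raid5_usable(4, 480))   # 1440
# The same four drives in RAID 10 would give only 960 GB.
print(raid10_usable(4, 480))  # 960
```

The trade-off for the extra capacity is the usual RAID 5 one: a single-parity array of large drives has a longer, riskier rebuild than a mirror.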
-
@eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.
Server 3 could be your testing/development area, and with Hyper-V installed on all of them, you could quickly restore images to it if one of the other servers goes down. Yeah, the performance would hurt, but at least you'd be running until actual hardware replacements arrive.
-
@travisdh1 said in Food for thought: Fixing an over-engineered environment:
@eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.
Server 3 could be your testing/development area, and with Hyper-V installed on all of them, you could quickly restore images to it if one of the other servers goes down. Yeah, the performance would hurt, but at least you'd be running until actual hardware replacements arrive.
I see. So I'd also make Server 1 a hypervisor with one VM, and that VM would provide block storage to the VM that would be running the backup software. That makes sense; on second thought, there's no reason not to just make it a hypervisor.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@travisdh1 said in Food for thought: Fixing an over-engineered environment:
@eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.
Server 3 could be your testing/development area, and with Hyper-V installed on all of them, you could quickly restore images to it if one of the other servers goes down. Yeah, the performance would hurt, but at least you'd be running until actual hardware replacements arrive.
I see. So I'd also make Server 1 a hypervisor with one VM, and that VM would provide block storage to the VM that would be running the backup software. That makes sense; on second thought, there's no reason not to just make it a hypervisor.
Yep.
Where you'd see a major benefit in this case is if you ever needed to restore the backup storage from another backup. The hardware doesn't matter so long as it has the needed amount of storage.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@travisdh1 said in Food for thought: Fixing an over-engineered environment:
@eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.
Server 3 could be your testing/development area, and with Hyper-V installed on all of them, you could quickly restore images to it if one of the other servers goes down. Yeah, the performance would hurt, but at least you'd be running until actual hardware replacements arrive.
I see. So I'd also make Server 1 a hypervisor with one VM, and that VM would provide block storage to the VM that would be running the backup software. That makes sense; on second thought, there's no reason not to just make it a hypervisor.
Yup, basically no exception to installing the hypervisor first.
-
@travisdh1 said in Food for thought: Fixing an over-engineered environment:
@eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.
Can add stability, too.
-
This is a disaster and needs to be updated from what you learned today in your other thread.
But beyond that, you talk about putting the connections into the router. This is wrong. A router routes traffic; it is not a switch.
You still require a switch.
You put multiple NICs in a team and plug those into the switch. If the switch supports full LACP, then you can get awesome performance; if not, switch-independent mode is the best solution.
You install Hyper-V Server 2016 on all three boxes.
On all Servers:
Create 1 partition for the Hyper-V Server drive C (I use 80 GB, but I think the technical minimum is 32 GB)
Create 1 partition from the rest of the space to mount as drive D inside Hyper-V
This D drive is where all of the guest files will be stored: config files as well as replicas, checkpoints (snapshots), and the virtual hard disks.
On Server 2, restore all your current servers as new VMs.
On Server 1, create a small VHDX to install Windows and run Veeam.
On Server 1, create a large VHDX to house the backups. This will be the D drive inside the Veeam guest.
On Server 3, set up a test environment or sell the hardware. You could use Hyper-V replication, but you need SA on the original guest VMs or full licenses for the replicas.
-
@jaredbusch Not as disastrous as what we have, but not good either. This thread is my brainstorm thread :D. I'll put an updated diagram up tomorrow, when I return to work.
On the switch, you're right. Even though the router could probably handle the traffic, it's better to let it do its job and the switch do likewise. Also, since I have plenty of NICs on each physical server, I can utilize teaming as you suggested -- which is what we have now, but with the needless VLANs. The Dell switch I have supports LACP, which is what we're using for the current teams.
On the storage configuration of the servers (drives C and D), that's what I was considering. I'm glad you mentioned putting the config files, etc., on the same partition as the VHDs; as I think about it, I don't see any advantage to keeping the VHDs separate from everything else.
From my OP, our main dev / one of my bosses (yes, that's as screwy as it sounds) is envisioning this idea of eventually having staging VMs, which could perhaps create a use for Server 3. My main focus now is fixing the terrible environment from the OP.
-
Diagram update following lessons in backup design.
One other thing for me to consider is that each of these physical servers has 8 NICs (excluding the IPMI NIC). That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?
Four is the max you can consider in a load-balancing team. If you move to pure failover, you can do unlimited. Beyond four, the algorithms become so inefficient that you don't get faster, and by six, you actually start getting slower. Most people only go to two; four is the absolute max to consider. Since you have eight (how did that happen?), you might as well do four. But the rest are wasted, or could be used for a different network connection entirely.
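Part of why a bigger team doesn't make anything faster: load-balancing modes (LACP included) hash each flow to a single member NIC, so one TCP stream is pinned to one link and only many concurrent flows spread across the team. A toy sketch of the idea; the hash function and addresses here are illustrative, not any vendor's actual teaming algorithm:

```python
import hashlib
from collections import Counter

def pick_nic(src: str, dst: str, nics: int) -> int:
    """Hash a flow's endpoints to choose a team member, LACP-style.
    The same flow always hashes to the same NIC."""
    digest = hashlib.sha256(f"{src}->{dst}".encode()).digest()
    return digest[0] % nics

# A single flow lands on the same link every time, no matter the team size,
# so one stream never exceeds one NIC's bandwidth.
assert pick_nic("10.0.0.5", "10.0.0.9", 4) == pick_nic("10.0.0.5", "10.0.0.9", 4)

# Many concurrent flows, however, do spread across the 4 team members.
flows = [(f"10.0.0.{i}", "10.0.0.200") for i in range(1, 101)]
spread = Counter(pick_nic(s, d, 4) for s, d in flows)
print(spread)  # flows distribute across NICs 0-3
```

This is also why pure failover has no member limit: it does no per-flow hashing at all, just an active link and standbys.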
-
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?
Four is the max you can consider in a load-balancing team. If you move to pure failover, you can do unlimited. Beyond four, the algorithms become so inefficient that you don't get faster, and by six, you actually start getting slower. Most people only go to two; four is the absolute max to consider. Since you have eight (how did that happen?), you might as well do four. But the rest are wasted, or could be used for a different network connection entirely.
Wouldn't this be 4 max per vNetwork in the VM host?