Food for thought: Fixing an over-engineered environment
-
@jaredbusch Not as disastrous as what we have, but not good either. This thread is my brainstorm thread :D. I'll put an updated diagram up tomorrow, when I return to work.
On the switch, you're right. Even though the router can probably handle the traffic, it's better to let each device do its own job. Also, since I have plenty of NICs on each physical server, I can use teaming as you suggested -- which is what we have now, but with the needless VLANs. The Dell switch I have supports LACP, which is what we're using for the current teams.
On the storage configuration of the servers (the C: and D: drives), that's what I was considering. I'm glad you mentioned putting the config files, etc., on the same partition as the VHDs; thinking about it, I don't see any advantage to keeping the VHDs separate from everything else.
From my OP, our main dev / one of my bosses (yes, that's as screwy as it sounds) is envisioning this idea of eventually having staging VMs, which could perhaps create a use for Server 3. My main focus now is fixing the terrible environment from the OP.
-
Diagram update following lessons in backup design.
One other thing for me to consider is that each of these physical servers has 8 NICs (excluding the IPMI NIC). That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?
Four is the max you can consider in a load-balancing team. If you move to pure failover, you can do unlimited. Beyond four, the algorithms become so inefficient that you don't get faster, and by six, you start actually getting slower. Most people only go to two; four is the absolute max to consider. Since you have eight (how did that happen?), you might as well do four. But the rest are wasted or could be used for a different network connection entirely.
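To illustrate why a load-balancing team behaves this way, here is a minimal sketch of the general idea behind hash-based (address/port) load distribution. This is not any vendor's actual algorithm; the function name and hash choice are my own for illustration. The key point: a *flow* is hashed once and pinned to one NIC, so a single transfer never runs faster than one link, while many concurrent flows spread across the team.

```python
# Sketch of hash-based NIC team load balancing (illustrative only,
# not any vendor's real algorithm). Each flow's address/port tuple is
# hashed to pick a team member, so one flow always lands on one NIC.
import hashlib

def pick_nic(src_ip, src_port, dst_ip, dst_port, team_size):
    """Deterministically map a flow's 4-tuple to one NIC index."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % team_size

team_size = 4  # four 1 Gb NICs in the team
# Eight concurrent flows from one client to an SMB server (port 445).
flows = [("10.0.0.5", 40000 + i, "10.0.0.9", 445) for i in range(8)]
assignments = [pick_nic(*f, team_size) for f in flows]
print(assignments)  # each flow pinned to one NIC index in 0..3
```

Because the mapping is deterministic per flow, re-hashing the same 4-tuple always returns the same NIC; that determinism is also why adding more team members only helps when there are many distinct flows to spread around.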
-
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?
Four is the max you can consider in a load balancing team. If you move to pure failover, you can do unlimited. Beyond four, the algorithms become so inefficient that you don't get faster, and by six, you start actually getting slower. Most people only go to two, four is the absolute max to consider. Since you have eight (how did that happen?) you might as well do four. But the rest are wasted or could be used for a different network connection entirely.
Wouldn't this be 4 max per vNetwork in the VM host?
-
If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.
-
@dashrender said in Food for thought: Fixing an over-engineered environment:
If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.
Where "might be" = "long past due."
-
@dashrender said in Food for thought: Fixing an over-engineered environment:
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?
Four is the max you can consider in a load balancing team. If you move to pure failover, you can do unlimited. Beyond four, the algorithms become so inefficient that you don't get faster, and by six, you start actually getting slower. Most people only go to two, four is the absolute max to consider. Since you have eight (how did that happen?) you might as well do four. But the rest are wasted or could be used for a different network connection entirely.
Wouldn't this be 4 max per vNetwork in the VM host?
Correct; if the connections are independent, you get to do another four.
-
@dashrender said in Food for thought: Fixing an over-engineered environment:
If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.
I don't need more than 1 Gb, judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@dashrender said in Food for thought: Fixing an over-engineered environment:
If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.
I don't need more than 1 Gb, judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.
That's not how things work. Teaming is for bandwidth; not teaming is for latency. Working in banks, we specifically avoided teaming because it increases latency, slowing down network traffic on a per-packet basis. Everything is a trade-off, or there wouldn't be options.
It's like adding more memory to your server. More stuff can go in memory, but it's also more memory that the CPU has to manage, which adds load to the server and turns into latency for processes.
-
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@dashrender said in Food for thought: Fixing an over-engineered environment:
If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.
I don't need more than 1 Gb, judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.
That's not how things work. Teaming is for bandwidth; not teaming is for latency. Working in banks, we specifically avoided teaming because it increases latency, slowing down network traffic on a per-packet basis. Everything is a trade-off, or there wouldn't be options.
It's like adding more memory to your server. More stuff can go in memory, but it's also more memory that the CPU has to manage, which adds load to the server and turns into latency for processes.
That makes sense. Performance was a poor choice of words.
-
I never use IPMI.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@dashrender said in Food for thought: Fixing an over-engineered environment:
If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.
I don't need more than 1 Gb, judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.
This is not only bad for the reasons Scott said, but it's also a waste of switch ports and resources.
If you only need 1 Gb, then I'd remove the card (less power use) and only use two onboard NICs.
-
@dashrender said in Food for thought: Fixing an over-engineered environment:
This is not only bad for the reasons Scott said, but it's also a waste of switch ports and resources.
If you only need 1 Gb, then I'd remove the card (less power use) and only use two onboard NICs.
The whole situation is a waste of resources. I'm looking to see how to best utilize them.
-
@jaredbusch said in Food for thought: Fixing an over-engineered environment:
I never use IPMI.
@JaredBusch thought IPMI was something special for Hyper-V, not that you were talking about the iDRAC-like interface -- he stands corrected and uses the iDRAC-like interface as much as he can.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
This is not only bad for the reasons Scott said, but it's also a waste of Switch ports and resources.
If you only need 1 Gb, then I'd remove the card (less power use) and only use two onboard NICs.
The whole situation is a waste of resources. I'm looking to see how to best utilize them.
Right, so for this part, the best would likely be two 1 Gb (on board) NICs in a team.
-
@jaredbusch said in Food for thought: Fixing an over-engineered environment:
I never use IPMI.
I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@jaredbusch said in Food for thought: Fixing an over-engineered environment:
I never use IPMI.
I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:
I've had very good luck with it.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@jaredbusch said in Food for thought: Fixing an over-engineered environment:
I never use IPMI.
I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:
What doesn't it give you that you want?
-
@dashrender said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@jaredbusch said in Food for thought: Fixing an over-engineered environment:
I never use IPMI.
I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:
What doesn't it give you that you want?
I might have to re-evaluate it. I've only used the IPMI View Java app for the virtual KVM console. I'm looking at its web portal now, and it looks pretty good. I would like a way to see RAID health status and configuration, but perhaps that's not a reasonable want.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@dashrender said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@jaredbusch said in Food for thought: Fixing an over-engineered environment:
I never use IPMI.
I've been underwhelmed with it. If you're curious, this is the motherboard that's on all of these servers:
What doesn't it give you that you want?
I might have to re-evaluate it. I've only used the IPMI View Java app for the virtual KVM console. I'm looking at its web portal now, and it looks pretty good. I would like a way to see RAID health status and configuration, but perhaps that's not a reasonable want.
It is not, since the RAID controller is not part of the hardware that the IPMI sees.