Food for thought: Fixing an over-engineered environment
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.
Probably not. But you're talking yourself out of it now so I don't need to say anything else.
-
@coliver said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.
Probably not. But you're talking yourself out of it now so I don't need to say anything else.
Yeah, during this thought process, I'll likely be talking myself out of most things that would be just a virtualized version of current architecture.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@coliver said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.
Probably not. But you're talking yourself out of it now so I don't need to say anything else.
Yeah, during this thought process, I'll likely be talking myself out of most things that would be just a virtualized version of current architecture.
Definitely a hard thing to get over at times.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@coliver said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.
Probably not. But you're talking yourself out of it now so I don't need to say anything else.
Yeah, during this thought process, I'll likely be talking myself out of most things that would be just a virtualized version of current architecture.
So green field it. Ignore current infrastructure for a bit. How would you make this work in an ideal environment? Then look at where what you have now differs from that ideal. Are those differences necessary? Would moving them toward ideal adversely affect users?
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@coliver
Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.
If the VMs are on the same host, there's no need to give them both internal and external virtual NICs. They will communicate over the external virtual switch, but the traffic won't go to the physical NIC or out to the LAN.
You only want an internal switch between VMs when they are only supposed to talk with each other and not be on the LAN.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
On performance, you're right about VLANs, they're designed for security. I guess you could argue you'd be reducing potential broadcast traffic, but in this situation that wouldn't matter, as the number of devices is the same. It looks more and more like the separate-network-for-server-to-server communication is unnecessary.
I didn't think they were for security...
I thought VLANs were purely for segregation of traffic to make quality of service/planning better. Yeah sure, something on VLAN1 won't interact with VLAN2... but it's the same switch/hardware/cables. So I presume if I can get access to that kit with Wireshark or something I'd be able to get the traffic regardless of VLANs, and the fact they are VLANs wouldn't matter... Could be wrong here though (probably am)...
-
@jimmy9008 said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
On performance, you're right about VLANs, they're designed for security. I guess you could argue you'd be reducing potential broadcast traffic, but in this situation that wouldn't matter, as the number of devices is the same. It looks more and more like the separate-network-for-server-to-server communication is unnecessary.
I didn't think they were for security...
I thought VLANs were purely for segregation of traffic to make quality of service/planning better.
No, that's the myth. They actually make those things worse. They make planning harder and confuse people about QoS. They add overhead and bottlenecks, so you have to plan more and do more QoS just to overcome the VLAN problems. VLANs are for security in some limited cases and for management on a massive scale.
-
@jimmy9008 said in Food for thought: Fixing an over-engineered environment:
but it's the same switch/hardware/cables. So I presume if I can get access to that kit with Wireshark or something I'd be able to get the traffic regardless of VLANs, and the fact they are VLANs wouldn't matter... Could be wrong here though (probably am)...
That's subnets that you are thinking of. If you can do that with a VLAN, it's not a VLAN. The definition of a VLAN means that can't be done.
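The definitional point above can be illustrated with a toy model: a VLAN-aware switch only ever forwards a frame to ports that are members of the same VLAN, so even on shared hardware, traffic on VLAN 1 never reaches a port in VLAN 2. The port names and VLAN IDs below are purely illustrative, not from this thread.

```python
# Toy sketch of VLAN separation on a single switch. A frame tagged
# with one VLAN ID is only flooded to other ports in that VLAN, which
# is why a sniffer plugged into a VLAN 2 port can't see VLAN 1 frames.

def forward(frame_vlan, ingress_port, port_vlans):
    """Return the ports a frame may be flooded to.

    port_vlans maps port name -> VLAN ID of that access port.
    The returned list never contains a port from another VLAN.
    """
    return [p for p, vlan in port_vlans.items()
            if vlan == frame_vlan and p != ingress_port]

ports = {"p1": 1, "p2": 1, "p3": 2, "p4": 2}

# A broadcast arriving on p1 (VLAN 1) floods only to other VLAN 1 ports.
print(forward(1, "p1", ports))  # ['p2'] -- p3 and p4 see nothing

# Likewise, VLAN 2 traffic stays among VLAN 2 ports on the same box.
print(forward(2, "p3", ports))  # ['p4']
```

A subnet, by contrast, is just an addressing convention; nothing in this model stops a host on the same segment from capturing frames for another subnet, which is the distinction being drawn above.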
-
Okay, I've not read everything but starting from the top...
Networking - VLANs are gone. You describe very clearly in the OP that they serve no purpose; don't talk about them again. Gone. Done. Over. One Big Flat Network, OBFN.
Servers - Definitely no need for more than one. Going down to just one will significantly improve your performance and your reliability. Right now your apps depend on the separate database server which depends on your SAN. That's an inverted pyramid with another tier. So instead of the normal three tiers of risk, you have five! Collapsing that down to one will make you so much more reliable. Hyper-V is fine. So is KVM.
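The tier-stacking argument can be made concrete with a little availability arithmetic: when the app needs every tier in a dependency chain to be up, the availabilities multiply, so each added tier lowers the whole stack's uptime. The 99.9%-per-tier figure below is an assumption for illustration only, not a measurement of this environment.

```python
# If an app depends on a chain of components that must all be up,
# overall availability is the product of the per-tier availabilities.
# Per-tier availability of 99.9% is an illustrative assumption.

def chain_availability(per_tier=0.999, tiers=1):
    return per_tier ** tiers

# Compare a collapsed single-host design with the three- and
# five-tier stacks discussed above.
for tiers in (1, 3, 5):
    a = chain_availability(tiers=tiers)
    print(f"{tiers} tiers: {a:.4%} available")
```

The absolute numbers matter less than the direction: every tier you remove from the chain (separate database server, SAN) strictly improves the expected availability of the whole stack.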
Storage - This is easy, local disks. Either all SSD or one SSD pool and one spinner pool. That's all.
-
REDIS should be on Linux, REDIS on Windows is crazy. It's expensive and slow.
-
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
REDIS should be on Linux, REDIS on Windows is crazy. It's expensive and slow.
I just finished reading a little on REDIS yesterday, and when I asked myself why we're running it on Windows, the answer came to me. Previous and most of current regime (I'm the exception) = if there's a way to do X with Microsoft, use Microsoft.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
REDIS should be on Linux, REDIS on Windows is crazy. It's expensive and slow.
I just finished reading a little on REDIS yesterday, and when I asked myself why we're running it on Windows, the answer came to me. Previous and most of current regime (I'm the exception) = if there's a way to do X with Microsoft, use Microsoft.
sigh
I'd have the same reaction to anyone that just defaulted to the "if there's a way to do X with CentOS, use CentOS" mentality.
-
@travisdh1 said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
REDIS should be on Linux, REDIS on Windows is crazy. It's expensive and slow.
I just finished reading a little on REDIS yesterday, and when I asked myself why we're running it on Windows, the answer came to me. Previous and most of current regime (I'm the exception) = if there's a way to do X with Microsoft, use Microsoft.
sigh
I'd have the same reaction to anyone that just defaulted to the "if there's a way to do X with CentOS, use CentOS" mentality.
Except one is saving the company money and the other is costing them? Not sure if that's a direct corollary.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
REDIS should be on Linux, REDIS on Windows is crazy. It's expensive and slow.
I just finished reading a little on REDIS yesterday, and when I asked myself why we're running it on Windows, the answer came to me. Previous and most of current regime (I'm the exception) = if there's a way to do X with Microsoft, use Microsoft.
Don't ask "why not Microsoft?" Put it in dollars and ask why you're spending so much and still aren't able to maintain the latest versions. Don't mention the tech; that's not how IT communicates. Make them see business terms; make them make business decisions.
-
@travisdh1 said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
REDIS should be on Linux, REDIS on Windows is crazy. It's expensive and slow.
I just finished reading a little on REDIS yesterday, and when I asked myself why we're running it on Windows, the answer came to me. Previous and most of current regime (I'm the exception) = if there's a way to do X with Microsoft, use Microsoft.
sigh
I'd have the same reaction to anyone that just defaulted to the "if there's a way to do X with CentOS, use CentOS" mentality.
Right, the reaction is "tech over business".
-
I agree with the idea of thinking in terms of greenfield and using the hardware I have, rather than looking at how stuff is structured now and simply trying to replicate that but with VMs.
Here's my current thought process about what VMs to make and storage.
Current Server 2 becomes the new Hyper-V Host
- This server has the best processor and the most RAM of the three.
- Combine the Intel S3500 SSDs from the other two servers and create a RAID 5 array of the six 300 GB disks to have 1.5 TB of usable storage. This would leave two unused drive bays.
- Between all of the servers right now 971 GB of storage is used. Perhaps RAID 6 with 1.2 TB of usable storage makes more sense.
- Have five VM guests: IIS Server, SQL Server, REDIS, PostFix server, VM for what will become our backup solution
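The usable-capacity figures above can be checked with the standard parity-RAID formulas: RAID 5 sacrifices one disk's worth of capacity to parity, RAID 6 sacrifices two. A quick sketch:

```python
# Usable capacity for parity RAID levels: RAID 5 loses one disk's
# worth of capacity to parity, RAID 6 loses two.

def raid5_usable(disks, size_gb):
    return (disks - 1) * size_gb

def raid6_usable(disks, size_gb):
    return (disks - 2) * size_gb

# Six 300 GB Intel S3500s, as in the plan above:
print(raid5_usable(6, 300))  # 1500 GB = 1.5 TB usable
print(raid6_usable(6, 300))  # 1200 GB = 1.2 TB usable
```

With 971 GB currently in use, either level fits; RAID 6 trades 300 GB of capacity for surviving a second disk failure during a rebuild.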
For the REDIS, PostFix, and other VM, I plan on giving them each one VHD. I'm curious about your folks' opinions on storage for IIS and SQL Server.
Current storage for the physical SQL Server:
- Two SSDs in RAID 1 presenting a disk where the OS and SQL Server application are installed
- Four SSDs in RAID 10 presenting a disk where it appears the actual database files are stored, as well as SQL Server's backups.
Current storage for the physical IIS Server:
- Two SSDs in RAID 1 presenting a disk where the OS and applications are installed
- Four Winchester disks in RAID 10 presenting a disk where the files used by our web application / IIS are stored.
For a virtualized SQL Server and IIS, does it make sense to have separate VHDs for the OS/application and the actual database files/virtual folders? Or would it be better to have a single VHD with separate partitions? Perhaps the greater question: is there any advantage to having such separation?
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
For a virtualized SQL Server and IIS, does it make sense to have separate VHDs for the OS/application and the actual database files/virtual folders?
These should not be lumped together; these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage-heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
For SQL Server, it likely still makes sense to have a VHD for the OS and one for the data and one for the logs.
-
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
Perhaps the greater question: is there any advantage to having such separation?
With SQL Server separation you can limit log growth, or snapshot data separately from other things that you don't care about in the same way.
-
REDIS and Postfix are way more likely, as stateful machines, to need special consideration compared to IIS. IIS is like Apache.
-
@scottalanmiller said in Food for thought: Fixing an over-engineered environment:
@eddiejennings said in Food for thought: Fixing an over-engineered environment:
For a virtualized SQL Server and IIS, does it make sense to have separate VHDs for the OS/application and the actual database files/virtual folders?
These should not be lumped together; these are polar opposite workloads. IIS is a stateless system with essentially zero storage needs. It should be treated very differently than your most storage-heavy system, SQL Server.
For IIS, you'd just use a single VHD and that's it. Easy peasy.
For SQL Server, it likely still makes sense to have a VHD for the OS and one for the data and one for the logs.
I don't think I was clear. They are not lumped together. They'll be two separate VMs.