Looking at New Virtual Host Servers (ESXi)
-
@obsolesce said in Looking at New Virtual Host Servers (ESXi):
@networknerd said in Looking at New Virtual Host Servers (ESXi):
Even though it's a small workload, I would still look at storage performance requirements closely before you make a purchase so you get the correct speed of drives. How is the OBR10 with 7200 RPM drives performing today? Would looking at 10K RPM drives improve performance and make a true business impact with your applications?
You're correct that it's important to first look at storage performance requirements closely.
OBR10 with big multi-TB 7200 RPM drives is still slow as hell for a busy hypervisor. I know this for a fact, having experienced it first hand on a host running (back then) about 50 VMs on 6x 8TB 3.5" spinners (RAID10) as the main VM storage, with a bunch of 1.8" SSDs for read/write caching.
When I had the SSD caching disabled for scheduled maintenance, the whole thing crawled. You do not want to run a bunch of VMs on a few 7200 RPM drives. You can't get high-capacity HDDs at 10K+ RPM, so if you're limited to 4-8 or so 3.5" bays, you generally need the big slow ones.
Basically, if you'll be running a large number of VMs on a small number of 7200 RPM spinners, even in a RAID10, you'll typically need some kind of read or read/write caching technology if your VMs are doing any real work.
Having both could be an option: one RAID1 array with SSDs, say 2x 4TB, and one RAID1 with 3.5" HDDs, for example 2x 10TB Ultrastar He10. Fast SSD storage for the VMs that need speed, and plenty of slow storage for the VMs that just need capacity.
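To put rough numbers on why a handful of spinners crawls under VM load, here is a back-of-envelope sketch (not part of the thread, and not a benchmark). The per-drive figure of ~80 random IOPS for a 7200 RPM drive and the RAID10 write penalty of 2 are common rules of thumb, not measurements of any specific array:

```python
# Back-of-envelope IOPS estimate for a RAID10 array of spinning drives.
# Assumptions (rules of thumb, not measured): a 7200 RPM drive delivers
# very roughly 75-100 random IOPS; each RAID10 write costs two physical
# writes (one per mirror side), so the write penalty is 2.

def raid10_iops(drives: int, per_drive_iops: int, write_fraction: float) -> float:
    """Approximate usable random IOPS for a RAID10 array at a given write mix."""
    read_iops = drives * per_drive_iops            # reads can hit any drive
    write_iops = drives * per_drive_iops / 2       # mirror write penalty of 2
    # Harmonic blend of the read and write rates by workload mix.
    return 1 / ((1 - write_fraction) / read_iops + write_fraction / write_iops)

# Six 7200 RPM drives (~80 IOPS each) at a 30% write mix:
print(round(raid10_iops(6, 80, 0.30)))  # 369: why dozens of VMs make it crawl
```

A few hundred IOPS shared across dozens of VMs is exactly the situation where a read/write cache layer carries the workload and its loss makes the whole host crawl.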
-
@phlipelder said in Looking at New Virtual Host Servers (ESXi):
@wrx7m said in Looking at New Virtual Host Servers (ESXi):
Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.
A pair of 6134s would avoid the Windows Server core tax. It's the best bang for the GHz buck and our go-to for most builds.
Need more pRAM? Then the 6134M, to gain access to 3TB per node.
That would reduce CPU performance, though, in order to get access to RAM sizes above 600% of his current need; not much of a benefit.
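For context on the "core tax": Windows Server 2016 and later are licensed per core, with a minimum of 8 core licenses per processor and 16 per server, which is why a pair of 8-core 6134s lands exactly on the licensing floor. A quick sketch of that arithmetic:

```python
# Why a pair of 8-core Xeon Gold 6134s "avoids the Windows Server core tax":
# Windows Server 2016+ is licensed per core, with a floor of 8 core licenses
# per processor and 16 per server, so 2 x 8 cores hits exactly the minimum.

def billable_cores(sockets: int, cores_per_socket: int) -> int:
    """Cores counted for Windows Server licensing under the 8/16 minimums."""
    per_cpu = max(cores_per_socket, 8)   # 8-core floor per processor
    return max(sockets * per_cpu, 16)    # 16-core floor per server

print(billable_cores(2, 8))    # 16: the 6134 pair, no wasted licenses
print(billable_cores(2, 4))    # 16: low-core CPUs still pay for 16
print(billable_cores(2, 12))   # 24: higher core counts add license cost
```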
-
@scottalanmiller said in Looking at New Virtual Host Servers (ESXi):
@wrx7m said in Looking at New Virtual Host Servers (ESXi):
Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.
...That's way more performance per thread (just because these are two generations newer machines) and double the threads and reducing the CPU to CPU overhead...
Are you talking about the workload having to shift from one pCPU to the other as some kind of bottleneck? If so, I've never thought of it this way, but it would be an interesting point.
-
@scottalanmiller said in Looking at New Virtual Host Servers (ESXi):
@phlipelder said in Looking at New Virtual Host Servers (ESXi):
@wrx7m said in Looking at New Virtual Host Servers (ESXi):
Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.
A pair of 6134s would avoid the Windows Server core tax. It's the best bang for the GHz buck and our go-to for most builds.
Need more pRAM? Then the 6134M, to gain access to 3TB per node.
That would reduce CPU performance, though, in order to get access to RAM sizes above 600% of his current need; not much of a benefit.
I'm not sure I understand?
-
@phlipelder said in Looking at New Virtual Host Servers (ESXi):
@scottalanmiller said in Looking at New Virtual Host Servers (ESXi):
@phlipelder said in Looking at New Virtual Host Servers (ESXi):
@wrx7m said in Looking at New Virtual Host Servers (ESXi):
Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.
A pair of 6134s would avoid the Windows Server core tax. It's the best bang for the GHz buck and our go-to for most builds.
Need more pRAM? Then the 6134M, to gain access to 3TB per node.
That would reduce CPU performance, though, in order to get access to RAM sizes above 600% of his current need; not much of a benefit.
I'm not sure I understand?
CPU performance will be impacted a little, which means workloads will run slower, with the only benefit being that if he later needed a RAM increase of a completely absurd amount that would never, ever happen, he theoretically could do it.
While it sounds nice to have access to memory options greater than 1.5TB, it's not of any real-world value to the OP; he doesn't need anywhere close to that. But having slower CPUs will affect him, even if just a tiny bit, in the real world every day that they own the server.
-
Also, with very rare exception, single-CPU approaches use less power, meaning lower operating costs, a better carbon footprint, fewer HVAC demands, less noise, etc.
-
And if we are talking about the theoretical "well, but he could grow significantly" kinds of things, then far more realistically, if he needed more than 1.5TB of RAM he's very likely to need more CPU, too. By going with a single CPU now, he leaves open the option of doubling the CPU in the future as well. In any normal case of needing 2TB of RAM or more, you'd want more CPU than the small amount listed here.
-
@donahue said in Looking at New Virtual Host Servers (ESXi):
@scottalanmiller said in Looking at New Virtual Host Servers (ESXi):
@wrx7m said in Looking at New Virtual Host Servers (ESXi):
Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.
...That's way more performance per thread (just because these are two generations newer machines) and double the threads and reducing the CPU to CPU overhead...
Are you talking about the workload having to shift from one pCPU to the other as some kind of bottleneck? If so, I've never thought of it this way, but it would be an interesting point.
Workloads shifting between CPUs definitely cause a bottleneck, as does a split cache, and in many cases people may be forced to have a workload running partially on one CPU and partially on another, which causes a lot of extra latency.
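One practical takeaway from the NUMA point above: size each VM's vCPU count and memory to fit within a single physical NUMA node, so the hypervisor scheduler never has to split it across sockets. A minimal sketch, using hypothetical host figures:

```python
# Sketch of the NUMA-fit rule implied above: a VM whose vCPUs and RAM fit
# inside one physical NUMA node can run entirely local to one socket's
# cores and memory; one that spans nodes pays cross-socket latency.
# Host figures below are hypothetical, not from the thread.

def fits_one_numa_node(vm_vcpus: int, vm_ram_gb: int,
                       cores_per_node: int, ram_per_node_gb: int) -> bool:
    """True if the VM can be scheduled entirely on a single NUMA node."""
    return vm_vcpus <= cores_per_node and vm_ram_gb <= ram_per_node_gb

# Hypothetical dual-socket host: two nodes of 8 cores / 96 GB each.
print(fits_one_numa_node(8, 64, 8, 96))    # True: stays local, no remote hops
print(fits_one_numa_node(12, 64, 8, 96))   # False: spans sockets, extra latency
```

A single-socket box sidesteps the question entirely: there is only one node, so nothing can span.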
-
@scottalanmiller said in Looking at New Virtual Host Servers (ESXi):
@phlipelder said in Looking at New Virtual Host Servers (ESXi):
@scottalanmiller said in Looking at New Virtual Host Servers (ESXi):
@phlipelder said in Looking at New Virtual Host Servers (ESXi):
@wrx7m said in Looking at New Virtual Host Servers (ESXi):
Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.
A pair of 6134s would avoid the Windows Server core tax. It's the best bang for the GHz buck and our go-to for most builds.
Need more pRAM? Then the 6134M, to gain access to 3TB per node.
That would reduce CPU performance, though, in order to get access to RAM sizes above 600% of his current need; not much of a benefit.
I'm not sure I understand?
CPU performance will be impacted a little, which means workloads will run slower, with the only benefit being that if he later needed a RAM increase of a completely absurd amount that would never, ever happen, he theoretically could do it.
While it sounds nice to have access to memory options greater than 1.5TB, it's not of any real-world value to the OP; he doesn't need anywhere close to that. But having slower CPUs will affect him, even if just a tiny bit, in the real world every day that they own the server.
Okay, I understand. The 6134 series is equivalent to the 3/7 series in the E5-2600 CPUs: lower core count, higher GHz parts. We almost always deploy for GHz before core count, unless business needs and budget allow for the top-end processors that have both.
-
While at Ignite, Dell had their new R7415, a single-socket AMD EPYC 2U, on display. There's also a 1U version in the R6415.
Because of the extra PCIe lanes available in the EPYC CPU setup, along with the extra memory channels, one can get close to dual-processor performance out of a single-CPU setup. So: go 16-core EPYC single socket, load up the needed memory and storage, and off you go.
I suggest having a look at this setup. We are ...
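For reference, here are the per-socket platform resources behind the single-socket EPYC argument, using the commonly cited figures for first-generation EPYC (Naples) and Xeon Scalable (Skylake-SP); check vendor spec sheets for exact SKU details:

```python
# Commonly cited per-platform I/O and memory resources behind the
# single-socket EPYC argument (first-gen EPYC vs. Xeon Scalable).
# Figures are the widely published platform numbers, not SKU-specific.

platforms = {
    "EPYC (1 socket)":          {"pcie_lanes": 128, "mem_channels": 8},
    "Xeon Scalable (1 socket)": {"pcie_lanes": 48,  "mem_channels": 6},
    "Xeon Scalable (2 socket)": {"pcie_lanes": 96,  "mem_channels": 12},
}

for name, res in platforms.items():
    print(f"{name}: {res['pcie_lanes']} PCIe lanes, "
          f"{res['mem_channels']} memory channels")
```

A single EPYC socket exceeds the PCIe lanes of a dual-socket Xeon setup and matches a fair share of its memory channels, which is the basis for the "near dual-processor performance from one socket" claim.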
-
@phlipelder said in Looking at New Virtual Host Servers (ESXi):
While at Ignite, Dell had their new R7415, a single-socket AMD EPYC 2U, on display. There's also a 1U version in the R6415.
Because of the extra PCIe lanes available in the EPYC CPU setup, along with the extra memory channels, one can get close to dual-processor performance out of a single-CPU setup. So: go 16-core EPYC single socket, load up the needed memory and storage, and off you go.
I suggest having a look at this setup. We are ...
Much like how IBM and Oracle have been designing servers for years.
-
@wrx7m There are a lot of models which should fit your needs. You can find more information on this page: https://www.starwindsoftware.com/starwind-hyperconverged-appliance .
Also, you can request a demo on that page to see how HA works in real life - it's free :smiling_face_with_smiling_eyes: