Help choosing replacement Hyper-V host machines and connected storage
-
@JohnFromSTL said:
I will have to justify why the cheaper SATA drives aren't a good idea and will just put my foot down if necessary.
Well, start by justifying why SAS drives are better here. If you can't articulate to the techs why SAS would be better, maybe they aren't. The value of SAS is determined by the IOPS that you need and the type of workload. NL-SAS is often so close in price to SATA that we generally start there, because the price increase is often around 1% while the performance gain is generally closer to 10%.
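As a rough illustration of that price-versus-performance math, here is a minimal sketch; the prices and per-drive IOPS are assumed example figures mirroring the ~1% / ~10% deltas above, not quotes for any specific drive:

```python
# Price-per-IOPS comparison for a hypothetical 12-drive array.
# Prices and per-drive IOPS are assumed example figures, not real quotes;
# they mirror the ~1% price / ~10% performance deltas mentioned above.

drives = {
    "SATA 7200 RPM":   {"price": 250, "iops": 75},
    "NL-SAS 7200 RPM": {"price": 253, "iops": 83},
}

drive_count = 12

for name, d in drives.items():
    total_price = d["price"] * drive_count
    total_iops = d["iops"] * drive_count   # read IOPS scale roughly with spindle count
    print(f"{name}: ${total_price} for ~{total_iops} read IOPS "
          f"(${total_price / total_iops:.2f} per IOPS)")
```

For a small per-drive premium, the cost per IOPS actually drops, which is why NL-SAS is usually the starting point.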
-
Four total servers meaning two hosts with four VMs? I'm unclear why there are four hosts.
-
@scottalanmiller said:
@JohnFromSTL said:
I don't feel comfortable using SATA drives in these servers, and I have zero experience with NL SAS drives. Any thoughts on this?
SATA is just a protocol, where are you getting a "concern" from?
NL-SAS is just a trade term for SAS at 7200 RPM; it's not something you have "experience with." It would be like saying you drive the highway regularly but don't have "experience driving at 40 MPH."
There are two types of drives, SAS and SATA. SAS drives are more efficient at mixed workloads; that is all. The speed of the spindles changes nothing but the speed. You no more need experience with a spindle speed than you do with a CPU frequency.
I simply haven't used them before.
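To put rough numbers behind the spindle-speed point above, here is a back-of-the-envelope per-spindle IOPS estimate; the seek times are assumed typical figures, not specs for any particular drive:

```python
# Rough random IOPS per spindle from average seek time plus rotational latency.
# Seek times are assumed typical figures, not specs for any particular drive.

drives = {
    "7200 RPM (SATA / NL-SAS)": {"rpm": 7200,  "seek_ms": 8.5},
    "10k RPM SAS":              {"rpm": 10000, "seek_ms": 4.5},
    "15k RPM SAS":              {"rpm": 15000, "seek_ms": 3.5},
}

for name, d in drives.items():
    rotational_latency_ms = 60000 / d["rpm"] / 2   # average wait: half a rotation
    iops = 1000 / (d["seek_ms"] + rotational_latency_ms)
    print(f"{name}: ~{iops:.0f} random IOPS per spindle")
```

Spindle speed and seek time set the baseline; the SAS protocol's better handling of mixed, queued workloads is what adds the roughly 10% on top.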
-
@scottalanmiller said:
Four total servers meaning two hosts with four VMs? I'm unclear why there are four hosts.
Four servers total, two for redundancy.
-
@JohnFromSTL said:
I simply haven't used them before.
SATA drives? It's totally transparent to you. SATA is what is in desktops and laptops and nearly any SMB NAS device or SAN device. You'll normally encounter SATA at least ten to one over SAS. But other than the speed difference, they are the same drives. It's literally nothing but an "under the hood" protocol for the drives to talk to the RAID controller. Other than being listed as SATA instead of SAS in the RAID card's interface, you have no way to tell them apart.
-
@JohnFromSTL said:
Four servers total, two for redundancy.
Oh.... all running Hyper-V, two as production and two for failover for those two?
-
@scottalanmiller said:
@JohnFromSTL said:
I simply haven't used them before.
SATA drives? It's totally transparent to you. SATA is what is in desktops and laptops and nearly any SMB NAS device or SAN device. You'll normally encounter SATA at least ten to one over SAS. But other than the speed difference, they are the same drives. It's literally nothing but an "under the hood" protocol for the drives to talk to the RAID controller. Other than being listed as SATA instead of SAS in the RAID card's interface, you have no way to tell them apart.
Fair enough, they are much cheaper anyways. I'm going to lunch, be back after a while.
-
@scottalanmiller said:
@JohnFromSTL said:
Four servers total, two for redundancy.
Oh.... all running Hyper-V, two as production and two for failover for those two?
Yes sir, that's the plan at least. I'm heading out to get lunch.
-
@JohnFromSTL said:
Fair enough, they are much cheaper anyways. I'm going to lunch, be back after a while.
Chances are, NL-SAS will be what you want. But you have to see the actual prices to know.
-
@JohnFromSTL said:
@scottalanmiller said:
@JohnFromSTL said:
Four servers total, two for redundancy.
Oh.... all running Hyper-V, two as production and two for failover for those two?
Yes sir, that's the plan at least. I'm heading out to get lunch.
Cool, now I follow.
So the next question is.... why Prod/DR failover design instead of just "cluster" design? If you treat them as clusters you can load balance and get better performance "every day" and only go to the limitations of the design in cases where something has failed.
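To put rough numbers on the "better performance every day" point, a quick sketch comparing the two designs; the total workload figure is an arbitrary assumption for illustration:

```python
# Per-host load: 2 production + 2 idle failover hosts vs. a 4-node cluster
# that keeps enough headroom to absorb one failed node.
# The total workload figure is an arbitrary assumption for illustration.

total_workload = 100.0   # combined VM demand, in arbitrary units
hosts = 4

# Prod/DR failover: only two hosts carry load on a normal day.
prod_dr_per_host = total_workload / 2

# Cluster: all four hosts share the load every day...
cluster_per_host = total_workload / hosts
# ...and if one host fails, the survivors pick up its share.
cluster_after_failure = total_workload / (hosts - 1)

print(f"Prod/DR: {prod_dr_per_host:.1f} units per active host (two hosts idle)")
print(f"Cluster: {cluster_per_host:.1f} units per host normally, "
      f"{cluster_after_failure:.1f} after one host fails")
```

Either design survives a host failure, but the cluster runs every host at a lighter load on a normal day.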
-
@scottalanmiller said:
@JohnFromSTL said:
@scottalanmiller said:
@JohnFromSTL said:
Four servers total, two for redundancy.
Oh.... all running Hyper-V, two as production and two for failover for those two?
Yes sir, that's the plan at least. I'm heading out to get lunch.
Cool, now I follow.
So the next question is.... why Prod/DR failover design instead of just "cluster" design? If you treat them as clusters you can load balance and get better performance "every day" and only go to the limitations of the design in cases where something has failed.
I'm not against it at all; I just haven't set one up before.
-
@scottalanmiller said:
@JohnFromSTL said:
Fair enough, they are much cheaper anyways. I'm going to lunch, be back after a while.
Chances are, NL-SAS will be what you want. But you have to see the actual prices to know.
Still running RAID-10 for the best performance?
-
@JohnFromSTL said:
@scottalanmiller said:
@JohnFromSTL said:
Fair enough, they are much cheaper anyways. I'm going to lunch, be back after a while.
Chances are, NL-SAS will be what you want. But you have to see the actual prices to know.
Still running RAID-10 for the best performance?
Yup, that would only change if you were REALLY on the fence with RAID 6 and only needed "anything" to tip the scales. There are no significant performance changes here, so RAID 10 would remain the choice 99% of the time (or more) if it was already the choice with same-speed SATA drives before.
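For anyone weighing that trade-off, a quick sketch of the usable-capacity and write-penalty arithmetic on the same spindles; the drive count, drive size, and per-spindle IOPS are assumptions for illustration:

```python
# RAID 10 vs RAID 6 on the same spindles: usable capacity and random write IOPS.
# Drive count, drive size, and per-spindle IOPS are assumed for illustration.

drives = 8
drive_tb = 4        # TB per drive (assumed)
drive_iops = 80     # random IOPS per 7200 RPM spindle (assumed)

raw_iops = drives * drive_iops

# Standard write penalties: RAID 10 writes each block twice, RAID 6 costs six I/Os.
configs = {
    "RAID 10": {"usable_tb": drives // 2 * drive_tb,  "write_penalty": 2},
    "RAID 6":  {"usable_tb": (drives - 2) * drive_tb, "write_penalty": 6},
}

for name, c in configs.items():
    write_iops = raw_iops / c["write_penalty"]
    print(f"{name}: {c['usable_tb']} TB usable, ~{write_iops:.0f} random write IOPS")
```

RAID 6 buys capacity and RAID 10 buys write performance, which is why only a capacity squeeze tends to tip the scales.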
-
@JohnFromSTL If you'd like to arrange a call, we can go over some of these specs in greater detail to help work out the solution. We here at xByte can definitely help lay out some groundwork for you. I don't see anything about memory requirements for capacity planning yet. Has it been mentioned?
-
@mprftw said:
@JohnFromSTL If you'd like to arrange a call, we can go over some of these specs in greater detail to help work out the solution. We here at xByte can definitely help lay out some groundwork for you. I don't see anything about memory requirements for capacity planning yet. Has it been mentioned?
I believe 192 GB to 256 GB would be adequate.
Please feel free to give me a call. I believe Lyndsie has my number in an email. Thank you.
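As a rough illustration of how a figure in that range gets built up, a simple per-host tally; every VM name and size below is a placeholder assumption, not from this thread:

```python
# Per-host memory tally for capacity planning.
# VM names and sizes are placeholder assumptions, not figures from this thread.

vm_memory_gb = {
    "DC / file server": 16,
    "SQL Server":       64,
    "App server":       32,
    "Utility / RDS":    32,
}

failover_headroom = 1.5   # room to absorb VMs from a failed host (assumed factor)
host_overhead_gb = 8      # Hyper-V parent partition reserve (assumed)

needed = sum(vm_memory_gb.values()) * failover_headroom + host_overhead_gb
print(f"Estimated per-host memory: {needed:.0f} GB")
```

With these placeholder numbers the tally lands around 224 GB, comfortably inside a 192 GB to 256 GB window.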
-
How did this project go?