SW Port - New Server for virtual host - Sanity Check
-
Hey folks,
Hoping to get some feedback on a new server we are looking at purchasing. I've done a lot of reading/research/workload profiling and am just trying to see if I missed anything obvious.
The scenario: we currently have 2 ESXi hosts which run the following workloads: AD, file server, Exchange, WDS, RD Gateway, RD Session Host, Shoretel HQ, Shoretel voice switch, Shoretel virtual appliance, vCenter, Veeam, a web app server for our access control system, a SAP Business Objects server, a SQL server which runs our custom warehouse inventory management system, and a web server which interfaces with the SQL server for business partner access.
The primary server which runs most of these workloads is a Dell R720. The secondary is an older Supermicro server. The plan is to use the new server as the primary ESXi host for most of our workloads and to use the R720 as the secondary. I would use Veeam to replicate from the primary to the secondary every 15 minutes. For our most critical server (the SQL server), I plan on either setting up a secondary SQL server on the secondary host with log shipping, or possibly trying out Veeam's upcoming CDP replication. The Supermicro server would be repurposed as storage for a Veeam backup repository.
I've spec'd out the following hardware for the new host:
- Dell R630 - 10 2.5'' drive bay chassis - I normally buy 2U servers for the drive capacity, but I figured 10 drive bays should be pretty good as I can use RAID 5 now to get better capacity and still have half my drive bays empty in case of future expansion.
- 2xIntel E5-2640v4 10-core 2.4GHz - This seems to me the best compromise between cores and GHz. I currently have 28 vCPUs provisioned.
- 128 GB RAM - enough RAM to run all my workloads plus a little extra
- 5x800GB Intel S3610 SSDs in RAID 5 (4TB raw, 3.2TB usable) - I decided to go with the S3610 (a mixed-use drive) not so much because of endurance concerns, but because I've read that read-intensive drives can have poor write latency consistency. Dell is not ordering the 800GB models of this drive at this time, so they quoted me the S3520s instead (see final bullet point).
- The SSDs provide more than enough IOPS, and with RAID 5 I'll get the capacity I need. I compared against spinning HDDs and found that the number of drives needed in RAID 10 for HDDs was pretty close to the number of SSDs in RAID 5, so I decided to spend a little extra and get the SSDs.
- PERC H730P 2GB
- 2x750W PSU
- iDrac 8 Enterprise
- Intel I350 quad-port 1Gb network card
- Internal dual SD module
- ProSupport: 3yr 24x7 4-hr Mission Critical
- Vendor: xByte. So, after doing some reading in the community here, I think I am going to take the plunge and go with xByte. I compared a quote with them and Dell and xByte allows me to get some more RAM and the SSD model I want at around the same price as direct from Dell. I'll be honest, I'm still a little hesitant about "refurbished" equipment.
- So, is there anything obvious that jumps out as being out of place? Anyone running a similar setup who has any opinions? Thanks!
- P.S. I can't seem to turn off the bulleted list without everything becoming unbulleted
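For reference, the RAID 5 capacity math above works out like this (a quick sketch using the quoted 800GB drives; decimal TB):

```python
# RAID 5 usable capacity: one drive's worth of space goes to parity,
# so usable = (n - 1) * drive_size for an n-drive array.
def raid5_usable_tb(num_drives, drive_gb):
    return (num_drives - 1) * drive_gb / 1000  # decimal TB

raw_tb = 5 * 800 / 1000            # 4.0 TB raw across all five drives
usable_tb = raid5_usable_tb(5, 800)  # 3.2 TB usable after parity
print(raw_tb, usable_tb)
```

So the array is 4TB raw and 3.2TB usable, before VMFS overhead.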
-
With the first bullet I assume you'll be using SSDs in OBR5 (one big RAID 5).
Dell R630 - 10 2.5'' drive bay chassis - I normally buy 2U servers for the drive capacity, but I figured 10 drive bays should be pretty good as I can use RAID 5 now to get better capacity and still have half my drive bays empty in case of future expansion.
-
Which later you say you are, but you'll only have 3.2 TB usable. Is that enough space for your business to grow into over the next 5 years?
-
First question that pops out is... why ESXi on such a small system? Seems like an odd choice that adds a lot of cost without any clear benefit, and it could turn out to be much more limiting in the long run. I would consider Hyper-V for this.
-
I would recommend slimming down the core count on your processors; Microsoft licensing is going to burn you here with that many cores, as you have to license a minimum of 16 cores per host (but with 2x10-core CPUs you'll end up purchasing 20 per host).
If you can get away with an 8-core CPU (odds are you really can), you can save some money here.
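To illustrate the per-core licensing point, here's a rough sketch of the Windows Server 2016 core-counting rules (minimums of 8 cores per processor and 16 per host; check Microsoft's licensing brief for your actual situation):

```python
# Windows Server 2016+ per-core licensing: every physical core must be
# licensed, with minimums of 8 cores per processor and 16 per host.
def cores_to_license(sockets, cores_per_socket):
    per_host = max(cores_per_socket, 8) * sockets
    return max(per_host, 16)

print(cores_to_license(2, 10))  # proposed 2 x 10-core box: 20 cores to license
print(cores_to_license(2, 8))   # 2 x 8-core: exactly the 16-core minimum
```

Dropping from 10-core to 8-core parts is the difference between buying 20 core licenses and the 16-core floor you'd pay for anyway.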
-
@dustin did you invite @beta over for this here already?
-
@scottalanmiller said in SW Port - New Server for virtual host - Sanity Check:
@dustin did you invite @beta over for this here already?
Yes.
-
@scottalanmiller yes I just joined xD
-
@beta said in SW Port - New Server for virtual host - Sanity Check:
@scottalanmiller yes I just joined xD
Welcome to the community!
-
@DustinB3403 said in SW Port - New Server for virtual host - Sanity Check:
- Dell R630 - 10 2.5'' drive bay chassis - I normally buy 2U servers for the drive capacity, but I figured 10 drive bays should be pretty good as I can use RAID 5 now to get better capacity and still have half my drive bays empty in case of future expansion.
I would only do this if you have a strong aversion to the extra 3" of rack space being used and if you are super confident that you would never want large capacity storage. This might not sound limiting now, but it might be very limiting when needs change in two years.
-
@beta said in SW Port - New Server for virtual host - Sanity Check:
@scottalanmiller yes I just joined xD
Awesome. Welcome to the community and I guessed at your username and it worked!
-
@scottalanmiller We are already running ESXi; it's what I know and am comfortable with. If this were a greenfield project, I'd consider Hyper-V, but ESXi is what we are on now.
-
@DustinB3403 It is definitely more than enough for now. I figure in 5 years time, since I have extra drive bays, I can add more disk if needed.
-
As for the other hardware specs, I'd say they are perfectly acceptable.
I also have my questions about ESXi in this configuration, but you've already mentioned why you're using it.
-
@beta said in SW Port - New Server for virtual host - Sanity Check:
@DustinB3403 It is definitely more than enough for now. I figure in 5 years time, since I have extra drive bays, I can add more disk if needed.
You can, but only as a separate array, which you'll have to introduce into ESXi as a new datastore.
I try to evaluate my growth over the past 5 years and determine what the delta is per year. Once I have that five-year sum, I add 20% for storage headroom.
Makes life simpler.
-
@DustinB3403 Yes, I took that into consideration too. I do have in my favor that we are a non-profit, so I can get MS licensing from TechSoup, which is MUCH cheaper than anywhere else, so the updated MS licensing doesn't hurt us too badly.
-
@beta said in SW Port - New Server for virtual host - Sanity Check:
@scottalanmiller We are already running ESXi, it's what I know and am comfortable with. If this was greenfield project, I'd consider Hyper-V, but that's what we are on now.
Gotcha. I would still consider it, it's an expanding cost that you can nip in the bud. Every new hardware purchase is a great trigger for a huge cost savings.
-
The generally considered best practice is OBR (one big RAID) for the life of the server.
Splitting arrays was never really a good thing to do.
-
@beta said in SW Port - New Server for virtual host - Sanity Check:
@DustinB3403 It is definitely more than enough for now. I figure in 5 years time, since I have extra drive bays, I can add more disk if needed.
Yup, and SSDs will keep getting bigger and cheaper all the time. But there is something nice about knowing you CAN pop in 10TB HDDs as a separate tier, too.
-
@beta said in SW Port - New Server for virtual host - Sanity Check:
@DustinB3403 Yes, I took that into consideration too. I do have in my favor we are a non-profit, so I can get MS licensing from Techsoup which is MUCH cheaper than anywhere else so the updated MS licensing doesn't hurt us too badly.
But it hurts some. Why spend the extra for something you won't truly benefit from?