So many choices for Virtualization, need help narrowing down.
-
@RobQ I have multiple clients running small SQL databases on standard SATA 7.2k drives. I have one client running a 50GB MS Dynamics 2010 application on those drives.
-
You could get two hosts with lots of smaller drives and just replicate the VMs between them. It'll give you the IOPS you need plus the fault tolerance of two hosts. Alternatively, you could run 3 hosts with vSAN, but that would be a bit overkill (though not overkill to the extent of buying a SAN).
-
Oh, I forgot - get a decent controller cache. That'll go a long way to help your IOPS.
-
@RobQ said:
We've been pitched the good old 3-2-1 setup with 3 host servers, 2 switches, a SAN, and VMware. I've also been convinced that I don't really need a SAN because we CAN tolerate some downtime if we have a failure.
The SAN doesn't add to your uptime; it actually takes away from it. It is only because you can tolerate downtime that having a 3-2-1 "Inverted Pyramid of Doom" could even be proposed. The 3-2-1 IPOD is not about uptime at all but about profit and, to a small degree, ease of management, although it is not really as easy as simpler approaches.
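To put rough numbers on that, here's a back-of-the-envelope sketch (the availability figures are made-up assumptions, not vendor specs; the point is only that the single SAN and the shared fabric sit in series under everything, so they cap the whole stack no matter how many hosts you pile on top):

```python
# Back-of-the-envelope availability math. All figures below are illustrative
# assumptions, not vendor numbers - the point is only how the dependencies stack.

def series(*availabilities):
    """Availability when every component in the chain must be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

host_avail   = 0.999    # assume ~8.8 hours/year of downtime for one decent server
san_avail    = 0.999    # the SAN is still a single box everything depends on
fabric_avail = 0.9995   # two switches, but still a shared dependency

# Three hosts with HA: any one of them can carry the load (capacity ignored here).
hosts_ha = 1 - (1 - host_avail) ** 3

single_host = host_avail
ipod        = series(hosts_ha, san_avail, fabric_avail)

hrs = 24 * 365
print(f"Single host:  {single_host:.4%}  (~{(1 - single_host) * hrs:.1f} h/yr down)")
print(f"3-2-1 stack:  {ipod:.4%}  (~{(1 - ipod) * hrs:.1f} h/yr down)")
```

With those assumed numbers the 3-2-1 stack comes out roughly 13 hours of downtime a year versus about 9 for the lone server, which is the whole "inverted pyramid" argument in one print statement.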
-
What is your tolerance for downtime? If you can take the four hours needed for HP or Dell to get onsite to do maintenance, going with a single server is often the best option. Very cost effective, very simple to maintain, very little to go wrong, and when it does, very easy to fix. And remember, that four hours is 24x7, so if you go down at night, in the evening, or on a weekend, that failure might not even show up as an impact at all to you. The chance that it will fail in the middle of the eight-hour work day is actually relatively low compared to the overall failure rate.
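A minimal sketch of that last point, assuming (optimistically) that failures are spread evenly across the week:

```python
# Quick check on the "eight hour work day" point. This assumes a failure is
# equally likely at any hour of the week, which is a simplification.

business_hours_per_week = 8 * 5      # eight-hour days, five days a week
hours_per_week = 24 * 7

p_business = business_hours_per_week / hours_per_week
print(f"Share of the week that is the work day: {p_business:.0%}")   # roughly 24%
```

So roughly three out of four failures land outside the window where anyone is waiting on that server.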
-
@RobQ said:
I've looked at Virtual storage appliances, and I've looked at Scale's HC3.
Scale makes some interesting stuff but it is way, way better for Linux environments than for Windows because it uses many nodes, each one counting against your licensing, and each one with only one CPU. So it's exactly awful for Microsoft licensing, which is all built around dual-processor nodes. You might easily double or triple your license costs to go that route.
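To illustrate the licensing point, here's a rough sketch. The "one Datacenter license covers up to two physical processors per host" rule reflects the licensing of that era, and the per-license price is a placeholder, so check your own agreement:

```python
# Illustrative license counting only. Assumes Windows Server Datacenter rules of
# that era (one license covers up to two physical processors per host); the price
# is a rough placeholder, not a quote - check your own agreement.
import math

def datacenter_licenses(hosts, cpus_per_host):
    # Each host needs at least one license, and one per pair of processors.
    return hosts * max(1, math.ceil(cpus_per_host / 2))

price = 6000  # placeholder per-license figure for comparison

two_dual_socket = datacenter_licenses(hosts=2, cpus_per_host=2)   # typical 2-host setup
four_node_scale = datacenter_licenses(hosts=4, cpus_per_host=1)   # 4 single-CPU nodes
six_node_scale  = datacenter_licenses(hosts=6, cpus_per_host=1)   # 6 single-CPU nodes

print(f"2 dual-CPU hosts:   {two_dual_socket} licenses (~${two_dual_socket * price:,})")
print(f"4 single-CPU nodes: {four_node_scale} licenses (~${four_node_scale * price:,})")
print(f"6 single-CPU nodes: {six_node_scale} licenses (~${six_node_scale * price:,})")
```

Same amount of Windows running either way, but the node count alone drives the license count to double or triple.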
-
@JaredBusch said:
I do not like VMware's lockout of backup tools; I feel they could provide ONLY that piece and keep the rest of the fancy tools locked up. It is basically the cost of your Windows server to buy Essentials anyway.
Yeah, that drives me crazy too. ESXi Free would dominate if it weren't for that. But I guess they just have no interest in that piece of the market. Hyper-V and XenServer work fine for that.
-
@PSX_Defector said:
For hardware, yeah, no need for a SAN. Perhaps the Dell VRTX would be up your alley?
Nice boxes, but I've yet to see any use case for one in the SMB. Too little protection with too much compute power. Perfect for an enterprise branch office with 400 users, but for an SMB the mix of things just doesn't work. It's a 4-1 rather than a 3-2-1, which is nice. But quad compute nodes and no storage failover don't mix well in SMB settings.
-
@JaredBusch said:
@RobQ I have multiple clients running small SQL databases on standard SATA 7.2k drives. I have one client running a 50GB MS Dynamics 2010 application on those drives.
As long as you are not using parity RAID, that often works fine. SMB databases rarely need a ton of performance. If necessary, a little SSD cache will often do the trick.
-
@Nara said:
You could get two hosts with lots of smaller drives and just replicate the VMs between them. It'll give you the IOPS you need plus the fault tolerance of two hosts. Alternatively, you could run 3 hosts with vSAN, but that would be a bit overkill (though not overkill to the extent of buying a SAN).
Agreed, two hosts, at most, is the way to go. And consider it all carefully. More hosts means more money, and spending money up front is worse than losing that same money in a possible future outage.
-
@Nara said:
Oh, I forgot - get a decent controller cache. That'll go a long way to help your IOPS.
Agreed, get the biggest RAID cache option that you can; it does a ton, especially in the SMB where the whole working set often fits into the RAID cache! You can get 1GB from pretty much everyone these days.
-
There are three reasonable options, depending on your uptime needs:
- Use a single host, local storage, no failover. This is the cheapest, easiest and best bang for the buck. This is very likely your best option.
- Use two hosts with local storage and a product like Veeam to async replicate between the two. This means near-zero downtime but manual failover, and you are looking at losing ~20 minutes of data during the process, depending on the replication schedule (there's a rough sketch of that math after this list). Next cheapest, but not bad.
- Use two hosts with sync replication and HA. This typically requires more expensive licensing from your hypervisor vendor (VMware, for example) but means automatic, transparent failover between the hosts and no data loss in that event (although some things can't be done reliably this way, like AD, SQL Server, etc. - those have to be handled at the application layer regardless). This also requires a third-party replication option for the storage, like HP VSA for VMware or StarWind for Hyper-V. Most costly, best protection, most caveats and complication.
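Here's the rough data-loss math for option 2, assuming an illustrative 30-minute replication schedule (the interval is an example, not a recommendation):

```python
# Rough expected data loss for option 2 (scheduled async replication with
# something like Veeam). The 30-minute interval is just an example schedule;
# this assumes a failure is equally likely at any point between replication runs.

interval_min = 30

average_loss = interval_min / 2    # on average you fail roughly mid-interval
worst_case   = interval_min        # failure right before the next replication run

print(f"Replicating every {interval_min} min:")
print(f"  average data lost on failover: ~{average_loss:.0f} min")
print(f"  worst case:                    ~{worst_case:.0f} min")
```

Tighten the schedule and the expected loss shrinks with it, at the cost of more replication traffic and load on the hosts.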
-
@scottalanmiller That's kind of the Million Dollar Question. We CAN tolerate 4 hours of downtime for one server/service. I'm just not sure we can tolerate 4 hours plus recovery time for our entire server infrastructure.
-
Rob - Thanks for using Unitrends. What model Unitrends appliance do you have? Is it one of the newer ones that can be used for instant recovery of VMware & Windows clients? If so, you may want to work with support to adjust your retention settings so that you can allocate resources on the appliance for failover. Please let me know what questions I can answer for you about this.
-
I like Unitrends, but went with Veeam and a couple of VMware hosts for our infrastructure. GRT application recovery and the ability to automatically spin up my backups on a cheap Dell host to test them is golden. I just migrated my last Exchange 2003 mailbox to a virtualized Exchange system last night and everything is humming.
You can use Veeam to replicate your mission-critical VMs between hosts and get downtime down to next to nothing, but I haven't implemented that yet. Still have one ERP system to move and that is my next project.
-
@KatieUnitrends It is a recover 813. I am currently working with support to tweak things. I think it may have been a bit under spec'd, but we're working through the process of determining how to move forward. I actually had Lincoln Glover and Robert Walker on site looking at it last week.
I chose Unitrends because we had the old physical servers, so we wanted the ability to do file-level backups as well as quickly restore. Once fully virtual, Veeam may be a good option, but Unitrends was a quick way to gain some semblance of sanity knowing our servers could go down at any point.
-
@scottalanmiller I think Option 1 or 2 is what I'm leaning toward. We're going to have backups of the files and VMs on the Unitrends appliance, so I'm not terribly worried about recoverability anymore. As part of our DR plan we have a vendor that will provide hardware, power, connectivity, and space. So if I have everything backed up and taken off site, I should be able to get a replacement Unitrends appliance, bring in my backups, and get rocking and rolling pretty quickly.
I'm leaning toward Jared's idea of having one host with smaller fast drives for my heavier workloads, and a host with lots of slower drives. That way, I'd be comfortable with the performance, but still have failover capability if I need it.
-
@PSX_Defector I'm one dude, and if we have a disaster I'll be pulled in a million directions. So installing OS's is something I'd rather not do.
I will stick with VMware more than likely. I've gone the Hyper-V route, and VMware is just too freaking easy to use for me to move away from it.
-
@RobQ said:
@PSX_Defector I'm one dude, and if we have a disaster I'll be pulled in a million directions. So installing OS's is something I'd rather not do.
Then keep some base templates with the info. Restoring the OS is about a five minute deal in that case. Installing from scratch is ~45 minutes to get to a functional OS level. Restoration is what is gonna take a long time.
Either way, it's set it and forget it for the most part in a "holy shit the world is ending" scenario. Most often you're gonna be doing restores for morons who jacked with the files. Or like what greeted me this morning, someone jacking with all the permissions on their content server, all 4,000,000 images.