Storage Virtualization / Hyperconvergence Technologies - Best Use Case?
-
I generally have used local SAS drives for virtualization anyway; it provides another level of protection over using one SAN for everything. We had a SAN, but it was used for database stuff only. Not sure I'd go with 7200 RPM. I'd stay at the happy medium of 10K instead of 15K.
I have used 7200 RPM and 10K enterprise SATA drives for file servers where large files are needed, with no problems. I'm not sure how well a 7200 RPM drive, even in RAID, would handle multiple VMs. Booting them up would be the major issue there, though; after boot it might handle most things fine.
-
@NetworkNerd said:
Has anyone out there used any of these technologies? Should we be pointing to these kinds of things rather than suggesting something beyond local storage?
These are actually pretty much the only technologies that you would want to be looking at for virtualization today. Using NAS or SAN is very passé and technically not ideal unless you are using them as part of a much larger storage strategy in which virtualization is only one portion. Using SAN or NAS specifically for virtualization has never actually been a good use case. The practice came from the enterprise space, where large SANs were already in place and heavily used for non-virtualization needs; virtualization just leveraged an existing storage framework because it was there, not because it was ideal.
There are exceptions, but it is rare that you would want production VM workloads running on anything but converged, software-defined storage once you outgrow the needs of pure local storage - which alone covers most use cases.
The exceptions start to happen when the environment gets so big that the storage team and the virtualization and platform teams need a complete separation of duties for legal or political reasons. Then hyperconverged is not an option, but similar technologies, like Gluster, still are.
-
@thecreativeone91 said:
I'm not sure how well a 7200 RPM drive, even in RAID, would handle multiple VMs.
Drive speed is just that: speed, and it scales with rotational rate. A 7.2K drive is 72% the speed of a 10K drive, so if a 10K drive does 100 IOPS, a 7.2K drive does 72 IOPS.
So if you had a four-drive RAID 10 of 10K drives, you would have 400 read IOPS. Do a six-drive RAID 10 of 7.2K drives and you have 432 read IOPS (the quick sketch below runs through the math).
Drive speed is never a factor on its own. You only select drive speeds as part of a holistic storage subsystem. If you ever get the feeling that a drive is "too slow" or that you only want a "happy medium", step back and remember that the individual spindle rotation rate is only one part of the performance picture.
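To put rough numbers behind that, here is a quick back-of-the-envelope sketch in Python. It assumes the same nominal 100 IOPS per 10K spindle used above and that RAID 10 reads scale with the number of spindles; real arrays vary with workload, cache, and controller:
```python
# Back-of-the-envelope RAID 10 read IOPS estimate.
# Assumes a nominal 100 IOPS baseline for a 10K spindle and that
# read IOPS scale with rotational rate and with drive count.
BASE_RPM = 10_000
BASE_IOPS = 100  # nominal per-spindle figure used in the post above

def spindle_iops(rpm: int) -> float:
    """Estimate per-drive IOPS by scaling from the 10K baseline."""
    return BASE_IOPS * rpm / BASE_RPM

def raid10_read_iops(drive_count: int, rpm: int) -> float:
    """RAID 10 reads can hit every spindle, so reads scale with drive count."""
    return drive_count * spindle_iops(rpm)

print(raid10_read_iops(4, 10_000))  # four 10K drives -> 400 read IOPS
print(raid10_read_iops(6, 7_200))   # six 7.2K drives -> 432 read IOPS
```
Swap in different drive counts and spindle speeds and the "slow" 7.2K array can come out ahead, which is the point: the subsystem matters, not the individual drive.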
-
@NetworkNerd said:
So, if you were starting over with server virtualization at your company, would you look at maybe getting servers with all local storage, 7200 RPM drives, and using one of these software technologies over going to 10K SAS, SSDs, or vSAN?
These technologies might change the IOPS equation some, but the need for faster spindle speeds, SSDs, and other technologies remains. You still need to look at the big picture. No amount of caching can completely overcome drive subsystem speeds.
-
@NetworkNerd said:
After seeing some storage virtualization vendors at Spiceworld like Infinio and Maxta, it makes me wonder how applicable / valuable those types of technologies would be to the SMB.
Infinio is just a cache. It assumes that you still have external storage. It is designed to accelerate your SAN or NAS to make it work even better, which is a great idea. It does this by using system RAM and CPU, which, in turn, means that you lose those resources for your VMs. It is not without tradeoffs, and it does nothing to change the need for storage; it just makes it possible for existing storage to work better. And it works best in a large pool of virtualization servers, not lone ones (or else the entire cache is only <8 GB).
http://www.infinio.com/product/how-it-works
Using Infinio eats up two vCPUs and 8 GB of RAM on each host, so consider that when looking at the big picture. If you have a single virtualization host, you will lose a tiny bit of CPU performance and 8 GB of RAM. If you started with 64 GB, your platform just dropped to 56 GB. Not exactly a trivial shrinkage. That's between one and eight typical VMs that you can't run because you are adding this cache, per host (the quick sketch below runs the numbers).
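A tiny sketch of that overhead math, using the 2 vCPU / 8 GB per-host figures above; the 1 GB and 8 GB VM sizes are only illustrative stand-ins for "typical" VMs:
```python
# Rough cost of a per-host caching appliance, using the figures quoted above.
CACHE_VCPUS = 2      # vCPUs consumed on each host (figure from the post)
CACHE_RAM_GB = 8     # RAM consumed on each host (figure from the post)

def ram_left_for_vms(host_ram_gb: int) -> int:
    """RAM remaining for guest workloads once the cache takes its share."""
    return host_ram_gb - CACHE_RAM_GB

host_ram = 64
print(f"{host_ram} GB host -> {ram_left_for_vms(host_ram)} GB left for VMs "
      f"(plus {CACHE_VCPUS} vCPUs gone)")
for vm_gb in (1, 8):  # illustrative small and large "typical" VM sizes
    print(f"  the cache displaces ~{CACHE_RAM_GB // vm_gb} VMs at {vm_gb} GB each")
```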
-
I remember Maxta and Pernix, as well as Atlantis, saying they do storage reclamation and dedupe. But I think each has its own virtual appliance that runs on each host to be able to do this.
-
Infinio sounded cool but will only work for NAS or SAN from what I remember - no local storage or DAS (at least not right now).
-
@NetworkNerd said:
Infinio sounded cool but will only work for NAS or SAN from what I remember - no local storage or DAS (at least not right now).
DAS should work. I would be pretty surprised if it had any means of detecting whether something was DAS or SAN, since the only difference is whether there is a switch hooked up.
-
@NetworkNerd said:
I remember Maxta and Pernix, as well as Atlantis, saying they do storage reclamation and dedupe. But I think each has its own virtual appliance that runs on each host to be able to do this.
That's pretty much what they would have to do, which is how VSA worked. It's about the only available approach when working in that way.
-
I thought the VSA had to be set up a certain way from the beginning but was near impossible to add to the cluster later (because a certain amount of storage on each host was reserved to protect against another host failing), whereas these software solutions would be able to install in an existing environment non-intrusively and allow you to add hosts / more storage at any time.
-
@NetworkNerd said:
I thought the VSA had to be set up a certain way from the beginning but was near impossible to add to the cluster later (because a certain amount of storage on each host was reserved to protect against another host failing), whereas these software solutions would be able to install in an existing environment non-intrusively and allow you to add hosts / more storage at any time.
Yes, but they are all VMs.
-
@scottalanmiller said:
@NetworkNerd said:
I thought the VSA had to be set up a certain way from the beginning but was near impossible to add to the cluster later (because a certain amount of storage on each host was reserved to protect against another host failing), whereas these software solutions would be able to install in an existing environment non-intrusively and allow you to add hosts / more storage at any time.
Yes, but they are all VMs.
I think NetworkNerd is saying that you can't (his understanding and mine) add VSA after the fact because the underlying disk that ESXi is using is already partitioned off, so there won't be any free space, or most likely not enough, to implement VSA?
I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?
-
@Dashrender said:
I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?
You can build your own VSA to see how it works. You can do it with Linux or BSD quite easily. You build a virtual NAS (which is what VSA means) and use DRBD (Linux) or HAST (BSD) to make the cluster work. You share the storage to the local machine via NFS. Now you have a VM that can provide storage for the other VMs locally.
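To make that a bit more concrete, here is a rough sketch of the steps on one node, written as a small Python wrapper around the usual commands. It assumes a DRBD resource named r0 is already defined in /etc/drbd.d/r0.res on both nodes and that /exports/vmstore is the path shared back to the hypervisor; the names, paths, and network range are placeholders, the exact drbdadm invocations vary a little between DRBD versions, and a real build still needs the partner node, fencing, and NFS tuning:
```python
# Minimal sketch of turning a Linux guest into a do-it-yourself VSA:
# DRBD replicates a local block device to the partner node, and NFS
# shares the resulting filesystem back to the hypervisor.
# Assumes a DRBD resource "r0" is already described in /etc/drbd.d/r0.res
# on both nodes; run once on the node chosen as the initial primary.
import subprocess

def run(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

run("drbdadm create-md r0")          # write DRBD metadata on the backing disk
run("drbdadm up r0")                 # bring the resource up and start replication
run("drbdadm primary --force r0")    # force this node primary for the first sync
run("mkfs.xfs /dev/drbd0")           # filesystem on top of the replicated device
run("mkdir -p /exports/vmstore")
run("mount /dev/drbd0 /exports/vmstore")

# Export the store over NFS so the local hypervisor can mount it as a datastore.
with open("/etc/exports", "a") as exports:
    exports.write("/exports/vmstore 192.168.1.0/24(rw,sync,no_root_squash)\n")
run("exportfs -ra")
```
From the hypervisor's side you would then mount that NFS export as a datastore; the high-availability half of the picture comes from DRBD keeping the partner node's copy of the block device in sync.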
-
@Dashrender said:
@scottalanmiller said:
@NetworkNerd said:
I thought the VSA had to be set up a certain way from the beginning but was near impossible to add to the cluster later (because a certain amount of storage on each host was reserved to protect against another host failing), whereas these software solutions would be able to install in an existing environment non-intrusively and allow you to add hosts / more storage at any time.
Yes, but they are all VMs.
I think NetworkNerd is saying that you can't (his understanding and mine) add VSA after the fact because the underlying disk that ESXi is using is already partitioned off, so there won't be any free space, or most likely not enough, to implement VSA?
I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?
Yep - that's exactly what I meant.
-
@scottalanmiller said:
@Dashrender said:
I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?
You can build your own VSA to see how it works. You can do it with Linux or BSD quite easily. You build a virtual NAS (which is what VSA means) and use DRBD (Linux) or HAST (BSD) to make the cluster work. You share the storage to the local machine via NFS. Now you have a VM that can provide storage for the other VMs locally.
"Quite easily" to SAM is not so easy for the person who is semi-familiar with Linux.
-
@NetworkNerd said:
@scottalanmiller said:
@Dashrender said:
I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?
You can build your own VSA to see how it works. You can do it with Linux or BSD quite easily. You build a virtual NAS (which is what VSA means) and use DRBD (Linux) or HAST (BSD) to make the cluster work. You share the storage to the local machine via NFS. Now you have a VM that can provide storage for the other VMs locally.
"Quite easily" to SAM is not so easy for the person who is semi-familiar with Linux.
It does sound like a cool project to try out to get more familiar with those technologies, though. If I find some spare hardware, I may dig into it and test it out.
-
@NetworkNerd said:
@scottalanmiller said:
@Dashrender said:
I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?
You can build your own VSA to see how it works. You can do it with Linux or BSD quite easily. You build a virtual NAS (which is what VSA means) and use DRBD (Linux) or HAST (BSD) to make the cluster work. You share the storage to the local machine via NFS. Now you have a VM that can provide storage for the other VMs locally.
"Quite easily" to SAM is not so easy for the person who is semi-familiar with Linux.
Something SAM needs to be reminded of occasionally.
-
@art_of_shred said:
@NetworkNerd said:
@scottalanmiller said:
@Dashrender said:
I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?
You can build your own VSA to see how it works. You can do it with Linux or BSD quite easily. You build a virtual NAS (which is what VSA means) and use DRBD (Linux) or HAST (BSD) to make the cluster work. You share the storage to the local machine via NFS. Now you have a VM that can provide storage for the other VMs locally.
"Quite easily" to SAM is not so easy for the person who is semi-familiar with Linux.
Something SAM needs to be reminded of occasionally.
That's why you are here, Art - to slap him around a bit.
-
@NetworkNerd said:
@art_of_shred said:
@NetworkNerd said:
@scottalanmiller said:
@Dashrender said:
I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?
You can build your own VSA to see how it works. You can do it with Linux or BSD quite easily. You build a virtual NAS (which is what VSA means) and use DRBD (Linux) or HAST (BSD) to make the cluster work. You share the storage to the local machine via NFS. Now you have a VM that can provide storage for the other VMs locally.
"Quite easily" to SAM is not so easy for the person who is semi-familiar with Linux.
Something SAM needs to be reminded of occasionally.
That's why you are here, Art - to slap him around a bit.
Well, I'm here to chew bubble gum and slap people... and I'm all out of bubble gum.
-
Someone has to chew the gum around here.