StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)
-
@FATeknollogee said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
@scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
@FATeknollogee said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
@scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
@FATeknollogee said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
Curious question... what happened to StarWind vSAN for Linux (KVM)? Is that not a thing anymore?
It is, for sure; they talked about it at MangoCon.
OK... I wonder how StarWind HCA/vSAN compares to VMware vSAN!
It only requires two nodes, is available for free, has some genuinely breakthrough tech, is cross-platform, uses Network RAID instead of RAIN, etc.
Forgetting the number of nodes (for a minute), are you saying it performs better than VMware's vSAN?
It performs better than anyone. It's insanely fast.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
It is yet another niche approach to doing specific things and not the solution to everything under the sun, like the people pushing it claim
Actually, it basically is. Because HCI is essentially just "logical design". It's not some magic, it's just the obvious, logical way to build systems of any scale. One can easily show that every standalone server is HCI, too. Basically HCI encompasses everything that isn't an IPOD or just overbuilt SAN infrastructure, which has a place but is incredibly niche.
HCI is the only logical approach to 95% of the world's workloads. Just loads and loads of people either get by with terrible systems, or use HCI and don't realize it.
But the real issue is that HCI alternatives come with massive caveats and have only niche use cases that make sense.
-
@Dashrender said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
So why is hardware RAID slower than software?
Because:
- It's an insanely low-needs function, so there is no benefit to investing there. There is essentially "no work" being done.
- It's extremely basic IO, not something that an ASIC can do better than a CPU that is already designed for exactly that task (see the sketch below).
- The spare capacity of the CPU is so large that there is no cost-effective way to duplicate that power in a dedicated controller.
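To make the "basic math" point concrete, here's a minimal sketch (my own illustration, not StarWind's or any vendor's code) of what RAID 5's core computation actually is: XOR across the blocks of a stripe, which any modern CPU chews through at memory speed. The block contents are made up for the example.

```python
# Minimal sketch: RAID 5 parity is just XOR across the data blocks of a stripe.
# The block contents below are made-up example data, not from any real array.

def raid5_parity(blocks: list[bytes]) -> bytes:
    """XOR the data blocks of one stripe to produce the parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(surviving_blocks: list[bytes], parity: bytes) -> bytes:
    """Recover a lost block by XORing the parity with every surviving data block."""
    return raid5_parity(surviving_blocks + [parity])

if __name__ == "__main__":
    stripe = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
    p = raid5_parity(stripe)
    # Pretend the second disk died; recover its block from the rest plus parity.
    recovered = rebuild_missing([stripe[0], stripe[2]], p)
    assert recovered == stripe[1]
    print("recovered block:", recovered)
```

That's essentially the whole "special" workload a RAID ASIC offloads; software RAID like Linux md does the same XOR work with the CPU's vector instructions.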
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
Because these ASICs aren't a priority - mining ASICs and speed-trading ASICs make money, so they're a worthwhile investment. A RAID controller ASIC does a job and sells a controller for $200 once, with the customer grumbling about being able to do it all in software for free anyway.
And good controllers are $600+, and at that price they can't compete with the software in performance. Mining or graphics use ASICs or GPUs for very special-case math, which is what makes the specialized hardware valuable. RAID doesn't do special math; it does basic math and mostly just IO. So the reasons that ASICs are good for mining don't exist with RAID, at all.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
Databases don't (or rather shouldn't) need storage replication in 2019. There are plenty of native tools for that, which are safer, cheaper and more efficient.
Absolutely. So having the storage be local, not remote, carries the real benefits. HCI doesn't imply replication any more than SAN does. Most do, of course, and if you want FT that's generally how you do it.
So databases, when done correctly, generally make the most sense on standalone boxes with local storage - a one-node HCI setup.
For databases that do need the platform, rather than the application, to handle HA or FT, HCI with more than one node is the best option.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
The reason software RAID outperforms hardware these days is much simpler - hardware RAID ASICs never got as much investment and boosting as regular CPUs, so what we have is modern massive CPUs vs RAID controllers that haven't seen much progress since the late 90s. And since nobody cares enough to invest in them or make them cheaper, they simply die out, which is well and proper.
Exactly, there is really no benefit to anyone in making hardware RAID faster. The cost would be enormous, the benefits nominal. It's just not important. Even if you had gobs of money to throw at it, you couldn't make it enough faster to ever justify the cost. If you need something that fast, you pretty much can't be on RAID anyway. You'd be spending hundreds of thousands to get essentially immeasurable gains when, for less, you could blow it away with a high-performance NVMe setup that doesn't use RAID at all.
So while, in theory, hardware RAID could be built at some crazy cost to be faster, it can't be in practical terms. And anything that you did do would waste money that could have been used to make the overall system faster in some way.
Bottom line... RAID performance itself is a nearly worthless pursuit. The difference between RAID 6 and RAID 10 might be big, but the difference between software RAID 10 and hardware RAID 10, and between md, ZFS, Adaptec and LSI, is all "background noise."
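To put a rough number on that "RAID 6 vs RAID 10" gap, here's a back-of-the-envelope sketch using the classic write-penalty figures; the drive count and per-drive IOPS are assumed example values, not measurements of any particular array or controller.

```python
# Back-of-the-envelope only: drive count and per-drive IOPS are assumed
# example figures, not measurements of any specific array or controller.

RAW_IOPS_PER_DRIVE = 150   # assumed figure for a 10k RPM spindle
DRIVES = 8

# Classic write penalties: RAID 10 costs 2 physical writes per logical write;
# RAID 6 costs 6 I/Os (read old data and both parities, write all three back).
WRITE_PENALTY = {"RAID 10": 2, "RAID 6": 6}

def effective_write_iops(level: str, drives: int = DRIVES) -> float:
    """Rough random-write IOPS ceiling for the array at a given RAID level."""
    return drives * RAW_IOPS_PER_DRIVE / WRITE_PENALTY[level]

for level in WRITE_PENALTY:
    print(f"{level}: ~{effective_write_iops(level):.0f} random write IOPS")
```

With these assumptions that's roughly 600 random write IOPS for RAID 10 versus 200 for RAID 6 - a 3x gap from the level choice alone, which is exactly the kind of difference that dwarfs any md vs ZFS vs Adaptec vs LSI implementation noise.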
-
@scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
Actually, it basically is. Because HCI is essentially just "logical design". It's not some magic, it's just the obvious, logical way to build systems of any scale. One can easily show that every standalone server is HCI, too. Basically HCI encompasses everything that isn't an IPOD or just overbuilt SAN infrastructure, which has a place but is incredibly niche.
HCI is the only logical approach to 95% of the world's workloads. Just loads and loads of people either get by with terrible systems, or use HCI and don't realize it.
But the real issue is that HCI alternatives come with massive caveats and have only niche use cases that make sense.
Thanks for proving my point. When all you have is a hammer, everything starts looking like a nail, eh?
Absolutely. So having the storage be local, not remote, carries the real benefits. HCI doesn't imply replication any more than SAN does. Most do, of course, and if you want FT that's generally how you do it.
Now you are confusing basic local storage with HCI. If I install a bunch of ESXi servers using their local disks, with local-only VMs, am I running an HCI setup?
For databases that do need the platform, rather than the application, to handle HA or FT, HCI with more than one node is the best option.
No, for those, it definitely makes more sense to use an add-on that enables replication, sharding and other horizontal scaling techniques.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
Now you are confusing basic local storage with HCI. If I install a bunch of ESXi servers using their local disks, with local-only VMs, am I running an HCI setup?
If you install any hypervisor onto a single server with compute, storage and network, that is hyperconverged. Everything is contained in 1 physical box.
HCI is when everything is contained in one big virtual box, with a bunch of individual physical boxes providing resources; each physical box can run a portion of the entire workload, and they all get pooled into that virtual box.
So no, installing ESXi on a bunch of individual servers and having nothing "box them together" is not HCI. You'd need to use VMware's vSAN or another hyperconverged product.
-
Hell your desktop or laptop is hyperconverged.
Everything is self contained.
-
And vSAN is the product that VMware promotes for this; it ideally requires at least three physical boxes, but they'll let it slide if you only have two servers and a single VM to act as a quorum.
-
@DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
Hell your desktop or laptop is hyperconverged.
Everything is self contained.
Yup, this is all just marketing hype. In the real world, a standalone host is just a standalone host; it was before HCI was a thing and will be after.
Also note, I always use the term HCI, not just HC, and I always mean it to be exactly what it is being sold as - a way of building virtualized infrastructure so that the shared storage in use is provided by the same machines that host the workloads, off of their internal drives. I could get into the networking aspect of things, but that will only make my point stronger - mixing everything on a single host is a bad idea.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
so that the shared storage in use
HCI isn't just shared storage. It's shared everything.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
In the real world, a standalone host is just a standalone host, it was before HCI was a thing and will be after.
HC was always a thing, though; that's the point. That it got buzz is different. We've had HC all along, people just didn't call it anything.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
mixing everything on a single host is a bad idea.
What do you mean, mixing everything? The magic sauce is what makes a tool like StarWind's vSAN so amazing. It works with the hypervisor to manage all of your hosts from a single interface. Should any host go down, those resources are offline, but the VMs that may have been on it are moved to the remaining members of the HCI environment (of multiple physical hosts).
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
mixing everything on a single host is a bad idea.
No, it's separating it that is the bad idea. Separate means less performance and more points of failure. It's just like hardware and software RAID... when tech is new you need unique hardware to offload it; over time, that goes away. This has happened, at this point, with the whole stack. And it did long ago; there was just so much money in gouging people with SANs that every vendor clung to that as long as they could.
But putting those workloads outside of the server makes it slower, costlier, and riskier. There are really no benefits.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
Also note, I always use the term HCI, not just HC, and I always mean it to be exactly what it is being sold as - a way of building virtualized infrastructure so that the shared storage in use, is provided by the same machines that host the workloads, off of their internal drives.
That's fine, but that's not HC or HCI. That's one vendor's productization of it (or several vendors'). HC is not the property of a vendor; it's an architecture, and an old one that has been battle tested and is logically the primary way to build systems.
-
The easiest way I can think of to explain your rationale, @dyasny, is to pretend I'm building a server, but because I don't trust the RAID controller that I can purchase for my MB, I purchase a bunch of external disks, plug those into another MB, and then attach that storage back to my server via iSCSI over the network.
How is this safer, more reliable and cheaper than just adding all of the physical resources into a single server? Then combining two, three, or however many identical servers together with some magic sauce and managing them from a single interface?
-
@DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
HCI isn't just shared storage. It's shared everything.
Great, so we are also running the SDN controllers on all the hosts. Even an OVN controller is a huge resource hog. A Neutron controller in OpenStack is even worse. And then the big boys come in - have you tried to build an Arista setup?
I am not talking theory here, I'm talking implementation, as someone who built datacenters and both public and private clouds at scale. Running the entire stack on each host, along with the actual workload is a horrible idea.
What do you mean, mixing everything? The magic sauce is what makes a tool like StarWind's vSAN so amazing.
Sounds like marketing BS to me, sorry. Magic sauce? Really?
It works with the hypervisor to manage all of your hosts from a single interface. Should any host go down, those resources are offline, but the VMs that may have been on it are moved to the remaining members of the HCI environment (of multiple physical hosts).
Sounds like any decently built virtualized DC solution, from Proxmox to oVirt to vCenter and XenServer. How is it "magic" exactly?
The easiest way I can think of to explain your rationale, @dyasny, is to pretend I'm building a server, but because I don't trust the RAID controller that I can purchase for my MB, I purchase a bunch of external disks, plug those into another MB, and then attach that storage back to my server via iSCSI over the network.
This is a ridiculous example. What you describe is, instead of having a server with a disk controller, disks, GPU and NICs, installing a single card that is a NIC, a GPU and storage all at once - so that instead of the PCI bus accessing each controller separately with better bandwidth, all the IO and different workloads are driven through a single PCI channel. And then using "magic" to install several of those hybrid monster cards in the hopes of making them work better.
How is this safer, more reliable and cheaper than just adding all of the physical resources into a single server? Then combining two, three, or however many identical servers together with some magic sauce and managing them from a single interface?
There you go with the magic sauce Kool-Aid again.
-
@scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
HC was always a thing, though, that's the thing. That it got buzz is different. We've had HC all along, just people didn't call it anything.
OK, just so we're on the same page here, are you saying we should simply install a bunch of localhosts and be done, for all the types of workloads out there?
No, it's separating it that is the bad idea.
No, it's mixing it that is the bad idea. See, I can also do this
Separate means less performance and more points of failure.
It would seem so, but in fact, you already have to run those services (storage, networking, control plane) anyway, and they all consume resources, and a lot of them. And then you dump the actual workload on the same hosts as well, so either you simply have much less to assign to the workload and the services, or they have to compete for those resources. Either is bad, and when one host fails, EVERYTHING on it fails. So you have to not just deal with a storage node outage or a controller outage, or a hypervisor outage, but with all of them at the same time. How exactly is that better for performance and MTBF?
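To illustrate that trade-off with concrete (and purely hypothetical) numbers, here's a quick per-host budget sketch; the reservation figures are assumptions for the example, not published sizing requirements of any product.

```python
# Hypothetical per-host resource budget. All reservation figures are assumed
# for illustration, not vendor-published sizing requirements.

HOST_CORES = 32
HOST_RAM_GB = 256

# Assumed per-host reservations for converged infrastructure services.
SERVICES = {
    "hypervisor / control plane": {"cores": 2, "ram_gb": 16},
    "storage service (vSAN/Ceph-style)": {"cores": 4, "ram_gb": 32},
    "SDN / network services": {"cores": 2, "ram_gb": 8},
}

reserved_cores = sum(s["cores"] for s in SERVICES.values())
reserved_ram = sum(s["ram_gb"] for s in SERVICES.values())

print(f"Reserved for services: {reserved_cores} cores, {reserved_ram} GB RAM")
print(f"Left for VM workloads: {HOST_CORES - reserved_cores} cores, "
      f"{HOST_RAM_GB - reserved_ram} GB RAM")
```

With these assumed figures, a quarter of the cores and over a fifth of the RAM on every host are spoken for before the first workload VM is scheduled; whether that overhead is better spent converged or on dedicated storage and network tiers is exactly what's being argued here.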
It's just like hardware and software RAID... when tech is new you need unique hardware to offload it; over time, that goes away. This has happened, at this point, with the whole stack. And it did long ago; there was just so much money in gouging people with SANs that every vendor clung to that as long as they could.
I'm not saying SANs are the answer to everything, I'm saying loading all the infrastructure services plus the actual workload onto a host is insane. If you have a cluster of hosts providing FT SDN, another cluster providing FT SDS, and a cluster of hypervisors using those services to run workloads on the networking and storage provided, I'm all for it. This system can easily deal with an outage of any physical component without triggering chain reactions across the stack. But this is just software-defined infrastructure, not HCI.
But putting those workloads outside of the server makes it slower, costlier, and riskier. There are really no benefits.
Again, I don't care much for appliance-like solutions. A SAN or a Ceph cluster, I can use either, hook it up to my hypervisors and use the provided block devices. But if you want me to run the (just for example here) Ceph RBD as well as the VMs and the SDN controller service on the same host - I will not take responsibility for such a setup.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
It would seem so, but in fact, you already have to run those services (storage, networking, control plane) anyway, and they all consume resources, and a lot of them. And then you dump the actual workload on the same hosts as well, so either you simply have much less to assign to the workload and the services, or they have to compete for those resources. Either is bad, and when one host fails, EVERYTHING on it fails. So you have to not just deal with a storage node outage or a controller outage, or a hypervisor outage, but with all of them at the same time. How exactly is that better for performance and MTBF?
@scottalanmiller - where is your "hypervisors are not basket and eggs" post?