New Infrastructure to Replace Scale Cluster
-
@Dashrender said in New Infrastructure to Replace Scale Cluster:
This is a term that Scott Alan Miller coined ages ago.
No, he didn't. Might be where you first heard it, but it is not his.
-
@JaredBusch said in New Infrastructure to Replace Scale Cluster:
@Dashrender said in New Infrastructure to Replace Scale Cluster:
This is a term that Scott Alan Miller coined ages ago.
No, he didn't. Might be where you first heard it, but it is not his.
Thanks, I stand corrected.
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
I've never seen a well built SAN go completely down in over 20 years of working with them.
I have. Most SANs fail with reckless abandon. Really good ones are incredibly stable, but everything fails sometimes.
Now, in the storage industry, a "good SAN" would be defined as one that is part of a cluster. No single box is ever that reliable; even the best ones are subject to the forklift problem if nothing else.
SANs can approach mainframes in reliability, but to do so is so costly that no one does it. In the real world, that kind of storage carries really high risks and/or cost compared to other options.
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
On the other hand, hyperconvergence is a resource drain, with systems like gluster and ceph eating up resources they share with the hypervisor, with neither being aware of each other, and VMs end up murdered by OOM, or just stalled due to CPU overcommitment.
That's misleading. Yes, much like software RAID, it uses system resources; that part is true. But the cost of building good external storage is high, while the internal resources needed are low and cheap to add. Software RAID blows hardware RAID out of the water on performance, even when shared. You just account for that overhead in your planning (a rough sketch of that accounting follows below).
And there is an assumption that the hypervisor and storage are not aware of each other. That can be true, but isn't necessarily. If you use a SAN, it's guaranteed to be true.
So both of these points are selling points for HC, rather than against it.
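As a rough illustration of that planning, here is a minimal Python sketch; the per-node overhead reservations are assumptions for illustration, not vendor sizing guidance:

```python
# Rough capacity-planning sketch for a hyperconverged node.
# The overhead reservations are illustrative assumptions only; measure
# your own storage stack (Gluster, Ceph, etc.) before sizing for real.

def usable_for_vms(node_cores, node_ram_gb,
                   storage_cores=2, storage_ram_gb=8, hypervisor_ram_gb=4):
    """Return (cores, GB RAM) left for VMs after reserving headroom
    for the storage service and the hypervisor itself."""
    vm_cores = node_cores - storage_cores
    vm_ram_gb = node_ram_gb - storage_ram_gb - hypervisor_ram_gb
    return vm_cores, vm_ram_gb

# Example: a 16-core, 128 GB node still leaves plenty for VMs
# after the storage layer takes its share.
cores, ram = usable_for_vms(16, 128)
print(f"VM capacity per node: {cores} cores, {ram} GB RAM")
```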
-
@scottalanmiller said in New Infrastructure to Replace Scale Cluster:
@dyasny said in New Infrastructure to Replace Scale Cluster:
I've never seen a well built SAN go completely down in over 20 years of working with them.
I have. Most SANs fail with reckless abandon. Really good ones are incredibly stable, but everything fails sometimes.
Now, in the storage industry, a "good SAN" would be defined as one that is part of a cluster. No single box is ever that reliable; even the best ones are subject to the forklift problem if nothing else.
SANs can approach mainframes in reliability, but to do so is so costly that no one does it. In the real world, that kind of storage carries really high risks and/or cost compared to other options.
Let me rephrase myself. I've seen disks, controllers, PSUs, even backplanes and mobos fail in SANs. None of that ever caused an actual outage.
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
Gluster and other regular network based storage systems are going to be the bottleneck for the VM performance. So unless you don't care about everything being sluggish, you should think about getting a separate fabric for the storage comms, even if you hyperconverge.
Gluster is not known for high performance. But in the real world, SANs are normally the bottleneck on performance. Storage is always the slow point, and safe storage is even more of one.
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
@scottalanmiller said in New Infrastructure to Replace Scale Cluster:
@dyasny said in New Infrastructure to Replace Scale Cluster:
I've never seen a well built SAN go completely down in over 20 years of working with them.
I have. Most SANs fail with reckless abandon. Really good ones are incredibly stable, but everything fails sometimes.
Now, in the storage industry, a "good SAN" would be defined as one that is part of a cluster. No single box is ever that reliable; even the best ones are subject to the forklift problem if nothing else.
SANs can approach mainframes in reliability, but to do so is so costly that no one does it. In the real world, that kind of storage carries really high risks and/or cost compared to other options.
Let me rephrase myself. I've seen disks, controllers, PSUs, even backplanes and mobos fail in SANs. None of that ever caused an actual outage.
Right, and I've seen all of those cause outages.
I've seen all of those fail in servers, too. And in some cases, they cause outages and in some they don't. SANs are just servers, but with a special purpose. Same risks that any similar server would have, the tech is all the same.
The problem in the real world is that SANs are so oversold that the work going into making them safe is often skipped because customers aren't as demanding as they are with servers. So the average SAN has a higher price tag for lower reliability than you normally get in the server side of the same market.
-
@scottalanmiller try an EC2 i3.metal
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
The better option, IMO, is to use two hosts as hypervisors, and the third - pack with disks, and use as the storage device (NFS or iSCSI). And also install the engine on it, as a VM or on baremetal - doesn't matter.
Seems a waste. You lose a lot of performance with the networking overhead, you use three hosts for the job of two, and you give up HA. That's a lot of negative. Even if you already own the third host, doing an inverted pyramid of doom is the worst possible use of the existing resources. Better to retire the third host than to make it an anchor that will drown the other two nodes.
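To put the availability point in rough numbers, here is a back-of-the-envelope sketch; the figures are made-up round numbers for illustration, not measurements of any product:

```python
# Back-of-the-envelope availability comparison (illustrative numbers only).
# In an inverted pyramid, a VM depends on its host AND the shared storage
# node AND the storage network path, so the availabilities multiply.

host = 0.999      # assumed availability of one enterprise server
storage = 0.999   # assumed availability of the single storage box
network = 0.9995  # assumed availability of the storage switch path

single_server = host                 # standalone host with local disks
ipod_vm = host * storage * network   # serial dependency chain

HOURS_PER_YEAR = 24 * 365
for name, a in (("single server", single_server), ("IPOD VM", ipod_vm)):
    print(f"{name}: {a:.4%} -> ~{(1 - a) * HOURS_PER_YEAR:.1f} h/yr down")
```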
-
@Dashrender said in New Infrastructure to Replace Scale Cluster:
@dyasny said in New Infrastructure to Replace Scale Cluster:
@DustinB3403 no, in this particular setup, you have two options. The original one would be to go hyperconverged, installing both the storage and hypervisor services on all 3 hosts, and to also deploy the engine (vSphere equivalent) as a VM in the setup (that's called self-hosted engine).
The better option, IMO, is to use two hosts as hypervisors, and the third - pack with disks, and use as the storage device (NFS or iSCSI). And also install the engine on it, as a VM or on baremetal - doesn't matter.
You will have fewer hypervisors, true, but having a storage service on the hypervisors is a resource drain, so you don't actually lose as much in terms of resources. And you gain a proper storage server, less management headache, and a setup that can scale nicely if you decide to add hypervisors or buy a real SAN. Performance will also be better, and you might even end up with more available disk space, because you will not have to keep 3 replicas of every byte like gluster/ceph require you to do.
Isn't that an IPOD though?
Correct, a standard three node IPOD.
https://mangolassi.it/topic/8743/risk-single-server-versus-the-smallest-inverted-pyramid-design
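For what it's worth, the replica-3 capacity point in the quote above is simple arithmetic; here is a quick sketch with assumed disk counts and sizes (replica 3 is only one possible layout, so treat this as illustration):

```python
# Quick usable-capacity comparison (assumed disk counts and sizes).
# Replica-3 distributed storage (the Gluster/Ceph style quoted above)
# keeps three copies of every byte, so raw capacity divides by three.

disks_per_node, disk_tb, nodes = 4, 2.0, 3
raw_tb = disks_per_node * disk_tb * nodes                 # 24 TB raw

replica3_usable_tb = raw_tb / 3                           # 8 TB usable
# Same 12 disks concentrated in one storage server on RAID 6 (2 parity disks):
raid6_usable_tb = (disks_per_node * nodes - 2) * disk_tb  # 20 TB usable

print(f"replica-3 usable: {replica3_usable_tb} TB, RAID 6 usable: {raid6_usable_tb} TB")
```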
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
And of course, we haven't even touched on HA.
oVirt provides HA out of the box, as long as a living host has enough resources available to start the protected VMs.
By definition, HA can't be provided "out of the box." HA is something you do, not something you buy. A product may have features to make HA easier, but a product itself can't do HA.
In an IPOD, oVirt would simply automate LA (low availability). HA must be significantly higher than standard availability. The proposed IPOD design results in significantly lower than standard. (Where standard is an enterprise server with local storage and no system of this kind whatsoever.)
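For reference, the oVirt feature being debated here is essentially a per-VM restart policy you switch on. A minimal sketch with the ovirt-sdk4 Python SDK (engine URL, credentials, and VM name are hypothetical placeholders) might look roughly like this; flipping the flag does not, by itself, guarantee any particular availability level:

```python
# Minimal sketch: enabling oVirt's per-VM "high availability" flag via
# the ovirt-sdk4 Python SDK. All connection details are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='changeme',
    insecure=True,  # use ca_file=... for anything real
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]

# Mark the VM for automatic restart on another host if its host fails.
vms_service.vm_service(vm.id).update(
    types.Vm(high_availability=types.HighAvailability(enabled=True, priority=50))
)
connection.close()
```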
-
@JaredBusch said in New Infrastructure to Replace Scale Cluster:
@Dashrender said in New Infrastructure to Replace Scale Cluster:
This a term that Scott Allen Miller coined ages ago.
No he didn't. Might be where you firs theard it, but it is not his.
Actually, I did.
May 2013. It came from a short article originally on SW, and was then codified in this article on the Inverted Pyramid of Doom on SMBITJournal.
I actually did use it first (and second). It's standard industry terminology now, but before 2013 it was only known as the 3-2-1 Architecture.
-
Here is the Origin of the Inverted Pyramid of Doom. In the thread, people even ask where it came from, and it is mentioned that I had coined it because the topic was so common and no name existed for it yet.
-
The first person to use the term after it was coined was @NetworkNerd (now with VMware.)
-
@scottalanmiller said in New Infrastructure to Replace Scale Cluster:
By definition, HA can't be provided "out of the box." HA is something you do, not something you buy. A product may have features to make HA easier, but a product itself can't do HA.
In an IPOD, oVirt would simply automate LA (low availability). HA must be significantly higher than standard availability. The proposed IPOD design results in significantly lower than standard. (Where standard is an enterprise server with local storage and no system of this kind whatsoever.)
This is just a bunch of terms you invented on the spot. No truth to them. HA can be provided "out of the box" by a system that is capable of it. It is not "something you do", it's a product or system feature.
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
This is just a bunch of terms you invented on the spot. No truth to them.
HA stands for "High Availability." It's not me inventing new terms; HA has always meant "high availability." Using HA to mean "something unrelated to availability" is the new invention in this case. Actual HA in IT terminology, rather than marketing terminology, can absolutely never be "purchased," because availability has to be measured by resulting risk, not by feature.
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
It is not "something you do", it's a product or system feature.
This is absolutely untrue in any technical, engineering, or business situation. In marketing and sales where the terms are fake, yes, HA is applied to absolutely anything. But that is never acceptable in an IT situation (or engineering, etc.)
This is the same tactic that many salespeople use when they try to use "redundancy" incorrectly.
And it is @StorageNinja from VMware who said this.
If you believe it is "something you buy," what would it be? You can buy that label on literally anything, with no consistency in meaning. We know what the term means in IT, but you would need to define what it means to you for us to understand what you are thinking that it means. Since it can't be tied to redundancy (oVirt has none), nor to reliability or availability, I doubt anyone can guess what would make something a purchasable HA feature.
-
For obvious reasons, HA can be applied in only one meaningful way... to mean "high availability." It's obvious and honest and useful. Any other use of it, to mean something unrelated to resulting availability rates, is totally meaningless - it's just empty words that there is never a reason to say.
For example, take a SAN with lower than average availability. Calling that HA means literally nothing. It doesn't mean redundant, and it doesn't tell us anything about the availability. It's just empty words slapped on something.
oVirt provides no high availability, obviously. If it did, it would be trivial to demonstrate that you could buy and "switch on" that HA feature in a situation with extremely low availability. Saying therefore that "high availability" can mean "low availability" clearly makes no sense.
-
https://mangolassi.it/topic/10337/defining-high-availability/
The only sources I can find that don't agree that high availability is defined by a relative measure of availability also say that DR and HA are overlapping.
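For anyone who wants the measure itself, converting availability percentages into allowed downtime is simple arithmetic; here is a small sketch (the "nines" labels are informal shorthand, not a formal standard):

```python
# Availability "nines" expressed as allowed downtime per year.
MINUTES_PER_YEAR = 525_600

for label, availability in (
    ("two nines (99%)", 0.99),
    ("three nines (99.9%)", 0.999),
    ("four nines (99.99%)", 0.9999),
    ("five nines (99.999%)", 0.99999),
):
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label}: ~{downtime_min:,.0f} minutes of downtime per year")
```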
-
@scottalanmiller you're obviously not inventing the term HA, but you are inventing this ridiculous saying about HA being what you do and not what you buy. You can, of course, hack HA into almost any service, but a product that is already built with HA in mind is something you buy and use as designed - and you get HA. Out of the box, if you bought and configured all the prerequisites. oVirt, vCenter, and a ton of other products have it designed into them, so if you pay for it, and for the hardware that supports it, you can have it right there out of the box, if you follow the setup guide. Everything else is just you throwing meaningless pronouncements in the air.
You buy VMware, with the HA features (I don't remember if those cost extra; it doesn't matter here). You buy hardware that supports whatever VMware uses for HA (IPMI/Redfish/redundant switches, etc., whatever the best practice is), and you follow the config guide to set it up. You have yourself highly available VMs, with all the standard properties for HA: downtime SLA, split-brain avoidance, and so on. These are features you pay for (that's what "buy" means in the English language), both on the software and hardware side of things.
And yes, I've decided that arguing with you here is a huge waste of time, because for every comment you come back with ten, and I don't have the bandwidth to reply to that much. So if you think you "won" an argument, or whatever tickles your fancy, sure, go ahead. I'll just answer if I want to, at my own convenience. Hope you don't mind.