The Inverted Pyramid of Doom Challenge
-
As everyone knows, the Inverted Pyramid of Doom or IPOD (aka 3-2-1 Architecture) is a well established, vendor-pushed architecture for SMB IT that is heavily derided because it offers "business facing redundancy" to tick checkboxes while resting on fragile, single point of failure storage (SAN, NAS, DAS, etc.) Why the IPOD design is dangerous, reckless or impractical has been discussed ad nauseam. Yet people continue not just to consider it but to recommend it, and often go so far as to call into question the skills of anyone who doesn't follow the marketing line on "how to do SMB architecture."
Well, I'm issuing a challenge. We all (I hope by now) know that SANs have their place, and there is one super obvious reason that explains why enterprises use them almost universally and why that usage has no applicability to normal SMBs: scale. I'm enough of a proponent of SANs that I've spoken at conferences about designing them and recently did a webinar with Spiceworks on when to consider them, which was very well received (if anyone knows how to get a link to that, let me know.) And pretty soon, it is believed, I am going to be doing a talk on the IPOD and when to choose it (yes, really.) I know of some cases where an IPOD makes sense, but they revolve around scale.
So here is my challenge. If you or someone you know thinks that an IPOD is an acceptable design choice for a system involving one, two or three compute nodes for production workloads, I want them to post their design and decisions here! If the IPOD is as widely wise a choice as claimed, there should be loads and loads of examples collected over time.
Now, beyond the scale, the design must also include what the business goals were and how the IPOD was able to meet those business needs better than the standard "promoted" design choices for the same scenario. Those design choices would be:
- Single server, local storage.
- Dual servers, local storage (rapid recovery from backup.)
- Dual servers, async replication (the "Veeam" approach.)
- Dual servers, full sync, full HA (the "StarWind" approach.)
As with any IT project, cost effectiveness must be considered. There is no business for which cost is truly no object. So the business objectives must include being cost effective compared to alternatives that would meet comparable business needs. Business needs would normally revolve around speed, reliability, cost, etc.
Also, this must be for a greenfield project, not a brownfield. I have recommended IPOD setups in a brownfield within the last four hours. Of course, in cases where a SAN is already implemented it might make a lot of sense to keep it, as zero cost is a pretty big factor. Proposing brownfields as examples would not be bad, but they won't count for "the challenge."
Political reasons should not be used. Of course, we all know that vendor kickbacks, corrupt MSP deals, corporate favours and the like happen, and while they will affect what decisions are made, they aren't relevant to this kind of discussion.
-
Is there a prize if someone does nail it?
-
Bragging rights
-
@hutchingsp had a good success story that I'm not sure I agree with, mostly because I'm unsure of the details. But he did an HDS HUS 110 based IPOD that was a pretty well planned out implementation. I'd like to see him present.
-
@Breffni-Potter said:
Is there a prize if someone does nail it?
If so I think I won it in a PM to Scott, I wouldn't care to repeat it though lol
-
Not sure I'd go so far as to call it a "presentation" but we went with HDS for servers and storage as Scott mentioned.
My reasoning is that we're used to shared storage, and done correctly it's a godsend vs. the terrible burden that many people make it out to be.
We're a VMware shop and we have about 75 VMs (and increasing) and about 20TB of data (and increasing).
I looked at a lot of options.
Local storage is, to me, not an option at this scale, and I say that simply because with a single 10TB (and growing) file server on a single box you're going to have a bad day if that box fails, and it makes updates such as firmware and ESXi updates on the box something which have to be planned for.
Replicated local storage is something that I considered as we have experience of HP StoreVirtual, albeit the physical version vs. the virtual appliance.
In the end I ruled it out because it introduces more complexity than I wanted, and once you're into three hosts and 35-40TB usable and you want decent IOPS you end up buying an insane amount of RAW capacity.
I did a lot of research and came to the conclusion that for us, I needed to put to one side every "geek" instinct that I had and look at what we needed - which was dumb, deathly reliable block storage.
I'd seen HDS mentioned a fair bit on Spiceworks so spoke to John773 (@John-Nicholson) about them and got a ton of useful advice and found HDS the most helpful, easiest to deal with, and least slimy of all the vendors I'd been speaking to.
We settled on a HUS110 with a couple of tiers of disks combined into a single dynamic pool, so it just presents as 36TB (or so) of usable storage which we carve into LUNs and the hot/cold data automatically tiers between the fast and slow disk.
Connectivity is direct attached FC so there are no switches, the hosts just connect directly into the controllers so it's really being used as a high end DAS array rather than a true SAN (though at that point I think most people would use the term "SAN" even if semantically it is incorrect).
The HUS is 99.999% uptime rated - the environment it's in is not, so the HUS is not the thing I should be worrying about in terms of reliability.
There endeth the presentation.
-
@hutchingsp Thanks for popping in!
-
@hutchingsp said in The Inverted Pyramid of Doom Challenge:
In the end I ruled it out because it introduces more complexity than I wanted, and once you're into three hosts and 35-40TB usable and you want decent IOPS you end up buying an insane amount of RAW capacity.
We have this now and we use the same capacity with replicated local disks as you would with a SAN with RAID 10. Are you using RAID 6 or something else to get more capacity from the SAN than you can with RLS? We aren't wasting any capacity having the extra redundancy and reliability of the local disks with RAIN.
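Rough napkin math for the comparison I'm making here (the drive counts and sizes below are purely illustrative assumptions, not your actual HUS config or ours):

```python
# Usable-capacity comparison: replicated local storage (RLS) vs. a shared array.
# All numbers below are illustrative assumptions, not anyone's actual config.

def usable_raid10(raw_tb):
    """RAID 10 keeps half of the raw capacity."""
    return raw_tb / 2

def usable_raid6(raw_tb, drive_tb):
    """RAID 6 gives up two drives' worth of capacity per parity group (single group here)."""
    return raw_tb - 2 * drive_tb

def usable_rls_mirror(raw_per_node_tb, nodes):
    """RAIN mirroring across nodes keeps half of the aggregate raw capacity."""
    return (raw_per_node_tb * nodes) / 2

raw_per_node = 24.0   # hypothetical: 12 x 2TB drives per node
nodes = 3
total_raw = raw_per_node * nodes

print("RLS (RAIN mirror) usable:", usable_rls_mirror(raw_per_node, nodes), "TB")
print("SAN RAID 10 usable:      ", usable_raid10(total_raw), "TB")
print("SAN RAID 6 usable:       ", usable_raid6(total_raw, drive_tb=2.0), "TB")
```

The point being that mirrored RLS and a RAID 10 SAN land on the same usable number; only parity RAID pulls ahead on capacity, and it does so with its own write penalty trade-off.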
-
@hutchingsp said in The Inverted Pyramid of Doom Challenge:
In the end I ruled it out because it introduces more complexity than I wanted, and once you're into three hosts and 35-40TB usable and you want decent IOPS you end up buying an insane amount of RAW capacity.
We are getting 66K IOPS from one tier, and then more, though nowhere close to that, from the spinning tier; maybe another 2K. It's not insane IOPS, but it is good.
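For context, the kind of napkin math behind that "maybe another 2K" figure (per-drive numbers and spindle counts here are assumptions for illustration, not measurements of the HUS or of our Scale cluster):

```python
# Napkin estimate of aggregate random IOPS from a spinning tier.
# Per-drive figures and spindle counts are assumed for illustration only.

PER_DRIVE_IOPS = {
    "7.2K NL-SAS": 80,
    "10K SAS": 140,
    "15K SAS": 180,
}

def tier_iops(drive_type, spindles):
    """Uncached random IOPS scale roughly linearly with spindle count."""
    return PER_DRIVE_IOPS[drive_type] * spindles

print(tier_iops("7.2K NL-SAS", 24))   # ~1,920 -- the "maybe another 2K" range
print(tier_iops("10K SAS", 16))       # ~2,240
```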
-
@hutchingsp said in The Inverted Pyramid of Doom Challenge:
In the end I ruled it out because it introduces more complexity than I wanted...
I'm confused here. How does RLS introduce more complexity? Our RLS system is so simple that I couldn't possibly install any enterprise SAN on the market as easily. It takes literally no knowledge or setup at all. We are using Scale's RLS hyperconverged system and I literally cannot fathom it being easier than it is today. Just making a LUN would be more complex than using RLS alone. Just needing to know that you need to make a LUN is more complex. With the RLS that we have, you just "use it". If you want more power, you can choose to manage your performance tiering manually, but there is no need to if you don't want to, as it does it automatically for you out of the box.
-
@hutchingsp said in The Inverted Pyramid of Doom Challenge:
Local storage is, to me, not an option at this scale, and I say that simply because with a single 10TB (and growing) file server on a single box you're going to have a bad day if that box fails,
No issues there. A 10TB file server on replicated local storage fails over automatically, with recovery in about one second. It's effectively instant. No different than failing over any other workload type.
-
@hutchingsp said in The Inverted Pyramid of Doom Challenge:
and it makes updates such as firmware and ESXi updates on the box something which have to be planned for.
No issues there either. Our firmware updates are painless and transparent. We still don't do them during the middle of the day for obvious reasons; unnecessary risk is unnecessary risk. But if we did, no one would notice.
-
@hutchingsp sorry that it took me so long to address the post. When you first posted, it seemed reasonable and we did not have any environment of our own that exactly addressed the scale and needs that you have. But for the past seven months we've been running on a Scale cluster, first a 2100 and now a 2100/2150 hybrid, and that addresses every one of the reasons that I feel you were avoiding RLS, and addresses them really well.
The concern that I would have with your design is not reliability; the HUS SANs are very reliable. But they are far from free and far from easy to use, and that cost is not cheap. You are in great shape in that you can do direct connections (DAS) rather than a switched SAN, getting better reliability and nominally better performance at much lower cost, but at three hosts plus the SAN, I can only imagine that you had to spend a lot more for that complex and limited setup than you would have with RLS, which would not have the issues that you feel it would.
Of course, to really know we'd have to compare real world numbers, and compare soft benefits. How well suited is the HUS plus three node design to scaling up? If you add one more node, do you need switches? Is it two more nodes? With the RLS design, in our situation at least, we can add many additional nodes and scale up transparently, gaining capacity, IOPS and throughput.
I don't have enough details to compare the final decision that you made against some of the other solutions that are out there. But in the real world, the reasons that you listed as to why you felt RLS was not an option for you are demonstrably not concerns in our environment. It's not a theoretical case, but one that we run every day.
-
@hutchingsp said in The Inverted Pyramid of Doom Challenge:
The HUS is 99.999% uptime rated - the environment it's in is not, so the HUS is not the thing I should be worrying about in terms of reliability.
We get that from normal servers. If the HUS isn't saving costs, nor improving reliability, what is the benefit? I'm shocked that the HUS is only five nines. Not that that is the deciding factor, but with all of your concerns about firmware, failover of 10TB workloads and such... why did that not rule out the HUS? It seems like either the HUS isn't "up" enough to justify the cost and/or your other downtimes are so great that it doesn't matter anyway. I'm just curious as to the cost benefit of the HUS over local storage, given that the HUS isn't getting a significant improvement in uptime and the downtime that it does have is hidden in the "background noise" of your environment's outages.
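To put those uptime ratings in concrete terms (generic availability math, nothing vendor specific):

```python
# Convert availability percentages into allowed downtime per year.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes_per_year(availability_pct):
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three nines (99.9%)", 99.9),
                   ("four nines (99.99%)", 99.99),
                   ("five nines (99.999%)", 99.999)]:
    print(f"{label}: ~{downtime_minutes_per_year(pct):.1f} minutes/year")

# Five nines allows roughly 5.3 minutes of downtime a year; the point above is
# that a quality standalone server is credited with a similar figure, so the
# array is not buying additional uptime in this environment.
```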
-
In the scenario above, using the SAN here is much like a lab scenario. You don't care about the reliability; it's "reliable enough" and it is recognized that reliability is not the concern. So that is much like a lab environment. What I am unclear about, though, is this: without that reliability goal, what goal did the SAN address? It seems like it is both slower and much more expensive than local storage would be, so cost and performance don't seem like they could be the drivers. And while reliability was ultimately dismissed as the deciding factor, it was cited earlier as the reason to rule out local storage, which seems to go against the final conclusion.
-
Actually had one come up today where a company had thirteen hosts and was using a SAN. Of course, the "problem" with this isn't that the design is wrong; it's that it totally fits the guidelines for when you should use a SAN. So in no way does it go against the stated guidance on SAN and IPOD choices. It's an enterprise SAN with thirteen attached nodes (and more being added), so it is well within the range of scale where we expect an IPOD/SAN combo to be totally logical and valid.
https://community.spiceworks.com/topic/1741842-migrating-boot-from-san-hosts
The real question is "is this actually an SMB that just has huge needs, or is it a larger business?" But in either case, it's a good example of where a SAN actually makes total sense (though it isn't a requirement; there are other viable options at that size, too), and it does so while meeting the established, traditional guidelines (more than a dozen nodes on the same shared storage pool), so it is not an exception or a revelation in any way.
-
@scottalanmiller said in The Inverted Pyramid of Doom Challenge:
We have this now and we use the same capacity with replicated local disks as you would with a SAN with RAID 10. Are you using RAID 6 or something else to get more capacity from the SAN than you can with RLS? We aren't wasting any capacity having the extra redundancy and reliability of the local disks with RAIN.
With HDT pools you could have Tier 0 be RAID 5 SSD, RAID 10 for 10Ks in the middle tier, and RAID 6 for NL-SAS, with sub-LUN block tiering across all of that. With replicated local storage you generally can't do this (or you can't dynamically expand individual tiers). Now, as 10K drives make less sense (hell, magnetic drives make less sense), the cost benefits of a fancy tiering system might make less sense too. Then again, I see HDS now doing tiering between their custom FMDs, regular SSDs, and NLs in G series deployments, so there's still value in having a big ass array that can do HSM.
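Rough sketch of how mixed RAID levels per tier change the capacity math (the tier sizes and drive counts below are invented for illustration, not the actual HUS 110 layout):

```python
# Usable capacity of a tiered pool where each tier uses a different RAID level.
# Tier definitions below are hypothetical, not a real HDT pool configuration.

def usable_tb(raw_tb, drives, raid):
    if raid == "raid5":    # one drive of parity per group
        return raw_tb * (drives - 1) / drives
    if raid == "raid6":    # two drives of parity per group
        return raw_tb * (drives - 2) / drives
    if raid == "raid10":   # mirrored pairs
        return raw_tb / 2
    raise ValueError(raid)

tiers = [
    ("Tier 0: SSD,     RAID 5 ",  4.0,  8, "raid5"),
    ("Tier 1: 10K SAS, RAID 10", 12.0, 12, "raid10"),
    ("Tier 2: NL-SAS,  RAID 6 ", 32.0, 16, "raid6"),
]

total = 0.0
for name, raw, drives, raid in tiers:
    u = usable_tb(raw, drives, raid)
    total += u
    print(f"{name}: {u:.1f} TB usable of {raw:.1f} TB raw")
print(f"Pool total: {total:.1f} TB usable")
```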
-
@scottalanmiller His previous RAIN system (HP StoreVirtual) requires local RAID. So you're stuck with nested RAID 5s (awful IO performance) or nested RAID 10 (awful capacity efficiency, but great resiliency). Considering the HDS has five nines already, though, it's kind of a moot point.
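To put numbers on that nesting penalty (generic math with an assumed eight drives per node, not a real StoreVirtual configuration):

```python
# Capacity efficiency of network RAID 10 layered on top of local RAID,
# the StoreVirtual-style nesting described above. Assumed counts only.

def local_efficiency(raid, drives_per_node):
    if raid == "raid5":
        return (drives_per_node - 1) / drives_per_node
    if raid == "raid10":
        return 0.5
    raise ValueError(raid)

def nested_efficiency(local_raid, drives_per_node, network_copies=2):
    """Network mirroring keeps 1/network_copies of whatever the local RAID keeps."""
    return local_efficiency(local_raid, drives_per_node) / network_copies

for raid in ("raid5", "raid10"):
    print(f"local {raid} + network mirror: "
          f"{nested_efficiency(raid, drives_per_node=8):.0%} of raw is usable")

# local RAID 5 + mirror  -> ~44% usable, with RAID 5's write penalty
# local RAID 10 + mirror -> 25% usable, the "insane RAW capacity" problem
```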
-
@scottalanmiller said in The Inverted Pyramid of Doom Challenge:
@hutchingsp said in The Inverted Pyramid of Doom Challenge:
In the end I ruled it out because it introduces more complexity than I wanted...
I'm confused here. How does RLS introduce more complexity? Our RLS system is so simple that I couldn't possibly install any enterprise SAN on the market as easily. It takes literally no knowledge or setup at all. We are using Scale's RLS hyperconverged system and I literally cannot fathom it being easier than it is today. Just making a LUN would be more complex than using RLS alone. Just needing to know that you need to make a LUN is more complex. With the RLS that we have, you just "use it". If you want more power, you can choose to manage your performance tiering manually, but there is no need to if you don't want to, as it does it automatically for you out of the box.
To be fair, in comparison he could have deployed Hitachi Unified Compute (their CI stack) and gotten the same experience (basically someone builds out the solution for you and gives you pretty automated tools to abstract the complexity away, as well as API-driven endpoints to provision against, etc.). This isn't an argument for an architecture (and I like HCI, I REALLY do); this is an argument about build vs. buy that you're making. HCI with VSAs can be VERY damn complicated. HCI can be very simple (especially when it's built by someone else). CI can do this too.
-
@scottalanmiller said in The Inverted Pyramid of Doom Challenge:
@hutchingsp sorry that it took me so long to address the post. When you first posted, it seemed reasonable and we did not have any environment of our own that exactly addressed the scale and needs that you have. But for the past seven months we've been running on a Scale cluster, first a 2100 and now a 2100/2150 hybrid, and that addresses every one of the reasons that I feel you were avoiding RLS, and addresses them really well.
The other issue with Scale (and this is no offense to them), or anyone in the roll-your-own hypervisor game right now, is that there are a number of vertical applications that the business cannot avoid which REQUIRE specific hypervisors or storage to be certified. To get this certification you have to spend months working with their support staff to validate capabilities (performance, availability, and predictability being ones that can strike a lot of hybrid or HCI players out) as well as commit to cross-engineering support with them. EPIC EMR (and the underlying Caché database), industrial control system applications from Honeywell, and all kinds of others.
This is something that takes time; it takes customers asking for it, it takes money, and it takes market clout. I remember when even basic Sage based applications refused to support virtualization at all (and then Hyper-V). It takes time for market adoption, and even in HCI there are still some barriers (SAP is still dragging their feet on HANA certifications for HCI). At the end of the day customer choice is great, and if you can be a trailblazer and risk support to help push vendors to be more open minded, that's great, but not everyone can do this.
There are other advantages to his design over an HCI design. If he has incredibly data-heavy growth in his environment he doesn't have to add hosts. As licensing for Microsoft application stacks (Datacenter, SQL, etc.) becomes tied to CPU cores in the near future, adding hosts just to add storage can become rather expensive if you don't account for it properly. Now, you could mount external storage to the cluster to put the growing VMs on, but I'm not sure if Scale supports that. He also, within the HUS, can either grow existing pools, add new pools (maybe a dedicated cold Tier 3), or pin LUNs to a given tier (maybe put a database always in flash). There's a lot of control here over storage costs and performance (if you have the patience to manage it). Sadly, no VVOLs support is coming to the old HUSes.
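To illustrate that licensing point with placeholder numbers (the per-core price here is hypothetical, not actual Microsoft pricing; the 16-core minimum mirrors the announced per-core model):

```python
# Illustrative only: how per-core licensing grows when hosts are added purely
# for storage capacity. The per-core price is a placeholder, not real pricing.

CORE_PRICE = 600           # hypothetical dollars per core
MIN_CORES_PER_HOST = 16    # assumed per-host licensing minimum

def license_cost(hosts, cores_per_host):
    licensed_cores = max(cores_per_host, MIN_CORES_PER_HOST)
    return hosts * licensed_cores * CORE_PRICE

print(license_cost(3, 16))   # three-host HCI cluster
print(license_cost(4, 16))   # add a fourth host just to grow storage

# Every storage-driven host addition drags a full host's worth of core licenses
# with it; growing a shared array adds capacity without adding licensed cores.
```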