Identifying SAN, NAS and DAS
-
@MattSpeller said:
I've read some of your posts before and you came off fairly negative on SAN's - is this primarily from single point of failure (incorrect use case) or are they inherently risky? What makes DAS so attractive in comparison (if it is)? When do you choose a SAN vs DAS vs NAS?
This is an article he wrote about SANs - http://www.smbitjournal.com/2013/06/the-inverted-pyramid-of-doom/.
-
@MattSpeller said:
I've read some of your posts before and you came off fairly negative on SAN's - is this primarily from single point of failure (incorrect use case) or are they inherently risky?
I should be super clear here... I am not anti-SAN at all. SANs are totally awesome. It is the misuse of SAN (or anything else) that I am very negative about. The problem is that SANs are difficult to understand and very often completely misused. Very few people understand them, yet deploy them anyway, and storage is generally the most critical component of any infrastructure, so playing fast and loose with storage design is much more risky than, say, choosing the wrong CPU architecture.
A SAN is inherently more risky than DAS because of the extra switch. Both are more risky than local storage because of the extra chassis and cabling (and possibly switch.)
Where SAN gets really problematic is when it is used for the wrong reasons and at the wrong time. People often associate ideas like high availability with a SAN, but SAN doesn't imply anything like that. SANs are very powerful, like a gun. A gun is a very raw tool, and when you let people play with guns without taking a safety training class, that's how people blow toes off accidentally. SANs are like that. Very powerful, but they make it way too easy to blow your own toes off.
-
@NetworkNerd said:
This is an article he wrote about SANs - http://www.smbitjournal.com/2013/06/the-inverted-pyramid-of-doom/.
Should be noted that the inverted pyramid would be the same if the pyramid point was a DAS or a NAS too. It isn't SAN that makes it a problem there. It is the upside down architecture resting on a single point.
SAN guarantees that your pyramid is three levels or more. NAS implies it but does not guarantee it. DAS guarantees only two levels. So SAN gets picked on the most, but only because it is the only option with a switching level inherent to the concept.
-
@MattSpeller said:
What makes DAS so attractive in comparison (if it is)?
DAS is always simpler than SAN. They are the same physical devices, just one has an extra switch and one does not. You can take one server and one storage array and hook it up as a DAS, then add one cable and one switch and turn it into a SAN in about ten seconds. Then remove the switch again and go back to DAS.
SAN is nice because you can scale beyond the limits of physical connectors. That is the one and only upside to SAN versus DAS. DAS doesn't need the switch. The switch adds cost, latency (albeit generally pretty small) and risk (an extra device or layer to fail.)
So a SAN is a DAS with a longer dependency chain.
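The longer dependency chain can be made concrete with a back-of-the-envelope availability calculation. This is a hypothetical sketch with made-up per-device figures, not measured data, but it shows why each added layer costs you something:

```python
# Availability of a serial dependency chain is the product of the
# availability of each component in the chain.
def chain_availability(*components):
    result = 1.0
    for availability in components:
        result *= availability
    return result

# Illustrative (invented) per-device availability figures.
server, array, switch = 0.999, 0.999, 0.999

das = chain_availability(server, array)          # server -> array
san = chain_availability(server, switch, array)  # server -> switch -> array

print(f"DAS availability: {das:.6f}")
print(f"SAN availability: {san:.6f}")
```

However small the switch's failure rate, the SAN chain can never be more available than the same gear wired as DAS.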
-
The reason that NAS is often preferred to SAN is because it is generally far better understood by IT practitioners. Nearly everyone in IT has worked with mapped drives, and that is NAS. So how they work is obvious and not confusing. A NAS provides arbitration for access and that is critical. The NAS doesn't trust the users, it protects users from each other (or themselves.)
SAN is often confused with NAS, and this is where it becomes very dangerous. SANs lack arbitration for file access, but because users tend to expect it, it is extremely common for users to destroy data on a SAN because they think that it is robust and provides protections that it does not.
If you were an IT practitioner who knew SAN inside and out and did not know NAS well, then that recommendation would be reversed. But the number of people familiar with file protocols is very high and the number familiar with block protocols is very low, so this would be extremely rare.
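The arbitration point can be shown in miniature. With a file protocol there is always a referee (the NAS server, or locally the OS) that writers can coordinate through; on a raw block device shared between hosts, nothing plays that role. A minimal sketch of cooperative locking on a POSIX system (the file name is hypothetical, and this only illustrates the idea of arbitration, not any vendor's implementation):

```python
import fcntl
import os
import tempfile

# Two writers sharing one file through the filesystem.  Each takes an
# exclusive advisory lock before writing, so the writes cannot
# interleave -- this is the kind of arbitration a file protocol gives
# you for free and a raw SAN LUN does not.
path = os.path.join(tempfile.mkdtemp(), "shared.log")

def append_line(text):
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until we own the file
        f.write(text + "\n")
        fcntl.flock(f, fcntl.LOCK_UN)   # hand the file back

append_line("writer one")
append_line("writer two")

with open(path) as f:
    print(f.read())
```

Mount the same LUN on two hosts with an ordinary (non-cluster) filesystem and there is no equivalent referee: each host caches and writes as if it were alone, which is exactly how the data destruction described above happens.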
-
@MattSpeller said:
When do you choose a SAN vs DAS vs NAS?
I wrote about this a few years ago in Choosing a Storage Type.
The rule of thumb is written like this...
Local Storage -> DAS -> NAS -> SAN
Start to the left and stop when the type is an option. Only move to the right when the option to the left is not possible. So local storage is always the default starting point. Many times you need to have external storage, but not nearly as often as people generally assume. You can very often use local storage. Even for some very complex things.
If you must use external storage, DAS is the least complicated (mostly) and does not require a network. So evaluate DAS. If DAS meets your needs, use DAS. If not, move right.
Evaluate if you can do what you need with NAS. It's very rare that you can't use either DAS or NAS for what you want to do. NAS covers nearly every base that local storage and DAS cannot do.
If none of those do what you need, SAN is the storage type that handles the remaining use cases - which basically boil down to large scale storage consolidation. In an SMB, that would be extremely rare. But in an enterprise that large scale consolidation (that is cost savings, not performance or reliability) is a huge factor and makes SAN very common and very sensible there.
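The left-to-right rule of thumb can be written as a tiny decision function. The requirement names here are hypothetical, invented only to make the ordering explicit:

```python
def choose_storage(needs_external=False,
                   needs_network_sharing=False,
                   needs_block_consolidation=False):
    """Start at the left of the chain and move right only when forced."""
    if not needs_external:
        return "Local Storage"   # the default starting point
    if not (needs_network_sharing or needs_block_consolidation):
        return "DAS"             # external storage, no network required
    if not needs_block_consolidation:
        return "NAS"             # file protocol covers most sharing needs
    return "SAN"                 # large-scale block consolidation

print(choose_storage())                            # Local Storage
print(choose_storage(needs_external=True,
                     needs_network_sharing=True))  # NAS
```

The point of the sketch is the ordering: each branch falls through to the next type only when the cheaper, simpler one to its left cannot do the job.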
-
Having a SAN is like putting NO2 in your car engine. Sure it'll make the car faster, but most people don't need the headache, cost, maintenance and additional wear & tear. If a car mechanic recommended it to every soccer mom who brought in their minivan, or every FedEx truck, they'd be run out of town soon enough. Sadly that isn't yet the case for SMB IT.
-
Common use cases:
Local Storage: Nearly everything. All normal storage use cases. From low cost stand alone servers to super high performance systems to cluster nodes to high availability systems.
NAS: Shared storage for users, generally accessed across a large array of machines. NAS storage tends to focus on "users" rather than "servers."
DAS: Small scale storage consolidation.
SAN: Large scale storage consolidation.
-
I've got them sorted now, thank you!
-
Would it be better to use a file-based storage system for the backend of a virtual infrastructure (VMware supports NFS, HyperV supports SMB 3.0, Xen will use whatever you have mounted, if I remember right), or use a block-based storage system? Is it better to have access to the underlying architecture, or give the hypervisor the raw block storage to do with it what it will? I've got my own ideas on the subject matter, but I would like to hear what others have to say.
-
@Nic said:
Having a SAN is like putting NO2 in your car engine. Sure it'll make the car faster, but most people don't need the headache, cost, maintenance and additional wear & tear. If a car mechanic recommended it to every soccer mom who brought in their minivan, or every FedEx truck, they'd be run out of town soon enough. Sadly that isn't yet the case for SMB IT.
Except that SAN doesn't make you faster; it in fact normally makes you slower.
-
@coliver said:
Would it be better to use a file-based storage system for the backend of a virtual infrastructure (VMware supports NFS, HyperV supports SMB 3.0, Xen will use whatever you have mounted, if I remember right), or use a block-based storage system? Is it better to have access to the underlying architecture, or give the hypervisor the raw block storage to do with it what it will? I've got my own ideas on the subject matter, but I would like to hear what others have to say.
I'd say it really depends on your situation. Using Scott's post above:
Local Storage -> DAS -> NAS -> SAN
If you're using local storage or DAS because they are usable, you're forced into direct block access (though in the case of Hyper-V I'm not really sure what the access is like with local/DAS).
If NAS fits your bill, then you're forced into using a file protocol. And of course, lastly, if you end at SAN as the only viable solution, you're back to block level.
-
@coliver said:
Would it be better to use a file-based storage system for the backend of a virtual infrastructure (VMware supports NFS, HyperV supports SMB 3.0, Xen will use whatever you have mounted, if I remember right), or use a block-based storage system? Is it better to have access to the underlying architecture, or give the hypervisor the raw block storage to do with it what it will? I've got my own ideas on the subject matter, but I would like to hear what others have to say.
"Better" is tough to determine here. The general recommendation, especially for VMware and Xen which have very mature support here, is to use NFS before iSCSI. This is mostly because the protocol provides "accident prevention", is easier to set up and reacts well to bonding. iSCSI requires you to use MPIO and to more carefully control the protocol to protect yourself.
iSCSI will normally outperform NFS, but performance is rarely a major concern and it is only "normally" faster.
So NFS is generally recommended (even by the vendors) because it is safe and easy. If you need more performance and are concerned about NFS then iSCSI isn't really the solution and what you really need to do is look at something with lower latency like FC or SAS.
-
On HyperV, the use of SMB3 is pretty nascent and very few vendors provide a good platform for it. So there tends to be a trend to remain with iSCSI for HyperV because of this. Not because SMB3 isn't the better option at a protocol level, but because it is so new that it remains mostly impractical.