Local Storage vs SAN ...
-
@JaredBusch said in Local Storage vs SAN ...:
@scottalanmiller said in Local Storage vs SAN ...:
@JaredBusch said in Local Storage vs SAN ...:
@scottalanmiller said in Local Storage vs SAN ...:
It's taking a long time to upload
You know, there are no issues with plugins on my NodeBB systems. You should really look closer at what your errors are.
I'm not uploading it HERE. I'm uploading it to YouTube.
I meant to click reply to your prior post, the one with the link and no video preview. But my point stands.
That's not an issue of a broken plugin. I can't find any plugin that does that. They removed the YouTube plugins from the repos.
-
@BraswellJay said in Local Storage vs SAN ...:
We are planning a server upgrade and I find myself faced with the question of whether a SAN is necessary. I know there have been many posts both here and on other forums about SANs being oversold in situations where they are not needed. My gut instinct is that my situation is one that really doesn't require a SAN, yet I still find myself unsure that I understand the various questions that I should be considering when making this decision.
I bought a copy of Linux Administration Best Practices by @scottalanmiller and am reviewing the chapters on system storage, in particular the parts on SANs, local storage and replicated local storage.
Our needs are not sophisticated. We will have only a handful of VMs: a file server, SQL server, FreePBX, an inventory management system server, a security system server, and an internal application server for a few internal tools. For most of these we can afford some downtime in the event of a host failure. The exception is really the SQL server. While some downtime would not be catastrophic, it would be far better from a continuity perspective if it could fail over to a secondary host when necessary.
With that in mind, I had planned for two hosts so we could survive a failure of one of them. My primary confusion, though, is how I would accomplish replicated local storage. Is this functionality that the hypervisor must provide? The best practices book mentions several technologies (DRBD, Gluster, Ceph) that can be used for RLS, but I would think that these would have to run in the hypervisor itself and not as separate VMs on the host. Is that correct?
In general, for relatively small environments such as mine, is it feasible to even attempt local storage replication? Our MSP has quoted an EMC SAN device to the tune of $25k so that VMs could be migrated between hosts with storage being on the SAN. What would an implementation without the SAN look like if I wanted to maintain the replication and the ability for the VMs to be migrated between hosts?
A Hyper-Converged Infrastructure setup would be the best way to go IMO.
Two nodes with a decent AMD EPYC 16-core 155W+ CPU and 8x 64GB ECC if Rome/Milan based, or 12x 64GB ECC if Genoa based.
We only do Microsoft's Storage Spaces Direct (S2D) and Azure Stack HCI with most of our HCI platforms being S2D.
The first place to start is here: www.liveoptics.com
Get a baseline for each VM: daily highs and lows, weekly, and monthly. Get an idea of what the demands are on the current infrastructure.
With solid evidence on hand, move on to planning the HCI setup with enough IOPS to live today and into a five-year future. That means knowing some company history to get an idea of growth.
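To make that sizing step concrete, here is a minimal sketch of the projection math in Python. All of the per-VM numbers and the growth rate are hypothetical placeholders, not figures from any real baseline; a tool like Live Optics would supply the measured peaks.

```python
# Hypothetical HCI sizing sketch: take measured peak IOPS per VM and
# compound them forward five years at an assumed annual growth rate.

def projected_iops(peak_iops_today: float, annual_growth: float, years: int = 5) -> float:
    """Project today's measured peak IOPS forward with compound yearly growth."""
    return peak_iops_today * (1 + annual_growth) ** years

# Made-up per-VM peaks standing in for a month of baselining data.
vm_peaks = {
    "file server": 400,
    "sql server": 2500,
    "freepbx": 150,
    "inventory": 300,
    "security": 600,
    "internal apps": 250,
}

total_today = sum(vm_peaks.values())
target = projected_iops(total_today, annual_growth=0.15)  # assume 15%/year growth
print(f"Peak today: {total_today} IOPS; size for roughly {target:.0f} IOPS at year five")
```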
-
@scottalanmiller said in Local Storage vs SAN ...:
vSAN is any SAN run virtualized
I think that is incorrect. The definition is virtual storage area network. A software defined storage area network if you will.
That is not the same as a virtualized storage area network.
-
@scottalanmiller said in Local Storage vs SAN ...:
@Pete-S said in Local Storage vs SAN ...:
DRBD, Gluster and Ceph are simply technologies used to build a vSAN.
They can be, but 99% of the time no SAN layer will be used. I've never seen Gluster or Ceph used to make a vSAN, and DRBD mostly only in a lab. They are so much faster and more robust without the SAN layer that it's not popular to do that. So much of their value comes from removing the need and complexity of the networking layer, since the storage itself is already replicated to each node. If you add the vSAN layer, you have to deal with a loss of redundancy (in the connection layer) and build that back in.
I don't think that there is such a thing as a SAN layer by definition.
A SAN is just a storage area network. It doesn't imply that it has to have SAS, iSCSI or fiber channel or any other protocol that is traditionally used by physical SAN units.
I'd say a SAN is an architecture more than a specific technology.
-
@Pete-S said in Local Storage vs SAN ...:
@scottalanmiller said in Local Storage vs SAN ...:
vSAN is any SAN run virtualized
I think that is incorrect. The definition is virtual storage area network. A software defined storage area network if you will.
That is not the same as a virtualized storage area network.
There's some contention around the "vSAN"/"VSAN" designation.
StarWind and VMware adopted the vSAN designation for their Hyper-Converged Infrastructure solution sets IIRC. Both did.
HCI means local storage on each node, a dedicated network fabric for node-to-node storage I/O, and resilience/redundancy for the disks based on how many nodes and what kind of performance is needed.
Fault Domains are at the disk and node level while some products allow for a form of Stretch Cluster which could be rack to rack, DC to DC, or intra-DC within a certain amount of latency (S2D/AzSHCI is 5ms or less).
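As a quick illustration of that latency bound (a rough sketch only; the hostname and port are placeholders, and real qualification would use the platform's own validation tooling), a node-to-node round trip can be sanity-checked with a simple TCP timing loop:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 10) -> float:
    """Median TCP connect round-trip time to a peer node, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            pass  # the connect handshake itself is the round trip we time
        times.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(times)

rtt = tcp_rtt_ms("node2.example.internal", 445)  # hypothetical peer and port
print(f"median RTT {rtt:.2f} ms -> {'within' if rtt <= 5.0 else 'outside'} the 5 ms stretch bound")
```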
-
@PhlipElder said in Local Storage vs SAN ...:
StarWind and VMware adopted the vSAN designation for their Hyper-Converged Infrastructure solution sets IIRC. Both did.
Both do vSAN. So it makes sense as they run SAN appliances on VMs.
But neither use it to designate hyperconvergence, which is important, because it doesn't.
Both of them offer HCI options, and both offer it using their vSAN products.
Both of them also offer "traditional" SAN that is virtualized using those vSAN products as well.
-
@PhlipElder said in Local Storage vs SAN ...:
HCI means local storage on each node, a dedicated network fabric for node-to-node storage I/O, and resilience/redundancy for the disks based on how many nodes and what kind of performance is needed.
Well, it doesn't quite mean all of that. It just means putting everything onto the individual node. It doesn't actually imply the network fabric, resiliency, redundancy, or anything like that. All of those concepts were layered onto the term much later by marketing teams. Hyperconvergence itself is much simpler, like all of these terms.
-
@Pete-S said in Local Storage vs SAN ...:
A SAN is just a storage area network. It doesn't imply that it has to have SAS, iSCSI or fiber channel or any other protocol that is traditionally used by physical SAN units.
I'd say a SAN is an architecture more than a specific technology.
It is, for sure. But it takes a specific type of technology, not a specific technology, to make that architecture.
SAS doesn't qualify to be a SAN, for example. If you connect via SAS, that makes it local storage. If you use iSCSI, that makes it SAN attached.
SAS doesn't create a network, iSCSI does. Hence the difference. To be a storage area NETWORK, you need a network protocol. So the architecture designates the type of technology.
SANs came about to address the limitations of direct attach (SAS, SCSI, ATA, etc.). We already had shared storage before we had SAN. SAN let that shared block storage go onto a network. So you need the network protocol to make it a SAN.
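A toy sketch can make the distinction visible. The point is only that wrapping block reads in a network protocol, however primitive, is what turns shared block storage into a SAN; the one-line protocol, the port, and the paths here are all invented for illustration and resemble no real SAN protocol:

```python
import socket
import struct

BLOCK = 512  # bytes per block in this toy protocol

def serve_blocks(path: str, port: int = 3260) -> None:
    """Serve 'read block N' requests over TCP from a backing file or device.

    Opening the backing file directly would be local (direct-attach) access;
    answering for it over the network is the SAN-shaped part.
    """
    with open(path, "rb") as dev, socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                req = conn.recv(8)
                if len(req) < 8:  # client closed (or sent a short request)
                    break
                (lba,) = struct.unpack("!Q", req)  # a request is just a block number
                dev.seek(lba * BLOCK)
                conn.sendall(dev.read(BLOCK))  # the block crosses the network

def read_block(host: str, lba: int, port: int = 3260) -> bytes:
    """Client side: fetch one block over the wire instead of from a local disk."""
    with socket.create_connection((host, port)) as s:
        s.sendall(struct.pack("!Q", lba))
        return s.recv(BLOCK)
```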
-
@Pete-S said in Local Storage vs SAN ...:
I think that is incorrect. The definition is virtual storage area network. A software defined storage area network if you will.
So yes, in the same way that SAN technically refers to the network and not the devices or protocols, but it is rarely used that way.
But in that sense, vSAN has existed as long as we have had software-controlled switches, because that's the "v" piece if we use it that way, and then all those StarWind and VMware products can't be vSANs. They are only a vSAN in the sense that the misuse of SAN means the appliance, not the network, and they are that appliance virtualized. In both cases, and all others not mentioned here, it is the virtualization of the appliance, not the network, that vendors, engineers, and end users call vSAN.
In lots of cases, the network is virtualized too, just by the nature of how it is used. But it's virtualized whether vSAN is used or not. That's just SDN.
-
@scottalanmiller said in Local Storage vs SAN ...:
@PhlipElder said in Local Storage vs SAN ...:
StarWind and VMware adopted the vSAN designation for their Hyper-Converged Infrastructure solution sets IIRC. Both did.
Both do vSAN. So it makes sense as they run SAN appliances on VMs.
VMware vSAN runs directly on the hypervisor as far as I know. I haven't installed it myself, even though I've specced it for customers.
-
@Pete-S said in Local Storage vs SAN ...:
@scottalanmiller said in Local Storage vs SAN ...:
@PhlipElder said in Local Storage vs SAN ...:
StarWind and VMware adopted the vSAN designation for their Hyper-Converged Infrastructure solution sets IIRC. Both did.
Both do vSAN. So it makes sense as they run SAN appliances on VMs.
VMware vSAN runs directly on the hypervisor as far as I know. I haven't installed it myself, even though I've specced it for customers.
They CLAIM that to be true, but they, like MS, often speak in licensing terms rather than how things are physically implemented.
What's funny is that if that is true, it would obviously make it not a vSAN at all. Which is totally plausible as it is a latecomer to the market and like everything with "virtual" or "cloud" slapped on it, they are just playing on the marketing name that people have heard. vSAN is the product name, not its description.
VMware vSAN uses a proprietary SAN protocol to reach distant nodes (and, I assume, the local one for transparency), making it... a traditional physical SAN. Just a converged one, rather than a remote one.
None of that is bad. It's all just funny that they claim to explicitly not be the product description whose name they used.
-
Examples in known open source worlds...
If you run Proxmox with DRBD on the Debian (host) layer, it's RLS, assuming Proxmox has local disks.
If you then make that block storage available over the network, it becomes a SAN (a traditional / physical SAN), a SAN with replication for resiliency.
If you run Proxmox, make an Ubuntu VM, and install DRBD inside that VM, it may or may not be RLS depending on where the host is getting the storage for that VM. To the VM it will appear to be RLS, but we really don't know unless we check the stack. DRBD is just the replication piece here.
If you then make that DRBD block layer in the VM available over the network, it becomes a vSAN.
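Just to restate that taxonomy as code (my reading of the rules above, not an official definition; the flags and names are invented for the sketch):

```python
def classify(replicated: bool, local_disks: bool,
             exported_over_network: bool, runs_in_vm: bool) -> str:
    """Encode the RLS / SAN / vSAN decision described above."""
    if exported_over_network:
        # Block storage served over a network is a SAN; serve it from
        # inside a VM and it becomes a vSAN.
        return "vSAN" if runs_in_vm else "traditional / physical SAN"
    if replicated and local_disks:
        return "RLS (replicated local storage)"
    return "plain local storage"

# The Proxmox/DRBD cases from the examples above:
print(classify(True, True, False, False))  # DRBD on the host layer -> RLS
print(classify(True, True, True, False))   # host exports the blocks -> traditional SAN
print(classify(True, False, True, True))   # DRBD inside a VM, exported -> vSAN
```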