Cross Posting - Storage Spaces Conundrum
-
@Dashrender said in Cross Posting - Storage Spaces Conundrum:
Of course reliability is an issue. But you mentioned losing a RAID card wouldn't remove access to data. That only happens if a) you have redundant RAID cards in front of that storage, or b) you have two copies of the data (meaning at least twice the needed storage).
Or don't use RAID cards
-
@Breffni-Potter said in Cross Posting - Storage Spaces Conundrum:
@Dashrender said
Why won't one node do it? Where is the bottleneck in one node?
Because you can increase speed by lowering the demand on a single device.
But you can also increase speed by making one device faster. Scale up rather than scale out. Scaling up is going to get really costly here; he needs a VMAX to keep scaling up. But it can be done.
We used to do single SANs with 96Gb/s FC connected to them. They were pretty fast.
-
@dafyre said in Cross Posting - Storage Spaces Conundrum:
For the amount of storage he's talking about, if you're looking at scale-out, I can definitely say Exablox would be good here.
That's what I would think.
-
@DustinB3403 said in Cross Posting - Storage Spaces Conundrum:
Hello Spiceheads (from here),
I am currently looking at implementing a large file server. I have a Lenovo server with 70x 1.8TB 10K SAS drives attached via DAS. This server will be used as a file server, serving up 80% small files (1-2MB) and 20% large files (10GB+).
> What I am not sure about is how to provision the drives. Do I use RAID? Should I use storage spaces? Or should I go with something else like ScaleIO, OpenIO, Starwinds etc..?
I am looking for a solution that is scalable, so that I can increase the volume later, and I was also thinking about a little future-proofing: setting this up so I could scale it out if I wanted to.
This does need to be resilient, with a quick turnaround should a disk go down, and it also needs to be scalable.
Looking forward to hearing your views.
StarWind assumes you use some local RAID (hardware or software). We do replication, and per-node redundancy is handled by RAID. So we do RAID61 (RAID1-over-RAID6) for SSDs and HDDs, RAID51 (RAID1-over-RAID5) for SSDs, RAID01 (RAID1-over-RAID0) for SSDs and HDDs (3-way replication is recommended), and RAID101 (RAID1-over-RAID10) for HDDs and SSDs. It's very close to what SimpliVity does, if you care. ScaleIO does 2-way replication at a smaller block level and needs no local RAID (but they take one node away from the capacity equation, so from 3 nodes of raw capacity you'll get (3-1)/2 = 1 node's worth you can really use). OpenIO is something I've never seen before you posted so I dunno what they do.
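To put rough numbers on that capacity equation, here's a quick back-of-the-envelope sketch in Python. The node count, drives per node, and drive size are made-up illustration values (not sizing guidance for the box in the question), and the RAID61 line assumes data is spread across the node pool with exactly two copies kept.

```python
# Back-of-the-envelope usable-capacity math for the layouts described above.
# Node count, drives per node, and drive size are made-up example numbers.

def raid6_usable(drives: int, size_tb: float) -> float:
    """RAID6 gives up two drives' worth of capacity per group to parity."""
    return (drives - 2) * size_tb

def replicated(per_node_usable_tb: float, nodes: int, copies: int) -> float:
    """N-way replication across nodes divides the pooled capacity by the copy count."""
    return per_node_usable_tb * nodes / copies

def two_way_minus_one_node(drives: int, size_tb: float, nodes: int) -> float:
    """ScaleIO-style: 2-way replication on raw drives, with one node's worth
    of raw capacity held back -- the '(n-1)/2' rule described above."""
    raw_tb = drives * size_tb * nodes
    return raw_tb * (nodes - 1) / (2 * nodes)

if __name__ == "__main__":
    drives, size_tb, nodes = 10, 1.8, 3  # hypothetical layout
    raid61 = replicated(raid6_usable(drives, size_tb), nodes, copies=2)
    scaleio = two_way_minus_one_node(drives, size_tb, nodes)
    print(f"RAID61-style (2 copies over per-node RAID6): {raid61:.1f} TB usable")
    print(f"2-way replication, one node reserved:        {scaleio:.1f} TB usable")
```

With those example numbers you get 21.6 TB usable for the RAID1-over-RAID6 layout and 18.0 TB for the 2-way-replication-minus-one-node scheme, out of 54 TB raw.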
-
@KOOLER said in Cross Posting - Storage Spaces Conundrum:
OpenIO is something I've never seen before you posted so I dunno what they do.
We have it running here. They are here in ML, too.
-
@scottalanmiller said in Cross Posting - Storage Spaces Conundrum:
@KOOLER said in Cross Posting - Storage Spaces Conundrum:
OpenIO is something I've never seen before you posted so I dunno what they do.
We have it running here. They are here in ML, too.
That's interesting! Nice to see more storage startups from Europe (France?).
Are they VM-running, or do they have a native port?
-
I would suggest going with physical RAID, as it usually provides better performance compared to software ones, and StarWind or ScaleIO on top of it as a vSAN.
ScaleIO is expensive but has good automation and management functionality.
StarWind is much less expensive and has redundant RAM caching in case you decide to go HA some day. They both scale very well, so either should fit you perfectly.
-
@KOOLER said in Cross Posting - Storage Spaces Conundrum:
@scottalanmiller said in Cross Posting - Storage Spaces Conundrum:
@KOOLER said in Cross Posting - Storage Spaces Conundrum:
OpenIO is something I've never seen before you posted so I dunno what they do.
We have it running here. They are here in ML, too.
That's interesting! Nice to see more storage startups from Europe (France?).
Are they VM-running, or do they have a native port?
VM, but you can install wherever.
-
@Jhon.Smith said in Cross Posting - Storage Spaces Conundrum:
I would suggest going with physical RAID, as it usually provides better performance compared to software ones
Actually, that switched around 2000 when the Pentium III-S hit the market (the S was the 1.1+ GHz line with the double cache, the ancestor of the Xeons). The average system had enough spare CPU that the software RAID overhead was no longer an issue, and the mainline CPUs and memory were so much faster than the RAID cards that software RAID made the overall system faster. The gap between hardware and software RAID, and the amount of spare resources for software RAID, has continued to increase since then.
-
@scottalanmiller said in Cross Posting - Storage Spaces Conundrum:
Actually, that switched around 2000 when the Pentium III-S hit the market (the S was the 1.1+ GHz line with the double cache, the ancestor of the Xeons). The average system had enough spare CPU that the software RAID overhead was no longer an issue, and the mainline CPUs and memory were so much faster than the RAID cards that software RAID made the overall system faster. The gap between hardware and software RAID, and the amount of spare resources for software RAID, has continued to increase since then.
Why do people then continue to buy RAID cards in 2016, if there are many free software RAID solutions that should be faster?
-
@Jhon.Smith said in Cross Posting - Storage Spaces Conundrum:
@scottalanmiller said in Cross Posting - Storage Spaces Conundrum:
Actually, that switched around 2000 when the Pentium III-S hit the market (the S was the 1.1+ GHz line with the double cache, the ancestor of the Xeons). The average system had enough spare CPU that the software RAID overhead was no longer an issue, and the mainline CPUs and memory were so much faster than the RAID cards that software RAID made the overall system faster. The gap between hardware and software RAID, and the amount of spare resources for software RAID, has continued to increase since then.
Why do people then continue to buy RAID cards in 2016, if there are many free software RAID solutions that should be faster?
Some features of hardware RAID are useful, but not because it performs better than software RAID. It's much easier for a bench worker to replace a drive with hardware RAID, which supports hot swap and blind swap, than it is for a bench worker to replace a drive with software RAID.
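If it helps to see what that extra work looks like, here is a minimal sketch assuming Linux mdadm as the software RAID; the array and device names (/dev/md0, /dev/sdb1, /dev/sdc1) are hypothetical placeholders. It walks through the commands an admin has to run for a drive replacement, which is exactly the hands-on step a hardware controller's blind swap hides from the bench worker.

```python
#!/usr/bin/env python3
"""Minimal sketch of a manual drive swap on Linux software RAID (mdadm).
Array and member device names are hypothetical placeholders."""

import subprocess

def run(cmd: list[str]) -> None:
    # Echo the command, then run it and stop on any failure.
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

def replace_member(array: str, failed: str, replacement: str) -> None:
    # 1. Mark the dying member as failed and pull it out of the array.
    run(["mdadm", "--manage", array, "--fail", failed])
    run(["mdadm", "--manage", array, "--remove", failed])
    # 2. Physically swap the disk (hot swap is fine -- it just isn't
    #    *blind*: someone has to run these commands), then add the
    #    replacement so the rebuild can start.
    run(["mdadm", "--manage", array, "--add", replacement])
    # 3. Check rebuild progress.
    run(["cat", "/proc/mdstat"])

if __name__ == "__main__":
    replace_member("/dev/md0", "/dev/sdb1", "/dev/sdc1")
```

With a hardware controller that supports blind swap, the equivalent is pulling the dead drive and pushing in a new one; the rebuild starts on its own.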
-
@coliver said in Cross Posting - Storage Spaces Conundrum:
Some features of hardware RAID are useful, but not because it performs better than software RAID. It's much easier for a bench worker to replace a drive with hardware RAID, which supports hot swap and blind swap, than it is for a bench worker to replace a drive with software RAID.
Makes sense.
-
@Jhon.Smith said in Cross Posting - Storage Spaces Conundrum:
Why do people then continue to buy RAID cards in 2016, if there are many free software RAID solutions that should be faster?
Because speed is not a significant factor; ease of use and compatibility are. Hardware RAID is a requirement for VMware. Software RAID exists for Hyper-V, but it's not production quality (or if it is in 2016, it's not tested yet), so we consider hardware RAID a requirement there too, even though software RAID is technically an option.
So that leaves KVM and Xen as the only software RAID platforms for production use, and they are less than 50% of the market. They are great, but not the most deployed. So software RAID isn't even available to most people. Hardware RAID was always on the market because NetWare and Windows servers either lacked software RAID (NetWare) or it wasn't stable (Windows).
And of KVM and Xen, most deployments are XenServer, which "officially" doesn't support software RAID, but it is baked in and works great. Casual users don't normally use it, though, because there is no GUI for it.
But enterprise server deployments are software RAID only. Big iron (mainframes and minis like big UNIX systems) doesn't even have hardware RAID options. Hardware RAID has never existed outside of the smaller server / commodity space, and even there it has never been ubiquitous. Also, most NAS and SAN products use software RAID. When storage engineers build systems, it's nearly always software RAID. When small business generalists build systems, they tend to buy hardware.
Hardware RAID has one advantage over software RAID... blind swapping. A tech or even a secretary can swap drives without telling anyone and everything will be fine; the array will rebuild without intervention. Software RAID isn't available for blind swap on any OS today, so a system admin who knows what they are doing has to be available to work with the remote hands to swap the drive. SMBs have people who do reckless things, like pulling drives and replacing them without checking or documenting. Hardware RAID makes this less dangerous. But software RAID is faster and more reliable.
-
@coliver said in Cross Posting - Storage Spaces Conundrum:
@Jhon.Smith said in Cross Posting - Storage Spaces Conundrum:
@scottalanmiller said in Cross Posting - Storage Spaces Conundrum:
Actually, that switched around 2000 when the Pentium III-S hit the market (the S was the 1.1+ GHz line with the double cache, the ancestor of the Xeons). The average system had enough spare CPU that the software RAID overhead was no longer an issue, and the mainline CPUs and memory were so much faster than the RAID cards that software RAID made the overall system faster. The gap between hardware and software RAID, and the amount of spare resources for software RAID, has continued to increase since then.
Why do people then continue to buy RAID cards in 2016, if there are many free software RAID solutions that should be faster?
Some features of hardware RAID are useful, but not because it performs better than software RAID. It's much easier for a bench worker to replace a drive with hardware RAID, which supports hot swap and blind swap, than it is for a bench worker to replace a drive with software RAID.
Software RAID supports hot swap too; it's exclusively blind swap that is the difference.