Large or small Raid 5 with SSD
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Pete-S said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
So would this make a 4 drive raid 5 and an 8 drive raid 6 be similar in reliability?
You'd have to define reliability here. You are twice as likely to experience a drive failure on the 8-drive array. For data loss you are about the same - if you don't replace the failed drive.
In real life I feel it comes down to practical things. Like how big your budget is and how much storage you need. 4TB SSD is pretty standard so if you need 24 TB SSD then you need to use more drives. In almost no case would it be a good idea to use many small drives.
Many small drives will typically overrun the controller, too, making the performance gains that you expect to get, all lost.
Yes and as you mentioned above NVMe is where it's at when it comes to performance. SATA and SAS SSDs are for legacy applications - as Intel says.
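A rough way to sanity-check the "roughly twice as likely" point, assuming an illustrative 2% annual failure rate per drive and independent failures (both assumptions, not vendor numbers):

```python
# Quick sanity check on the "roughly twice as likely" point, assuming an
# illustrative 2% annual failure rate per drive and independent failures.
afr = 0.02  # assumed annual failure rate per drive, not a vendor spec

for drives in (4, 8):
    p_any = 1 - (1 - afr) ** drives
    print(f"{drives} drives: {p_any:.1%} chance of at least one failure per year")

# 4 drives: ~7.8%, 8 drives: ~14.9% -- close to, but not exactly, double.
```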
-
How do you figure out where a RAID card will bottleneck when you are using an SSD RAID, say a RAID10 of some model of SSDs on a Dell H740p, up to the maximum number of drives and channels?
How many would it take to have to realistically consider the RAID card itself being the bottleneck?
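One rough way to ballpark it is a bandwidth budget: add up what the drives can stream and compare it to the controller's host link. The numbers below are assumptions for illustration (a PCIe 3.0 x8 host link and ~0.5 GB/s per SATA SSD), not H740p specs:

```python
# Back-of-the-envelope bandwidth budget, all numbers assumed for illustration:
# a PCIe 3.0 x8 host link (~7.9 GB/s usable) and ~0.5 GB/s per SATA SSD.
pcie_gbs = 7.9   # assumed usable host-link bandwidth, GB/s
ssd_gbs  = 0.5   # assumed per-SSD sequential throughput, GB/s

print(f"~{pcie_gbs / ssd_gbs:.0f} SSDs would saturate the host link on throughput alone")

# The controller's IOPS ceiling usually bites well before raw bandwidth does,
# so treat this as an upper bound rather than a target.
```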
-
@Obsolesce said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
So in general, an 8 drive raid 5 is more risky than a 4 drive raid 5, but how much so? I want to know how to calculate the tipping point between safety and cost.
It's pretty close, but not exactly, twice as likely to lose a drive. For loose calculations, just use double. If the four drive array is going to lose a drive once every five years, the eight drive array will lose two.
Or you might not lose any in five years, perhaps, if you aren't nearing the DWPD rating, have a good RAID card with caching, NVMe caching, use RAM caching, etc... Have an SSD mirror JUST for caching, whatever. There are ways to extend life.
How big are the drives? The more drives, the more performance you get, but more likely for one to fail. The smaller the drives, the faster the "rebuilds" depending on how you look at it and what RAID level.
If you have a high number of drives and they are pretty large, and are SSD, then a RAID6 is fine. How much performance do you actually need?
I am consolidating down to one server. A simple raid 5 with like 4x3.84TB SSD's is appealing because it gives me more IOPS than I need and takes up fewer bays (which gives me more flexibility in the future). It also means I don't have to worry about which array everything is stored on.
I can do this project with two arrays, a smaller raid with SSD's and then some HDD's in raid 10. But to get the IOPS I need, I would need more drives than I prefer to use. If I am keeping everything on board the host, then in a hypothetical 16-bay host I would probably want something like 14 HDD's in that raid 10 and 2 SSD's in raid 1.
Live optics says I've got ~3k IOPS peak between all my existing hosts, and ~800 at 95%. But I think at least half of that is being bottlenecked at the network level because it is coming across from my synology. Latency is also an issue, especially with my current setup.
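For a rough sense of how the two layouts compare against those Live optics numbers, here is a hedged back-of-the-envelope, with assumed per-drive figures (~200 IOPS for a 10k SAS HDD, ~10,000 sustained for a SATA SSD behind a RAID controller) and a 70/30 read/write mix picked purely for illustration:

```python
# Hedged sizing comparison of the two layouts, with assumed per-drive numbers
# (~200 IOPS per 10k SAS HDD, ~10,000 sustained per SATA SSD) and a 70/30
# read/write mix picked purely for illustration.
read_frac = 0.7

def effective_iops(drives, per_drive, write_penalty):
    # classic front-end estimate: backend IOPS divided by the blended penalty
    return drives * per_drive / (read_frac + write_penalty * (1 - read_frac))

print(f"14x HDD RAID10: ~{effective_iops(14, 200, 2):,.0f} IOPS")
print(f" 4x SSD RAID5:  ~{effective_iops(4, 10_000, 4):,.0f} IOPS")

# The HDD layout lands under the ~3k IOPS peak; the SSD RAID5 clears it with
# an order of magnitude to spare.
```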
-
and the cost for that 4 drive raid 5 is not much more than filling it with spinners
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Pete-S said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
So would this make a 4 drive raid 5 and an 8 drive raid 6 be similar in reliability?
You'd have to define reliability here. You are twice as likely to experience a drive failure on the 8-drive array. For data loss you are about the same - if you don't replace the failed drive.
In real life I feel it comes down to practical things. Like how big your budget is and how much storage you need. 4TB SSD is pretty standard so if you need 24 TB SSD then you need to use more drives. In almost no case would it be a good idea to use many small drives.
Many small drives will typically overrun the controller, too, making the performance gains that you expect to get, all lost.
Depending on the type of performance you need, isn't this somewhat easy to do? Like <12 SSD's? At some point, you are bottlenecked at the PCIe lanes and you've got to get complicated or go with an entirely different type of storage system.
-
@Donahue said in Large or small Raid 5 with SSD:
and the cost for that 4 drive raid 5 is not much more than filling it with spinners
Yeah, prices are decently close today.
-
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Pete-S said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
So would this make a 4 drive raid 5 and an 8 drive raid 6 be similar in reliability?
You'd have to define reliability here. You are twice as likely to experience a drive failure on the 8-drive array. For data loss you are about the same - if you don't replace the failed drive.
In real life I feel it comes down to practical things. Like how big your budget is and how much storage you need. 4TB SSD is pretty standard so if you need 24 TB SSD then you need to use more drives. In almost no case would it be a good idea to use many small drives.
Many small drives will typically overrun the controller, too, making the performance gains that you expect to get, all lost.
Depending on the type of performance you need, isn't this somewhat easy to do?
Like <12 SSD's? At some point, you are bottlenecked at the PCIe lanes and you've got to get complicated or go with an entirely different type of storage system.
Generally more like six.
RAID controllers keep getting faster, but so do SSDs.
https://mangolassi.it/topic/2072/testing-the-limits-of-the-dell-h710-raid-controller-with-ssd
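As a very rough illustration of why the crossover is in the single digits, using assumed numbers (an assumed ~500k IOPS ceiling for the RAID ASIC and ~90k random-read IOPS per SATA SSD, not measured H710/H740p figures):

```python
# Very rough illustration only -- both numbers below are assumptions, not
# measured H710/H740p figures.
controller_iops = 500_000   # assumed ceiling for the RAID ASIC
ssd_iops        = 90_000    # assumed random-read IOPS per SATA SSD

print(f"~{controller_iops / ssd_iops:.1f} SSDs before the controller is the limit")
# -> roughly five or six drives, which is where "more like six" comes from.
```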
-
@Donahue said in Large or small Raid 5 with SSD:
Live optics says I've got ~3k IOPS peak between all my existing hosts, and ~800 at 95%.
@Donahue said in Large or small Raid 5 with SSD:
A simple raid 5 with like 4x3.84TB SSD's is appealing
That'll be just dandy. Depends on the SSDs, but that's at least 11k IOPS (edit: at 90% write). Still a 16TB RAID5 though, and rebuild performance is 30% by default.
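For anyone wondering where a number like that comes from, the classic write-penalty estimate looks like this, assuming a conservative 10k sustained IOPS per SSD (the real drives' spec sheets may differ a lot):

```python
# Classic RAID write-penalty estimate, assuming a conservative 10k sustained
# IOPS per SSD (the real drives' spec sheets may differ a lot).
drives        = 4
drive_iops    = 10_000   # assumed sustained per-drive IOPS
write_frac    = 0.9      # the 90% write mix mentioned above
write_penalty = 4        # RAID5: read old data + old parity, write new data + new parity

effective = drives * drive_iops / ((1 - write_frac) + write_penalty * write_frac)
print(f"~{effective:,.0f} IOPS at {write_frac:.0%} writes")   # roughly 11k
```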
-
@Obsolesce said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Live optics says I've got ~3k IOPS peak between all my existing hosts, and ~800 at 95%.
@Donahue said in Large or small Raid 5 with SSD:
A simple raid 5 with like 4x3.84TB SSD's is appealing
That'll be just dandy. Depends on the SSDs, but that's at least 11k IOPS. Still a 16TB RAID5 though, and rebuild performance is 30% by default.
what do you mean by ... and rebuild performance is 30% by default...?
-
@Donahue said in Large or small Raid 5 with SSD:
@Obsolesce said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Live optics says I've got ~3k IOPS peak between all my existing hosts, and ~800 at 95%.
@Donahue said in Large or small Raid 5 with SSD:
A simple raid 5 with like 4x3.84TB SSD's is appealing
That'll be just dandy. Depends on the SSDs, but that's at least 11k IOPS. Still a 16TB RAID5 though, and rebuild performance is 30% by default.
what do you mean by ... and rebuild performance is 30% by default...?
On Dells, when a drive rebuilds, it does it at 30% of its capabilities by default, I assume to prevent production services from coming to a crawl. You can change it though.
-
@Obsolesce said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@Obsolesce said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Live optics says I've got ~3k IOPS peak between all my existing hosts, and ~800 at 95%.
@Donahue said in Large or small Raid 5 with SSD:
A simple raid 5 with like 4x3.84TB SSD's is appealing
That'll be just dandy. Depends on the SSDs, but that's at least 11k IOPS. Still a 16TB RAID5 though, and rebuild performance is 30% by default.
what do you mean by ... and rebuild performance is 30% by default...?
On Dells, when a drive rebuilds, it does it at 30% of its capabilities by default, I assume to prevent production services from coming to a crawl. You can change it though.
30% of capabilities, not 30% speed, though. So it is difficult to calculate.
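It is hard to pin down exactly for that reason, but as a very loose ballpark, assuming the rebuild is limited by sequentially rewriting the replacement drive at ~450 MB/s (an assumption, not a Dell figure):

```python
# Very loose ballpark only -- assumes the rebuild is limited by sequentially
# rewriting the replacement drive at 450 MB/s, which is an assumption.
drive_tb       = 3.84
write_mb_s     = 450     # assumed sustained sequential write speed
rebuild_factor = 0.30    # the Dell default discussed above

full_speed_hours = drive_tb * 1_000_000 / write_mb_s / 3600
print(f"best case ~{full_speed_hours:.1f} h, ~{full_speed_hours / rebuild_factor:.0f} h at the 30% setting")
```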
-
OK, let's add a layer to this. Let's assume the raid 5 will lose a disk. Do I run with no spare of any kind, and when it fails, then buy a replacement and switch it out? Is the URE risk primarily during rebuild, or anytime it is in a degraded state? I know that SSD's are generally an order of magnitude (or two) safer in this regard, but I want to have this planned out ahead of time.
-
also, am I right to assume that network contention can influence IOPS?
-
@Donahue said in Large or small Raid 5 with SSD:
Is the URE risk primarily during rebuild, or anytime it is in a degraded state?
URE is quite nominal on SSDs typically. Not zero, but not like you are used to, either.
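To put some illustrative numbers on "nominal": assume 1 URE per 1e14 bits for a consumer HDD class drive versus 1 per 1e17 bits for a typical enterprise SSD spec, and a RAID5 rebuild that reads the three surviving 3.84 TB drives end to end.

```python
import math

# Illustrative only: assumed URE rates of 1 in 1e14 bits (consumer HDD class)
# vs 1 in 1e17 bits (a typical enterprise SSD spec). A RAID5 rebuild reads
# the three surviving 3.84 TB drives end to end.
bits_read = 3 * 3.84e12 * 8   # bits read from the surviving drives

for label, ure_rate in (("HDD class, 1e14", 1e14), ("SSD class, 1e17", 1e17)):
    # Poisson approximation; avoids float-precision issues with (1 - tiny)**huge
    p_ure = -math.expm1(-bits_read / ure_rate)
    print(f"{label}: ~{p_ure:.2%} chance of a URE during the rebuild")

# HDD-class rates make a URE during rebuild likely; SSD-class rates make it
# a rounding error.
```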
-
@Donahue said in Large or small Raid 5 with SSD:
also, am I right to assume that network contention can influence IOPS?
Resulting IOPS to a third party service, but not IOPS themselves.
-
But I know that you don't have a SAN, so in your case the answer is no.
-
@Donahue said in Large or small Raid 5 with SSD:
OK, let's add a layer to this. Let's assume the raid 5 will lose a disk. Do I run with no spare of any kind, and when it fails, then buy a replacement and switch it out?
You can; lots of places with four-hour SLA hardware replacement plans do that. I wouldn't do it without a warranty to cover the replacements, though.
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Is the URE risk primarily during rebuild, or anytime it is in a degraded state?
URE is quite nominal on SSDs typically. Not zero, but not like you are used to, either.
But is the risk only present once I initiate a rebuild? As in, if a primary failure occurs, do I have time to assess my options before starting? I am basically trying to figure out if I should buy 4 or 5 drives. I know you said earlier that with raid 5, you may as well add that 5th drive to the array and make it a raid 6 rather than have it sit on the shelf.
-
I am probably looking at more like next-day replacement.
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
also, am I right to assume that network contention can influence IOPS?
Resulting IOPS to a third party service, but not IOPS themselves.
Getting off the network will certainly improve latency. That Synology is averaging 14.6 ms reads, with spikes over 280 ms. Writes are averaging 4.5 ms with spikes over 200 ms.
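Those latency figures are really the story here. A single outstanding IO can complete at most 1/latency times per second, so as a rough sketch (the 0.2 ms local-SSD figure is an assumption for comparison):

```python
# Rough sketch: one outstanding IO can complete at most 1/latency times per
# second. The 0.2 ms local-SSD figure is an assumption for comparison.
for label, latency_ms in (("Synology reads", 14.6),
                          ("Synology writes", 4.5),
                          ("local SSD (assumed)", 0.2)):
    print(f"{label}: ~{1000 / latency_ms:,.0f} IOPS per outstanding IO")

# 14.6 ms means ~68 IOPS per stream; deeper queues hide some of that, but
# latency-sensitive workloads still feel it.
```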