RAID on SSD's
-
@ccwtech said in RAID on SSD's:
If you have more than 6 or 7 SSDs, do you need a separate controller then?
If you have that many you probably need to reconsider what you are doing. At some point you are building something really special purpose and will either need multiple controllers (if speed is actually an issue) or software RAID or move to non-traditional disk formats.
-
This is different from my question about SSDs, but since you mention it, I do have a server that I am looking at putting ten 1.2 TB 10K RPM drives in. The software vendor doesn't support a NAS.
-
@ccwtech said in RAID on SSD's:
This is different from my question about SSDs, but since you mention it, I do have a server that I am looking at putting ten 1.2 TB 10K RPM drives in. The software vendor doesn't support a NAS.
You're less likely to saturate your RAID card with spinning rust at that size, so that should be a non-issue.
-
@ccwtech said in RAID on SSD's:
If you have more than 6 or 7 SSDs, do you need a separate controller then?
RAID 5 isn't something I have done for years...
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (HW RAID) or an integrated SATA one (SW RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
-
@francesco-provino said in RAID on SSD's:
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (HW RAID) or an integrated SATA one (SW RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
So what are the options when you need to have more than 6-7 SSDs in a server, then?
-
The performance of a single storage array is limited by the width of the PCIe link. The only way to overcome this limitation is striping arrays across multiple PCIe interfaces.
I don't think you need something like that in a scale-up setup; we are talking about many GB/s and several million IOPS.
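As a rough sanity check on the saturation claim, here is a back-of-the-envelope sketch. The figures are typical ballpark values I'm assuming (8 GT/s per PCIe 3.x lane with 128b/130b encoding, ~550 MB/s sequential per SATA SSD), not vendor specs:

```python
# Back-of-the-envelope: how many SATA SSDs would it take to saturate
# a PCIe 3.x x8 link? All figures are rough, typical values.

PCIE3_LANE_GTPS = 8.0          # 8 GT/s raw per lane (PCIe 3.x)
ENCODING = 128 / 130           # 128b/130b line-encoding overhead
LANES = 8

link_mb_s = PCIE3_LANE_GTPS * ENCODING * LANES * 1000 / 8  # usable MB/s
ssd_mb_s = 550                 # typical SATA SSD sequential read, MB/s

print(f"PCIe 3.x x8 usable bandwidth: ~{link_mb_s:.0f} MB/s")
print(f"SATA SSDs to saturate it:     ~{link_mb_s / ssd_mb_s:.1f}")
```

With these assumptions the link tops out around 7.9 GB/s, i.e. roughly 14 SATA SSDs reading flat out, which is why a 6-7 drive array isn't anywhere near the wall.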
-
@ccwtech said in RAID on SSD's:
@francesco-provino said in RAID on SSD's:
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (HW RAID) or an integrated SATA one (SW RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
So what are the options when you need to have more than 6-7 SSDs in a server, then?
That's not the limitation. The speed doesn't keep increasing.
You are asking about performance numbers beyond what all but something like .0001% of companies in the world would ever need. You are moving from a few hundred IOPS today; there is no possibility that you need to leap to 10+ million IOPS tomorrow.
-
@ccwtech said in RAID on SSD's:
@francesco-provino said in RAID on SSD's:
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (HW RAID) or an integrated SATA one (SW RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
So what are the options when you need to have more than 6-7 SSDs in a server, then?
No problem with either software RAID or a hardware RAID card: a modern LSI/Avago/Broadcom controller can take up to 255 SAS/SATA SSDs in a single array. Just don't forget that the controller will be the performance bottleneck of the array.
-
I've got a hypervisor with 10 SSDs and 6 spinners. The SSDs have special needs and are in a RAID 10, but nowhere else do I have SSDs in a RAID 10. RAID 5 everywhere else.
-
@scottalanmiller said in RAID on SSD's:
@ccwtech said in RAID on SSD's:
@francesco-provino said in RAID on SSD's:
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (HW RAID) or an integrated SATA one (SW RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
So what are the options when you need to have more than 6-7 SSDs in a server, then?
That's not the limitation. The speed doesn't keep increasing.
You are asking about performance numbers beyond what all but something like .0001% of companies in the world would ever need. You are moving from a few hundred IOPS today; there is no possibility that you need to leap to 10+ million IOPS tomorrow.
The problem is @Francesco-Provino confused him by talking about saturating the bus. A fact, but not a relevant fact to the question at hand.
@CCWTech just needs to put the SSDs in a RAID 5 and move on.
-
I don't feel confused...
-
... But I do have another question... For SSDs...
If you are doing RAID 5, would you ever use that as a boot volume? Or do a RAID 1 for boot and RAID 5 for data?
-
@ccwtech said in RAID on SSD's:
... But I do have another question... For SSDs...
If you are doing RAID 5, would you ever use that as a boot volume? Or do a RAID 1 for boot and RAID 5 for data?
OBR5 (One Big RAID 5). Having drives for the hypervisor alone is such a waste; it's not like the box is going to be rebooting every 5 minutes. You'll get better performance and capacity by using OBR5.
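The capacity side of that is easy to put numbers on. A quick sketch, borrowing the ten-drive, 1.2 TB figures mentioned earlier in the thread purely as an example (the same ratio holds for any identical drives):

```python
# Usable capacity: split RAID 1 (boot) + RAID 5 (data) vs. OBR5,
# assuming 10 identical 1.2 TB drives (figures borrowed from this thread).

drive_tb = 1.2
drives = 10

# Split layout: 2-drive RAID 1 for boot + 8-drive RAID 5 for data.
# RAID 1 yields 1 drive of capacity; an n-drive RAID 5 yields n - 1.
split_tb = (2 - 1) * drive_tb + (8 - 1) * drive_tb

# OBR5: all 10 drives in a single RAID 5
obr5_tb = (drives - 1) * drive_tb

print(f"RAID 1 + RAID 5 usable: {split_tb:.1f} TB")   # 9.6 TB
print(f"OBR5 usable:            {obr5_tb:.1f} TB")    # 10.8 TB
```

OBR5 recovers a full drive's worth of capacity, and every volume gets to stripe across all ten spindles instead of two or eight.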
-
@travisdh1 said in RAID on SSD's:
@ccwtech said in RAID on SSD's:
... But I do have another question... For SSDs...
If you are doing RAID 5, would you ever use that as a boot volume? Or do a RAID 1 for boot and RAID 5 for data?
OBR5 (One Big RAID 5). Having drives for the hypervisor alone is such a waste; it's not like the box is going to be rebooting every 5 minutes. You'll get better performance and capacity by using OBR5.
Even if it were, what difference would that make? Why waste drives and drive bays just to boot a hypervisor (or an OS)?
-
For some reason I recall that booting to a RAID 5 was unreliable (shouldn't do), but it's been so long since I have worked with RAID 5 I'm not sure where I heard that.
-
@ccwtech said in RAID on SSD's:
For some reason I recall that booting to a RAID 5 was unreliable (shouldn't do), but it's been so long since I have worked with RAID 5 I'm not sure where I heard that.
I have not had that experience.
-
@ccwtech said in RAID on SSD's:
For some reason I recall that booting to a RAID 5 was unreliable (shouldn't do), but it's been so long since I have worked with RAID 5 I'm not sure where I heard that.
RAID 5 on HDD is unreliable today, and shouldn't ever be used. SSD drives don't have the same issues. That might be what you're remembering.
-
@travisdh1 said in RAID on SSD's:
@ccwtech said in RAID on SSD's:
For some reason I recall that booting to a RAID 5 was unreliable (shouldn't do), but it's been so long since I have worked with RAID 5 I'm not sure where I heard that.
RAID 5 on HDD is unreliable today, and shouldn't ever be used. SSD drives don't have the same issues. That might be what you're remembering.
I believe so as well.
-
@travisdh1 said in RAID on SSD's:
@ccwtech said in RAID on SSD's:
For some reason I recall that booting to a RAID 5 was unreliable (shouldn't do), but it's been so long since I have worked with RAID 5 I'm not sure where I heard that.
RAID 5 on HDD is unreliable today, and shouldn't ever be used. SSD drives don't have the same issues. That might be what you're remembering.
While I agree that it shouldn't be used, it's not because it's unreliable as such. It's because the chance of running into a URE during a resilver is very high, depending on the size of the array (i.e. if the array is 12 TB or larger, the chance of hitting a URE during a rebuild is near 100% on consumer drives; the equivalent threshold is around 120 TB on enterprise-class drives).
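A quick sketch of the math behind that, assuming the commonly quoted URE rates (1 per 10^14 bits read for consumer drives, 1 per 10^15 for enterprise) and a simple independent-error model:

```python
import math

# Probability of completing a RAID 5 rebuild without hitting a URE,
# assuming commonly quoted URE rates and independent bit errors.

def no_ure_probability(read_tb, ure_per_bit):
    bits = read_tb * 1e12 * 8              # decimal TB -> bits read
    return math.exp(-bits * ure_per_bit)   # Poisson approximation

for label, rate in [("consumer (1e-14)", 1e-14), ("enterprise (1e-15)", 1e-15)]:
    p = no_ure_probability(12, rate)
    print(f"12 TB rebuild, {label}: {p:.0%} chance of no URE")
```

The exact percentage depends on the error model used (this simple one comes out less dire than "near 100% failure"), but the trend is the same either way: the odds of a clean rebuild fall off fast as the amount of data read grows, which is exactly why large-HDD RAID 5 fell out of favor.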
-
@ccwtech said in RAID on SSD's:
For some reason I recall that booting to a RAID 5 was unreliable (shouldn't do), but it's been so long since I have worked with RAID 5 I'm not sure where I heard that.
That was never a RAID issue. You are thinking of a specific problem with any non-RAID-1 software RAID, where you can't boot from the array before the array has been created. It's nothing to do with the RAID level itself but with the boot sector not existing until after the array has been built. A chicken-and-egg problem. RAID 1 gets around this by having a full copy of the entire boot sector on every disk in the array.