RAID on SSD's
-
@ccwtech said in RAID on SSD's:
I set up my servers with RAID 10, 10K or 15K RPM drives, a dedicated RAID card, and an SSD for CacheCade.
However, with the decrease in SSD pricing, I would like to start using them more in servers.
- Still do RAID 10 with SSDs?
- Is a dedicated RAID card still needed if using SSDs?
- Usually RAID 5 makes sense; still RAID 10 if you want maximum performance. Be aware that you can saturate almost any SATA/SAS controller with the performance of 6-7 SSDs.
- The same considerations apply to HDDs.
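A rough sanity check of the "6-7 SSDs saturate the controller" claim. The figures here are assumptions, not measurements: roughly 550 MB/s of sequential throughput per SATA III SSD, and roughly 4 GB/s of usable controller throughput (plausible for an older PCIe 2.0 x8 RAID card; newer cards will be higher).

```python
# Assumed, typical figures -- adjust for your actual hardware.
SSD_MB_S = 550          # sequential throughput of one SATA III SSD
CONTROLLER_MB_S = 4000  # usable throughput of the RAID controller's uplink

# How many SSDs running flat out does it take to fill the controller?
drives_to_saturate = CONTROLLER_MB_S / SSD_MB_S
print(round(drives_to_saturate, 1))  # → 7.3
```

With those assumptions, about seven fast SATA SSDs can keep the controller busy, which is where the 6-7 figure comes from; spinning drives at ~150-200 MB/s each would need far more spindles to hit the same ceiling.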
-
If you have more than 6 or 7 SSDs, do you need a separate controller then?
RAID 5 isn't something I have done for years...
-
@ccwtech said in RAID on SSD's:
- Still do RAID 10 with SSDs?
You gravitate towards RAID 5 rather than RAID 10, as the factors are so different.
-
@ccwtech said in RAID on SSD's:
- Is a dedicated RAID card still needed if using SSDs?
Nothing really changes here. The factors for using hardware RAID remain essentially the same.
-
@ccwtech said in RAID on SSD's:
If you have more than 6 or 7 SSDs, do you need a separate controller then?
If you have that many, you probably need to reconsider what you are doing. At some point you are building something really special-purpose and will either need multiple controllers (if speed is actually an issue), software RAID, or a move to non-traditional disk formats.
-
This is different from my question about SSDs, but since you mention it, I do have a server that I am looking at putting ten 1.2 TB 10K RPM drives in. The software vendor doesn't support a NAS.
-
@ccwtech said in RAID on SSD's:
This is different from my question about SSDs, but since you mention it, I do have a server that I am looking at putting ten 1.2 TB 10K RPM drives in. The software vendor doesn't support a NAS.
You're less likely to saturate your RAID card with spinning rust at that size, so that should be a non-issue.
-
@ccwtech said in RAID on SSD's:
If you have more than 6 or 7 SSDs, do you need a separate controller then?
RAID 5 isn't something I have done for years...
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (hardware RAID) or an integrated SATA one (software RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
-
@francesco-provino said in RAID on SSD's:
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (hardware RAID) or an integrated SATA one (software RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
So what are the options when you need to have more than 6-7 SSDs in a server?
-
The performance of a single storage array is limited by the bandwidth of its PCIe link. The only way to overcome this limitation is to stripe arrays across multiple PCIe interfaces.
I don't think you need something like that in a scale-up setup; we are talking about many GB/s and several million IOPS.
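For reference, the PCIe ceiling being discussed can be worked out from the published link parameters. PCIe 3.x runs at 8 GT/s per lane with 128b/130b encoding; the drive throughput used below (~550 MB/s per SATA SSD) is an assumed typical figure.

```python
# PCIe 3.x x8 usable bandwidth, from the per-lane rate and line encoding.
GT_PER_S = 8e9          # 8 GT/s per lane (PCIe 3.x)
ENCODING = 128 / 130    # 128b/130b line-code efficiency
LANES = 8

bytes_per_s = GT_PER_S * ENCODING / 8 * LANES  # bits -> bytes, x8 link
print(round(bytes_per_s / 1e9, 2))  # → 7.88 (GB/s)

# At an assumed ~0.55 GB/s per SATA SSD, roughly 14 such drives could
# theoretically fill this link; a few NVMe-class drives get there far sooner.
```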
-
@ccwtech said in RAID on SSD's:
@francesco-provino said in RAID on SSD's:
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (hardware RAID) or an integrated SATA one (software RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
So what are the options when you need to have more than 6-7 SSDs in a server?
That's not the limitation. The speed doesn't keep increasing.
You are asking about performance numbers beyond what all but something like 0.0001% of companies in the world would ever need. You are at a few hundred IOPS today; there is no possibility that you need to leap to 10+ million IOPS tomorrow.
-
@ccwtech said in RAID on SSD's:
@francesco-provino said in RAID on SSD's:
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (hardware RAID) or an integrated SATA one (software RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
So what are the options when you need to have more than 6-7 SSDs in a server?
No problem with either software RAID or a hardware RAID card: a modern LSI/Avago/Broadcom controller can take up to 255 SAS/SATA SSDs in a single array. Just don't forget that the controller will be the performance bottleneck of the array.
-
I've got a hypervisor with 10 SSDs and 6 spinners. The SSDs have special needs and are in a RAID 10, but nowhere else do I have SSDs in RAID 10; it's RAID 5 everywhere else.
-
@scottalanmiller said in RAID on SSD's:
@ccwtech said in RAID on SSD's:
@francesco-provino said in RAID on SSD's:
You don't "need" a separate controller; it's simply that you will saturate either a separate SAS controller (hardware RAID) or an integrated SATA one (software RAID). Essentially, you can saturate the bandwidth of a PCIe 3.1 x8 link.
So what are the options when you need to have more than 6-7 SSDs in a server?
That's not the limitation. The speed doesn't keep increasing.
You are asking about performance numbers beyond what all but something like 0.0001% of companies in the world would ever need. You are at a few hundred IOPS today; there is no possibility that you need to leap to 10+ million IOPS tomorrow.
The problem is @Francesco-Provino confused him by talking about saturating the bus. A fact, but not a relevant fact to the question at hand.
@CCWTech just needs to put the SSD in a RAID5 and move on.
-
I don't feel confused...
-
... But I do have another question... For SSDs...
If you are doing RAID 5, would you ever use that as a boot volume? Or do a RAID 1 for boot and RAID 5 for data?
-
@ccwtech said in RAID on SSD's:
... But I do have another question... For SSDs...
If you are doing RAID 5, would you ever use that as a boot volume? Or do a RAID 1 for boot and RAID 5 for data?
OBR5 (One Big RAID 5). Having drives for the hypervisor alone is such a waste; it's not like the box is going to be rebooting every five minutes. You'll get better performance and capacity by using OBR5.
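The capacity argument is easy to see with a hypothetical example (the drive count and size below are assumed, not from the thread): six 1 TB drives split into a two-drive RAID 1 for boot plus a four-drive RAID 5 for data, versus one big RAID 5 across all six.

```python
def raid5_usable(drives: int, size_tb: float) -> float:
    """RAID 5 usable capacity: one drive's worth of parity is lost."""
    return (drives - 1) * size_tb

def raid1_usable(size_tb: float) -> float:
    """Two-drive RAID 1 mirror: half the raw capacity."""
    return size_tb

# Hypothetical box: six 1 TB drives.
split = raid1_usable(1) + raid5_usable(4, 1)  # RAID 1 boot + RAID 5 data
obr5 = raid5_usable(6, 1)                     # One Big RAID 5
print(split, obr5)  # → 4 5
```

OBR5 yields an extra drive's worth of usable space here, and every volume stripes across all six drives instead of the boot volume being stuck on a two-drive mirror.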
-
@travisdh1 said in RAID on SSD's:
@ccwtech said in RAID on SSD's:
... But I do have another question... For SSDs...
If you are doing RAID 5, would you ever use that as a boot volume? Or do a RAID 1 for boot and RAID 5 for data?
OBR5 (One Big RAID 5). Having drives for the hypervisor alone is such a waste; it's not like the box is going to be rebooting every five minutes. You'll get better performance and capacity by using OBR5.
Even if it were, what difference would that make? Why waste drives and drive bays on booting a hypervisor (or an OS)?
-
For some reason I recall that booting to a RAID 5 was unreliable (something you shouldn't do), but it's been so long since I have worked with RAID 5 that I'm not sure where I heard that.
-
@ccwtech said in RAID on SSD's:
For some reason I recall that booting to a RAID 5 was unreliable (something you shouldn't do), but it's been so long since I have worked with RAID 5 that I'm not sure where I heard that.
I have not had that experience.