SAS SSD vs SAS HDD in a RAID 10?
-
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
BTW: why are we calling hard drives "Winchester drives"?
Someone should update this wiki article to be other countries, and @scottalanmiller
https://en.wikipedia.org/wiki/History_of_hard_disk_drives
Yeah, I've already looked up "Winchester drive". I still don't understand why you guys would refer to modern hard disk drives as Winchester drives. That would be like referring to all gasoline vehicles as Model Ts.
I don't. Scott does. Because Scott does, a number of other people do also. The term is a correct usage.
How is it correct usage?
Just because a once common term has fallen out of common usage, that does not invalidate it as a correct term. This nickname was common, and is no longer so. Doesn't make it wrong. Well, any more wrong than it was to start.
So it's a nickname, not a technical term? If that's the case, then I'd say it's more confusing than anything since it's an antiquated nickname. Just call them hard disk drives or spindle drives or something. That seems a lot more clear and it still differentiates it from SSD drives.
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800 MiB/s and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
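Those per-disk figures translate into rough array-level numbers. A minimal sketch using the thread's ballpark values and the usual RAID 6 write penalty of 6 backend I/Os per logical write; all numbers are illustrative, not benchmarks:

```python
# Rough, illustrative IOPS math for the arrays discussed above.
# Per-disk figures are the ballpark numbers from this thread.

def raid6_read_iops(disks, per_disk_iops):
    """Random reads hit all spindles with no parity penalty."""
    return disks * per_disk_iops

def raid6_write_iops(disks, per_disk_iops):
    """Each RAID 6 random write costs ~6 backend I/Os:
    read data + two parities, write data + two parities."""
    return disks * per_disk_iops / 6

# 8x 10K SAS HDD at ~300 IOPS/disk (midpoint of the 250-450 range)
print(raid6_read_iops(8, 300))    # 2400 read IOPS
print(raid6_write_iops(8, 300))   # 400.0 write IOPS

# 8x mixed-use SAS SSD at ~55,000 IOPS/disk
print(raid6_read_iops(8, 55_000))    # 440000 read IOPS
print(raid6_write_iops(8, 55_000))   # ~73,333 write IOPS
```

The gap in random-write throughput (two orders of magnitude) is why the parity penalty that makes RAID 6 painful on HDDs is far less of a concern on SSDs.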
-
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800 MiB/s and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
BTW: why are we calling hard drives "Winchester drives"?
Because the platters (the magazine) have to spin while an arm (the hammer) moves to find whatever is needed.
Winchester rifles are grand symbols of manual action, with a lot of moving parts.
Whereas any fully automatic weapon would be like an SSD: no moving parts needed to find (fire) whatever is needed.
-
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
BTW: why are we calling hard drives "Winchester drives"?
Because the platters (the magazine) have to spin while an arm (the hammer) moves to find whatever is needed.
Winchester rifles are grand symbols of manual action, with a lot of moving parts.
Whereas any fully automatic weapon would be like an SSD: no moving parts needed to find (fire) whatever is needed.
Now this is a freaking answer! :smiling_face_with_open_mouth_smiling_eyes:
-
In 1973, IBM introduced the IBM 3340 "Winchester" disk drive and the 3348 data module, the first significant commercial use of low mass and low load heads with lubricated platters and the last IBM disk drive with removable media. This technology and its derivatives remained the standard through 2011. Project head Kenneth Haughton named it after the Winchester 30-30 rifle because it was planned to have two 30 MB spindles; however, the actual product shipped with two spindles for data modules of either 35 MB or 70 MB. - Wikipedia
-
@scottalanmiller Didn't know you used Wikipedia ^_^
-
@obsolesce said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller Didn't know you used Wikipedia ^_^
Mostly Wikipedia uses me
-
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
@obsolesce said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller Didn't know you used Wikipedia ^_^
Mostly Wikipedia uses me
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800 MiB/s and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near needing that level of redundancy. Use a pair of SSDs in R1 or a triplet in R5.
Yeah, getting new hardware is the time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
-
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800 MiB/s and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near needing that level of redundancy. Use a pair of SSDs in R1 or a triplet in R5.
Yeah, getting new hardware is the time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
I was going to post exactly this.
Your current array of 4 drives (I think you said you have 4) is at a pretty large risk compared to a 4-drive SSD array (again, the drives are at least one order of magnitude safer than most HDDs).
Sure it might only be another $400 for that extra disk to give you RAID 6 vs RAID 5, but why spend it?
One of the big things Scott harps at around here is correct spending. Personally I'm a bit surprised he hasn't brought this fact up already (OK he did a bit when I asked why one would look at SATA SSD RAID 10 instead of NVMe RAID 1 - costs).
This really does boil down to math, but the odds are never zero, and someone has to be the one who suffers the failure that beats the typical odds from time to time.
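The "boils down to math" point can be made concrete with a toy model of the window everyone worries about: the surviving mirror partner dying mid-rebuild. This assumes a constant failure rate, and the AFR and rebuild-time figures are illustrative assumptions, not measurements:

```python
import math

# Toy model: chance the surviving mirror partner fails during the
# rebuild window, assuming a constant (exponential) failure rate.
# AFR and rebuild-hour values below are assumptions for illustration.

def p_partner_fails_during_rebuild(afr, rebuild_hours):
    """Convert an annualized failure rate (AFR) to an hourly rate,
    then return the probability of a failure within the rebuild window."""
    hourly_rate = -math.log(1 - afr) / (365 * 24)
    return 1 - math.exp(-hourly_rate * rebuild_hours)

# HDD: ~3% AFR, ~24 h rebuild for a large drive
p_hdd = p_partner_fails_during_rebuild(0.03, 24)

# SSD: ~0.5% AFR, ~2 h rebuild
p_ssd = p_partner_fails_during_rebuild(0.005, 2)

print(p_hdd, p_ssd)  # SSD risk is far smaller, but never zero
```

The per-rebuild probabilities come out tiny either way, which is the point: the math favors you, but across enough arrays and enough rebuilds, somebody draws the short straw.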
-
R1 is definitely the best choice if you can do it. Get up to big enough drives and just get two.
-
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
R1
I like how you abbreviated the abbreviation there and saved two characters of redundant bandwidth!
:thumbs_up: :thumbs_up: :thumbs_up:
-
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
R1
I like how you abbreviated the abbreviation there and saved two characters of redundant bandwidth!
:thumbs_up: :thumbs_up: :thumbs_up:
I took the time to document RAID notation years ago
-
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
OBR5 is the standard if you are going to be using an SSD
A URE isn't the only failure or corruption mode on an SSD. You can have drives that are not dead, but that you want to shoot (firmware acting squirrelly and you get 500 ms latency). Also, 16TB SSDs that have deduplication and other data services in front of them can take a LONG TIME to rebuild (making that 7+1 a non-fun rebuild).
Throw in people using cheap TLC and QLC (crap write speed and latency once the DRAM and SLC buffers are exhausted) and I wouldn't say as a rule that RAID 5 for traditional RAID groups of SSDs is always a good idea. If you have an SDS layer that wide-stripes across multiple servers and limits the URE domain to an individual object, this is a bit more controlled. If I have a small log file that writes in a circle a lot (my Cassandra/Redis systems), erasure codes may not be worth it given the volume of ingestion.
I'm a bigger fan of RAID 5 on SSD in systems where I can pick and choose my RAID level on a single object, LUN, etc., so I can break up the write outliers that are small.
-
@dashrender said in SAS SSD vs SAS HDD in a RAID 10?:
This really does boil down to math, but the odds are never zero, and someone has to be the one who suffers the failure that beats the typical odds from time to time.
Human error tends to be the biggest cause. People go to replace a drive while a rebuild is going on and swap the wrong drive.
-
@storageninja said in SAS SSD vs SAS HDD in a RAID 10?:
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
OBR5 is the standard if you are going to be using an SSD
A URE isn't the only failure or corruption mode on an SSD. You can have drives that are not dead, but that you want to shoot (firmware acting squirrelly and you get 500 ms latency). Also, 16TB SSDs that have deduplication and other data services in front of them can take a LONG TIME to rebuild (making that 7+1 a non-fun rebuild).
True... yet in this case we are discussing a 2TB array.
Throw in people using cheap TLC and QLC (crap write speed and latency once the DRAM and SLC buffers are exhausted) and I wouldn't say as a rule that RAID 5 for traditional RAID groups of SSDs is always a good idea. If you have an SDS layer that wide-stripes across multiple servers and limits the URE domain to an individual object, this is a bit more controlled. If I have a small log file that writes in a circle a lot (my Cassandra/Redis systems), erasure codes may not be worth it given the volume of ingestion.
I didn't state this was a rule, just a general starting point.
I'm a bigger fan of RAID 5 on SSD in systems where I can pick and choose my RAID level on a single object, LUN, etc., so I can break up the write outliers that are small.
-
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
R1
I like how you abbreviated the abbreviation there and saved two characters of redundant bandwidth!
:thumbs_up: :thumbs_up: :thumbs_up:
I took the time to document RAID notation years ago
:grinning_face_with_smiling_eyes: I think you made that up all by yourself :winking_face:
-
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800 MiB/s and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near needing that level of redundancy. Use a pair of SSDs in R1 or a triplet in R5.
Yeah, getting new hardware is the time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
For our current setup, we have 5 drives total: 4 drives in a RAID5 plus 1 dedicated drive as a hot-spare. We have about 1.6 TB total storage, 100GB of which is for Windows Server. The rest of the storage space is for our SQL database and it is nearing 90% full. I am looking to add another 2 or so GB of storage on top of that after migration.
The reason I wanted to go with RAID 6 in an SSD setup is simply because it offers more protection than RAID 5. I want to eliminate outages as much as humanly possible, and I want to avoid having to restore from backups if at all possible.
I guess I could put two 4TB SSDs in a RAID 1 but there doesn't seem to be an SSD of that capacity as an option while customizing the R740 I am building.
EDIT: Well it looks like I have the "3.84TB SSD SAS Mix Use 12Gbps 512n" as an option but that is over $4,000. I can compare total prices here in a bit but still, I might just prefer a RAID 6 unless there's a huge savings.
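When weighing R1 against R6 here, the usable-capacity arithmetic is worth laying out. A quick sketch; the drive sizes echo ones quoted in this thread, and the 1.92 TB R5 option is a purely hypothetical example:

```python
# Usable capacity for the candidate layouts being compared.
# Drive sizes echo the thread's examples; 1.92 TB is hypothetical.

def usable_tb(level, drives, size_tb):
    if level == "R1":
        return size_tb                   # mirrored pair: one drive's worth
    if level == "R5":
        return (drives - 1) * size_tb    # one drive's worth of parity
    if level == "R6":
        return (drives - 2) * size_tb    # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

print(usable_tb("R1", 2, 3.84))   # 3.84 TB from 2x 3.84 TB
print(usable_tb("R6", 4, 1.6))    # ~3.2 TB from 4x 1.6 TB
print(usable_tb("R5", 3, 1.92))   # 3.84 TB from 3x 1.92 TB (hypothetical)
```

So a two-drive R1 and a four-drive R6 land in roughly the same usable range here; the comparison then comes down to per-drive price and how much rebuild exposure you're willing to accept.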
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800 MiB/s and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near needing that level of redundancy. Use a pair of SSDs in R1 or a triplet in R5.
Yeah, getting new hardware is the time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
For our current setup, we have 5 drives total: 4 drives in a RAID5 plus 1 dedicated drive as a hot-spare. We have about 1.6 TB total storage, 100GB of which is for Windows Server. The rest of the storage space is for our SQL database and it is nearing 90% full. I am looking to add another 2 or so GB of storage on top of that after migration.
The reason I wanted to go with RAID 6 in an SSD setup is simply because it offers more protection than RAID 5. I want to eliminate outages as much as humanly possible, and I want to avoid having to restore from backups if at all possible.
I guess I could put two 4TB SSDs in a RAID 1 but there doesn't seem to be an SSD of that capacity as an option while customizing the R740 I am building.
You only want to add 2GB of capacity or 2TB of capacity?