RAID Performance Calculators
-
How many IOPS do you need? Assuming you're not in an IOPS deficit now, DPACK should tell you that too. But if you are in a deficit today, it will be much harder to know.
If you have the time and resources, you could see about throwing an SSD in a system, loading it up with your workload, and seeing what DPACK tells you then...
-
With QD1 on a 12-disk RAID 5 array I'm looking at 48,000 IOPS.
If I use QD32 on a 12-disk RAID 5 array I'm looking at 525,600 IOPS.
Can someone clarify this?
-
@DustinB3403 said:
In doing the math, I have 1 remaining question. Should I use QD1 or QD32 read/write performance markers?
That's tough. That you can only determine by measuring your actual usage, and in reality you need a big blend of numbers. I'd start by getting a number from both to provide a range of reasonable possibilities.
-
@DustinB3403 said:
With QD1 on a 12-disk RAID 5 array I'm looking at 48,000 IOPS.
If I use QD32 on a 12-disk RAID 5 array I'm looking at 525,600 IOPS.
Can someone clarify this?
You'll likely be somewhere very much in the middle. These are roughly the best-case and worst-case numbers on a very large curve.
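To make that concrete, here is a rough sketch of where a range like that comes from. The per-drive figures below are hypothetical spec-sheet numbers chosen only to reproduce the 48,000 and 525,600 totals above; a real workload lands somewhere between the two extremes:

```python
# Rough sketch: the array totals are just per-drive spec-sheet numbers
# multiplied by the drive count. QD1 is the worst case (one outstanding
# I/O at a time), QD32 the best case (a deep queue keeping every flash
# channel busy).

DRIVES = 12

# Hypothetical per-drive SSD figures chosen to reproduce the totals above.
QD1_PER_DRIVE = 4_000     # low queue depth
QD32_PER_DRIVE = 43_800   # deep queue

low = DRIVES * QD1_PER_DRIVE
high = DRIVES * QD32_PER_DRIVE

print(f"Array read IOPS range: {low:,} (QD1) to {high:,} (QD32)")
# Array read IOPS range: 48,000 (QD1) to 525,600 (QD32)
```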
-
@Dashrender We likely need very few IOPS. We don't run any intensive applications; it's mostly just network shares and domain functions.
-
One of the secrets of IOPS, and of many things in IT, is that there is no real answer in terms of an actual number; the numbers are massive estimates. What you should actually get is a big curve or 3D wave that represents IOPS under different conditions.
-
Should I really be concerned about this?
The goal is to run and host VMs and network share data off of two XenServer hosts. Or should I simply state that "on the low end, we'd see IOPS at 48K to 525K, which is still mountains faster than the SR disks we have now in standalone servers"?
-
@DustinB3403 said:
@Dashrender We likely need very few IOPS. We don't run any intensive applications; it's mostly just network shares and domain functions.
What is your current IOPS availability? Are you having drive-related issues today? If not, then as long as you match the current number (nearly impossible not to blow it away with SSDs) you should be golden.
If I have 8 SAS 15K drives in RAID 10 today and replace them with 8 SSDs in RAID 5, I personally wouldn't even look at IOPS numbers, as they will be at least 10x more than they were before, and probably 100-1000x more.
-
@Dashrender sadly we're on a RAID 5 SR array at the moment.
I'm just looking for more "ammunition" for this proposal.
-
Then your gain will be even greater!
-
@DustinB3403 said:
Should I really be concerned about this?
You need to be concerned that you have "enough IOPS." But realistically you are looking at overrunning your controller with that many SSDs. So.... is it reasonable to worry that your smallish company's needs will be in excess of the IOPS possible from a high-end enterprise RAID controller?
No, not reasonable. You would know if you were doing something super crazy that would require special-case storage. In the real world, small companies of, say, 500 and fewer users have been able to function just fine off of large RAID 6 spinning rust arrays for years. Just moving from 10K to 15K drives can be a jump beyond what is needed. And then moving from R6 to R10 gives a big leap. A large R10 array of 10K drives is enough for nearly any company.
Having a 12-disk R5 SSD array is so many orders of magnitude faster than even a very large, very fast spinning disk array that it is effectively impossible that you have a need for storage of that performance magnitude. If you did, you would be dysfunctional today, right?
-
@scottalanmiller said:
@DustinB3403 said:
Should I really be concerned about this?
You need to be concerned that you have "enough IOPS." But realistically you are looking at overrunning your controller with that many SSDs. So.... is it reasonable to worry that your smallish company's needs will be in excess of the IOPS possible from a high-end enterprise RAID controller?
No, not reasonable. You would know if you were doing something super crazy that would require special-case storage. In the real world, small companies of, say, 500 and fewer users have been able to function just fine off of large RAID 6 spinning rust arrays for years. Just moving from 10K to 15K drives can be a jump beyond what is needed. And then moving from R6 to R10 gives a big leap. A large R10 array of 10K drives is enough for nearly any company.
Having a 12-disk R5 SSD array is so many orders of magnitude faster than even a very large, very fast spinning disk array that it is effectively impossible that you have a need for storage of that performance magnitude. If you did, you would be dysfunctional today, right?
Good point.
If we needed it today, we'd already be aware of it. (Headed to lunch, thanks for the input.)
-
@Dashrender said:
If I have 8 SAS 15K drives in RAID 10 today and replace them with 8 SSDs in RAID 5, I personally wouldn't even look at IOPS numbers, as they will be at least 10x more than they were before, and probably 100-1000x more.
Exactly. What @Dashrender is talking about here, stated as an example, is the concept of using relative decision making rather than absolute decision making.
What I mean is: it is effectively impossible to determine the absolute performance of the different solutions, but you can pretty easily determine which one is better, and more or less by what degree.
So how good is the R5 SSD array? Who knows. But how much better is it? That we can roughly figure out.
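As a sketch of that relative approach: the per-drive figures below are assumed ballparks, not measurements, and that is exactly the point; the ratio is meaningful even when the absolute numbers are not:

```python
# Relative decision making: compare the two options by ratio using
# rough per-drive estimates. The absolutes are guesses; the ratio
# between them is far more trustworthy.

SAS_15K_IOPS = 200     # assumed ballpark for one 15K spindle
SSD_IOPS = 40_000      # assumed ballpark for one SATA/SAS SSD at depth

old_array = 8 * SAS_15K_IOPS   # 8x 15K SAS (ignoring RAID level)
new_array = 8 * SSD_IOPS       # 8x SSD (ignoring RAID level)

print(f"Old: ~{old_array:,} read IOPS, new: ~{new_array:,} read IOPS")
print(f"Roughly {new_array // old_array}x better")   # Roughly 200x better
```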
-
Then the question becomes, is it worth the expense of going to SSD?
Even buying consumer drives (say a 480GB drive for $150 yesterday), maxing that out at 8 drives in RAID 5 gives me 3.3 TB and insane IOPS. More than my controller can handle.
But is that the smart spend? In my case, probably not. I can get 2 TB NL-SAS drives for $80. Put them in RAID 6, giving me 12 TB usable and roughly (8 * 70 IOPS) 560 IOPS (something less due to RAID 6). In my case, backup storage, this is probably enough IOPS, and I'd be at 4 times the storage and half the cost.
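A quick sketch of that comparison, using only the prices and the 70 IOPS per-drive estimate quoted above (his figures, not measured values):

```python
# 8x 480GB consumer SSD in RAID 5 vs 8x 2TB NL-SAS in RAID 6,
# compared on usable capacity, cost, and rough read IOPS.

N = 8

# Option A: consumer SSDs in RAID 5 (one drive's worth of parity)
ssd_usable_tb = (N - 1) * 0.48   # ~3.36 TB (the post rounds to 3.3)
ssd_cost = N * 150               # $1,200

# Option B: NL-SAS in RAID 6 (two drives' worth of parity)
nlsas_usable_tb = (N - 2) * 2    # 12 TB
nlsas_cost = N * 80              # $640
nlsas_read_iops = N * 70         # 560, and less in practice on R6

print(f"SSD R5:    {ssd_usable_tb:.2f} TB usable for ${ssd_cost:,}")
print(f"NL-SAS R6: {nlsas_usable_tb} TB usable for ${nlsas_cost}, "
      f"~{nlsas_read_iops} read IOPS")
```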
-
@Dashrender said:
Then the question becomes, is it worth the expense of going to SSD?
But don't forget that SSDs have a power-saving advantage to offset their cost too. Even at only $50 a drive, that can be a big percentage of the difference in per-drive costs.
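As a back-of-envelope sketch of where a per-drive figure like that could come from; every input here is an assumption for illustration (wattage difference, electricity price, service life, cooling multiplier):

```python
# Rough per-drive power savings, SSD vs spinning disk.
# All inputs are illustrative assumptions, not measurements.

WATTS_SAVED = 6           # assumed: ~8W spindle vs ~2W SSD under load
HOURS_PER_YEAR = 24 * 365
KWH_PRICE = 0.12          # assumed $/kWh
YEARS = 5                 # assumed service life
COOLING_FACTOR = 2        # rough rule of thumb: each watt saved in the
                          # rack saves about another watt of cooling

kwh_saved = WATTS_SAVED * HOURS_PER_YEAR * YEARS / 1000
savings = kwh_saved * KWH_PRICE * COOLING_FACTOR
print(f"~${savings:.0f} saved per drive over {YEARS} years")  # ~$63
```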
-
@Dashrender said:
But is that the smart spend? In my case, probably not. I can get 2 TB NL-SAS drives for $80. Put them in RAID 6, giving me 12 TB usable and roughly (8 * 70 IOPS) 560 IOPS (something less due to RAID 6). In my case, backup storage, this is probably enough IOPS, and I'd be at 4 times the storage and half the cost.
R6 has a 6x write hit. But NL-SAS drives should deliver a lot more than 70 IOPS, more like 140 IOPS. NL-SAS is faster than 7200 RPM SATA, which is in turn 50% faster than 5400 RPM SATA.
-
With good queue depth you can get well over 100 IOPS from a WD Red or WD Green, which are 5400 RPM SATA drives.
So if 5400 RPM SATA can push 120 IOPS, 7200 RPM SATA should close in on 180 IOPS. The same spindle on SAS generally gets a 5% - 20% improvement over that, so it's reasonable to see 200 IOPS from NL-SAS.
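That chain of estimates is simple enough to write down directly; the multipliers are the rough rules of thumb from above, not measured values:

```python
# Per-drive IOPS estimate chain, using the rules of thumb above.
sata_5400 = 120                  # achievable with good queue depth
sata_7200 = sata_5400 * 1.5      # 7200 RPM is ~50% faster than 5400 RPM
nl_sas_low = sata_7200 * 1.05    # SAS interface: +5% ...
nl_sas_high = sata_7200 * 1.20   # ... to +20% over the same spindle

print(f"7200 RPM SATA: ~{sata_7200:.0f} IOPS")                 # ~180
print(f"NL-SAS: ~{nl_sas_low:.0f} to {nl_sas_high:.0f} IOPS")  # ~189 to 216
```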
-
So 8 * 150, as a more reasonable starting point, is 1,200 RIOPS from an OBR6 array. But only 200 WIOPS. So your blend is important.
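For reference, a common approximation behind those numbers: raw array IOPS scaled by the read fraction, plus the write fraction divided by the RAID 6 write penalty:

```python
# Blended IOPS for an 8-drive OBR6 array at 150 IOPS per drive.
DRIVES = 8
PER_DRIVE_IOPS = 150
R6_WRITE_PENALTY = 6   # each logical write costs ~6 disk operations

raw = DRIVES * PER_DRIVE_IOPS   # 1,200

def blended_iops(read_fraction: float) -> float:
    """Functional IOPS for a given read/write mix on the R6 array."""
    write_fraction = 1 - read_fraction
    return raw * (read_fraction + write_fraction / R6_WRITE_PENALTY)

print(f"100% read:  {blended_iops(1.0):,.0f} IOPS")  # 1,200 RIOPS
print(f"100% write: {blended_iops(0.0):,.0f} IOPS")  # 200 WIOPS
print(f"70/30 mix:  {blended_iops(0.7):,.0f} IOPS")  # 900
```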
-
And, of course, this is from the array itself. The RAID controller should have a RAM cache, and that can, depending on the workload, make a truly massive difference, especially if you have 1GB or more.