Disk Speed and IOPS Benchmarking Questions
-
You tested a RAID 0 array? Was it your plan to use RAID 0?
-
What applications are you going to run? Things like ERP systems often have specific I/O requirements for optimal configuration. Maybe that will tell you more about whether you are in line with what is needed.
-
Oh yeah, that's a mistake. WTF. I fixed it.
-
@NetworkNerd said:
What applications are you going to run? Things like ERP systems often have specific I/O requirements for optimal configuration. Maybe that will tell you more about whether you are in line with what is needed.
My applications are pretty generic.
I am more interested in learning HOW to test for this kind of stuff, and what the results mean. And, in general, how to tune a RAID controller for optimal performance.
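For a concrete starting point, here is roughly the kind of probe I have been playing with (a minimal sketch, Linux-only, single-threaded at queue depth 1, so expect far lower numbers than Iometer-style tools that drive deep queues; the scratch-file path is just a placeholder):

```python
# Random 4K read IOPS probe. O_DIRECT bypasses the page cache so the
# device, not RAM, is what gets measured. Linux-only (os.O_DIRECT).
import mmap, os, random, time

TEST_PATH = "/tmp/iops_test.bin"   # placeholder; put this on the array under test
BLOCK = 4096                       # 4 KiB, a common random-I/O size
FILE_SIZE = 256 * 1024 * 1024      # 256 MiB scratch file
SECONDS = 10

# Build the scratch file once, in 4 MiB chunks to keep memory use low.
if not os.path.exists(TEST_PATH) or os.path.getsize(TEST_PATH) < FILE_SIZE:
    with open(TEST_PATH, "wb") as f:
        for _ in range(FILE_SIZE // (4 * 1024 * 1024)):
            f.write(os.urandom(4 * 1024 * 1024))

fd = os.open(TEST_PATH, os.O_RDONLY | os.O_DIRECT)
buf = mmap.mmap(-1, BLOCK)         # anonymous mmap is page-aligned, as O_DIRECT requires

ops = 0
deadline = time.monotonic() + SECONDS
while time.monotonic() < deadline:
    offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK  # block-aligned random offset
    os.preadv(fd, [buf], offset)
    ops += 1
os.close(fd)

print(f"~{ops / SECONDS:.0f} random 4K read IOPS (QD1, one thread)")
```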
-
@BRRABill Not that I can really tell you, but tuning has as much to do with your application as it does with the disks. As mentioned elsewhere, file size relative to stripe size can make a huge impact.
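As a rough illustration of why the two interact (a toy sketch, assuming a plain striped layout with a made-up 64 KiB stripe unit and no parity):

```python
# Which disk services a given byte offset in a simple striped array?
STRIPE_UNIT = 64 * 1024   # 64 KiB per-disk chunk; controllers vary
DISKS = 4

def disk_for_offset(offset: int) -> int:
    """Index of the disk holding this byte offset."""
    return (offset // STRIPE_UNIT) % DISKS

# A 4 KiB file sits inside one stripe unit, so a single disk does all the work.
print(disk_for_offset(0), disk_for_offset(4095))                          # 0 0
# A 256 KiB sequential read touches every stripe unit: all four disks help.
print(sorted({disk_for_offset(o) for o in range(0, 256 * 1024, 4096)}))  # [0, 1, 2, 3]
```

Small requests keep one spindle busy, while a request spanning a full stripe gets every disk working at once, which is why your average file size matters as much as the stripe setting itself.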
-
@Dashrender said:
@BRRABill Not that I can really tell you, but tuning has as much to do with your application as it does with the disks. As mentioned elsewhere, file size relative to stripe size can make a huge impact.
Yup
I have the luxury of sitting on this log server project for a while so I'm going to document the differences between stripe sizes and other RAID options.
TBH I've never built a log server before, but I'm willing to bet that it's a lot of small files and tons of I/O, so that's what I'll optimize for initially. Something like you'd set up for a big database with lots of tiny entries.
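To get a baseline before I start tuning, I'll probably begin with something like this (a rough sketch; LOG_DIR is a hypothetical mount point on the array under test):

```python
# Simulate a log-ish workload: create lots of tiny files and see what
# the array sustains. Metadata-heavy, which is exactly the pain point.
import os, time

LOG_DIR = "/mnt/logtest"    # placeholder; point at the volume under test
COUNT = 2000
PAYLOAD = b"x" * 512        # a ~512-byte log record

os.makedirs(LOG_DIR, exist_ok=True)
start = time.monotonic()
for i in range(COUNT):
    with open(os.path.join(LOG_DIR, f"entry_{i:06d}.log"), "wb") as f:
        f.write(PAYLOAD)
elapsed = time.monotonic() - start
print(f"{COUNT / elapsed:.0f} small files/sec")
```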
Setup: PE2900, single CPU, 2 GB RAM, 4x 73 GB 15K RPM, 6x 1 TB 7200 RPM
OS is currently CentOS 7 so I can get some practice.
-
@scottalanmiller Have you done any writeups on stripe size? I couldn't find anything on your site.
-
Question 2:
Why has no one answered Question 1? LOL.
Question 2 (FOR REALS):
Does stripe size even matter with SSD?
-
QUESTION 3:
How does the controller pick what "policy" is set up for each drive?
For example, the SATA 7200 RPM drives are set to "Write Through" for write policy, which DELL says:
"The controller sends a write-request completion signal only after the data is written to the disk. Write-through caching provides better data security than write-back caching, since the system assumes the data is available only after it has been safely written to the disk."
The SSD drives are set to "Write Back", which DELL says:
"The controller sends a write-request completion signal as soon as the data is in the controller cache but has not yet been written to disk. Write-back caching may provide improved performance since subsequent read requests can retrieve data more quickly from the cache than from the disk. However, data loss may occur in the event of a system failure which prevents that data from being written on a disk. Other applications may also experience problems when actions assume that the data is available on the disk."
Considering the H710 has battery backup, and the EDGE SSDs have power-loss circuitry, this should not be an issue, though, correct?
But it probably WOULD be an issue on the regular drives with no such power-loss circuitry?
Am I thinking about that correctly?
-
@BRRABill said:
This is the RAID1 array of the EDGE SSDs on the H710:
I don't mean to harp on this, but is it RAID 1 or RAID 10? RAID 1 has an implied expectation of only two disks (though some controllers do support more than two disks in a simple mirror set, for example, three fully mirrored drives).
With RAID 10, we know it's a minimum of four drives, but it could be many, many more.
Also, as Scott has pointed out, considering the reduction in risks and the lack of UREs in SSDs, RAID 5 is definitely an option these days.
-
@BRRABill said:
QUESTION 3:
How does the controller pick what "policy" is set up for each drive?
For example, the SATA 7200 RPM drives are set to "Write Through" for write policy, which DELL says:
"The controller sends a write-request completion signal only after the data is written to the disk. Write-through caching provides better data security than write-back caching, since the system assumes the data is available only after it has been safely written to the disk."
The SSD drives are set to "Write Back", which DELL says:
"The controller sends a write-request completion signal as soon as the data is in the controller cache but has not yet been written to disk. Write-back caching may provide improved performance since subsequent read requests can retrieve data more quickly from the cache than from the disk. However, data loss may occur in the event of a system failure which prevents that data from being written on a disk. Other applications may also experience problems when actions assume that the data is available on the disk."
Considering the H710 has battery backup, and the EDGE SSDs have power-loss circuitry, this should not be an issue, though, correct?
But it probably WOULD be an issue on the regular drives with no such power-loss circuitry?
Am I thinking about that correctly?
You are thinking about it the same way that I am. But when you need performance, it's not uncommon to use write-back even on spinning rust, as long as you have battery backup/flash backup on your RAID controller.
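You can feel what the ack point changes at the OS level, too (a loose analogy, not the controller cache itself: fsync after every write approximates write-through's "wait for stable media", plain buffered writes approximate write-back's "ack from cache"; the path is a placeholder):

```python
# Compare acks-from-cache vs acks-from-stable-media write rates.
import os, time

TEST_PATH = "/tmp/policy_test.bin"   # placeholder; use the volume under test
BLOCK = b"x" * 4096
COUNT = 1000

def writes_per_sec(sync_each: bool) -> float:
    fd = os.open(TEST_PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.monotonic()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
        if sync_each:
            os.fsync(fd)             # wait until the data is on stable storage
    rate = COUNT / (time.monotonic() - start)
    os.close(fd)
    return rate

print(f"write-back-ish (buffered):   {writes_per_sec(False):,.0f} writes/sec")
print(f"write-through-ish (fsync'd): {writes_per_sec(True):,.0f} writes/sec")
```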
-
@Dashrender said:
@BRRABill said:
This is the RAID1 array of the EDGE SSDs on the H710:
I don't mean to harp on this, but is it RAID 1 or RAID 10? RAID 1 has an implied expectation of only two disks (though some controllers do support more than two disks in a simple mirror set, for example, three fully mirrored drives).
With RAID 10, we know it's a minimum of four drives, but it could be many, many more.
Also, as Scott has pointed out, considering the reduction in risks and the lack of UREs in SSDs, RAID 5 is definitely an option these days.
No, it is a valid question.
It was going to be a RAID 5 array. I had planned on getting three 480GB SSDs and setting up RAID 5. However, xByte was out of that size, so they upgraded me to the 960GB SSDs. (What a great company!) 960GB is more than I needed anyway, so 1920GB would be waaaaay more than I needed! So after speaking with Scott, I decided to go back to a RAID 1 (1, not 10) and store the extra SSD on the shelf for later. The thought was that it would last for many years on the shelf, and if it wasn't needed now, why use it for no reason?
So it is TWO of the EDGE 960GB SSDs in a RAID1 (mirrored) array.
-
@Dashrender said:
You are thinking about it the same way that I am. But when you need performance, it's not uncommon to use write-back even on spinning rust, as long as you have battery backup/flash backup on your RAID controller.
But in the scenario where, say, a power supply or board dies, wouldn't you lose data?
Granted, that has a small chance of happening. In fact, before finding ML, I hadn't ever even heard of that as a possibility! (I asked ... if you have a UPS, why would you ever lose power? Oh, grasshopper!)
-
@BRRABill said:
@Dashrender said:
You are thinking about it the same way that I am. But when you need performance, it's not uncommon to use write-back even on spinning rust, as long as you have battery backup/flash backup on your RAID controller.
But in the scenario where, say, a power supply or board dies, wouldn't you lose data?
Granted, that has a small chance of happening. In fact, before finding ML, I hadn't ever even heard of that as a possibility! (I asked ... if you have a UPS, why would you ever lose power? Oh, grasshopper!)
No, because the data is kept alive by the battery backup/flash on the RAID card. When the system is brought back online, the first thing the RAID controller does, besides verify the array is good, is write all data in the cache to the drives.
Now, if the RAID controller dies, sure, you'll have data loss. But have you ever lost a RAID card? I haven't. Even in Scott's experience with thousands, if not tens of thousands, of servers with RAID cards, I don't think more than a handful have ever died.
-
@Dashrender said:
No, because the data is kept alive by the battery backup/flash on the RAID card. When the system is brought back online, the first thing the RAID controller does, besides verify the array is good, is write all data in the cache to the drives.
How long does it store it? As long as the battery lasts?
-
@BRRABill said:
Question 2 (FOR REALS):
Does stripe size even matter with SSD?
Absolutely it does, but the question is how much.
It may be less workload-dependent (average file size).
I expect it'll have a sweet spot around, or at a multiple of, its chunk size.
Chunk size... this is what happens when I post before coffee.
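Anyway, the empirical way to find that sweet spot is to sweep block sizes and watch where throughput flattens out (a sketch; the path is a placeholder for the volume under test):

```python
# Sequential-write throughput at several block sizes.
import os, time

TEST_PATH = "/tmp/sweep_test.bin"   # placeholder; use the array under test
TOTAL = 64 * 1024 * 1024            # write 64 MiB at each block size

for block in (4096, 16384, 65536, 262144, 1048576):
    payload = b"x" * block
    fd = os.open(TEST_PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.monotonic()
    for _ in range(TOTAL // block):
        os.write(fd, payload)
    os.fsync(fd)                    # flush once so the page cache doesn't hide the device
    elapsed = time.monotonic() - start
    os.close(fd)
    print(f"{block // 1024:>5} KiB blocks: {TOTAL / elapsed / 1e6:6.1f} MB/s")
```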
-
@BRRABill said:
@Dashrender said:
No, because the data is kept alive by the battery backup/flash on the RAID card. When the system is brought back online, the first thing the RAID controller does, besides verify the array is good, is write all data in the cache to the drives.
How long does it store it? As long as the battery lasts?
In the case of batteries, yes; in the case of flash, nearly forever.
Though the batteries will probably last for days if not a lot longer.
-
@BRRABill said:
I have a new server that has seen me go from an H310 to an H710. It's also seen a change from 7200 RPM SATA drives to EDGE SSDs.
I'd like to post the numbers that I got from testing and have some questions answered. I am sure this all makes sense somewhere; I'm just not sure where.
I'll post the numbers, and then my questions.
Hopefully this thread can bring about some configuration settings for anyone looking to configure their RAID cards optimally.
CrystalMark should be renamed to CrystalCrap.
Use Intel Iometer, Microsoft DiskSPD, or Oracle Vdbench instead.