Server 2012 R2 Storage Spaces versus Hardware RAID: how do you decide?
-
Does anyone use Spaces in production? On my recently deployed server, I'm getting semi-close read speeds and far superior write speeds (expected) from a 4x 850 Pro SSD RAID 0 using Storage Spaces, compared to a 6x identical-drive RAID 10 on the PERC H710P (1 GB cache). After some disappointing performance with the current RAID 10 setup, I'm trying to decide whether to do one big RAID 10 of all 10 SSDs or a 2-drive RAID 1 plus an 8-drive RAID 10 using Storage Spaces.
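For reference, a minimal PowerShell sketch of the 8-drive Storage Spaces side of that plan; the pool and friendly names here are assumptions, not anything from the actual box:

```powershell
# Minimal sketch, assuming 8 poolable SSDs. A 2-way mirror with 4 columns
# stripes writes across four mirror pairs, the Spaces equivalent of an
# 8-drive RAID 10.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SSDPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "DataSpace" `
    -ResiliencySettingName Mirror -NumberOfColumns 4 `
    -ProvisioningType Fixed -UseMaximumSize
```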
-
Wouldn't you saturate the connection long before it mattered anyway?
$0.02: I'd stick with the RAID controller; that's a damn good one, and the 1 GB model has battery backup too, IIRC.
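Rough back-of-envelope, with the per-drive and link numbers as assumptions rather than measurements:

```powershell
# Assumed ~500 MB/s sequential read per 850 Pro. Four of them in RAID 0
# already dwarf what a 1 GbE link (~118 MB/s practical) can move.
$drives       = 4
$perDriveMBps = 500                      # assumed per-drive sequential read
$arrayMBps    = $drives * $perDriveMBps  # ~2,000 MB/s aggregate
$gbeMBps      = 118                      # practical 1 GbE throughput
"{0} MB/s from the array vs {1} MB/s over 1 GbE" -f $arrayMBps, $gbeMBps
```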
-
@MattSpeller said:
Wouldn't you saturate the connection long before it mattered anyway?
$0.02: I'd stick with the RAID controller; that's a damn good one, and the 1 GB model has battery backup too, IIRC.
Flash, I think.
-
That's a big deal with Storage Spaces: how do you intend to deal with power loss? Software RAID traditionally depends on 100% reliable power to the machine, while any moderately decent hardware RAID card has special protections for this, because machines in this category are often subject to more dangerous power loads. If you go with software RAID, you take on a lot more responsibility at the human level for designing a system that won't risk corruption on power loss.
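If you do go the Spaces route anyway, the pool-level power-protection flag is the relevant knob. A hedged sketch, with the pool name assumed; the flag tells Spaces it may relax write caching safeguards, so it belongs only on a box with a UPS and power-loss-protected drives:

```powershell
# Assumption: a pool named "SSDPool" already exists.
# IsPowerProtected $true lets Spaces relax write-through/flush behavior on the
# promise that power loss can't eat in-flight writes. A UPS plus drives with
# power-loss protection is the prerequisite here, not a performance tweak.
Set-StoragePool -FriendlyName "SSDPool" -IsPowerProtected $true
```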
-
For science: have you benchmarked them both, and would you be willing to share the info?
-
@MattSpeller said:
Wouldn't you saturate the connection long before it mattered anyway?
$0.02: I'd stick with the RAID controller; that's a damn good one, and the 1 GB model has battery backup too, IIRC.
How would I calculate that?
The drives are connected through a PERC H710P Mini w/ 1 GB of cache.
-
Initial 2-thread IO benchmark using SQLIO.
Left is a 6-SSD OBR10, right is a 4-SSD Storage Space:
-
The IOs/sec seem terrible with both options; these drives are supposed to do 100,000 EACH, and both benchmarks pull less than 50,000. That said, I don't fully grasp IOPS yet, or how to test them correctly, so this may just be me being ignorant at the moment.
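For a random-IOPS test, something along these lines is the usual shape; the file path and sizes are assumptions, and SQLIO's defaults (2 threads, sequential) won't show peak IOPS:

```powershell
# Hedged example: 8 KB random reads, 16 threads, 32 outstanding IOs per thread,
# 60 seconds, OS buffering off (-BN), latency stats (-LS). SSD spec-sheet IOPS
# are quoted at high queue depth, so total outstanding IO (threads x -o) has
# to be high to get near the rating.
.\sqlio.exe -kR -frandom -b8 -t16 -o32 -s60 -BN -LS E:\testfile.dat
```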
-
This thread is giving me a hardware stiffy, I need to figure out how to do this for a living.
-
Crystal benchmarks coming shortly.
-
@creayt said:
The IOs/sec seem terrible with both options; these drives are supposed to do 100,000 EACH, and both benchmarks pull less than 50,000. That said, I don't fully grasp IOPS yet, or how to test them correctly, so this may just be me being ignorant at the moment.
IOPS are operations and not all operations are equal.
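Concretely: throughput is IOPS times IO size, so the same drive posts wildly different IOPS depending on the block size tested. A quick worked example:

```powershell
# 50,000 IOPS at 8 KB is about 390 MB/s; moving that same 390 MB/s as
# 64 KB IOs would only register ~6,250 IOPS.
$iops    = 50000
$blockKB = 8
$MBps    = $iops * $blockKB / 1024
"{0} IOPS x {1} KB = {2} MB/s" -f $iops, $blockKB, $MBps
```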
-
@MattSpeller said:
This thread is giving me a hardware stiffy, I need to figure out how to do this for a living.
I recently got an offer to do this full time for a living. Had to turn it down, though.
-
@creayt said:
The IOs/sec seem terrible with both options; these drives are supposed to do 100,000 EACH, and both benchmarks pull less than 50,000.
You're right there. Have you tried a larger test file?
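With a 1 GB controller cache, a test file that's too small gets served mostly from cache, and the numbers stop meaning anything. One hedged way to stage a bigger file (path and size are assumptions):

```powershell
# Create a 20 GB test file so the working set dwarfs the 1 GB controller cache.
# Size is in bytes (20 * 1024^3); run from an elevated prompt. Do one sqlio
# write pass (-kW) over it before read tests, since NTFS returns zeros for
# never-written regions without touching the disks.
fsutil file createnew E:\testfile.dat 21474836480
```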
-
From these numbers, the hardware RAID is coming back with better IOPS, better throughput and lower latency!
-
@scottalanmiller said:
I recently got an offer to do this full time for a living. Had to turn it down, though.
Is there a way to apprentice for this kind of thing? I need beautiful bleeding edge hardware in my life very badly.
-
@scottalanmiller said:
From these numbers, the hardware RAID is coming back with better IOPS, better throughput and lower latency!
The hardware RAID is a 6-drive OBR10 and this is a read test, so it's actually losing pretty hard, with the exception of a few latency anomalies (the average is 0 for both), no? It's reading from 6 drives versus 4. I haven't run the write tests yet.
-
@MattSpeller said:
@scottalanmiller said:
I recently got an offer to do this full time for a living. Had to turn it down, though.
Is there a way to apprentice for this kind of thing? I need beautiful bleeding edge hardware in my life very badly.
So do I, so do I.
I was so close to pulling the trigger on an MSI Stealth Pro w/ a 5th-gen quad-core i7 last night, but then I remembered what you said about Skylake being around the corner. This "Radeon 5750 driving three 1440p monitors" bullshit is really killing my experience.
-
@creayt That'd be a nice machine regardless
-
@MattSpeller said:
@creayt That'd be a nice machine regardless
Mostly, while I get legit at system building and overclocking, I want a large-screen laptop that can drive 3 screens at 60 Hz, so that one seemed excellent for the price ($1,699). But it just seems self-indulgent to pick up a semi-cutting-edge laptop like that when the new processor generation is less than 2 months away. Especially if they reduce heat substantially: this is a thin 17" workstation that probably runs hot, so it's probably worth the wait twice over. It's running a GTX 970M 3GB, I think, which, with the quad-core processor in that thin a shell, probably gets pretty steamy.
-
@scottalanmiller said:
From these numbers, the hardware RAID is coming back with better IOPS, better throughput and lower latency!
Interestingly, taking the threads up from the default 2 to 16 (it's a dual-proc octo-core, so I chose 16; correct me if that's a poor choice) gets the numbers almost exactly the same. Except that the hardware RAID had a latency about 50% worse than the worst latency the Space had, and again this is 6 drives (hardware) versus 4.
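Thread count by itself probably isn't the knob that matters most; total queue depth (threads times -o outstanding IOs per thread) is. A hedged sweep like this, reusing the earlier assumed test file, would show where each array stops scaling:

```powershell
# 16 threads with -o from 1 to 32 sweeps total queue depth from 16 to 512.
# Assumes sqlio.exe and E:\testfile.dat from the earlier posts.
foreach ($o in 1, 2, 4, 8, 16, 32) {
    .\sqlio.exe -kR -frandom -b8 -t16 -o$o -s30 -BN -LS E:\testfile.dat
}
```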