Server 2012 R2 Storage Spaces versus Hardware RAID, how do you decide?
-
@MattSpeller said:
This thread is giving me a hardware stiffy, I need to figure out how to do this for a living.
I recently got an offer to do this full time for a living. Had to turn it down, though.
-
@creayt said:
The IOs/sec seem terrible with both options. I think these drives are supposed to do 100,000 EACH, and both benchmarks pull less than 50,000.
You're right there, have you tried a larger test file?
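As a quick sanity check on that gap, the math is just measured IOPS divided by the aggregate per-drive spec. A minimal sketch, using the ~100,000-IOPS-per-drive figure quoted above and an illustrative 50,000 measured result (not the actual benchmark output):

```python
# Rough IOPS sanity check. The drive spec comes from the thread; the
# measured figure below is an illustrative placeholder.
DRIVE_SPEC_IOPS = 100_000  # vendor-rated read IOPS per SSD (assumed)

def aggregate_ceiling(drive_count: int, spec_iops: int = DRIVE_SPEC_IOPS) -> int:
    """Theoretical best case: every drive serving reads in parallel."""
    return drive_count * spec_iops

def efficiency(measured_iops: float, drive_count: int) -> float:
    """Fraction of the theoretical ceiling the benchmark achieved."""
    return measured_iops / aggregate_ceiling(drive_count)

# Example: ~50,000 measured IOPS against a 4-drive pool
print(f"{efficiency(50_000, 4):.1%}")  # 12.5% of the theoretical ceiling
```

A result that far under the ceiling usually points at the test setup (file size, queue depth, threads) rather than the array itself, which is why a larger test file is worth trying.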
-
From these numbers, the hardware RAID is coming back with better IOPS, better throughput and lower latency!
-
@scottalanmiller said:
I recently got an offer to do this full time for a living. Had to turn it down, though.
Is there a way to apprentice for this kind of thing? I need beautiful bleeding edge hardware in my life very badly.
-
@scottalanmiller said:
From these numbers, the hardware RAID is coming back with better IOPS, better throughput and lower latency!
The hardware RAID is a 6-drive OBR10, and this is a read test, so it's actually losing pretty hard with the exception of a few latency anomalies ( average is 0 for both ), no? Reading from 6 drives versus 4. I haven't run the write tests yet.
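The drive-count point can be made concrete by normalizing per drive: on a read workload both a RAID 10 set and a mirrored Space can service requests from every member, so dividing measured IOPS by member count puts a 6-drive and a 4-drive array on equal footing. A sketch with placeholder numbers (not the real results):

```python
# Per-drive normalization for read tests. On reads, RAID 10 and a
# mirrored Storage Space can serve I/O from every member drive.
# The IOPS figures below are placeholders, not the thread's benchmarks.
def per_drive_read_iops(measured_iops: float, drive_count: int) -> float:
    return measured_iops / drive_count

hw_raid = per_drive_read_iops(48_000, 6)  # 6-drive OBR10 (placeholder)
space = per_drive_read_iops(44_000, 4)    # 4-drive Space (placeholder)
print(f"hardware: {hw_raid:,.0f} IOPS/drive, space: {space:,.0f} IOPS/drive")
```

With numbers like these, the array with lower raw IOPS can still be the stronger performer per drive, which is the "losing pretty hard" argument in a formula.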
-
@MattSpeller said:
@scottalanmiller said:
I recently got an offer to do this full time for a living. Had to turn it down, though.
Is there a way to apprentice for this kind of thing? I need beautiful bleeding edge hardware in my life very badly.
So do I, so do I.
I was so close to pulling the trigger on an MSI Stealth Pro w/ a 5th-gen i7 quad last night, but then I remembered what you said about Skylake being around the corner. This Radeon-5750-running-3-1440p-monitors bullshit is really killing my experience.
-
@creayt That'd be a nice machine regardless
-
@MattSpeller said:
@creayt That'd be a nice machine regardless
Mostly, while I become legit w/ system building and overclocking, I want a large-screen laptop that can drive 3 screens at 60Hz, so that one seemed excellent for the price ( $1699 ). But it just seems self-indulgent to pick up a semi-cutting-edge-ish laptop like that when the new proc generation is less than 2 months away. Especially if they reduce heat substantially; this is a thin 17" workstation that probably gets pretty hot, so probably worth the wait twice over. It's running a GTX 970M 3GB I think, which w/ the quad-core proc in that thin of a shell probably gets pretty steamy.
-
@scottalanmiller said:
From these numbers, the hardware RAID is coming back with better IOPS, better throughput and lower latency!
Taking the threads up past the default 2 to 16 ( it's a dual-proc octo-core so I chose 16, correct me if that's a poor choice ) interestingly gets the numbers almost exactly the same. Except the hardware RAID had a latency that was about 50% worse than the worst latency the Space had, and again this is 6 drives ( hardware ) versus 4.
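One way to reason about the thread choice: what the array actually sees is total outstanding I/O, which in SQLIO is the thread count (`-t`) multiplied by outstanding requests per thread (`-o`). A minimal sketch of that relationship, with the per-thread depth as an assumed example value:

```python
# Total in-flight I/O presented to the array is threads multiplied by
# outstanding requests per thread (SQLIO's -t and -o switches).
def total_queue_depth(threads: int, outstanding_per_thread: int) -> int:
    return threads * outstanding_per_thread

# 16 threads (one per physical core on a dual-socket octo-core box)
# with an assumed 8 outstanding I/Os each keeps 128 requests in flight.
print(total_queue_depth(16, 8))  # 128
```

So matching threads to physical cores is a reasonable default; past the point where the queue depth saturates the drives, adding more threads mostly just moves latency around, which fits the results converging.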
-
@creayt metal coffee mug + the fan blowing hot air out the side...
-
Running Crystals then will do some write IO tests.
-
The Storage Space seemed to beat the hardware in Crystal when considering disk quantity.
-
Hardware RAID appears to be walloping the Space w/ the SQLIO tool for write testing so far. Results to follow.
-
8 thread writes @ 8k
16 thread writes @ 64k
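The two block sizes above exercise different limits: small blocks stress IOPS, large blocks stress bandwidth, tied together by throughput = IOPS × block size. A sketch with illustrative IOPS figures (not the actual test output):

```python
# Throughput from IOPS and block size: MB/s = IOPS * block_bytes / 2**20.
# The IOPS values below are illustrative, not the thread's results.
def throughput_mb_s(iops: float, block_kb: int) -> float:
    return iops * block_kb * 1024 / 2**20

print(throughput_mb_s(40_000, 8))   # 8 KB writes at 40k IOPS -> 312.5 MB/s
print(throughput_mb_s(10_000, 64))  # 64 KB writes at 10k IOPS -> 625.0 MB/s
```

This is why the 64k run can show lower IOPS yet far higher MB/s than the 8k run on the same array.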
-
Ok, who dares me to install 2012 on OBR ZERO just to run some Crystal and see what she can do?
-
Oh I see, different drive config.
-
@MattSpeller said:
@scottalanmiller said:
I recently got an offer to do this full time for a living. Had to turn it down, though.
Is there a way to apprentice for this kind of thing? I need beautiful bleeding edge hardware in my life very badly.
It doesn't pay much. I did it as a contract job for a while. We did high-end (many times custom) servers for GE, the Coast Guard, mining companies, the food industry, etc. back then (2007 or 2008). We were doing servers with 48-100 cores, double that in threads with hyperthreading, and around 512 GB of RAM. Some SSD stuff too, though the SSDs were a bit unreliable back then.
-
So the RAID controller had a subtle, ambiguous setting available to switch it from PCIe 2 mode to PCIe 3 mode ( though it was labeled something more cryptic ). Simply enabling it made it jump from this:
to this:
Thank god for iDRACs.
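The size of that jump is plausible from the link math alone: PCIe 2.0 runs 5 GT/s with 8b/10b encoding (~500 MB/s per lane, per direction), while PCIe 3.0 runs 8 GT/s with 128b/130b (~985 MB/s per lane). A sketch assuming a typical x8 controller slot:

```python
# Approximate usable bandwidth per PCIe lane, per direction.
# PCIe 2.0: 5 GT/s, 8b/10b encoding  -> 500 MB/s per lane.
# PCIe 3.0: 8 GT/s, 128b/130b encoding -> ~985 MB/s per lane.
LANE_MB_S = {
    "pcie2": 5e9 * 8 / 10 / 8 / 1e6,   # 500.0
    "pcie3": 8e9 * 128 / 130 / 8 / 1e6,  # ~984.6
}

def slot_bandwidth_mb_s(gen: str, lanes: int = 8) -> float:
    """Total one-direction bandwidth for an x8 RAID controller slot (assumed width)."""
    return LANE_MB_S[gen] * lanes

print(f"{slot_bandwidth_mb_s('pcie2'):.0f} MB/s vs "
      f"{slot_bandwidth_mb_s('pcie3'):.0f} MB/s")
```

Roughly doubling the slot's ceiling would explain the controller suddenly having headroom it didn't before.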
-
That's awesome, good testing.
-
The RAID card is PCIe. Anyone familiar w/ "Memory Mapped I/O above 4GB", as seen in the pic below? Tempted to enable it because it seems like it would only benefit the RAID ( the only PCIe device in the server ).