Does turning off the virtualization features make your CPU go faster for non-virtualized workloads?
-
@scottalanmiller said:
@creayt said:
Since it picks up all of the fumbles the PERC H710 makes: TRIM support, guaranteed per-drive overprovisioning, winning in Crystal benchmarks ( so far ).
You are just describing software RAID. Nothing special here.
I see. So, in general, is software RAID specifically for SSD deployments superior to a humble controller like a PERC H710P because of these features, then? Or should hardware RAID still be better independent of these things?
According to my understanding of AnandTech's review, you can completely transform performance on these 850 Pro drives, particularly by ensuring proper overprovisioning, which I'd seem to lose at this point w/ the hardware RAID.
http://www.anandtech.com/show/8216/samsung-ssd-850-pro-128gb-256gb-1tb-review-enter-the-3d-era/7
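To put rough numbers on the overprovisioning point, here's a sketch of the math (my assumptions, not AnandTech's exact figures): a "256 GB" consumer drive exposes 256 × 10⁹ bytes to the host but typically carries 256 GiB of raw NAND, so there's roughly 7% inherent spare area before you leave anything unpartitioned.

```python
# Rough sketch of SSD overprovisioning math. Assumptions (mine, illustrative):
# a "256 GB" drive exposes 256 * 10^9 bytes but carries 256 GiB of raw NAND,
# and OP% is computed as spare area divided by the space you actually use.
GB = 10**9
GiB = 2**30

def effective_op(label_gb, unpartitioned_fraction):
    raw_nand = label_gb * GiB                              # assumed raw capacity
    usable = label_gb * GB * (1 - unpartitioned_fraction)  # space you partition
    spare = raw_nand - usable                              # controller's scratch area
    return spare / usable

for frac in (0.0, 0.10, 0.25):
    print(f"{frac:>4.0%} left unpartitioned -> ~{effective_op(256, frac):.1%} effective OP")
```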
-
@scottalanmiller said:
If Ford renamed the Pinto the "Grand Tourer" would you just buy it? What if they added a new color scheme? This is the storage industry's Pinto with a new coat of paint and a new brand name. MS is making an effort, but we can't be blinded by a good advertising campaign and forget what it is underneath.
That seems to kind of dismiss the ability of software to be re-engineered to dramatically different effect. I know that I've gone back into an algorithm I created years earlier, tweaked, reorganized, and optimized it, and gotten 50000%+ better performance out of the update than the original had. Is that the wrong way to think about software RAID? Why couldn't Microsoft theoretically write some amazing code that made it way, way, way faster than it ever had been before?
-
@creayt said:
I see. So, in general, is software RAID specifically for SSD deployments superior to a humble controller like a PERC H710P because of these features, then? Or should hardware RAID still be better independent of these things?
Software RAID and hardware RAID only refer to where the RAID is implemented. But there are commonalities. Hardware RAID's purpose is twofold: one, to fix the problems with Windows software RAID, and two, to make things easy so that you don't need to be a storage expert.
With the exception of Windows software RAID, software RAID in any enterprise OS crushes hardware RAID and has since 2002 (when the 133 MHz FSB Pentium III was standard). Software RAID is faster and more powerful, but requires more work and knowledge. For the SMB market, where performance rarely matters and ease of use matters a lot, hardware RAID really wins. And, of course, anytime you run Windows you want hardware RAID because of the fragility of Windows software RAID.
But in the enterprise space (big iron servers), hardware RAID has never even existed. Hardware RAID has existed solely for the purpose of solving issues with Windows.
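To make "where the RAID is implemented" concrete, here's a toy sketch of what software RAID 1 boils down to: ordinary code on the host CPU duplicating writes across members. Files stand in for disks and the names are hypothetical; real implementations (Linux md, ZFS mirrors, Storage Spaces) live in the kernel's storage stack, but the core idea is this loop.

```python
# Toy software RAID 1: every write goes to every member, reads can be served
# by any surviving member. Purely illustrative; files stand in for disks.
import os

class ToyMirror:
    def __init__(self, member_paths, size=1024 * 1024):
        # Create the "disks" as fixed-size files.
        self.members = []
        for p in member_paths:
            f = open(p, "w+b")
            f.truncate(size)
            self.members.append(f)

    def write_block(self, offset, data):
        for m in self.members:          # the mirror: same write to every member
            m.seek(offset)
            m.write(data)
            m.flush()
            os.fsync(m.fileno())        # a real array must ensure durability too

    def read_block(self, offset, length):
        m = self.members[0]             # any member holds a full copy
        m.seek(offset)
        return m.read(length)

mirror = ToyMirror(["disk_a.img", "disk_b.img"])
mirror.write_block(0, b"hello raid")
print(mirror.read_block(0, 10))         # b'hello raid'
```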
-
@scottalanmiller said:
And we've seen people lose data from Storage Spaces failing, so this isn't just theory.
Ah, I see. That's kind of what I'm looking for. What happened to cause the data loss? Random failure?
-
@scottalanmiller said:
@creayt said:
I see. So, in general, is software RAID specifically for SSD deployments superior to a humble controller like a PERC H710P because of these features, then? Or should hardware RAID still be better independent of these things?
Software RAID and hardware RAID only refer to where the RAID is implemented. But there are commonalities. Hardware RAID's purpose is twofold: one, to fix the problems with Windows software RAID, and two, to make things easy so that you don't need to be a storage expert.
With the exception of Windows software RAID, software RAID in any enterprise OS crushes hardware RAID and has since 2002 (when the 133 MHz FSB Pentium III was standard). Software RAID is faster and more powerful, but requires more work and knowledge. For the SMB market, where performance rarely matters and ease of use matters a lot, hardware RAID really wins. And, of course, anytime you run Windows you want hardware RAID because of the fragility of Windows software RAID.
But in the enterprise space (big iron servers), hardware RAID has never even existed. Hardware RAID has existed solely for the purpose of solving issues with Windows.
This is mind-blowing; I had no idea. Unfortunately I need to stick w/ Windows for the foreseeable future at least, as I'm just now dipping into servers myself and this is for a personal project ( a new web app I'm creating ). Very informative, thank you.
-
@creayt said:
@scottalanmiller said:
And we've seen people lose data from Storage Spaces failing, so this isn't just theory.
Ah, I see. That's kind of what I'm looking for. What happened to cause the data loss? Random failure?
Yes, the fear is around the entire framework failing. Data recovery from Windows software RAID, and its long-term stability, have never been all that great.
Now if the only goal is speed, your priorities change. So if you really just care about how fast it can go, you look at things differently.
-
@scottalanmiller said:
Now if the only goal is speed, your priorities change. So if you really just care about how fast it can go, you look at things differently.
I see. I guess it makes sense, at least w/ my limited knowledge of how it all works. If a single 850 Pro using system RAM as the write cache can pull off the numbers below on my home-made $1000 workstation ( over 4 GB/s read and write ), and my server has 256GB of RAM for Storage Spaces to use, I imagine the hardware RAID wouldn't stand a chance. The risk of data loss is very scary though, and may end up being the deciding factor. Out of curiosity, were the data loss issues you saw from the pre-Server 2012 era or post?
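( For what it's worth, a crude way to see why a RAM write-back cache posts numbers no SATA SSD can: compare an in-memory "write" to one forced to stable storage. A sketch with an arbitrary 256 MiB payload and a hypothetical temp file name; exact numbers vary wildly by machine. )

```python
# Crude illustration: buffering a write in RAM vs. forcing it to the device.
# Payload size and file name are arbitrary; results vary by machine.
import os, time

payload = b"x" * (256 * 2**20)          # 256 MiB
size_gb = len(payload) / 1e9

t0 = time.perf_counter()
buf = bytearray(payload)                # the "write" lands in RAM
ram_s = time.perf_counter() - t0

t0 = time.perf_counter()
with open("bench.tmp", "wb") as f:
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())                # durable writes must reach the device
disk_s = time.perf_counter() - t0
os.remove("bench.tmp")

print(f"RAM buffer  : {size_gb / ram_s:6.1f} GB/s")
print(f"fsync'd file: {size_gb / disk_s:6.2f} GB/s")
```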
-
The issues have mostly been around array failure: either at run time or at reboot, the array simply fails and is lost, with or without a drive failure. The software RAID equivalent of a DAC, I suppose. I have no doubt that Microsoft is putting tremendous effort into addressing traditional shortcomings and working to catch up on their decade-long lag versus Solaris and other platforms. But Storage Spaces is still nascent and needs time to prove its reliability before I will be comfortable recommending it, given a twenty-year history of problems with the product and some continuing reports of issues still.
-
@creayt said:
What RAID level is giving you those numbers?
The 1:10 Sequential ratio seems really wrong.
-
@MattSpeller said:
@creayt said:
What RAID level is giving you those numbers?
The 1:10 Sequential ratio seems really wrong.
That's literally a SINGLE 850 Pro 256 GB using the box's RAM as a write-back cache ( Samsung's "RAPID mode" ).
-
@creayt ohhhhhhhhhhhhh ok - that was messing with my brain. thank you for clarification.
-
@MattSpeller said:
@creayt ohhhhhhhhhhhhh ok - that was messing with my brain. thank you for clarification.
NP.
I should note, in case it matters, that it's a quad-core 3.8 GHz Xeon w/ HT and 32 GB of DDR3 1600.
My dual-core i7-5500U w/ 8GB of RAM puts these up w/ a single 840 Evo though; notice the awkwardly spectacular 6 GB/s write.
-
@creayt I'm going home to benchmark my (comparatively) budget build 8320 / 840 Pro
I don't think I have the software installed for the RAM drive boost thingy whatever - I should investigate that.
-
@MattSpeller said:
@creayt I'm going home to benchmark my (comparatively) budget build 8320 / 840 Pro
I don't think I have the software installed for the RAM drive boost thingy whatever - I should investigate that.
What you want is Samsung Magician:
http://www.samsung.com/global/business/semiconductor/minisite/SSD/global/html/support/downloads.html
It also lets you overprovision the drive while booted into Windows in a few clicks.
To get these ridic numbers I overprovision really hard, above 25%, FYI. And because it uses the system RAM as the cache ( I think you need at least 8 GB to even enable "RAPID mode", but the more you have the better ).
-
@creayt sweet, will investigate!
-
Remember that the more RAM that you use as cache, the more data is potentially in flight during a power loss. If you have 128GB of RAM cache for your storage, that could be a tremendous amount of data that never makes it to the disk.
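Back-of-envelope on that exposure (my assumed figures: writes ingested at the ~4 GB/s benchmarked earlier, a SATA SSD draining the cache at ~0.5 GB/s):

```python
# Back-of-envelope for the power-loss exposure of a RAM write-back cache.
# Assumed figures: ingest at ~4 GB/s (the benchmark numbers above), SATA
# drain at ~0.5 GB/s. Anything still in RAM when power drops never hits disk.
ingest_gbps = 4.0     # apparent write speed the cache absorbs (GB/s)
drain_gbps = 0.5      # what the SSD can actually commit (GB/s)
cache_gb = 128.0      # RAM devoted to caching

fill_rate = ingest_gbps - drain_gbps   # net growth of dirty (unflushed) data
print(f"Cache fills with dirty data in ~{cache_gb / fill_rate:.0f} s of sustained writes")
print(f"Worst-case loss on a power cut: up to {cache_gb:.0f} GB in flight")
```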
-
@scottalanmiller said:
Remember that the more RAM that you use as cache, the more data is potentially in flight during a power loss. If you have 128GB of RAM cache for your storage, that could be a tremendous amount of data that never makes it to the disk.
So would your recommendation be "Storage Spaces isn't fit for production; always, always go hardware RAID if you're running a mission-critical database"?
And if so, given my hardware:
R620
10x 1TB 850 Pro SSDs
2x Xeon E5-2680 octos
256GB DDR 1600 ECC
And my workload:
Single web app that's a hybrid between a personal to do app and a full enterprise project manager
IIS
Java-based app server
MySQL
MongoDB
Node JS
Would your recommendation be to just go OBR10?
-
Yes, OBR10 and hardware RAID would be my recommendation. Even if you sacrifice a little speed, the protection against failure is a bit better. I would sleep better with hardware RAID there.
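For concreteness, the numbers on OBR10 with the drives above (a sketch; real risk also depends on rebuild time and whether failures are independent): 10 × 1 TB in RAID 10 is five mirrored pairs.

```python
# Worked numbers for OBR10: 10 x 1 TB drives as five mirrored pairs (RAID 10).
# Sketch only; real failure math also depends on rebuild windows.
drives, size_tb = 10, 1
pairs = drives // 2

print(f"Usable capacity: {pairs * size_tb} TB of {drives * size_tb} TB raw")
# After one drive dies, only its mirror partner is fatal, so a random second
# failure among the remaining drives kills the array with probability:
print(f"Second-failure kill chance: 1/{drives - 1} = {1 / (drives - 1):.0%}")
print(f"Tolerates 1 failure guaranteed, up to {pairs} if each pair loses at most one drive")
```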
-
@scottalanmiller said:
Yes, OBR10 and hardware RAID would be my recommendation. Even if you sacrifice a little speed, the protection against failure is a bit better. I would sleep better with hardware RAID there.
Do you have any blog posts on what block size settings to use for web app/database mixed-load OBR10s? Or a favorite primer link you hand out to newbs?
-
@creayt said:
@scottalanmiller said:
Yes, OBR10 and hardware RAID would be my recommendation. Even if you sacrifice a little speed, the protection against failure is a bit better. I would sleep better with hardware RAID there.
Do you have any blog posts on what block size settings to use for web app/database mixed-load OBR10s? Or a favorite primer link you hand out to newbs?
No, afraid not.