Examining the Dell PERC H310 Controller
-
Thank you, dear @scottalanmiller.
-
I have one of these. Do you want me to run some tests?
Also, should I get rid of this and get the next model up? Going to do 3 Edge SSDs on it in a RAID 5 config.
-
You have an H310 that is not being used for anything?
-
It's in my new server that is currently sitting dormant, awaiting the release of Server 2016.
-
@BRRABill said:
It's in my new server that is currently sitting dormant, awaiting the release of Server 2016.
Ah ha, so we might have some time yet! Months, probably. Yeah, let's do some testing!
-
Let's start with a Linux live CD. Linux Mint probably has more drivers than most, and their live CD is really easy to deal with. I'd download that, and let's see what it sees with that controller.
-
OK.
I won't see it again until Monday. Let me know what you want me to do, and I'll do it.
Maybe it'll prove itself worthy and I won't have to upgrade.
-
You want to try running it as a Linux server?
-
Or do you mean the H310 being worthy?
-
@scottalanmiller said:
Or do you mean the H310 being worthy?
I mean the H310 being worthy.
Meaning that, for my usage, I can keep it instead of moving up to the H710.
-
Not likely; we already know it has no cache hardware.
-
Well, since I'm not running any crazy apps on the thing and will have SSDs, maybe it'll be OK for me.
-
@BRRABill said:
Well, since I'm not running any crazy apps on the thing and will have SSDs, maybe it'll be OK for me.
Defeats the point of SSDs quite a bit, though, and increases wear and tear on them dramatically.
-
@scottalanmiller said:
Defeats the point of SSDs quite a bit, though, and increases wear and tear on them dramatically.
Why is that?
-
@BRRABill said:
@scottalanmiller said:
Defeats the point of SSDs quite a bit, though, and increases wear and tear on them dramatically.
Why is that?
Because the RAID cache is a major component of speed, moving things into memory. And the wear and tear is because, with SSDs, you set the cache to be primarily for writes, and many of the writes, especially when you have RAID 5, which suffers from 400% write expansion, are absorbed by the RAID controller. If a single block is changed 20 times, the controller might absorb all of those writes and keep them from going to the disks at all. And it can queue things for efficient writing. Very important with SSDs and parity arrays.
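Here's a rough sketch of that absorption idea, just to illustrate it (toy Python, nothing like the controller's actual firmware):

```python
# Toy model of a write-back RAID cache coalescing repeated writes.
# Purely illustrative; real controller firmware is far more involved.

class WriteBackCache:
    def __init__(self):
        self.dirty = {}          # block number -> latest pending data
        self.disk_writes = 0     # writes that actually reach the SSDs

    def write(self, block, data):
        # A re-write of a block already in cache just replaces the cached
        # copy; nothing goes to the disks yet.
        self.dirty[block] = data

    def flush(self):
        # Each dirty block hits the disks exactly once at flush time,
        # no matter how many times it was changed while cached.
        self.disk_writes += len(self.dirty)
        self.dirty.clear()

cache = WriteBackCache()
for i in range(20):              # the same block changed 20 times
    cache.write(block=7, data=f"version {i}")
cache.flush()
print(cache.disk_writes)         # 1 -- the other 19 writes never touched the SSDs
```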
-
Wouldn't it have the same issue with "spinning rust" as you guys call it?
-
@BRRABill said:
Wouldn't it have the same issue with "spinning rust" as you guys call it?
Except there is no appreciable wear and tear from writes with spinning rust.
-
@scottalanmiller said:
Except there is no appreciable wear and tear from writes with spinning rust.
Is it proven (questioning the theory, not you) that this is really a concern with SSDs? Especially server-grade SSDs?
-
@BRRABill said:
Is it proven (questioning the theory, not you) that this is really a concern with SSDs? Especially server-grade SSDs?
That writes wear them out? Yes, it is very well established that writes are the only significant reliability concern for SSDs. Shock, temperature, operating duration, and read frequency all have effectively zero effect on them. Writes alone cause them measurable wear.
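To put some (made-up) numbers on it, SSD endurance ratings are expressed purely in terms of data written, which is why writes are the thing to watch:

```python
# Back-of-the-envelope SSD endurance math. All numbers are hypothetical,
# just to show that lifespan is driven by bytes written and nothing else.

tbw_rating_tb = 300        # rated endurance in terabytes written (hypothetical drive)
writes_gb_per_day = 50     # average host writes per day (hypothetical workload)

days_of_life = (tbw_rating_tb * 1000) / writes_gb_per_day
print(f"~{days_of_life / 365:.1f} years before the rated write endurance is used up")
# ~16.4 years at this rate; double the daily writes and the figure halves.
```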
-
The risk is far lower than people like to make it out to be, and enterprise drives are much better than non-enterprise drives, but normally drives do not take direct writes in any serious server situation. Having enterprise drives without a cache in front of them is an odd pairing and not something that we would ever expect to see in an enterprise scenario. RAID array cache is one of the most significant features looked for in servers. 1GB of cache is normally a minimum today.
Add to that parity write expansion and you might have a lot more writes than is normally expected.
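A quick worked example of what that parity write expansion plus a missing cache can mean (hypothetical numbers, just to show the arithmetic):

```python
# RAID 5 small-write overhead: each small random host write becomes
# read old data + read old parity + write new data + write new parity,
# i.e. 4 disk operations per host write -- the 4x (400%) expansion noted above.

host_writes = 1_000_000        # hypothetical count of small random writes
raid5_penalty = 4              # disk I/Os per host write on RAID 5

print(host_writes * raid5_penalty)              # 4,000,000 ops with no cache in front

# A write-back cache can absorb re-writes of hot blocks before they ever
# reach the array; if, say, half the host writes are coalesced away,
# the SSDs see only half the traffic.
print(int(host_writes * 0.5 * raid5_penalty))   # 2,000,000 ops (hypothetical rate)
```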