RAID 10, 20 Disks, How Many Hot Spares
-
So you want a scenario? This is contrived and not mine to make but here we go...
- SMBs should basically always have their servers in colocation facilities. What SMB has the facilities to host their own properly? Datacenters charge for manual labor and don't always provide easy access for vendors. Having a hot spare in the datacenter can mean instant recovery instead of waiting hours or days for the vendor to get in with spare parts (and it lets you buy NBD support contracts instead of 4-hour ones to save money), adding tons of protection for very little money. This grows significantly if you don't have a vendor doing the swaps but plan to do them yourself. NTG's travel time to our old datacenter was four hours, for example.
- Even in a datacenter, cold spares can take a long time to get swapped into place if the DC is busy, especially if things happen off hours. And there is risk that the wrong drive will be replaced, the server can't be found, or whatever. Pay for a Tier IV and that stuff mostly goes away, but SMBs are often in lower-tier DCs or host on premises and accept the risk that the people involved will be less trained and make more mistakes.
- IT Pros often don't understand RAID and will power down a machine when the RAID needs a drive replaced. A lot of people tackle this in the real world when they aren't the sole IT guy and are forced to make systems as self-healing as possible, because they don't always know who is going to be doing the work, especially years in the future when the systems will be most likely to fail. It's an investment in better processes. So even simple on-premises systems have reasons why hot spares can make sense.
- Many SMBs don't have full-time IT staff; that alone explains everything.
- Many SMBs don't have on-premises IT staff; again, that totally explains it.
- Many SMBs have fewer IT staff than they have physical locations.
- MSPs are often not given blanket access to customer facilities and need to provide protection more rapidly than the customer can reliably provide physical access.
- Systems in remote locations do not always have reliable supply chains, especially outside of the US. Whether you are on an island in Lake Superior, in Matagalpa, Nicaragua, on a cruise ship, at a research station on a mountain, or in a state that gets way too much snow, hurricanes, or flooding... having hot spares that can take care of things when staff and/or supply chains cannot get drives swapped promptly can be absolutely critical.
- Many SMBs run without IT staff and need systems to be as self-healing as possible.
-
@scottalanmiller said in RAID 10, 20 Disks, How Many Hot Spares:
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
@scottalanmiller said in RAID 10, 20 Disks, How Many Hot Spares:
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
@scottalanmiller said in RAID 10, 20 Disks, How Many Hot Spares:
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
My use case is on-prem with easy access. Define yours and maybe we can agree on something.
- No one even suggested that on prem was going on; that's a totally false assumption. So you can't make up a use case and then use it to claim "it's always this way."
No one said it wasn't
So because you inject your own details and no one specifically disputes them, they become true?
That seems to be what you do
Okay, what detail did I interject? I'm working from the OP and nothing else. What have I added?
We all come at this with different perspectives. You looked at this and assumed it was in a colo. I assumed it was on prem. We don't even know enough to speculate (but we do anyways because it's a fun thought experiment). We don't even know what it's hosting, what level of risk is acceptable to the business, etc.
Given what we do know:
"there is a single RAID array of 20 spinning disks in RAID 10 and the person asking wants to know how many hot spares would be recommended."
If it were in a colo I'd put spares in it. If it were on prem I'd not waste a slot on hot spares unless there was a really insanely risk averse business case.
-
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
We all come at this with different perspectives. You looked at this and assumed it was in a colo.
No, I did not and do not. I only assume that the question is about what is asked - the risk offset from adding more hot spares. Colo was only mentioned because you told me that I had to provide a scenario in which the OP was acceptable.
I assumed and still do that colo is one of the options, but I have no idea what they are doing, only that they have an array and are now looking at risk offset values.
-
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
"there is a single RAID array of 20 spinning disks in RAID 10 and the person asking wants to know how many hot spares would be recommended."
If it were in a colo I'd put spares in it. If it were on prem I'd not waste a slot on hot spares unless there was a really insanely risk averse business case.
Even in that case, I would rarely put hot spares in a colo. We have servers, and have had servers, in colos for years, both SMB and Wall St. enterprise, and in both cases: no hot spares.
Reason? Our risk aversion did not dictate that it was necessary, and our colocation facilities could handle relatively rapid swaps of spare equipment. Colo makes hot spares somewhat more reasonable, but it is still primarily a risk-aversion and access use case. Even there, I think it's rarely a good financial decision for most workloads.
We use Colocation America right now, so our swaps would take about six hours: four to five hours for the vendor to get the drive there, and about an hour for them to coordinate, get the tech to the server, do the swap, etc. Well worth it not to waste money on extra drives sitting around doing nothing for us.
-
I totally agree with @MattSpeller that most companies would be better served by more IOPS and more capacity than they have, and that hot spares are relatively useless for them. That part I am totally in agreement with.
-
Why would you not put a hot spare in a RAID 10? -- Especially if you are trying to mitigate some risk of a drive failing.
-
@dafyre said in RAID 10, 20 Disks, How Many Hot Spares:
Why would you not put a hot spare in a RAID 10? -- Especially if you are trying to mitigate some risk of a drive failing.
Well the obvious reasons against it are these two things:
- Those spare slots could potentially be used for other purposes (Matt's IOPS and Capacity point.)
- The cost of the hot spares is easily higher than the protection value that they provide.
Those are the two arguments against hot spares in the general sense. Really they are the same thing said twice, but I'll point out the differences and why we separate them for discussion...
- The first is about "how the existing physical equipment could be better used" - the opportunity cost in the technical space.
- The second is about "how the same money could be better spent" - the opportunity cost in the financial space.
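To put a rough number on how much risk is actually being offset, here is a back-of-the-envelope sketch. The annual failure rate, rebuild time, and NBD delay below are assumed purely for illustration (only the six-hour colo swap figure comes from earlier in the thread); it simply asks how likely the surviving mirror partner is to fail during the exposure window.

```python
# Rough sketch, not a real reliability model: assumed AFR, rebuild time and
# NBD delay; ignores URE-during-rebuild risk and correlated failures.
#
# In a 20-disk RAID 10 (10 mirror pairs), a single drive failure only
# endangers the array if that drive's mirror partner also fails before the
# replacement finishes rebuilding. A hot spare shrinks the exposure window
# to roughly the rebuild time; a cold/vendor swap adds the response time.

AFR = 0.03                       # assumed annual failure rate per disk (3%)
HOURS_PER_YEAR = 8766
RATE_PER_HOUR = AFR / HOURS_PER_YEAR

REBUILD_HOURS = 12               # assumed rebuild time for one large spinner
COLO_SWAP_HOURS = 6              # swap delay quoted earlier in the thread
NBD_SWAP_HOURS = 24              # assumed next-business-day response

def partner_loss_probability(exposure_hours: float) -> float:
    """Chance the surviving mirror partner fails during the window."""
    return 1 - (1 - RATE_PER_HOUR) ** exposure_hours

for label, window in [
    ("hot spare (rebuild only)", REBUILD_HOURS),
    ("6h colo swap + rebuild", COLO_SWAP_HOURS + REBUILD_HOURS),
    ("NBD swap + rebuild", NBD_SWAP_HOURS + REBUILD_HOURS),
]:
    print(f"{label:26s}: {partner_loss_probability(window):.5%}")
```

With these assumed numbers the per-incident exposure stays around a hundredth of a percent or less in every case, and that is the scale that has to be weighed against the drive slots and dollars the hot spares consume.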
-
@scottalanmiller said in RAID 10, 20 Disks, How Many Hot Spares:
I totally agree with @MattSpeller that most companies would be better served by more IOPS and more capacity than they have, and that hot spares are relatively useless for them. That part I am totally in agreement with.
-
So is it done? Does Matt understand and agree to the point that Scott was making?
-
@Dashrender said in RAID 10, 20 Disks, How Many Hot Spares:
So is it done? Does Matt understand and agree to the point that Scott was making?
Yes, I believe so.
TL;DR attempt #1 #2 #3 #4 (counting edits):
- RAID 10 does not need hot spares.
- If you have spare slots, you'd be better served by a larger array with more IOPS.
- The corner case (the one raised by the OP's question?) is whether hot spares would reduce the risk of array failure. The answer is 100% absolutely yes, they will reduce the risk of failure.
- The disagreement (I think..?) was whether that's necessary. We agreed that it isn't necessary to have any hot spares for RAID 10 unless there are mitigating factors (examples: a remote colo with horrific access issues, an extremely risk-averse use case).
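For a sense of scale on that last point, here is a hedged follow-on to the probability sketch earlier in the thread, converting the same kind of assumed failure numbers into rough dollar terms. Every figure below (failure rate, rebuild and swap times, cost of an array loss, drive price) is invented for illustration.

```python
# Illustrative only: all rates, times and dollar figures are assumptions.
# The model ignores URE-during-rebuild risk, correlated failures, and the
# fact that the hot spare itself ages and can fail.

N_DISKS = 20                      # 20-disk RAID 10 = 10 mirror pairs
AFR = 0.03                        # assumed annual failure rate per disk
HOURS_PER_YEAR = 8766
RATE_PER_HOUR = AFR / HOURS_PER_YEAR

REBUILD_HOURS = 12                # assumed: exposure with a hot spare
SWAP_PLUS_REBUILD_HOURS = 18      # assumed: 6h manual swap + 12h rebuild
COST_OF_ARRAY_LOSS = 50_000       # assumed: downtime plus restore from backup
COST_OF_HOT_SPARE = 400           # assumed price of one idle drive

first_failures_per_year = N_DISKS * AFR   # expected single-drive failures

def expected_annual_loss(exposure_hours: float) -> float:
    """Expected yearly cost of losing a mirror pair during the window."""
    p_partner = 1 - (1 - RATE_PER_HOUR) ** exposure_hours
    return first_failures_per_year * p_partner * COST_OF_ARRAY_LOSS

averted = (expected_annual_loss(SWAP_PLUS_REBUILD_HOURS)
           - expected_annual_loss(REBUILD_HOURS))
print(f"expected annual loss averted by a hot spare: ${averted:,.2f}")
print(f"assumed cost of one idle hot spare drive:    ${COST_OF_HOT_SPARE:,.2f}")
```

With these invented numbers the hot spare averts well under a dollar of expected loss per year against a few hundred dollars of hardware, which is the shape of the trade-off described above: yes, the risk goes down, just not by enough to justify the slot for most workloads.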