Large or small Raid 5 with SSD
-
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@travisdh1 said in Large or small Raid 5 with SSD:
Normal operation of the RAID would correct the issue. Degraded behavior depends on the type of RAID; e.g. RAID 6 in degraded mode should function as a RAID 5, so a URE doesn't become a problem until the second drive fails.
To be clear, a URE during normal degraded operations does impact one file, but not the array. From the point of view of the array, nothing is wrong. During a rebuild, that same URE takes out the entire array in a parity RAID system. So very different results from the same URE.
AWWWW - this is what I was missing. OK, a normal read operation will only break one file. Thanks, that explains a lot!
Correct. And often it's a small file that no one cares about or might even be in "empty space" and truly doesn't matter.
A URE puts the filesystem at risk only within the stored data that actually matters, which is normally tiny compared to the size of the full array.
E.g. an 8TB array might hold 4.5TB of data, of which only 2TB is ever needed again. The risk is in a 2TB domain, rather than an 8TB domain. And IF it hits in that space, it is isolated to a single impacted file. So the mitigation is extreme.
You hit UREs on your desktop all of the time, and it almost never matters.
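To put rough numbers on why a full rebuild read is so much riskier than degraded reads of live data, here is a toy model (the 1-in-10^14-bit URE spec for consumer spinners and 1-in-10^17 for SSDs are commonly quoted figures, not from this thread; real rates vary and bit errors are not truly independent):

```python
import math

def p_ure(bytes_read, ure_rate_per_bit):
    """P(at least one URE) while reading bytes_read, independent-bit-error model."""
    bits = bytes_read * 8
    # log1p/expm1 keep precision when the per-bit rate is far below float epsilon
    return -math.expm1(bits * math.log1p(-ure_rate_per_bit))

TB = 10 ** 12

# A parity rebuild must read every sector of the surviving drives.
print(f"8TB read, spinner (1e-14): {p_ure(8 * TB, 1e-14):.0%}")  # roughly 47%
# Normal degraded operation only reads the live data actually requested.
print(f"2TB read, spinner (1e-14): {p_ure(2 * TB, 1e-14):.0%}")  # roughly 15%
print(f"8TB read, SSD (1e-17):     {p_ure(8 * TB, 1e-17):.3%}")  # well under 0.1%
```

Under this model the rebuild-sized read is the dominant risk on spinners, while the same read on an SSD-class URE rate is close to negligible.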
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
I know you said earlier that with raid 5, you may as well add that 5th drive to the array and make it a raid 6 as opposed to sitting on the shelf.
Not "might as well", but "had better make sure you do." The difference in risk is astronomical. If you are even thinking a hot spare is an option, we've not explained adequately how it works.
I was thinking cold spare, not hot spare. I don't want the array rebuilding automatically before I have time to make a conscious decision to do it. But the difference is similar: I would still have a spare that is not helping the array at all, just sitting on the shelf.
This isn't a good idea. You should have an array stable enough that you want it rebuilt. If you have this fear, you need a safer array.
Having never personally used a RAID 5, all I have to go on is information presented online through mediums like ML. Some, perhaps even most, of the information I find is either out of date or pertains to the use of RAID 5 with spinners. I know that in the last 4 years I have had two or three spinners fail in RAID 10 arrays, and a few single drives fail in desktops, both spinners and SSDs. So in my mind, a drive failure is reasonable to expect within the next 5 years. But we have also never had drives with warranties, so that changes the cost equation too.
I am not sure that my fear is rational, because my understanding of the actual risk is limited.
-
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
I know you said earlier that with raid 5, you may as well add that 5th drive to the array and make it a raid 6 as opposed to sitting on the shelf.
Not "might as well", but "had better make sure you do." The difference in risk is astronomical. If you are even thinking a hot spare is an option, we've not explained adequately how it works.
I was thinking cold spare, not hot spare. I don't want the array rebuilding automatically before I have time to make a conscious decision to do it. But the difference is similar: I would still have a spare that is not helping the array at all, just sitting on the shelf.
This isn't a good idea. You should have an array stable enough that you want it rebuilt. If you have this fear, you need a safer array.
Having never personally used a RAID 5, all I have to go on is information presented online through mediums like ML. Some, perhaps even most, of the information I find is either out of date or pertains to the use of RAID 5 with spinners. I know that in the last 4 years I have had two or three spinners fail in RAID 10 arrays, and a few single drives fail in desktops, both spinners and SSDs. So in my mind, a drive failure is reasonable to expect within the next 5 years. But we have also never had drives with warranties, so that changes the cost equation too.
I am not sure that my fear is rational, because my understanding of the actual risk is limited.
The MORE you fear a drive failure, the MORE you would fear not rebuilding instantly, automatically. Your fear does not match your response.
-
That a drive might fail is not in question. In five years, there is a good chance of a drive failing.
What you need to do is apply that to your thinking and say "If I fear drives failing, what protects me from that?"
-
Am I wrong to think that the probability of two drives failing is much less than the probability of just one drive failing? And while, say, a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild, it is still quite low?
-
@Donahue said in Large or small Raid 5 with SSD:
am I wrong to think that the probability of two drives failing is much less than the probability of just one drive failing?
You are correct, and no one is disagreeing with that. It's how you are using this info that is incorrect.
-
@Donahue said in Large or small Raid 5 with SSD:
And while say a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild time, it is still quite low?
So, the fact that the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" plus, say, 8 hours of rebuilding. During which time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
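The extra exposure is easy to quantify with a toy exponential-failure model (the MTBF figure and four-drive array here are illustrative assumptions, not numbers from the thread):

```python
import math

def p_any_failure(n_drives, hours, mtbf_hours=1_000_000):
    """P(at least one of n_drives fails within `hours`), exponential model."""
    return -math.expm1(-n_drives * hours / mtbf_hours)

# RAID 5 of 4 drives with one already failed: 3 survivors, zero redundancy.
instant = p_any_failure(3, 8)        # rebuild starts immediately, ~8h window
delayed = p_any_failure(3, 48 + 8)   # 48h "decision window" before the rebuild
print(f"cold-spare window carries {delayed / instant:.1f}x the risk")  # 7.0x
```

Because the windows are short relative to the MTBF, the unprotected risk scales almost linearly with the window length: 56 hours exposed is about 7x the risk of 8 hours, for no benefit.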
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
And while say a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild time, it is still quite low?
So that the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and say 8 hours of rebuilding. During which time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
Perhaps that comes from what I have read, and perhaps what I have read made sense for spinners, where initiating the rebuild could induce the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row with fresh backups and such, but perhaps that is where my error is: I should know my ducks are in a row long before the first failure.
-
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
And while say a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild time, it is still quite low?
So that the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and say 8 hours of rebuilding. During which time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
perhaps that comes from what I have read, and perhaps what I have read would have made sense with spinners and initializing the rebuild inducing the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row with fresh backups and such, but perhaps that is where my error is, and I should know my ducks were in a row long before the first failure.
With spinners, resilvering can take weeks or months of time, rather than hours, and generally has 6TB+ to resilver with high URE rates. SSDs take hours to resilver, with generally under 1TB of capacity, with low URE rates. So the factors of one apply poorly to the other.
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
And while say a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild time, it is still quite low?
So that the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and say 8 hours of rebuilding. During which time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
perhaps that comes from what I have read, and perhaps what I have read would have made sense with spinners and initializing the rebuild inducing the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row with fresh backups and such, but perhaps that is where my error is, and I should know my ducks were in a row long before the first failure.
With spinners, resilvering can take weeks or months of time, rather than hours, and generally has 6TB+ to resilver with high URE rates. SSDs take hours to resilver, with generally under 1TB of capacity, with low URE rates. So the factors of one apply poorly to the other.
why only 1 TB of capacity?
-
With spinners, you take a backup first because your resilver is often expected to fail. Or the risk is super high, at least.
The backup might take two hours, while the rebuild might take two weeks.
With SSD, the backup might take longer than the rebuild. So the factors of that alone change a lot, too.
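As a back-of-the-envelope check on those timescales (best-case sequential speeds; a real resilver on a busy spinner array runs far slower due to contention and random IO, which is where the weeks come from):

```python
def resilver_hours(capacity_tb, write_mb_s):
    """Best-case time to rewrite one replacement drive at full streaming speed."""
    return capacity_tb * 1_000_000 / write_mb_s / 3600

# Illustrative speeds: ~120 MB/s sustained for a spinner, ~450 MB/s for SATA SSD.
print(f"10TB spinner @ 120 MB/s: {resilver_hours(10, 120):.0f} h best case")
print(f"1TB SATA SSD @ 450 MB/s: {resilver_hours(1, 450):.1f} h best case")
```

Even the theoretical floor for the big spinner is nearly a full day, while the SSD finishes in under an hour, so a backup over the network can easily outlast the SSD rebuild.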
-
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
And while say a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild time, it is still quite low?
So that the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and say 8 hours of rebuilding. During which time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
perhaps that comes from what I have read, and perhaps what I have read would have made sense with spinners and initializing the rebuild inducing the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row with fresh backups and such, but perhaps that is where my error is, and I should know my ducks were in a row long before the first failure.
With spinners, resilvering can take weeks or months of time, rather than hours, and generally has 6TB+ to resilver with high URE rates. SSDs take hours to resilver, with generally under 1TB of capacity, with low URE rates. So the factors of one apply poorly to the other.
why only 1 TB of capacity?
How big do you expect SSDs to be when you have many in an array realistically?
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
And while say a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild time, it is still quite low?
So that the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and say 8 hours of rebuilding. During which time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
perhaps that comes from what I have read, and perhaps what I have read would have made sense with spinners and initializing the rebuild inducing the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row with fresh backups and such, but perhaps that is where my error is, and I should know my ducks were in a row long before the first failure.
With spinners, resilvering can take weeks or months of time, rather than hours, and generally has 6TB+ to resilver with high URE rates. SSDs take hours to resilver, with generally under 1TB of capacity, with low URE rates. So the factors of one apply poorly to the other.
why only 1 TB of capacity?
How big do you expect SSDs to be when you have many in an array realistically?
So you're talking about the single drive, not the array. Got it.
Though when resilvering, you still read the entire array's worth.
-
For the sake of this thread, I am probably going to use 3.84TB SSDs, but the point remains.
-
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
And while say a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild time, it is still quite low?
So that the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and say 8 hours of rebuilding. During which time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
perhaps that comes from what I have read, and perhaps what I have read would have made sense with spinners and initializing the rebuild inducing the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row with fresh backups and such, but perhaps that is where my error is, and I should know my ducks were in a row long before the first failure.
With spinners, resilvering can take weeks or months of time, rather than hours, and generally has 6TB+ to resilver with high URE rates. SSDs take hours to resilver, with generally under 1TB of capacity, with low URE rates. So the factors of one apply poorly to the other.
why only 1 TB of capacity?
How big do you expect SSDs to be when you have many in an array realistically?
so you're talking about the single drive, not the array. Got it.
Though when resilvering, you still read the entire array worth.
Correct, the time to resilver is primarily based on the size of the drive being rebuilt. That's the bottleneck: the time to write data back to the one drive.
So if 4x 10TB drives takes 2 days to replace a drive,
8x 5TB drives would take 1 day to replace a drive. It's not exact, but it is really close.
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
And while say a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild time, it is still quite low?
So that the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and say 8 hours of rebuilding. During which time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
perhaps that comes from what I have read, and perhaps what I have read would have made sense with spinners and initializing the rebuild inducing the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row with fresh backups and such, but perhaps that is where my error is, and I should know my ducks were in a row long before the first failure.
With spinners, resilvering can take weeks or months of time, rather than hours, and generally has 6TB+ to resilver with high URE rates. SSDs take hours to resilver, with generally under 1TB of capacity, with low URE rates. So the factors of one apply poorly to the other.
why only 1 TB of capacity?
How big do you expect SSDs to be when you have many in an array realistically?
so you're talking about the single drive, not the array. Got it.
Though when resilvering, you still read the entire array worth.
Correct, the time to resilver is primarily based on the size of the drive being rebuilt. That's the bottleneck: the time to write data back to the one drive.
So if 4x 10TB drives takes 2 days to replace a drive,
8x 5TB drives would take 1 day to replace a drive. It's not exact, but it is really close.
but with twice the chance of having to rebuild.
-
TANSTAAFL
-
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
And while say a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild time, it is still quite low?
So that the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and say 8 hours of rebuilding. During which time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
perhaps that comes from what I have read, and perhaps what I have read would have made sense with spinners and initializing the rebuild inducing the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row with fresh backups and such, but perhaps that is where my error is, and I should know my ducks were in a row long before the first failure.
With spinners, resilvering can take weeks or months of time, rather than hours, and generally has 6TB+ to resilver with high URE rates. SSDs take hours to resilver, with generally under 1TB of capacity, with low URE rates. So the factors of one apply poorly to the other.
why only 1 TB of capacity?
How big do you expect SSDs to be when you have many in an array realistically?
so you're talking about the single drive, not the array. Got it.
Though when resilvering, you still read the entire array worth.
Correct, the time to resilver is primarily based on the size of the drive being rebuilt. That's the bottleneck: the time to write data back to the one drive.
So if 4x 10TB drives takes 2 days to replace a drive,
8x 5TB drives would take 1 day to replace a drive. It's not exact, but it is really close.
but with twice the chance of having to rebuild.
Correct, needing to rebuild happens roughly twice as often.
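That trade-off can be sketched numerically (the 2% annual failure rate and write speed below are illustrative assumptions, and the model ignores the slowdown a loaded array adds to a rebuild):

```python
def rebuilds_per_year(n_drives, afr=0.02):
    """Expected drive replacements per year at a given annual failure rate."""
    return n_drives * afr

def rebuild_hours(drive_tb, write_mb_s=150):
    """Rough time to rewrite one replacement drive of drive_tb terabytes."""
    return drive_tb * 1_000_000 / write_mb_s / 3600

# Same usable capacity, different drive counts and sizes.
for n, tb in [(4, 10), (8, 5)]:
    print(f"{n}x {tb}TB: {rebuilds_per_year(n):.2f} expected rebuilds/yr, "
          f"~{rebuild_hours(tb):.0f} h each")
```

More, smaller drives means rebuilds happen about twice as often but each one takes about half as long, so the total expected time spent degraded is roughly a wash.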
-
So I got a quote from xByte, but it includes this. I had been expecting that I would just load the hypervisor on the RAID 5 array, and that having a separate RAID 1 array for the OS was an old way of approaching this. Thoughts?
-
@Donahue said in Large or small Raid 5 with SSD:
So I got a quote from xByte, but it includes this. I had been expecting that I would just load the hypervisor on the RAID 5 array, and that having a separate RAID 1 array for the OS was an old way of approaching this. Thoughts?
How much money does that add?