How can we recover data from hard drives that were on RAID 10 without the controller?
-
@PhlipElder: Why did you stop deploying RAID 10? It's about the most fault-tolerant and performance-oriented RAID config one can get for hardware RAID.
-
@manxam said in How can we recover data from hard drives that were on RAID 10 without the controller?:
@PhlipElder: Why did you stop deploying RAID 10? It's about the most fault-tolerant and performance-oriented RAID config one can get for hardware RAID.
Nope.
Had a drive die in a six-drive RAID 10 on a virtualization host.
I popped by, did a hot swap of the dead drive, the rebuild started, and I sat down for a coffee with the on-site IT person.
About 5 minutes into that coffee we heard a BEEP, BEEP-BEEP, and then nothing. The server was sitting at the RAID POST prompt indicating a failed array, and it would not POST.
Its pair had died too.
I'll stick with RAID 6, thank you very much. We'd still have had the server.
We ended up installing a fresh OS, setting things up, and recovering from backup (ShadowProtect) after flattening and rebuilding the array.
-
@PhlipElder said in How can we recover data from hard drives that were on RAID 10 without the controller?:
I'll stick with RAID 6, thank you very much. We'd still have had the server.
You can't say that. There's way more work being done on the drives with RAID 6; maybe then 3 or 4 drives would have gone out close together instead of just two. If you think the RAID 10 was the cause of two drives dying, then holy shit, a RAID 6 would have killed 3+.
My guesses are one or more of the following:
- a bad batch of drives
- wrong drives
- drives used past their warranty/life expectancy
- lack of monitoring
And by the way, a RAID 10 rebuild isn't really a "rebuild" in the same sense. It's not a disk-intensive operation the way it is with RAID 6.
-
@Obsolesce said in How can we recover data from hard drives that were on RAID 10 without the controller?:
You can't say that.
And by the way, a RAID 10 rebuild isn't really a "rebuild" in the same sense. It's not a disk-intensive operation the way it is with RAID 6.
I think he meant that RAID 10 can only handle one drive failure with certainty, while RAID 6 can handle two drive failures with certainty.
Rebuild is not much different, really. On RAID 6, all drives in the array are read concurrently and one full drive of data is written to the new drive. On RAID 10, one drive is read and one full drive of data is written to the new drive. So the read intensity and write intensity are the same per drive; there are just more drives that need to be read when rebuilding a RAID 6 array.
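To put rough numbers on that - a minimal sketch in Python, assuming a hypothetical six-drive array of 4 TB disks and no production load during the rebuild (both numbers are assumptions for illustration):

```python
# Back-of-envelope rebuild I/O, assuming a hypothetical six-drive array
# of 4 TB disks and an otherwise idle array.
drive_tb = 4
n_drives = 6

# RAID 10: the dead drive's mirror partner is read once, end to end,
# and one full drive of data is written to the replacement.
raid10_readers = 1

# RAID 6: every surviving member is read once, end to end, so the missing
# data/parity can be recomputed for the replacement.
raid6_readers = n_drives - 1

print(f"RAID 10: {raid10_readers} drive reads {drive_tb} TB; replacement writes {drive_tb} TB")
print(f"RAID 6 : {raid6_readers} drives each read {drive_tb} TB; replacement writes {drive_tb} TB")
```

Either way, each drive involved streams one full drive's worth of data; RAID 6 simply involves more of them, plus the parity math on the controller.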
-
@Pete-S said in How can we recover data from hard drives that were on RAID 10 without the controller?:
I think he meant that RAID 10 can only handle one drive failure with certainty, while RAID 6 can handle two drive failures with certainty.
Sure, it can handle two drive failures - but at what cost? I mean, if you're on SSDs, then sure, great; hell, the argument is there for RAID 5, but then you're back to only being able to lose one drive, so meh. But RAID 6 is so bloody slow compared to RAID 10, etc. If that's the only reason you're going RAID 6, I'm not sure the logic is there to support it.
-
@Obsolesce said in How can we recover data from hard drives that were on RAID 10 without the controller?:
You can't say that. If you think the RAID 10 was the cause of two drives dying, then holy shit, a RAID 6 would have killed 3+.
Please re-read what I wrote and stop interpreting it.
-
@Dashrender said in How can we recover data from hard drives that were on RAID 10 without the controller?:
But RAID 6 is so bloody slow compared to RAID 10, etc. If that's the only reason you're going RAID 6, I'm not sure the logic is there to support it.
To me it's simple: if you need speed, you're on SSDs. Period.
If you need storage space, then it's 3.5" HDDs with RAID 1 for a small array (<=16 TB) and RAID 6 for a large array. There might be some need for RAID 10 on HDDs somewhere, but I think it's making less and less sense.
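Read literally, that rule of thumb is simple enough to write down. A toy sketch - the 16 TB cutoff is from the post, everything else is illustrative:

```python
# A literal reading of the rule of thumb above; purely illustrative.
def pick_storage(need_speed: bool, usable_tb: float) -> str:
    if need_speed:
        return "SSDs"                     # speed requirement trumps layout
    if usable_tb <= 16:
        return "3.5\" HDDs in RAID 1"     # small array
    return "3.5\" HDDs in RAID 6"         # large array

print(pick_storage(True, 4))    # SSDs
print(pick_storage(False, 8))   # 3.5" HDDs in RAID 1
print(pick_storage(False, 48))  # 3.5" HDDs in RAID 6
```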
-
@Pete-S said in How can we recover data from hard drives that were on RAID 10 without the controller?:
To me it's simple: if you need speed, you're on SSDs. Period.
If you need storage space, then it's 3.5" HDDs with RAID 1 for a small array (<=16 TB) and RAID 6 for a large array.
Aww, so you're just against RAID 10 and RAID 5, period?
-
@PhlipElder said in How can we recover data from hard drives that were on RAID 10 without the controller?:
Please re-read what I wrote and stop interpreting it.
I'm curious where he got it wrong? RAID 10s are considered ridiculously reliable. The most likely reason for a failure of two drives in a RAID 10 is a single batch of drives - so they all, or several, reach end of life at about the same time.
-
@Dashrender said in How can we recover data from hard drives that were on RAID 10 without the controller?:
Aww, so you're just against RAID 10 and RAID 5, period?
For HDDs in general - yes.
-
@Dashrender said in How can we recover data from hard drives that were on RAID 10 without the controller?:
I'm curious where he got it wrong? RAID 10s are considered ridiculously reliable. The most likely reason for a failure of two drives in a RAID 10 is a single batch of drives - so they all, or several, reach end of life at about the same time.
A drive is a drive. It's a piece of machinery prone to failure just like any other. Period.
During the rebuild, its partner does indeed get stressed, as it handles both the regular workload and the reads that feed the writes to the replacement drive. So, bunk on that.
-
@PhlipElder said in How can we recover data from hard drives that were on RAID 10 without the controller?:
During the rebuild, its partner does indeed get stressed, as it handles both the regular workload and the reads that feed the writes to the replacement drive. So, bunk on that.
During a rebuild, production should be stopped if possible. If you were still using the server while the resilver was taking place, of course there was going to be more stress on the array.
-
@openit Not sure whether the same approach would work with QNAP RAID, but Synology has a KB on it: https://www.synology.com/en-global/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC
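For what it's worth, Synology and QNAP units both build their volumes on Linux md under the hood, so the usual recovery approach is to plug the disks into a Linux PC and assemble the array read-only. A rough sketch of that driven from Python - the device names are hypothetical and depend on what `lsblk` shows on your machine:

```python
import subprocess

# Hypothetical member partitions - check `lsblk` / `blkid` on the recovery
# PC and substitute the real ones. Run as root, ideally against dd images
# of the disks rather than the disks themselves.
members = ["/dev/sdb3", "/dev/sdc3", "/dev/sdd3", "/dev/sde3"]

# Inspect each member's md superblock first (read-only, safe to run).
for dev in members:
    subprocess.run(["mdadm", "--examine", dev], check=True)

# Assemble the array read-only so nothing is written to the members,
# then mount it read-only and copy the data off.
subprocess.run(["mdadm", "--assemble", "--readonly", "/dev/md0", *members], check=True)
subprocess.run(["mkdir", "-p", "/mnt/recovery"], check=True)
subprocess.run(["mount", "-o", "ro", "/dev/md0", "/mnt/recovery"], check=True)
```

If the vendor layered LVM on top of md (Synology SHR does), you'd activate the volume group between the assemble and the mount; the Synology KB above walks through that variant.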
-
Looks like it really boils down to not following the 3-2-1 rule, or its revised version, 3-2-2, where you have both cloud-based and local external backups of your NAS's important data.
-
@PhlipElder said in How can we recover data from hard drives that were on RAID 10 without the controller?:
A drive is a drive. It's a piece of machinery prone to failure just like any other. Period.
During the rebuild, its partner does indeed get stressed, as it handles both the regular workload and the reads that feed the writes to the replacement drive. So, bunk on that.
Statistically, RAID 10 is reliable to an absurd degree, even with drive technology generations old. It's so statistically reliable that the failure rate is all but incalculable.
-
@scottalanmiller said in How can we recover data from hard drives that were on RAID 10 without the controller?:
Statistically, RAID 10 is reliable to an absurd degree, even with drive technology generations old. It's so statistically reliable that the failure rate is all but incalculable.
Guess my experience kinda blows that assumption out of the water, eh?
-
@PhlipElder said in How can we recover data from hard drives that were on RAID 10 without the controller?:
I'll stick with RAID 6, thank you very much. We'd still have had the server.
In that example, for which we would need a lot more info to understand it, you can't say RAID 6 would have survived. It may have died too.
There are open questions: how long was the remaining drive left running before the replacement was provided, were the drives damaged, was it actually a controller failure, etc.
I've had RAID 1 fail recently, but it was the controller, not the drives.
Mathematically and from research, RAID 10, when operated correctly, is safe to the point that you never actually need to worry about it. And RAID 6 is mathematically less safe.
Using an anecdote, especially one where the key factors aren't mentioned, doesn't even remotely suggest that avoiding RAID 10 is a proper takeaway, or that using RAID 6 is better, or that RAID 6 would have protected you, or, if it did, that it wasn't a fluke.
Regardless of there being a possible anecdote where RAID 10 failed, your response to it suggests that you were expecting it to be impossible to fail, which doesn't make sense. It's implausible for it to fail; that's not the same thing.
A similar reaction would be to avoid flying because you had been in a crash. But the fact that you were in a rare crash doesn't change the fact that driving is more dangerous than flying. It's a misunderstanding of how to apply the lesson learned.
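To make the flying analogy concrete with toy numbers - a 2% annualized failure rate and an 8-hour rebuild window are assumptions, and failures are treated as independent:

```python
# Toy numbers only: chance that the one specific mirror partner dies during
# the rebuild window, assuming independent failures, a 2% annualized failure
# rate, and an 8-hour rebuild (all three are assumptions).
afr = 0.02
p_hour = afr / (24 * 365)        # crude per-hour failure probability
rebuild_hours = 8

# A RAID 10 only loses data if the dead drive's one partner fails in the window.
p_pair_loss = 1 - (1 - p_hour) ** rebuild_hours
print(f"P(partner dies during rebuild) ~ {p_pair_loss:.1e}")  # about 2e-05
```

Under those assumptions the pair-loss chance per incident is on the order of one in fifty thousand - the "implausible, not impossible" territory being described.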
-
@PhlipElder said in How can we recover data from hard drives that were on RAID 10 without the controller?:
During the rebuild, its partner does indeed get stressed, as it handles both the regular workload and the reads that feed the writes to the replacement drive. So, bunk on that.
There is stress, but it's trivial stress on one drive versus heavy stress on many. The time and workload differences are huge.
They aren't comparable stresses. Very big numerical differences.
-
@Dashrender said in How can we recover data from hard drives that were on RAID 10 without the controller?:
I'm curious where he got it wrong? RAID 10s are considered ridiculously reliable. The most likely reason for a failure of two drives in a RAID 10 is a single batch of drives - so they all, or several, reach end of life at about the same time.
That's true - sort of. But the single-batch problem is a red herring. Even in a single defective batch, failures normally occur very far apart. It's statistically almost impossible for the batch problem alone to cause a RAID 1 failure.
Almost all double drive failures come from long replacement times between the failure and the resilver. If the drive is replaced promptly, the batch problem effectively doesn't exist. People talk about it, but it's not real.
But that's how double drive failures happen, and they all but never do. In the real world, controller, backplane, and connection failures - which often look and behave like double drive failure - are what people actually experience. These are most commonly caused by vibration or similar issues making drives drop out of and rejoin an array.
I've seen it on all kinds of arrays - RAID 1 just last week. If no one had analysed the drives, they'd have thought we had a double drive failure. But we didn't. A drive left and rejoined from vibration, in a bad order, and got overwritten. The physical drive didn't fail.
This failure mode affects all array types and is far more common than actual disk failure in most environments.
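The replacement-delay point is easy to see with the same toy model as the earlier sketch (2% annualized failure rate assumed, failures treated as independent):

```python
# Same toy model as above: how exposure grows when a degraded mirror is left
# unreplaced (2% annualized failure rate assumed, failures independent).
afr = 0.02
p_hour = afr / (24 * 365)

for label, hours in [("8 hours", 8), ("3 days", 72), ("1 month", 720), ("6 months", 4320)]:
    p = 1 - (1 - p_hour) ** hours
    print(f"degraded for {label:>8}: P(partner loss) ~ {p:.1e}")
```

Prompt replacement keeps the window tiny; months of neglect raise the risk by orders of magnitude, which is where most real "double drive failures" come from.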
-
@scottalanmiller said in How can we recover data from hard drives that were on RAID 10 without the controller?:
There is stress, but it's trivial stress on one drive versus heavy stress on many. The time and workload differences are huge.
They aren't comparable stresses. Very big numerical differences.
From a single drive's point of view, what is the difference in stress level, and what causes it?