IT Survey: Preemptive Drive Replacement in RAID Arrays
-
@Dashrender Also, maintenance on exceptionally expensive-to-access sites (think weather station in Greenland or something)
-
@MattSpeller said:
@Dashrender Also, maintenance on exceptionally expensive-to-access sites (think weather station in Greenland or something)
That still doesn't make sense because of the failure curve of hard drives. We have no idea whether the new drive will die immediately or soon after installation, and they would then need a second maintenance visit to replace the failed drive. That can happen either way, but it makes more sense to wait until a drive actually fails than to preemptively replace it, especially if you can get months or years more out of the drive you would have replaced.
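To put a rough number on that intuition, here's a quick Monte Carlo sketch. The failure probabilities are invented purely for illustration (a bathtub-shaped curve, not real drive statistics) and the model is a single drive slot, but it shows why planned swaps tend to add maintenance visits rather than remove them.
```python
# Rough Monte Carlo sketch of the failure-curve argument above. The annual
# failure probabilities are invented purely for illustration (a bathtub
# shape), NOT real drive statistics; the point is only that swapping a
# proven, healthy drive for a brand-new one re-exposes you to infant
# mortality and adds a maintenance event of its own.

import random

# Invented annual failure probabilities by drive age in years.
ANNUAL_FAIL_PROB = {0: 0.06, 1: 0.02, 2: 0.02, 3: 0.02, 4: 0.04, 5: 0.07, 6: 0.10}

def years_until_failure():
    """Draw a lifetime (whole years) from the invented bathtub curve."""
    age = 0
    while True:
        p = ANNUAL_FAIL_PROB.get(age, 0.15)  # assume the hazard keeps climbing
        if random.random() < p:
            return age + 1
        age += 1

def maintenance_events(horizon_years, swap_every=None):
    """Count replacement events for one drive slot over the horizon."""
    events, age, life = 0, 0, years_until_failure()
    for _ in range(horizon_years):
        if swap_every and age == swap_every:
            events += 1                       # planned swap is still a site visit
            age, life = 0, years_until_failure()
        elif age >= life:
            events += 1                       # unplanned failure
            age, life = 0, years_until_failure()
        age += 1
    return events

TRIALS, HORIZON = 20_000, 10
wait = sum(maintenance_events(HORIZON) for _ in range(TRIALS)) / TRIALS
swap = sum(maintenance_events(HORIZON, swap_every=3) for _ in range(TRIALS)) / TRIALS
print(f"avg events over {HORIZON} yrs, run-to-failure:   {wait:.2f}")
print(f"avg events over {HORIZON} yrs, swap every 3 yrs: {swap:.2f}")
```
With hazard numbers shaped like these, the swap-on-a-schedule policy ends up with more events, because every planned swap is itself a visit and each fresh drive re-enters the infant-mortality zone.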
-
@coliver It makes more sense in that scenario than it does in any other I can think of!
I can think of much better ways to set up a remote station like that - I'm just trying to see if there's a scenario where his advice is actually... good.
-
For the hard-to-access station, they should have spares on a shelf. But in theory, when you buy a drive and store it for 3 years, what happens with the warranty if you put it in and it dies after a month?
-
@Breffni-Potter spares are a luxury unless you use them on a regular basis
-
@MattSpeller said:
@Breffni-Potter spares are a luxury unless you use them on a regular basis
À la the weather station in Greenland.
Shipping cannot be easy, so what are they to do?
-
@Breffni-Potter said:
For the hard-to-access station, they should have spares on a shelf. But in theory, when you buy a drive and store it for 3 years, what happens with the warranty if you put it in and it dies after a month?
It would be out of warranty. But that wouldn't be the situation @MattSpeller is describing. If they only visit the site, say, once every 3 months, presumably they would bring drives with them.
But really, you wouldn't set up a system that relied on this type of solution in this scenario; you'd choose something with more robustness built in. Though I can't tell you exactly what that would look like. Perhaps two or even three equal-sized arrays kept in sync, with redundant data paths, etc. If the data is that important but you can only visit the site once every three months, you can't just use the day-to-day setup in most cases.
-
@Breffni-Potter Oh! Yeah I totally agree - that scenario leads to lots of unusual setups
-
@Dashrender exactly, there are much better ways to set that kinda thing up - I think we're still looking for a scenario where dude-buddy-guy from SW forums would be right. He may just be 100% wrong.
-
I confess to enjoying "devil's advocate" and thought experiments a lot
-
@MattSpeller said:
@Dashrender exactly, there are much better ways to set that kinda thing up - I think we're still looking for a scenario where dude-buddy-guy from SW forums would be right. He may just be 100% wrong.
Well, again, the reason my friend suggested, a lack of personnel resources in times of emergency, could be a valid one.
-
I will tell you 3 concrete facts.
-
You must never reboot the servers. Constant up time is vital.
-
Don't install updates, Microsoft will only break the server to force you to upgrade to the latest version.
-
Linux is not safe for production. Too complicated and too buggy.
Why are these facts true?
Because my experience, training, and mentors have fostered a closed-minded set of views in my mind, and because of this I need to ignore all propaganda. I am not here to listen and learn; I am only here to teach others the correct way of doing things.
Yes, my job security might be at risk because I am not open to new ideas or learning new concepts, but I'm irreplaceable here.
-
Oh, by the way:
If you use mixed operating systems (à la 7/Vista/8.1/10), when you get Cryptolocker or other malware, the damage is limited to one group of operating systems.
-
@Breffni-Potter lol 10/10
-
Well played @Breffni-Potter
-
@DustinB3403 said:
I've heard of doing it every 2-3 years, but not as part of routine maintenance.
What is the routine maintenance schedule where you heard this? Replacing drives every 2-3 years would definitely constitute routine maintenance. I think even for people doing this, 2-3 years seems extremely short.
-
@DustinB3403 said:
To follow up, I've never performed it either. But I have heard people say that they replace their drives to avoid the urgent rush of a RAID being degraded because of a failed drive.
But there is an urgent rush anyway; they didn't avoid one, and they create more of them. It's literally the same as preemptively crashing your car to avoid accidents.
-
@scottalanmiller Oh I completely agree, and said something very similar to that analogy when I heard this.
-
@DustinB3403 said:
Some people simply don't want to understand what has to be performed to rebuild the array when you replace drives just to replace them.
But they have to understand that to do the replacement. A preemptive replacement is a full failure as well; it's just a human breaking the array rather than the drive failing and breaking it. Full knowledge of how to repair the array is needed, and it is identical in both cases.
The extra knowledge needed with preemptive replacement is knowing when you can safely do it, since if you did it while another drive had already failed, you could easily turn a degraded array into a fully lost one.
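To make that concrete, here's a minimal sketch of that "check before you pull" step, assuming Linux software RAID managed with mdadm. Hardware RAID controllers have their own CLIs, so the check would look different there, and /dev/md0 is just a placeholder name.
```python
# Minimal sketch of "check the array before you pull anything", assuming
# Linux software RAID managed with mdadm. Hardware RAID controllers have
# their own CLIs, so this would look different there. /dev/md0 is a
# placeholder device name.

import subprocess
import sys

ARRAY = "/dev/md0"

def array_is_clean(array=ARRAY):
    """Return True only if the array is not degraded and no member is faulty."""
    detail = subprocess.run(
        ["mdadm", "--detail", array],
        capture_output=True, text=True, check=True,
    ).stdout
    degraded = "degraded" in detail.lower()
    bad_member = any(
        "faulty" in line or "removed" in line for line in detail.splitlines()
    )
    return not degraded and not bad_member

if __name__ == "__main__":
    if array_is_clean():
        print(f"{ARRAY} is healthy: a planned swap would only degrade it.")
    else:
        print(f"{ARRAY} is already degraded or has a faulty member.")
        print("Pulling another drive now could turn a degraded array into a lost one.")
        sys.exit(1)
```
The same idea applies to any controller: confirm the array is fully healthy and any previous rebuild has finished before you deliberately degrade it.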
-
@Drew said:
I'm guessing this isn't exactly what you're referring to but I thought I'd add my experience anyway. I guess it depends on what you mean by "perfectly healthy". One manufacturer might consider a drive perfectly healthy while another might not.
Meaning, no use of failure indicators at all. Just replacing drives because you replace them, not because there is any indication of issues.
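And for contrast with "no failure indicators at all", here's a rough sketch of what indicator-driven replacement could look like. It assumes SATA drives and smartmontools 7.0 or later for JSON output; the watched attributes are standard SMART counters, but the "anything non-zero" threshold is illustrative rather than vendor guidance, and the device list is hypothetical.
```python
# Sketch of indicator-driven replacement, for contrast with replacing drives
# on a calendar. Assumes SATA drives and smartmontools 7.0+ (for --json
# output). The watched attributes are standard SMART counters, but the
# "anything non-zero" threshold is illustrative, not vendor guidance, and
# the device list is hypothetical.

import json
import subprocess

WATCHED = {
    5: "Reallocated_Sector_Ct",
    187: "Reported_Uncorrect",
    197: "Current_Pending_Sector",
    198: "Offline_Uncorrectable",
}

def smart_warnings(device):
    """Return (attribute name, raw value) pairs with non-zero raw values."""
    out = subprocess.run(
        ["smartctl", "--json", "-A", device],
        capture_output=True, text=True, check=True,
    ).stdout
    table = json.loads(out).get("ata_smart_attributes", {}).get("table", [])
    return [
        (WATCHED[attr["id"]], attr["raw"]["value"])
        for attr in table
        if attr["id"] in WATCHED and attr["raw"]["value"] > 0
    ]

if __name__ == "__main__":
    for dev in ["/dev/sda", "/dev/sdb"]:  # hypothetical device list
        hits = smart_warnings(dev)
        if hits:
            print(f"{dev}: replacement candidate -> {hits}")
        else:
            print(f"{dev}: no warning indicators, leave it alone")
```
That's the opposite of the policy being described in this thread: the drive earns its replacement by showing symptoms, not by having a birthday.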