Budget Backups: Which Is Better?
-
@Dashrender said:
Please save me the math and time: what size pipe are you talking about, and at what cost?
Those are the ~$80 pipes available in Mississippi, Tennessee, Texas, Kansas, etc. Gigabit to the home is rolling out now, and cheaply. It's only available in a few markets today, but the switch to gigabit fiber is already underway in the US, and we are one of the slowest Internet countries in the world. Iceland has had these speeds to every home, not just some, for nearly a decade (at least in metro areas). The adoption rate of high-speed fiber is going to change things a lot.
Larger businesses are already way past these speeds. I've seen larger SMBs today with 40 Gb/s pipes! So that 2.8-hour backup falls to 4.2 minutes.
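Rough back-of-the-envelope math, for the curious; the data size below is just whatever makes the 2.8-hour figure work out (about 1.26 TB), and real-world throughput never hits full line rate:

```python
# Back-of-the-envelope backup window vs. pipe size.
# Assumption: ~1.26 TB of backup data, which is roughly what makes a
# 1 Gb/s transfer take ~2.8 hours; actual throughput will be lower.

def backup_hours(data_gb: float, pipe_gbps: float) -> float:
    """Hours to push data_gb gigabytes over a pipe of pipe_gbps gigabits/sec."""
    seconds = (data_gb * 8) / pipe_gbps  # GB -> Gb, then divide by Gb/s
    return seconds / 3600

data_gb = 1260  # assumed dataset size consistent with the 2.8 h example

for pipe in (1, 10, 40):
    hours = backup_hours(data_gb, pipe)
    print(f"{pipe:>2} Gb/s pipe: {hours:.2f} h ({hours * 60:.1f} min)")

# 1 Gb/s  -> ~2.8 h
# 40 Gb/s -> ~4.2 min
```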
-
@Carnival-Boy said:
@scottalanmiller said:
Traditional hard drives are not supposed to spin down or take any temperature changes, let alone experience external movement.
Not sure what you mean here. What's the difference between an external hard drive and an internal hard drive in a laptop? Both spin up and down, move around, and experience temperature changes. I find it bizarre that there's a market for external hard drives if they're not supposed to be portable. I expect hard drives to wear out quicker under these conditions, but I'm not using them much.
External drives are just internal drives. A market is determined by what people will buy. No one has engineered an external drive; there is no such thing. Drives are tested in the factory, and those that pass the basic tests but are "just barely passing" get sold as externals, because with the abuse they take it doesn't really matter. The better drives become the internal drives. But this difference is not why externals fail. External drives simply take a level of abuse completely unlike what fixed drives face.
In laptops, drive failure rates are many times higher than in desktops. There is a reason good laptops moved to SSDs many years ago: the speed was a big advantage, but the reliability (and power savings) were the real driving factors.
Wearing out is not a factor, not from spinning. Hard drives primarily die from physical shock (caused by motion) and temperature changes. Five minutes of moving around and changing temperatures can easily inflict more wear and tear than a year of continuous spinning.
-
@ajstringham said:
While this is technically true, the chances of it happening to someone two or more times are very low. That's just probability.
Technically true is all that matters. We are talking about failure risks. The failure of one drive does not make another failure any less likely. That is fundamental math.
The conditions that cause a first drive to fail actually make a second failure much more likely for the same person. Unlike straight probability, in drive failures of this kind environmental factors are the leading cause, and chances are that both drives share the same environment. So the odds that one person experiences many failures while another has none are actually quite high.
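A toy simulation of that clustering effect; every number in it (the share of "rough" environments, the per-drive failure odds) is made up purely to illustrate the point:

```python
import random

# Toy model: each owner has an environment ("rough" or "gentle") and two
# external drives. The probabilities below are invented for illustration only.
random.seed(1)

OWNERS = 100_000
P_ROUGH_ENV = 0.2      # assumed fraction of owners with rough handling/temps
P_FAIL_ROUGH = 0.30    # assumed per-drive failure chance in a rough environment
P_FAIL_GENTLE = 0.03   # assumed per-drive failure chance in a gentle one

both_failed = no_failures = 0
for _ in range(OWNERS):
    p = P_FAIL_ROUGH if random.random() < P_ROUGH_ENV else P_FAIL_GENTLE
    failures = sum(random.random() < p for _ in range(2))
    both_failed += failures == 2
    no_failures += failures == 0

print(f"owners with both drives failed: {both_failed / OWNERS:.2%}")
print(f"owners with no failures at all: {no_failures / OWNERS:.2%}")
# Because the environment is shared, multiple failures pile up on the same
# owners, while most owners never see a single one.
```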
-
@Carnival-Boy said:
I re-use drives, so if one failed I would know during the next backup, and I've yet to have one fail. I've had a couple DOA, but never one fail after successful use.
Everyone gets lucky somewhere. I have never had a server die in fifteen years at NTG. Never. But we know that we are just getting lucky and expect them to fail; servers should not go fifteen years without any downtime, it's statistically improbable. But just as some people experience many failures, others experience far fewer. When failure rates are below ~50%, you get "hot spots" of failure simply because there is so much room for some people never to experience one at all. That makes individual experience irrelevant in determining risk.
In your case, you might be lucky or you might have a really good process for keeping the failure rates from skyrocketing.
But you would not know about all failures unless you are doing a full restore and testing the results. Mechanical failures, the main ones, you would definitely find. But UREs that leave unreadable gaps in files would go unnoticed.
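For what it's worth, a simple way to catch that kind of silent damage is to checksum the source and the restored copy and compare them. A minimal sketch, with placeholder paths:

```python
import hashlib
from pathlib import Path

# Minimal sketch of verifying restored data against the source by checksum.
# The directory paths used below are placeholders, not anything from this thread.

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(source_dir: str, restore_dir: str) -> list[str]:
    """Return the relative paths whose restored copy is missing or differs."""
    source, restore = Path(source_dir), Path(restore_dir)
    bad = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        restored = restore / rel
        if not restored.exists() or sha256_of(src_file) != sha256_of(restored):
            bad.append(str(rel))
    return bad

if __name__ == "__main__":
    problems = verify("/data", "/mnt/restore-test")  # placeholder paths
    print(f"{len(problems)} file(s) failed verification")
```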
-
@scottalanmiller said:
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock
Not a factor? Surely the more you use a drive, the more likely it is to fail? If I have a server that runs 24/7 and a PC that is turned on twice a year and used for a couple of hours, with the same hard drives, I would expect the server hard drive to be more likely to fail. Wouldn't you?
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
-
@scottalanmiller So what, 10%, probably not even, have this in the US now? This isn't typical yet; until it's well over 50% you definitely can't count it as a real solution in a general statement (not that you were).
Nebraska has two cities where only the inner city has the 1 Gb fiber. As for costs, the $80 level I think gets you around 200 Mb/s; the 1 Gb/s is somewhere north of $150 for homes.
I'm currently paying $850 for 10/10 delivered over fiber. Though some new players have come to town, so when my current contract is up, I'll be lowering that significantly.
-
@Carnival-Boy said:
@scottalanmiller said:
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock
Not a factor? Surely the more you use a drive, the more likely it is to fail? If I have a server that runs 24/7 and a PC that is turned on twice a year and used for a couple of hours, with the same hard drives, I would expect the server hard drive to be more likely to fail. Wouldn't you?
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
No, absolutely not. The starts and stops on the drive are much more damaging than constantly running at a consistent temp, etc.
-
@Carnival-Boy said:
@scottalanmiller said:
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock
Not a factor? Surely the more you use a drive, the more likely it is to fail? If I have a server that runs 24/7 and a PC that is turned on twice a year and used for a couple of hours, with the same hard drives, I would expect the server hard drive to be more likely to fail. Wouldn't you?
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
In all fairness, I've never heard of a spinning disk dying from "wearing out". Generally they die due to sudden power loss, power surges, or sudden and extreme environmental changes. Or water. But power loss and power surges are the most common reasons I see drives die.
-
@Carnival-Boy said:
Not a factor? Surely the more you use a drive, the more likely it is to fail?
Not significantly, no. Under good conditions, you would expect pretty easily 20+ years from a drive that never spins down. Wear and tear will get it eventually. But for a backup drive, where you are looking at a lifetime measured in weeks of spinning, not decades, the spinning is not a factor at all; it is immeasurable in the scope of the risk.
-
@ajstringham said:
In all fairness, I've never heard of a spinning disk dying from "wearing out". Generally they die due to sudden power loss, power surges, or sudden and extreme environmental changes. Or water. But power loss and power surges are the most common reasons I see drives die.
Most will start to wear out as they approach 30 years. I've seen ones well over 20 that had no issues, but everyone was terrified to move them even a few inches: the vendor (Oracle) said that in their experience the drives would last for years more, but any bump or movement and it was expected to be all over.
-
@Dashrender said:
@scottalanmiller So what, 10%, probably not even, have this in the US now? This isn't typical yet; until it's well over 50% you definitely can't count it as a real solution in a general statement (not that you were).
So let's say 10% have this. 0% have cheap SSDs that are big enough for normal backups. What I said was that SSDs will rapidly diminish in backup value: high-speed WANs that completely negate their value are already rolling out in those markets, and the WANs are rolling out faster than SSDs are growing. So by the time SSDs are big enough to be useful for this to many people, the number of people who have a use for them will be smaller and smaller.
-
@scottalanmiller said:
@ajstringham said:
In all fairness, I've never heard of a spinning disk dying from "wearing out". Generally they die due to sudden power loss, power surges, or sudden and extreme environmental changes. Or water. But power loss and power surges are the most common reasons I see drives die.
Most will start to wear out as they approach 30 years. I've seen ones well over 20 that had no issues, but everyone was terrified to move them even a few inches: the vendor (Oracle) said that in their experience the drives would last for years more, but any bump or movement and it was expected to be all over.
You would think that with the size increases and the power and cooling savings, moving to new disks would be worth the effort - wow, 20 years. If we go back to 1994, the average drive was what, 2 GB? For a consumer?
-
@scottalanmiller said:
Not significantly, no. Under good conditions, you would expect pretty easily 20+ years from a drive that never spins down.
Fair enough. I've had loads of disk failures in servers. Though I guess a server may mark a disk as failed without it noticeably failing, whereas a disk in a PC may be failing but, unless something happens, I may remain unaware of it - if you see what I mean?
Still, 30% failure rate seems high. I've seen very few failures in laptops and have looked after a lot over the years.
-
@scottalanmiller said:
The chances of a second failure remain the same regardless of a first failure.
Statistical chances remain the same, of course. But the odds that a single user will have two drives fail are much harder to meet than the odds of a single failure.
Statistics work both ways on this.
-
@Carnival-Boy said:
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
Like I said, it is literally impossible to determine the failure rate because it is driven primarily by motion and temperature. Unless we monitor both at every moment of your drives' lives, we have no way to know what your expected failure rate would be. Things as simple as how you carry the drive, how you unplug it, etc., are major factors.
And how is a failure rate determined? For a spinning drive, the rough rate is 3% per year. A drive that sits on a shelf for a decade might have a lower rate overall, but in the moment it is being transported it might experience a 1% chance of failure in just those few minutes.
There are so many factors that you cannot actually have a number for this.
Compare it to accident rates for cars. You can say that, on a global average, 5% of Ferraris have accidents per year, and those driven faster have higher accident rates. But no matter how fast it can go, if a particular car sits in a garage for years on end, its accident rate will be low.
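If you did try to put a number on it anyway, it would have to be built from exposures like these. The 3%-per-year and 1%-per-move rates are the rough figures above; the exposure counts are just assumptions to show how much they move the answer:

```python
# Rough sketch of why a single "failure rate" number is slippery: the risk is
# a product of exposures, not one constant. Rates are the rough figures from
# the post (3%/yr while spinning, ~1% per transport); exposure counts are assumed.

def failure_probability(years_spinning: float, transports: int,
                        annual_rate: float = 0.03,
                        per_transport_rate: float = 0.01) -> float:
    survive_spin = (1 - annual_rate) ** years_spinning
    survive_moves = (1 - per_transport_rate) ** transports
    return 1 - survive_spin * survive_moves

# A drive spinning 24/7 in a rack for 5 years, never moved:
print(f"rack drive, 5 yr, 0 moves:  {failure_probability(5, 0):.1%}")    # ~14%
# A backup drive that barely spins but gets carried around weekly for a year:
print(f"portable, 0.1 yr, 52 moves: {failure_probability(0.1, 52):.1%}")  # ~41%
```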
-
@Dashrender said:
@scottalanmiller said:
@ajstringham said:
In all fairness, I've never heard of a spinning disk dying from "wearing out". Generally they die due to sudden power loss, power surges, or sudden and extreme environmental changes. Or water. But power loss and power surges are the most common reasons I see drives die.
Most will start to wear out as they approach 30 years. I've seen ones well over 20 that had no issues, but everyone was terrified to move them even a few inches: the vendor (Oracle) said that in their experience the drives would last for years more, but any bump or movement and it was expected to be all over.
You would think that with the size increases and the power and cooling savings, moving to new disks would be worth the effort - wow, 20 years. If we go back to 1994, the average drive was what, 2 GB? For a consumer?
That would be pretty large for 20 years ago, even for a server.
-
@Dashrender said:
You would think with the size increases and power and cooling savings that moving to new disk would be worth the efforts - wow.. 20 years. If we go back to 1994, the average drive was what? 2 GB? consumer?
The thing is that lots of businesses look at one-time costs (migration, purchases) and ignore ongoing operational expenses. No matter how little business sense it makes, many times these things are just left to rot.
-
@JaredBusch said:
@scottalanmiller said:
The chances of a second failure remain the same regardless of a first failure.
Statistical chances remain the same, of course. But the odds that a single user will have two drives fail are much harder to meet than the odds of a single failure.
Statistics work both ways on this.
Sort of. The chances that you'll have two failures are low. But the chance that you'll have a second after you've had a first is not any lower than the chance that someone else will experience their first. So while it works both ways up front, once one failure has occurred, it only works one way.
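In numbers, with a purely illustrative 5% per-drive failure chance and independent failures:

```python
# Illustration of "it only works one way" with independent events.
# The 5% per-drive failure chance is an assumed number, used only for illustration.
p = 0.05

p_two_upfront = p * p          # judged before anything happens: both drives fail
p_second_given_first = p       # after the first failure, the second is still just p

print(f"P(two failures, judged up front):      {p_two_upfront:.2%}")          # 0.25%
print(f"P(second failure | first already hit): {p_second_given_first:.2%}")   # 5.00%
# The rare thing was ever getting the first failure; once it has happened,
# the next drive is no safer than anyone else's first drive.
```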
-
@scottalanmiller said:
So let's say 10% have this. 0% have cheap SSDs that are big enough for normal backups. What I said was that SSDs will rapidly diminish in backup value: high-speed WANs that completely negate their value are already rolling out in those markets, and the WANs are rolling out faster than SSDs are growing. So by the time SSDs are big enough to be useful for this to many people, the number of people who have a use for them will be smaller and smaller.
You think high speed WANs will hit the masses by next summer? I think 1 TB SSDs will be around $200 or less by next summer, considering you can get them for $400-$600 now. This probably means you'll be able to get 2 and 3 TB too.
I don't believe that high-speed access is rolling out faster than SSDs are growing in capacity and dropping in price, but if you have something to point me toward that shows this, I'd love to read it.
-
@ajstringham said:
That would be pretty large for 20 years ago, even for a server.
20 years ago I had 1 GB in my desktop. 2 GB in a server would have been very easy to find; the 2.1 GB SCSI drives were probably the norm by then.