Budget Backups: Which is better?
-
@scottalanmiller said:
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock
Not a factor? Surely the more you use a drive, the more likely it is to fail? If I have a server that runs 24/7 and a PC that is turned on twice a year and used for a couple of hours, with the same hard drives, I would expect the server hard drive is more likely to fail. Wouldn't you?
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
-
@scottalanmiller So what, 10%, probably not even that, have this in the US now? This isn't typical yet; until it's well over 50% you definitely can't count this as a real solution as a general statement (not that you were).
Nebraska has 2 cities where only the inner city has the 1 Gb fiber. As for costs, the $80 level I think gets you around 200 Mb; the 1 Gb is somewhere north of $150 for homes.
I'm currently paying $850 for 10/10 Mb delivered over fiber. Though some new players have come to town, so when my current contract is up, I'll be lowering that significantly.
-
@Carnival-Boy said:
@scottalanmiller said:
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock
Not a factor? Surely the more you use a drive, the more likely it is to fail? If I have a server that runs 24/7 and a PC that is turned on twice a year and used for a couple of hours, with the same hard drives, I would expect the server hard drive is more likely to fail. Wouldn't you?
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
No, absolutely not. The starts and stops on the drive are much more damaging than constantly running at a consistent temp, etc.
-
@Carnival-Boy said:
@scottalanmiller said:
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock
Not a factor? Surely the more you use a drive, the more likely it is to fail? If I have a server that runs 24/7 and a PC that is turned on twice a year and used for a couple of hours, with the same hard drives, I would expect the server hard drive is more likely to fail. Wouldn't you?
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
In all fairness, I've never heard of a spinning disk dying from "wearing out". Generally they die due to sudden power loss, power surges, or sudden and extreme environmental changes. Or water. But power loss and power surges are the most common reasons I see drives die.
-
@Carnival-Boy said:
Not a factor? Surely the more you use a drive, the more likely it is to fail?
Not significantly, no. Under good conditions, you would expect pretty easily 20+ years from a drive that never spins down. Wear and tear will take it out eventually. But for a backup drive where you are looking at lifetimes in weeks of spinning, not decades, the spinning of the drive is not at all a factor. Completely immeasurable in the scope of risk.
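To put a number on "immeasurable," here is a quick back-of-the-envelope sketch using the rough 3%/year spinning failure rate that comes up later in this thread (the rate itself is an assumption, and the 4 weeks of total spin time is illustrative):

```python
# Rough spinning-related risk accumulated by a backup drive that only
# spins for a few weeks in total, assuming a ~3%/year annualized
# failure rate applies uniformly while the drive is spinning.
annual_rate = 0.03      # assumed annualized failure rate while spinning
weeks_spinning = 4      # illustrative total spin time for a backup drive

risk = annual_rate * weeks_spinning / 52
print(f"~{risk * 100:.2f}% accumulated spinning risk")  # ~0.23%
```

At that scale, handling and transport dwarf anything the spinning itself contributes.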
-
@ajstringham said:
In all fairness, I've never heard of a spinning disk dying from "wearing out". Generally they die due to sudden power loss, power surges, or sudden and extreme environmental changes. Or water. But power loss and power surges are the most common reasons I see drives die.
Most will start to completely wear out as they approach 30 years. I've seen ones well over 20 that had no issues, but everyone was terrified to move them, even a few inches, as the vendor (Oracle) said that their experience was that they would last for years more, but any bump or movement and it was expected to be all over.
-
@Dashrender said:
@scottalanmiller So what, 10%, probably not even that, have this in the US now? This isn't typical yet; until it's well over 50% you definitely can't count this as a real solution as a general statement (not that you were).
So let's say 10% have this. 0% have cheap SSDs that are big enough for normal backups. What I said was that SSDs would rapidly diminish in backup value as high speed WANs are already rolling out that completely negate their value in those markets, and the WANs are rolling out faster than SSDs are growing. So by the time SSDs are big enough to be useful for this for many people, the number of people who have a use for them will be smaller and smaller.
-
@scottalanmiller said:
@ajstringham said:
In all fairness, I've never heard of a spinning disk dying from "wearing out". Generally they die due to sudden power loss, power surges, or sudden and extreme environmental changes. Or water. But power loss and power surges are the most common reasons I see drives die.
Most will start to completely wear out as they approach 30 years. I've seen ones well over 20 that had no issues, but everyone was terrified to move them, even a few inches, as the vendor (Oracle) said that their experience was that they would last for years more, but any bump or movement and it was expected to be all over.
You would think that with the size increases and the power and cooling savings, moving to new disks would be worth the effort. Wow, 20 years. If we go back to 1994, the average drive was what? 2 GB? Consumer?
-
@scottalanmiller said:
Not significantly, no. Under good conditions, you would expect pretty easily 20+ years from a drive that never spins down.
Fair enough. I've had loads of disk failures in servers. Though I guess a server may mark a disk as failed without it noticeably failing, whereas a disk in a PC may fail but, unless something happens, I may remain unaware of it, if you see what I mean?
Still, a 30% failure rate seems high. I've seen very few failures in laptops and have looked after a lot over the years.
-
@scottalanmiller said:
The chances of a second failure remain the same regardless of a first failure.
Statistical chances remain the same, of course. But the odds that a user will have two drives fail are much harder to meet than the odds of having one fail.
Statistics work both ways on this.
-
@Carnival-Boy said:
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
Like I said, it is literally impossible to determine the failure rate because it is based primarily around motion and temperature. Unless we are monitoring both at every moment of your drives' lives, we have no way to know what your failure rate would be expected to be. Things as simple as how you carry the drive, how you unplug it, etc. are major factors.
And how is a failure rate determined? For a spinning drive, the rough rate is 3% per year. But for a drive that sits on a shelf for a decade, the rate might be lower overall, yet at the moment of being transported it might experience a 1% rate for those few minutes.
There are so many factors that you cannot actually have a number for this.
Compare it to the failure rates of cars. You can say that Ferraris have a global average accident rate of 5% per year. But those that are driven faster have higher accident rates. And no matter how fast you drive it, if it sits in a garage for years on end, the accident rate for that particular car will be low.
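To make the 3% per year figure concrete, here is a quick sketch of what a constant annualized rate implies over time (assuming, purely for illustration, that the rate stays flat and failures are independent from year to year):

```python
# Cumulative failure probability for a drive with a constant annualized
# failure rate, assuming independence from year to year.
annual_rate = 0.03  # the rough 3%/year figure for a spinning drive

for years in (1, 3, 5, 10):
    survival = (1 - annual_rate) ** years
    print(f"{years:2d} yr: ~{(1 - survival) * 100:.1f}% chance of having failed")

# Output:
#  1 yr: ~3.0% chance of having failed
#  3 yr: ~8.7% chance of having failed
#  5 yr: ~14.1% chance of having failed
# 10 yr: ~26.3% chance of having failed
```

The point above still stands, though: handling and environment move that number far more than the spinning itself does.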
-
@Dashrender said:
@scottalanmiller said:
@ajstringham said:
In all fairness, I've never heard of a spinning disk dying from "wearing out". Generally they die due to sudden power loss, power surges, or sudden and extreme environmental changes. Or water. But power loss and power surges are the most common reasons I see drives die.
Most will start to completely wear out as they approach 30 years. I've seen ones well over 20 that had no issues, but everyone was terrified to move them, even a few inches, as the vendor (Oracle) said that their experience was that they would last for years more, but any bump or movement and it was expected to be all over.
You would think that with the size increases and the power and cooling savings, moving to new disks would be worth the effort. Wow, 20 years. If we go back to 1994, the average drive was what? 2 GB? Consumer?
That would be pretty large for 20 years ago, even for a server.
-
@Dashrender said:
You would think that with the size increases and the power and cooling savings, moving to new disks would be worth the effort. Wow, 20 years. If we go back to 1994, the average drive was what? 2 GB? Consumer?
The thing is that lots of businesses look at momentary costs (migration, purchases) and ignore ongoing operational expenses. No matter how little business sense it makes, many times these things are just left to rot.
-
@JaredBusch said:
@scottalanmiller said:
The chances of a second failure remain the same regardless of a first failure.
Statistical chances remain the same, of course. But the odds that a user will have two drives fail are much harder to meet than the odds of having one fail.
Statistics work both ways on this.
Sort of. The chances that you'll have two failures are low. But the chance that you'll have a second failure after you've had a first is not any lower than the chance that someone else will experience their first one. So while it works both ways, once one failure has occurred, it only works one way.
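A tiny simulation makes the distinction concrete (the 3% per-drive failure probability is a hypothetical, and failures are assumed independent):

```python
import random

random.seed(1)
p = 0.03                # hypothetical per-drive failure probability
trials = 1_000_000

both = 0                # trials where both drives fail
first_failed = 0        # trials where the first drive fails
second_after_first = 0  # trials where the second fails given the first did

for _ in range(trials):
    first = random.random() < p
    second = random.random() < p
    if first:
        first_failed += 1
        if second:
            second_after_first += 1
            both += 1

print(f"P(both fail)             ~ {both / trials:.4f}  (p^2 = {p * p:.4f})")
print(f"P(second | first failed) ~ {second_after_first / first_failed:.4f}  (p = {p})")
```

Two failures together are rare (about 0.09% here), but once the first has happened, the second is back to the plain 3%.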
-
@scottalanmiller said:
So let's say 10% have this. 0% have cheap SSDs that are big enough for normal backups. What I said was that SSDs would rapidly diminish in backup value as high speed WANs are already rolling out that completely negate their value in those markets, and the WANs are rolling out faster than SSDs are growing. So by the time SSDs are big enough to be useful for this for many people, the number of people who have a use for them will be smaller and smaller.
You think high speed WANs will hit the masses by next summer? I think 1 TB SSDs will be around $200 or less by next summer, considering you can get them for $400-$600 now. This probably means you'll be able to get 2 and 3 TB too.
I don't believe that high speed access is rolling out faster than SSDs are growing in capacity and dropping in price, but if you have something to point me toward that shows this, I'd love to read it.
-
@ajstringham said:
That would be pretty large for 20 years ago, even for a server.
20 years ago I had 1GB in my desktop. 2GB in a server would have been very easy to find. The 2.1GB SCSI drives were probably the norm by then.
-
@scottalanmiller said:
Like I said, it is literally impossible to determine the failure rate because it is based primarily around motion and temperature.
You came up with the figure of 30%. Did you just make it up, or what? I live in a mild climate, don't drop the drives, and transport them in a padded container.
-
@scottalanmiller said:
@ajstringham said:
That would be pretty large for 20 years ago, even for a server.
20 years ago I had 1GB in my desktop. 2GB in a server would have been very easy to find. The 2.1GB SCSI drives were probably the norm by then.
Funny how a drive size that was standard for servers a mere 20 years ago is now a flash drive size you can't even find anymore because it's too small. I think the smallest you can really get anywhere feasibly is 4GB, with most having gone to 8GB.
-
@Dashrender said:
You think high speed WANs will hit the masses by next summer? I think 1 TB SSDs will be around $200 or less by next summer, considering you can get them for $400-$600 now. This probably means you'll be able to get 2 and 3 TB too.
Did I say that? I just said that SSDs will be getting big enough to be backup drives at a diminishingly useful rate, as high speed network connections are becoming widely available at the same time. Backup-scale SSDs don't exist yet; high speed WANs are widely available. So far, the WAN is massively outpacing the SSD in this use case. SSDs will find a place for this, but not like they would have, as many of their markets are already gone before they even arrive, and every day high speed WANs have more and more saturation.
And a 1TB SSD isn't enough for most, and $200/TB is still too expensive for most SMBs. That's not a good price for backups. You'll find that it remains a very niche price/capacity ratio.
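For a sense of why WAN speed undercuts the removable-SSD case, a rough transfer-time sketch (the 500 GB backup size and the assumption that the link runs at full line rate are both illustrative; real throughput will be lower):

```python
# Rough time to move a backup over various WAN links, assuming the
# link runs at full line rate (real-world throughput will be lower).
backup_gb = 500  # illustrative backup size

for label, mbps in (("10 Mb/s", 10), ("200 Mb/s", 200), ("1 Gb/s", 1000)):
    seconds = backup_gb * 8 * 1000 / mbps  # GB -> megabits, divided by rate
    print(f"{label}: ~{seconds / 3600:.1f} hours")

# Output:
# 10 Mb/s: ~111.1 hours
# 200 Mb/s: ~5.6 hours
# 1 Gb/s: ~1.1 hours
```

On a slow link, carrying an SSD wins easily; at the speeds discussed above, the WAN fits comfortably in an overnight window.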
-
@Carnival-Boy said:
You came up with the figure of 30%. Did you just make it up, or what? I live in a mild climate, don't drop the drives, and transport them in a padded container.
Yes, it has to be made up. There is no way to predict failure rates for a non-fixed drive other than that they must be much higher than the 3% number for fixed drives. Much higher.