Budget Backups: Which Is Better?
-
@Carnival-Boy said:
For me they are cheap, portable and reliable. I'd never want to go back to tape.
Portable and cheap but not reliable. They are not built to be moved around. The failure rate is very high. 2.5" drives are more reliable than 3.5" in this usage but are still fragile and being smaller means lower capacity and higher cost.
-
@scottalanmiller said:
@Carnival-Boy said:
For me they are cheap, portable and reliable. I'd never want to go back to tape.
Portable and cheap but not reliable. They are not built to be moved around. The failure rate is very high. 2.5" drives are more reliable than 3.5" in this usage but are still fragile and being smaller means lower capacity and higher cost.
Oh, how nice things will be once 1+ TB SSDs are considered cheap.
-
Define "very high" failure rate. I've been happy enough with reliability, but I haven't seen any industry figures.
-
@scottalanmiller said:
Portable and cheap but not reliable. They are not built to be moved around. The failure rate is very high. 2.5" drives are more reliable than 3.5" in this usage but are still fragile and being smaller means lower capacity and higher cost.
Your opinion of failure rates in aggregate across many drives may very well be true, but that does not invalidate the use by individual people.
If you are checking that backups complete successfully, then this is going to be a failure caught as soon as it happens. You go drop $100 on a new one and you are done. Even for the person that has one fail, basic probability means they will not likely have it happen again any time soon.
Yes, someone will always be the unlucky one who rolls snake eyes five times in a row, but that is the extremely rare case.
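For readers wondering what "checking that backups complete successfully" might look like in practice, here is a minimal sketch. The backup-job command, its arguments, and the drive path are hypothetical placeholders, not any particular product; swap in whatever tool and notification you actually use.

```python
#!/usr/bin/env python3
"""Run the nightly backup and flag failures immediately.

Minimal sketch: "backup-job" and its arguments are hypothetical
placeholders for whatever backup tool is actually in use.
"""
import subprocess
import sys
from datetime import datetime

# Hypothetical backup command; substitute the real tool and arguments.
BACKUP_CMD = ["backup-job", "--target", "/mnt/portable-drive"]

result = subprocess.run(BACKUP_CMD, capture_output=True, text=True)
stamp = datetime.now().isoformat(timespec="seconds")

if result.returncode == 0:
    print(f"{stamp} backup OK")
else:
    # Any non-zero exit is treated as a failed backup and surfaced
    # immediately, so a dying drive is noticed the night it happens.
    print(f"{stamp} BACKUP FAILED (exit {result.returncode})", file=sys.stderr)
    print(result.stderr, file=sys.stderr)
    sys.exit(1)
```

Wire something like this into cron or Task Scheduler and a dead portable drive shows up as a failed job rather than a silent gap.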
-
@Dashrender 1TB SSDs aren't that expensive if you are considering them for backup. Even the mid-range ones run between $400 and $500; then you just need a 2.5" drive case.
-
@Dashrender yes, SSDs are far better for portable backup storage. Although 1TB is generally too small even for most SMBs today as mainline storage is so cheap that capacity tends to sprawl. But SSDs are super resilient, can take a shock and have a great shelf life while being able to go through environmental changes. Traditional hard drives are not supposed to spin down or take any temperature changes let alone experience external movement.
That said, even as SSDs get cheaper, the cost of WAN bandwidth continues to drop. It will increasingly be impractical to do backups to physical, transportable media at the scales that SSDs can handle. Already today on the good networks you can backup 1TB in 2.85 hours. Hard to justify the cost of buying many 1TB SSDs and taking the time to manually hook them up and unhook them, risking losing them or having them stolen, or just risking people forgetting to do it (the fate of 90% of manual backup media systems), when less than three hours of overnight bandwidth will support a fully automated, offsite, unattended backup system.
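For anyone who wants to check the arithmetic behind that 2.85-hour figure, here is a rough calculation. The ~78% effective-throughput assumption is mine, chosen simply to reproduce the number above; real overhead varies by protocol and link.

```python
# Rough transfer-time arithmetic for backing up 1 TB over a fast pipe.
# The 78% effective-throughput figure is an assumption chosen to
# reproduce the ~2.85 h quoted above; real-world overhead varies.

def transfer_hours(size_tb: float, link_gbps: float, efficiency: float = 0.78) -> float:
    """Hours to move size_tb terabytes over a link_gbps link."""
    bits = size_tb * 1e12 * 8                     # terabytes -> bits
    effective_bps = link_gbps * 1e9 * efficiency  # usable bits per second
    return bits / effective_bps / 3600

print(f"1 TB over 1 Gb/s: {transfer_hours(1, 1):.2f} hours")  # ~2.85 hours
```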
So while SSDs will solve a lot of problems, they do so at a time when that usefulness will rapidly evaporate.
-
@scottalanmiller said:
Already today on the good networks you can backup 1TB in 2.85 hours.
Please save me the math and time: what size pipe are you talking about, and at what cost?
-
@JaredBusch said:
Your opinion of failure rates in aggregate across many drives may very well be true, but that does not invalidate the use by individual people.
If you are checking that backups complete successfully, then this is going to be a failure caught as soon as it happens.
Not in this case. Failure rates on portable drives can be vastly higher. It depends on the case, but they will always be much higher than for fixed drives, and can approach 100%. There is no way to predict the failure rate for a specific situation other than that it will not be anywhere close to the 3% per year failure rate of fixed, enterprise datacenter drives. The average failure rate on portable drives is probably closer to 30% or higher, but it all depends.
The problem with this system, though, is that failures cannot be detected until it is too late, because the drives are almost guaranteed to fail after the backup is complete, not during the backup process. It is when they are being transported and stored that they are likely to fail from motion and temperature changes, or when they are fired back up and heat up again.
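To put those rates side by side, here is a quick cumulative-probability comparison. The three-year window and the exact rates are illustrative assumptions taken from the numbers above.

```python
# Chance of at least one failure over three years at the per-year rates
# discussed above (3% for fixed enterprise drives, ~30% for portables).
# The three-year window is only an illustration.

def at_least_one_failure(annual_rate: float, years: int) -> float:
    return 1 - (1 - annual_rate) ** years

for label, rate in [("fixed enterprise drive", 0.03), ("portable drive", 0.30)]:
    p = at_least_one_failure(rate, years=3)
    print(f"{label}: {p:.0%} chance of at least one failure in 3 years")
# fixed enterprise drive: 9% ...  portable drive: 66% ...
```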
-
@JaredBusch said:
Even for the person that has one fail, basic probability means they will not likely have it happen again any time soon.
The chances of a second failure remain the same regardless of a first failure.
-
@scottalanmiller said:
Traditional hard drives are not supposed to spin down or take any temperature changes let alone experience external movement.
Not sure what you mean here. What's the difference between an external hard drive and an internal hard drive in a laptop? Both spin up and down, move around, and experience temperature changes. I find it bizarre to have a market for external hard drives if they're not supposed to be portable. I expect hard drives to wear out quicker under these conditions, but I'm not using them much.
-
@scottalanmiller said:
@JaredBusch said:
Even for the person that has one fail, basic probability means they will not likely have it happen again any time soon.
The chances of a second failure remain the same regardless of a first failure.
While this is technically true, the chances of it happening to someone two or more times are very low. That's just probability.
-
@scottalanmiller said:
The problem with this system, though, is that failures cannot be detected until it is too late, because the drives are almost guaranteed to fail after the backup is complete, not during the backup process. It is when they are being transported and stored that they are likely to fail from motion and temperature changes, or when they are fired back up and heat up again.
I re-use drives so if they failed I would know during the next backup and I've yet to have one fail. I've had a couple DOA, but never one fail after successful use.
-
@Dashrender said:
Please save me the math and time: what size pipe are you talking about, and at what cost?
Those are the ~$80 pipes available in Mississippi, Tennessee, Texas, Kansas, etc. Gigabit to the home is rolling out now, and cheap. It's only available in a few markets today, but the switch to gig fiber is already underway in the US, and we are one of the slowest Internet countries in the world. Iceland has had these speeds to every home (at least in metro areas) for nearly a decade, not just to some. The adoption rate of high-speed fiber is going to change things a lot.
Larger businesses are already way past these speeds. I've seen larger SMBs today with 40 Gb/s pipes! So that 2.8-hour backup falls to 4.2 minutes.
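The scaling behind that figure is just the earlier transfer-time arithmetic divided across the bigger pipe:

```python
# Scaling the ~2.8-hour, 1 Gb/s figure to a 40 Gb/s pipe:
# same data, forty times the bandwidth.
hours_at_1gbps = 2.8
minutes_at_40gbps = hours_at_1gbps * 60 / 40
print(f"{minutes_at_40gbps:.1f} minutes")  # ~4.2 minutes
```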
-
@Carnival-Boy said:
@scottalanmiller said:
Traditional hard drives are not supposed to spin down or take any temperature changes let alone experience external movement.
Not sure what you mean here. What's the difference between an external hard drive and an internal hard drive in a laptop? Both spin up and down, move around, and experience temperature changes. I find it bizarre to have a market for external hard drives if they're not supposed to be portable. I expect hard drives to wear out quicker under these conditions, but I'm not using them much.
External drives are just internal drives. A market is determined by what people will buy. No one has engineered an external drive; there is no such thing. All they do is test drives in the factory, and those that pass basic tests but are "just barely passing" are used as external drives because, with the abuse they take, it doesn't really matter. The better drives become the internal drives. But this difference is not why externals fail. External drives simply take a level of abuse completely unlike what fixed drives face.
In laptops, drive failure rates are many times higher than even in desktops. There is a reason why good laptops moved to SSDs many years ago. The speed was a big advantage, but the reliability (and power savings) were the real driving factors.
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock (caused by motion) and temperature changes. Five minutes of moving around and changing temps can easily inflict more wear and tear than a year of continuous spinning.
-
@ajstringham said:
While this is technically true, the chances of it happening to someone two or more times are very low. That's just probability.
Technically true is all that matters. We are talking about failure risks. The failure of one drive does not make another failure any less likely. This is fundamental math.
The conditions that cause a first drive to fail actually make a second failure for the same person much more likely, because unlike straight probability, with drive failures of this nature it is environmental factors that are the leading cause, and chances are the drives share the same environment. So the chances of one person experiencing many failures while another has none are actually extremely high.
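A tiny simulation makes the clustering point concrete. The rates below (a minority of people in a "rough" environment where per-drive failure odds are much higher) are made-up illustrative assumptions, not measurements; the point is only that shared conditions concentrate failures on the same people.

```python
import random

# Illustrative simulation: failures cluster when drives share an environment.
# All rates here are made-up assumptions, not measurements.
random.seed(1)

PEOPLE = 10_000
DRIVES_PER_PERSON = 3
P_GOOD_ENV, P_ROUGH_ENV = 0.10, 0.50   # per-drive failure probability
SHARE_ROUGH = 0.30                     # fraction of people with a rough environment

multi, none = 0, 0
for _ in range(PEOPLE):
    p_fail = P_ROUGH_ENV if random.random() < SHARE_ROUGH else P_GOOD_ENV
    failures = sum(random.random() < p_fail for _ in range(DRIVES_PER_PERSON))
    multi += failures >= 2
    none += failures == 0

print(f"{multi / PEOPLE:.0%} of people saw 2+ failures, "
      f"{none / PEOPLE:.0%} saw none")
```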
-
@Carnival-Boy said:
I re-use drives so if they failed I would know during the next backup and I've yet to have one fail. I've had a couple DOA, but never one fail after successful use.
Everyone gets lucky somewhere. I have never had a server die in fifteen years at NTG. Never. But we know that we are just getting lucky and expect them to fail. Servers should not go fifteen years without any downtime; it's just statistically improbable. But just as some people experience many failures, others experience far fewer. When failure rates are below ~50%, you get "hot spots" of failure simply because there is so much opportunity for some people not to be the ones who experience it. This makes individual experience irrelevant in determining risk.
In your case, you might be lucky or you might have a really good process for keeping the failure rates from skyrocketing.
But you would not know about all failures unless you are doing a full restore and testing the results. Mechanical failures, the main ones, yes, you would definitely find. But UREs that leave stretches of files unreadable would not be caught.
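For what it's worth, catching those silent read errors does not strictly require a full restore: hashing the source files at backup time and re-reading the copies on the portable drive will at least prove the copies are readable and intact. A minimal sketch, assuming the backup is a plain file-tree copy to a mounted drive; the paths are hypothetical placeholders.

```python
#!/usr/bin/env python3
"""Verify a file-copy backup by re-hashing the copies on the backup drive.

Minimal sketch; assumes the backup is a plain file tree copied to a
mounted portable drive. SOURCE and BACKUP are hypothetical paths.
"""
import hashlib
from pathlib import Path

SOURCE = Path("/data")              # hypothetical source tree
BACKUP = Path("/mnt/backup/data")   # hypothetical mounted backup copy

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

bad = 0
for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    copy = BACKUP / src.relative_to(SOURCE)
    try:
        if sha256(copy) != sha256(src):
            print(f"MISMATCH: {copy}")
            bad += 1
    except OSError as err:            # missing files or unreadable sectors surface here
        print(f"READ ERROR: {copy}: {err}")
        bad += 1

print("verification passed" if bad == 0 else f"{bad} problem file(s)")
```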
-
@scottalanmiller said:
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock
Not a factor? Surely the more you use a drive, the more likely it is to fail? If I have a server that runs 24/7 and a PC that is turned on twice a year and used for a couple of hours, with the same hard drives, I would expect the server hard drive to be more likely to fail. Wouldn't you?
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
-
@scottalanmiller So what, 10%, probably not even, have this in the US now? This isn't typical yet; until it's well over 50%, you definitely can't count it as a real solution as a general statement (not that you were).
Nebraska has two cities where only the inner city has 1 Gb fiber. As for costs, the $80 level I think gets you around 200 Mb; the 1 Gb is somewhere north of $150 for homes.
I'm currently paying $850 for 10/10 delivered over fiber, though some new players have come to town, so when my current contract is up I'll be lowering that significantly.
-
@Carnival-Boy said:
@scottalanmiller said:
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock
Not a factor? Surely the more you use a drive, the more likely it is to fail? If I have a server that runs 24/7 and a PC that is turned on twice a year and used for a couple of hours, with the same hard drives, I would expect the server hard drive to be more likely to fail. Wouldn't you?
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
No, absolutely not. The starts and stops on the drive are much more damaging than constantly running at a consistent temp, etc.
-
@Carnival-Boy said:
@scottalanmiller said:
Wearing out is not a factor. Not from spinning. Hard drives primarily die from physical shock
Not a factor? Surely the more you use a drive, the more likely it is to fail? If I have a server that runs 24/7 and a PC that is turned on twice a year and used for a couple of hours, with the same hard drives, I would expect the server hard drive to be more likely to fail. Wouldn't you?
Where are you getting an average failure rate of 30% from? That seems incredibly high and makes me wonder why I haven't experienced anything remotely close to it.
In all fairness, I've never heard of a spinning disk dying from "wearing out". Generally they die due to sudden power loss, power surges, or sudden and extreme environmental changes. Or water. But power loss and power surges are the most common reasons I see drives die.