Budget Backups: Which Is Better?
-
@ajstringham said:
Yeah, but most businesses don't even have 100 Mbps WAN connections. Also, if they have satellite offices in an area that isn't a major metroplex, they might have a 3-10 Mbps upload limit. I don't see WAN as outpacing SSDs for backup for anyone except the enterprise.
Sure. But that's not the point. Zero businesses have affordable SSDs for backup today. Lots of businesses have affordable WAN for backup today. One is already way ahead of the other. To make SSDs appear really useful, you have to assume the future state of SSDs with the current state of WAN, which makes no sense.
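To put rough numbers on those upload limits, here's a quick back-of-the-envelope sketch (the 50 GB nightly delta is purely an assumed example figure, not anything from this thread):
```python
# Back-of-the-envelope: hours to push a nightly backup over a WAN uplink.
# The 50 GB nightly delta is an assumed, illustrative figure.

def upload_hours(gigabytes: float, mbps: float) -> float:
    """Hours needed to move `gigabytes` of data over an `mbps` uplink."""
    bits = gigabytes * 8 * 1000**3        # decimal GB -> bits
    seconds = bits / (mbps * 1000**2)     # Mbps -> bits per second
    return seconds / 3600

nightly_delta_gb = 50                     # hypothetical nightly changed data
for link_mbps in (3, 10, 100):            # the uplink speeds mentioned above
    hours = upload_hours(nightly_delta_gb, link_mbps)
    print(f"{link_mbps:>3} Mbps: {hours:5.1f} hours")

# Output:
#   3 Mbps:  37.0 hours  (will not fit in a nightly window)
#  10 Mbps:  11.1 hours  (barely fits overnight)
# 100 Mbps:   1.1 hours  (comfortable)
```
So whether WAN backup is viable really does hinge on the uplink and on how much data changes per night.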
-
@Carnival-Boy said:
I love your analogies. I know about as much about cars as I do about hard drives, but I'll have a go. Generally, most of the damage to a car is done starting it up, when the engine is cold and the car experiences a dramatic change in temperature.
My analogy was about car accidents, which only happen when driving or, I suppose, if the garage collapses on them.
-
@scottalanmiller said:
@ajstringham said:
Yeah, but most businesses don't even have 100 Mbps WAN connections. Also, if they have satellite offices in an area that isn't a major metroplex, they might have a 3-10 Mbps upload limit. I don't see WAN as outpacing SSDs for backup for anyone except the enterprise.
Sure. But that's not the point. Zero businesses have affordable SSDs for backup today. Lots of businesses have affordable WAN for backup today. One is already way ahead of the other. To make SSDs appear really useful, you have to assume the future state of SSDs with the current state of WAN, which makes no sense.
I agree that WAN is a much more viable option at this point. SMBs will go disk over SSD every time at the current cost of drives. Still, WAN isn't the preferred option of the three. Disk is.
-
@Carnival-Boy said:
But if you don't know what the failure rate is, you can't just make a figure up!
It's known to be significantly higher than 3%. Like I've been saying. 30% is one order of magnitude higher. It is a very useful reference point for expected failure rates on average. The real failure rates will vary over a massive range. If you want a reference number, 30% is probably the best that you can get.
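To see what an order of magnitude means in practice, here's a small illustration (the three-year horizon and the year-to-year independence assumption are mine, just to make the comparison concrete):
```python
# What a 3% vs. 30% annual failure rate (AFR) means over a few years.
# Assumes failures are independent from year to year (a simplification).

def p_fail_within(years: int, afr: float) -> float:
    """Probability that a drive fails at least once within `years` years."""
    return 1 - (1 - afr) ** years

for afr in (0.03, 0.30):
    print(f"AFR {afr:.0%}: {p_fail_within(3, afr):.1%} chance of failure in 3 years")

# AFR 3%:  8.7% chance of failure in 3 years
# AFR 30%: 65.7% chance of failure in 3 years
```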
-
@ajstringham said:
I agree that WAN is a much more viable option at this point. SMBs will go disk over SSD every time at the current cost of drives. Still, WAN isn't the preferred option of the three. Disk is.
Are you sure? That doesn't match anything that I have seen in recent years.
-
@scottalanmiller said:
@ajstringham said:
I agree that WAN is a much more viable option at this point. SMBs will go disk over SSD every time at the current cost of drives. Still, WAN isn't the preferred option of the three. Disk is.
Are you sure? That doesn't match anything that I have seen in recent years.
When I was a Unitrends installer, that's what I saw. I did several dozen installs, and for both backup and archiving, it was almost always disk. The businesses varied, both in terms of size and budget. Disk was still preferred.
-
In the last few years, I've seen most businesses (that I interact with in person and via forums) going mostly to WAN backups when possible. Disks are still common, but nothing like they were five years ago. For larger backups that need to be transported, it's still tape.
-
@ajstringham said:
When I was a Unitrends installer, that's what I saw. I did several dozen installs, and for both backup and archiving, it was almost always disk. The businesses varied, both in terms of size and budget. Disk was still preferred.
We are talking about portable disk here, not a disk array. Are you sure that you are not referring to arrays (fixed disk)?
-
@scottalanmiller said:
@ajstringham said:
When I was a Unitrends installer, that's what I saw. I did several dozen installs, and for both backup and archiving, it was almost always disk. The businesses varied, both in terms of size and budget. Disk was still preferred.
We are talking about portable disk here, not a disk array. Are you sure that you are not referring to arrays (fixed disk)?
Obviously the appliance is disk-based, but for archiving and even rotating sets of archives, disk is still what I saw almost exclusively.
-
@scottalanmiller said:
It's known to be significantly higher than 3%. Like I've been saying. 30% is one order of magnitude higher. It is a very useful reference point for expected failure rates on average. The real failure rates will vary over a massive range. If you want a reference number, 30% is probably the best that you can get.
I don't believe that a completely made-up @scottalanmiller figure is probably the best that I can get. 4% is also significantly higher than 3%, so maybe I'll go with that. I also don't accept that there is no such thing as wear and tear when hard drives are used - any physical device will suffer from this. I'm not a hard drive expert, but that's a basic law of physics.
-
@ajstringham said:
Obviously the appliance is disk-based, but for archiving and even rotating sets of archives, disk is still what I saw almost exclusively.
That didn't answer anything. Do you see fixed disk (arrays) or mobile disk (USB / IEEE 1394 / eSATA) for archiving?
-
@Carnival-Boy said:
I don't believe that a completely made-up @scottalanmiller figure is probably the best that I can get. 4% is also significantly higher than 3%, so maybe I'll go with that. I also don't accept that there is no such thing as wear and tear when hard drives are used - any physical device will suffer from this. I'm not a hard drive expert, but that's a basic law of physics.
Never said or suggested that there was no wear and tear. I said it was completely insignificant - which is a statistical fact and incredibly well established by every drive study. How can drives be expected to run 20 years or more but experience noticeable wear and tear in just hours or days of usage? Those two things cannot go together.
-
4% is not a reasonable failure number. 3% is a best case for the best drives. External USB arrays don't get those drives. 3% is not achievable by those drives even under ideal (fixed, datacenter) conditions.
-
Failure rates vary a lot, too. Google found 3%. Backblaze found that even datacenters see 4.2%, and that's with consumer drives.
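As a rough illustration of what those AFRs mean for a small fleet (the 24-drive count is an arbitrary example, not from either study):
```python
# Expected annual drive failures at the AFRs cited above.
# The 24-drive fleet size is an arbitrary example for illustration.

fleet_size = 24
for source, afr in (("Google, ~3% AFR", 0.03), ("Backblaze, ~4.2% AFR", 0.042)):
    expected_failures = fleet_size * afr    # linearity of expectation
    print(f"{source}: ~{expected_failures:.1f} failures/year across {fleet_size} drives")

# Google, ~3% AFR:    ~0.7 failures/year across 24 drives
# Backblaze, ~4.2% AFR: ~1.0 failures/year across 24 drives
```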
-
@scottalanmiller said:
Never said or suggested that there was no wear and tear. I said it was completely insignificant - which is a statistical fact and incredibly well established by every drive study. How can drives be expected to run 20 years or more but experience noticeable wear and tear in just hours or days of usage? Those two things cannot go together.
@ajstringham suggested there was no wear and tear. I don't understand your question. What do you mean they'll experience noticeable wear and tear in just hours?
Can you give me a link to a hard drive study saying wear and tear is a completely insignificant cause of failure? I'm only going on Wikipedia, which talks about wear and tear and may be wrong, but it makes sense to me.
-
@Carnival-Boy said:
@ajstringham suggested there was no wear and tear. I don't understand your question. What do you mean they'll experience noticeable wear and tear in just hours?
Ah, AJ might have overstated it. There is effectively no wear and tear, not that there is none. It's so trivial that any consideration of it is a complete waste. An expectation of 20+ years before wearing out means at least 7,300 days of runtime. Running as a backup system, how many days does a drive actually run? Maybe 30? Thirty days out of 7,300 is completely unnoticeable.
My point is that you can't expect any measurable wear and tear in the time that a drive spends being used as a backup system.
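Spelled out as arithmetic (using the rough 30-day figure from above):
```python
# Fraction of a drive's expected service life consumed by backup-rotation duty.

service_life_days = 20 * 365     # 20-year expectation = 7,300 days
backup_run_days = 30             # rough days of actual spinning, from above

fraction = backup_run_days / service_life_days
print(f"{fraction:.2%} of the drive's expected wear life")   # -> 0.41%
```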
-
@Carnival-Boy said:
Can you give me a link to a hard drive study saying wear and tear is a completely insignificant cause of failure? I'm only going on Wikipedia, which talks about wear and tear and may be wrong, but it makes sense to me.
A link to what? There is no study needed for basic math. I'm lost as to what you are looking for. Just think about how wear and tear works on drives. No external source needed.
-
Basic math? I'm talking about basic physics. If you run a physical device with moving parts for 8,760 hours (one year, 24/7), wear and tear will have a significant influence on the probability of it failing. I don't believe external environmental factors and shocks are 100% of the reason a drive will fail in these conditions.
-
@Carnival-Boy said:
Basic math? I'm talking about basic physics.
I understand, but basic physics says that wear and tear does not constitute "wearing out" for more than 7,300 days of continuous usage. So the physics plus the math say that 30 days or so of usage is completely inconsequential in terms of wear and tear.
-
@Carnival-Boy said:
I don't believe external environmental factors and shocks are 100% of the reason a drive will fail in these conditions.
I understand that.