Backup strategy for customer data?
-
@scottalanmiller said in Backup strategy for customer data?:
When comparing tape, it's important not to look at raw capacity alone. LTO tape has hardware compression that is real-time, on the fly, and incredibly powerful. The compression ratios on tape are crazy. It's part of the sequential write mechanism. Hard drives don't offer this mechanism, nor could they, because of the random access model. Tapes don't actually write raw, so an LTO8 is going to get 30TB on average. Sometimes less, sometimes more. But that's a real number to work with.
Question: Am I correct in assuming that this compression doesn't offer any benefit where the backup content is video media? If it DOES allow compression of video files, how good is the compression ratio?
-
@NashBrydges said in Backup strategy for customer data?:
Question: Am I correct in assuming that this compression doesn't offer any benefit where the backup content is video media? If it DOES allow compression of video files, how good is the compression ratio?
That depends, but generally it does, just relatively little. You likely still want it on (especially on tape) because the compression mechanism normally speeds up writes to and reads from the media, since the data is compressed in real time. Heavily compressed video media is going to get very little additional compression, but generally some.
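If you want a rough feel for how compressible a given data set is before it hits the drive, something like this works as a ballpark test (a minimal sketch; the paths are made up, and zlib is not the drive's SLDC algorithm, so treat the ratio only as an indicator):

```python
import zlib

def estimate_compressibility(path, sample_bytes=64 * 1024 * 1024):
    """Compress a leading sample of the file with zlib and report the
    ratio. This is only a rough proxy: LTO drives use their own
    hardware algorithm (SLDC), not zlib."""
    with open(path, "rb") as f:
        data = f.read(sample_bytes)
    return len(data) / len(zlib.compress(data, 6))

# Hypothetical paths: already-compressed video should land near 1.0:1,
# while text, logs, and database dumps often land much higher.
for path in ("/srv/media/show.mkv", "/srv/db/dump.sql"):
    print(path, f"{estimate_compressibility(path):.2f}:1")
```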
-
I did some comparisons of the cost involved for disk versus tape, disregarding the differences between the media types.
Tape is much cheaper per TB (about $11/TB) but you need to offset the cost of the tape drive/autoloader.
Disk, on the other hand, will require a more expensive server with more drive bays and also requires additional disks for parity data. In our case I found that it will break even at about 150 TB of native storage. If you have more data in backup storage than that, then tape is cheaper.
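The math itself is simple. Here's a minimal sketch of the break-even calculation; only the ~$11/TB tape media figure comes from my numbers above, the drive and disk prices are placeholders picked so it lands at roughly the 150 TB I mentioned:

```python
# Break-even point where tape becomes cheaper than disk for backup storage.
TAPE_PER_TB = 11        # USD per TB of LTO-8 media (from the figures above)
DISK_PER_TB = 41        # USD per TB on the disk side -- assumed placeholder
AUTOLOADER_COST = 4500  # USD for tape drive + autoloader -- assumed placeholder

break_even_tb = AUTOLOADER_COST / (DISK_PER_TB - TAPE_PER_TB)
print(f"Break-even at about {break_even_tb:.0f} TB of backup storage")

for tb in (50, 150, 300):
    disk_cost = tb * DISK_PER_TB
    tape_cost = AUTOLOADER_COST + tb * TAPE_PER_TB
    print(f"{tb:>4} TB: disk ${disk_cost:,.0f} vs tape ${tape_cost:,.0f}")
```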
-
In our case I'm thinking about two options.
OPTION 1
We'll put together a backup server with a large-ish disk array (maybe 100TB or so) connected with SAS to a tape autoloader. Backups go from backup clients to the disk array, and when done it's all streamed to tape. The tapes are exchanged and put off-line. Each week a full backup on disk is taken off-site as well. To keep the networks separated as far as possible, we can put the backup server on its own hardware and its own network and firewall it off from the production servers. So if production servers or VM hosts are breached, the backup server is still intact. If somehow it's also compromised, we have to restore everything from tape.
OPTION 2
We put a smaller backup array, say 10TB or so, on each physical VM host. Backups are run on each host from the production VMs to the backup VM with the backup array. Remember our VMs are running on local storage, so this will not require any network traffic. When done, we stream the data from each backup VM to a "tape backup" server that basically just contains the tape drive (with autoloader) and will write the data to tape. Firewall and tape handling will be the same as option 1. Since the disks with the backups are on each host, several backup servers would have to be breached to lose all disk backups.
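For either option, the disk-to-tape step would look roughly like this (a minimal sketch only; the device path, staging directory, and plain tar/mt are assumptions - real backup software would normally drive the autoloader itself):

```python
import subprocess

TAPE = "/dev/nst0"               # non-rewinding tape device -- assumed path
STAGING = "/srv/backup-staging"  # where the disk-based backups land -- assumed

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["mt", "-f", TAPE, "rewind"])
# No -z: skip software compression so the drive's hardware compression does the work.
run(["tar", "-cf", TAPE, "-b", "512", "-C", STAGING, "."])
run(["mt", "-f", TAPE, "offline"])  # eject so the tape can be pulled off-line
```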
What do you think?
-
@Pete-S said in Backup strategy for customer data?:
What do you think?
I think you have done an awesome amount of research.
Why offsite disks if tape is already offsite? This seems like extra work that is not worth the cost of doing. Either way, when you need either these disks or the tapes, you are doing a full restore. I can't imagine that the difference in restore times would be big enough to matter in that scenario.
-
@Pete-S said in Backup strategy for customer data?:
What do you think?
The differences between options 1 and 2 seem to come down to two things for me.
- How much can be easily compromised at once
- Where the complexity of configuration is
Option 1 seems to be easier to compromise the entire setup, but is also easier to manage the configuration of the entire process.
Option 2 will be harder to compromise the entire setup, but is more complex to manage the entire setup.
-
@Pete-S said in Backup strategy for customer data?:
I did some comparisons of the cost involved for disk versus tape, disregarding the differences between the media types.
Tape is much cheaper per TB (about $11/TB) but you need to offset the cost of the tape drive/autoloader.
Disk, on the other hand, will require a more expensive server with more drive bays and also requires additional disks for parity data. In our case I found that it will break even at about 150 TB of native storage. If you have more data in backup storage than that, then tape is cheaper.
How many tapes in the library?
How many briefcases to take off-premises for rotations?
Where is the brain trust to manage the tapes, their backup windows, and whether the correct tape set is in the drives?
If the tape libraries are elsewhere then the above goes away to some degree (distance comes into play).
-
@Pete-S said in Backup strategy for customer data?:
In our case I'm thinking about two options.
OPTION 1
We'll put together a backup server with a large-ish disk array (maybe 100TB or so) connected with SAS to a tape autoloader. Backups go from backup clients to the disk array, and when done it's all streamed to tape. The tapes are exchanged and put off-line. Each week a full backup on disk is taken off-site as well. To keep the networks separated as far as possible, we can put the backup server on its own hardware and its own network and firewall it off from the production servers. So if production servers or VM hosts are breached, the backup server is still intact. If somehow it's also compromised, we have to restore everything from tape.
OPTION 2
We put a smaller backup array, say 10TB or so, on each physical VM host. Backups are run on each host from the production VMs to the backup VM with the backup array. Remember our VMs are running on local storage, so this will not require any network traffic. When done, we stream the data from each backup VM to a "tape backup" server that basically just contains the tape drive (with autoloader) and will write the data to tape. Firewall and tape handling will be the same as option 1. Since the disks with the backups are on each host, several backup servers would have to be breached to lose all disk backups.
What do you think?
An inside job puts #2 to rest. Let's just say there are plenty of stories about entire setups being wiped, starting with the backups and then hitting go on zeroing out the SANs.
-
@Pete-S This made me think of something I haven't considered since the 90s. Back then, UNIX-based systems were so much better at streaming data to tape that we'd use IRIX systems to make backups of all the things rather than Windows. Anyone know if OS makes a difference in keeping tape drives fed with data today?
Back to your current question. A worst-case scenario with either option will lead to a full restore from off-site, so Option 1 would make the most sense to me. Feeding enough data to today's tape drives can be a challenge even from local disk.
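A quick back-of-the-envelope check on that (numbers are rough assumptions: LTO-8 is specced around 360 MB/s native, and drives speed-match down to roughly 100 MB/s before they start shoe-shining):

```python
DRIVE_NATIVE_MBPS = 360   # LTO-8 native streaming speed, roughly
DRIVE_MIN_MBPS = 100      # approximate speed-matching floor -- assumption
ARRAY_READ_MBPS = 800     # assumed sustained read from the staging array

full_backup_tb = 100
seconds = full_backup_tb * 1e6 / min(DRIVE_NATIVE_MBPS, ARRAY_READ_MBPS)
print(f"Streaming {full_backup_tb} TB takes about {seconds / 3600:.0f} hours")
print("Drive keeps streaming" if ARRAY_READ_MBPS >= DRIVE_MIN_MBPS
      else "Expect shoe-shining")
```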
-
@travisdh1 said in Backup strategy for customer data?:
Anyone know if OS makes a difference in keeping tape drives fed with data today?
Hasn't been a factor in a long time.
-
@PhlipElder said in Backup strategy for customer data?:
How many tapes in the library?
How many briefcases to take off-premises for rotations?
Where is the brain trust to manage the tapes, their backup windows, and whether the correct tape set is in the drives?
If the tape libraries are elsewhere then the above goes away to some degree (distance comes into play).
A 2U autoloader will have two magazines with 12 tape slots each. With LTO-8 tapes that means 720TB of data (at 2.5:1 compression) in one batch without switching any tapes. 24 tapes will fit in one briefcase, so not much of a logistical problem. If you go up to a 3U unit it will hold 40 tapes, and I think that might fit in one briefcase as well.
Tapes have barcodes that the autoloader will scan, so that's how the machine knows which tape is the right one.
If you are going to swap several tapes at once, you can get additional magazines that hold the tapes and just swap the entire magazine. For daily incremental backups you can swap one tape at a time - if you have less than 30 TB of data change per day.
You can also monitor that tapes have been replaced so you could set up that as a prerequisite for starting the next daily backup. We'll just have to see how long things take and how much data we need to backup on average before putting procedures in place.
I haven't actually used tape since the late 90s so it will be exciting to test this. For off-line storage and archival storage the specs are just so much better than hard drives. Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
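The arithmetic behind those numbers, as a quick sketch (12 TB native per LTO-8 tape and the 2.5:1 marketing ratio):

```python
LTO8_NATIVE_TB = 12
COMPRESSION = 2.5        # the 2.5:1 ratio used above
SLOTS_2U = 2 * 12        # two magazines with 12 slots each

capacity_tb = SLOTS_2U * LTO8_NATIVE_TB * COMPRESSION
print(f"2U autoloader, {SLOTS_2U} tapes: ~{capacity_tb:.0f} TB compressed")  # 720 TB

tape_ber = 1e-19         # LTO-8 unrecoverable bit error rate
hdd_ber = 1e-15          # typical enterprise HDD
print(f"Tape is {hdd_ber / tape_ber:,.0f}x better on bit errors")  # 10,000x
```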
-
@Pete-S said in Backup strategy for customer data?:
Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
yeah, the tech behind LTO8 is freaking fantastic. And unlike HDD where research is stagnating, tape keeps advancing.
-
@scottalanmiller said in Backup strategy for customer data?:
@Pete-S said in Backup strategy for customer data?:
Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
yeah, the tech behind LTO8 is freaking fantastic. And unlike HDD where research is stagnating, tape keeps advancing.
I still haven't seen anything that scales like tape. Just keep adding drives and tapes as needed till you're into silly town: https://spectralogic.com/products/tfinity-exascale/
-
@travisdh1 said in Backup strategy for customer data?:
@scottalanmiller said in Backup strategy for customer data?:
@Pete-S said in Backup strategy for customer data?:
Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
yeah, the tech behind LTO8 is freaking fantastic. And unlike HDD where research is stagnating, tape keeps advancing.
I still haven't seen anything that scales like tape. Just keep adding drives and tapes as needed till you're into silly town: https://spectralogic.com/products/tfinity-exascale/
Quick! I need tapes 39,763 and 40,659!
-
@Pete-S said in Backup strategy for customer data?:
@PhlipElder said in Backup strategy for customer data?:
How many tapes in the library?
How many briefcases to take off-premises for rotations?
Where is the brain trust to manage the tapes, their backup windows, and whether the correct tape set is in the drives?
If the tape libraries are elsewhere then the above goes away to some degree (distance comes into play).
A 2U autoloader will have two magazines with 12 tape slots each. With LTO-8 tapes that means 720TB of data (at 2.5:1 compression) in one batch without switching any tapes. 24 tapes will fit in one briefcase, so not much of a logistical problem. If you go up to a 3U unit it will hold 40 tapes, and I think that might fit in one briefcase as well.
Tapes have barcodes that the autoloader will scan, so that's how the machine knows which tape is the right one.
If you are going to swap several tapes at once, you can get additional magazines that hold the tapes and just swap the entire magazine. For daily incremental backups you can swap one tape at a time - if you have less than 30 TB of data change per day.
You can also monitor that tapes have been replaced so you could set up that as a prerequisite for starting the next daily backup. We'll just have to see how long things take and how much data we need to backup on average before putting procedures in place.
I haven't actually used tape since the late 90s so it will be exciting to test this. For off-line storage and archival storage the specs are just so much better than hard drives. Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
We used to manage HP based tape libraries and their rotation process. It was a bear to manage.
We have one company we are working with that has a grand total of 124 tapes that they need to work with for one rotation.
GFS, that is Grandfather, Father, and Son, is an important factor in any backup regimen. Air-gap is super critical.
Having software that manages it all for you is all fine and dandy until the software fails. BTDT, and what a freaking mess that was when the servers hit a hard stop.
Ultimately, it does not matter what medium is used, as GFS takes care of one HDD or tape dying due to bit rot (BTDT for both HDD and tape).
The critical element in a DR plan is air-gap. No access. Total loss recovery.
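If you script any of it, picking the GFS tier for a given run is the easy part (a minimal sketch; the Friday-weekly and month-end-monthly conventions here are just assumptions, not anyone's actual scheme):

```python
from datetime import date, timedelta

def gfs_tier(d: date) -> str:
    """Pick the GFS tier for a backup run on date d. Assumed conventions:
    monthly 'grandfather' on the last day of the month, weekly 'father'
    on Fridays, daily 'son' otherwise."""
    last_of_month = (d.replace(day=28) + timedelta(days=4)).replace(day=1) - timedelta(days=1)
    if d == last_of_month:
        return "grandfather (monthly, long retention, off-site)"
    if d.weekday() == 4:  # Friday
        return "father (weekly full)"
    return "son (daily incremental)"

for offset in range(7):
    d = date(2019, 3, 25) + timedelta(days=offset)
    print(d, "->", gfs_tier(d))
```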
-
@dafyre said in Backup strategy for customer data?:
@travisdh1 said in Backup strategy for customer data?:
@scottalanmiller said in Backup strategy for customer data?:
@Pete-S said in Backup strategy for customer data?:
Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
yeah, the tech behind LTO8 is freaking fantastic. And unlike HDD where research is stagnating, tape keeps advancing.
I still haven't seen anything that scales like tape. Just keep adding drives and tapes as needed till you're into silly town: https://spectralogic.com/products/tfinity-exascale/
Quick! I need tapes 39,763 and 40,659!
Oh yeah, does that bring back memories. Feeding the machine to get the combination of tapes needed to recover a set of databases or the like. Ugh. SMH
-
@PhlipElder said in Backup strategy for customer data?:
@dafyre said in Backup strategy for customer data?:
@travisdh1 said in Backup strategy for customer data?:
@scottalanmiller said in Backup strategy for customer data?:
@Pete-S said in Backup strategy for customer data?:
Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
yeah, the tech behind LTO8 is freaking fantastic. And unlike HDD where research is stagnating, tape keeps advancing.
I still haven't seen anything that scales like tape. Just keep adding drives and tapes as needed till you're into silly town: https://spectralogic.com/products/tfinity-exascale/
Quick! I need tapes 39,763 and 40,659!
Oh yeah, does that bring back memories. Feeding the machine to get the combination of tapes needed to recover a set of databases or the like. Ugh. SMH
Sounds like a system that only got halfway there. BackupExec?
-
@travisdh1 said in Backup strategy for customer data?:
@PhlipElder said in Backup strategy for customer data?:
@dafyre said in Backup strategy for customer data?:
@travisdh1 said in Backup strategy for customer data?:
@scottalanmiller said in Backup strategy for customer data?:
@Pete-S said in Backup strategy for customer data?:
Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
yeah, the tech behind LTO8 is freaking fantastic. And unlike HDD where research is stagnating, tape keeps advancing.
I still haven't seen anything that scales like tape. Just keep adding drives and tapes as needed till you're into silly town: https://spectralogic.com/products/tfinity-exascale/
Quick! I need tapes 39,763 and 40,659!
Oh yeah, does that bring back memories. Feeding the machine to get the combination of tapes needed to recover a set of databases or the like. Ugh. SMH
Sounds like a system that only got halfway there. BackupExec?
Yes. BUE. Spent three days on the horn with Symantec Support to no avail. It was a FusterCluck to say the least.
We ended up running a side-by-side migration with a new AD and managed to recover all of their data but it took close to a week.
We moved to StorageCraft's ShadowProtect and disk based backups after that.
This particular site was an anomaly. Their building's HVAC was totally messed up, with primary feeds not capped above the false ceiling. So cold air was normally drawn via the grate in that ceiling, and either hot or cold air was pumped into the rooms via the vents. In summer everyone was warm while the temp above the ceiling tiles was +5C, and in winter everyone was cold while above the tiles it was +40C.
Partners with offices near the server closet would complain about the noise but we couldn't put a portable A/C unit in there because all of the power panels were full in that part of the building.
We had another catastrophic failure at that site, with ShadowProtect allowing us to recover everything to a new server while the firm ran on their secondary. Data loss was limited to 24 files for one partner who was very easy going (fortunately). We were extremely happy when they bought a business condo and moved many years ago. Disk and server failures became a thing of the past. Heat is such a killer, though Nehalem and later CPUs brought the server failure count down significantly.
EDIT: Important detail in that last failure: the secondary was getting bad bits, as were the backups. We found out that image-based backups have a garbage-in, garbage-out weakness there. It took some hoop jumping to get down to that 24-file loss, including using the backups and volume shadow copies on both the secondary and the recovered primary. That was the image's strong point: recovering the snapshots along with everything else.
-
@scottalanmiller said in Backup strategy for customer data?:
@Pete-S said in Backup strategy for customer data?:
Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
yeah, the tech behind LTO8 is freaking fantastic. And unlike HDD where research is stagnating, tape keeps advancing.
Would you really call it stagnating? They are basically at the atomic level already...
-
@Dashrender HDDs have stagnated; SSDs are their successor, and after that who knows.