Backup and Recovery Goals
-
@Jason You're telling me....
I've wanted to move it since I started....
-
@Jason said:
@Dashrender said:
Either 2 TB 7200 SAS drives in RAID 10 or 1 TB SSD drives in RAID 5 (8 TB per previous discussion, currently using 6 TB)
Have you calculated your data growth rate? That only leaves room for 1.33% growth.
1.33%? Don't you mean more like 33% growth, one third more than what you're currently using?
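For reference, the headroom arithmetic is simple (a quick sketch using the 8 TB planned and 6 TB in-use figures from the discussion above):

```python
# Growth headroom on the proposed array: 8 TB usable, 6 TB in use.
capacity_tb = 8.0  # planned usable capacity (from the thread)
used_tb = 6.0      # data currently in use

headroom_tb = capacity_tb - used_tb
growth_pct = headroom_tb / used_tb * 100  # growth relative to current usage

print(f"Headroom: {headroom_tb:.0f} TB, i.e. room for {growth_pct:.0f}% growth")
# Prints: Headroom: 2 TB, i.e. room for 33% growth
```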
-
@Jason said:
@Dashrender said:
Considerations:
8 port 10 GigE switch with 10 GigE in each server. Could probably be done with bonded 1 GigE ports instead, minimum of two per server.
What kind of data are you pushing over this network to need 10 Gig? We don't even use 10 Gig on our production network; we only have it for the iSCSI networks and some uplinks from the datacenter. All server connections are 1 Gig to the production network.
This speed was to get full backups accomplished in a reasonable timeframe. According to the last thread, Scott figured out that 1 GigE would take around 17 hours to back up the 6 TB of data. Currently Dustin is seeing that it takes 24+ hours with their current setup. Moving to dual 10 GigE would reduce that to around 1 hour.
If Dustin moves to a different backup solution, he might move to synthetic full backups and not require the 10 GigE. But a restore meeting the previously stated RTO might still require it.
But perhaps four bonded 1 GigE ports might be good enough as well.
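The backup-window numbers being discussed here follow from simple transfer-time math. A rough sketch, assuming ~100 MB/s of usable throughput per 1 GigE link (and ten times that per 10 GigE link), with the network as the only bottleneck:

```python
# Rough full-backup transfer time for 6 TB at the link speeds discussed.
# Assumes ~100 MB/s usable per 1 GigE link (1000 MB/s per 10 GigE) and
# that the network, not the disks, is the bottleneck.
DATA_MB = 6_000_000  # 6 TB expressed in MB

links_mb_per_s = {
    "1 GigE": 100,
    "2x 1 GigE bonded": 200,
    "4x 1 GigE bonded": 400,
    "10 GigE": 1000,
    "2x 10 GigE": 2000,
}

for name, rate in links_mb_per_s.items():
    hours = DATA_MB / rate / 3600
    print(f"{name:>17}: {hours:5.1f} hours")
```

At dual 10 GigE this works out to roughly 0.8 hours, which matches the "around 1 hour" figure above; real jobs add disk and protocol overhead on top.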
-
From the other thread it was discussed that we could use LTO tapes to create our weekly full backups. Is this the correct item?
Additionally, how might I tie this into my backup plan?
I still need to determine what device I'm going to back up our VMs and data to: likely an R620 server in RAID 10 (as the SR) with 3 or 4 TB drives, running a CIFS server, for management of the LTO tape.
Any recommendations?
Unfortunately I'm outside of my experience with tapes and how to tie them in.
-
@DustinB3403 said:
From the other thread it was discussed that we could use LTO tapes to create our weekly full backups. Is this the correct item?
Additionally, how might I tie this into my backup plan?
I still need to determine what device I'm going to back up our VMs and data to: likely an R620 server in RAID 10 (as the SR) with 3 or 4 TB drives, running a CIFS server, for management of the LTO tape.
Any recommendations?
Unfortunately I'm outside of my experience with tapes and how to tie them in.
You might want an autoloader that holds a month's worth of tapes; it will be networked for control and use SCSI for data. Whatever you do, if you are virtualizing, make sure it can be passed through, though really the backup server should be a separate physical box.
-
@Jason That's the plan. Any recommendations on an autoloader? I'd really like to use LTO-6 in this write-up. @scottalanmiller said LTO-6 isn't released yet though....
-
@DustinB3403 said:
@Jason That's the plan. Any recommendations on an autoloader? I'd really like to use LTO-6 in this write-up. @scottalanmiller said LTO-6 isn't released yet though....
Something like this should work: http://www8.hp.com/us/en/products/tape-automation/product-detail.html?oid=5336447#!tab=features
-
Why are you looking at tapes again?
Can StorageCraft not provide what you want? Run your StorageCraft backups locally, then replicate them to your offsite server. Wasn't an offsite server your original plan?
-
@Dashrender Tapes are only for the weekly take-home portion, as uploading over the WAN is unrealistic.
StorageCraft is to be used to back up to a new local server with 24 TB of onsite storage.
-
@Dashrender Offsite was in the plan, but I doubt I'll be able to get them to use a colo or even our sister office overseas.
It's just more reasonable when written down, I think.
-
Here's the current write-up for the scaled-up plan with onsite data storage. The formatting is a bit rough, sorry about that.
Virtualization Servers – R720xd
These servers are to be configured in RAID 5 (SSDs), totaling 11 TB of storage capacity. Planned use is high-availability XenServer across the servers; all physical servers are to be virtualized and run on these hosts in a XenPool.
- Dell PowerEdge R720xd 2.5" Chassis with up to 24 Hard Drives (Qty 2)
- Intel Xeon E5-2670 2.6GHz/20M/1600MHz 8-Core 115W (Qty 4)
- Dell PE R720 Normal Heat Sink (Qty 4)
- 96GB (24x4G) DDR3 ECC RDIMM - Performance Optimized (Qty 2)
- PERC H710P Controller with 1GB NV Cache (RAID 0/1/5/6/10/50/60) (Qty 2)
- No RAID Configuration (Customer to Configure) (Qty 2)
- Dell 750W Power Supply (Qty 2)
- Dell 750W Power Supply - Redundant (Qty 2)
- iDRAC7 Enterprise with Vflash, 16GB SD Card (Qty 2)
- Dell 2U Sliding Ready Rails (Qty 2)
- Broadcom 57800 2x10Gb BT + 2x1Gb BT Network Daughter Card (Qty 2)
- Broadcom 5719 QP 1Gb Network Interface Card (Qty 2)
- Standard Power Cord(s) - Qty to Match Power Supplies (Qty 2)
- Dell 2U Front Bezel (Qty 2)
- No Windows Operating System (Qty 2)
- No VMware Software (Qty 2)
- Custom Configuration and Full Testing, Full Firmware Updates (Qty 2)
- 3 Year Dell ProSupport 4HR 7x24 Onsite: Non Mission Critical (Qty 2)
- 1TB SSD Drives (Qty 12 per server)
Onsite Backup Server – R720xd
This server is to be configured in RAID 10, with a total storage capacity of 24 TB, to keep four weekly full backups on local disk for recovery: one full month of data and VM backup capability. It runs a file server; the XenPool above will perform backup operations to this server.
- Dell PowerEdge R720xd 3.5" Chassis with up to 12 Hard Drives (Qty 1)
- 2 x Intel Xeon E5-2603 1.8GHz/10M/1066MHz 4-Core 80W (Qty 1)
- 16GB (8x2G) DDR3 ECC RDIMM - Performance Optimized (Qty 1)
- PERC H710P Controller with 1GB NV Cache (RAID 0/1/5/6/10/50/60) (Qty 1)
- Dell 1100W Power Supply (Qty 1)
- Dell 1100W Power Supply - Redundant (Qty 1)
- iDRAC7 Enterprise with Vflash, 16GB SD Card (Qty 1)
- Dell 2U Sliding Ready Rails (Qty 1)
- Broadcom 57800 2x10Gb BT + 2x1Gb BT Network Daughter Card (Qty 1)
- Broadcom 5719 QP 1Gb Network Interface Card (Qty 1)
- Standard Power Cord(s) - Qty to Match Power Supplies (Qty 1)
- Custom Configuration and Full Testing, Full Firmware Updates (Qty 1)
- 3 Year Dell ProSupport 4HR 7x24 Onsite: Non Mission Critical (Qty 1)
- WD RE 4 TB Enterprise Hard Drive: 3.5 Inch, 7200 RPM, SATA III, 64 MB Cache - WD4000FYYZ (Qty 12)
Quantum LTO-6 HH Tape Drive
Used for weekly offsite storage of backup data. Extremely fast backup process, up to 400 MB/s (compressed; 160 MB/s native).
- Quantum SuperLoader 3 Tape Autoloader (Qty 1)
- Quantum LTO-6 Tapes (6.25 TB compressed capacity each) (Qty 4)
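One note on the tape math: 6.25 TB is LTO-6's compressed capacity at the vendor-assumed 2.5:1 ratio, while native capacity is 2.5 TB. A quick sketch of how many tapes a 6 TB weekly full actually needs under each assumption:

```python
# Tapes needed for one 6 TB weekly full, native vs. compressed.
# LTO-6: 2.5 TB native, 6.25 TB at the assumed 2.5:1 compression ratio.
import math

full_backup_tb = 6.0
native_tb = 2.5
compressed_tb = 6.25

tapes_native = math.ceil(full_backup_tb / native_tb)          # incompressible data
tapes_compressed = math.ceil(full_backup_tb / compressed_tb)  # 2.5:1 achieved

print(f"Tapes per weekly full: {tapes_compressed} (best case) "
      f"to {tapes_native} (worst case)")
# Prints: Tapes per weekly full: 1 (best case) to 3 (worst case)
```

So the four tapes listed cover a weekly full even if the data barely compresses (three tapes worst case, with one spare).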
-
I assume this means that the two production servers will replicate between each other for failover?
Have you talked to xByte about what they can get you for pricing?
Why the RAID 10 on the backup server? If you don't need the performance, perhaps RAID 6 would work; that is assuming RAID 6 provides enough IOPS to get the backup jobs done.
-
Assuming you're running StorageCraft on that same backup server, you won't have enough storage to keep four full system backups plus the StorageCraft backups (I think you said hourly for 3 days) on that same box.
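A back-of-the-envelope check of that claim (this assumes each weekly full is roughly the current 6 TB data set, before any compression or deduplication):

```python
# Does a 24 TB backup server hold four weekly fulls plus the
# hourly StorageCraft chain? (6 TB per full is the current data size.)
capacity_tb = 24.0
weekly_full_tb = 6.0
fulls_kept = 4

fulls_total_tb = weekly_full_tb * fulls_kept
left_for_incrementals_tb = capacity_tb - fulls_total_tb

print(f"Fulls alone: {fulls_total_tb:.0f} TB")
print(f"Left for StorageCraft incrementals: {left_for_incrementals_tb:.0f} TB")
# The four fulls alone consume all 24 TB, leaving nothing for incrementals.
```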
-
@Dashrender said:
Assuming you're running StorageCraft on that same backup server, you won't have enough storage to keep four full system backups plus the StorageCraft backups (I think you said hourly for 3 days) on that same box.
For the StorageCraft operations (and I know I said complete rebuild at the top here), I'm still going to try to push for reusing what we have.
Worst case, add another server to take the hourly backups too.
-
@Dashrender said:
I assume this means that the two production servers will replicate between each other for failover?
Have you talked to xByte about what they can get you for pricing?
Why the RAID 10 on the backup server? If you don't need the performance, perhaps RAID 6 would work; that is assuming RAID 6 provides enough IOPS to get the backup jobs done.
I haven't spoken with xByte; all prices are list prices from their site. Yes, they were removed on purpose.
At the scale of this backup operation, RAID 10 throughput only seems logical, whereas with RAID 6 there is no write-speed gain at all. Being 7200 RPM spinners, write speed is an important factor; we really want to reduce how long our backups take. RAID 5, which is what we're on now, has no write-speed gain either.
So backing up takes a while to complete. Trying to improve on that.
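For a rough sense of why parity RAID hurts here, a small-random-write IOPS sketch using the classic write-penalty factors (2 backend ops per host write for RAID 10, 4 for RAID 5, 6 for RAID 6); the ~75 IOPS per 7200 RPM disk is an assumed figure, not a measured one:

```python
# Rough small-random-write IOPS model for a 12-disk array.
# Classic write penalties: RAID 10 = 2 backend ops per host write,
# RAID 5 = 4 (read data, read parity, write data, write parity),
# RAID 6 = 6. Per-disk IOPS is an assumption for 7200 RPM drives.
disks = 12
per_disk_iops = 75  # assumed random IOPS for a 7200 RPM spinner

penalties = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

for name, penalty in penalties.items():
    write_iops = disks * per_disk_iops / penalty
    print(f"{name}: ~{write_iops:.0f} random write IOPS")
# RAID 10: ~450, RAID 5: ~225, RAID 6: ~150
```

Worth noting that for large, purely sequential backup streams, a controller doing full-stripe writes can narrow this gap considerably, so the RAID 10 advantage is clearest for small or random writes.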
-
@DustinB3403 said:
@Dashrender said:
Assuming you're running StorageCraft on that same backup server, you won't have enough storage to keep four full system backups plus the StorageCraft backups (I think you said hourly for 3 days) on that same box.
For the StorageCraft operations (and I know I said complete rebuild at the top here), I'm still going to try to push for reusing what we have.
Worst case, add another server to take the hourly backups too.
I'm guessing that the Windows 2008 Server you have today would be more than up to the task.
-
@Dashrender said:
@DustinB3403 said:
@Dashrender said:
Assuming you're running StorageCraft on that same backup server, you won't have enough storage to keep four full system backups plus the StorageCraft backups (I think you said hourly for 3 days) on that same box.
For the StorageCraft operations (and I know I said complete rebuild at the top here), I'm still going to try to push for reusing what we have.
Worst case, add another server to take the hourly backups too.
I'm guessing that the Windows 2008 Server you have today would be more than up to the task.
That's what I was thinking: throw some drives in it, RAID 6 since it's the incremental backup device (in my head), and be done with it.
With the above proposal we'd have a really good setup for the future.
Can anyone else punch some holes in the above?
-
@DustinB3403 said:
@Dashrender said:
I assume this means that the two production servers will replicate between each other for failover?
Have you talked to xByte about what they can get you for pricing?
Why the RAID 10 on the backup server? If you don't need the performance, perhaps RAID 6 would work; that is assuming RAID 6 provides enough IOPS to get the backup jobs done.
I haven't spoken with xByte; all prices are list prices from their site. Yes, they were removed on purpose.
At the scale of this backup operation, RAID 10 throughput only seems logical, whereas with RAID 6 there is no write-speed gain at all. Being 7200 RPM spinners, write speed is an important factor; we really want to reduce how long our backups take. RAID 5, which is what we're on now, has no write-speed gain either.
So backing up takes a while to complete. Trying to improve on that.
Sure, but we discussed this in the other thread: your performance issues with how long backups take have little (or nothing) to do with your drive setup, and everything to do with your network setup.
Backing up 6 TB of data over 1 GigE = 6,000,000 MB / 100 MB/s = 60,000 seconds = 1,000 minutes = 16.67 hours. You were at 24 hours, and we know that data had to travel both directions across several sections of your network before it hit the storage, which accounts for the additional time required for your backups. Reducing the network congestion alone will save you around 8 hours. Going to a bonded pair of GigE will cut it nearly in half from there. Going 10 GigE, well, Scott already told us it would take you to around 2 hours, though getting to that 2-hour mark might actually put a strain on your disk resources. I don't know that for sure.
-
@Dashrender In the above proposal I have additional NICs included on the servers for bonded pairs.
Since it makes sense to do, and the jump in improvement is noticeable, that is what I am planning.
That, or using the 10 Gb NICs on the servers with a 10 Gb switch for this purpose. It's not in the write-up above as I forgot to research it. I'll do that now.
-
Also, why a new R720xd for your backup server? I'd go with a generation or two old machine for that; it doesn't need processing power. Does it really need 16 GB of RAM?
Correct me if I'm wrong, but the backup server is nothing more than a SAM-SD: pure, simple storage. If you could buy a NAS for less it would be worth it, but at this size you probably can't, so getting a several-generation-old server with enough storage should be fine.