Backup device for local or colo storage
-
iSCSI Buffalo drive attached to a Server 2008 file server with a single 1GbE NIC, which backs up to 2-drive and 4-drive Synology devices.
Each Synology backs up different items.
We also have an ancient "archive server" with 6 drives, running Server 2003, which actually runs the StorageCraft software. Single 1GbE NIC, 8GB RAM, with a quad-core AMD Opteron 1385 CPU.
-
No wonder you have issues!
So the iSCSI traffic for the Buffalo goes over the same NIC as the traffic being sent to the 2 Synology devices?
And it's all driven by the StorageCraft software that's running on the Server 2003 box?
What does the Buffalo device do that's different from the 2 Synology devices?
Is the Buffalo the primary storage, boot storage, etc, for the Server 2008?
-
The iSCSI target is housing our network shares.
The Buffalo is being decommissioned, but it was a backup device.
-
@DustinB3403 said:
We also have an ancient "archive server" with 6 drives, running Server 2003, which actually runs the StorageCraft software. Single 1GbE NIC, 8GB RAM, with a quad-core AMD Opteron 1385 CPU.
So everything flows through this machine? All 8TB of backups goes through this choke point? Have you checked CPU to see if it is maxed out? Memory to see if it is exhausted? IOPS to see if you are beyond the limits of the drives?
-
The Server 2003 box is horribly slow. CPU usage is constantly peaking. Memory usage doesn't seem to be hit very hard.
But this device is also looking to be tossed. I was considering just using it for drive space, as a backup-of-our-backups sort of device.
Maybe not?
-
@DustinB3403 said:
1Gbe
1Gb/s has a realistic maximum transfer rate of 800Mb/s and that would be HARD to hit and sustain. 8TB on 1Gb/s is 21.2 hours to copy. That's with zero bottlenecks anywhere, just wide open streaming without ever dropping the speed.
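For anyone who wants to check the arithmetic, here's a rough sketch. It assumes decimal units (1 TB = 10^12 bytes) and a perfectly sustained rate with zero other bottlenecks, so it lands close to, but not exactly on, the figures quoted above:

```python
# Rough best-case transfer-time estimate for a backup over a network link.
# Assumes decimal TB and a perfectly sustained line rate -- no protocol
# overhead, disk stalls, or contention, as noted in the post above.

def transfer_hours(data_tb: float, rate_mbps: float) -> float:
    """Hours to move data_tb terabytes at a sustained rate_mbps megabits/sec."""
    bits = data_tb * 1e12 * 8            # total bits to move
    seconds = bits / (rate_mbps * 1e6)   # time at the sustained line rate
    return seconds / 3600

print(f"8 TB at 1000 Mb/s: {transfer_hours(8, 1000):.1f} h")  # wire-speed ceiling
print(f"8 TB at  800 Mb/s: {transfer_hours(8, 800):.1f} h")   # realistic GigE ceiling
```

At the "realistic" 800 Mb/s this works out to roughly 22 hours, in the same ballpark as the ~21-hour figure above; either way, the whole day is gone.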
-
Even at relatively idle times this server's slow. I don't know how old it even is; 6-8 years maybe.
-
Realistically, you need a core backup infrastructure of 10Gb/s in a bonded pair which would drop your network bottleneck from 21.2 hours to 1.05 hours. Of course other bottlenecks will be exposed. But this is key. Your fundamental network infrastructure cannot handle your backup needs. This means you cannot restore in an emergency either. Nothing you do will speed it up, waiting a full day minimum would be your only option. And likely you would need to do a lot of different things at once and be very unable to keep the line fully saturated for a full day while doing the restore.
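The scaling claim can be sketched the same way, assuming the bonded pair really aggregates to a full 20 Gb/s (real bonds often don't for a single stream, so treat this as the optimistic bound):

```python
# Compare backup-window estimates: single GigE vs a bonded pair of 10 Gb/s
# links. Assumes perfect aggregation across the bond, which is optimistic --
# a single TCP stream typically rides only one link of a bond.

def hours_at_gbps(data_tb: float, rate_gbps: float) -> float:
    """Hours to move data_tb terabytes at a sustained rate in gigabits/sec."""
    return data_tb * 1e12 * 8 / (rate_gbps * 1e9) / 3600

gige = hours_at_gbps(8, 1.0)      # single GigE link at wire speed
bonded = hours_at_gbps(8, 20.0)   # 2 x 10 Gb/s, perfectly bonded
print(f"1 Gb/s: {gige:.1f} h, 20 Gb/s bond: {bonded:.2f} h, "
      f"speedup {gige / bonded:.0f}x")
```

The 20x speedup is exactly the link-rate ratio, which is why the backup window collapses from about a day to about an hour before other bottlenecks take over.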
-
@scottalanmiller said:
@DustinB3403 said:
1Gbe
1Gb/s has a realistic maximum transfer rate of 800Mb/s and that would be HARD to hit and sustain. 8TB on 1Gb/s is 21.2 hours to copy. That's with zero bottlenecks anywhere, just wide open streaming without ever dropping the speed.
Well, then he's actually doing pretty well, if he says it takes around 24 hours to back up the whole system (all 6 TB currently). He might have a bottleneck somewhere, but not a horrible one.
-
@DustinB3403 said:
Even in relative idle times this servers slow. I don't know how old it even is. 6-8 years maybe
Given that 2003 R2 came out in 2005, it is presumably 10+ years old.
-
@scottalanmiller said:
1Gb/s has a realistic maximum transfer rate of 800Mb/s and that would be HARD to hit and sustain. 8TB on 1Gb/s is 21.2 hours to copy. That's with zero bottlenecks anywhere, just wide open streaming without ever dropping the speed.
This math alone proves that using NAUBackup to create full backups won't really be much better than the current solution. Definitely sounds like it's time for a network upgrade.
-
Or just a dedicated 10Gig switch for the management port on Xen and the onsite backup solutions.
-
Of course I'd have to put 10GbE NICs into the host servers.
-
@DustinB3403 said:
Of course I'd have to put 10GbE NICs into the host servers.
Not necessarily, you only need your aggregate to be faster. I'm assuming that bonded NICs have not been set up? Get that fixed. If every server was 2Gb/s and the backup host was 10Gb/s, you'd take a rather amazing leap forward just there. Probably enough to find other bottlenecks.
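A toy model of the aggregate point (hypothetical link counts; assumes concurrent backup streams and ideal switching with no contention):

```python
# Sketch of the aggregate argument: per-server links only need to be fast
# enough that their combined rate approaches the backup host's uplink.
# Hypothetical numbers; assumes all servers stream concurrently.

def ingest_gbps(server_links_gbps: list[float], host_uplink_gbps: float = 10.0) -> float:
    """Aggregate rate the backup host can actually ingest, in Gb/s."""
    return min(sum(server_links_gbps), host_uplink_gbps)

# Five servers on bonded 2 Gb/s links can fill a 10 Gb/s backup uplink:
print(ingest_gbps([2.0] * 5))   # -> 10.0
# Three such servers leave headroom on the host side:
print(ingest_gbps([2.0] * 3))   # -> 6.0
```

The design point is that the expensive 10Gb/s port goes on the one box that aggregates everything, while the many source servers get by with cheap bonded GigE.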
-
If you identify a single server that needs more speed, you can go up to triple or quadruple GigE if need be before making a leap to 10GigE connections.
-
You might find a single server or two with 10GigE needs, but likely not the majority. Spend opportunistically.
-
Might as well loop in StorageCraft themselves too: @Steven
-
@scottalanmiller said:
If you identify a single server that needs more speed, you can go up to triple or quadruple GigE if need be before making a leap to 10GigE connections.
What's the current cost of a 10 GigE card? Assuming he doesn't already have open GigE ports, he'll need to buy something regardless.
-
I'm surprised; an unmanaged 8-port 10 GigE switch is $760
http://www.newegg.com/Product/Product.aspx?Item=N82E16833122529
A two-port card from Dell is $650. Third party might be considerably less.
-
Yup, I've been pushing Netgear 10GigE for a long time now. I think that Dell has some decent 10GigE fiber switches for around $2K as well.