Backup device for local or colo storage
-
@coliver Possibly.
The biggest bottleneck with the existing backup solution is the server performing the work, which is just constantly getting hit.
Port bonding on the new setup would reduce some cost, at the price of reducing what we can run VM-wise since those ports would be tied up.
-
@DustinB3403 said:
@coliver Possibly.
The biggest bottleneck with the existing backup solution is the server performing the work, which is just constantly getting hit.
Port bonding on the new setup would reduce some cost, at the price of reducing what we can run VM-wise since those ports would be tied up.
The cost of an additional 4-port 1GbE card is minimal. You could easily add one to all of your systems for a fraction of the cost of a 10GbE switch and adapters.
-
I'm forking to a new thread. Will post a link shortly.
-
New topic discussing just the goals of this project.
http://mangolassi.it/topic/6453/backup-and-recovery-goals
-
@scottalanmiller said:
Wouldn't you carry off daily?
Sorry, just saw this. It's a nuisance to have to swap the tape or drive daily to do it. Our current plan is to carry off weekly.
-
@DustinB3403 said:
Cost consciousness.
Is there that much added value in doubling what we have for those "if" events?
Remember this post when you ask for a full second server to run your VM environment.
-
@DustinB3403 said:
@coliver Possibly.
The biggest bottleneck with the existing backup solution is the server performing the work, which is just constantly getting hit.
Port bonding on the new setup would reduce some cost, at the price of reducing what we can run VM-wise since those ports would be tied up.
What do you mean? You typically bond all of the NICs in a VM host together, and all of the VMs on the host share the pipe.
Next question: do you really use 800 Mb/s (realistic throughput from 1 Gb/s ports) on each server at the same time?
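For rough numbers, here is a minimal sketch of what an even split of one bonded pipe across VMs looks like. The ~800 Mb/s usable per 1 GbE link and the VM count are illustrative assumptions, not measurements:

```python
# Rough per-VM bandwidth share on a host where all NICs are bonded into one pipe.
# All numbers are illustrative assumptions, not measurements.

USABLE_PER_LINK_MBPS = 800   # realistic usable throughput per 1 GbE port
LINKS_IN_BOND = 4            # e.g. a 4-port 1 GbE card bonded together
VMS_ON_HOST = 10             # hypothetical VM count on the host

aggregate_mbps = USABLE_PER_LINK_MBPS * LINKS_IN_BOND
per_vm_mbps = aggregate_mbps / VMS_ON_HOST

print(f"Aggregate usable bandwidth: {aggregate_mbps} Mb/s ({aggregate_mbps / 1000:.1f} Gb/s)")
print(f"Even share per VM:          {per_vm_mbps:.0f} Mb/s")
```

In practice the VMs rarely all peak at the same time, which is exactly why sharing one big pipe tends to work out.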
-
I've never bonded all of the NICs as we haven't had the need for it.
In most cases we've simply allocated a specific NIC to a specific number of VMs.
-
Unless you need to leave bandwidth overhead for something, why split it?
It's just like how you always use OBR10 unless you have a specific reason not to.
-
Why bond when I'm still only capable of pushing 1 Gb/s at best?
-
@DustinB3403 said:
I've never bonded all of the NICs as we haven't had the need for it.
Aren't we seeing bottlenecks, though? Bonding is a standard best practice.
-
@DustinB3403 said:
Why bond when I'm still only capable of pushing 1 Gb/s at best?
What is limiting you to 1Gb/s if not the GigE link?
-
And you bond for failover, not just speed.
-
@Dashrender said:
What do you mean? You typically bond all of the NICs in a VM host together, and all of the VMs on the host share the pipe.
Up to four NICs.
-
The switches between all of the separate devices are what's limiting it, aren't they?
Plus, this is all existing equipment, which is weird. With the new equipment I can get all of this sorted.
-
Assuming the switches (possibly a new switch) understand link bonding (aggregation), they will treat the 4 lines as one.
So you have two servers, on the same switch, with 4 cables going to one server and 4 cables going to another. This would allow the servers to talk to each other at 4 Gb/s.
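To put that in backup terms, here is a minimal sketch of what the bonded link buys you for a bulk transfer between the two hosts. The backup size and the ~800 Mb/s usable-per-link figure are assumptions for illustration, not measurements:

```python
# Rough transfer-time comparison for moving a backup between two hosts,
# single 1 GbE link vs. a 4-link bond. All figures are illustrative assumptions.

BACKUP_SIZE_GB = 500          # hypothetical nightly backup size
USABLE_PER_LINK_GBPS = 0.8    # ~800 Mb/s realistic usable throughput per 1 GbE link

def transfer_hours(size_gb: float, rate_gbps: float) -> float:
    """Hours to move size_gb gigabytes at rate_gbps gigabits per second."""
    size_gbits = size_gb * 8
    return size_gbits / rate_gbps / 3600

print(f"Single 1 GbE link: {transfer_hours(BACKUP_SIZE_GB, USABLE_PER_LINK_GBPS):.1f} h")
print(f"4-link bond:       {transfer_hours(BACKUP_SIZE_GB, 4 * USABLE_PER_LINK_GBPS):.1f} h")
```

Worth noting that standard link aggregation hashes traffic per flow, so a single stream generally tops out at one link's speed; the aggregate figure applies when the transfer is spread across multiple streams.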
-
@Dashrender said:
Assuming the switches (possibly a new switch) understand link bonding (aggregation), they will treat the 4 lines as one.
So you have two servers, on the same switch, with 4 cables going to one server and 4 cables going to another. This would allow the servers to talk to each other at 4 Gb/s.
Wouldn't that really be 2.4 Gb/s, not 4 Gb/s, assuming you realistically only get 800 Mb/s?
-
@DustinB3403 said:
@Dashrender said:
Assuming the switches (possibly a new switch) understand link bonding (aggregation), they will treat the 4 lines as one.
So you have two servers, on the same switch, with 4 cables going to one server and 4 cables going to another. This would allow the servers to talk to each other at 4 Gb/s.
Wouldn't that really be 2.4 Gb/s, not 4 Gb/s, assuming you realistically only get 800 Mb/s?
LOL - yeah, but when you write it, you would write 4 Gb, because that's what the links are.
-
@DustinB3403 said:
@Dashrender said:
Assuming the switches (possibly a new switch) understand link bonding (aggregation), they will treat the 4 lines as one.
So you have two servers, on the same switch, with 4 cables going to one server and 4 cables going to another. This would allow the servers to talk to each other at 4 Gb/s.
Wouldn't that really be 2.4 Gb/s, not 4 Gb/s, assuming you realistically only get 800 Mb/s?
3.2Gb/s? Math fail.
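For the record, the arithmetic behind the correction, using the ~800 Mb/s-per-link figure from above as an assumption:

```python
# 4 bonded 1 GbE links at a realistic ~800 Mb/s usable each (illustrative figure)
links = 4
usable_mbps_per_link = 800

aggregate_mbps = links * usable_mbps_per_link
print(aggregate_mbps)         # 3200 Mb/s
print(aggregate_mbps / 1000)  # 3.2 Gb/s -- not 2.4
```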