    BackUp device for local or colo storage

    IT Discussion
    backup disaster recovery
    • DustinB3403

      Well, even in that case I would still look at a bigger switch with 8 to 12 ports, possibly with some level of management on it.

      • coliver

        Can you do port bonding? I thought I saw someone suggest that but didn't see your response. That would be a really good stopgap solution for now.

        • DustinB3403 @coliver

          @coliver Possibly.

          The biggest bottleneck with the existing backup solution is the server performing the work, which is just constantly getting hit.

          Port bonding on the new setup would reduce some cost, at the price of reducing what we can run VM-wise, since those ports would be tied up.

          • coliver @DustinB3403

            @DustinB3403 said:

            @coliver Possibly.

            The biggest bottleneck with the existing backup solution is the server performing the work, which is just constantly getting hit.

            Port bonding on the new setup would reduce some cost, at the price of reducing what we can run VM-wise, since those ports would be tied up.

            The cost of an additional 4-port 1 GbE card is minimal. You could easily add that to all of your systems for a fraction of the cost of the 10 GbE switch and adapters.

            • DustinB3403

              I'm forking to a new thread. Will post a link shortly.

              • DustinB3403

                New topic discussing just the goals of this project.
                http://mangolassi.it/topic/6453/backup-and-recovery-goals

                • DustinB3403 @scottalanmiller

                  @scottalanmiller said:

                  Wouldn't you carry off daily?

                  Sorry, just saw this; it's a nuisance to have to swap a tape or drive daily to do it. Our current plan is to carry off weekly.

                  • Dashrender @DustinB3403

                    @DustinB3403 said:

                    Cost consciousness.

                    Is there that much added value in doubling what we have for those "if" events?

                    Remember this post when you ask for a full second server to run your VM environment.

                    • Dashrender @DustinB3403

                      @DustinB3403 said:

                      @coliver Possibly.

                      The biggest bottleneck with the existing backup solution is the server performing the work, which is just constantly getting hit.

                      Port bonding on the new setup would reduce some cost, at the price of reducing what we can run VM-wise, since those ports would be tied up.

                      What do you mean? You typically bond all of the NICs in a VM host together, and all of the VMs on the host share the pipe.

                      Next question: do you really use 800 Mb/s (realistic throughput from 1 Gb/s ports) on each server at the same time?
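A rough back-of-the-envelope sketch of that question. The 800 Mb/s figure is the realistic-use estimate quoted above; the data sizes are illustrative assumptions, not measurements from this environment:

```python
# Rough backup-window math: how long a job needs to sustain a given
# rate. 800 Mb/s is the realistic per-port estimate from the thread;
# the 1 TB backup size is a hypothetical example.
def transfer_hours(data_gb: float, rate_mbps: float = 800) -> float:
    """Hours to move data_gb gigabytes at rate_mbps megabits per second."""
    megabits = data_gb * 8 * 1000   # GB -> megabits (decimal units)
    return megabits / rate_mbps / 3600

print(round(transfer_hours(1000), 1))   # ~1 TB at 800 Mb/s -> 2.8 hours
```

Unless a backup or replication job of that size is running concurrently on each host, a single port rarely stays saturated.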

                      • DustinB3403

                        I've never bonded all of the NICs as we haven't had the need for it.

                        In most cases we've simply allocated a specific NIC to a specific number of VMs.

                        • Dashrender

                          Unless you need to leave bandwidth overhead for something, why split it?

                          It's just like how you always use OBR10 unless you have a specific reason not to.

                          • DustinB3403

                            Why bond when I'm still only capable of pushing 1 Gb/s at best?

                            • scottalanmiller @DustinB3403

                              @DustinB3403 said:

                              I've never bonded all of the NICs as we haven't had the need for it.

                              Aren't we seeing bottlenecks, though? Bonding is a standard best practice.

                              • scottalanmiller @DustinB3403

                                @DustinB3403 said:

                                Why bond when I'm still only capable of pushing 1 Gb/s at best?

                                What is limiting you to 1 Gb/s, if not the GigE link?

                                • scottalanmiller

                                  And you bond for failover, not just speed.
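For reference, the Linux bonding driver covers both behaviours as numbered modes; the sketch below just tabulates them. The mode numbers and names come from the kernel's bonding driver; the one-line summaries are my own paraphrase:

```python
# Linux bonding driver modes: some exist for throughput, some purely
# for failover, which is the point being made above.
BONDING_MODES = {
    0: ("balance-rr", "round-robin striping: throughput plus failover"),
    1: ("active-backup", "pure failover: one link active at a time"),
    2: ("balance-xor", "hash-based load balancing plus failover"),
    3: ("broadcast", "transmit on every link: fault tolerance"),
    4: ("802.3ad", "LACP aggregation: requires switch support"),
    5: ("balance-tlb", "adaptive transmit load balancing"),
    6: ("balance-alb", "adaptive transmit and receive load balancing"),
}

for number, (name, summary) in sorted(BONDING_MODES.items()):
    print(f"mode {number}: {name:13s} {summary}")
```

Note that only 802.3ad (mode 4) needs the switch to participate; active-backup works with any switches.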

                                  • scottalanmiller @Dashrender

                                    @Dashrender said:

                                    What do you mean? You typically bond all of the NICs in a VM host together, and all of the VMs on the host share the pipe.

                                    Up to four NICs.

                                    • DustinB3403

                                      The switches are between all of the separate devices, aren't they?

                                      Plus, this is all existing equipment, which is weird. With the new equipment I can get all of this sorted.

                                      • Dashrender

                                        Assuming the switches (possibly a new switch) understand link bonding (aggregation), they will treat the 4 lines as one.

                                        So you have two servers on the same switch, with 4 cables going to one server and 4 cables going to the other. This would allow the servers to talk to each other at 4 Gb/s.

                                        • scottalanmiller @DustinB3403

                                          @DustinB3403 said:

                                          The switches are between all of the separate devices, aren't they?

                                          Yes

                                          • DustinB3403 @Dashrender

                                            @Dashrender said:

                                            Assuming the switches (possibly a new switch) understand link bonding (aggregation), they will treat the 4 lines as one.

                                            So you have two servers on the same switch, with 4 cables going to one server and 4 cables going to the other. This would allow the servers to talk to each other at 4 Gb/s.

                                            Wouldn't that really be 2.4 Gb/s, not 4 Gb/s, assuming you realistically only get 800 Mb/s?
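Checking that arithmetic, using the ~800 Mb/s usable-per-port estimate from earlier in the thread (note also that under 802.3ad a single flow hashes onto one link, so any one stream is still capped at a single port's speed):

```python
# Aggregate capacity of a 4-port 1 GbE bond at the assumed
# realistic per-port rate of 800 Mb/s.
def aggregate_mbps(ports: int, per_port_mbps: int = 800) -> int:
    """Usable megabits/s across a bond of `ports` 1 GbE links."""
    return ports * per_port_mbps

print(aggregate_mbps(4))   # 3200 Mb/s, i.e. 3.2 Gb/s vs 4.0 Gb/s nominal
```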
