
    Cannot decide between 1U servers for growing company

    IT Discussion
scottalanmiller @coliver

@coliver said:

@ntoxicator said:

In my opinion, there would be more overhead: iSCSI LUN attached to the Xen hypervisor > VM > attached as a local disk. Unless pass-through?

Slightly more overhead... probably an immeasurable amount. At the same time, you are going against best practices and defeating many of the advantages of virtualization in one fell swoop by not attaching the storage to your hypervisor and presenting a virtual disk to the VM.

As I was writing out the downsides, I'm not actually sure that it is more overhead. Because the iSCSI has to be processed in software by the VM rather than in hardware by the host, there is more network overhead in doing it in the guest.

coliver @scottalanmiller

@scottalanmiller said:

As I was writing out the downsides, I'm not actually sure that it is more overhead. Because the iSCSI has to be processed in software by the VM rather than in hardware by the host, there is more network overhead in doing it in the guest.

Right, I agree with this. I assumed that it would be slightly more processing overhead for the hypervisor, but since it would be doing it anyway it wouldn't be anything additional.

scottalanmiller @ntoxicator

@ntoxicator said:

The goal was to migrate ALL data off the old NAS to the new larger NAS. But due to limitations and the storage size growing so rapidly, it became too difficult.

XenServer should be able to do that with no downtime. Did you look into its features for moving storage while it is running?
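For reference, on any reasonably recent XenServer this can be driven from the CLI as well as XenCenter. A rough sketch, assuming Storage XenMotion is available (XenServer 6.1+) and that both SRs are attached to the pool; all names and UUIDs below are placeholders:

```shell
# List storage repositories to find the destination SR's UUID
xe sr-list params=uuid,name-label

# Find the VDI backing the VM's disk
xe vbd-list vm-name-label=<vm-name> params=vdi-uuid,device

# Live-migrate the VDI to the new SR while the VM keeps running
xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<destination-sr-uuid>
```

These commands run against the hypervisor itself, so treat them as a sketch to adapt rather than something to paste in blindly.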

scottalanmiller @coliver

@coliver said:

Right, I agree with this. I assumed that it would be slightly more processing overhead for the hypervisor, but since it would be doing it anyway it wouldn't be anything additional.

Networking in the host is more efficient than in the guest. And both networking and storage are more efficient in Linux and Xen than in Windows. So double bonus on efficiency.

ntoxicator

I've moved VMs on Citrix XenServer, and moved their storage to another LUN at the time (when I installed the 2nd Synology).

It saturated the network.

The current SuperMicro 1U server only has 2 Intel NICs. I have them bonded via XenCenter with LACP enabled.
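A note on that bond: XenCenter is doing roughly the following under the covers, and the same bond can be built from the host CLI. A sketch with placeholder UUIDs, assuming both NICs are on the same host:

```shell
# Find the PIF UUIDs of the two physical NICs to be bonded
xe pif-list params=uuid,device,host-name-label

# Create a network for the bond, then the bond itself in LACP mode
xe network-create name-label=bond0
xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid> mode=lacp
```

Worth remembering that LACP balances per flow, so a single iSCSI or NFS stream still tops out at one link's bandwidth; bonding helps aggregate traffic, not one big storage migration.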

Aconboy @coliver

@coliver said:

Right, I agree with this. I assumed that it would be slightly more processing overhead for the hypervisor, but since it would be doing it anyway it wouldn't be anything additional.

Guys, this is largely the reason we architected HC3 the way we did: to give you the flexibility and HA of SAN/NAS without the complexity or overhead of VSAs and storage protocols. This is also why we built it specifically for the SMB and mid-market, at a price that makes sense for our target market (not trying to sound too "salesy", but this is exactly why we built the platform).

ntoxicator

Also, my goal was to migrate to NFS storage away from iSCSI, as dealing with the raw image or .qcow2 image file is a hell of a lot easier.
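If the target is KVM-style NFS storage where disks live as plain files, the conversion itself is a one-liner with qemu-img. A sketch with placeholder filenames and mount point:

```shell
# Inspect the source image first
qemu-img info vm-disk.raw

# Convert raw to qcow2, writing directly onto the NFS mount
qemu-img convert -f raw -O qcow2 vm-disk.raw /mnt/nfs/vm-disk.qcow2

# Sanity-check the result
qemu-img info /mnt/nfs/vm-disk.qcow2
```

(Note that XenServer itself stores VHDs on NFS SRs rather than qcow2; the above applies to KVM/libvirt-style setups.)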

ntoxicator

@Aconboy said:

HC3

How long has HC3 from Scale been available? Today is my first time hearing of and being introduced to the option.

Aconboy @ntoxicator

@ntoxicator said:

@Aconboy said:

HC3

How long has HC3 from Scale been available? Today is my first time hearing of and being introduced to the option.

We began it in 2008, and first customer ship was in late 2011. We have north of 5,000 units in the field across 1,800 or so customer sites. Take a look at www.scalecomputing.com

dafyre

The Scale systems are excellent. I know NTG has one. I worked with their systems a couple of years ago, and the performance was night and day versus VMware and a similarly sized SAN. Their systems work really well.

scottalanmiller @ntoxicator

@ntoxicator said:

I've moved VMs on Citrix XenServer, and moved their storage to another LUN at the time (when I installed the 2nd Synology).

It saturated the network.

The current SuperMicro 1U server only has 2 Intel NICs. I have them bonded via XenCenter with LACP enabled.

Yeah, storage migrations will do that 🙂 That's why you want block storage on a dedicated SAN if possible, so that it uses its own "back channel" and doesn't impact other things.

scottalanmiller @ntoxicator

@ntoxicator said:

Also, my goal was to migrate to NFS storage away from iSCSI, as dealing with the raw image or .qcow2 image file is a hell of a lot easier.

Agreed and good plan 🙂

ntoxicator

I had the Synology NAS set up as block-level storage, with the volume serving out the iSCSI LUNs. Mehhhh, the complications! Lol.

This is why I was wanting to move to an all-new design and setup, being that I already essentially have the data on a centralized setup.

Could I get away with dual Synology 12-bay NAS units (running in HA/replication)? Probably.

I was thinking about having a 10GbE backbone/interconnect for the NAS + the VM node servers, so all that traffic rides on the 10GbE backbone and would not touch the 1GbE switches.

scottalanmiller @ntoxicator

@ntoxicator said:

@Aconboy said:

HC3

How long has HC3 from Scale been available? Today is my first time hearing of and being introduced to the option.

Quite some time; they aren't new. They were a storage vendor before moving into hyperconvergence, but they are one of the leaders in the HC space. They've been around longer than the terminology 🙂 HC is just starting to hit its stride in the market, though, so you'll start hearing about Scale and their competitors more and more in the near future.

scottalanmiller @dafyre

@dafyre said:

The Scale systems are excellent. I know NTG has one. I worked with their systems a couple of years ago, and the performance was night and day versus VMware and a similarly sized SAN. Their systems work really well.

Scale especially kicks butt for Windows performance because of their stack.

Aconboy @ntoxicator

@ntoxicator - yup, we do that too: 10gig out of band from the LAN path for storage stack handling and cluster self-awareness.

scottalanmiller @ntoxicator

@ntoxicator said:

Could I get away with dual Synology 12-bay NAS units (running in HA/replication)?

Well, here is the issue there... If you are doing this as a traditional NAS (NFS or SMB) for file shares, like mapped Windows drives or automounted home directories for Linux users, this works reliably and beautifully. The failover is smooth and transparent.

From shops that have tested this with virtualization, the failover is not fast enough and the VMs typically fail, so you don't get the failover that you are hoping for but instead an outage with the potential of corruption. The failover is just not fast enough (at least in real-world tested scenarios) to use with VM HA.

scottalanmiller @Aconboy

@Aconboy said:

@ntoxicator - yup, we do that too: 10gig out of band from the LAN path for storage stack handling and cluster self-awareness.

Yeah, we have a 10GigE fiber switching stack just to handle our OOB Scale communications! Talk about throughput capabilities!

ntoxicator

So if anyone can explain it to me: do away with centralized storage, such as what we have now and what I've been moving toward? I suppose centralized is just what I've grown used to.

To have localized storage at the node/hypervisor level, one or many of the hypervisors would be storing all the data and sharing out the NFS? Then it's replicated between them, probably with DRBD storage?

However, how would this be done with Citrix XenServer, for instance?
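One common way to sketch that on plain Linux (not XenServer-specific, and only one of several patterns): two nodes each keep a local disk in sync with DRBD, and whichever node is currently primary exports the filesystem over NFS. A hypothetical two-node resource definition, with hostnames, disks, and addresses as placeholders:

```
# /etc/drbd.d/vmstore.res - hypothetical two-node DRBD resource
resource vmstore {
    protocol C;               # synchronous replication
    device    /dev/drbd0;
    disk      /dev/sdb1;      # local backing disk on each node
    meta-disk internal;

    on node1 {
        address 10.0.0.1:7789;   # dedicated replication link
    }
    on node2 {
        address 10.0.0.2:7789;
    }
}
```

The filesystem on /dev/drbd0 gets exported over NFS, typically with something like Pacemaker moving the export and a floating IP on failover, and the hypervisors then attach that NFS share as their storage repository.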

Dashrender @ntoxicator

@ntoxicator said:

@scottalanmiller

Essentially what I was looking to do was KVM/VMs with complete HA.

I'm uncertain about keeping data local to individual servers, maybe because I have no experience with localized storage in an HA environment? It's all been shared centralized storage.

But you don't have HA today. At least not at the storage level.
