    Cannot decide between 1U servers for growing company

    IT Discussion
    • ntoxicator:

      Thank you. Wonderful explanation.

      • scottalanmiller:

        No problem 🙂

        Some extra info... other big providers of packaged Xen systems are Ubuntu, SUSE and Oracle. Big backers of KVM are Red Hat and IBM.

        Big clouds using Xen: Amazon, Rackspace and IBM
        Big clouds using KVM: Digital Ocean, Vultr

        Xen is more powerful "out of the box." KVM is more extensible.

        Xen is more performant for Linux workloads. KVM is more performant for Windows workloads. Both are super fast and performance is not normally a deciding factor.

        Besides Scale, lots of other vendors build on KVM as well, for similar reasons. One vendor that we work with regularly that uses KVM because of the ease of automation is Unitrends.

        • scottalanmiller:

          ProxMox I avoid, KVM I do not 😉 It's ProxMox themselves that are the issue there, not the fact that they are built on KVM.

          • ntoxicator:

            You the man. Amazing information here. Goes a long way.

            Do you think there would be an issue upgrading the current XenServer node to 6.5? Presently on 6.0.

            I have the 6.1 ISO sitting here right now that some other nodes were running, but I migrated them to Proxmox for testing/development.

            Always worried something will 'break'.

            • DustinB3403 @ntoxicator:

              @ntoxicator said:

              So if anyone can explain to me:

              To do away with centralized storage such as what we have now and what I've been moving to. I suppose this is what I've grown used to.

              In order to have localized storage at the node/hypervisor level, one or many of the hypervisors would be storing all the data and sharing it out over NFS? Then it's replicated between them, probably with DRBD storage?

              However, how would this be done with Citrix XenServer, for instance?

              So I just wrote up an entire quote on this for my org.

              Your XenServer hosts would have enough capacity (storage) to run everything you have today, plus room for growth. You build the XenServer installation on both hosts, and then configure them into a pool.

              This allows the VMs to migrate between the two (or more) hosts in the event you need to work on them.
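
              Purely as an illustration (not something from the thread itself), here is a minimal Python sketch of those two steps using the stock xe CLI; the address, credentials, VM name and host name are made-up placeholders.

                import subprocess

                # Hypothetical values - replace with your own pool master, credentials and names.
                MASTER_ADDRESS = "192.168.1.10"   # existing XenServer host that acts as pool master
                MASTER_USERNAME = "root"
                MASTER_PASSWORD = "changeme"

                def xe(*args):
                    """Run an xe CLI command on this host and return its trimmed stdout."""
                    result = subprocess.run(["xe", *args], capture_output=True, text=True, check=True)
                    return result.stdout.strip()

                # Run on the second host: join it to the existing pool.
                xe("pool-join",
                   f"master-address={MASTER_ADDRESS}",
                   f"master-username={MASTER_USERNAME}",
                   f"master-password={MASTER_PASSWORD}")

                # Later, from the pool: live-migrate a VM to the other host before maintenance.
                xe("vm-migrate", "vm=my-vm", "host=xenserver02", "live=true")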

              For free 2-Node HA, look into HA-Lizard.

              • DustinB3403 @ntoxicator:

                @ntoxicator said:

                But wouldn't all that storage replication STILL be handled over the 1GbE backbone?!

                Bond the Ethernet ports together or install a 10GbE NIC in each host. 🙂
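
                If it helps to picture it, bonding on XenServer is done through the xe CLI; a minimal Python sketch, assuming hypothetical host and NIC names (adjust the bond mode as needed):

                  import subprocess

                  HOST = "xenserver01"        # hypothetical host name
                  NICS = ("eth0", "eth1")     # the two physical NICs to bond

                  def xe(*args):
                      """Run an xe CLI command and return its trimmed stdout."""
                      return subprocess.run(["xe", *args], capture_output=True, text=True, check=True).stdout.strip()

                  # Create a network for the bond, look up each NIC's PIF UUID on this host,
                  # then bond the two PIFs onto that network.
                  network_uuid = xe("network-create", "name-label=bond0")
                  pif_uuids = [xe("pif-list", f"host-name-label={HOST}", f"device={nic}", "params=uuid", "--minimal")
                               for nic in NICS]
                  xe("bond-create", f"network-uuid={network_uuid}", "pif-uuids=" + ",".join(pif_uuids), "mode=balance-slb")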

                • ntoxicator:

                  Ethernet is bonded.

                  Could install 10GbE cards in the servers, but the NAS would still be the limitation, as it does not support PCIe cards.

                  • coliver @ntoxicator:

                    @ntoxicator said:

                    Ethernet is bonded.

                    Could install 10GbE cards in the servers, but the NAS would still be the limitation, as it does not support PCIe cards.

                    He's saying you should get rid of the NAS and have all storage on the servers.

                    • DustinB3403 @ntoxicator:

                      @ntoxicator said:

                      Ethernet is bonded.

                      Could install 10GbE cards in the servers, but the NAS would still be the limitation, as it does not support PCIe cards.

                      But your NAS wouldn't be used as a place for your VMs to reside. As a backup target, sure, but the NAS isn't needed once the storage is local on your Xen hosts.

                      • ntoxicator:

                        Understood. So I would have to move to local storage on the servers and utilize HA-Lizard's HA-iSCSI, or HA-Lizard on the node...

                        Appears to be a free module/software package - I was unaware of it. Is it proven?

                        The NAS is handing out iSCSI LUNs, as an FYI.

                        • ntoxicator:

                          Also, spec'ing 2 servers with internal storage and 4+ drives each... I see that as being more expensive.

                          The server vendors charge high amounts for storage disks, and I'm sure folks would recommend SAS drives over consumer 7200 RPM SATA.

                          I really like the HGST NAS drives. High MTBF and great speed.

                          • coliver @ntoxicator:

                            @ntoxicator said:

                            Also, spec'ing 2 servers with internal storage and 4+ drives each... I see that as being more expensive.

                            The server vendors charge high amounts for storage disks, and I'm sure folks would recommend SAS drives over consumer 7200 RPM SATA.

                            I really like the HGST NAS drives. High MTBF and great speed.

                            Vendors are selling both their name and the support that comes with the disks. You will probably find in the long run that having a decent warranty with replacement hard drives is going to even out over the life of the machine.

                            It's odd that you are talking about building an HA setup but then balk at the price of hard drives. That doesn't mesh well with what you are saying you want. Or am I reading too much into this?

                            • Aconboy @coliver:

                              @coliver Vendors also sell quite a few features that just aren't there in roll-your-own approaches - site-to-site replication, zero-footprint snapshots, etc.

                              • Aconboy @ntoxicator:

                                @ntoxicator on the HGST 1.8 drives - major firmware issues with those. Certain IO patterns can make them drop off the bus with no warning.

                                • ntoxicator:

                                  My biggest frustration is the stranglehold the server vendors put on you.

                                  I know the type of hardware I want to use for the application. They want to hold my hand and tell me this or that, or say this is what we have as a baseline and you have to do it this way.

                                  Just sell me the damn drive trays and let me populate the caddies on my own; it's not going to void the warranty. The only item affected would be the local disk array - that would be our responsibility. Big whoop.

                                  If the motherboard fails, or a fan fails, that's not the fault of customer-purchased hard drives.

                                  I suppose with that mindset, I'm better off with a Supermicro build.

                                  But Supermicro chassis build quality is NOT the same as, say, a nice Oracle server or HP/Cisco.

                                  • ntoxicator @Aconboy:

                                    @Aconboy said:

                                    HGST 1.8

                                    Wasn't talking about the HGST 1.8 drives. Was talking about the 0S03660.

                                    • coliver @ntoxicator:

                                      @ntoxicator said:

                                      My biggest frustration is the stranglehold the server vendors put on you.

                                      I know the type of hardware I want to use for the application. They want to hold my hand and tell me this or that, or say this is what we have as a baseline and you have to do it this way.

                                      Just sell me the damn drive trays and let me populate the caddies on my own; it's not going to void the warranty. The only item affected would be the local disk array - that would be our responsibility. Big whoop.

                                      If the motherboard fails, or a fan fails, that's not the fault of customer-purchased hard drives.

                                      I suppose with that mindset, I'm better off with a Supermicro build.

                                      But Supermicro chassis build quality is NOT the same as, say, a nice Oracle server or HP/Cisco.

                                      So, purchase two or three refurb chassis from xByte, then populate them with whatever drives you want. If I recall correctly, they will support the servers less the drives.

                                      • DustinB3403 @ntoxicator:

                                        @ntoxicator said:

                                        Just sell me the damn drive trays and let me populate the caddies on my own; it's not going to void the warranty.

                                        eBay...

                                        Or vendors like xByte, who, when you buy their SSDs, can also sell you the cages (or may even throw them in).

                                        • stacksofplates:

                                          Amazon has a bunch of sleds also. Usually not too badly priced.

                                          • scottalanmiller @ntoxicator:

                                            @ntoxicator said:

                                            Ethernet is bonded.

                                            Could install 10GbE cards in the servers, but the NAS would still be the limitation, as it does not support PCIe cards.

                                            Bonding doesn't work with iSCSI; you need MPIO there. It's NFS where you have to bond.
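
                                            For anyone following along, a minimal sketch of what MPIO looks like on a plain Linux initiator (open-iscsi plus dm-multipath; the portal IPs and IQN below are made-up placeholders): each portal gets its own iSCSI session, and dm-multipath merges them into a single device rather than relying on a bond.

                                              import subprocess

                                              # Hypothetical values - substitute your own SAN portal IPs and target IQN.
                                              PORTALS = ["10.0.0.10:3260", "10.0.1.10:3260"]   # one portal per NIC/subnet
                                              TARGET_IQN = "iqn.2016-01.com.example:storage.lun0"

                                              def run(cmd):
                                                  """Run a command, echoing it first, and fail loudly on error."""
                                                  print("+", " ".join(cmd))
                                                  subprocess.run(cmd, check=True)

                                              # Discover the target through each portal, then log in over each path.
                                              for portal in PORTALS:
                                                  run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal])
                                                  run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", portal, "--login"])

                                              # With dm-multipath installed and running, the sessions show up as
                                              # paths to a single multipath device; verify with:
                                              run(["multipath", "-ll"])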
