    XenServer hyperconverged

    IT Discussion
    Tags: xenserver, xenserver 7, xen orchestra, hyperconvergence, hyperconverged
    • olivier

      I still have 27 days to stay on track 😉

    • olivier

      Just published a new blog post about the erasure code solution for XOSAN: https://xen-orchestra.com/blog/xenserver-hyperconverged-with-xosan/

      AKA a great way to combine security, available space and scalability 🙂

      Just bought some 10Gb network gear to dig into performance.
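
      For context, XOSAN sits on top of Gluster (mentioned later in this thread), where erasure coding is the "dispersed" volume type. A rough sketch of the trade-off the post describes, assuming made-up host names and brick paths (XOSAN drives all of this from Xen Orchestra, not by hand):

      # Hypothetical 3-brick dispersed volume: any single brick can be lost
      # and data stays available, while only ~1/3 of the raw space goes to
      # redundancy (instead of 1/2 with plain mirroring).
      gluster volume create xosan disperse 3 redundancy 1 \
          node1:/bricks/xosan node2:/bricks/xosan node3:/bricks/xosan
      gluster volume start xosan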

    • FATeknollogee

      This is so schweeeeet...
      Let's get cracking on the beta testing.....
      Where do I sign up? I've got a SuperMicro 2U quad-node box waiting & ready 😆

    • olivier

      The setup is still too manual to be open for a "large scale" test.

      But tell me more about your setup, and I can start thinking about the best mode for your case 😉

    • FATeknollogee

      I'll send that to you via email so I don't clutter this thread.

    • olivier

      Sure, go ahead.

    • olivier

      Article is out (as 5.12): https://xen-orchestra.com/blog/improving-xenserver-storage-performances-with-xosan/

      Doc updated: https://xen-orchestra.com/docs/xosan.html

    • R3dPand4 @olivier

      @olivier Have you tested how it behaves if you shut down both hosts and then fire them back up? Do you have either of them weighted or master/slave? Suppose you have to replace an entire node, do you just mount the XOSAN and it begins replicating? Sorry if I'm jumping way ahead, but this is very interesting.

    • olivier

      1. If you shut down all XOSAN VMs, your SR is unreachable. When you start them again, as soon as the minimal number of nodes is up, it will work again (even while auto-healing is running, it will be slower but still work in R/W).

      2. If you replace one node, you can do it live (we have a "replace" button in the UI!). The new node will replace the old one and get all the missing pieces via the heal process.

      In the end, there's nothing to do to make it work again 🙂
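
      For the curious, XOSAN is Gluster-based, so the heal and "replace" steps above map roughly onto the following commands (illustrative only; the XOSAN UI does this for you, and the volume, host and brick names are made up):

      # Watch the self-heal catch up once nodes come back online
      gluster volume heal xosan info

      # Conceptually what the "replace" button does: swap the dead brick for a
      # new one, then let the heal process copy the missing pieces onto it
      gluster volume replace-brick xosan \
          dead-host:/bricks/xosan new-host:/bricks/xosan commit force
      gluster volume heal xosan full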

    • Dashrender

      What if you shut down host 1 now, and host 2 an hour from now, then power on host 1? Will the system just run with the old data?

    • olivier

      @Dashrender Can you be more specific on the total number of nodes from the start?

    • DustinB3403 @olivier

      @olivier I think this is a 2 node setup that @Dashrender is discussing.

      Can the system scale to more than 2 nodes?

    • Dashrender @olivier

      @olivier said in XenServer hyperconverged:

      @Dashrender Can you be more specific on the total number of nodes from the start?

      Let's assume your picture: 6 total nodes, but only 2 copies of the data. So assume node 1 in orange is shut down, then 1 hour later node 2 in orange is shut down. What happens?

      How about a 2 node setup, i.e. with no other witnesses in the configuration?

    • olivier @DustinB3403

      @dustinb3403 said in XenServer hyperconverged:

      @olivier I think this is a 2 node setup that @Dashrender is discussing.

      Can the system scale to more than 2 nodes?

      This IS meant to scale (up to max pool size, 16 hosts, which is more a XenServer limit than ours 😛)

      @Dashrender Maybe the picture is not clear: each "disk" icon is not a disk but a XenServer host, so you have 6 hosts there. When you lose "enough" hosts (the 2 replicated hosts of the same "mirror" in the 6-host setup), it goes read-only. As soon as one of those 2 is back online, R/W is back.

      On a 2-node setup, there is an arbiter VM that acts as the witness. If you lose the host running the 2 VMs (one arbiter and one "normal"), you'll go read-only. No split-brain possible.
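
      The 2-node arbiter described here is standard Gluster behavior. A minimal sketch of what it corresponds to, with made-up names (in XOSAN the arbiter is just an extra small VM on one of the two hosts):

      # Two data bricks plus a metadata-only arbiter brick: the arbiter keeps
      # quorum without holding a third full copy of the data.
      gluster volume create xosan replica 3 arbiter 1 \
          host1:/bricks/xosan host2:/bricks/xosan arbiter-vm:/bricks/xosan
      # If a replica set drops below quorum, writes are refused (read-only),
      # which is what prevents a split brain.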

    • Dashrender

      In this picture
      https://i.imgur.com/VKOVnkU.png

      how many systems have the data on it? only 2?

    • DustinB3403 @olivier

      @olivier said in XenServer hyperconverged:

      When you lose "enough" hosts (the 2 replicated hosts of the same "mirror" in the 6-host setup), it goes read-only. As soon as one of those 2 is back online, R/W is back.

      The question is (I think): if you lost all hosts in the orange group, would XOSAN and the running VMs in the entire pool still be functional read/write until those servers are brought back online?

    • R3dPand4 @olivier

      @olivier So you just can't make any writes during this period of node failure/recovery? Are the writes cached? If so how much can be cached and for how long?

    • olivier @Dashrender

      @dashrender said in XenServer hyperconverged:

      In this picture
      https://i.imgur.com/VKOVnkU.png

      how many systems have the data on it? only 2?

      This is part of a distributed-replicated setup. In this "branch", you have 2 XS hosts with 100GiB of data on each, in a "RAID1"-like mirror.

      The other branches (not in the picture you displayed) are like a RAID0 on top.
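
      In Gluster terms, each "branch" is one replica pair (a subvolume), and distribution across the pairs is the RAID0-like layer on top. A hypothetical 6-host layout just to show how bricks pair up (not the actual XOSAN commands; all names invented):

      # Bricks are mirrored in the order listed: (host1,host2), (host3,host4)
      # and (host5,host6) are the RAID1-like branches, and files are spread
      # across those three branches, RAID0-style.
      gluster volume create xosan replica 2 \
          host1:/bricks/xosan host2:/bricks/xosan \
          host3:/bricks/xosan host4:/bricks/xosan \
          host5:/bricks/xosan host6:/bricks/xosan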

    • olivier

      The question is (I think): if you lost all hosts in the orange group, would XOSAN and the running VMs in the entire pool still be functional read/write until those servers are brought back online?

      Because the data is spread across all subvolumes (think RAID10), the whole thing will be read-only.

      You can avoid that if you decide NOT to stripe files across all subvolumes (which is the default behavior in Gluster, by the way), but that's NOT a good thing for VMs (heal time would be horrible, and the subvolumes wouldn't be balanced).
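
      Spreading a VM disk across subvolumes is typically done with Gluster's sharding feature, which slices big files into fixed-size chunks so they can be distributed and healed piece by piece. A sketch of the relevant options (volume name and shard size are just example values):

      # Without sharding (Gluster's default), each VM disk is a single huge file
      # that lives entirely on one replica pair.
      gluster volume set xosan features.shard on
      gluster volume set xosan features.shard-block-size 512MB
      # After an outage, only the shards actually written to need healing,
      # rather than the whole disk image.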

    • DustinB3403 @olivier

      So @olivier, on each host in an XOSAN pool, is there a dedicated witness VM?

      If so, that witness acts as the arbiter for that host, meaning that if the VM goes offline, the storage, RAM and CPU that host provides become unavailable.

      It doesn't mean that a VM running on that host wouldn't be able to move to either of the other 2 servers in the 3-server pool.

      Am I correct in thinking that the Orange, Yellow and Pink boxes are individual XS servers, each presenting 100GB to the pool?
