    XenServer hyperconverged

    IT Discussion
    Tags: xenserver, xenserver 7, xen orchestra, hyperconvergence, hyperconverged

    • R3dPand4
      last edited by

      @olivier Thank you for clarifying. I'm assuming this would apply in basically the same way to a 2-node cluster? One node goes down, writes are briefly suspended, writes resume on the active node, the failed node is replaced, and then the rebuild/healing process runs on the new node. How long do you expect rebuilds to take? I'm sure that's a loaded question because it's data dependent...

      • olivier @DustinB3403
        last edited by olivier

        @dustinb3403 No writes are lost; it's handled at the VM level (the VM's OS waits for an "ack" from the virtual HDD, which isn't answering, so it keeps waiting). Basically, the cluster says: "write commands won't be acknowledged until we've figured things out".

        So it's safe 🙂
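
        To illustrate those semantics, here's a toy Python sketch (my own illustration, not XOSAN code; storage_ack is a hypothetical stand-in for the virtual disk backend): the guest's write simply stays pending until the storage answers again.

        ```python
        import time

        def write_block(data: bytes, storage_ack, retry_interval: float = 0.5) -> None:
            """Toy model of the failover behaviour described above: the guest OS
            keeps the write pending until the virtual HDD acknowledges it, so a
            write issued during a node failure is delayed, never silently lost."""
            while not storage_ack(data):
                # No ack yet: the cluster is still sorting itself out, so just wait.
                time.sleep(retry_interval)
        ```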

        • olivier @R3dPand4
          last edited by olivier

          @r3dpand4 This is a good question. We made the choice to use "sharding", which means splitting your data into 512 MB blocks that are then replicated or spread.

          So the heal time is the time needed to fetch all the new/missing 512 MB blocks written while the node was down. It's pretty fast in the tests I've done.
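
          To make that concrete, here's a rough back-of-the-envelope sketch in Python (my own illustration, not XOSAN code; the ~1 GB/s replication throughput is a made-up figure). The point is that heal time scales with the number of 512 MB shards touched while the node was down, not with the total volume size.

          ```python
          SHARD_SIZE = 512 * 2**20  # 512 MiB shards, as described above

          def dirty_shards(writes):
              """Shard indices touched by (offset, size) writes issued while a node
              was down; only these shards need to be healed when it comes back."""
              touched = set()
              for offset, size in writes:
                  first = offset // SHARD_SIZE
                  last = (offset + size - 1) // SHARD_SIZE
                  touched.update(range(first, last + 1))
              return touched

          def heal_time_estimate(num_shards, throughput_bps):
              """Rough estimate: shards to re-fetch * 512 MiB / effective throughput."""
              return num_shards * SHARD_SIZE / throughput_bps

          # Three small writes landing in three different shards during the outage:
          writes_during_outage = [(0, 4096), (2**30, 8192), (10 * 2**30, 2**20)]
          touched = dirty_shards(writes_during_outage)
          print(len(touched))                           # 3 shards to heal, not the whole disk
          print(heal_time_estimate(len(touched), 1e9))  # ~1.6 s at ~1 GB/s (made-up figure)
          ```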

          • R3dPand4 @olivier
            last edited by

            @olivier So essentially just deduplication?

            • olivier @R3dPand4
              last edited by olivier

              @r3dpand4 That has nothing to do with deduplication. There are just chunks of files that are replicated or distributed-replicated (or dispersed, in disperse mode).

              By the way, nobody talks about this mode, but it's my favorite 😛 It's perfect, especially for large HDDs, thanks to the ability to lose any n disks in your cluster (n being the redundancy you choose). E.g. with 6 nodes:

              This is disperse 6 with redundancy 2 (like RAID6, if you prefer). Any 2 XenServer hosts can be destroyed and it will continue to work as usual:

              And in this case (6 with redundancy of 2), you'll be able to address 4/6ths of your total disk space!
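
              If it helps, the capacity arithmetic looks like this (a quick Python sketch of my own; the 100 GB per host matches the example discussed further down the thread):

              ```python
              def disperse_usable(per_host_gb, hosts, redundancy):
                  """Disperse n with redundancy r (RAID6-like): any r hosts can fail,
                  and (n - r)/n of the raw space is addressable."""
                  return per_host_gb * (hosts - redundancy)

              def replicated_usable(per_host_gb, hosts, replica_count):
                  """Distributed-replicated: every block is stored replica_count times."""
                  return per_host_gb * hosts / replica_count

              # 6 hosts with 100 GB each:
              print(disperse_usable(100, 6, 2))    # 400 GB usable (4/6ths), any 2 hosts can fail
              print(replicated_usable(100, 6, 2))  # 300 GB usable, 1 host per mirrored pair can fail
              ```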

              • olivier
                last edited by

                Here it is with improved pictures of XOSAN; I suppose it's clearer now:

                [Image: DISPERSE 6 (2)]

                [Image: DISTRIB-REP 3x2]

                What do you think?

                • DustinB3403 @olivier
                  last edited by

                  @olivier That picture makes it much clearer.

                  Each server provides 100 GB, and the servers are either standalone (disperse) or paired (dist. repl.).

                  • olivier @DustinB3403
                    last edited by

                    @dustinb3403 That's it, indeed 🙂

                    1. first picture: you can lose up to 2 hosts (any of them)
                    2. second picture: you can lose up to 3 hosts (1 per pair)
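
                    A quick way to check both claims (my own Python sketch; I'm assuming the 3x2 layout mirrors the hosts in the pairs shown in the second picture):

                    ```python
                    from itertools import combinations

                    HOSTS = range(6)
                    PAIRS = [(0, 1), (2, 3), (4, 5)]  # assumed mirrored pairs in the 3x2 layout

                    def disperse_survives(failed):
                        # Disperse 6 with redundancy 2: any 2 (or fewer) failures are fine.
                        return len(failed) <= 2

                    def distrep_survives(failed):
                        # Dist.-replicated 3x2: fine as long as no pair loses both members.
                        return all(not set(pair) <= set(failed) for pair in PAIRS)

                    two_down = list(combinations(HOSTS, 2))
                    print(sum(disperse_survives(f) for f in two_down))  # 15 of 15 two-host failures survived
                    print(sum(distrep_survives(f) for f in two_down))   # 12 of 15 (losing both halves of a pair is fatal)
                    print(distrep_survives({0, 2, 4}))                  # True: 3 hosts down, 1 per pair
                    ```
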
                    • FATeknollogee
                      last edited by

                      What is the difference in performance between the two options?

                      • olivier @FATeknollogee
                        last edited by olivier

                        @fateknollogee said in XenServer hyperconverged:

                        What is the difference in performance between the two options?

                        Disperse requires more compute power because it uses a more complex algorithm (based on Reed-Solomon erasure coding). So it's slower than replication, but that's not a big deal if you are using HDDs.

                        However, if you are using SSDs, disperse will be a bottleneck, so it's better to go with replication.

                        Ideal solution? Disperse for large storage space on HDDs, and replicated on SSDs… at the same time (using tiering, which will be available soon). Chunks that are read often will be promoted to the replicated SSD storage automatically (until it's almost full). If more frequently accessed chunks appear later, some chunks will be demoted to the "slower" tier and replaced by the new hot ones.
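
                        In other words, something like this (a conceptual Python sketch of the promote/demote idea only, not how the real tiering is implemented; the 4-chunk SSD tier size is made up):

                        ```python
                        SSD_TIER_CHUNKS = 4  # hypothetical tiny SSD tier, measured in chunks

                        def rebalance(read_counts):
                            """Keep the most frequently read chunks on the SSD tier; everything
                            else stays on (or is demoted back to) the slower HDD tier."""
                            hottest = sorted(read_counts, key=read_counts.get, reverse=True)
                            return set(hottest[:SSD_TIER_CHUNKS])

                        reads = {"a": 50, "b": 40, "c": 30, "d": 20, "e": 10}
                        ssd = rebalance(reads)  # {'a', 'b', 'c', 'd'} promoted
                        reads["e"] = 100        # "e" becomes the new hot chunk
                        ssd = rebalance(reads)  # "e" promoted, "d" demoted to the HDD tier
                        print(sorted(ssd))      # ['a', 'b', 'c', 'e']
                        ```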

                        • olivier
                          last edited by

                          We validated our first provider: https://xen-orchestra.com/blog/xosan-on-10gbps-io/

                          Next? Probably a hardware provider 🙂

                          • JaredBusch @olivier
                            last edited by

                            @olivier said in XenServer hyperconverged:

                            We validated our first provider: https://xen-orchestra.com/blog/xosan-on-10gbps-io/

                            Next? Probably a hardware provider 🙂

                            Congrats
