
    XenServer hyperconverged

    IT Discussion
    Tags: xenserver, xenserver 7, xen orchestra, hyperconvergence, hyperconverged
    111 Posts 14 Posters 22.8k Views
    • FATeknollogeeF
      FATeknollogee @olivier
      last edited by

      @olivier
      Very nice work.
      I'm claiming my place on the beta line.

      1 Reply Last reply Reply Quote 0
      • black3dynamiteB
        black3dynamite @olivier
        last edited by

        @olivier
        Any drawbacks to using the default storage type, LVM?

        olivierO 1 Reply Last reply Reply Quote 1
        • olivierO
          olivier @black3dynamite
          last edited by

          @black3dynamite I'm not sure I understand the question.

          So far, the "stack" is:

          • Local Storage in LVM (created during the XS install)
          • on top of that, a big data disk used by a VM fills it
          • the VM will expose this data disk
          • XenServer will mount this data disk and create a file-level SR on it
          • VMs will use this SR

          It sounds like a ton of extra layers, but it's the easiest approach I found after a lot of tests (you can see it as a compromise between modifying the host too deeply to reduce the layers vs. not modifying anything on the host but having more complexity to handle at the VM level). You can consider it a "hybrid" approach.
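
          For illustration, here is a minimal sketch of how that stack could be wired up with standard xe commands (my own hedged example: the names, the 500GiB size and the NFS export are made up, and the real XOSAN tooling may do this differently):

          # 1. Carve a big data disk out of the local LVM SR
          xe vdi-create sr-uuid=<local-lvm-sr-uuid> name-label=XOSAN-data type=user virtual-size=500GiB
          # 2. Attach it to the storage VM that will expose it
          xe vbd-create vm-uuid=<storage-vm-uuid> vdi-uuid=<vdi-uuid> device=1
          xe vbd-plug uuid=<vbd-uuid>
          # 3. The VM exposes that disk (e.g. over NFS); XenServer mounts it as a shared, file-level SR used by regular VMs
          xe sr-create name-label=XOSAN shared=true content-type=user type=nfs device-config:server=<storage-vm-ip> device-config:serverpath=/export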

          Ideally, XenServer could be modified directly to allow this (like VMware does with VSAN) and expose the configuration via XAPI.

          I think that if we (the XO project) show the way, it could (maybe) trigger some interest on Citrix's side (they focus only on XenDesktop/XenApp, but hyperconvergence makes sense even there).

          black3dynamiteB 1 Reply Last reply Reply Quote 2
          • black3dynamiteB
            black3dynamite @olivier
            last edited by black3dynamite

            @olivier
            The question was about you using EXT (thin) shared storage instead of LVM (thick) for XenServer.

            olivierO 1 Reply Last reply Reply Quote 0
            • olivierO
              olivier @black3dynamite
              last edited by

              @black3dynamite I can't use LVM because it's block-based; I can only work with a file-level backend.

              I did try playing with block storage; performance was also decent (a new layer, so a small extra overhead), but I hit a big issue in certain cases. It was also less scalable in "more than 3 hosts" scenarios.
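
              For reference (my note, not part of the original post), you can check which backend each SR on a host uses; lvm, lvmoiscsi and lvmohba SRs are block-based, while ext and nfs SRs store VHD files and are therefore file-level:

              # list every SR with its type to see which are block-based vs. file-level
              xe sr-list params=name-label,type,content-type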

              1 Reply Last reply Reply Quote 1
              • DanpD
                Danp @olivier
                last edited by

                @olivier Sounds promising. Can you elaborate on how adding the extra overhead of XOSAN would yield a performance increase of 40% / 200%?

                olivierO 1 Reply Last reply Reply Quote 0
                • olivierO
                  olivier @Danp
                  last edited by

                  @Danp That's because, despite the added layers, some operations are done locally and on an SSD (vs. the existing NAS, where everything is done remotely).

                  It's more indicative than a real apples-to-apples benchmark. The idea here is to show that the system would be usable; performance is not the main objective.

                  1 Reply Last reply Reply Quote 3
                  • olivierO
                    olivier
                    last edited by olivier

                    Blog post available: https://xen-orchestra.com/blog/xenserver-hyperconverged/

                    (image: hyperpool.jpg)

                    1 Reply Last reply Reply Quote 3
                    • olivierO
                      olivier
                      last edited by

                      I also updated the benchmarks with CrystalDiskMark working on a 4GiB file (to avoid the ZFS cache). The difference in performance is now huge, so the impact of the replication is not that bad in the end.
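
                      As a side note (my addition, not olivier's actual setup): a roughly equivalent cache-busting test from inside a Linux VM could use fio with direct I/O on a file bigger than the cache, for example:

                      # hypothetical fio run; adjust the path and size to your SR and RAM
                      fio --name=randrw-test --filename=/mnt/test/bench.dat --size=4G --bs=4k --rw=randrw --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based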

                      scottalanmillerS 1 Reply Last reply Reply Quote 3
                      • scottalanmillerS
                        scottalanmiller @olivier
                        last edited by

                        @olivier said in XenServer hyperconverged:

                        I also updated the benchmarks with CrystalDiskMark working on a 4GiB file (to avoid the ZFS cache). The difference in performance is now huge, so the impact of the replication is not that bad in the end.

                        Awesome! This is all very exciting.

                        1 Reply Last reply Reply Quote 2
                        • hobbit666H
                          hobbit666
                          last edited by

                          Really looking forward to playing with this

                          1 Reply Last reply Reply Quote 0
                          • vhinzsanchezV
                            vhinzsanchez
                            last edited by vhinzsanchez

                            Just bumping up. Hopefully, Olivier will have some favorable update. 🤤

                            EDIT: corrected olivier...my apologies 😑

                            1 Reply Last reply Reply Quote 0
                            • olivierO
                              olivier
                              last edited by olivier

                              Well, sort of 😉 This command is now working:

                              xe sr-create name-label=XOSAN shared=true content-type=user type=xosan
                              

                              There's still a lot of work between this and a viable product; I'm in the middle of testing the solution in terms of resilience and overhead. I also need a layer of glue to at least semi-automate the deployment of XOSAN on a pool, otherwise I'll spend too much time doing it manually for each beta tester ^^
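
                              To give an idea of the kind of glue meant here, a purely hypothetical sketch (not the actual XOSAN installer; it assumes the xosan SR type is already available on every pool member):

                              #!/bin/bash
                              # Create the shared XOSAN SR once on the pool master,
                              # then make sure every host's PBD for it is plugged.
                              set -e
                              SR_UUID=$(xe sr-create name-label=XOSAN shared=true content-type=user type=xosan)
                              echo "Created SR: ${SR_UUID}"
                              for PBD in $(xe pbd-list sr-uuid="${SR_UUID}" currently-attached=false params=uuid --minimal | tr ',' ' '); do
                                  xe pbd-plug uuid="${PBD}"
                              done
                              xe pbd-list sr-uuid="${SR_UUID}" params=host-uuid,currently-attached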

                              Anyway, developing a storage solution on the current storage architecture of XenServer is really a pain. Eager to see SMAPIv3 in action 😄

                              vhinzsanchezV 1 Reply Last reply Reply Quote 3
                              • vhinzsanchezV
                                vhinzsanchez @olivier
                                last edited by

                                Thanks @olivier

                                Just finished reading your blog, including your response to the comment referring to January for the beta... I think it's too early in January. Getting excited here just by reading. Nice idea/product, btw.

                                More power!

                                1 Reply Last reply Reply Quote 1
                                • olivierO
                                  olivier
                                  last edited by

                                  I still have 27 days to stay on track 😉

                                  1 Reply Last reply Reply Quote 4
                                  • olivierO
                                    olivier
                                    last edited by

                                    Just a new blog post talking about Erasure Code solution for XOSAN: https://xen-orchestra.com/blog/xenserver-hyperconverged-with-xosan/

                                    AKA a great solution for balancing security, available space, and scalability 🙂
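
                                    For a rough sense of the trade-off (my own back-of-the-envelope, not figures from the blog post): with an erasure-coded layout of k data + m redundancy fragments spread across k + m hosts, usable capacity is k / (k + m) and the pool tolerates m host failures. For example, 4+2 gives ~67% usable space with 2-host tolerance, versus 50% for 2-way replication (1-host tolerance) and ~33% for 3-way replication (2-host tolerance).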

                                    Just bought some 10Gb network gear to dig into perfs.

                                    1 Reply Last reply Reply Quote 4
                                    • FATeknollogeeF
                                      FATeknollogee
                                      last edited by

                                      This is so schweeeeet...
                                      Let's get cracking on the beta testing.....
                                      Where do I sign up? I've got a SuperMicro 2U quad-node box waiting & ready 😆

                                      1 Reply Last reply Reply Quote 0
                                      • olivierO
                                        olivier
                                        last edited by

                                        The setup is still too manual to be open for a "large scale" test.

                                        But tell me more about your setup, and I can start thinking about what the best mode would be in your case 😉

                                        1 Reply Last reply Reply Quote 0
                                        • FATeknollogeeF
                                          FATeknollogee
                                          last edited by

                                          I'll send that to you via email so I don't clutter this thread

                                          1 Reply Last reply Reply Quote 0
                                          • olivierO
                                            olivier
                                            last edited by

                                            Sure, go ahead.

                                            1 Reply Last reply Reply Quote 0