XenServer hyperconverged
-
@olivier
Very nice work.
I'm claiming my place in the beta line.
-
@olivier
Are there any drawbacks to using the default LVM storage type?
-
@black3dynamite I'm not sure I understand the question.
So far, the "stack" is:
- Local Storage in LVM (created during XS install)
- on top of that, a big data disk that fills it, attached to a VM
- the VM will expose this data disk
- XenServer will mount this data disk and create a file level SR on it
- VMs will use this SR
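The last two steps of that stack might look roughly like this from the XenServer CLI; this is a hypothetical sketch, since the post doesn't say how the VM exposes the disk back to the host, and the device path is a placeholder:

```shell
# Assumption: the storage VM's data disk has been exposed back to the
# host and appears as /dev/xvdb (placeholder -- the actual transport
# is not specified above).

# Create a file-level (ext, thin-provisioned) SR on that disk:
xe sr-create name-label="XOSAN local" type=ext content-type=user \
  shared=false device-config:device=/dev/xvdb
```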
It sounds like a ton of extra layers, but it's the easiest approach I found after a lot of tests (you can see it as a compromise between modifying the host too deeply to reduce the layers vs. not modifying the host at all but having more complexity to handle at the VM level). You can consider it a "hybrid" approach.
Ideally, XenServer could be modified directly to allow this (like VMware does with vSAN), and expose the configuration via XAPI.
I think if we (the XO project) show the way, it could (maybe) trigger some interest on the Citrix side (which is only into XenDesktop/XenApp, but hyperconvergence makes sense even there).
-
@olivier
The question is based on your using EXT (thin) shared storage instead of LVM (thick) for XenServer.
-
@black3dynamite I can't use LVM because it's block-based. I can only work with a file-level backend.
I did try playing with block storage; performance was also decent (one more layer means a small extra overhead). But I hit a big issue in certain cases, and it was also less scalable in the "more than 3 hosts" scenario.
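For reference, the thick vs. thin distinction comes down to the SR type chosen at creation time; a minimal sketch, assuming a spare local disk at /dev/sdb (placeholder):

```shell
# Default local storage: LVM-based, block-level, thick-provisioned
xe sr-create name-label="Local LVM" type=lvm content-type=user \
  device-config:device=/dev/sdb

# File-level alternative: EXT-based, thin-provisioned -- the kind of
# backend needed here, since each VDI becomes a file on the filesystem
xe sr-create name-label="Local EXT" type=ext content-type=user \
  device-config:device=/dev/sdb
```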
-
@olivier Sounds promising. Can you elaborate on how adding the additional overhead of XOSAN would yield a performance increase of 40% / 200%?
-
@Danp That's because despite the added layers, we perform some operations locally and also on an SSD (vs. the existing NAS, where everything is done remotely).
It's more indicative than a real apples-to-apples benchmark. The idea here is to show that the system would be usable; performance is not the main objective at this stage.
-
Blog post available: https://xen-orchestra.com/blog/xenserver-hyperconverged/
-
I also updated the benchmarks with CrystalDiskMark working on a 4 GiB file (avoiding the ZFS cache). The performance difference is now huge, so the impact of the replication is not that bad in the end.
-
@olivier said in XenServer hyperconverged:
I also updated the benchmarks with CrystalDiskMark working on a 4 GiB file (avoiding the ZFS cache). The performance difference is now huge, so the impact of the replication is not that bad in the end.
Awesome! This is all very exciting.
-
Really looking forward to playing with this
-
Just bumping this up. Hopefully, Olivier will have a favorable update.
EDIT: corrected olivier...my apologies
-
Well, sort of. This command is now working:
xe sr-create name-label=XOSAN shared=true content-type=user type=xosan
There's still a lot of work between this and a viable product; I'm in the middle of testing the solution in terms of resilience and overhead. I also need a layer of glue to at least semi-automate the deployment of XOSAN on a pool, otherwise I'll spend too much time doing it manually for each beta tester ^^
Anyway, developing a storage solution on the current storage architecture of XenServer is really a pain. Eager to see SMAPIv3 in action.
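If the sr-create command succeeds, the new SR should be listable like any other (a sketch; field names are standard xe parameters, but actual output depends on the pool):

```shell
# Confirm the SR exists and is shared across the pool
xe sr-list type=xosan params=uuid,name-label,shared
```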
-
Thanks @olivier
Just finished reading your blog, including your response to a comment mentioning January for the beta... I think it's still too early in January. Getting excited here just by reading. Nice idea/product, btw.
More power!
-
I still have 27 days to stay on track
-
A new blog post about the Erasure Code solution for XOSAN: https://xen-orchestra.com/blog/xenserver-hyperconverged-with-xosan/
AKA a great way to combine security, available space, and scalability.
Just bought some 10Gb network gear to dig into performance.
-
This is so schweeeeet...
Let's get cracking on the beta testing.....
Where do I sign up? I've got a SuperMicro 2U quad-node box waiting & ready
-
The setup is still too manual to be opened up for a "large scale" test.
But tell me more about your setup, and I can start thinking about the best mode for your case.
-
I'll send that to you via email so I don't clutter this thread
-
Sure, go ahead.