XenServer hyperconverged
-
Thanks @olivier
Just finished reading your blog post and your response to the comment about a January beta... I think early January is too soon. Getting excited here just from reading. Nice idea/product, btw.
More power!
-
I still have 27 days to stay on track
-
Just published a new blog post about the erasure code solution for XOSAN: https://xen-orchestra.com/blog/xenserver-hyperconverged-with-xosan/
AKA a great way to combine data safety, usable space, and scalability.
Just bought some 10 GbE network gear to dig into performance.
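For context, XOSAN is built on Gluster (mentioned later in this thread), and Gluster exposes erasure coding as "dispersed" volumes. A rough sketch of what a 4+2 erasure-coded layout looks like with the raw Gluster CLI; the volume, host, and brick names here are made up:

```
# Dispersed (erasure-coded) volume across 6 hosts:
# any 4 bricks hold enough fragments to rebuild the data,
# so the volume survives the loss of any 2 hosts.
gluster volume create xosan disperse 6 redundancy 2 \
  xs1:/bricks/xosan xs2:/bricks/xosan xs3:/bricks/xosan \
  xs4:/bricks/xosan xs5:/bricks/xosan xs6:/bricks/xosan
gluster volume start xosan
```

Usable space is (6-2)/6 ≈ 67% of raw capacity, versus 50% for a 2-way mirror, which is the "security vs. space vs. scalability" trade-off mentioned above.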
-
This is so schweeeeet...
Let's get cracking on the beta testing.....
Where do I sign up? I've got a SuperMicro 2U quad-node box waiting & ready
-
The setup is still too manual to be open for a "large scale" test.
But tell me more about your setup, and I can start thinking about the best mode for your case
-
I'll send that to you via email so I don't clutter this thread
-
Sure, go ahead.
-
Article is out (as 5.12): https://xen-orchestra.com/blog/improving-xenserver-storage-performances-with-xosan/
Doc updated: https://xen-orchestra.com/docs/xosan.html
-
@olivier Have you tested how it behaves if you shut down both hosts and then fire them back up? Do you have either of them weighted or master/slave? Suppose you have to replace an entire node, do you just mount the XOSAN and it begins replicating? Sorry if I'm jumping way ahead, but this is very interesting.
-
If you shut down all XOSAN VMs, your SR is unreachable. When you start them again, as soon as the minimum number of nodes is up, it will work again (auto-healing will be running, so it will be slower, but it will still work in R/W)
-
If you replace one node, you can do it live (we have a "replace" button in the UI!). The new node will replace the old one and get all the missing pieces via the heal process.
In the end, there's nothing else to do to make it work again
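Under the hood this is Gluster's self-heal process. A hedged sketch of the raw commands you could use to watch or trigger a heal after a node comes back or gets replaced (the volume name is assumed):

```
# List files that still need healing after a node outage
gluster volume heal xosan info

# Trigger a heal of the files flagged as needing it
gluster volume heal xosan

# Force a full sweep, e.g. after replacing a node with an empty brick
gluster volume heal xosan full
```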
-
What if you shut down host 1 now, and host 2 an hour from now, then power host 1 back on? Will the system just run with the old data?
-
@Dashrender Can you be more specific on the total number of nodes from the start?
-
@olivier I think this is a 2 node setup that @Dashrender is discussing.
Can the system scale to more than 2 nodes?
-
@olivier said in XenServer hyperconverged:
@Dashrender Can you be more specific on the total number of nodes from the start?
Let's assume your picture: 6 total nodes, but only 2 copies of the data. Now assume node 1 (orange) is shut down, then 1 hour later node 2 (orange) is shut down. What happens?
How about a 2 node setup? i.e. no other witnesses to the configuration.
-
@dustinb3403 said in XenServer hyperconverged:
@olivier I think this is a 2 node setup that @Dashrender is discussing.
Can the system scale to more than 2 nodes?
This IS meant to scale (up to the max pool size, 16 hosts, which is more a XenServer limit than ours)
@Dashrender Maybe the picture isn't clear: each "disk" icon is not a disk, but a XenServer host. So you have 6 hosts there. When you lose "enough" hosts (both replicated hosts of the same "mirror" in the 6-host setup), it goes read-only. As soon as one of those 2 is back online, R/W is back.
On a 2-node setup, there is an arbiter VM that acts as the witness. If you lose the host running the 2 VMs (one arbiter and one "normal"), you'll go into read-only. No split brain possible.
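At the Gluster level, a 2-node-plus-arbiter layout corresponds to a `replica 3 arbiter 1` volume: two full data bricks plus a third, metadata-only brick that only votes on quorum. A sketch with invented names (in XOSAN the arbiter brick would live inside the small arbiter VM):

```
# Two data bricks plus one arbiter brick: the arbiter stores
# only metadata, so it needs almost no space, but it breaks
# ties so the two data copies can never diverge (no split brain).
gluster volume create xosan replica 3 arbiter 1 \
  xs1:/bricks/xosan xs2:/bricks/xosan arbiter-vm:/bricks/xosan-arbiter
gluster volume start xosan
```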
-
In this picture
https://i.imgur.com/VKOVnkU.png
how many systems have the data on it? Only 2?
-
@olivier said in XenServer hyperconverged:
@dustinb3403 said in XenServer hyperconverged:
@olivier I think this is a 2 node setup that @Dashrender is discussing.
Can the system scale to more than 2 nodes?
This IS meant to scale (up to the max pool size, 16 hosts, which is more a XenServer limit than ours)
@Dashrender Maybe the picture isn't clear: each "disk" icon is not a disk, but a XenServer host. So you have 6 hosts there. When you lose "enough" hosts (both replicated hosts of the same "mirror" in the 6-host setup), it goes read-only. As soon as one of those 2 is back online, R/W is back.
On a 2-node setup, there is an arbiter VM that acts as the witness. If you lose the host running the 2 VMs (one arbiter and one "normal"), you'll go into read-only. No split brain possible.
The question is (I think): if you lost all hosts in the orange group, would XOSAN and the running VMs in the entire pool still be functional read/write until those servers are brought back online?
-
@olivier So you just can't make any writes during this period of node failure/recovery? Are the writes cached? If so how much can be cached and for how long?
-
@dashrender said in XenServer hyperconverged:
In this picture
https://i.imgur.com/VKOVnkU.png
how many systems have the data on it? Only 2?
This is part of a distributed-replicated setup. In this "branch", you have 2 XS hosts with 100 GiB of data on each, "RAID1"-like.
The other branches (not in the part of the picture you displayed) act like a RAID0 on top.
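In raw Gluster terms, the 6-host picture is a distributed-replicated volume: bricks are paired into RAID1-like mirrors, and the volume distributes data (RAID0-like) across the pairs. A sketch with made-up host names:

```
# replica 2 with 6 bricks => 3 mirrored pairs (subvolumes),
# and data is distributed across the pairs:
#   (xs1, xs2)  (xs3, xs4)  (xs5, xs6)
gluster volume create xosan replica 2 \
  xs1:/bricks/xosan xs2:/bricks/xosan \
  xs3:/bricks/xosan xs4:/bricks/xosan \
  xs5:/bricks/xosan xs6:/bricks/xosan
```

Losing both bricks of one pair takes that subvolume, and therefore the whole striped volume, read-only, which is the scenario discussed above.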
-
The question is (I think): if you lost all hosts in the orange group, would XOSAN and the running VMs in the entire pool still be functional read/write until those servers are brought back online?
Because data is spread across all subvolumes (think RAID10), the whole thing will be read-only.
You can avoid that if you decide NOT to stripe files across all subvolumes (which is the default behavior in Gluster, by the way), but that's NOT a good thing for VMs (because heal times would be horrible, and subvolumes wouldn't be balanced)
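The "stripe files across subvolumes" behavior described here maps to Gluster's sharding feature: large VM disk files are split into fixed-size shards that get distributed over the subvolumes, so a heal only has to copy the shards that changed instead of the whole multi-GiB image. A sketch of enabling it (the volume name and block size are illustrative, not XOSAN's actual settings):

```
# Split large files (VM disks) into 64 MB shards distributed
# across subvolumes; healing then only touches changed shards.
gluster volume set xosan features.shard on
gluster volume set xosan features.shard-block-size 64MB
```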