Xenserver and Storage
-
@storageninja said in Xenserver and Storage:
@olivier said in Xenserver and Storage:
That's why I asked if you have better knowledge of the community on this solution, because I really don't. If that's the case and it's not stable (darn, it's been around for a long time!), then it's indeed not an option.
The issue with HA-Lizard is that it doesn't have a stateful quorum system (it just pings a single IP address), so you can split-brain it.
AFAIK the latest HA-Lizard docs suggest running in active-passive only... There must be a reason...
-
@matteo-nunziati said in Xenserver and Storage:
@storageninja said in Xenserver and Storage:
@olivier said in Xenserver and Storage:
That's why I asked if you have better knowledge of the community on this solution, because I really don't. If that's the case and it's not stable (darn, it's been around for a long time!), then it's indeed not an option.
The issue with HA-Lizard is that it doesn't have a stateful quorum system (it just pings a single IP address), so you can split-brain it.
AFAIK the latest HA-Lizard docs suggest running in active-passive only... There must be a reason...
It can still happen.
-
Almost any vSAN works pretty much the same way: it mirrors the data and caches between two or more hosts and keeps them consistent. The above-mentioned StarWind Free https://www.starwindsoftware.com/starwind-virtual-san-free is a great fit for 2-node deployments since it can run on top of hardware RAID and has some intelligent split-brain protection, either over an additional Ethernet link or using a witness node. The nice thing is that you still get community support even with the free version. XOSAN/GlusterFS is overkill here (not to mention the performance), and using/supporting a DRBD-based scenario looks like shooting yourself in the foot to me, unless you are completely familiar with it and know what you are doing.
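The two-node protection described above (a spare heartbeat link, optionally backed by a witness) can be sketched roughly like this. This is a hedged illustration of the general idea, not StarWind's actual logic; all names and the decision rules are assumptions.

```python
from typing import Optional

def node_should_fence(data_link_up: bool, heartbeat_link_up: bool,
                      witness_says_partner_alive: Optional[bool]) -> bool:
    """Decide whether this node should stop serving I/O (fence itself).

    witness_says_partner_alive is None when no witness node is deployed.
    """
    if data_link_up or heartbeat_link_up:
        # Partner is still reachable over at least one path: no split.
        return False
    if witness_says_partner_alive is True:
        # Witness confirms the partner is up, so *we* are the isolated
        # side and must yield to avoid diverging writes.
        return True
    # No witness (or witness says partner is down): without a third vote
    # we cannot distinguish partner failure from our own isolation --
    # this is exactly where split-brain risk remains.
    return False
```

The point of the extra Ethernet link is that both paths must fail before fencing is even considered; the witness then breaks the remaining tie.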
-
It's not overkill on the user side, and we built it to be able to grow in one click. Have you even checked how easy it is to deploy XOSAN? It's really just a few clicks.
And this is the second time I've heard about the "intelligent split-brain" management in StarWind, but I haven't seen any paper or even the start of an explanation of how it works (not even a simple link). Can you elaborate, please? If it's the witness node, that's the classical approach, but I'm curious about split-brain protection without a witness node.
-
@olivier said in Xenserver and Storage:
And this is the second time I've heard about the "intelligent split-brain" management in StarWind, but I haven't seen any paper or even the start of an explanation of how it works (not even a simple link). Can you elaborate, please? If it's the witness node, that's the classical approach, but I'm curious about split-brain protection without a witness node.
My understanding is they can do multiple links and multiple heartbeats, or a discrete, stateful witness service on a third system that completely solves the problem.
VMware vSAN prevents this on 2-node and stretched clusters by keeping witness components with sequence numbers on the witness system. When a vote is called, the side whose sequence number matches the witness's wins. If a stretched cluster partitions and both sides have matching sequence numbers, the "Primary" side wins.
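The tie-break described above can be sketched as follows. This is an illustrative simplification of the voting idea, not vSAN's real implementation; `Partition`, `pick_winner`, and the field names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    seq: int          # highest sequence number this side has recorded
    is_primary: bool  # designated preferred side in a stretched cluster

def pick_winner(a: Partition, b: Partition, witness_seq: int) -> Partition:
    """Return the partition allowed to keep serving I/O after a split."""
    a_ok = a.seq == witness_seq
    b_ok = b.seq == witness_seq
    if a_ok and not b_ok:
        return a      # only one side matches the witness: it wins
    if b_ok and not a_ok:
        return b
    # Both sides match the witness's sequence number:
    # fall back to the designated "Primary" side.
    return a if a.is_primary else b

print(pick_winner(Partition("site-A", 42, True),
                  Partition("site-B", 41, False), 42).name)  # → site-A
```

The stale side (the one whose sequence number lags the witness) loses the vote, so only one partition can ever resume writes.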
I'd argue isolation behavior goes beyond the storage heartbeat to how isolation is handled at the VM and hypervisor level. STONITH is kind of a barbaric way to handle this in 2017.
Other fencing systems in VMware exist for HA: pings between hosts (on the management network by default, moved to the vSAN network if it's in use), isolation addresses (you can have multiple), and datastore heartbeats (a file that is updated) for non-vSAN datastores. Based on these, you can configure different VM and host isolation responses (maintain power, power off, shut down, etc.).