Xenserver and Storage
-
@emad-r said in Xenserver and Storage:
Hmm, I see a lot of VSAN advice, which is the correct way to go, but I also wonder: can't he do a simple thing like a GlusterFS VM as well? Will that work in this case and be the simpler route?
GlusterFS is still RLS; the advice is not really to use a VSAN, but to use RLS. People used to be sloppy and use VSA to refer to RLS; now they use VSAN. Neither is correct, as RLS is more than any one connection technology.
GlusterFS will work here, but it requires more nodes and is not practical at this scale. It would be slow and problematic. No advantages that I can think of.
-
Gluster on 2 nodes won't be slow or problematic (which problems?), just a bit complicated without a turnkey deployment method (i.e., XOSAN).
-
@emad-r said in Xenserver and Storage:
Hmm, I see a lot of VSAN advice, which is the correct way to go, but I also wonder: can't he do a simple thing like a GlusterFS VM as well? Will that work in this case and be the simpler route?
No simpler if not understood or with a turnkey "layer" on top.
Gluster is not that complicated, but you still need to grasp some concepts. In short, it's like Xen vs. XenServer: the second is turnkey and you don't need to assemble all the pieces yourself, versus learning Xen "alone" on your distro.
-
@olivier The official Gluster docs say a 2-node config will go read-only if 1 node dies... You need at least an arbiter node, AFAIK.
-
@matteo-nunziati This is why we have an extra arbiter VM in a 2-node setup. One node gets 2 VMs (1x normal and 1x arbiter), and the other one just a normal VM.
This way, if you lose the host with one Gluster VM, it will still work, and you can't have a split-brain scenario.
An arbiter node costs very few resources (it only deals with metadata).
-
@olivier said in Xenserver and Storage:
@matteo-nunziati This is why we have an extra arbiter VM in a 2-node setup. One node gets 2 VMs (1x normal and 1x arbiter), and the other one just a normal VM.
This way, if you lose the host with one Gluster VM, it will still work, and you can't have a split-brain scenario.
An arbiter node costs very few resources (it only deals with metadata).
Wow, blowing my mind! I always pictured physical Gluster nodes with Gluster installed on dom0. x-D
But what if the node with the volume AND the arbiter goes down? I'm still missing this... Is the arbiter replicated in any way on the Xen nodes?
-
The Gluster client is installed in Dom0 (the client to access the data), but the Gluster servers are in VMs, so you get more flexibility.
If the node with the arbiter goes down, yes, you are in RO. But you won't enter a split-brain scenario (which is the worst case in a 2-node setup).
E.g. with DRBD, 2 nodes in multi-master: if you just lose the replication link and you wrote on both sides, you are basically f***ed (you'll need to discard the data on one node).
There is no miracle: play it defensive (RO if one node is down) or risky (split brain). We chose the "intermediate" way: safe, with a 50% chance of losing the "right" node without being in RO.
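The voting logic described above can be sketched as a small simulation. This is a toy model of client quorum over 3 replica members (2 data bricks + 1 arbiter), not XOSAN's actual code; the host and brick names are made up:

```python
# Toy model of the 2-host layout described above: host A runs a data VM plus
# the arbiter VM, host B runs only a data VM. Gluster stays writable while a
# majority of the 3 replica members (data1, data2, arbiter) is reachable.

VOTES = {
    "hostA": ["data1", "arbiter"],  # two Gluster VMs on this host
    "hostB": ["data2"],             # one Gluster VM on this host
}
TOTAL_MEMBERS = 3  # data1 + data2 + arbiter

def writable_after_losing(host: str) -> bool:
    """True if a majority of replica members survives the host failure."""
    surviving = sum(len(vms) for h, vms in VOTES.items() if h != host)
    return surviving > TOTAL_MEMBERS // 2

print(writable_after_losing("hostB"))  # True: data1 + arbiter = 2/3 votes
print(writable_after_losing("hostA"))  # False: only data2 left -> read-only
```

This is exactly the "50% chance" trade-off: losing hostB keeps write quorum, losing hostA (which carries 2 of the 3 votes) drops the volume to read-only, but never into split brain.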
Obviously, 3 nodes is the sweet spot when you decide to use hyperconvergence at small scale, because the 3rd physical server previously used for storage can now also be a "compute" node (hypervisor) with storage, and you can lose any host of the 3 without being in read-only (disperse 3).
edit: XOSAN allows going from 2 to 3 nodes while your VMs are running, i.e. without any service interruption. So you can start with 2 and extend later.
-
@dbeato said in Xenserver and Storage:
@olivier I would not do HA-Lizard; it is problematic with XenServer. You can ask @StorageNinja. I have gone through many SW posts about issues with it. I did recommend it once, but it was not worth it. XOSAN will be much better:
https://xen-orchestra.com/blog/xenserver-hyperconverged-with-xosan/
or if you can afford two more hosts with Windows Server and StarWind VSAN, then that would be good too.
Note, XOSAN is just Gluster under the hood. You do NOT WANT TO RUN GLUSTER WITH 2 nodes. IT IS NOT SUPPORTED. (You can run a 3rd metadata-only node, but you need SOMETHING out there to provide quorum.)
It requires a proper stateful quorum on a 3rd node. Also, for maintenance you very likely want 4 nodes at a minimum, so you can do patching and still take a failure. You'll also need to consider having enough free capacity on the cluster to maintain health slack on the bricks (20-30%) AND take a failure, so factor that math into your overhead. Also, for reasons I'll get into in a moment, you REALLY want to run local RAID on Gluster nodes.
Also note, Gluster's local drive failure handling is very... binary. Red Hat (who owns Gluster) refuses to issue a general support statement for JBOD mode with their HCI product and directs you to use RAID 6 for 7.2K drives (no RAID 10). Given the unpredictable latency issues with SSDs (garbage collection triggering failure detection, etc.), their deployment guide completely skips SSDs (as I would expect, until they can fix the failure detection code to be more dynamic or build an HCL). JBOD, because of these risks, is a "Contact your Red Hat representative for details" (code for: we think this is a bad idea, but might do a narrowly tested RPQ-type process).
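The capacity overhead described above can be sketched with some back-of-the-envelope math. The numbers used (4 nodes, 10 TB bricks, replica 3, 25% brick slack) are illustrative assumptions, not a sizing rule:

```python
# Rough sizing sketch for the overhead described above: keep `slack` free on
# the bricks for cluster health AND still have room after losing a node.
# All inputs below are hypothetical example values.

def usable_capacity_tb(nodes: int, brick_tb: float, replica: int,
                       slack: float, tolerate_failures: int = 1) -> float:
    """Usable TB if the cluster must survive `tolerate_failures` node losses
    while keeping a `slack` fraction of brick space free."""
    surviving = nodes - tolerate_failures
    raw = surviving * brick_tb          # raw capacity left after a failure
    after_replica = raw / replica       # replica copies consume capacity
    return after_replica * (1 - slack)  # reserve the health slack

# 4 nodes x 10 TB bricks, replica 3, 25% slack, survive 1 node loss:
print(round(usable_capacity_tb(4, 10, 3, 0.25), 2))  # 7.5 TB usable
```

The point of the exercise: the usable fraction ends up far below the raw total once replication, slack, and failure headroom are all accounted for, which is why "do that math into your overhead" matters.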
-
@olivier said in Xenserver and Storage:
Gluster client is installed in Dom0 (the client to access data). But Gluster server are in VMs, so you got more flexibility.
This architecture has a few limitations vs. something running on bare metal under the hypervisor, or a 3-tier storage design.
-
You are adding latency to the back-end disk path unless you are running SR-IOV pass-through of the HBA/RAID controller.
-
You are adding TCP overhead (CPU, and ~10 µs of latency) on the front end EVEN if/when the data is local, if you are using NFS to present Gluster to the hosts (the supported/tested method).
-
Unless you've invented a native client for Xen, you destroy the primary thing I liked about Gluster (local DRAM on the client side being used for ultra-low-latency reads), as you are adding ~10 µs and TCP overhead. (Well, I guess you could do NFS over RDMA, but that's even more non-standard/unstable than pNFS.)
-
The above hairpins (back and front end) burn a lot of extra compute. As you scale (especially on the network transport side) this gets ugly, wasting CPU cores. If you have any applications licensed per core or socket, this becomes a nasty "VSA tax" on your environment vs. a traditional 3-tier storage array deployment or something more efficient.
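As a rough illustration of that hairpin cost: the ~10 µs per-TCP-hop figure comes from the posts above, while the 100 µs local read service time is an assumed baseline, not a measured one.

```python
# Back-of-the-envelope arithmetic on the front/back-end hairpin cost.
# LOCAL_READ_US is an assumed baseline for a local flash read;
# TCP_HOP_US is the ~10 us per-hop figure quoted in the thread.

LOCAL_READ_US = 100.0   # assumed local read service time (illustrative)
TCP_HOP_US = 10.0       # TCP overhead per hop, per the posts above

# One front-end hop (host -> NFS -> gluster VM) plus one back-end hop
# (gluster VM -> virtual disk path) get added even when the data is local.
hairpinned = LOCAL_READ_US + 2 * TCP_HOP_US
overhead_pct = 100 * (hairpinned - LOCAL_READ_US) / LOCAL_READ_US
print(f"{overhead_pct:.0f}% added latency")  # 20% added latency
```

With a faster back end (lower baseline latency) the same fixed per-hop cost becomes a proportionally larger tax, which is the core of the argument.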
I do agree with you that 2-node multi-master DRBD is hilariously dangerous. I've personally had to fix split brains multiple times from people doing this, and a stateful system (like what Gluster uses) is 1000x safer to use. The challenge with DRBD is that the people smart enough to deploy it correctly are generally smart enough to do something else instead...
-
@olivier said in Xenserver and Storage:
@matteo-nunziati This is why we have an extra arbiter VM in a 2-node setup. One node gets 2 VMs (1x normal and 1x arbiter), and the other one just a normal VM.
Just to be clear, that arbiter can't be on one of the two nodes, or else if the node that has 2 votes goes down, the entire cluster goes boom...
To quote Christos, this is trying to cheat physics...
-
@scottalanmiller said in Xenserver and Storage:
GlusterFS is still RLS, the advice is not really to use a VSAN, but to use RLS. People used to be sloppy and use VSA to refer to RLS, now they use VSAN. Neither is correct as RLS is more than any one connection technology.
Gluster can also be deployed as an external storage system as part of a classic 3-tier design...
-
That's a lot of interesting technical stuff, but it's not like that when you build a product. Let me explain.
Market context
Here, we are in the hyperconvergence world. In this world, users want some advantages over the traditional model (storage separated from compute). So the first question you need to answer before building a solution is: "what do users want?" We did some research and found that, in general, people want, in decreasing order:
- Cost/ROI (in short, simpler infrastructure to manage will reduce cost)
- HA features
- Ease of scaling and a correct level of performance (in short: they don't want to be blocked, or to have worse performance than with existing solutions, or performance so bad that it outweighs the cost/feature/flexibility/security advantages).
These "priorities" came from our studies but also from Gartner.
Technical context
Then, we are addressing the XenServer world. When you have a hyperconverged solution there, you have some "limitations":
- a shared SR can only be used within a pool (so 1 to 16 hosts max)
- the more you modify the Dom0, the worse it is
So in this context, you won't scale beyond 16 hosts MAX.
Real life usage
So we decided to take a look with some benchmarks, and despite prioritizing something safe/flexible, we got pretty nice performance, as you can see in our multiple benchmarks.
In short: performance is correct. If it weren't, we would have stopped the project (or switched to another technology).
Regarding the "cluster goes boom": no, it goes RO for your VMs, so it won't erase/corrupt your data.
-
@jrc said in Xenserver and Storage:
So currently I have 2 HP servers that are being used and XenServer hosts. The shared storage is on an HP MSA1040 SAN, connected via 8Gb/s Fiber.
The servers have worked flawlessly since I got them, not a single issue, and have only been re-booted for updates and upgrades. I cannot say the same for the SAN. It has gone down about 4 or 5 times, and these outages have highlighted the fragility of my setup.
The HP servers have 24 2.5" drive bays, so I am contemplating filling them with drives and moving away from the SAN, but to do that I would need the space to be shared between the two hosts.
How can I do that? What would that look like? What kind of cost would it be (outside of buying the drives) and is it a good idea?
Someone mentioned VSAN to me while I was talking about this, but I am not that clued up about VSANs and how they work or how they are put together.
Any advice on this would be greatly appreciated. But please don't lecture me on how bad a SAN is, that my setup is doomed, or that I am an idiot for doing it this way. I am looking for a path forward, not a berating for things that have long since passed.
You can mount some NVMe drives for performance and some high-capacity spindles in your Xen hosts to run VMs from, and leave the aging HP SAN for backup purposes only. Two hosts with a SAN makes zero sense, really...
-
Having local storage is good for performance, but you can't live migrate without moving the disks, nor have HA on the other host.
I did a recap on local vs. (non-hyperconverged) shared storage in XS:
-
Note, XOSAN is just Gluster under the hood. You do NOT WANT TO RUN GLUSTER WITH 2 nodes. IT IS NOT SUPPORTED. (You can run a 3rd metadata-only node, but you need SOMETHING out there to provide quorum.)
It requires a proper stateful quorum on a 3rd node. Also, for maintenance you very likely want 4 nodes at a minimum, so you can do patching and still take a failure. You'll also need to consider having enough free capacity on the cluster to maintain health slack on the bricks (20-30%) AND take a failure, so factor that math into your overhead. Also, for reasons I'll get into in a moment, you REALLY want to run local RAID on Gluster nodes.
Also note, Gluster's local drive failure handling is very... binary. Red Hat (who owns Gluster) refuses to issue a general support statement for JBOD mode with their HCI product and directs you to use RAID 6 for 7.2K drives (no RAID 10). Given the unpredictable latency issues with SSDs (garbage collection triggering failure detection, etc.), their deployment guide completely skips SSDs (as I would expect, until they can fix the failure detection code to be more dynamic or build an HCL). JBOD, because of these risks, is a "Contact your Red Hat representative for details" (code for: we think this is a bad idea, but might do a narrowly tested RPQ-type process).
Gluster single-node performance is very... "encouraging". You definitely need more nodes for reasonable performance (even with an NVMe back end).
That's another story... Software-defined storage on top of hardware RAID isn't something many companies do, for good reason (we do, but we're moving away from that approach for anything beyond 2- or maybe 3-node configurations). You want raw device access (better yet with the firmware bypassed), or nobody will guarantee that a just-confirmed write has actually been 100% completed.
-
@olivier said in Xenserver and Storage:
Having local storage is good for performance, but you can't live migrate without moving the disks, nor have HA on the other host.
I did a recap on local vs. (non-hyperconverged) shared storage in XS:
Most of the "budget" SMB customers shouldn't care about that.
-
@kooler said in Xenserver and Storage:
@olivier said in Xenserver and Storage:
Having local storage is good for performance, but you can't live migrate without moving the disks, nor have HA on the other host.
I did a recap on local vs. (non-hyperconverged) shared storage in XS:
Most of the "budget" SMB customers shouldn't care about that.
This is not my point of view. E.g., even for my small production setup, hosted in a DC, it's not easy to migrate big VMs on a local SR from one host to another while avoiding service interruption.
edit: I'm using XOSAN for my own production setup; the best way to sell a product.
-
@olivier said in Xenserver and Storage:
Having local storage is good for performance, but you can't live migrate without moving the disks, nor have HA on the other host.
I did a recap on local vs. (non-hyperconverged) shared storage in XS:
That's not really a sensible statement. You can't live migrate the STORAGE of the VMs without moving the storage. If you want to move your VMs without moving the storage, you stay in the same boat as with any external storage. If you need to move the storage live with external storage, you have the same issues.
You have to treat the two things differently to give any advantage to external storage on dedicated hardware. Literally, anything that looks like an advantage comes from expecting the external storage to "deliver less" than the local disks and therefore not asking as much of it: we expect the local disks to live migrate, but completely ignore asking the external storage to do that.
How does security improve by having more points to attack?
-
@olivier said in Xenserver and Storage:
@kooler said in Xenserver and Storage:
@olivier said in Xenserver and Storage:
Having local storage is good for performance, but you can't live migrate without moving the disks, nor have HA on the other host.
I did a recap on local vs. (non-hyperconverged) shared storage in XS:
Most of the "budget" SMB customers shouldn't care about that.
This is not my point of view. E.g., even for my small production setup, hosted in a DC, it's not easy to migrate big VMs on a local SR from one host to another while avoiding service interruption.
edit: I'm using XOSAN for my own production setup; the best way to sell a product.
SMBs should not be worried, in 99% of cases, about migrating VMs around. That's not an SMB need.
-
I consider myself an SMB (3 sockets!) and I need live migration; it's really useful. It's also used a LOT by our customers. Maybe that's a XenServer-user bias, but it's real there.