I asked to modify the picture to draw a host with the disk inside, to avoid confusion. Thanks for the feedback on that, guys!
Best posts made by olivier
-
RE: XenServer hyperconverged
-
RE: XenServer hyperconverged
@fateknollogee If you are in replicated-2, you can add:
- 2 hosts to create a new extra RAID1 (at the bottom of a RAID0), creating a distributed-replicated (2x2)
- 1 host to go from replicated 2 to replicated 3 (data is copied on three hosts, so it's a 1x3)
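To make the capacity math behind these two options concrete, here's a small sketch (my own illustration, not XOSAN tooling; the function name and disk sizes are made up):

```python
def usable_capacity(num_hosts, replica, disk_per_host_tb):
    """Usable space of a (distributed-)replicated volume: hosts are
    grouped into mirrors of `replica` hosts, and each mirror only
    contributes the capacity of a single host's disk."""
    assert num_hosts % replica == 0, "hosts must fill whole mirrors"
    subvolumes = num_hosts // replica  # number of RAID1 sets under the RAID0
    return subvolumes * disk_per_host_tb

# Start: replicated-2 with two 4 TB hosts -> 4 TB usable (1x2)
print(usable_capacity(2, 2, 4))  # 4

# Option 1: add 2 hosts -> distributed-replicated 2x2 -> 8 TB usable
print(usable_capacity(4, 2, 4))  # 8

# Option 2: add 1 host -> replicated 3 (1x3) -> still 4 TB, more redundancy
print(usable_capacity(3, 3, 4))  # 4
```

So adding a pair grows space, while adding a single host grows redundancy instead.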
-
RE: XenServer hyperconverged
@dustinb3403 Scaling by one host: going "triplicate" (same data on each node: you have 1/3 of the total disk capacity, but you can lose 2 hosts and still have data access).
Scaling by adding 2 hosts at the same time: it adds a subvolume, so more space.
-
RE: XenServer hyperconverged
@r3dpand4 said in XenServer hyperconverged:
@olivier We're talking about node failure where you're replacing hardware correct?
"On a 2 node setup, there is an arbiter VM that acts like the witness. If you lose the host with the 2x VMs (one arbiter and one "normal"), you'll go in read only."
So are writes suspended until the 2nd node is brought back online and introduced to the XOSAN? Obviously that's going to be a lot longer than a few seconds, even if you're talking about scaling outside of a 2-node cluster. Do you mean that writes are suspended or cached in the event of a failure while the host is shutting down, and then writes resume as normal on the active node? If this is the case, when the new host is introduced to the cluster the replication resumes, correct?
Nope: writes are suspended when a node is down (time for the system to know what to do). If there are enough nodes to continue, writes resume after being paused a few secs. If there aren't enough nodes to continue, it will then be in read only.
Let's imagine you have a 2x2 (distributed-replicated). You lose one XenServer host in the first mirror. After a few secs, writes are back without any service having failed. Then, when you replace the faulty node, the fresh node will "catch up" on the missing data in the mirror, but your VMs won't notice it (healing status).
-
RE: XenServer hyperconverged
@dustinb3403 No writes are lost; it's handled at your VM's level (the VM OS waits for the "ack" of the virtual HDD, but it's not answering, so it waits). Basically, the cluster says: "write commands won't be answered until we've figured it out".
So it's safe.
-
RE: XenServer hyperconverged
@r3dpand4 This is a good question. We made the choice to use "sharding", which means splitting your data into 512 MB blocks to be replicated or spread.
So the heal time will be the time to fetch all new/missing 512 MB blocks of data written since the node went down. It's pretty fast in the tests I've done.
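A rough back-of-the-envelope model of that heal time (my own sketch, not XOSAN's actual healing logic; the 10 Gb/s link figure is an assumption):

```python
SHARD_SIZE = 512 * 1024 * 1024  # 512 MiB shards, as described above

def shards_to_heal(dirty_bytes):
    """Data is split into fixed-size shards; only shards touched while
    the node was down need to be re-fetched during healing."""
    return -(-dirty_bytes // SHARD_SIZE)  # ceiling division

def heal_time_seconds(dirty_bytes, link_bytes_per_sec):
    """Rough lower bound: missing shards over the replication link."""
    return shards_to_heal(dirty_bytes) * SHARD_SIZE / link_bytes_per_sec

# e.g. 10 GiB written while a node was down, over a ~10 Gb/s link (~1.25 GB/s)
print(shards_to_heal(10 * 1024**3))                        # 20 shards
print(round(heal_time_seconds(10 * 1024**3, 1.25e9), 1))   # ~8.6 seconds
```

The point of sharding is visible here: only the changed shards move, not the whole virtual disk.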
-
RE: XenServer hyperconverged
@r3dpand4 That has nothing to do with deduplication. There are just chunks of files, replicated or distributed-replicated (or even dispersed in disperse mode).
By the way, nobody talks about this mode, but it's my favorite. Especially for large HDDs, it's perfect, thanks to the ability to lose any n of the disks in your cluster (n being the redundancy level). E.g. with 6 nodes:
This is disperse 6 with redundancy 2 (like RAID6, if you prefer). Any 2 XenServer hosts can be destroyed and it will continue to work as usual:
And in this case (6 with redundancy of 2), you'll be able to address 4/6ths of your total disk space!
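The usable fraction of a disperse volume follows directly from its parameters; a tiny sketch (my own illustration, names made up):

```python
def disperse_usable_fraction(n, redundancy):
    """Erasure-coded (disperse) volume over n bricks: any `redundancy`
    bricks can be lost, and (n - redundancy)/n of raw space is usable."""
    return (n - redundancy) / n

def survives(failed, redundancy):
    """The volume keeps serving data as long as failures <= redundancy."""
    return failed <= redundancy

# disperse 6 with redundancy 2 (RAID6-like): 4/6ths of raw space usable
print(round(disperse_usable_fraction(6, 2), 3))  # 0.667
print(survives(2, 2))  # True  (any 2 hosts destroyed, still running)
print(survives(3, 2))  # False (a 3rd failure takes the volume down)
```

Compare with replica 3, which gives only 1/3 of raw space for the same 2-host fault tolerance.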
-
RE: XenServer hyperconverged
@fateknollogee said in XenServer hyperconverged:
What is the difference in performance between the two options?
Disperse requires more compute performance because it's a complex algorithm (based on Reed-Solomon). So it's slower vs replication, but it's not a big deal if you are using HDDs.
However, if you are using SSDs, disperse will be the bottleneck, so it's better to go with replicate.
Ideal solution? Disperse for large storage space on HDDs, and replicated on SSDs… at the same time (using tiering, which will be available soon). Chunks that are read often will be promoted to the replicated SSD storage automatically (until it's almost full). If more-accessed chunks appear in the future, some chunks will be demoted to the "slower" tier and replaced by the new hot ones.
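A toy model of that promote/demote policy (purely my own illustration; XOSAN's real tiering logic is not published here, and the class/threshold design is invented):

```python
from collections import Counter

class TieringSketch:
    """Keep the most-read chunks on a small 'SSD' tier."""
    def __init__(self, ssd_capacity_chunks):
        self.capacity = ssd_capacity_chunks
        self.reads = Counter()   # read count per chunk id
        self.hot_tier = set()    # chunk ids currently on SSD

    def record_read(self, chunk_id):
        self.reads[chunk_id] += 1
        if chunk_id in self.hot_tier:
            return
        if len(self.hot_tier) < self.capacity:
            self.hot_tier.add(chunk_id)  # SSD not full yet: promote
            return
        # SSD full: demote the coldest resident chunk if this one is hotter
        coldest = min(self.hot_tier, key=lambda c: self.reads[c])
        if self.reads[chunk_id] > self.reads[coldest]:
            self.hot_tier.remove(coldest)   # demote to the "slower" tier
            self.hot_tier.add(chunk_id)     # promote the new hot chunk

tier = TieringSketch(ssd_capacity_chunks=2)
for chunk in ["a", "a", "a", "b", "c", "c"]:
    tier.record_read(chunk)
print(sorted(tier.hot_tier))  # ['a', 'c']
```

Chunk "b" gets demoted once "c" accumulates more reads, which mirrors the hot/cold behavior described above.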
-
RE: Xenserver and Storage
The Gluster client is installed in Dom0 (the client to access data), but the Gluster servers are in VMs, so you get more flexibility.
If the node with the arbiter goes down, yes, you are in RO. But you won't enter a split-brain scenario (which is the worst case in a 2-node setup).
E.g. with DRBD, with 2 nodes in multi-master, if you just lose the replication link and you wrote on both sides, you are basically f***ed (you'll need to discard data on one node).
There is no miracle: play defensive (RO if one node is down) or risky (split brain). We chose the "intermediate" way: safe, with a 50% chance that losing the "right" node won't put you in RO.
Obviously, 3 nodes is the best spot when you decide to use hyperconvergence at small scale, because the 3rd physical server previously used for storage can now also be a "compute" node (hypervisor) with storage, and you could lose any host of the 3 without being in read only (disperse 3).
edit: XOSAN allows going from 2 to 3 nodes while your VMs are running, i.e. without any service interruption. So you can start with 2 and extend later.
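The arbiter behavior boils down to a majority-vote rule; a minimal sketch (my own illustration of the principle, not XOSAN code):

```python
def volume_state(total_voters, alive_voters):
    """Writes are allowed only with a strict majority of voters alive;
    otherwise go read-only (defensive: never risk a split brain)."""
    if alive_voters * 2 > total_voters:
        return "read-write"
    return "read-only"

# 2-node setup: node A runs the arbiter VM + a data VM (2 votes),
# node B runs one data VM (1 vote) -- 3 voters total.
print(volume_state(3, 1))  # lose node A (2 votes gone) -> 'read-only'
print(volume_state(3, 2))  # lose node B (1 vote gone)  -> 'read-write'
```

This is the "50% chance" above: which node dies determines whether a majority survives.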
-
RE: What is KVM Best Management Tools in 2017?
When I start to read "java", I get a gag reflex
-
RE: What is KVM Best Management Tools in 2017?
@scottalanmiller said in What is KVM Best Management Tools in 2017?:
@olivier said in What is KVM Best Management Tools in 2017?:
@scottalanmiller We infected Cambridge Uni with my French fellows (OCaml is born in France).
It's a good language, but when you build something for the community, you have to make some choices. That was never a priority for Citrix (being a community thing/popular).
Oh it's good, but... there is no good reason for open source stuff, especially really important stuff that needs community support, to be written in it.
I totally agree. But community was never a thing for Citrix.
-
RE: Let's Convince Someone to release a FOSS PBX
@dashrender said in Let's Convince Someone to release a FOSS PBX:
@bigbear said in Let's Convince Someone to release a FOSS PBX:
Assuming they moved to a support only model for revenue, would they have to charge an arm and a leg like XO to cover their bases, basically driving the support cost out of most SMB anyway?
LOL
@scottalanmiller said in Let's Convince Someone to release a FOSS PBX:
@dashrender said in Let's Convince Someone to release a FOSS PBX:
@bigbear said in Let's Convince Someone to release a FOSS PBX:
When you consider the exposure these guys are missing vs the small amount of revenue they get from their baseline PBX products... how many times a week do you tell someone to use FreePBX?
The question is how small is it? Do we really know?
Assuming they moved to a support only model for revenue, would they have to charge an arm and a leg like XO to cover their bases, basically driving the support cost out of most SMB anyway?
XO doesn't have to, their investors make them. Not the same thing.
Not true. It was more of a seed raise.
edit: hahahahahahaha. I think I just understood: XO didn't mean "Xen Orchestra". Sorry for the confusion, it didn't really make sense. Sorry!
-
RE: XCP-ng project
@scottalanmiller Yes, good idea. It would be doable to add ZFS support for local SR. I could do it in a week if I had a week free
-
RE: If all hypervisors were priced the same...
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all. I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
-
RE: If all hypervisors were priced the same...
@tim_g said in If all hypervisors were priced the same...:
@olivier said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all. I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
No fear or doubt, just uncertainty. But this is only because of how Citrix is treating XenServer, and how Amazon is moving from Xen to KVM.
I feel the only thing that can save Xen is XCP-ng. I'm really hoping for its success and have high hopes for it.
That's because you have a very partial view of the Xen project. The Xen project is far more than XenServer/XCP. Xen is the core hypervisor, used by a LOT of companies (from automotive to the Cloud).
A lot of companies are using Xen + their own toolstack without making publicity around it (like AWS, which is NOT leaving Xen, just adding some instances on another HV to get specific features not yet in Xen). Some companies (Gandi) even switched from KVM to Xen:
https://news.gandi.net/en/2017/07/a-more-xen-future/
So your opinion is mainly forged by a limited number of sources, in a loop repeating "Xen is dying" for 10 years. The main reason is that Xen is far less "segmented" than KVM (e.g. it's easier to write clickbait articles on Xen security issues than on KVM's, despite KVM's security process being almost catastrophic/non-transparent).
-
RE: XOSAN with XO Community edition
FYI, we started very interesting discussions with the Linbit guys (we could achieve something really powerful by integrating LINSTOR inside XCP-ng as a new hyperconverged solution). It means really decent perfs (almost the same as local storage) while keeping it robust and simple.
-
RE: XOSAN with XO Community edition
Also, we could achieve hyperconvergence "the other way": instead of having a global shared filesystem (like Gluster or Ceph), use fine-grained replication (per VM/VM disk). That's really interesting (data locality, tiering, thin provisioning, etc.). Obviously, we'll collaborate to see how to integrate this into our stack.
-
RE: XO-Lite beta
A good illustration of what I said: https://xen-orchestra.com/blog/xo-lite-components/
Next article on this: the design system that will be useful for all our apps (XO 6 included).