Posts made by olivier
-
RE: Breaking Down Barriers: XCP-ng Offers Open-Source Power Over VMware
Happy to see XCP-ng's name back in here
-
RE: XO-Lite beta
A good illustration of what I said: https://xen-orchestra.com/blog/xo-lite-components/
Next article on this: the design system that will be useful for all our apps (XO 6 included).
-
RE: XO-Lite beta
Hey there,
To answer @Pete-S's question about the "why" of XO Lite:
- addressing the chicken-and-egg problem when you bootstrap your infrastructure (it's easier to start doing very basic stuff when there's nothing to install).
- covering cases where you lose access to your XOA (so as a "second choice": it's better than the TUI for many operations, and there's no need to plug in a screen, since any machine on the network can display the web UI, even a tablet/phone where a browser beats SSH access)
- preparing the new XO 6 web UI, since XO Lite will share many components with it (the work done for XO Lite will be "recycled" at 90% for XO 6 on the "basic" management features): kind of a head start on XO 6 if you prefer. It also helped us develop new ways and processes while building the new UI.
- giving XCP-ng a "visible" out-of-the-box interface with our brand identity
- killing the last remaining reasons to use XCP-ng Center
-
RE: KVM or VMWare
@Francesco-Provino by scale I meant the scale of the Xen core developer team (ie headcount). It's not that big, especially compared to Xen adoption at such… scale.
So if a lot of things got done with a relatively small team, that's a pretty nice clue about what could be done with more focus and people! (and yes, Citrix was completely unfocused on Xen only a few years after acquiring it).
But this is clearly changing (the situation, not Citrix).
-
RE: KVM or VMWare
haha, nice try @Francesco-Provino It's foolish to think that.
First, this is not true. Second, the number of active Xen users is growing (a reasonable part of it thanks to XCP-ng), and so is the number of contributors (also thanks to us).
Xen, by design, is more secure than the other "big" Open Source alternative. The only downside is that it requires more knowledge to move it forward.
The main issues were 1. Citrix acquiring it but not pushing it fast enough, because it wasn't part of their core skills, and 2. Citrix not having any Open Source knowledge.
As a true type 1 hypervisor, you can accomplish great things, and yes, it requires some effort. That's exactly why we are partnering with bigger players to really show the true potential of Xen.
Maybe you lack an understanding of scale: Xen was built and maintained by a relatively small number of people, and despite that, it works better than most competitors. And you can indeed consider us a small player right now, but we are roughly doubling each year. Just next year, we'll have more people working on Xen than Citrix itself.
I hope this says a bit about Xen's future.
edit: just a few examples of us driving Xen innovation:
- https://xcp-ng.org/blog/2021/09/14/runx-next-generation-secured-containers/ (in partnership with Stefano from Xilinx)
- https://xcp-ng.org/blog/2021/07/12/dpus-and-the-future-of-virtualization/ (in partnership with Kalray, a CPU manufacturer)
- https://vates.fr/blog/vates-joins-riscv-international/ (porting Xen to RISC-V)
- https://vates.fr/blog/kalray-vates-and-scaleway-alliance/ (alliance with a large Cloud company, Scaleway)
That's a LOT of innovation for a dying project. But well, I've been hearing that Xen is dead for the last 10 years, so you are not the first to be wrong on this.
-
RE: KVM or VMWare
@stacksofplates said in KVM or VMWare:
@olivier said in KVM or VMWare:
@stacksofplates said in KVM or VMWare:
@pete-s said in KVM or VMWare:
It isn't the ability to automate that is the problem. It's the availability of easy-to-use tools that is the problem.
That's the whole point I'm making.
KVM is hard to automate. Not that it's impossible, but the tooling doesn't exist to where you can easily automate like with VMware.
And that's a very good point. That's why here at Vates, we made various efforts in XCP-ng/Xen Orchestra, providing multiple solutions: Packer, Terraform and even Ansible integration. That's also why Xen Orchestra really makes sense as "middleware": a single central point to consume via its API. Like vCenter, in fact.
This is a true way to create value on top of it. The other aspect is all about integration, like we did with Netbox for example (syncing all VMs and hosts, with their IP addresses, config and such, to Netbox).
Right, VMware or Xen Orchestra. If the tool isn't built with an API-first mindset, the work needed to automate it greatly increases.
I agree. And to be honest, we learnt it "by accident" (ie our API was made for our internal usage). But now we are working more in the direction of "API as a first-class citizen", thanks to the large amount of feedback we got from our users. I'm happy we made the right "overall design" decisions at first, allowing us to rely on Xen Orchestra as a central point (vs one UI per host, which can be handy but doesn't scale).
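To illustrate that "single central point" idea, here is a minimal sketch of talking to the XO JSON-RPC-over-WebSocket API from Python. The host and credentials are placeholders, and the method names (`session.signIn`, `xo.getAllObjects`) are as I recall them, so double-check against `xo-cli --list-commands` on your own install:

```python
# Minimal sketch: talk to Xen Orchestra's JSON-RPC API over WebSocket.
# Requires the websocket-client package: pip install websocket-client
import itertools
import json

import websocket

_ids = itertools.count(1)

def call(ws, method, params=None):
    """Send one JSON-RPC 2.0 request and wait for the matching response."""
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params or {}}
    ws.send(json.dumps(req))
    while True:
        msg = json.loads(ws.recv())
        # Skip server notifications; return only the response to our request.
        if msg.get("id") == req["id"]:
            if "error" in msg:
                raise RuntimeError(msg["error"])
            return msg["result"]

# Hypothetical XOA address and credentials:
ws = websocket.create_connection("wss://xoa.example.org/api/")
call(ws, "session.signIn", {"email": "admin@example.org", "password": "secret"})

# One central API for the whole infrastructure: list every VM XO knows about.
vms = call(ws, "xo.getAllObjects", {"filter": {"type": "VM"}})
for vm in vms.values():
    print(vm["name_label"], vm["power_state"])
```

This is the same endpoint that `xo-cli` and our Terraform provider consume, so anything you can click in the UI, you can script.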
-
RE: KVM or VMWare
@stacksofplates said in KVM or VMWare:
@pete-s said in KVM or VMWare:
It isn't the ability to automate that is the problem. It's the availability of easy-to-use tools that is the problem.
That's the whole point I'm making.
KVM is hard to automate. Not that it's impossible, but the tooling doesn't exist to where you can easily automate like with VMware.
And that's a very good point. That's why here at Vates, we made various efforts in XCP-ng/Xen Orchestra, providing multiple solutions: Packer, Terraform and even Ansible integration. That's also why Xen Orchestra really makes sense as "middleware": a single central point to consume via its API. Like vCenter, in fact.
This is a true way to create value on top of it. The other aspect is all about integration, like we did with Netbox for example (syncing all VMs and hosts, with their IP addresses, config and such, to Netbox).
Automation is key.
Some links/resources:
-
RE: No way to create larger than 2TB virtual disk with Xen or XCP-NG?
I think the VM could still be migrated so long as I detach the passthrough disks first, move those disks to a new host, migrate the VM to the new host, and then re-attach/passthrough the disks on the new host.
You can indeed. Not very practical, but there's no technical barrier.
I wonder - can you create an NFS mount point in XenServer or XCP-NG? Then just share that via loopback?
I don't really see the point of doing that. I had in mind an NFS share mounted directly in the VM. Simple and efficient (if you already have a NAS, obviously).
-
RE: No way to create larger than 2TB virtual disk with Xen or XCP-NG?
You can attach more than 7 disks when you have the tools installed in the VM. In your case, you don't need a VM in the traditional sense, ie something flexible that you can migrate etc. So you can indeed attach your disks directly, regardless of the hypervisor you choose.
Another, more flexible alternative would be to have a "normal" VM, but attach an NFS share to it to store your data. This way you keep the flexibility of the VM and get the large storage you need. The extra requirement is any NFS-capable machine (even a very cheap NAS).
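For illustration, the in-VM side is a one-liner; a sketch of an `/etc/fstab` entry, assuming a hypothetical NAS at `nas.example.lan` exporting `/export/data`:

```
nas.example.lan:/export/data  /mnt/data  nfs  defaults,_netdev  0  0
```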
-
RE: No way to create larger than 2TB virtual disk with Xen or XCP-NG?
Thanks for correcting the sentence @travisdh1 Indeed, SMAPIv1 is using the VHD format everywhere. This format is limited to 2TiB by "design" [1]. This has nothing to do with XO, or even with XCP-ng specifically, since XCP-ng is a fork of XenServer (ie a copy with new or improved code). So remember: regardless of which filesystem you use, as long as you are using the VHD format to store virtual disks, you are limited to 2TiB.
However, SMAPIv3 is using the `qcow2` format instead, "solving" this limitation. We (the XCP-ng team) are currently working on improving SMAPIv3 to support disk import/export in `qcow2` (which isn't even done by Citrix people themselves). As soon as we get that, the next step is to write drivers, for `ext4` for example, which is doable relatively easily.
One of the main issues with SMAPIv3 (there are others) is the fact that a part of the development is done privately by Citrix instead of collaborating (see this conversation on GitHub), so the goal is to catch up on our side, in order to get a public upstream faster and become the de facto upstream standard. We are working toward that, but it's not something you solve in one week (you need to go deep into qemu-dp/Xen blktap, see our efforts here, etc.)
[1]: The VHD format has a built-in limitation of just under 2 TiB (2040 GiB) for the size of any dynamic or differencing VHD. This is due to a sector offset table that only allows for a maximum of a 32-bit quantity, calculated by multiplying 2^32 by 512 bytes per sector.
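To make the footnote's arithmetic explicit, a quick sanity check (plain arithmetic, nothing XCP-ng specific):

```python
# The sector offset table stores 32-bit sector numbers; a sector is 512 bytes.
max_sectors = 2 ** 32   # largest count a 32-bit quantity can express
sector_size = 512       # bytes

limit = max_sectors * sector_size
print(limit / 2 ** 40, "TiB")  # 2.0 TiB theoretical ceiling
print(limit / 2 ** 30, "GiB")  # 2048.0 GiB (usable max is slightly under: 2040 GiB)
```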
edit: also, as soon as we get `qcow2` import/export support in XCP-ng, we could use that format in XO to store backups. So far, there are only 2 options to get disk data from XS/XCP-ng: raw or VHD (that's why XO is storing VHD files, because… that's what we get from the hypervisor!)
-
RE: XOSAN with XO Community edition
Also, we could achieve hyperconvergence "the other way": instead of having a global shared filesystem (like Gluster or Ceph), using fine-grained replication (per VM/VM disk). That's really interesting (data locality, tiering, thin provisioning etc.). Obviously, we'll collaborate to see how to integrate this in our stack.
-
RE: XOSAN with XO Community edition
FYI, we started very interesting discussions with the LINBIT guys (we could achieve something really powerful by integrating LINSTOR inside XCP-ng as a new hyperconverged solution). It means really decent performance (almost the same as local storage) while keeping it robust and simple.
-
RE: XCP-ng project
I said Q1 for the first release. On schedule. https://xcp-ng.github.io/news/2018/03/31/first-xcp-ng-release.html
-
RE: If all hypervisors were priced the same...
@scottalanmiller You must have mixed some stuff up. PV mode doesn't need PV drivers, by definition. Did you mean HVM (to be in PVHVM then)?
-
RE: If all hypervisors were priced the same...
@scottalanmiller Citrix hasn't cared about the server virtualization market for a while now.
-
RE: If all hypervisors were priced the same...
@scottalanmiller I don't see exactly what you are talking about. What's a "PV driver feature"?
-
RE: If all hypervisors were priced the same...
@tim_g said in If all hypervisors were priced the same...:
@olivier said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all. I've been hearing this since 2006. It's like saying Linux is not secure because it's Open Source.
No fear or doubt, just uncertainty. But this is only because of how Citrix is treating XenServer, and how Amazon is moving from Xen to KVM.
I feel the only thing that can save Xen is XCP-ng. I'm really hoping for its success and have high hopes for it.
That's because you have a very partial view of the Xen project. The Xen project is far more than XenServer/XCP. Xen is the core hypervisor, used by a LOT of companies (from automotive to the Cloud).
A lot of companies are using Xen + their own toolstack without making publicity around it (like AWS, which is NOT leaving Xen, just adding some instances on another HV to get some specific features not in Xen yet). Some companies (Gandi) even switched from KVM to Xen:
https://news.gandi.net/en/2017/07/a-more-xen-future/
So your opinion is mainly forged by a limited number of sources, in a loop of repeating "Xen is dying" for the last 10 years. The main reason is that Xen is far less "segmented" than KVM (eg it's easier to write clickbait articles on Xen security issues than on KVM's, despite KVM's security process being almost catastrophic/non-transparent).
-
RE: XOSAN with XO Community edition
Yes, we switched to `master`, but it's been a while now (months?)
Also, it's up to you to decide which "head" (commit/tag/branch) to follow in your own scripts ¯\_(ツ)_/¯
-
RE: XOSAN with XO Community edition
@danp This is just because we merged everything into the monorepo. I don't think there is any issue pulling everything on master.