If all hypervisors were priced the same...
-
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does XCP-ng mean? I couldn't find it in the introduction.
Xen Cloud Platform
NG=New Generation?
I'm not sure, but that does make more sense.
Possibly... https://github.com/xcp-ng
Well, that's it: Xen Cloud Platform New Generation. That takes the prize for the longest hypervisor project name.
-
@storageninja said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (which can have layered UIs/CLIs), rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
It's just a newer kernel and some packages on top of CentOS/RHEL. But it also has some trade-offs.
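On the "manage via APIs" point above: with KVM, that API layer is typically libvirt. A minimal sketch of what API-driven management looks like, assuming the libvirt-python bindings and a local qemu:///system connection (nothing here comes from the thread itself):

```python
# Minimal sketch: enumerating KVM guests through libvirt's API instead of
# tooling installed on the host. Assumes libvirt-python and a local
# qemu:///system socket.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local KVM/QEMU daemon
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        running = state == libvirt.VIR_DOMAIN_RUNNING
        print(f"{dom.name()}: {'running' if running else 'stopped'}")
finally:
    conn.close()
```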
-
@storageninja said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (which can have layered UIs/CLIs), rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
Red Hat has been looking at running their OpenStack platform in OpenShift on RHEL Atomic. Not as small as ESXi but it’s around 700MB.
-
@stacksofplates said in If all hypervisors were priced the same...:
@storageninja said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (which can have layered UIs/CLIs), rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
Red Hat has been looking at running their OpenStack platform in OpenShift on RHEL Atomic. Not as small as ESXi but it’s around 700MB.
That way nothing is installed in the OS at all. You can actually rebase between Fedora and CentOS/RHEL in Atomic and it doesn’t touch any of your apps.
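For reference, that rebase is driven by rpm-ostree. A minimal sketch of the flow, scripted from Python for illustration; the ref name below is an assumption, so check `ostree remote refs <remote>` on your host for the real ones:

```python
# Minimal sketch: rebasing an Atomic Host to a different tree with
# rpm-ostree. Apps and data under /var are left untouched by the rebase.
import subprocess

def rebase(ref: str) -> None:
    # Stages the new base tree alongside the current one.
    subprocess.run(["rpm-ostree", "rebase", ref], check=True)
    # The new tree only becomes active after a reboot.
    subprocess.run(["systemctl", "reboot"], check=True)

# Hypothetical example ref for a CentOS Atomic Host tree:
# rebase("centos-atomic-host:centos-atomic-host/7/x86_64/standard")
```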
-
@storageninja said in If all hypervisors were priced the same...:
@emad-r said in If all hypervisors were priced the same...:
Why purchase a RAID controller when you get a good amount of reliability using software RAID? Linux software RAID has been tested over and over, and many big enterprise NAS vendors utilize it. I understand that a hardware RAID controller works most of the time for nearly anything, and that software RAID will most probably fail due to end-user fault, because it has some learning curve.
You accelerate the end-user fault because of issues with SES not working correctly (getting the right drive light to blink is strangely hard with DAS shelves), or because of a lack of end-to-end testing (good luck getting HotAdd to work with hot swap on some HBAs). You cripple performance at scale doing it on AHCI controllers (a queue depth of 25 shared across all drives vs. 600+ for a proper RAID controller or HBA).
SATA drives are fine for home backup type stuff (I have Reds at home too), but for production workloads, 5400 RPM means ~20 IOPS at low latency before they kind of fall over. I have a Ryzen desktop system, and I just boot from NVMe (M.2). Intel's VROC is interesting, but I haven't seen any server OEMs adopt it yet.
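As an aside on the software-RAID learning curve both posts touch on: mdraid health is at least easy to script against. A rough sketch that flags degraded arrays by parsing /proc/mdstat (assumes a Linux host with md arrays assembled):

```python
# Rough sketch: flag degraded Linux software RAID (md) arrays by parsing
# /proc/mdstat.
def degraded_md_arrays() -> list:
    degraded = []
    with open("/proc/mdstat") as f:
        lines = f.read().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("md"):
            name = line.split()[0]
            # The following line ends with e.g. "[2/2] [UU]"; an underscore
            # ("[2/1] [U_]") marks a failed or missing member.
            status = lines[i + 1].split()[-1] if i + 1 < len(lines) else ""
            if "_" in status:
                degraded.append(name)
    return degraded

print(degraded_md_arrays() or "all md arrays healthy")
```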
Noted, but at the same time, every project is different. I tend toward small to medium businesses, and in the Middle East region that means something different than a small to medium business in the US.
Your Ryzen system's chipset can support 6 SATA ports + 2 SATA Express, and the OEM can reuse the SATA Express ports. As I understand it, each SATA Express port basically consists of 2 dedicated SATA ports + power, so theoretically the chipset can support 10 normal SATA ports; realistically, an OEM can give you a $200 motherboard with 8 SATA ports. Now add to that the fact that Linux loves common hardware and can be installed on anything.
Also, the same $200 board can have an NVMe M.2 slot, which is great for a separate OS install.
You start to look at things differently. I think with desktop systems, AMD's new way of doing things is giving us more for less. Actually, they have been doing this for some time, but only now are we really getting something good, with a good CPU. Since when has AMD had 8 cores at 65W? (Disregard the 16 threads; with KVM you want to disable SMT, per https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liabp/liabpmicrothread.htm and the sketch below.)
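On that SMT point: on reasonably recent Linux kernels you can toggle SMT at runtime through sysfs rather than in the BIOS. A minimal sketch; the sysfs path is an assumption about your kernel (the node appeared around 4.19), and the write requires root:

```python
# Minimal sketch: checking and disabling SMT (hyperthreading) via sysfs.
SMT_CONTROL = "/sys/devices/system/cpu/smt/control"

def smt_status() -> str:
    with open(SMT_CONTROL) as f:
        # One of "on", "off", "forceoff", "notsupported", "notimplemented"
        return f.read().strip()

def disable_smt() -> None:
    with open(SMT_CONTROL, "w") as f:
        f.write("off")  # sibling threads go offline immediately, no reboot

print("SMT is currently:", smt_status())
```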
And we're talking about a $200 motherboard and a $300 CPU. What was the cost of a good, fancy RAID controller again, one that can support up to 8 drives?
It seems good ones will cost you at least $250+, and the availability of RAID cards where I live is nothing like that of CPUs and motherboards.
Living in this region teaches you a lot of hacks and tricks, but if the system is durable enough, then why not? Sure, it will be slower, but you just don't tax the chipset: don't fill it up if possible; use 8 TB x 4 instead of 4 TB x 8.
I am actually going to go for a similar build very soon, and I'm feeling confident, because I was able to simulate the environment using VMware Workstation. Yes, I can't afford a real physical machine as a home lab (actually I can, but the software one is good enough; salaries are also different here, and I'm considered to have a very high salary for my age in my country, currently earning $1,600 per month). Workstation Pro can pass AMD-V through to guest VMs, so I can get a rough idea of the real deal, and of the pitfalls and strengths. But trust me, for the price and how easy it is to manage, it will rock and blow anything out of the water.
Sure, only one person in the country will know how to run it, which is me, but I guess that's an extra point.
-
My 2 cents: if they had all been priced the same, they might not have come this far. Take VMware, for example.
-
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all; I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
-
@olivier said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all; I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
No fear or doubt, just uncertainty. But that's only because of how Citrix is treating XenServer, and how Amazon is moving from Xen to KVM.
I feel the only thing that can save Xen is XCP-ng; I'm really hoping for its success.
-
@tim_g said in If all hypervisors were priced the same...:
@olivier said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all; I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
No fear or doubt, just uncertainty. But that's only because of how Citrix is treating XenServer, and how Amazon is moving from Xen to KVM.
I feel the only thing that can save Xen is XCP-ng; I'm really hoping for its success.
That's because you have a very partial view of the Xen project. The Xen project is far more than XenServer/XCP: Xen is the core hypervisor, used by a LOT of companies (from automotive to the cloud).
A lot of companies are using Xen + their own toolstack without publicizing it (like AWS, which is NOT leaving Xen, just adding some instances on another hypervisor to get specific features not in Xen yet). Some companies (Gandi) have even switched from KVM to Xen:
https://news.gandi.net/en/2017/07/a-more-xen-future/
So your opinion is mainly forged by a limited number of sources, caught in a loop of saying "Xen is dying" for 10 years. The main reason is that Xen is far less "segmented" than KVM (e.g. it's easier to write clickbait articles about Xen security issues than about KVM's, despite KVM's security process being almost catastrophic/non-transparent).
-
@olivier said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
@olivier said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all; I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
No fear or doubt, just uncertainty. But that's only because of how Citrix is treating XenServer, and how Amazon is moving from Xen to KVM.
I feel the only thing that can save Xen is XCP-ng; I'm really hoping for its success.
That's because you have a very partial view of the Xen project. The Xen project is far more than XenServer/XCP: Xen is the core hypervisor, used by a LOT of companies (from automotive to the cloud).
A lot of companies are using Xen + their own toolstack without publicizing it (like AWS, which is NOT leaving Xen, just adding some instances on another hypervisor to get specific features not in Xen yet). Some companies (Gandi) have even switched from KVM to Xen:
https://news.gandi.net/en/2017/07/a-more-xen-future/
So your opinion is mainly forged by a limited number of sources, caught in a loop of saying "Xen is dying" for 10 years. The main reason is that Xen is far less "segmented" than KVM (e.g. it's easier to write clickbait articles about Xen security issues than about KVM's, despite KVM's security process being almost catastrophic/non-transparent).
I see. That makes sense.
-
@tim_g said in If all hypervisors were priced the same...:
@olivier said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all; I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
No fear or doubt, just uncertainty. But that's only because of how Citrix is treating XenServer, and how Amazon is moving from Xen to KVM.
Amazon is a concern, but Citrix is not, IMHO. Citrix has been out to cripple Xen for years, and if anything, that just shows a lack of health at Citrix, not Xen.
-
Once Xen gets the PV driver features backported to core Xen PV, we will see a leap forward too, I think.
-
@scottalanmiller I don't see exactly what you are talking about. What's the PV driver feature?
-
@scottalanmiller Citrix hasn't cared about the server virt market for a while now.
-
@olivier said in If all hypervisors were priced the same...:
@scottalanmiller Citrix hasn't cared about the server virt market for a while now.
Did they ever? They bought Xen for the name so that they could confuse their customers into thinking that XenApp was somehow virtualization.
-
@olivier said in If all hypervisors were priced the same...:
@scottalanmiller I don't see exactly what are you talking about. What's PV driver feature?
Xen has some performance advantages using its PV drivers over doing full PV.
-
@scottalanmiller You must have mixed some things up. PV mode doesn't need PV drivers by definition. Did you mean HVM (to be in PVHVM, then)?
-
@olivier said in If all hypervisors were priced the same...:
@scottalanmiller You must have mixed some things up. PV mode doesn't need PV drivers by definition. Did you mean HVM (to be in PVHVM, then)?
I know it doesn't, but it needs the performance tech from them.
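For anyone lost in the PV / HVM / PVHVM distinction in this exchange: from inside a Linux guest you can usually check which mode you are running in via sysfs. A minimal sketch; /sys/hypervisor/type is long-standing, but the guest_type node is an assumption about newer kernels, hence the fallback:

```python
# Minimal sketch: detecting the Xen virtualization mode from inside a
# Linux guest.
import os

def xen_mode() -> str:
    if not os.path.exists("/sys/hypervisor/type"):
        return "no hypervisor sysfs node (bare metal or non-Xen)"
    with open("/sys/hypervisor/type") as f:
        if f.read().strip() != "xen":
            return "not a Xen guest"
    try:
        with open("/sys/hypervisor/guest_type") as f:
            return "Xen " + f.read().strip()  # "PV", "HVM", or "PVH"
    except FileNotFoundError:
        return "Xen (mode not exposed by this kernel)"

print(xen_mode())
```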
-
@olivier said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all; I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
Do you have any 3rd-party surveys or tracking showing growth in Xen? All the public data (and private sets like IDC) that I've seen show it losing market share.
-
@scottalanmiller said in If all hypervisors were priced the same...:
@olivier said in If all hypervisors were priced the same...:
@scottalanmiller Citrix doesn't care anymore on server virt market, since a while now.
Did they ever? They bought Xen for the name so that they could confuse their customers into thinking that XenApp was somehow virtualization.
They bought it because VMware bundled the hypervisor with their VDI product, so Citrix bought Xen and had its devs focus on VDI-friendly features (APIs for provisioning, and GPU support). They briefly tried to take on ESXi in the enterprise but abandoned that a few years back.
Citrix also pushed CloudStack for a while to hosting providers (but seems to have given up on that too).