If all hypervisors were priced the same...
-
@jaredbusch said in If all hypervisors were priced the same...:
@scottalanmiller said in If all hypervisors were priced the same...:
@stacksofplates said in If all hypervisors were priced the same...:
@coliver said in If all hypervisors were priced the same...:
@scottalanmiller said in If all hypervisors were priced the same...:
@coliver said in If all hypervisors were priced the same...:
@scottalanmiller said in If all hypervisors were priced the same...:
@emad-r said in If all hypervisors were priced the same...:
- VMware ESXi, because it was the first to put a web UI on the hypervisor for management, I reckon, ...
Was it? Maybe, I'm not sure. But we were all begging for it from everyone at the time that they did it.
I was in high school, I think, and I remember it was a pretty big deal.
OMG HS, for most of us this "just happened".
We've talked fairly extensively about how old you are.
Ya like I’m within a couple years of being able to be his kid.
You can all just STFU now about the age thing...
Feeling old you? Your DOB you!
-
@stacksofplates said in If all hypervisors were priced the same...:
@coliver said in If all hypervisors were priced the same...:
@scottalanmiller said in If all hypervisors were priced the same...:
@coliver said in If all hypervisors were priced the same...:
@scottalanmiller said in If all hypervisors were priced the same...:
@emad-r said in If all hypervisors were priced the same...:
- VMware ESXi, because it was the first to put a web UI on the hypervisor for management, I reckon, ...
Was it? Maybe, I'm not sure. But we were all begging for it from everyone at the time that they did it.
I was in high school, I think, and I remember it was a pretty big deal.
OMG HS, for most of us this "just happened".
We've talked fairly extensively about how old you are.
Ya like I’m within a couple years of being able to be his kid.
Why do we need to bring up the age thing? We all just get along in some way; age here is not the factor.
-
@dustinb3403 As someone who ran Xen for a while, I never bothered to look at the source code.
While I have access to one of the commercial hypervisors' code, and early builds, the code itself is really the last thing I'm generally interested in (I'm mostly interested in health-check code, as I'm working with engineering on some new ones). I've reported CVE-worthy bugs in commercial software, and reading the code was really the last thing I needed to do to find them. It's generally as simple as finding an exception case, or noticing that they are using a protocol that can't be secured (TFTP) improperly (leaving things in the directory).
-
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (that can have layered UI/CLI) rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
-
@emad-r said in If all hypervisors were priced the same...:
KVM, not because of KVM itself, but because it runs on, and is actively supported and updated on, Linux OSes. So eventually we will get all the features (if not more) and benefits of ESXi via external packages like mdraid + cockpit, so you can build a pretty strong system, but the learning curve can scare people away.
People talk a lot about MDRAID, but given how hit-or-miss hot-add is on HBAs (glares at HPE), or that it's commonly done on AHCI controllers (garbage performance: QD=25 for ALL drives!), I don't see what the big deal is about buying a proper RAID controller that you can access out of band (iLO/iDRAC), that has proper hot-add support and an NVDIMM cache, or layering a distributed SDS system on top (in which case you don't use MDRAID). Even Red Hat was requiring a local RAID controller for their cluster HCI thing last time I checked.
-
@bnrstnr said in If all hypervisors were priced the same...:
I've never even used VMware, but if every single feature were available for free (like all the other hypervisors), I'm pretty sure that's a no-brainer.
It's not just features but the ecosystem to consider. Hypervisor xxx may work for what you do, but what if you need to run XenDesktop? It's not a supported hypervisor for them to do PVS/MCS automation with. What if you need FIPS 140-2 compliance, or need a DISA STIG?
What if you need NSX/microsegmentation and service insertion support? NSX-T can cover KVM, but for Hyper-V or Xen you'll need to deploy a gateway.
Hypervisor requirements tend to not live in a vacuum, and that drives a lot of stuff.
-
@storageninja said in If all hypervisors were priced the same...:
What value does Fedora Server bring for actually running on the KVM hosts?
Control, access, security, open source, etc.
Cockpit, for example. Any Linux tools at your disposal.
I never said anything about installing kitchen sinks. Dumb assumption.
-
@storageninja said in If all hypervisors were priced the same...:
@emad-r said in If all hypervisors were priced the same...:
KVM, not because of KVM itself, but because it runs on, and is actively supported and updated on, Linux OSes. So eventually we will get all the features (if not more) and benefits of ESXi via external packages like mdraid + cockpit, so you can build a pretty strong system, but the learning curve can scare people away.
People talk a lot about MDRAID, but given how hit-or-miss hot-add is on HBAs (glares at HPE), or that it's commonly done on AHCI controllers (garbage performance: QD=25 for ALL drives!), I don't see what the big deal is about buying a proper RAID controller that you can access out of band (iLO/iDRAC), that has proper hot-add support and an NVDIMM cache, or layering a distributed SDS system on top (in which case you don't use MDRAID). Even Red Hat was requiring a local RAID controller for their cluster HCI thing last time I checked.
You can always start with CentOS minimal or Fedora minimal, then install KVM on them; you will be surprised how lean and small the system is. Regarding why Fedora, check this:
https://mangolassi.it/topic/16450/meltdown-shows-why-to-avoid-lts-releases
Second of all, the performance is not bad in my tests; it's very good even with 2 degraded disks. And why purchase a RAID controller when you get a good amount of reliability using software RAID? Linux software RAID has been tested a lot, and many big companies behind enterprise NAS systems utilize it. I understand that a hardware RAID controller works most of the time for nearly anything, and that software RAID will most probably fail due to end-user fault, because it has some learning curve.
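To illustrate the learning curve being described, a minimal Linux software RAID setup with mdadm might look like the following sketch (the device names /dev/sdb and /dev/sdc are assumptions; substitute your actual disks):

```shell
# Create a RAID1 mirror from two disks (assumed device names)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial sync and check array health
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# Record the array so it assembles on boot (path used on Fedora/CentOS)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
```

A failed member is replaced with `mdadm --manage /dev/md0 --fail`/`--remove`/`--add`, which is exactly the kind of step where end-user mistakes happen.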
Other than that, if you look at the modular approach here:
Any Linux OS, really
Linux RAID
Cockpit
KVM
That's pretty sweet, and KVM is getting a nice management interface almost by accident, which means we can build a very reliable system on the cheap.
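For what it's worth, that modular stack is only a few packages on a minimal CentOS/Fedora install. A sketch, assuming a dnf-based distro with these usual package names:

```shell
# Virtualization stack: KVM/QEMU, libvirt, and guest install tooling
sudo dnf install -y qemu-kvm libvirt virt-install

# Cockpit web UI plus its virtual machines plugin
sudo dnf install -y cockpit cockpit-machines

# Start libvirt and the Cockpit socket (UI served on https://host:9090)
sudo systemctl enable --now libvirtd cockpit.socket
```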
Not sure what you mean about proper hot-add support? You either have it or you don't.
And there are good chipsets lately from AMD and Intel, especially after Ryzen, and we have truly enterprise-quality SATA disks from WD; the Red series are proven, reliable, durable HDDs using PMR. Just steer away from the HAMR ones and you will be good to go.
-
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? Couldn't find it in the introduction.
-
@emad-r said in If all hypervisors were priced the same...:
Why purchase a RAID controller when you get a good amount of reliability using software RAID? Linux software RAID has been tested a lot, and many big companies behind enterprise NAS systems utilize it. I understand that a hardware RAID controller works most of the time for nearly anything, and that software RAID will most probably fail due to end-user fault, because it has some learning curve.
You accelerate the end-user fault because of issues with SES not working correctly (getting the right drive light to blink is strangely hard with DAS shelves), or because of a lack of end-to-end testing (good luck getting hot-add to work on hot swap on some HBAs). You cripple performance at scale doing it on AHCI controllers (a queue depth of 25 for all drives vs. 600+ for a proper RAID controller or HBA).
SATA drives are fine for home backup type stuff (I have Reds at home too), but for production workloads 5400 RPM means ~20 IOPS at low latency before they kind of fall over. I have a Ryzen desktop system, and I just boot from NVMe (M.2). Intel's vROC is interesting, but I haven't seen any server OEMs adopt it yet.
-
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? Couldn't find it in the introduction.
It's a fork of XenServer that tries to bring back the APIs and features that Citrix has locked out of the free version (and also back-port security patches to older versions, since Citrix now only does that beyond 6 months for paid users). It's being run by a small community group.
Citrix only sees XenServer as useful as a means to an end for VDI (and they have been slowly stepping down their investment in it). The Linux Foundation (which technically has Xen) doesn't really care (they are backing KVM). So it's up to a ragtag band of rebels to keep Xen going...
-
@storageninja said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? Couldn't find it in the introduction.
It's a fork of XenServer that tries to bring back the APIs and features that Citrix has locked out of the free version (and also back-port security patches to older versions, since Citrix now only does that beyond 6 months for paid users). It's being run by a small community group.
Sorry, I meant it looks like an acronym; if so, what does it mean?
-
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? Couldn't find it in the introduction.
Xen Cloud Platform
https://wiki.xen.org/wiki/XCP_Overview
-
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? Couldn't find it in the introduction.
Xen Cloud Platform
NG=New Generation?
-
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? Couldn't find it in the introduction.
Xen Cloud Platform
NG=New Generation?
I'm not sure, but it does make more sense.
-
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? Couldn't find it in the introduction.
Xen Cloud Platform
NG=New Generation?
I'm not sure, but it does make more sense.
Possibly... https://github.com/xcp-ng
-
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? Couldn't find it in the introduction.
Xen Cloud Platform
NG=New Generation?
I'm not sure, but it does make more sense.
Possibly... https://github.com/xcp-ng
Well, that's it: Xen Cloud Platform New Generation. That has to be the longest hypervisor project name.
-
@storageninja said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (that can have layered UI/CLI) rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
It’s just a newer kernel and some packages over CentOS/RHEL. But it also has some trade-offs.
-
@storageninja said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (that can have layered UI/CLI) rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
Red Hat has been looking at running their OpenStack platform in OpenShift on RHEL Atomic. Not as small as ESXi but it’s around 700MB.
-
@stacksofplates said in If all hypervisors were priced the same...:
@storageninja said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMWare.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (that can have layered UI/CLI) rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
Red Hat has been looking at running their OpenStack platform in OpenShift on RHEL Atomic. Not as small as ESXi but it’s around 700MB.
That way nothing is installed in the OS at all. You can actually rebase between Fedora and CentOS/RHEL in Atomic and it doesn’t touch any of your apps.
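A sketch of what that rebase looks like with rpm-ostree (the ref name below is an example from that era and is an assumption; check what your configured remotes actually serve):

```shell
# Show the currently booted deployment and any pending ones
rpm-ostree status

# Rebase the host to a different tree, e.g. onto a Fedora Atomic ref
# (example ref; substitute one your remote provides)
sudo rpm-ostree rebase fedora-atomic:fedora/27/x86_64/atomic-host

# The new tree is staged alongside the old one; reboot into it,
# or roll back to the previous deployment if something breaks
sudo systemctl reboot
# sudo rpm-ostree rollback
```

Because deployments are atomic and kept side by side, the rebase never touches the running apps until the reboot, which is the point being made above.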