It isn't the ability to automate that's the problem. It's the availability of easy-to-use tools that's the problem.
That's the whole point I'm making.
KVM is hard to automate. Not that it's impossible, but the tooling doesn't exist to let you automate as easily as you can with VMware.
Agreed, and I don't think that's the point of concern here. The issue at hand should be "is the automation that VMware offers actually used by the OP, or should it be?" I believe the answer is no to being used today, and likely no to whether it should be used. It's a very small deployment, and the overhead of the automation, even when you have VMware, is too high. And even if we agree that it should be used, probably because an MSP/ITSP is brought in to effectively make the environment larger and change some of the scale discussion, the bigger question would be "will the OP's environment opt to do that anyway?" If that answer is "no" in the practical sense, then the automation point becomes moot.
I "think" we can all agree that VMware has better standard, built-in automation, and that KVM is completely automatable if you put in the extra, non-standard effort. So if we were judging on standard automation, VMware would have an important edge in that area. That point shouldn't be in dispute. We can argue how close KVM gets while still being behind, sure.
But the key point here, for me, is that I believe, based on knowing the environment a bit, that that automation is not being used and won't be if VMware remains.
@scottalanmiller I have tried to connect via VirtualBox before, but after the Windows and Linux setups the screen would just turn black, so I resorted to QEMU and followed your discussion, but now it won't connect on Ubuntu.
Podman is not Kubernetes. Also when you install Kubernetes you don't get a podman1 service (or any type of podman service).
When you install Kubernetes that way you don't get a Kubernetes service. You seemingly have to start kube-proxy, kube-scheduler, kube-controller-manager, kube-apiserver, and the kubelet separately.
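If you want to see what actually ends up registered on the node with that kind of install, a quick systemd check like this (just a generic query, not tied to any particular install method) will list whatever kube or podman units exist:

```
# list any systemd units that look like Kubernetes or podman services
systemctl list-units --all 'kube*' 'podman*'

# and check whether the kubelet itself got registered as a service
systemctl status kubelet
```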
It installs Docker, which is deprecated in k8s now. They have switched to containerd, which is pretty much the standard runtime at this point.
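And if anyone wants to verify which runtime a given cluster is actually using, kubectl shows it per node (assuming you have kubectl access to the cluster):

```
# the CONTAINER-RUNTIME column shows e.g. containerd://... or docker://...
kubectl get nodes -o wide
```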
So I'll stick with my original recommendation.
Yep, this is why I need to mess with this stuff in my home lab. I can't even talk about it intelligently yet!
Not exactly the same thing, but you might want to look into how to create a VM from scratch.
Meaning a script that will set up a VM with vCPUs, memory, storage, network, etc., then boot it from an ISO and have it do an unattended install, create whatever users you want, and install the packages you need.
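As a rough sketch of what that script could look like on a KVM/libvirt host, here's virt-install driven by a kickstart file; the VM name, sizes, ISO path, and ks.cfg are all placeholders for your own values:

```
#!/bin/sh
# Sketch: create the VM and start an unattended CentOS/RHEL-style install.
virt-install \
  --name lab-vm01 \
  --vcpus 2 \
  --memory 4096 \
  --disk size=20 \
  --network network=default \
  --graphics none \
  --os-variant centos-stream9 \
  --location /var/lib/libvirt/images/CentOS-Stream-9.iso \
  --initrd-inject ks.cfg \
  --extra-args "inst.ks=file:/ks.cfg console=ttyS0"
```

The kickstart file (ks.cfg) is what handles the users and packages part of that.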
That's one of the next things I'm looking into.
@EddieJennings Also remember things like Kickstart in Red Hat-based operating systems. In Fedora/CentOS/RHEL you can use a kickstart file to automatically select all the install-time options for the OS. A short time later you've got a fresh server, and the only time it cost you was running the creation script on your hypervisor.
One of the things I'll need to figure out going the Kickstart route is setting the hostname to what I want it to be at installation time. Likely not difficult to do, I just have to figure it out. Or perhaps I can take the approach of making a clean, minimal install and then configuring it later for whatever specific thing I want the VM to do for my lab / testing.
Inside the kickstart file you'll find something like this:
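```
# (all values here are placeholders; the network line is where the hostname is set)
lang en_US.UTF-8
keyboard us
timezone UTC
network --bootproto=dhcp --device=link --activate --hostname=lab-vm01.example.com
rootpw --plaintext changeme
```

The --hostname option on the network line is what covers the install-time hostname question above.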
We use Debian as our go-to, and there it's called a preseed file. The only real thing that can be tricky is telling the installer which kickstart/preseed file to use. You can do it in different ways. If you don't want to rely on DHCP/TFTP/PXE etc. you can roll your own ISO file. I think the kickstart file can also be mounted as a drive that the installer will detect when it starts.
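For the preseed side, if you don't want to remaster the ISO, one common approach (assuming the installer can reach a web server of yours; the URL below is a placeholder) is to point it at the file with boot parameters at the installer's boot prompt, roughly:

```
auto=true priority=critical url=http://192.0.2.10/preseed.cfg
```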
I think the best approach is to make an automated installation with the same basic settings everywhere and then have some of them changed later on. For example, you can use a fixed hostname that is later changed via Ansible.
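A small sketch of that last step, assuming a group called new_vms and a placeholder hostname:

```
# Sketch: rename a freshly installed VM that came up with the fixed hostname.
- hosts: new_vms
  become: true
  tasks:
    - name: Set the real hostname after the automated install
      ansible.builtin.hostname:
        name: lab-vm01.example.com
```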