Containers on Bare Metal
-
@Emad-R said in Containers on Bare Metal:
@black3dynamite said in Containers on Bare Metal:
To use something like LXD, you would install Ubuntu and then LXD.
https://help.ubuntu.com/lts/serverguide/lxd.html
Exactly, and not KVM -> Ubuntu -> LXD.
What would I lose if I went Ubuntu -> LXD?
That's what I am thinking... what are the negatives or potential future downsides of skipping the whole type 1 virtualization layer?
I haven't used LXD enough to properly speak to the negatives or potential downsides. But I think it really depends on your needs.
There is nice documentation on LXD that can help answer some of your questions.
https://lxd.readthedocs.io/en/latest/
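To make the Ubuntu -> LXD path concrete, here is a minimal sketch of getting a first container running (the container name and image alias are just examples):

# Install LXD from the snap and run the interactive setup
sudo snap install lxd
sudo lxd init

# Launch a container from the official Ubuntu image server
lxc launch ubuntu:18.04 mycontainer

# Confirm it is running and get a shell inside it
lxc list
lxc exec mycontainer -- bash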
-
@travisdh1 said in Containers on Bare Metal:
@stacksofplates said in Containers on Bare Metal:
@travisdh1 said in Containers on Bare Metal:
@stacksofplates said in Containers on Bare Metal:
@travisdh1 said in Containers on Bare Metal:
Containers never run on bare metal. They are all considered Type-3 hypervisors. Assuming I remember correctly, it's been a while since we had that discussion.
I'm assuming he means run them on bare metal vs inside of a VM.
Then the answer is no, because it's impossible.
It really doesn't matter, so long as you've got enough CPU/RAM/IOPS to handle your workload.
Idk what this is supposed to mean, but you can definitely deploy to bare metal. Depending on how the container is constructed and what engine you're using, you can deploy just a binary that is simply a process on the system. All containers are just processes, but not all of them are single binaries with no dependencies.
Even if you're using a full OS inside of a container running in Docker, it's still using the kernel on bare metal.
That's like saying "You can deploy Hyper-V to bare metal." Of course you can, that's the entire point. Containers are just another type of virtualization. I really don't get the confusion.
No it's not, because a type 1 doesn't share the kernel with the guests. So even though the container could (it doesn't have to) be using libraries separate from the host, it's still sharing the kernel; it's just a namespace. So yes, it's still running on bare metal just like any other process running in the OS.
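A quick way to see this for yourself, assuming a host with Docker installed (the container and image names are just examples):

# Start a container in the background
docker run -d --name demo nginx

# From the HOST, the container's nginx shows up as an ordinary process
ps -ef | grep nginx

# Its "isolation" is just kernel namespaces attached to that process
PID=$(docker inspect -f '{{.State.Pid}}' demo)
ls -l /proc/$PID/ns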
-
@travisdh1 said in Containers on Bare Metal:
Type-3 hypervisors.
Never heard this term before, and I think it will expire in the future. You would just run containers on bare metal and that's it. We haven't reached that step yet, but I think we will in 10 years or so.
-
@Emad-R said in Containers on Bare Metal:
Does anyone have experience running the above? if so are you doing it in Prod/Dev ?
For like 20 years now, yeah. It's quite common.
-
@travisdh1 said in Containers on Bare Metal:
Containers never run on bare metal. They are all considered Type-3 hypervisors. Assuming I remember correctly, it's been a while since we had that discussion.
Type-C
And the majority run on bare metal. But certainly lots of people do Type-C inside a VM as well. That's what he is asking about. Both approaches are common.
-
@travisdh1 said in Containers on Bare Metal:
@stacksofplates said in Containers on Bare Metal:
@travisdh1 said in Containers on Bare Metal:
Containers never run on bare metal. They are all considered Type-3 hypervisors. Assuming I remember correctly, it's been a while since we had that discussion.
I'm assuming he means run them on bare metal vs inside of a VM.
Then the answer is no, because it's impossible.
It really doesn't matter, so long as you've got enough CPU/RAM/IOPS to handle your workload.
It is, we do both and have for a long time.
-
@Emad-R said in Containers on Bare Metal:
@travisdh1 said in Containers on Bare Metal:
Type-3 hypervisors.
Never heard this term before, and I think it will expire in the future. You would just run containers on bare metal and that's it. We haven't reached that step yet, but I think we will in 10 years or so.
That's because it is Type-C, not Type-3. Type-3 isn't used because it implies something that is incorrect.
-
Interesting, thanks.
https://containersummit.io/events/sf-2015/videos/type-c-hypervisors
-
@Emad-R said in Containers on Bare Metal:
Interesting, thanks.
https://containersummit.io/events/sf-2015/videos/type-c-hypervisors
MangoCon 2 had a topic on them that sadly didn't get recorded.
-
LXD is what we use. Very fast, very mature, and good tools for it.
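For a sense of the tooling, a few of the everyday commands (the container and snapshot names here are just examples):

# Snapshot a container and roll back to it later
lxc snapshot web snap0
lxc restore web snap0

# Copy a container, pull files out of it
lxc copy web web-test
lxc file pull web/etc/hostname .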
-
Nice, do you try to do them with Ceph storage, or do you simply go with the default ZFS?
-
@Emad-R said in Containers on Bare Metal:
Nice, do you try to do them with Ceph storage, or do you simply go with the default ZFS?
ZFS isn't a default on any system that I know of. But definitely not Ceph; Ceph isn't very performant unless you do a lot of extra work (Starwind makes a Ceph acceleration product). ZFS was only the default for Solaris Zones, not LXD. Much of LXD doesn't have ZFS as an option. We are normally on XFS.
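For reference, LXD makes the backend a per-storage-pool choice, so it's easy to try several; a minimal sketch (the pool and container names are just examples):

# Plain directory-backed pool (works anywhere, no special filesystem)
lxc storage create pool-dir dir

# ZFS pool backed by a loop file of the given size
lxc storage create pool-zfs zfs size=20GB

# Launch a container onto a specific pool
lxc launch ubuntu:18.04 c1 --storage pool-zfs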
-
https://lxd.readthedocs.io/en/latest/clustering/
https://lxd.readthedocs.io/en/latest/storage/
I think the latest versions, and especially clustering, recommend ZFS storage, which is nice because now it is added easily as a FUSE fs.
-
@scottalanmiller said in Containers on Bare Metal:
LXD is what we use. Very fast, very mature, and good tools for it.
@Emad-R Yeah, LXD has taken the OCI image idea and applied it to LXC. LXC was doing something kind of like that later on. When you did an
lxc-create -t download
it would look at a text file with links to tarballs to download. LXD has incorporated images from the beginning, which has given them a lot of flexibility like updating and layering.
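Side by side, the old LXC download template versus the image-based LXD workflow (the distro, release, and container names are just examples):

# Classic LXC: the download template fetches a rootfs tarball
lxc-create -t download -n c1 -- --dist ubuntu --release bionic --arch amd64

# LXD: images are first-class; browse a remote and launch from it
lxc image list images: | head
lxc launch images:alpine/3.10 c2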
-
@Emad-R said in Containers on Bare Metal:
Very good read:
That is a good way to break them down, I liked that.
-
A few things...
-
Google and AWS don't bother running them on bare metal. While some people do, they tend to be shops that like running lots of Linux on bare metal, and for them it's an OS/platform choice rather than a hypervisor vs. non-hypervisor choice. The majority of the containers in people's datacenters and in the cloud are in VMs.
-
VMware, with the Project Pacific announcement at VMworld, called out that they get better performance with their container runtime in a virtual machine than on bare metal Linux container hosts. (This makes sense once you understand that the vSphere scheduler does a better job at packing with NUMA awareness than the Linux kernel does. Kit explained this on my podcast last week if anyone cares to listen.)
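For context on the NUMA point: you can inspect the topology and do the packing by hand on Linux, which is exactly what a NUMA-aware scheduler automates (the cpuset values below are just examples):

# Show NUMA nodes, their CPUs, and their memory
numactl --hardware

# Manually pin a container's CPUs and memory allocations to node 0
docker run -d --cpuset-cpus=0-7 --cpuset-mems=0 nginx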
-
I run them on bare metal on my Pi4 cluster because I'm still waiting on drivers and an EFI to be written for it so I can run a proper hypervisor on them.
-
-
I would like to hear more about your Pi4 cluster since the Pi4 is fairly new. Any links, hints, or suggested products?
-
@Emad-R Eh, I got 6 of them with the maximum memory (4GB). Also looking to acquire some beefier ARM platforms that I can run experimental ESXi builds on. https://shop.solid-run.com/product/SRM8040S00D16GE008S00CH/ has caught my eye, but there are a few other ARM packages that are also reasonably priced and have different capabilities (Jetson etc. from Nvidia for CUDA and such). Was really hoping Rancher would sort out an ARM install, but eh, might end up running that on my Intel NUCs.