Containers on Bare Metal
-
Interesting, thanks.
https://containersummit.io/events/sf-2015/videos/type-c-hypervisors -
@Emad-R said in Containers on Bare Metal:
Interesting, thanks.
https://containersummit.io/events/sf-2015/videos/type-c-hypervisors
MangoCon 2 had a topic on them that sadly didn't get recorded.
-
LXD is what we use. Very fast, very mature, and good tools for it.
-
Nice, do you run them with Ceph storage, or do you simply go with the default ZFS?
-
@Emad-R said in Containers on Bare Metal:
Nice, do you run them with Ceph storage, or do you simply go with the default ZFS?
ZFS isn't a default on any system that I know of. But definitely not Ceph; Ceph isn't very performant unless you do a lot of extra work (StarWind makes a Ceph acceleration product). ZFS was only the default for Solaris Zones, not LXD. Much of LXD doesn't have ZFS as an option. We are normally on XFS.
-
https://lxd.readthedocs.io/en/latest/clustering/
https://lxd.readthedocs.io/en/latest/storage/
I think the latest versions, and especially clustering, recommend ZFS storage, which is nice because now it is easily added as a FUSE filesystem.
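For anyone following along, the backend choice above maps onto LXD's `lxc storage` CLI. A rough sketch (pool and container names here are made-up examples; the ZFS pool will be loop-backed unless you point it at a real device):

```shell
# Create a ZFS-backed storage pool. With no source given, LXD backs
# it with a loop file, which is fine for testing.
lxc storage create tank zfs

# A plain directory-backed pool for comparison (works fine on XFS hosts).
lxc storage create dirpool dir

# Launch a container onto a specific pool.
lxc launch ubuntu:18.04 web1 --storage tank

# List pools to see which driver each one uses.
lxc storage list
```

The driver argument (`zfs`, `dir`, `btrfs`, `lvm`, `ceph`) is what decides whether you get ZFS features like snapshots and copy-on-write cloning, which is why clustering guides tend to steer people toward it.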
-
@scottalanmiller said in Containers on Bare Metal:
LXD is what we use. Very fast, very mature, and good tools for it.
@Emad-R Yeah LXD has taken the OCI image idea and applied it to LXC. LXC was doing something kind of like that later on. When you did an
lxc-create -t download
it would look at a text file with links to tarballs to download. LXD has incorporated images from the beginning, which has given them a lot of flexibility, like updating and layering.
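To make the contrast concrete, here is roughly what the two flows look like side by side (container and image names are just examples):

```shell
# Old-style LXC: the "download" template fetches an index of prebuilt
# rootfs tarballs and unpacks the one you pick. Note the "--" separating
# lxc-create's own options from the template's options.
lxc-create -t download -n mycontainer -- -d ubuntu -r bionic -a amd64

# LXD's image-based equivalent: images are first-class objects that can
# be listed, cached, and updated, not just one-shot tarball downloads.
lxc image list images:
lxc launch images:alpine/3.10 mycontainer
```

The `images:` remote is the community image server LXD ships configured with; the point is that the image catalog is part of the tooling rather than a text file the template happens to parse.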
-
@Emad-R said in Containers on Bare Metal:
Very good read:
That is a good way to break them down; I liked that.
-
A few things...
-
Google and AWS don't bother running them on bare metal. While some people do, they tend to be shops that like running lots of Linux on bare metal, and for them it's an OS/platform choice rather than a hypervisor vs. non-hypervisor choice. The majority of the containers in people's datacenters and in the cloud are in VMs.
-
With the Project Pacific announcement at VMworld, VMware called out that they get better performance with their container runtime in a virtual machine than on bare metal Linux container hosts. (This makes sense once you understand that the vSphere scheduler does a better job at packing with NUMA awareness than the Linux kernel. Kit explained this on my podcast last week if anyone cares to listen.)
-
I run them on bare metal on my Pi4 cluster because I'm still waiting on drivers and EFI to be written for it so I can run a proper hypervisor on them.
-
I would like to hear more about your Pi4 cluster since the Pi4 is fairly new. Any links, hints, or suggested products?
-
@Emad-R Eh, I got 6 of them with the maximum memory (4GB). Also looking to acquire some beefier ARM platforms that I can run experimental ESXi builds on. - https://shop.solid-run.com/product/SRM8040S00D16GE008S00CH/ has caught my eye, but there are a few other ARM packages that are also reasonably priced and have different capabilities (Jetson etc. from Nvidia for CUDA). Was really hoping Rancher would sort out an ARM install, but egh, might end up running that on my Intel NUCs.
-
@StorageNinja said in Containers on Bare Metal:
Also looking to acquire some beefier ARM platforms that I can run experimental ESXi builds on. - https://shop.solid-run.com/product/SRM8040S00D16GE008S00CH/ has caught my eye
Now this looks really sweet. That's some cool stuff... both the hardware and ESXi on ARM. $459 is a little high for that CPU and only 16GB, but not horrible.