Containers in IT
-
@Kelly said:
One of the questions I haven't had answered yet is where Docker/LXC/etc. fits in the virtualization stack. Would it be on top of a "Type 1" hypervisor or in lieu of one?
The most important answer here is that containers can be in either place. You will commonly find them in both places...
-
-
@scottalanmiller said:
@Kelly said:
And I have no idea how or why this ended up in SAM-SD.
Fixed
Thanks
-
@scottalanmiller said:
@Kelly said:
One of the questions I haven't had answered yet is where Docker/LXC/etc. fits in the virtualization stack. Would it be on top of a "Type 1" hypervisor or in lieu of one?
The most important answer here is that containers can be in either place. You will commonly find them in both places...
I have had some devs pushing me to implement containers because it is "closer to the metal". How accurate is this? If it is run on top of XS, does that add much overhead?
-
For the bulk of container usage, both internal and external (meaning Amazon or similar), containers are going to run on top of the hypervisor stack. So that would mean building large Docker or LXC VMs that then, in turn, run a lot of containers on top. This is what NTG is setting up now: containers on top of our Scale, which is a KVM Type 1 system. It is extremely rare that a shop is going to be deploying only containers, even huge shops or very modern ones, so having containers on top of virtualization makes sense so that workloads can remain mixed.
-
Shops doing an extremely large amount of containers or, very rarely, only containers may consider treating the Linux OS as a virtualization platform itself (like Proxmox does) and running containers on a bare metal Linux install rather than in a VM. This works fine too, but it is much less common. You would normally only do this where the density was extremely high.
This is traditionally very common on big Sparc platforms, where running thousands of containers on a single box is not unheard of. This has been a viable Solaris container model for a decade now.
-
@Kelly said:
I have had some devs pushing me to implement containers because it is "closer to the metal". How accurate is this? If it is run on top of XS, does that add much overhead?
If you implement LXC instead of Xen, rather than on top of Xen, then yes, you are marginally closer to the metal. Xen is already closer to the metal than other hypervisors, though. If you are doing all Linux on Xen, you can run full PV, which is in between a Type 1 hypervisor and a container - it is as close to the metal as you can get while still having its own kernel. Going to containers gets closer to the metal by giving up the unique kernel; all containers share the kernel with the parent OS.
Remember... closer to the metal is considered a bad thing. The whole point of virtualization is that being too close to the metal is risky and cumbersome, a legacy of a different era. So based on their statement, you'd want to avoid containers.
-
Containers like LXC are nominally faster than Xen, but they are less stable and less secure.
-
One of the benefits that virtualization has that containers don't really have yet is the ability to live migrate an existing VM from one host to another. Containers don't really have a way to do that, as far as I am aware.
I realize that with containers, spinning up a new machine is easy and fast, but you lose the data that was in the original container, if I understand the way they work correctly.
-
@dafyre said:
I realize that with containers, spinning up a new machine is easy and fast, but you lose the data that was in the original container, if I understand the way they work correctly.
The idea is that containers should be stateless. Nothing makes this true at the technology level, of course, but the idea is that things like databases don't run in containers, only stateless application code. So there should be nothing to migrate over.
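To sketch what "stateless" means in practice (the base image, file names, and environment variable here are illustrative assumptions, not from the thread), a stateless application container holds only code and configuration, never data:

```
# Dockerfile - minimal sketch of a stateless application container
# (image, file names, and env var are assumptions for illustration)
FROM python:3-slim
WORKDIR /app
COPY app.py .
# All state lives outside the container; the database location is
# injected at run time, so the container itself holds nothing to migrate
ENV DATABASE_URL=""
CMD ["python", "app.py"]
```

Because the container can be destroyed and rebuilt from this recipe at any time, there is no reason to live migrate it - you just start a fresh copy elsewhere.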
-
@dafyre said:
One of the benefits that virtualization has that containers don't really have yet is the ability to live migrate an existing VM from one host to another. Containers don't really have a way to do that, as far as I am aware.
All of the robust tooling that exists around VMs is lacking for containers. And even the more mature container technologies have been replaced, so all of the major ones are brand new.
-
@scottalanmiller said:
@dafyre said:
I realize that with containers, spinning up a new machine is easy and fast, but you lose the data that was in the original container, if I understand the way they work correctly.
The idea is that containers should be stateless. Nothing makes this true at the technology level, of course, but the idea is that things like databases don't run in containers, only stateless application code. So there should be nothing to migrate over.
For us noobs, can you give an example or two of stateless things used in containers?
-
@Dashrender said:
@scottalanmiller said:
@dafyre said:
I realize that with containers, spinning up a new machine is easy and fast, but you lose the data that was in the original container, if I understand the way they work correctly.
The idea is that containers should be stateless. Nothing makes this true at the technology level, of course, but the idea is that things like databases don't run in containers, only stateless application code. So there should be nothing to migrate over.
For us noobs, can you give an example or two of stateless things used in containers?
Webservers or proxies/load balancers would be my first guess.
-
@Dashrender said:
For us noobs, can you give an example or two of stateless things used in containers?
Anything that doesn't contain data. So databases and file servers are the key examples that are NOT good for containers. Mostly, everything else is.
Any application or processing or networking system would be stateless.
-
@coliver said:
Webservers or proxies/load balancers would be my first guess.
Yes, application servers (web or otherwise) are the vast majority of these.
-
Some people put database clusters into containers with the understanding that they all have to stay in sync all the time and that at least three of them must never be down at once. I don't like that model, though.
-
@scottalanmiller said:
@coliver said:
Webservers or proxies/load balancers would be my first guess.
Yes, application servers (web or otherwise) are the vast majority of these.
Pretty much anything that has relatively static content, right? You wouldn't host say... a Wordpress install in a container, would you?
Edit: Even if you do keep the Database server somewhere else.
-
@dafyre said:
Pretty much anything that has relatively static content, right? You wouldn't host say... a Wordpress install in a container, would you?
Normally yes and normally, yes. LOL. You would expect Wordpress to update very infrequently (other than what is in the database), and you would rebuild the container if and when that happened. Or you would put the non-static content, which is generally a very tiny amount, onto a shared NFS share.
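One way that layout could be sketched (the server addresses, share paths, and volume name here are hypothetical): the Wordpress container stays disposable, the database lives elsewhere, and only the small amount of non-static content sits on NFS:

```yaml
# docker-compose.yml - sketch only; hosts, paths, and names are assumptions
services:
  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: db.example.com   # database runs outside the container
    volumes:
      - uploads:/var/www/html/wp-content/uploads

volumes:
  uploads:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs.example.com,rw"
      device: ":/exports/wp-uploads"
```

With this shape, killing and recreating the Wordpress container loses nothing: the database and the uploads both survive outside it.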
-
So the application - the web daemon - can be in a container, and it just pulls data from sources behind it. OK.
This is for load balancing?
-
@Dashrender said:
So the application - the web daemon - can be in a container, and it just pulls data from sources behind it. OK.
This is for load balancing?
If it is a load balancer like HAProxy that we are discussing, yes.
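As a sketch of why a load balancer fits the stateless model so well (the backend addresses below are illustrative assumptions), an HAProxy container carries nothing but a small config file - there is no data in it worth migrating:

```
# haproxy.cfg - minimal sketch; backend addresses are assumptions
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

If the container dies, a new one started from the same config is immediately equivalent to the old one.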