How many Linux servers do I really need?
-
@Dashrender said:
Aren't we already doing that with XenServer - is a VM inside XenServer a container? lol - I'm confusing myself.
XenServer is a hypervisor. Windows is virtualized there, not containerized. Don't start applying the word container to things that are not container platforms. VMs run on hypervisors, containers do not.
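If it helps make the distinction concrete, here is a rough Python sketch of the usual heuristic: a container shares the host's kernel, so its init process shows up under a container runtime in the cgroup tree, while a Windows VM on XenServer just sees what looks like real hardware. The marker strings below are only a heuristic, not a definitive test (systemd-detect-virt does this job properly).

```python
#!/usr/bin/env python3
"""Rough sketch of the VM vs. container distinction.

A container shares the host kernel and just lives in its own namespaces
and cgroups; a VM on a hypervisor like XenServer gets its own kernel on
virtualized hardware. Checking PID 1's cgroup file is a common heuristic,
not a definitive test.
"""

from pathlib import Path

# Marker strings are a heuristic only; adjust for your container runtime.
CONTAINER_MARKERS = ("docker", "lxc", "machine.slice")


def looks_like_container() -> bool:
    """Return True if PID 1's cgroup paths mention a container runtime."""
    cgroup = Path("/proc/1/cgroup")
    if not cgroup.exists():
        return False
    return any(marker in cgroup.read_text() for marker in CONTAINER_MARKERS)


if __name__ == "__main__":
    if looks_like_container():
        print("Shares the host kernel: this is a container.")
    else:
        print("No container markers found: bare metal or a VM on a hypervisor.")
```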
-
@Dashrender said:
Personally I felt like I missed the beginning of virtualization because to me it felt like it was for enterprise only - of course now it's being touted as the absolute starting point for any project unless you can show specific reasons why it doesn't/can't/won't work for your project (unlike SAN, which should still primarily live in the enterprise)
Virtualization has been for the SMB since the day it was released. It's never been about size or scale. Containers too. SMBs that run Linux have used containers for a decade; it's standard, old hat, so old that no one talks about it.
What is interesting today is that three new container players - Docker, Rocket, and LXC - have emerged with a lot of great technology behind them, big communities, and are finally being used on a large scale. DevOps has made containers important in a way that they have not been before. In the same way that cloud and DevOps have made VMs not just important but necessary, containers take this to another level by making things lighter still.
-
@Dashrender said:
maintaining all of these micro VMs seems like such a pain in the ass.
You'll confuse yourself less if you always call them containers and always call the others VMs. Don't mix the terms, it will just be confusing. Only two resulting object terms, VMs and containers.
A container takes no more effort to maintain than a VM; to a systems admin they are identical, just as a VM takes no more effort to maintain than a physical box (less, actually). There is nothing that creates "more" work.
-
@scottalanmiller said:
A container takes no more effort to maintain than a VM; to a systems admin they are identical, just as a VM takes no more effort to maintain than a physical box (less, actually). There is nothing that creates "more" work.
Right, that I understand, but putting each and every service, when possible, in its own VM or container is what I meant by the micro VMs - instead of maintaining one system that has AD/File/Print/small DB, now you're maintaining 4 boxes. Granted, with tools, managing them is easier today, but it's not the same as managing one. That's all I was getting at.
-
@Dashrender said:
@scottalanmiller said:
A container takes no more effort to maintain than a VM; to a systems admin they are identical, just as a VM takes no more effort to maintain than a physical box (less, actually). There is nothing that creates "more" work.
Right, that I understand, but putting each and every service, when possible, in its own VM or container is what I meant by the micro VMs - instead of maintaining one system that has AD/File/Print/small DB, now you're maintaining 4 boxes. Granted, with tools, managing them is easier today, but it's not the same as managing one. That's all I was getting at.
It's not the same as managing a single one, but it should be just as easy.
-
@Dashrender said:
@scottalanmiller said:
A container takes no more effort to maintain than a VM; to a systems admin they are identical, just as a VM takes no more effort to maintain than a physical box (less, actually). There is nothing that creates "more" work.
Right, that I understand, but putting each and every service, when possible, in its own VM or container is what I meant by the micro VMs - instead of maintaining one system that has AD/File/Print/small DB, now you're maintaining 4 boxes. Granted, with tools, managing them is easier today, but it's not the same as managing one. That's all I was getting at.
Ah, I see. I would argue that it is easier to manage, not harder, especially with Linux. The management of the OS itself is so trivial and so repeatable that there is nearly zero overhead from that - remember, this isn't Windows. You can easily manage ten Linux boxes for every one Windows box before even talking DevOps (these are real numbers from enterprise environments), so keep that in mind. Then consider how much easier it is to manage applications when you have no fear of interaction issues and can isolate the OS/application for troubleshooting, repair, updates, etc.
For example, you need to do a reboot on the database server but the email server can't go down at the same time - no problem, you can reboot by application.
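A tiny script makes that per-application handling routine. This is just a sketch, assuming LXD's lxc client and Debian/Ubuntu containers, with made-up container names (db01, mail01, files01) standing in for whatever you actually run:

```python
#!/usr/bin/env python3
"""Sketch: patch and reboot one application's container without touching the rest.

Assumes the LXD client ('lxc') is installed and that the containers run a
Debian/Ubuntu userland. The container names below are placeholders.
"""

import subprocess
import sys

# One container per application - hypothetical names for illustration.
CONTAINERS = {
    "db": "db01",
    "mail": "mail01",
    "files": "files01",
}


def run(*cmd: str) -> None:
    """Run a command and fail loudly if it does not succeed."""
    subprocess.run(cmd, check=True)


def update_and_restart(app: str) -> None:
    """Update packages inside one container, then restart only that container."""
    name = CONTAINERS[app]
    run("lxc", "exec", name, "--", "apt-get", "update")
    run("lxc", "exec", name, "--", "apt-get", "-y", "upgrade")
    run("lxc", "restart", name)  # the mail server never notices


if __name__ == "__main__":
    update_and_restart(sys.argv[1] if len(sys.argv) > 1 else "db")
```

Run it as ./reboot_app.py db and only the database container cycles; mail and file serving never notice.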
-
I really think LXD will be a nice addition. The one bad thing about LXC is that it's not standard across distros. Each one seems to name its bridge differently, and if you try to create a container from an Ubuntu release that uses systemd on a host without systemd, it causes some issues.
The live migration in LXD will be a killer feature.
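For what it's worth, the bridge naming mess is easy to see from a script. Here's a quick sketch; the interface names are just the common defaults I've seen (lxcbr0, lxdbr0, virbr0, br0), not a guarantee, so adjust for your own distro:

```python
#!/usr/bin/env python3
"""Sketch: find which container bridge a host actually has.

Different distros ship different default bridge names (lxcbr0 with
Ubuntu's LXC packages, lxdbr0 with LXD, virbr0 where libvirt provides
the networking, plain br0 on hand-rolled setups). These are common
defaults, not a guarantee - adjust the list for your environment.
"""

from pathlib import Path
from typing import Optional

CANDIDATE_BRIDGES = ("lxcbr0", "lxdbr0", "virbr0", "br0")


def find_bridge() -> Optional[str]:
    """Return the first candidate bridge interface present on this host."""
    for name in CANDIDATE_BRIDGES:
        if Path("/sys/class/net", name).exists():
            return name
    return None


if __name__ == "__main__":
    bridge = find_bridge()
    if bridge:
        print(f"Found container bridge: {bridge}")
    else:
        print("No known container bridge found - check your distro's LXC config.")
```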
-
@johnhooks said:
I really think LXD will be a nice addition. The one bad thing about LXC is that it's not standard across distros. Each one seems to name its bridge differently, and if you try to create a container from an Ubuntu release that uses systemd on a host without systemd, it causes some issues.
The live migration in LXD will be a killer feature.
Same issue you will always have with containers.
-
@scottalanmiller said:
@johnhooks said:
I really think LXD will be a nice addition. The one bad thing about LXC is that it's not standard across distros. Each one seems to name its bridge differently, and if you try to create a container from an Ubuntu release that uses systemd on a host without systemd, it causes some issues.
The live migration in LXD will be a killer feature.
Same issue you will always have with containers.
Just throwing it out there as a reference
-
@anonymous
How many Linux servers do you need?
All of them. You need them all.
-
@RamblingBiped said:
@anonymous
How many Linux servers do you need?
All of them. You need them all.
/thread
-
Yup, that says it all.