Docker or Small VMs
-
@DustinB3403 said:
So what would be an example that anyone can come up with (Docker folks) where you might need a bunch of duplicate programs running?
Well, the design mostly came about because of web applications. So let me present a generic example that is mirrored over and over again in the real world. Let's say... a custom web application (store, blog, whatever).
You have at least three tiers: a load balancing tier, an application tier, and a database tier.
The first tier, let's say, runs HAProxy. You'll have at least three of these VMs or containers.
The second tier, let's say, runs a PHP application on Apache or Nginx. Same again: several identical instances.
The third tier, let's say, is a Redis database. You'll need at least three of these.
Then, on a fourth tier, you'll want at least three Redis Sentinels to handle monitoring.
Each layer gets several identical VMs or containers as a starting point and potentially dozens or even hundreds as the site gets busy.
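(To make the duplicate-instance idea concrete, here's a minimal hedged sketch with Docker; the image names, network name, and haproxy.cfg are illustrative assumptions, not a production layout.)

```sh
# Shared network so the balancer can reach the app containers.
docker network create webtier

# Three identical application containers (the second tier).
for i in 1 2 3; do
    docker run -d --name app$i --network webtier nginx
done

# One of the (at least three) load balancers; haproxy.cfg is assumed
# to exist and to point its backend at app1, app2, and app3.
docker run -d --name lb1 --network webtier -p 80:80 \
    -v "$PWD/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" haproxy
```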
-
@dafyre said:
@scottalanmiller said:
@Dashrender said:
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 MB for the OS plus whatever your app needs. The Linux portion of those VMs should be pretty lean.
We've actually got systems that we tuned down from 512MB to more like 380MB, as anything more was just wasted. We actually have one production server that is 256MB with no issues. And a lot of users on it, too.
This is what some of us have to wrap our heads around. Yes, I know Linux runs great in smaller amounts of RAM... but I was always of the mindset that More Is Better (tm). Especially if I am wanting to run hefty apps like Plex, or heavy hitters like Zabbix or ownCloud...
The "amount needed" is always the best amount. Too little is bad, but so is too much. I've had financial trading applications noticeably slowed down due to having too much unused memory on the system.
-
@johnhooks said:
You could use traditional containers (LXC, jails, zones) to do this. Each LXC container has a console and can be run like a VM.
So much so that we still call them VMs.
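(For reference, a minimal sketch of that workflow with classic LXC; the container name and image choice are arbitrary illustrations.)

```sh
# Create a container from the public image server, start it, and
# attach to its console just as you would a VM's.
lxc-create -n web1 -t download -- -d ubuntu -r jammy -a amd64
lxc-start -n web1
lxc-console -n web1    # Ctrl+a then q detaches
```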
-
@scottalanmiller said:
@dafyre said:
@scottalanmiller said:
@Dashrender said:
Going back to the resource question - I know Scott and others have talked about how lean Linux is. You probably only need 512 MB for the OS plus whatever your app needs. The Linux portion of those VMs should be pretty lean.
We've actually got systems that we tuned down from 512MB to more like 380MB, as anything more was just wasted. We actually have one production server that is 256MB with no issues. And a lot of users on it, too.
This is what some of us have to wrap our heads around. Yes, I know Linux runs great in smaller amounts of RAM... but I was always of the mindset that More Is Better (tm). Especially if I am wanting to run hefty apps like Plex, or heavy hitters like Zabbix or ownCloud...
The "amount needed" is always the best amount. Too little is bad, but so is too much. I've had financial trading applications noticeably slowed down due to having too much unused memory on the system.
Very few applications care about too much. Really only when you are into real-time processing and such does that play into it.
-
@JaredBusch said:
Very few applications care about too much. Really only when you are into real-time processing and such does that play into it.
The latency is still there, just not noticeable. I didn't mean to imply that you'd notice or that the world would end, only that once you reach the "right" amount you are no longer moving forward in performance; you stop, or actually start creeping backwards. Having too little is a BIG deal, so err on the side of too much, of course. But don't err on the side of double; that's just wasteful at best.
-
@scottalanmiller said:
Having too little is a BIG deal, so err on the side of too much, of course. But don't err on the side of double; that's just wasteful at best.
This is why I like Dynamic Memory (in Hyper-V... not sure what VMware calls this)... Tell the system it can boot with 256 MB of RAM and use up to 1 GB... if it never needs more than 256, ideally, it won't ask for it.
-
@dafyre said:
@scottalanmiller said:
Having too little is a BIG deal, so err on the side of too much, of course. But don't err on the side of double; that's just wasteful at best.
This is why I like Dynamic Memory (in Hyper-V... not sure what VMware calls this)... Tell the system it can boot with 256 MB of RAM and use up to 1 GB... if it never needs more than 256, ideally, it won't ask for it.
You can do the same with KVM.
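(For the curious, a rough sketch of what that looks like with KVM's virtio memory balloon through virsh; the domain name vm1 is a placeholder.)

```sh
# "vm1" is a hypothetical domain; this assumes the virtio memballoon
# device, which libvirt adds to most guests by default.
virsh setmaxmem vm1 1G --config    # the ceiling; applies on next boot
virsh setmem vm1 256M --live       # current balloon target
virsh dommemstat vm1               # actual vs. ballooned usage
```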
-
@johnhooks said:
@dafyre said:
@scottalanmiller said:
Having too little is a BIG deal, so err on the side of too much, of course. But don't err on the side of double; that's just wasteful at best.
This is why I like Dynamic Memory (in Hyper-V... not sure what VMware calls this)... Tell the system it can boot with 256 MB of RAM and use up to 1 GB... if it never needs more than 256, ideally, it won't ask for it.
You can do the same with KVM.
I knew it was available on other platforms; however, my experience (at the moment) is limited to only two of them.
Edit: This is good to know about KVM. I'll soon have my desktop freed up at home.
-
@scottalanmiller said:
Are you making Ansible or Chef recipes to handle all of this? Are you moving to DevOps? Unless those things are true, no, Docker won't make any sense for you. Containers do not really lighten the load on your hypervisor; that's not the reason for using them.
No, so I think small VMs are for me lol
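(For context on the Ansible point above, the kind of recipe-driven management being described looks roughly like this; the webtier group and inventory file are made-up assumptions.)

```sh
# Ad-hoc Ansible runs against a group of identical VMs or containers.
ansible webtier -i inventory.ini -m ping
ansible webtier -i inventory.ini -m ansible.builtin.apt \
    -a "name=nginx state=present" --become
```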
-
Definitely, just make them lean and tune as necessary. VMs will continue to be the staple of the SMB for a very long time. That will not remain true forever, but for a very long time.
-
@scottalanmiller Any tips on how to tune a Linux machine?
I'll be running
Unifi Controller V4
Zabbix (Latest Version)
Snipe-IT - once they develop the fixed asset number thingy
-
@hobbit666 said:
@scottalanmiller Any tips on how to tune a Linux machine?
I'll be running
Unifi Controller V4
Zabbix (Latest Version)
Snipe-IT - once they develop the fixed asset number thingy
Easy way is to start with the recommended minimums and monitor over time. Watch the systems to see what the memory is doing and tune up or down as needed. We have a good idea about certain workloads that we deploy regularly, so we can set good starting points very easily. But for new workloads, you can put in a reasonable guess and then tune.
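(A rough sketch of that guess-then-tune loop; the interval is arbitrary, and "available" is the headroom column to watch.)

```sh
# Snapshot of guest memory in MB; "available" is roughly what the
# workload never asked for.
free -m
# Watch it refresh every 60 seconds while the workload runs.
watch -n 60 free -m
# If "available" stays high for days, tune the VM's allocation down
# in the hypervisor; if it hovers near zero, tune it up.
```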
-
@scottalanmiller said:
Easy way is to start with the recommended minimums and monitor over time. Watch the systems to see what the memory is doing and tune up or down as needed. We have a good idea about certain workloads that we deploy regularly, so we can set good starting points very easily. But for new workloads, you can put in a reasonable guess and then tune.
What's the best method to "monitor" the resources in Linux?
-
@hobbit666 said:
@scottalanmiller said:
Easy way is to start with the recommended minimums and monitor over time. Watch the systems to see what the memory is doing and tune up or down as needed. We have a good idea about certain workloads that we deploy regularly, so we can set good starting points very easily. But for new workloads, you can put in a reasonable guess and then tune.
What's the best method to "monitor" the resources in Linux?
What hypervisor are you using? That will generally tell you when you are maxing out on memory. If not, check out `top` to see what resources your app is using.
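(A quick way to do that, assuming the procps-ng top found on most modern distros:)

```sh
# Sort processes by resident memory share; pressing M interactively
# gives the same ordering.
top -o %MEM
```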
-
@hobbit666 said:
@scottalanmiller said:
Easy way is to start with the recommended minimums and monitor over time. Watch the systems to see what the memory is doing and tune up or down as needed. We have a good idea about certain workloads that we deploy regularly, so we can set good starting points very easily. But for new workloads, you can put in a reasonable guess and then tune.
What's the best method to "monitor" the resources in Linux?
Exactly what @coliver said. htop and Glances are two other popular ones.
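(Neither usually ships by default; on a Debian/Ubuntu-family guest the install is just:)

```sh
sudo apt install htop glances
```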
-
I forgot about Glances. I need to set that one back up again.
-
@johnhooks said:
@hobbit666 said:
@scottalanmiller said:
Easy way is to start with the recommended minimums and monitor over time. Watch the systems to see what the memory is doing and tune up or down as needed. We have a good idea about certain workloads that we deploy regularly, so we can set good starting points very easily. But for new workloads, you can put in a reasonable guess and then tune.
What's the best method to "monitor" the resources in Linux?
Exactly what @coliver said. htop and Glances are two other popular ones.
Yep, either of those works too. top is generally installed by default on most *nix systems, though, so you wouldn't have to install anything new.
-
They will be running on ESXi Free 5.5 Update 3
-
@hobbit666 said:
They will be running on ESXi Free 5.5 Update 3
Really? I'm not sure you can get performance information from that host then. I've never seen anyone use the ESXi Free version, so I don't know how it interacts with that kind of data.
-
@hobbit666 said:
@scottalanmiller said:
Easy way is to start with the recommended minimums and monitor over time. Watch the systems to see what the memory is doing and tune up or down as needed. We have a good idea about certain workloads that we deploy regularly, so we can set good starting points very easily. But for new workloads, you can put in a reasonable guess and then tune.
What's the best method to "monitor" the resources in Linux?
For a glance, the `free` command tells you all that you need to know. To see it over time, `sar` does. To watch it for a while, `top` or `vmstat`.
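(For reference, a sketch of those in practice; sar needs the sysstat package with its collector enabled, and the intervals are arbitrary.)

```sh
free -m      # point-in-time snapshot, in MB
sar -r       # today's memory history (requires sysstat)
vmstat 5     # a new sample line every 5 seconds
top          # live, per-process view
```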