top -- What is it telling us?
-
Load is unrelated to CPU utilization. Load has to do with the run queue depth. A run queue can be very deep while the processor sits idle, and the CPU can be extremely busy with no run queue at all. Two very different aspects of your processing.
-
CPU % tells you how much work your CPU is doing. It takes a window of time, generally about one second, counts how many cycles within that second had productive work to do and how many were spent just awaiting something to do, and gives you a percentage. Quite simple.
A run queue is how many threads are waiting their turn to get into the CPU for processing. And, like anything, this changes every nanosecond, so the number is an average over a period of time, like five minutes.
So one is about how much work the CPU has to do. The other is about how much software is trying to get the CPU's attention.
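If you want to peek at the raw numbers that top summarizes, both come straight from the kernel. A quick sketch, assuming a Linux system with /proc mounted:

```shell
# The load averages top displays are the first three fields of
# /proc/loadavg (1-, 5-, and 15-minute averages); the fourth field is
# currently-runnable entities over total entities.
cat /proc/loadavg

# uptime prints the same three load averages in friendlier form.
uptime
```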
-
The rule of thumb is that a load FACTOR below one is no problem. Load factor is the load average divided by the number of thread engines that you have. If you have four thread engines on your machine, then your load is always fine below four. Above four might be fine too, as long as the CPU is not overly taxed.
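A quick sketch of computing that load factor on a Linux box (assumes nproc and /proc/loadavg are available):

```shell
# Load factor = 1-minute load average / number of logical CPUs.
cores=$(nproc)
load1=$(cut -d' ' -f1 /proc/loadavg)
factor=$(awk -v l="$load1" -v c="$cores" 'BEGIN { printf "%.2f", l / c }')
echo "load factor: $factor"
# Below 1.00: on average, no threads are waiting. Above 1.00: look at
# CPU % to decide whether the CPU itself is the bottleneck.
```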
-
How do you get a high CPU % with no run queue? Easy: run a single process that counts from one to infinity... it will go as fast as the CPU can process it, forever, but will never need another thread loaded into the CPU behind it. So that one thread can keep the CPU infinitely busy.
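A minimal sketch of that one-thread counter in shell. It pegs one CPU while leaving the run queue shallow, because nothing else is waiting behind it (kill it when done):

```shell
# One CPU-bound process: counts forever, never waits on anything, so it
# keeps a CPU busy without creating a queue of waiting threads.
( i=0; while :; do i=$((i + 1)); done ) &
busy=$!

sleep 2                     # let it run for a moment
ps -o %cpu= -p "$busy"      # should be near 100 on an otherwise idle box
kill "$busy"
```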
-
How do you get a deep run queue while the CPU is idle? You might have threads that are awaiting some other resource and cannot be loaded into the CPU yet, but have been placed in the queue. The CPU is available, but has no means of processing them yet, so they wait in the queue. So a deep queue can be okay, if the CPU is also relatively idle. This tells you that the queue depth is not caused by an overloaded CPU, but by something else.
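One way to see that split, sketched with common Linux tools: vmstat's "r" column is runnable threads (the run queue), while "b" is threads blocked on some other resource, such as disk I/O. On Linux, tasks stuck in uninterruptible (D-state) sleep are also counted into the load average, which is exactly how load can climb while the CPU sits idle:

```shell
# r = runnable threads (run queue), b = threads blocked on a resource.
vmstat 1 3

# List processes currently in uninterruptible sleep (state starts with D).
ps -eo pid,stat,comm | awk '$2 ~ /^D/'
```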
-
Even a taxed CPU with a high CPU % and a deep queue doesn't prove that the CPU is overloaded; it might just be "efficiently utilized." If your queue is extra deep and the CPU is taxed, it is likely that the queue is deep because the CPU is taxed. Knowing your system baselines helps you understand what is going on when this happens.
-
If you have a CPU that is at a high percentage, say 98%, and your queue depth is small, you are not overloaded, you are simply running applications at the "speed of the machine." Think of a taxi that goes at full speed between airport and hotel, every load has people in it, but it never has to leave anyone at the taxi stand for the next run. That's a high CPU %, each run has someone, no one left behind to wait.
If you have a CPU that is overloaded, this means that there are threads that are needed but can't get into the CPU because it is busy. This is like that taxi going at the same speed, but there are too many people and some of them have to be left behind because they don't fit into the taxi on the first run. If the people keep coming at the same pace, the taxi will just get more and more backed up. That's overloaded.
-
There are two ways to deal with overload (other than reducing how much work the CPU has to do). One is to get a "faster" CPU. This is the same as raising the speed limit for our taxi. The taxi hauls the same car load each time, but at 75mph instead of at 65mph. Over the course of the day, it can pick up about 15% more people, because the extra speed makes each round trip that much faster.
Or you can increase the size of the taxi, maybe replacing that Honda Accord with a Dodge Caravan. Now each trip is still at 65mph, but you can haul eight people at a time instead of just four. Twice the people, same speed. This is like going from four cores to eight cores.
And, of course, we can do both at the same time.
Increasing the speed helps every passenger, every time, by reducing the time spent in the taxi. This lowers latency, and even if you only get a single passenger every tenth trip, that one passenger benefits. But speed-ups are often 5-10% tops, nothing huge.
Increasing the size of the load, meaning adding cores, only helps when you have more passengers than you could fit in a single load previously, but those increases often jump by 25-100%.
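The arithmetic behind those two options, using the analogy's hypothetical numbers:

```shell
# Faster CPU: 65 -> 75 mph helps every single trip, roughly a 15% gain.
awk 'BEGIN { printf "speed gain: %.0f%%\n", (75 / 65 - 1) * 100 }'

# More cores: 4 -> 8 seats doubles capacity, but only pays off when
# there are enough passengers to fill the extra seats.
awk 'BEGIN { printf "capacity gain: %.0f%%\n", (8 / 4 - 1) * 100 }'
```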
-
@EddieJennings if you thought that a load of .55 meant 55%, what did you think that 8.0 meant?
-
Great, useful information :D. Could this be a correct analogy for the deep run queue but low CPU percentage?
The taxi isn't moving (0% CPU) because there's some barrier preventing people from loading into the taxi (and this queue of people gets longer and longer).
-
@eddiejennings said in top -- What is it telling us?:
Great, useful information :D. Could this be a correct analogy for the deep run queue but low CPU percentage?
The taxi isn't moving (0% CPU) because there's some barrier preventing people from loading into the taxi (and this queue of people gets longer and longer).
The taxi goes every cycle without fail whether it has a load or not. The taxi never stops. Low CPU % means that there were no passengers to pick up so the taxi was running empty.
-
@scottalanmiller said in top -- What is it telling us?:
@EddieJennings if you thought that a load of .55 meant 55%, what did you think that 8.0 meant?
By that logic 800%, which seems impossible; thus, my misunderstanding.
-
@scottalanmiller said in top -- What is it telling us?:
@eddiejennings said in top -- What is it telling us?:
Great, useful information :D. Could this be a correct analogy for the deep run queue but low CPU percentage?
The taxi isn't moving (0% CPU) because there's some barrier preventing people from loading into the taxi (and this queue of people gets longer and longer).
The taxi goes every cycle without fail whether it has a load or not. The taxi never stops. Low CPU % means that there were no passengers to pick up so the taxi was running empty.
Ah, I see.
-
@eddiejennings said in top -- What is it telling us?:
@scottalanmiller said in top -- What is it telling us?:
@EddieJennings if you thought that a load of .55 meant 55%, what did you think that 8.0 meant?
By that logic 800%, which seems impossible; thus, my misunderstanding.
That's why I felt it was odd that you panicked: by that logic it couldn't have been a percentage at all, so there was no reason to assume that 8 was a bad number.
-
Even if your machine has a high CPU % and a high load, you still have to test running applications and ask "is it fast enough?" If you have a perfectly planned system, you might easily have a busy CPU and lots of load and no issues at all. Generally you want your CPU % to be high; otherwise it usually means that you didn't size your system correctly and bought something more expensive than you really needed.
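The "is it fast enough" test is just timing the real work rather than reading the gauge. A hypothetical sketch, with a simple counting loop standing in for your actual application:

```shell
# Time the workload you actually care about; swap in your own command.
time sh -c 'i=0; while [ "$i" -lt 100000 ]; do i=$((i + 1)); done'
# If the wall-clock time is acceptable, a 98% CPU reading is a feature,
# not a problem.
```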
-
So I've got a VM here, with 1 vCPU and 2048 MB of ram.
Here is top of that system.
top - 15:22:30 up 2:52, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 102 total, 1 running, 101 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.3 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1883884 total, 365780 free, 445932 used, 1072172 buff/cache
KiB Swap: 839676 total, 839676 free, 0 used. 1188012 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1336 mysql 20 0 1102624 113756 9976 S 0.3 6.0 0:05.04 mysqld
1 root 20 0 128164 6820 4060 S 0.0 0.4 0:02.47 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.12 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
I truncated it for brevity. Based on that, the CPU is just too fast for the workload. I can't possibly give a VM half a core. . .
-
@dustinb3403 said in top -- What is it telling us?:
So I've got a VM here, with 1 vCPU and 2048 MB of ram.
Here is top of that system.
top - 15:22:30 up 2:52, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 102 total, 1 running, 101 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.3 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1883884 total, 365780 free, 445932 used, 1072172 buff/cache
KiB Swap: 839676 total, 839676 free, 0 used. 1188012 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1336 mysql 20 0 1102624 113756 9976 S 0.3 6.0 0:05.04 mysqld
1 root 20 0 128164 6820 4060 S 0.0 0.4 0:02.47 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.12 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
I truncated it for brevity. Based on that, the CPU is just too fast for the workload. I can't possibly give a VM half a core. . .
It's wasted. If you had more control of the system, you could assign it less CPU. Hypervisors don't actually assign cores; that's a myth.
-
Hypervisors present "visible thread processors" which may or may not correlate to actual cores, or thread engines, under the hood. A key purpose of a hypervisor is to allow a workload to receive less than a single core, or thread engine, of capacity. The standard use case is for a VM to get far less than full cores or thread engines.
What the VM sees and what it is given are very different things. The hypervisor might only give 1/100th of a core, but tell the VM it has two.
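You can see the same "fraction of a core" idea without a hypervisor by using Linux cgroups v2. A sketch, assuming root privileges and a cgroup2 filesystem mounted at /sys/fs/cgroup (the group name "halfcore" is made up for illustration):

```shell
# cpu.max takes "quota period" in microseconds: this group may run for
# 50000us out of every 100000us, i.e. half of one CPU.
mkdir /sys/fs/cgroup/halfcore
echo "50000 100000" > /sys/fs/cgroup/halfcore/cpu.max
echo $$ > /sys/fs/cgroup/halfcore/cgroup.procs   # confine this shell
```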
-
Hrm.. but in any case what is provided here is wasted resources.
-
@dustinb3403 said in top -- What is it telling us?:
Hrm.. but in any case what is provided here is wasted resources.
Correct, assuming that throughput, and not latency, is what matters to you. Even in the taxi scenario, you might want to run a taxi empty 99% of the time because you only pick up VIPs and you care more about making sure that VIPs never wait, even for a single trip, than about keeping your taxi running efficiently. In which case you are forced to run your CPU taxi empty nearly always to guarantee that it always has enough capacity for any arriving VIPs.