The number given as % CPU in Activity Monitor

147 points, posted a year ago by Brajeshwar

19 Comments

kiitos

a year ago

> What Activity Monitor actually shows as % CPU or “percentage of CPU capability that’s being used” is what’s better known as active residency of each core, that’s the percentage of processor cycles that aren’t idle, but actively processing threads owned by a given process. But it doesn’t take into account the frequency or clock speed of the core at that time, nor the difference in core throughput between P and E cores.

Does "%CPU" need to take into account these things?

kevincox

a year ago

I think it would be useful in many scenarios. For example, if I am running at near cores*100% I may think my system is fully loaded (or overloaded). But if those cores are running at low frequencies because there isn't actually much load, I would want to know that. Because in this case, if I spawn twice as many tasks and get twice as much throughput while the CPU % doesn't change, that seems confusing.

I think if I had to pick a single number for reporting CPU usage it would be percent of available throughput. Although this has complications (the max frequency will depend on external conditions like temperature). But 0% would mean "no runnable tasks" and 100% would mean that the CPUs are running at the maximum currently available speed. The values in-between would be some sort of approximation but I think that is fine.
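The difference between the two metrics being discussed can be sketched in a few lines. This is a hypothetical illustration with made-up numbers, not how Activity Monitor actually computes anything:

```python
# Sketch: naive active residency vs. a throughput-normalized CPU metric.
# All numbers below are hypothetical illustrations, not real measurements.

def naive_cpu_percent(busy_cycles, total_cycles):
    """Classic %CPU: fraction of time a core spent non-idle."""
    return 100.0 * busy_cycles / total_cycles

def throughput_percent(busy_cycles, total_cycles, current_hz, max_hz):
    """Scale active residency by how fast the core was actually running."""
    return naive_cpu_percent(busy_cycles, total_cycles) * current_hz / max_hz

# A core that is 100% busy but clocked at 1 GHz out of a possible 3.2 GHz:
print(naive_cpu_percent(1000, 1000))                 # 100.0
print(throughput_percent(1000, 1000, 1.0e9, 3.2e9))  # 31.25
```

The gap between the two numbers is exactly the headroom the naive metric hides: the "100% busy" core could still deliver roughly three times more throughput by clocking up.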

rcxdude

a year ago

From an intuitive point of view, if you want to use %CPU as "how much of the total available processing power is this process using", it's potentially valuable. With the status quo, a process might appear to be a few multiples more CPU intensive than it really is, if the system happens to be relatively idle.

That said, it's not particularly easy to apply these corrections, especially because the available maximum clock speed depends on variables like the ambient temperature, how busy all the cores are, and how long the CPU has been boosting for. So if you were to apply these corrections, either you report that a fully loaded system is using less than 100% of possible available CPU power in a lot of cases, or your correction factors vary over time and are difficult to calculate.

dspillett

a year ago

Need? No, but it could be useful. Not a requirement, but a very “nice to have” property. It would reduce certain confusions in some end users, as well as being handy for us techie types.

Often you don't care whether the current batch of processes is using 100% of what the CPU cores they are assigned to can do at current clock rates; what you want to know is how much is left available, so you can add more work without slowing any existing tasks down much.

It used to be, back when CPUs didn't have low & high power cores and always ran what they had at the same speed, that %CPU shown in various OS displays was a reasonably accurate measure of the impact of a process that could easily be used to judge optimisation success (getting the same done in less hardware effort) and scaling success (getting the same done in less wall-clock time by giving more hardware to the problem or improving parallelism to make better use of what you already have).

These days it is more complicated than most assume at face value, and you have to be a lot more careful when assessing such things to avoid incorrect assumptions leading to wrong decisions. It would be nice to get back to the previous state of affairs, in terms of a given % value meaning something more fixed. Of course that is not as practical to achieve as naively stating the problem might suggest: for a start you can't really state what 100% is because in many cases the maximum clock might only be achievable for very short periods before thermal throttling kicks in. Maybe if there is a “minimum maximum”, below which we know the throttle won't go, we could state that as 100% and display more when the heat limit is not taking effect, but I expect that really would confuse end users (I have memories of confused conversations when multi-core CPUs became common, when people saw displays of processes using ~200%, with that meaning ~100% of ~2 cores).

d1sxeyes

a year ago

I guess not. I think the problem here is a bit more fundamental: people (read, at least 1 from a sample of 1 - me) think that the '% CPU' column in Activity Monitor shows how much of the total processing power the computer has is being used by the process, when actually it's a much more complicated story. I don't think it's a bad thing that people learn more about what the metric actually means.

I at least found the article interesting, and learned something useful from it.

sophacles

a year ago

Yes. If my 10 GHz CPU core says it's running at 100%, but is scaled down to 1 Hz, I'll be really confused about how much work it is doing, and look in all the wrong places to find out why my process is taking forever to run.

(extreme numbers to highlight the point)

smegsicle

a year ago

honestly if you want load avg why not just use load avg

krackers

a year ago

I'm confused, isn't this exactly the same as for intel? Intel processors can turbo-boost, and you can manually cap the frequency by setting the package power-limit register (or well, you used to before some firmware update). That obviously doesn't change the % CPU reported either.

BeeOnRope

a year ago

Yes, it's the same for Intel.

One quantitative difference is that tasks assigned to the E cores may run at a sustained frequency much lower than the maximum (3.7x here), while on Intel any sustained load generally results in frequency scaling up over a few hundred ms to a maximum value which is much closer to the absolute max.

MBCook

a year ago

Yeah. When the article compares what Apple is doing to Intel, I think they mean “classic Intel processors of years ago” that didn’t frequency scale. Like the numbered Pentiums, IIRC. That did stand out to me a little bit.

You’re right that everyone has been doing this for quite a while, it’s certainly not an Apple invention.

The Eclectic Light Company often covers Mac stuff and Activity Monitor is the tool you use to view this stuff in OS X, so it’s just the domain this article is focusing on.

blueflow

a year ago

Linux has introduced the Pressure Stall Information[1], which completely replaced the classic CPU/memory percentage metrics for me. With PSI you see up front where the bottleneck is. And it inherently accounts for factors like CPU frequency regulation or the disk cache.

The PSI data also has a value (total=) that is not averaged, so you can now see cpu jitter and spikes.

[1] https://docs.kernel.org/accounting/psi.html
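The PSI files linked above expose one line per type of pressure (e.g. `some avg10=0.00 avg60=0.00 avg300=0.00 total=12345` in `/proc/pressure/cpu`). A minimal parser, using an illustrative sample line since the real file only exists on a Linux system with PSI enabled:

```python
# Parse one line of Linux PSI output (e.g. from /proc/pressure/cpu).
# The sample line is illustrative; on a real system you would read the file:
#   with open("/proc/pressure/cpu") as f: lines = f.readlines()

def parse_psi_line(line):
    """Return the pressure kind ('some' or 'full') and its metrics as floats."""
    kind, *fields = line.split()
    metrics = {k: float(v) for k, v in (f.split("=") for f in fields)}
    return kind, metrics

sample = "some avg10=1.23 avg60=0.50 avg300=0.10 total=123456"
kind, metrics = parse_psi_line(sample)
print(kind, metrics["avg10"], metrics["total"])  # some 1.23 123456.0
```

The `avg10`/`avg60`/`avg300` fields are moving averages over those many seconds; `total` is the cumulative stall time in microseconds, which is the non-averaged value the comment above uses to spot jitter and spikes.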


eviks

a year ago

Is there an alternative app that shows the better numbers?

Eisenstein

a year ago

I read the article and I still don't know the answer.

dang

a year ago

[stub for offtopicness]

magicalhippo

a year ago

What Activity Monitor actually shows as % CPU or “percentage of CPU capability that’s being used” is what’s better known as active residency of each core, that’s the percentage of processor cycles that aren’t idle, but actively processing threads owned by a given process.

That's exactly what I thought it was. Where do I sign up for the refund?

I really hate this click-bait trend to assume what I do or do not know.

simscitizen

a year ago

Pretty sure it’s just scheduled CPU time / wall clock time. If you have multiple cores then scheduled CPU time can be greater than wall clock time.

Also, scheduled CPU time doesn’t take into account frequency scaling or core type, as explained in the article. It's just how much time the OS scheduler has allocated to the core to run tasks.
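The "scheduled CPU time / wall clock time" definition can be sketched as a delta over a sampling interval. The numbers below are hypothetical:

```python
# Sketch: %CPU as (delta of scheduled CPU time) / (delta of wall-clock time),
# sampled over an interval. All times in seconds; values are hypothetical.

def cpu_percent(cpu_time_start, cpu_time_end, wall_start, wall_end):
    """Percentage of one core's worth of time the process was scheduled."""
    return 100.0 * (cpu_time_end - cpu_time_start) / (wall_end - wall_start)

# A process that accumulated 1.5 s of CPU time over a 1 s interval
# (its threads ran on multiple cores at once) reports more than 100%:
print(cpu_percent(10.0, 11.5, 100.0, 101.0))  # 150.0
```

This is also why, as noted elsewhere in the thread, a process on a multi-core machine can show ~200%: the metric counts scheduled time summed across cores, not a share of total machine capacity.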
