Dealing with Hyperthreaded Cores in CPU Usage Stats

I use HWiNFO on my PC and iStat Menus on my Mac. Both machines have hyperthreaded CPUs, and this makes the total CPU percentage reading seem inaccurate.

For instance, with the HT cores enabled, it's pretty hard to get the total CPU reading above 50%, even when all of the physical cores are running at 100%. The reason is that, no matter how hard certain threads are working, the mostly idle HT siblings bring the average way, way down, unless something that's really well multi-threaded, like Handbrake, is running. Given that HT cores aren't *really* additional CPUs, counting them as full-fledged CPUs mangles the total usage figures.

I always thought that for CPU monitoring purposes, a better reading would map just the physical cores to 0-100%, while letting HT push the total above 100%, to a potential reading of 130% or whatever. (I read somewhere that full saturation of physical + HT cores, under very favorable conditions, can net up to a 30% throughput increase. It's rarely a 100% increase, though, which is what counting HT cores as full processors implies.)
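To make the compression concrete, here's a minimal sketch (hypothetical numbers) of why the default reading tops out near 50% when only the physical cores are busy: with four physical cores saturated and their four HT siblings idle, averaging over all eight logical CPUs halves the figure.

```python
# Hypothetical per-logical-CPU utilization (%): four physical cores
# at 100%, their four HT siblings idle.
logical = [100, 0, 100, 0, 100, 0, 100, 0]

# The default reading averages over all logical CPUs equally,
# treating each HT sibling as a full CPU.
naive = sum(logical) / len(logical)
print(naive)  # 50.0 -- even though every physical core is saturated
```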

Has anyone come up with a good metric + customized multiply/add that you feel gives you a better idea of CPU usage for HT-enabled CPUs?

I'm sure this would be a ton of work, but just thinking off the top of my head, how hard would custom metrics be to add someday? E.g. CustomMetric = (T0 core totals added together / physical CPUs) + ((T1 core totals added together / physical CPUs) * 0.3)
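A minimal sketch of that suggested metric, assuming per-logical-CPU utilization can be split into the "first" (T0) and "second" (T1) thread of each physical core. The 0.3 weight is the rough SMT uplift guessed at above, not a measured constant, and the function name is just illustrative.

```python
def custom_metric(t0_utils, t1_utils, smt_weight=0.3):
    """Average the T0 threads for the 0-100% range, then add the
    T1 threads scaled by the assumed SMT uplift (default 30%),
    so a fully saturated CPU reads up to 130%."""
    phys = len(t0_utils)
    t0_avg = sum(t0_utils) / phys
    t1_avg = sum(t1_utils) / phys
    return t0_avg + t1_avg * smt_weight

# Four physical cores saturated, HT siblings also saturated:
print(custom_metric([100] * 4, [100] * 4))  # 130.0
# Physical cores saturated, HT siblings idle:
print(custom_metric([100] * 4, [0] * 4))    # 100.0
```

The nice property is that anything above 100% unambiguously means SMT is actually being exploited, rather than the reading being diluted by idle siblings.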

Sorry, not trying to set a ridiculous bar; what we have already is awesome, just a thought every now and then ;-)
Somewhat related: is T0 or T1 on a core considered the "primary" thread, or do both have exactly the same priority? If they're equal, figuring out which thread is the "hyperthreaded" one would be somewhat moot.
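On the T0/T1 question: in hardware, SMT siblings are symmetric peers, so there's no real "primary", but a common bookkeeping convention is to treat the lowest-numbered logical CPU in each sibling group as T0. On Linux the grouping is exposed in `/sys/devices/system/cpu/cpuN/topology/thread_siblings_list`, whose contents look like `0,4` or `0-1`. A small sketch that parses that format (sample strings hardcoded so it runs anywhere):

```python
def parse_siblings(text):
    """Parse a thread_siblings_list string (e.g. '0,4' or '0-1')
    into a sorted list of logical CPU ids."""
    ids = []
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            ids.extend(range(int(lo), int(hi) + 1))
        else:
            ids.append(int(part))
    return sorted(ids)

# By convention, the lowest id in each group is treated as T0:
print(parse_siblings("0,4"))  # [0, 4] -> T0 is CPU 0, T1 is CPU 4
print(parse_siblings("0-1"))  # [0, 1] -> T0 is CPU 0, T1 is CPU 1
```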


Well-Known Member
Keep in mind, though, that the +30% for logical cores isn't true on all CPUs. On some it could be 20%, on others 35%. And to make it even harder for you, the same CPU doesn't always behave the same: on some workloads the benefit from SMT could be +10%, on others +40%, and on some it could be 0%.

In any case, this kind of measurement (the HWiNFO default) counts the utilization of the physical and logical cores. You are talking about performance. Totally different things.

SMT (or HT, as you call it from the old days) exists because in 99% of workloads, software threads are not capable of utilizing a core 100%. That's why, most of the time, there's room for another thread to run through the same physical core.
And your question is... "Then why does it say 100% if there's room for a second thread?"
That gets into deep CPU internals that I can't explain well.

Let's say that CPUs have multiple sets of internal execution units calculating different things, along with predictors, decoders and a lot more.
If one thread fully utilizes only one set of these units and partially utilizes a different one inside a core, then that core will show as 100% loaded, but there's still room for another thread to run inside it, take what's left over, and maybe steal a little from the first thread too.

I know it’s a stupid explanation but I hope it makes some sense...