FEATURE Effective clock vs instant (discrete) clock

Exactly! :) Moreover, to read the actual multiplier the core needs to wake up, which introduces the observer effect.
Hi, Martin!
May I ask: how does the "effective clock" look programmatically? I've just started digging into it for my project, and I use MSRs to get the values.
In my case, the results are slightly different from what HWiNFO shows as the effective clock, even though I tried to reproduce the logic as accurately as possible based on your explanation.
If it's not a secret (it's totally understandable if it is), could you PM me a snippet, pseudocode, or some sort of formula?
Thank you in advance!
 
It's based on the APERF counter (delta), but even if you implemented exactly the same method as HWiNFO, you'd get somewhat different results due to different sampling points/intervals.
In some cases (e.g. AMD Zen with Snapshot Polling enabled) a different method is used that cannot be disclosed.
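
For anyone who wants to experiment, here is a minimal sketch of the APERF-delta idea. The Linux msr device path, the MSR address, the fixed one-second interval and the single-core scope are my own assumptions for the example; this is not HWiNFO's actual (undisclosed) implementation, and results will differ with sampling points/intervals:

```c
/*
 * Minimal sketch of an "effective clock" estimate from the APERF MSR delta.
 * Assumptions: Linux, the msr kernel module loaded, root privileges, and
 * MSR_IA32_APERF at address 0xE8. Error handling is simplified.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define MSR_IA32_APERF 0xE8

static uint64_t read_msr(int fd, uint32_t reg)
{
    uint64_t value = 0;
    pread(fd, &value, sizeof(value), reg);   /* offset = MSR address for the msr driver */
    return value;
}

int main(void)
{
    int fd = open("/dev/cpu/0/msr", O_RDONLY);   /* core 0 only in this sketch */
    if (fd < 0) { perror("open msr"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    uint64_t aperf0 = read_msr(fd, MSR_IA32_APERF);

    sleep(1);                                    /* sampling interval */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    uint64_t aperf1 = read_msr(fd, MSR_IA32_APERF);
    close(fd);

    double dt = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* APERF ticks at the actual core frequency only while the core is in C0,
       so delta/interval yields the sleep-weighted ("effective") clock. */
    double effective_mhz = (double)(aperf1 - aperf0) / dt / 1e6;
    printf("Core 0 effective clock: %.1f MHz\n", effective_mhz);
    return 0;
}
```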
 
Hey Martin,

I have a Ryzen 2700X, and when I expand the Core Effective Clocks tab I notice that all 8 cores have a T0 and a T1 Effective Clock. After reading through this entire thread and creating an account, it dawned on me that I might be seeing the physical and SMT threads. Is this correct?

Thanks!
 
Yes, T0 is what you call the "physical" thread and T1 the respective SMT thread.
 
So my confusion is: when I want to see the maximum clock achieved by my CPU, I can check the perf readings without a doubt, right?
Yes, you can check the discrete clock for that, as long as you have "Snapshot CPU Polling" enabled in the main settings.
 
Actually there's no simple answer to that. The active "Core x Clock" values (which you call perf) are determined by reading the actual multiplier of each core. This means that each core actively reads its own multiplier, so if a particular core is sleeping and HWiNFO queries its multiplier, it has to run a (fairly small) task on it. That means it has to wake the core up, so the result you get here doesn't reflect the true state of the core as it would be if HWiNFO were not polling it. This is the observer effect. However, this effect can also vary, since modern CPUs have several other technologies affecting actual and maximum achieved core clocks, e.g. Turbo Boost. The boost clock also depends on how many cores are currently active, so if someone performs active polling of all cores, it can result in a reduced maximum clock (multi-core turbo).
So if you're running a certain constant workload where multi-core turbo is limiting the maximum clock, such active polling of all cores might be undesired and not reflect the maximum achievable clock. In such a case the Effective Clock might give better (more accurate) results.
"Snapshot CPU Polling" is another, even better option, as in this case the CPU parameters including clocks are not affected by the observer effect.
 
Hi, new user here. I'm sorry if this is off-topic, but Google brought me to this thread. My issue/question:
I've been using Intel's i9-12900K for audio use (Pro Tools as the DAW) for a year now, and just now upgraded to the i9-13900K. Mobo is MSI PRO Z690-A DDR4 - I also have an identical gaming system, except it used to have the i7-12700K, and now has the i9-12, both on Win 11 21H2.

I've used HWiNFO64 (now v7.32) along with Task Manager to learn what my different DAW projects are doing as far as core allocation, power use, etc. I use quite a bit of processing, so it's taxing even for these modern CPUs, and I like to tinker with my machines, trying to optimize them (and OC mildly, etc).

I have now observed that on the i9-13900K the same projects generate much higher "peaks" in TM's Performance/Logical Processors view, and also in HWiNFO's Effective Clock cores view, than on the i9-12900K, on the same BIOS version (the newest, 1.92). However, when I actually try to load the systems with artificially demanding single-thread DAW loads (serial real-time audio processing), the i9-13900K still performs better, and the i9-12900K "breaks up" (stops playback with a CPU error message in the case of PT) sooner. The i9-13900K TM display also shows a much larger variance on the cores in question (mostly the favored P-cores, occasionally the P0 core), while on the i9-12900K it seems to be more "rounded".

So, could this be a BIOS bug on the Z690 board, or an actual difference of CPU Utility/Effective Clock reporting between the 12th and 13th gen? Both systems (and both versions of the DAW rig) are performing well, but I was initially rather worried when I saw the much larger TM and Effective Clock values on the same projects on the more powerful CPU.

Chipset and ME Drivers have also been updated (I also learned that the Chipset update didn't "stick" until done after the BIOS update - synthetic multicore like Cinebench R23 performed as it should, but actual core allocation was off and it wasn't using the 2 favored cores as expected until after I redid the Chipset update...)

Edit: I'll add that I do understand that the i9-12 reports a base clock of 3.2 GHz in TM and its P-cores can boost to x52, while the i9-13 reports a base clock of 3.0 GHz in TM (I think these are averages across P/E) and can boost to x58, which will factor into the reporting at least on the TM "Utility" side, but the "peaks" I'm seeing are still very high comparatively. FWIW, Process Explorer's System Information is also reporting higher CPU usage on the i9-13 with the exact same projects running in sync.
 
Hello everyone, I'm a new user on this forum. I'm using a Ryzen 5 1500X + MSI A320M Pro-E. When performing a CPU stress test in CPU-Z, HWiNFO shows the "average effective clock speed" lower than the core clock speeds. When I stress test my 3 other computers, the average effective clock speed is the same as the core clock speeds. What is the reason for this; is it a problem with my motherboard or processor?
 

This matches my observations. On Raptor Lake I have observed that sometimes the performance doesn't tally with the reported clock speeds, as if there was a lower internal speed; then I noticed the effective clock speed in HWiNFO64 and it all makes sense now.
 
During Cinebench my core clocks and effective core clocks are identical; however, when I play games my effective core clocks are all over the place. What does this mean? Am I clock stretching?
 
No, there's nothing wrong with those effective clocks during gaming.

During a Cinebench MT run the entire CPU is loaded, so we can directly compare the two kinds of clocks. The app uses as many threads as the CPU has available.
24 in the case of the 12-core 5900X, for example.

When gaming, the CPU load/usage is far less than 100% and almost never steady. The game can use 3-4 or even 6-7 threads (or even more), and this fluctuates constantly.
While the game (presumably) and the Windows scheduler try to keep the heaviest, most demanding threads on the CPU's best cores (if CPPC is enabled in BIOS/UEFI), the load can still jump around a bit between cores (for reasons unknown to me, at least).
It's normal during gaming to have a couple of average core clocks at, for example, 4.6-4.8 GHz while the average effective clocks on those same cores are around 4 GHz or even much lower than that.
The more cores and threads a CPU has available, the more the load is spread all over the place. It also depends on the game's thread needs at any given moment, how CPU-demanding it is, whether the GPU can keep up with the workload, the resolution and graphics settings, and so on…

It's a complicated matter.
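
As a rough illustration with made-up numbers: the effective clock is approximately the clock while active weighted by how much of the sampling interval the core actually spends in C0. A core that boosts to 4.8 GHz but is busy for only about 80% of the interval will read roughly 4.8 GHz × 0.8 ≈ 3.8 GHz effective, even though its discrete clock still shows 4.8 GHz.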
 
Is the effective clock also the most 'accurate' measure for GPUs (e.g. Nvidia 40xx)? MSI Afterburner, for example, seems to show the 'core clock' rather than the 'effective clock'.
 