# FEATURE: Effective clock vs instant (discrete) clock

No, it's not possible to filter out the time spent in 'sleep mode', that's against the principle of effective clock. Even if someone would do this somehow, the values you would get would have even less meaning.
To know the exact clocks you would have to sample every clock value at discrete points equal to the clock changing frequency. And this is not possible for any software.
Ok. Thanks.

I understand that... but just a question
How exactly are these averages calculated? Is it related to the HWiNFO polling interval, whatever that may be in msec (2000 ms by default)?

Those values are accurate as per definition. But there are 2 different things:
1. The Effective clock value is the average clock measured internally by the CPU across the polling interval set in HWiNFO. So if for example this is set to 1000 ms, the Current Effective Clock value reflects the average of all clock states during the last 1000 ms. Reducing the polling interval in HWiNFO will allow showing the clock with a finer granularity.
2. The Average column calculated for any value in the sensors screen is another average of all Current values read by HWiNFO. The averaging interval can be configured in HWiNFO sensor settings.
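The two averages described above can be illustrated with a small sketch. This is not HWiNFO's implementation (the effective clock is measured inside the CPU itself); it only shows the time-weighted averaging principle behind point 1, with made-up numbers:

```python
# Illustrative only: a time-weighted average of clock states across one
# polling interval, the principle behind the Effective clock value.

def effective_clock(states):
    """states: list of (duration_ms, clock_mhz) covering one polling interval."""
    total_ms = sum(d for d, _ in states)
    return sum(d * clk for d, clk in states) / total_ms

# One 1000 ms polling interval: 600 ms at 4700 MHz, 400 ms asleep (~0 MHz).
window = [(600, 4700), (400, 0)]
print(effective_clock(window))  # 2820.0 MHz
```

Even though the core runs at 4700 MHz whenever it is awake, the 400 ms of sleep pulls the interval's effective clock down to 2820 MHz.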

1. Got it...
2. So by default it is 0 (samples), meaning it covers the entire runtime of the sensors mode (unless reset via the clock icon at the bottom right)?
And what values can it be set to? How many samples does HWiNFO take per second or minute?

The averaging interval specifies how many last samples are being taken into account. So for example setting 10 with polling interval of 1000 ms means the Average column will show average value of the last 10 seconds.
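The Average column behavior described here can be modeled as a rolling mean over the last N samples, where N is the averaging interval and one sample arrives per polling interval. A minimal sketch (class and variable names are made up for illustration):

```python
from collections import deque

# Illustrative model of the Average column: a rolling mean over the
# last N "Current" samples.

class RollingAverage:
    def __init__(self, n_samples):
        # deque with maxlen automatically discards the oldest sample
        self.samples = deque(maxlen=n_samples)

    def add(self, value):
        self.samples.append(value)

    def average(self):
        return sum(self.samples) / len(self.samples)

avg = RollingAverage(10)  # averaging interval = 10 samples
for clk in [4700, 4600, 4500, 4700, 4700, 4400, 4700, 4700, 4600, 4700]:
    avg.add(clk)          # with a 1000 ms polling interval,
print(avg.average())      # this covers the last 10 seconds -> 4630.0
```

With a 1000 ms polling interval, 10 samples span the last 10 seconds, matching the example above.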

How dumb of me... of course... samples = polling samples

Thanks a lot!

Why are lower effective clock speeds shown when my CPU is overclocked in the BIOS with a fixed multiplier and every C-state turned off? No SpeedStep or Speed Shift either.
Fixed vCore too, so while the usage of my cores may fluctuate, the core clock speeds should not.
Windows is using Ultimate Performance mode with a minimum CPU state of 100%.

So why is HWiNFO64 showing lower clock speeds when there are no power-saving modes active?
Sorry if I am missing something, and thanks for any answers.


What generations of Intel CPUs are compatible with this function? Or is there another prerequisite (OS/driver)?
I can see it on a Comet Lake processor but not on a Coffee Lake.


It was introduced in the early Core series, so Coffee Lake should support it well. If you don't see it there, something else in the system might be messing up the counters, such as other tools or virtualization.
I would need to see the HWiNFO Debug File to know more.

I need to reinstall the "Coffee Lake" computer with a newer OS in a few weeks, if the value is still missing then I'll come back to you. Thanks

I have a Threadripper 3960X on a Gigabyte TRX40 Aorus Master (not currently overclocked). When I run Prime95 (small FFTs on all 48 threads) and let the system stabilize, all 24 cores show steady clock speeds between 3775 and 3800 MHz, with a simple average of ~3795. The utilization on all 48 threads is steady at 100.0%. But the average "effective clock" speed on the 48 threads ranges from ~3580 to ~3780. I understand the general concept behind "effective clock", but I would have thought that at a steady 100% utilization on all threads, the effective clock should equal the actual clock. Can you please explain? Thank you.

No, steady 100% utilization doesn't guarantee that. The maximum clock reached depends on several factors (power, current, silicon FIT limits, temperature). So when any of those limits is hit, the clock will be reduced, and this can fluctuate so frequently that it's not captured by the instant clocks.
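A toy model (not HWiNFO's implementation, numbers invented) shows how brief throttling dips can be invisible to instant sampling while still lowering the time-weighted average that the effective clock reflects:

```python
# Toy model: a core briefly throttles from 3800 to 3000 MHz for 5 ms
# out of every 100 ms. An instant reading taken once per second at a
# fixed offset can miss every dip, but the time-weighted average
# (what the effective clock measures) still reflects them.

DIP_MS, PERIOD_MS = 5, 100
HIGH_MHZ, LOW_MHZ = 3800, 3000

# Time-weighted average over one period:
effective = (HIGH_MHZ * (PERIOD_MS - DIP_MS) + LOW_MHZ * DIP_MS) / PERIOD_MS
print(effective)  # 3760.0, lower than any instant sample taken between dips
```

This matches the observation above: 100% utilization, yet the effective clock sits below the instant clock whenever a power/current/thermal limit is being hit between samples.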

What Martin said, and to go a little further if it helps...
The greater the difference between the instant (discrete) clock and the effective clock, the higher the C1/C6 state residency.

Rough explanation
C0 = Active state
C1 = Sleeping state
C6 = Deep sleeping state

Essentially this is what the effective clock incorporates: all the states of the cores together.
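The residency idea above can be sketched with a one-liner: if a core runs at a fixed clock while in C0 but spends part of the interval in C1/C6 (contributing roughly 0 MHz), the effective clock scales with C0 residency. The function name and numbers are invented for illustration:

```python
# Rough illustration: sleeping states contribute ~0 MHz, so only the
# C0 (active) share of the interval counts toward the effective clock.

def effective_from_residency(active_clock_mhz, c0_fraction):
    return active_clock_mhz * c0_fraction

print(effective_from_residency(4700, 1.00))  # fully loaded: 4700.0
print(effective_from_residency(4700, 0.50))  # 50% sleep residency: 2350.0
```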

Thanks Martin & Zach. That's very helpful. I did some further testing and what I found is that in general the reported "average effective clock" (under full load on all 48 threads) is about 99% of the simple average of utilization multiplied by clock speed for each of the 48 threads individually. This is obviously just restating what I said in my original question above and is consistent with your explanations that there are micro fluctuations to lower clock speeds or lower C-states that are captured in the "average effective clock" number but not in the individual utilization or clock speed numbers.

However, when I manually overclock by setting clock speeds individually for the various cores, then the ratio is exactly 100%. I guess that's just because once you go to manual clocking, you effectively lock the clock speeds rather than allowing the CPU to automatically lower or boost its clock speeds.

Is that right?

Of course. When you manually overclock, you disable all power management of the chip, and the cores' clocks don't boost or fall back to lower power states.

hi all,

I noticed this thread about the effective clock and have some questions, or maybe I'm misreading it. I run a manual overclock at 4.7 GHz on all cores (8700K), no C-states, Windows 10 with Bitsum's (ParkControl) power plan, all cores unparked. But somehow the effective clock is about 50% lower than the actual speed. Am I losing performance somewhere? When I use the OCCT stress test, all cores ramp up to 100% and the effective clock is then actually correct, 4.7 GHz. The question comes up with my sim racing, which is very CPU-demanding: on the GPU I still have 50% headroom in the Oculus Rift performance app, which makes me wonder, is my CPU really that slow?

The attachment was captured while in Assetto Corsa with 24 AI cars.

#### Attachments

• Effective Clock.PNG

I believe that just means the CPU is not sufficiently loaded in Assetto Corsa, so it's often reducing the clock. OCCT is heavy load.

Just trying to wrap my head around it. Are you saying that the Instant Clock reading simply looks at the multiplier, and if the core is sleeping or idle, it still reports the clock as bus x multi?

For instance, if the bus is 100 and the multiplier 35, but the core is sleeping, the Instant reading shows the core at 3.5 GHz even though it's sleeping, whereas the Effective clock will show 0 MHz (or whatever speed its low-power state corresponds to)?

This is why Instant will never show a very low reading?


Exactly! Moreover, to read the actual multiplier the core needs to wake up, which introduces the observer effect.
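The distinction confirmed above can be summarized in a small sketch. All names and values are illustrative; the real effective clock comes from hardware counters, not from software arithmetic like this:

```python
# Sketch of the distinction: the "instant" clock is bus x multiplier
# sampled at one moment (and reading the multiplier wakes the core),
# while the effective clock weights in the time spent asleep.

BUS_MHZ = 100

def instant_clock(multiplier):
    # The core must be awake to report a multiplier, so this can never
    # show a sleeping core's near-zero speed (the observer effect).
    return BUS_MHZ * multiplier

def effective_clock(multiplier, awake_fraction):
    return BUS_MHZ * multiplier * awake_fraction

print(instant_clock(35))          # 3500 MHz even if the core mostly sleeps
print(effective_clock(35, 0.02))  # ~70 MHz: the sleep time is counted
```

This is why the instant clock never shows a very low reading on an idle core, while the effective clock can drop close to zero.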