IMPORTANT Explaining the AMD Ryzen "Power Reporting Deviation" metric in HWiNFO

Hello everyone. I posted some messages before, but I did not really understand what I should do, and I still don't, so I need you to shed some light on this because I am very confused. I built my PC a month ago; the machine has the following specifications:

Ryzen 5 3600
Cooler: Cooler Master Hyper 212
GPU: RX 6500 XT
Motherboard: MSI MAG B550 Tomahawk
with the latest stable BIOS version (7C91vA9)

All settings are at factory defaults. I have not done any kind of OC on my computer beyond enabling the RAM's XMP profile. I would like to know if the values you see are normal; I took the capture while running the R23 multicore test, when the CPU was at 100%. Thanks!
100 percent complete.PNG
 
On the one hand, when I run the multicore test the deviation is between 129.2% and 131.2%, usually landing between 130.2% and 130.6%. In single-core Cinebench it is 150-160%.

Ryzen 2700, no OC, and I run a very old BIOS. During multicore testing the temps were okay, with no throttling (the warmest part was around 72-74 degrees Celsius). But I ran a function over the HWiNFO logs, and the average CPU frequency during the test (it's easy to tell when the test ran, since the voltage went just below 1 V) was 2957.9 MHz. Normal clocks in lightly threaded apps are 3324.1 to 3374.1 MHz. The same happens in WinRAR, though there the problem is that I'm always at the EDC limit, even when idle: 98.9% of the EDC limit at full idle, 99.1% under lightly threaded work, and 99.7% at full load.
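
For anyone who wants to reproduce that kind of log averaging, here is a minimal pandas sketch. The file name and both column headers are assumptions; HWiNFO sensor labels vary by board, CPU, and logging settings, so check the header row of your own CSV first:

```python
# Minimal sketch: average the logged core clock over the test window,
# picked out the same way as described above: rows where Vcore dropped
# below ~1 V during the all-core run.
# "hwinfo_log.csv" and both column names are hypothetical placeholders.
import pandas as pd

log = pd.read_csv("hwinfo_log.csv", encoding="latin-1")
under_load = log[log["Vcore [V]"] < 1.0]
print(under_load["Core Clocks (avg) [MHz]"].mean())  # e.g. ~2957.9 here
```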

So both of these things considerably slow down my CPU: my Ryzen 2700 scores around 9000 (8946) in Cinebench R23 multicore and 1059 in single core. That is 85.27% of the multicore performance it should have, and 82.15% of the single-core performance. I'm building a new AM4 system, and when I'm done I'm flashing the BIOS on this one; hopefully that will help. Otherwise I would have kept thinking that my CPU was too weak, when it was a bug! And I decided to go with a superb 5950X build. Me, a person who has been putting money away since 2017 (I don't earn much, slightly over $100 a month), and I still had to borrow money to buy the new computer parts. I'm going to DIY a PC for the first time since 2004, and I have never done a whole build.
Cinebench_R23_Deviation_130_prcnt_NT_and_150_to_160_prcnt_1T.jpg
 
Can someone guide me with this? I don't really know what to do, and I don't know if this is okay or if I should be worried.
 

As shown in your screenshot:
Reported CPU package consumption (CPU PPT) during 100% CPU load = 88 W
Power Reporting Deviation (PRD) during the 100% CPU load = 92%

Meaning:
The reported 88 W is in fact 92% of the true power consumption of the CPU package.

True power:
88 / 0.92 = 95.6 W

Your CPU's true consumption is 95.6 W when you're seeing 88 W and a 92% PRD during this particular test.
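
To make the relationship explicit, here is the same arithmetic as a tiny Python helper (just a sketch of the calculation, not anything HWiNFO itself provides):

```python
def true_power(reported_ppt_w: float, prd_pct: float) -> float:
    """True package power: reported PPT divided by the deviation factor."""
    return reported_ppt_w / (prd_pct / 100.0)

print(true_power(88, 92))  # 95.65..., the ~95.6 W worked out above
```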

I have the R5 3600 on an X570 Aorus Pro and it has behaved exactly the same (under the CB R20 multicore test) for the last 30 months. It's OK and your temps are OK, similar to mine.
Nothing to be concerned about.
 
I know this is an older thread, and I perfectly understand the issue. What I don't understand is how to resolve it. While I'm not really concerned about the longevity of the CPU, I am concerned about the temps. This is my file/media server that runs 14x7 and also does encoding/transcoding. It's running a 3900X in an Asus B550-E with UEFI v2423. With a Noctua U12S Chromax, temps are getting close to 80°C, and my PRD is generally around 80% at 100% load using Cinebench R20. I am not doing any overclocking, no PBO or messing with the voltages. I do have the memory manually set to its rated clock speed and latency. Outside of that, everything is set to Auto.

After reading through this entire thread, as well as other sources, I've not seen a consistent explanation of how to resolve this. Some say to mess with the CPU voltage, some say to adjust the PPT, some say to adjust the CPU VDD, and some say to change settings I don't have in my UEFI, like a full-scale VDD that's separate from the CPU or SOC. Can someone give me an idea of which setting I need to adjust to resolve this?
What was the CPU PPT when you were seeing the ~80% PRD during 100% CPU load?

It's important because the true power consumption is calculated from CPU PPT and PRD, as I wrote in post #505.
80°C is not that bad for a 140 W (PPT) CPU. The max operating temp of the 3000 series is 95°C. It depends a lot on the CPU cooler, but also on room/case ambient temp.

The simplest way to turn down the true power consumption is to lower the CPU PPT in the BIOS (PBO settings) to a level where the calculation PPT/PRD gives something close to 140 W, if your mainboard BIOS does not have any CPU power telemetry settings like the full-scale VDD setting.
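
If it helps, picking that PPT value is just the reverse of the true-power calculation. A rough sketch, under the assumption that PRD stays near 80% after you lower the limit (it usually shifts a little, so re-check HWiNFO and iterate):

```python
# Choose a PPT limit so that (new PPT / PRD) lands near the stock ~140 W.
target_true_w = 140.0   # desired true package power
prd = 0.80              # the ~80% deviation reported under full load
print(f"PPT limit to set: about {target_true_w * prd:.0f} W")  # ~112 W
```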

EDIT:
Just saw on your personal thread that you resolved it by adjusting the CPU telemetry.
Well done!
 
Perfect. Thank you very much for the answer, I feel much calmer now!!
 
Any thoughts on resolving my issue? I'm building a new computer, but until it's done I probably won't risk flashing the BIOS (I do have a UPS, but this is a system that I need to operate 24/7).
Honestly, I always thought that my poor performance was due to ambient temperature (I live in the attic, just below the metal sheets that make up the roof; 30 degrees Celsius counts as cold here during the summer) and the fact that I use the AMD stock cooler. At idle nothing goes above 60°C, and after a 1-hour stress test (not including the GPU) the hottest elements approached 80°C, so the temps are not the issue here. The issue is that the EDC limit is ALWAYS above 98% (I saw that as well when I launched Ryzen Master), and the deviation, as I wrote, is at 130% at full load. At idle it is usually around 230%.
There was a time when I had temperature problems (something clogged my cooler, but it didn't stop working, so I didn't notice). Back then it thermal throttled a lot and randomly shut down on at least a dozen occasions during semi-tough workloads (not full stress tests).
Given that I can OC above the current single-thread boost (3.275 GHz, from memory) and run all cores at 3.375 GHz with slightly lower voltages (just a tiny bit, but still) while maintaining decent temps, that might be the answer. I just hope that OC isn't the only answer I have, apart from flashing the BIOS, which I will do as soon as possible.
 
What is your reported CPU PPT (in watts) during that stress test where you're getting 130% PRD?

And never pay any attention to PRD at idle or even in medium-load tasks. It doesn't mean anything there.
It is only meaningful at near 100% CPU load.

EDIT:
Better to state all the relevant CPU values during the 100% load (at 130% PRD):
PPT, EDC, TDC
 
Cinebench with HW Info in view..jpg
I also have second-by-second data from my previous runs in the form of an Excel spreadsheet. I captured this after shutting down everything inessential and running HWiNFO64 and the Cinebench R23 multicore test.
Notice that Vcore is lower here than at idle (idle is about 1.07-1.08 V). Honestly, I should have let the CPU run that test; IIRC it previously only drew 55-65 W of power, which sort of made sense since it is a 65 W CPU.
 
One more thing: I also used HandBrake and I observed the opposite effect. It wasn't 100% load, but 80-85% on all cores. Within 30 seconds temperatures went to 90°C, after another 30 seconds to 95°C, and after another minute or so the system shut down from overheat protection; the CPU was at 105°C.
So it over-reports floating-point usage but under-reports integer usage? Throttling officially never started, but the CPU lowered its frequency below 3 GHz. Before I cleaned the CPU and reapplied paste this February, it would go as low as 2 GHz to stay around 100°C while rendering a movie in DaVinci. But I thought that was fixed?

EDIT: also, during the HandBrake 80-85% load the deviation parameter was 66%.
 
I do not have much experience with Zen+ CPUs, but it looks to me like the CPU is under thermal constraint. The first clue is TDC reaching its limit of 60 A.
"Thermal Design Current (“TDC”): The maximum current (amps) that can be delivered by a specific motherboard's voltage regulator configuration in thermally-constrained scenarios"
I don't know the max operating temp of these CPUs either... but this could be a board restriction, not a CPU one. 75 or even 80°C on the CPU should be fine and should not cause a CPU thermal throttle.

"Normally" I would expect TDC to be far away from reaching limit. Its ok if PPT and/or EDC hit the limit.

I don't know if that "VRM MOS" temperature is related to this. 70°C on the VRM (Voltage Regulator Module) is not ideal, but it's not near the theoretical limit either (95~100+°C), unless maybe the VRMs on the board are too cheap.

Somewhere else I found this:
"Thermal design current is the sustained (DC equivalent) current that a load is capable of drawing indefinitely and defines the current to use for voltage regulator temperature calculations."

--------------------------------------------------

Most 65 W (TDP) CPUs from AMD have a total package power (PPT) limit of around 88 W, with the exception of the 5600X, which is a 75 W PPT CPU.
TDP (Thermal Design Power) is more or less the maximum heat that the CPU will dissipate towards the cooler under very... very specific conditions. It's a guide for minimum cooler design aimed at cooler manufacturers. It does not state the max stock power consumption of a CPU.

And 130% PRD with 77 W PPT means that the CPU is really drawing about 59 W, which is somewhat far from the 88 W limit.
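
The same division as before, just run in the other direction; again only a sketch using the numbers above:

```python
# PRD above 100% means the board over-reports, so the real draw is *below*
# the reading, here well under the 88 W PPT spec of a 65 W-TDP part.
reported_ppt_w = 77.0
prd = 1.30  # 130%
true_w = reported_ppt_w / prd
print(f"true draw: {true_w:.1f} W ({true_w / 88.0:.0%} of the 88 W limit)")
# true draw: 59.2 W (67% of the 88 W limit)
```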
 
So I just upgraded my GPU from a 5700 (non-XT) to a 3070 FE, and after installing it and all the drivers I noticed my CPU idle and load temps increased dramatically.

My CPU is a 3700X with the stock cooler, the mobo is a B550AM Gaming, and the GPU is the only thing that has changed in my system. No other settings were touched. I did have to upgrade the version of HWiNFO I was using, since I could not get the old one to see the 3070... the new version had the "Power Reporting Deviation" reading, which is what brought me here.

With the 5700 installed, my CPU temps were the following:
Idle = 36°C
Load = 65°C
Stress test (CPU-Z) = 80°C (after 60 seconds)

With the 3070 installed, my CPU temps are the following:
Idle = 43°C
Load = 78°C
Stress test (CPU-Z) = 92°C (after only 10 seconds... I immediately stopped it)

Prior to this version of HWiNFO I never had the Power Reporting Deviation reading, but I understand it only means something under full load. These are the numbers being reported:
Idle = 92.8%
Stress test = 49.3%

My understanding is that the lower the percentage, the more power the CPU is actually drawing, and thus the higher the temps, which would explain my increase. Any idea what could be causing this? Could changing a GPU affect CPU behavior this much?
 
I don't see how a GPU change could make this happen... Did you test with the latest version of HWiNFO and the old card (5700)?
I would suggest updating to the latest BIOS version for your mainboard if you haven't already. 49% Power Reporting Deviation is way off; it shouldn't be less than 90%, with 85~80% tops being (maybe) acceptable.
Your mainboard is telling the CPU that it's drawing half the power it really is, so the CPU accelerates itself beyond its marketed limits.

Your max CPU power draw follows from the reported "CPU PPT" sensor during the 100% CPU load (with the 49% PRD).

Another thing to note for peace of mind:
Even so, under these conditions the CPU is still within its silicon limits. This only breaks the marketing/advertised specs.
 
Just ran it again, and when the PRD is 47.3% the others are as follows:
CPU PPT: 53.171 W
CPU PPT Limit: 60.4%

I have not tried putting the 5700 back in, but I suppose I can when I have some extra time just to see.

Something else I've noticed is that the spikes have become much more frequent and more noticeable than before. Just opening Chrome and typing this reply produces spikes to 69°C and averages in the upper 50s/low 60s °C, when it would never get above 55°C during normal tasks before. Now the spikes go from 44°C to 58°C to 42°C to 69°C, they seem to happen constantly... and they linger at those higher temps for several seconds, whereas before it would be a quick spike and then back to normal.

Ambient room temp is 25-27°C, which is a bit hotter than normal, but it's been hotter in the past and this was never an issue. I put a floor fan blowing directly on the case and the idle temps have dropped to an average of 38°C... that still doesn't help the increased spikes when using Chrome or other applications, though.

Perhaps it has always done this and I never noticed because I never had that deviation reading available?

Any other thoughts on what might be causing it?
Thermal paste issues?
Windows "Power Options" issues? (I did notice a change in these with a recent Windows update... I just don't remember what I had it set to.)


EDIT: I've attached a screenshot that shows HWiNFO, Ryzen Master, and Cinebench R23 all running. Note that Cinebench had only been running for 24 seconds before these temps were reached. Before, it would run for minutes before ever getting close to 90°C. Unfortunately, I do not remember my exact stats from before.
CPU Bench Test 2.png
 
Those temps make sense in a slightly warm ambient temp. It depends on the CPU cooler as well.

65 W (PPT) with a 51.5% PRD results in a ~126 W total CPU power draw during that R23 run:

65 / 0.515 = 126 W

I would check for the latest board BIOS; it's a "bit" off from the 88 W PPT spec. I've seen this before on some B550s.

If you can't find a new BIOS, I believe ASRock has a BIOS setting to correct this in the CPU telemetry. I can't remember the exact setting name.
 
Note that Cinebench had only been running for 24 seconds before these temps were reached. Before, it would run for minutes before ever getting close to 90°C.
That sounds like a cooling issue to me. Even allowing for warm ambient temperatures and an AMD stock cooler, the CPU shouldn't hit its max temp in under 30 seconds. Check if the cooler is warm to the touch; if not, it's not mounted properly. Maybe the cooler's contact with the IHS somehow changed (i.e. the cooler came loose) while you swapped the GPU.

Regards
Dalai
 
Now the spikes go from 44°C to 58°C to 42°C to 69°C
Check your in-case temps. Based on my experience with ASRock boards, the "Motherboard" temp sensor should report 7-8°C above ambient if the case is well ventilated. Your 38-39°C is likely too much.
But I wouldn't be surprised if something were going wrong with a board that doesn't even exist in the product list on the official ASRock site.
 
My motherboard is currently reporting an average of 33°C with a max of 37°C. The average room temp during this time was 26°C.
This room does get hot, and after adjusting external room fans to blow on the case, my CPU average temp over the past 12 hrs is 36°C (CPU CCD1). My GPU average is 31°C.

These idle temps are actually cooler now than they were before, but it's the spikes, and the fact it gets to 90°C so quickly during the stress tests, that got me concerned. As I'm downloading a game on Steam and typing this with a few other browser tabs open, I'm averaging 55°C with spikes into the low 70s. I rarely ever got out of the 40s when doing this before, with the occasional spike into the low 50s.

I checked the cooler and it does get warm during the stress test. The fan seems to ramp up at the correct temps according to the curve I have it set to:
40°C / 40%
60°C / 40%
70°C / 90%
75°C / 100%
Critical = 89°C

I know these temps are all still well within the acceptable range; it's just that this sudden increase of 10-15°C from a week ago has me concerned, when all I did was change the GPU. Someone else pointed out that my 5700 was a blower-style card, so it exhausted most of its heat out of the case. Despite the 3070 FE running cooler overall, it's a dual-fan card that dumps its heat into the case. I did not think it would affect temps THIS much, but I suppose I'll chalk it up to that and just watch that it does not get crazy hot during gaming (which it has not come close to doing yet). I don't believe I'm at the point where I need to remove the cooler and reapply the thermal paste.

EDIT:
Something else I noticed with this new version of HWiNFO is that it has a new temp reading, "CPU Die (average)". The last version I had did not have this, and I've always used the CPU CCD1 reading. Is CPU Die a better reading to go by?

When the CCD1 reading shows a current max of 72°C, the CPU Die shows 65°C. The CPU Die reading seems to average about 3-5°C less.
 
It's not a good idea to use the stock cooler. IMHO its main purpose is to define thermal design metrics for the CPU and to serve "set-and-forget" configs like an office PC.
It's not really meant for the DIY market.
 