Explaining the AMD Ryzen "Power Reporting Deviation" metric in HWiNFO

A PPT of just 45 W under 100% load indicates something fishy is going on here.

I've updated to the latest BIOS available for the Taichi (UEFI 3.0)
SNAG-00008.png
Fresh Boot, nothing except wallpaper engine running

SNAG-00009.png
Cinebench R20 running

It seems to me the data is being reported more accurately at this point, which means the older BIOS was definitely doing something strange compared to the latest release.
-- I'd also like to note my memory in this test is running at 2133 instead of 3200, since that was apparently reset AGAIN upon flashing my BIOS

-- -- Enabled XMP for 3200 MHz RAM again, no change in values from the latest ones posted, so it seems to be good now.
 
I've updated to the latest BIOS available for the Taichi (UEFI 3.0) ... It seems to me the data is being reported more accurately at this point.
Yup, that looks good now.
 
I'm on an ASRock Taichi X570, using the latest BIOS 3.0

powerdeviation.jpg

I'm seeing ~93% at full load (ran CB20), so I'm gathering I'm looking at the max %; I'd say this board is set up perfectly.
 
I've updated to the latest BIOS available for the Taichi (UEFI 3.0) ... Enabled XMP for 3200 MHz RAM again, no change in values from the latest ones posted, so it seems to be good now.
Hmmhmm. Maybe some board makers had foresight that this exploit would come to light and fixed it in their most recent BIOS updates :3 Since it had something to do with 'Auto Rules', my board was probably fixed two BIOSes ago, at the end of last year.
 
OK. Tried stock and got 94.7% as the lowest point during a Cinebench run. Which sucked, because I only got an 8887 score on a 3.9 GHz all-core boost. The EDC glitch (300/230/20, 1x) gives me a 97.2% lowest point during a Cinebench run with a 4.2 GHz, 1.250 V all-core for 9880. Think I'll stick to this as my daily. This is on an ASRock X570 Aqua, BIOS 2.03 that was sent to me by support, btw.
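To put the two runs above side by side, here is a quick back-of-the-envelope comparison using only the numbers quoted in that post (nothing here is measured by the script itself):

```python
# Scores and clocks are taken from the post above; this just computes the gains.
stock = {"score": 8887, "all_core_ghz": 3.9}
edc_glitch = {"score": 9880, "all_core_ghz": 4.2}  # EDC glitch 300/230/20, 1x

score_gain_pct = (edc_glitch["score"] / stock["score"] - 1) * 100
clock_gain_pct = (edc_glitch["all_core_ghz"] / stock["all_core_ghz"] - 1) * 100

print(f"score gain: {score_gain_pct:.1f}%")  # ~11.2%
print(f"clock gain: {clock_gain_pct:.1f}%")  # ~7.7%
```

The score scales slightly better than the clock alone, which is consistent with the higher sustained boost the EDC setting allows.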
 
You could just revert to the AMD High Performance power plan, reboot and test it again?

The CPU is running at stock, with the exception of the 1usmus DRAM calculator inputs plus the advanced CPU BIOS settings it also suggests, e.g. 'C-States enabled'. With those two held static, using different power plans during an MT R20 run showed no change.
 
3900X (full stock) + 3800 CL15 8x4 GB RAM and a custom loop. How is this parameter actually calculated? It seems to vary a lot if you change any LLC settings (or fiddle with a negative offset and so on). F51d BIOS.
 

Attachments

  • x470g7-default.JPG (475.6 KB)
R9 3900X + Gigabyte X570 Aorus Pro (F11 BIOS, default settings), Dark Rock Pro 4 CPU cooling + AMD Performance power plan, latest chipset drivers.
 

Attachments

  • Desktop Screenshot 2020.06.09 - 19.49.18.54.png (887.8 KB)
@The Stilt - Looking at the numbers, I see that when the amps are reduced, the voltage is increased, probably to compensate for the lower amps so it can still hit its frequency targets. Increased voltage equals increased heat output, for CPUs at least. The volts go slightly higher with 150 amp (1.388 V) versus 225 amp (1.375 V), and the 225 amp value (1.375 V) is a good bit higher than the 300 amp one (1.319 V).

If the CPU is reading these values and trying to compensate for the loss of amp capacity with increased voltage to still hit its frequencies, then could that be chalked up to the dumbness of the CPU trying to operate within its envelope? The CPU has no idea how many amps can actually be supplied if it is artificially limited or fed incorrect values.

I have a 2600X with a negative voltage offset of 0.08275 V, which brings the volts down to the 1.3 area versus roughly 1.39 at stock. The net effect has been reduced temperature, which has yielded slightly higher all-core boost frequencies (was 3.9 all-core with no voltage offset, versus 3.95 GHz [63 °C] to 4.05 GHz [58 °C] depending on temperature). I have been experimenting with the voltage offset to see how it affects CPU auto-boosting versus the higher stock voltages. I see this same trend with the CPU that you tested: higher maximum frequency numbers (4.35, 4.45 GHz) with lower voltage, versus higher voltage (4.3 GHz) for the 150 amp test. I also see that more cores boost higher overall with increased voltage versus less voltage.

I also see that the wattage for the 300 amp test is less than 11 watts for half the cores and less than 10 watts for the other half, but those values increase by roughly 2 watts per core in the 225 and 150 amp tests. It would seem that HWiNFO isn't correctly showing the actual wattage used if all cores draw fewer watts with the 300 amp value than in the other tests, yet show a higher temp. Something doesn't correlate correctly...
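The voltage/heat trade-off described above can be sketched with a generic rule of thumb: at a given clock, dynamic CPU power scales roughly with the square of the core voltage. This is a rough approximation, not The Stilt's methodology; the voltages are the ones quoted above, and equal effective frequency across the tests is an assumption.

```python
# Rough illustration only: dynamic power ~ V^2 at a fixed frequency, so the
# higher core voltage seen at lower EDC limits costs disproportionate heat.
def rel_power(vcore: float, v_ref: float) -> float:
    """Dynamic power relative to a reference voltage, same clocks assumed."""
    return (vcore / v_ref) ** 2

V_REF = 1.319  # the 300 amp test, used as the baseline
for amps, v in [(300, 1.319), (225, 1.375), (150, 1.388)]:
    print(f"{amps:3d} A @ {v:.3f} V -> {rel_power(v, V_REF):.3f}x dynamic power")
```

By this estimate the 150 amp setting pays roughly 10% more dynamic power than the 300 amp one for the same work, which lines up with the higher temperatures reported despite the lower per-core wattage readings.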
 
3900X (full stock) + 3800 CL15 8x4 GB RAM and a custom loop. How is this parameter actually calculated? It seems to vary a lot if you change any LLC settings (or fiddle with a negative offset and so on). F51d BIOS.

Changing LLC or offset has an impact on the detection algorithm. You need to be running at stock settings for the method to work properly. This has been stated in the OP.
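Conceptually (this is a sketch of the idea, not HWiNFO's actual implementation), the metric compares the power the board's VRM telemetry reports to the CPU against a reference estimate of what the CPU should be drawing under a known, fixed full load; their ratio in percent is the "Power Reporting Deviation". LLC or offset changes skew that reference, which is why stock settings are required for a valid reading.

```python
# Conceptual sketch only -- not HWiNFO's code. reported_w is what the board's
# telemetry claims; reference_w is the estimated true draw under full load.
def power_reporting_deviation(reported_w: float, reference_w: float) -> float:
    """~100% = honest telemetry; well below 100% = under-reporting."""
    return reported_w / reference_w * 100.0

# Example: a board telling the CPU it draws 85 W while the reference says 100 W
print(power_reporting_deviation(85.0, 100.0))  # 85.0 -> under-reporting
```

An under-reporting board makes the CPU think it has power headroom left, so it boosts harder than its rated limits would normally allow.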
 
I'm even running this under an H150i, triple AIO rad
I think you have a cooling issue there. I'm using an Arctic Cooling Liquid Freezer 120 and my 2600X doesn't even get up to 70 °C under sustained load! I'd check that your thermal paste is applied correctly and the block is mounted and tightened down properly. Is your pump running? Those are seriously high temps for a 360 mm rad.
 
Changing LLC or offset has an impact on the detection algorithm. You need to be running at stock settings for the method to work properly. This has been stated in the OP.
Yes-yes; if you read my post you'd see that I tested it in a completely stock config. Each board has its own stock settings (mine droops very hard, almost 0.07 V under load), so how do you actually determine real power consumption without doing the calibration The Stilt mentions in his write-up? Also, how can we disentangle mundane bad calibration from intentional under-reporting? I'd actually like to control it (for science, of course); it would be interesting to know how that's possible.
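If one is willing to trust the deviation figure itself, a rough answer to the "real power" question is to invert the ratio: the actual draw is approximately the reported power divided by the deviation fraction. The numbers below are hypothetical, for illustration only.

```python
# Back-of-the-envelope estimate, assuming the deviation reading is trustworthy:
# real draw ~ reported / (deviation / 100) when the telemetry under-reports.
def estimate_actual_power(reported_w: float, deviation_pct: float) -> float:
    """Correct a reported package power for the measured reporting deviation."""
    return reported_w * 100.0 / deviation_pct

# e.g. a 45 W PPT reading (like the one at the top of the thread) at a
# hypothetical 50% deviation would really be about 90 W:
print(estimate_actual_power(45.0, 50.0))  # 90.0
```

This doesn't solve the calibration-versus-intent question, of course; it only converts one number into another under the assumption that the deviation measurement itself is sound.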
 