Normal vs. Snapshot polling mode

  • Thread starter Deleted member 15652

Deleted member 15652

Hi Martin, and everybody here!
Here is a screenshot of my PC running the Cinebench R23 MT test. I've merged two sensor panel screenshots from the different polling modes.

[Attachment: HWInfo64 normal vs snapshot-polling.png]

Can someone explain why the reported core VID values are:
- all the same and close to the Vcore SVI2 value (normal mode, left panel)
- all different and unrealistically low (snapshot polling mode, right panel)?
Thanks!
_______________________________________________________________________
Ryzen 5 5600X, ASRock B550 Extreme4 ( agesa 1.1.0.0 patch D, smu 56.40 )
 
How do you know that they are unrealistically low?
First of all, VIDs are not the actual supply voltage but only individual core requests. The final core(s) request is "CPU Core VID (effective)" and the actual voltage supplying the cores is "CPU Core Voltage (SVI2 TFN)".
If I'm not mistaken, Snapshot Polling uses a more accurate method.

Off topic:
You have a static overclock, and 1.35 V is too much for this CPU under heavy load, regardless of the low (within limits) temperature. Or you are using Curve Optimizer to push clock and voltage excessively...
The CPU power consumption (PPT) is 48% over stock limits, but that's not the worst part. The CPU current (EDC) is 35% over stock limits, and that is the worst part.
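For reference, those percentages can be turned back into absolute numbers. This is a rough check assuming the 5600X's stock limits are PPT 76 W and EDC 90 A (the usual figures for AMD's 65 W TDP parts; the thread itself doesn't state them):

```python
# Back-of-the-envelope check of the quoted overages, assuming
# 5600X stock limits of PPT = 76 W and EDC = 90 A (65 W TDP part).
STOCK_PPT_W = 76.0
STOCK_EDC_A = 90.0

ppt = STOCK_PPT_W * 1.48   # "48% over stock limits"
edc = STOCK_EDC_A * 1.35   # "35% over stock limits"

print(f"implied PPT ~ {ppt:.1f} W")   # ~ 112.5 W
print(f"implied EDC ~ {edc:.1f} A")   # ~ 121.5 A
```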
 
 
How do you know that they are unrealistically low?
I don't know for sure; it's just an assumption based on what I see in the sensor reports, and I understand what VIDs are. Why do I think they are too low? Because it's not clear how the "final" VID of 1.413 V is derived from the current per-core VID range of 1.239 - 1.307 V; that is 106 mV above the maximum per-core VID requested (even taking Vdroop into account). I'd appreciate it if you shared your knowledge on this, rather than quoting Zen architecture basics.
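The arithmetic behind that 106 mV figure, using only the values visible in the screenshot:

```python
# Recomputing the delta quoted above: the "final" effective VID versus the
# highest per-core VID seen in snapshot mode (values from the screenshot).
effective_vid = 1.413            # CPU Core VID (effective), volts
per_core_vids = [1.239, 1.307]   # min/max of the reported per-core VID range

delta_mv = (effective_vid - max(per_core_vids)) * 1000
print(f"{delta_mv:.0f} mV")      # -> 106 mV
```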

Off topic:
You have a static overclock and 1.35V is too much for this CPU under heavy load...

First, I don't use a static OC; you're right, it's PBO2.
And sorry, but I can't take seriously what you wrote under "off topic".
Of course PPT and EDC are increased; that's what we call "overclocking" :)
I understand if you have only recently started learning the Zen architecture, but why not just assume that the PMU/SMU and FIT algorithms know better what is considered safe, unless you statically overclock? Mind you, CB MT is not the heaviest workload; in Prime95 Small FFT, for example, Vcore (SVI2) dropped to ~1.280 - 1.300 V.

I apologize for not sharing the detailed settings earlier, so here they are:

Power settings:
  • CPU Vcore/LLC - Auto
  • Vsoc - fixed 1.105 V / LLC1 (strongest on ASRock)
  • VDDP - 0.900 V
  • VDDG - CCD/IOD 0.950/1.060 V
  • DRAM 3800 @ CL15 / IF 1900 (synced)

PBO settings:
  • Limits - Motherboard
  • Boost Override - +400
  • Scalar - 5x
  • Curve Optimizer: -13 -12 -13 -5 -2 -13
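As a rough sense of scale for those Curve Optimizer counts: AMD does not document the per-count VID shift, but community estimates commonly put it around 3-5 mV per count, so a sketch under that assumption:

```python
# Approximate VID shift of the Curve Optimizer offsets above, assuming the
# commonly cited (but not AMD-documented) ~3-5 mV of adjustment per count.
co_counts = [-13, -12, -13, -5, -2, -13]   # per-core CO settings from the post
MV_PER_COUNT = (3, 5)                      # assumed lower/upper bound, mV

for core, count in enumerate(co_counts):
    lo, hi = count * MV_PER_COUNT[1], count * MV_PER_COUNT[0]
    print(f"core {core}: {lo} to {hi} mV")
```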
 
Thanks for the link; I've seen it before, though, and I can't see how it answers the question below, so is it really too much to ask for a more detailed reply?
it's not clear how the "final" VID of 1.413 V is derived from the current per-core VID range of 1.239 - 1.307 V; that is 106 mV above the maximum per-core VID requested (even taking Vdroop into account)
 
Well, I guess no one except AMD knows what exact factors play a role in final VID decision. Perhaps the Dynamic VID Optimizer or PBO Scalar.
 
Well, I guess no one except AMD knows what exact factors play a role in final VID decision. Perhaps the Dynamic VID Optimizer or PBO Scalar.
Yeah, I know the lack of public info makes this guesswork, but you guys, who are deeply involved in building monitoring/tuning software, know a tad better. The one thing regarding power management I'm most interested in is whether Vermeer uses separate per-core voltage regulation in all-core workloads. Can you confirm this (the assumption made by 1usmus in his CTR 2.0 article)?
 
I understand if you have only recently started learning the Zen architecture...
No, I haven't just recently started learning about the Zen architecture. I've been studying it very closely since Zen 2 launched.
The FIT algorithms are pushed over the silicon limits when using a Scalar beyond 1x. It is well known that a Scalar over 3-4x supplies (as Buildzoid would put it) stupid amounts of voltage. The 5600X would never ever do that to itself, giving 1.35+ V Vcore for a CB workload. If you want to know what FIT truly allows under a CB load, run it with PBO just Enabled (everything else Auto). Don't be surprised if you see voltage under 1.3 V. Whatever you see, that would be your max allowed voltage for a CB load.
I know the CB MT workload isn't the heaviest out there, like synthetic AVX P95, but it's very close to what a CPU can encounter in daily usage as a "heavy" workload.

Hey, it's your CPU. You can degrade it all you want.

VIDs, as we all know (apparently), are requests. That should give you an idea of what the CPU cores would like for voltage at that specific speed, that specific workload, and that specific temperature, without damaging themselves in the long run.
The effective VID will always be close to what the cores are actually getting (SVI2 TFN).
 
You're right, Scalar does seem to push past FIT's healthy limits, and I'm certainly not going to use more than a 2-3x value in my 24/7 setup, nor the +400 boost override, since it requires heat transfer speeds my cooling setup is not capable of. But I tend to disagree with
what the FIT truly allows
since, according to The Stilt, if we exclude Scalar from the equation, FIT limits are still respected even with PPT/EDC way off. And don't forget that the actual voltage applied depends on local temperature sensors (I don't know exactly how, but the pattern can be seen), so what you saw on the first screen is an edge case, for science. The rightmost panel in the screenshot below is basically what my daily setup looks like.
[Attachment: cbr20.png]

But you missed my question regarding the per-core VIDs, namely why there is such a big delta between the maximum per-core VID requested and the actual voltage requested from the PWM (though Martin said this info is "blackboxed" :), anyway). I would also like to know about possible per-core AVFS regulation (see previous post).
In other words (and that's why I'm seeking the HWiNFO developer's assistance here): is it safe to assume that Vermeer uses per-core voltage regulation in all-core workloads, based on the different per-core VIDs reported in the sensor panel, or may the actual voltage applied be the same for all cores?
 
No individual core voltage supply can happen. There is always one voltage, and that's the Vcore SVI2 TFN.

That won't help much, but general info suggests that individual VIDs are requests based upon a given speed, temperature and probably current. Of course it's dynamic, and it changes as speed, temperature and current change.
 
No individual core voltage supply can happen. There is always one voltage
Just out of curiosity, what is your confidence based on? I mean, does this slide refer exclusively to mobile CPUs?

[Attached slide image]
 
...where power efficiency is crucial.
In order to implement such power regulation, it has to be on-die to avoid board VRM complexity. That adds extra on-die circuitry/regulators. I haven't come across any info on the subject for the desktop parts.
 
VID values are almost useless, as it's not just the cores that are taken into account when requesting the final VDDCR_CPU voltage, but other domains like the L3 as well. So it's probably often the L3 that decides the final voltage.
AFAIK, per-core voltage control is theoretically possible in Ryzen desktop CPUs (via dLDOs), but in the end this feature was not enabled (for some reason...).
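A toy sketch of that "other domains decide too" idea. This is not AMD's actual algorithm; the L3 request value below is hypothetical, chosen only to match the effective VID observed in the screenshot, and the per-core values are the min/max actually reported:

```python
# Toy illustration: if the shared VDDCR_CPU rail must satisfy every domain
# on it, the final request can't be lower than the hungriest domain's, so
# a high L3/fabric request could explain an effective VID well above any
# individual core's request.
per_core_vids = [1.239, 1.307]   # min/max per-core VIDs from snapshot mode
l3_vid = 1.413                   # HYPOTHETICAL L3-domain request

final_vid = max(per_core_vids + [l3_vid])
print(f"final VDDCR_CPU request: {final_vid:.3f} V")   # -> 1.413 V
```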
 
Thanks, I was curious what else dLDOs are for, other than dynamic voltage injection... :) (of course not counting single-core workloads, where it just works)
 