running hwinfo with high-perf gpu [nvidia optimus] maxes out gpu clocks

hello!

first off, thanks for such a useful and free monitoring tool :)

i've already read the known issues, but i still have a question that isn't covered there.

i'm running hwinfo on the high-performance gpu (gt 650m in my case), either through the "launch using..." context menu or by assigning it directly in the nvidia control panel.

ok, it works, the nvidia sensors show up, but they show maximum clocks for every unit even though no other software is using the gpu! (1670 mhz shader, 835 mhz geometry and 900 mhz memory clock).

the situation is different if i launch msi afterburner first and then hwinfo on the integrated gpu. in that case the clocks displayed by hwinfo throttle down to minimum within a few seconds if the nvidia gpu is idle.

what's the reason for such behavior? i would like to have only hwinfo running and get access to the nvidia gpu sensors in my systray as soon as some demanding app is launched, and it would work if not for this strange clock max-out & hold. does hwinfo force the nvidia gpu to run at max clocks even when there's no other load on it? does it even display correct clocks in this case?
 
HWiNFO shouldn't force the GPU to run at full clocks; there must be a different reason for it.
Could you please try to run any GPU intensive task (game for example) first, and then HWiNFO to see how the clocks behave?
It would also be useful for me to analyze the HWiNFO Debug File from the situation with max clocks.
 
Martin, thanks for the quick reply!

i tried the following:
1) launched a 3d modeller on gt650m
2) started hwinfo (nvidia sensors detected)
3) while i performed various actions in the modeller (moving and rotating the scene, etc.), the clocks varied depending on load, then fell back to minimum when idle.
4) i closed the modeller, leaving hwinfo running
5) the displayed nvidia gpu clocks immediately hit the maximum, while the gpu core load sensor shows 0%.
6) if i open any other program using the nvidia gpu, the clocks normalize again, then rise back to maximum upon exit, and so on.

btw, the clocks i mentioned in my 1st post are not really the maximum; they reached maximum while i was running a gpu-intensive program: shader 1810 mhz, geometry 905, memory 900. anyway, 1670 mhz for the shader doesn't seem right with zero core load.

where do i find (or how do i generate) a debug file?
 
Thanks for the extensive report. I'll implement a change in HWiNFO which might improve things.
I will prepare a new build for you if you are willing to run a test. Are you using HWiNFO32 or HWiNFO64?
 
So I have updated the clock reporting for nVidia, but I'm not sure if this will fix it. Please try this build:
www.hwinfo.com/beta/hw64_403_1718.zip
and see if it changes anything in the clock report during GPU idle.
Also please attach (or send me) the HWiNFO Debug File from when the card is idle, so I can check in detail what's happening there. See this thread about how to create the Debug File: http://www.hwinfo.com/forum/Thread-IMPORTANT-Read-this-before-submitting-a-report
Just make sure to enable Debug Mode, then let the sensors run for a while when you think it reports invalid clock, close HWiNFO and send me the HWiNFO64.DBG file it produced for analysis.
 
Martin, i've tried the 403_1718 version.

the only difference i noticed is that when i run it on the nvidia gpu explicitly (via the "run using high-performance gpu" context menu), the nvidia sensors won't appear at all. they only show up if something is already using the gpu prior to starting hwinfo.

in every other aspect the behavior is the same: while some program is running on the nvidia gpu, the clocks vary, and they go back to 1670 mhz shader, 835 geometry and 900 memory again when there's no nvidia gpu load.

i've PMed you the debug file.
 
Thanks for the information.
I have checked the data and HWiNFO correctly interprets the clock values reported by the nVidia driver.
So if the debug file you sent me was sampled during the idle mode where the GPU was in a low power state, then the problem is the nVidia driver. It might either not properly put the GPU into idle mode or not properly report actual clock status.
An nVidia driver update might help to fix this...
You might also check what clocks another tool (GPU-Z for example) reports. Launch it the same way as HWiNFO and watch whether the clock reported in idle is the same, or launch it together with HWiNFO and check if the reported clocks are consistent...
 
yes, the same story with gpu-z 0.6.4...

strange. i can't find any other reports of this issue on the internet...

i checked the pci express link state power management setting in power options, to no avail.

in case it's relevant, my driver version is 296.67, the oem one from the asus site (i've got an asus n56vz notebook).

i had 304.79 beta prior to this, but was advised by an asus technician to roll back to the older "certified by asus" driver.
he said the gpu is more likely to get burnt out in a year or so because the driver from the nvidia site is not optimized for this specific asus nvidia gpu. opinions on this on the internet are polarized, so i don't know whom to believe; i'm hesitant to install new driver versions in the meantime...
 
Maybe try to check or post on nVidia forums, since the clock values reported come from their own driver (NVAPI).
 
Martin, a fact to consider:

i tried it several times and it can't be a coincidence: when i start hwinfo or cpu-z, the gpu temperature starts climbing steadily from the 40s into the 50s. there are no other programs utilizing the gpu at that moment, which means the gpu was running cool and at idle clocks before the monitoring started. then i stop monitoring, wait a few minutes for the gpu to cool down, and repeat - the temperature rises again...

well, at least i'm happy the gpu isn't constantly running hot.

it's just odd.
the nvidia forums are down for some security-hardening reasons, unfortunately...
 
HWiNFO doesn't put any load on the GPU; it just makes a few queries to the nVidia driver to gather basic information for the sensors. And as far as I know CPU-Z uses the same method. That interface is an official nVidia method...
So if these few queries cause the GPU to raise its clocks and get considerably warmer, then I guess it's a fault of the nVidia driver.
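Just for illustration, here is a minimal sketch (not HWiNFO's actual code) of the kind of clock query such a tool makes through the official interface (NVAPI, as mentioned above); it assumes the public NVAPI SDK headers and only reads the graphics and memory clock domains:

```c
/* Minimal sketch of reading the current GPU clocks through NVAPI.
 * Illustration only, not HWiNFO's actual code; assumes the public
 * NVAPI SDK (nvapi.h / nvapi.lib) is available. */
#include <stdio.h>
#include "nvapi.h"

int main(void)
{
    NvPhysicalGpuHandle gpus[NVAPI_MAX_PHYSICAL_GPUS];
    NvU32 gpuCount = 0;

    if (NvAPI_Initialize() != NVAPI_OK)
        return 1;
    if (NvAPI_EnumPhysicalGPUs(gpus, &gpuCount) != NVAPI_OK || gpuCount == 0)
        return 1;

    NV_GPU_CLOCK_FREQUENCIES freqs = { 0 };
    freqs.version = NV_GPU_CLOCK_FREQUENCIES_VER;

    /* Ask the driver for the currently reported clock frequencies (in kHz). */
    if (NvAPI_GPU_GetAllClockFrequencies(gpus[0], &freqs) == NVAPI_OK) {
        if (freqs.domain[NVAPI_GPU_PUBLIC_CLOCK_GRAPHICS].bIsPresent)
            printf("graphics: %u MHz\n",
                   (unsigned)(freqs.domain[NVAPI_GPU_PUBLIC_CLOCK_GRAPHICS].frequency / 1000));
        if (freqs.domain[NVAPI_GPU_PUBLIC_CLOCK_MEMORY].bIsPresent)
            printf("memory:   %u MHz\n",
                   (unsigned)(freqs.domain[NVAPI_GPU_PUBLIC_CLOCK_MEMORY].frequency / 1000));
    }

    NvAPI_Unload();
    return 0;
}
```

Since the values come straight from the driver, any tool reading clocks this way will show the same (possibly wrong) idle values.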

Oh, I forgot that the nVidia forums are down (they have been hacked).
 
You might try one more thing: increase the "Scan Interval" in HWiNFO / Sensors / Configure (set it to 5000 ms for example, or higher). This will make the queries to the nVidia driver less frequent. See if that reduces the GPU load or temperature...
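As a purely hypothetical illustration (sensor_loop and read_nvidia_clocks are made-up names, not HWiNFO's internals), a longer scan interval simply means the polling loop sleeps longer between driver queries:

```c
/* Hypothetical sensor loop: one NVAPI query per iteration, so a longer
 * scan interval directly lowers how often the driver is queried.
 * read_nvidia_clocks() stands in for the query shown in the earlier sketch. */
#include <windows.h>

void read_nvidia_clocks(void);   /* e.g. an NvAPI_GPU_GetAllClockFrequencies call */

void sensor_loop(DWORD scan_interval_ms)
{
    for (;;) {
        read_nvidia_clocks();    /* one driver query per iteration */
        Sleep(scan_interval_ms); /* ~30 queries/min at 2000 ms vs. ~12 at 5000 ms */
    }
}
```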
 