GPU Memory Allocated & Usage (possible) bug
#1
Hello.

When I start this PC (HWiNFO64 auto-starts with Windows), all the readings seem to be correct:

[Image: before_zpsbbuc3dvn.jpg]

But after using the GPUs (playing a game, running 3DMark, or any other application that puts an intensive load on GPU memory), it starts reporting things like this:

[Image: after_zps1s5miduy.jpg]

All other readings seem to be correct. And the GPU memory allocated readings don't go back to normal until the PC reboots (restarting HWiNFO64 doesn't seem to correct them).

This system is running Windows 10 Pro (1607).

Thanks in advance for having a look into this.
Reply
#2
Well, this is an interesting one. Since HWiNFO reads this value from NVIDIA's dedicated interface, it might be a bug there.
Please run the latest HWiNFO Beta build in Debug Mode, try to reproduce the scenario until you get such an invalid value, then close it and attach the Debug File produced. I'll look into it if there's anything I can do...
Reply
#3
(11-03-2016, 11:15 AM)Martin Wrote: Well, this is an interesting one. Since HWiNFO reads this value from NVIDIA's dedicated interface, it might be a bug there.
Please run the latest HWiNFO Beta build in Debug Mode, try to reproduce the scenario until you get such an invalid value, then close it and attach the Debug File produced. I'll look into it if there's anything I can do...

I've created the debug file, but it is too large to be uploaded to the forum. I also tried sending it to the contact e-mail on your page, but your server rejected the mail as well because of the attachment size.

I'll split the file into several parts using rar and send them to you separately in a few minutes.

EDIT: WinRAR reduced its size from 19 MB to 819 KB. Here it is. If anything has been corrupted by the compression, let me know and I'll upload it again.

Thanks in advance.


Attached Files
.rar   HWiNFO64.rar (Size: 818.72 KB / Downloads: 1)
Reply
#4
Compressing the file using ZIP/RAR/7zip should help. If that won't work, you can send it straight to my e-mail in the tool or web page.
Reply
#5
(11-09-2016, 11:49 AM)Martin Wrote: Compressing the file using ZIP/RAR/7zip should help. If that won't work, you can send it straight to my e-mail in the tool or web page.

Yes, compressing definitely helped. The file is a couple of posts up.

Thanks once again.
Reply
#6
Thanks for the data.
Well, this really looks like a bug in the NVIDIA drivers - they report more memory available than is really present. At the end of the dump you attached it says 17.1 GB free, while only 12 GB is present.
In the latest HWiNFO Beta build (3025), which I released a few hours ago, I added a workaround for this, so in such cases 0 should be reported. But in the end I believe this should be fixed by NVIDIA.
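HWiNFO's actual implementation isn't public, but the workaround described above amounts to a simple sanity check on the driver's reading. A minimal sketch (the function name and values are illustrative, not HWiNFO's code):

```python
GIB = 1024 ** 3  # bytes per GiB

def sanitize_free_vram(free_bytes: int, total_bytes: int) -> int:
    """Return the reported free VRAM, or 0 if the driver's value is implausible.

    A reading of more free memory than physically present (as in the
    17.1 GB free vs. 12 GB total case from the attached dump) is treated
    as invalid and reported as 0 rather than passed through.
    """
    if free_bytes > total_bytes:
        return 0
    return free_bytes

# Values from this thread: the driver claimed ~17.1 GB free on a 12 GB card.
print(sanitize_free_vram(int(17.1 * GIB), 12 * GIB))   # -> 0 (invalid reading)
print(sanitize_free_vram(4 * GIB, 12 * GIB) // GIB)    # -> 4 (plausible, kept)
```

The design choice here matches the post: rather than guessing a corrected value, an obviously impossible reading is zeroed out so it is visibly wrong instead of silently misleading.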
Reply
#7
(11-09-2016, 12:13 PM)Martin Wrote: Thanks for the data.
Well, this really looks like a bug in the NVIDIA drivers - they report more memory available than is really present. At the end of the dump you attached it says 17.1 GB free, while only 12 GB is present.
In the latest HWiNFO Beta build (3025), which I released a few hours ago, I added a workaround for this, so in such cases 0 should be reported. But in the end I believe this should be fixed by NVIDIA.

Thank you very much. I will contact them to let them know about this issue.
Reply
#8
I did more analysis, and it seems to me that this might be because of SLI. Both GPUs report the same amount of available memory, but only 12 GB total.
So another theory would be that they report the memory available for both GPUs combined, but the total memory for each of them individually.
When you created the Debug File, was it done while the GPUs were actively used, or after they went back to idle? Because if the GPUs go back to idle, you should get correct (low-usage) values. If that doesn't happen, then it's an NVIDIA bug.
Reply
#9
(11-09-2016, 12:27 PM)Martin Wrote: I did more analysis, and it seems to me that this might be because of SLI. Both GPUs report the same amount of available memory, but only 12 GB total.
So another theory would be that they report the memory available for both GPUs combined, but the total memory for each of them individually.
When you created the Debug File, was it done while the GPUs were actively used, or after they went back to idle? Because if the GPUs go back to idle, you should get correct (low-usage) values. If that doesn't happen, then it's an NVIDIA bug.

The GPUs were back at idle at the moment I created the debug file.

And I've checked that once the strange readings appear, they remain that way until the PC is turned off and on again. The GPUs going back to idle doesn't correct those readings at all once they have appeared.
Reply
#10
Well, then it must be a bug in NVIDIA drivers.
Reply
#11
Apparently this bug was fixed in NVIDIA ForceWare 375.86. Can you please check?
Reply

