RTX 3080/3090 Memory/VRM temperature sensor

Thanks @Martin, this is awesome.

A quick comment about the GPU rather than the tool, in case it helps anyone landing here like I did. I have a ROG Strix 3080 and it ran super cool until now: crypto mining all day at +1000MHz memory, fans in the 60% range, keeping the core at 65C for days.

Today VRAM started throttling, so something changed.

The new HWiNFO confirms my VRAM temps are high, so this awesome tool has been the key to diagnosing the problem.

I've done nothing to the card, software, or drivers. But I did shut down and knock around in the case installing things, then boot back up.

Maybe the (stock) cooling block is shifting or poorly coupled to the VRAM?

Something definitely changed for me, and I know others have had this temp problem from the beginning, so maybe it's a mechanical issue with the heatsink. Just speculating.

Thanks again.

P.S. I downclocked the VRAM by -500MHz in Afterburner, and HWiNFO reports it still hits 104C. So yeah, I think my heatsink came loose and the memory isn't getting cooled. It never did this before.
 
You're welcome, I'm glad this helped you and others.
Some users have noticed improperly placed thermal pads, so it might be good to check them.
 
Big thanks. FYI, 3080 on water and I'm looking at VRAM about 20 degrees C above GPU temperature at standard 3080 Vision clocks.
 
It is known that GDDR6X is super power hungry (3-4x) compared to GDDR6. If you check the power consumption of the VRAM alone on your RTX 30 series card, you'll be surprised!
I've seen some cards drawing 100+W for the VRAM, and that's at stock settings. All this energy is turned into heat.
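To put that in perspective, here's a quick back-of-envelope split of that heat, taking the ~100 W figure above at face value. The module counts are the stock configurations (10x 1 GB GDDR6X packages on a 3080, 24x 1 GB on a 3090, half of them on the back of the PCB); the per-package numbers are rough estimates, not measurements.

```python
# Rough per-package heat estimate from the ~100 W total VRAM figure above.
vram_power_w = 100                           # total VRAM power at stock, from the post above
modules = {"RTX 3080": 10, "RTX 3090": 24}   # stock GDDR6X package counts

for card, count in modules.items():
    print(f"{card}: ~{vram_power_w / count:.1f} W per GDDR6X package")

# RTX 3080: ~10.0 W per package
# RTX 3090: ~4.2 W per package (12 of the 24 sit on the back of the PCB,
# with only the backplate to move that heat away)
```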
 
Hi guys - I am pretty surprised that the Memory Junction Temp on my slightly OC'ed RTX 3090 (+250 mem clock and undervolted GPU @ 2100MHz) maxes out at around 66 degrees after long gaming sessions - pretty far from the throttling point at 110 degrees and from the temps reported by other users. I do have a full watercooled Alphacool block on the GPU side, but just a passively cooled aluminum backplate covering the backside memory chips.

So I'm curious whether 66 degrees really is the highest temp in reality? Thoughts?

Thanks for any insights.

Cheers

C
 
Mine doesn't update live and gets stuck on a reading for some reason. Maybe that is it? I'm going to have to uninstall and reinstall I think.
 
Well, if you're certain that the value you see is the current, updated one, then the explanation could be that now that the memory modules have a cooler separate from the GPU, they are not affected by GPU heat.
Is it completely passive? Just case airflow?
 
Thanks for the feedback - my reading doesn't look stuck, as it fluctuates nicely with usage.

Just tried 5 Superposition 8K runs; it went up to 72 degrees max and is now back to idling around 38 degrees.
 
Indeed - just case airflow on the backside memory modules; the frontside ones are actively cooled by water. The max GPU temp ever recorded in my setup is 55 degrees during prolonged heavy use, and the watercooling for sure keeps the overall PCB heat down. (500W total power draw for the graphics card, of which 89W is the DDR RAM power draw.)

Still, I reckon the memory temp should be higher, but maybe the watercooling on the GPU side of the PCB is enough to keep the modules on the opposite side cool...? Keen to hear feedback from other watercooled RTX 3090 users :D

Cheers

C
 
So I'm getting around 50 GPU and 65/68 RAM approx in a 24-degree room. My case is only small, so 2x240 rads and 1x140. A combined GPU and CPU stress test shows 430 watts going into the system on HWiNFO, resulting in a water delta of 10 or 11 degrees.

3600 at 4.2 all-core (110W) and 3080 with a mild overclock at standard Vision clocks (350W TDP).
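A side note on the water delta figure, since two different deltas often get mixed up: the inlet-to-outlet rise across the loop depends only on heat load and flow rate, while the coolant-above-ambient delta depends on radiator area and fan speed. A minimal sketch of the first interpretation, using the 430 W figure from above and an assumed (not measured) flow rate of 1 L/min:

```python
# Inlet-to-outlet coolant temperature rise for a given heat load and flow rate.
# The 430 W load is from the post above; the 1 L/min flow rate is an assumption.

def loop_delta_t(power_w: float, flow_l_per_min: float) -> float:
    """Water temperature rise across the loop, in degrees C."""
    c_p = 4186.0                            # specific heat of water, J/(kg*K)
    mass_flow_kg_s = flow_l_per_min / 60.0  # ~1 kg per litre of water
    return power_w / (mass_flow_kg_s * c_p)

print(f"~{loop_delta_t(430, 1.0):.1f} C across the loop at 1 L/min")  # ~6.2 C
```

If the 10-11 degrees quoted above is coolant over ambient instead, that's the radiator-side figure and can't be derived from power alone.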
 
OK, those seem like comparable temps - strange that others are approaching 100 degrees on GDDR6X :O

Thanks

C
 
The thing with memory modules is that they produce heat deep down, close to the PCB. A high-capacity GPU cooler like a serious custom water loop may "lift" a lot of heat from the GPU, and therefore from all surrounding components close to the GPU die, through the PCB. Having extra cooling on the backside modules also makes a difference.

The "weak" point of the FE 30-series cards is their small PCB, resulting in high component density because of the cooler design. This was a concern from day 1, but it wasn't confirmed until now, after the GDDR6X temp reveal. If any air-cooled AIB cards with their larger PCBs and lower density have the same issue, then for me their cooler is a bad design, focused only on GPU cooling. Power-hungry GDDR6X is not making things easy; it needs more than double the power of "plain" GDDR6.
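To make the "heat is produced deep, close to the PCB" point concrete, here is a toy 1-D steady-state model. Every power and resistance value in it is an assumed, purely illustrative number, not a measurement; the only point is that junction temperature is dominated by how well the pad couples the package to the cooler, which is why a shifted or badly seated pad (as speculated earlier in the thread) shows up so dramatically in the readings.

```python
# Toy 1-D model: junction temp = cooler temp + per-package power * thermal
# resistance from die to cooler. All numbers below are assumed and illustrative.

def junction_temp_c(cooler_c: float, package_w: float, r_package: float, r_pad: float) -> float:
    """Steady-state junction temperature of one memory package, degrees C."""
    return cooler_c + package_w * (r_package + r_pad)

cooler_c = 45.0    # assumed cooler baseplate temperature
package_w = 8.0    # assumed per-package dissipation under load

for label, r_pad in [("good pad contact", 2.0), ("poor/shifted pad", 8.0)]:
    t_j = junction_temp_c(cooler_c, package_w, r_package=1.0, r_pad=r_pad)
    print(f"{label}: ~{t_j:.0f} C")   # ~69 C vs ~117 C with these made-up values
```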
 
@Martin

Thank you for this great update. I was just curious - and if this has already been asked to death then I apologize in advance - will you be releasing a Linux version of this memory junction temperature sensor as well anytime soon?

I couldn't find any information on nvidia-smi being able to do any readings on the memory side, and this would be really useful for checking temperatures while undervolting my 3090. Anyway, thanks in advance! I love Windows too, but I also enjoy Linux from time to time :D
 
I'm not planning a Linux version in the near future, and I have no idea whether nvidia-smi supports this either.
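For anyone who does want to poke at this from Linux in the meantime: a quick way to see what nvidia-smi itself exposes is to query its temperature fields. Whether a memory-temperature field is available at all on a given driver/GPU combination is something to verify locally with `nvidia-smi --help-query-gpu`; on GeForce cards it typically isn't populated, which matches the observation above. A minimal sketch, with `temperature.memory` treated as an optional field that may simply come back as N/A or be rejected:

```python
# Check which temperature readings nvidia-smi reports on this system.
# temperature.gpu is the standard core sensor; temperature.memory may be
# unsupported (N/A) on GeForce cards -- see `nvidia-smi --help-query-gpu`.
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,temperature.gpu,temperature.memory",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=False,
)
print(result.stdout.strip() or result.stderr.strip())
```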
 
Try to Reset Preferences in HWiNFO; it could be caused by some glitch in the actual sensor layout configuration.
I didn't see any "Reset Preferences" option, but I did change this and it now works and updates on the fly with a graph...

I went into: Settings -> Layout -> Restore Original Order

Just in case anyone else was curious.
 
@Martin I just checked the junction temperature for my 6900 XT and it reports 10C higher than AMD's driver software. I wonder if the temperature reported for Nvidia 30xx cards is also 10C higher than it actually is, since throttling begins right around 105C, which in the case of a -10C offset would point to the 95C maximum GDDR6X working temperature.
 