Prometheus Adapter for HWiNFO (+ Grafana Dashboard)

Thank you for your kind words :) (and thanks to Martin for making HWiNFO in the first place; I had to build PromDapter because the alternatives providing Prometheus output were insufficient metric/sensor-wise).

I have a roadmap for improving PromDapter, but it has worked a bit too well for me (not many issues), so I haven't had a real need to touch it myself.

PromDapter is designed to be(come) extendable, and finishing that model is also on the roadmap; I can extend it to provide JSON directly, with some sane way to customize/extend it, if you want.

So let me know if there's something I can help you with :).
 
I would be thrilled to have built-in JSON output, though I realize coming up with a smart data structure is super important and no small amount of work.

Since I'm already working on mapping it for my own purposes, maybe I can share the JSON structure I come up with once I'm done and we (and others) can refine it. If nothing else it could give you a real-world sample of what I (and possibly others) might find useful. If I were more comfortable writing C#, I'd happily pitch in on the effort directly, but I'm afraid my skills are all front-end web oriented, as you might guess.
 
Yes, please share your needs and current/preferred structure. I speak fluent C#; it's trivial for me to add...
 
I'm senior enough to have my own "school of thought". I don't believe in delaying development while trying to find a "unified smart data structure". I believe in extensibility/flexibility to support multiple EXACT required structures.

So we'll proceed ASAP with the exact structure you need :).
 
That's odd... this line should catch them:
- '(?<Entity>Core) (?<CoreNo>\d+)'
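
(As a quick aside - a hypothetical JavaScript sketch, not PromDapter's actual code, of how the named capture groups in that pattern would split a sensor name into labels:)

JavaScript:
// Hypothetical demo: the named groups become the labels on the exported metric.
const match = "Core 0".match(/(?<Entity>Core) (?<CoreNo>\d+)/);
console.log(match.groups); // { Entity: 'Core', CoreNo: '0' }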

They should be visible in the http://localhost:10445/metrics view as something like the following (the ... means there is other stuff, but coreno and unit should look like that, among other things):

hwi_core_c{coreno="0",unit="°C",...} 98
hwi_core_c{coreno="1",unit="°C",...} 99
hwi_core_c{coreno="2",unit="°C",...} 99
hwi_core_c{coreno="3",unit="°C",...} 99

Can you double-check that they're not there?
I checked one more time. There are the ratios by core, the load, etc. The temperature by core is missing.
 
You can try temporarily adding a "catch-all" line to your .yaml file, as the last line (before the aggregation part), or as the first line in case the last one doesn't catch it:

- '(?<MetricName>.*)'

NOTE! It will mess up all the sensor names/categorization and other parts (placing it as the last line might limit the effect).
It should catch all the available sensors, as it's not filtering by name; rather it means "get me everything regardless of the name and don't try to structure the name at all".

If you try that, are they visible, and with what exact lines?
 
Finally had some time to poke at my project a little more. I'm still playing with what the best JSON data structure for my own purposes is, but so far I've tried two things.

The first is just a large array with all the sensor data in it (plus an extra "value" property). It's basic, but works well if you're happy using something akin to JavaScript's Array.filter() and Array.find() methods for traversal.
JavaScript:
[
  {
    "coreno": "0",
    "unit": "%",
    "sensorType": "SENSOR_TYPE_USAGE",
    "sensor": "Core 0 Usage",
    "source": "CPU [#0]: Intel Core i5-6600K",
    "value": "20.8"
  },
  ...
]
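
For example, with that array parsed into a hypothetical sensors variable, traversal could go something like this:

JavaScript:
// Hypothetical traversal of the flat array above (sensors = the parsed JSON).
const core0 = sensors.find(s => s.sensor === "Core 0 Usage");
console.log(core0.value); // "20.8"

// All usage-type sensors reported by the CPU:
const cpuUsage = sensors.filter(s =>
  s.source.startsWith("CPU") && s.sensorType === "SENSOR_TYPE_USAGE");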

The second sorts the data by two metrics: The source and the sensor. This is more visually organized, and lets you rely largely on standard object notation to get what you need since the keys match the HWiNFO GUI.
JavaScript:
{
  "CPU [#0]: Intel Core i5-6600K": {
    "Core 0 Usage": {
      "coreno": "0",
      "unit": "%",
      "sensorType": "SENSOR_TYPE_USAGE",
      "value": "37.7"
    },
    ...
  },
  "GPU [#0]: NVIDIA GeForce GTX 1060 6GB": {
    "GPU Core Load": {
      "unit": "%",
      "sensorType": "SENSOR_TYPE_USAGE",
      "value": "11"
    },
    ...
  },
  ...
}
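
And with the grouped structure (parsed into a hypothetical data variable), plain object notation mirrors the HWiNFO GUI labels:

JavaScript:
// Hypothetical lookup in the grouped structure above (data = the parsed JSON).
const gpuLoad = data["GPU [#0]: NVIDIA GeForce GTX 1060 6GB"]["GPU Core Load"];
console.log(gpuLoad.value); // "11"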

The only other question I've had since starting to work on this is whether it's possible to get the min/max/average values from a particular sensor. I didn't see an obvious way based on the regex syntax, but it's possible those values can't be surfaced easily with that method of sensor scraping.

Hope that helps and thanks for your continued work!
 
These are quite generic structures: an array, and one grouped by sensor. Should be pretty easy to implement - I'll check how the different value types best play along.

About min/max/average, I can check them out, but I personally don't think they belong at this layer in the context of Prometheus; of course you have a different use where you need the JSON, but I think the same applies.

They depend on HWiNFO startups/resets and are not consistent over time; whereas if you build them yourself from your gathered data, you can control the time window ("average/min/max of the last 7 days" and similar) in a consistent manner.
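
For example (a sketch using a metric name that appears later in this thread), Prometheus itself can compute such windows at query time with its *_over_time functions:

PromQL:
# Rolling aggregates over a 7-day window, computed at query time
avg_over_time(hwi_cpu_package_power_w[7d])
min_over_time(hwi_cpu_package_power_w[7d])
max_over_time(hwi_cpu_package_power_w[7d])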
 
Fair enough on the min/max/average values. I figured it was worth asking in case they were easy to get out of HWiNFO, since it's already providing all the other values anyway. It shouldn't be hard to handle that data myself!
 
I will still check them, though; if they're available, I'll pass them through - but if they are calculated on the fly (and that calculation would need to be replicated), then it's better to do it at the data-handling end of the pipe.

I've been busy elsewhere for a few days now; I will try to get the JSON done (and check those min/max/avg) today/tomorrow.
 
Closing in; I had to refactor some of the Prometheus-specific data structures into a more common form (I was naive initially to focus on Prometheus alone). The current version (not yet released) outputs something like this; I'll now revisit your suggestions and see if I can match them properly.

{
  "unit": "MHz",
  "sensor_type": "SENSOR_TYPE_CLOCK",
  "sensor": "Average Effective Clock",
  "source": "CPU [#0]: AMD Ryzen 9 3950X",
  "categories": {
    "aggregationType": "Average",
    "metricName": "Effective Clock"
  },
  "value": 188.3128217373367,
  "valueType": "Double",
  "metric": "hwi_effective_clock_mhz"
},
{
  "unit": "MHz",
  "sensor_type": "SENSOR_TYPE_CLOCK",
  "sensor": "Bus Clock",
  "source": "CPU [#0]: AMD Ryzen 9 3950X",
  "categories": {
    "entity": "Bus",
    "metricName": "Clock"
  },
  "value": 100.0,
  "valueType": "Double",
  "metric": "hwi_clock_mhz"
},
{
  "unit": "T",
  "sensor_type": "SENSOR_TYPE_OTHER",
  "sensor": "Command Rate",
  "source": "Memory Timings",
  "categories": {
    "entity": "Memory",
    "metricName": "Command Rate"
  },
  "value": 1.0,
  "valueType": "Double",
  "metric": "hwi_command_rate_t"
},
 
Ok, finally managed to release the new version.

v0.9.16

No other additions besides JSON output support, with some initial grouping support too (this was not so trivial; I had to alter it a bit from the initial suggestion).

The JSON can be tested with a few different options (NOTE: the option parameters are case-sensitive), for example the json?option=flattenMeta endpoint.

Please let me know if there are any issues with them.
 
I've been playing around with your JSON update a bit today and it's been very nice! I've transitioned from my own custom JSON parsing to simply traversing the json?option=flattenMeta output and it's been smooth sailing. I can see the grouped options being very useful under the right circumstances, but for now the simple flat version has been the easiest to work with in my case.

Thanks for taking the time to tackle my request. Once I have something worth showing I'll be sure to share a little about my project as thanks.
 
I'm happy to hear that. The refactoring took most of the time, so adjusting the output is easier now; and the flat structure seems to serve best, as the grouping can be done on the caller's side in a more flexible manner.
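
As a sketch of that caller-side grouping (hypothetical code, assuming the flat array is parsed into a sensors variable as in the earlier samples):

JavaScript:
// Rebuild a grouped shape (source -> sensor -> entry) from the flat array.
const bySource = sensors.reduce((acc, s) => {
  acc[s.source] = acc[s.source] || {};
  acc[s.source][s.sensor] = s;
  return acc;
}, {});
console.log(bySource["CPU [#0]: AMD Ryzen 9 3950X"]["Bus Clock"].value); // 100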

Please let me know if you find any issues or would prefer some adjustment to the JSON :).
 
This project and webpage look awesome! I tried to install it, but what exactly should work after running the installer? If I load http://localhost:10445/metrics/ I get all the values, so the readings seem to be fine. If I load http://localhost:10445/ it returns an empty white page. Should it be like this? How can I link it to a webpage? I read something about Grafana, but what exactly is required? Or should it show the page as in the screenshot attached to the initial post?
If you are (at the current stage) looking for more experienced people, that's ok as well ;)
 
Oh, you need the Prometheus database installed; that is what is "scraping" (= reading, in Prometheus terms) that metrics data... and then Grafana is the visualization system that provides the graphs.

So the answer to "what exactly is required?" is the following:

1. Prometheus Database
2. Grafana

They likely come bundled, but both can work without the other. There are various replacements for Prometheus and for Grafana; each provides a different piece of the whole solution.

Bottom line: PromDapter is a very small adapter that makes HWiNFO data Prometheus-compatible.
 
Thank you, that makes it clearer. So I guess port 10445 is only used to test the adapter's configuration, while the website should be served via port 3000 (or whatever Grafana port you use)?
Edit: Or maybe I have it wrong: should Grafana be installed on the client or the server side? After checking your screenshot again I think it should be running on the client side and be configured to connect to the Prometheus adapter, which must be installed on the server side. Is that right?
 
The setup goes like this:

1. The systems to be monitored need to run HWiNFO (with the sensor window opened at least once) and PromDapter - that is your 10445 port

So this is one or more systems that all provide the Prometheus-compatible data

2. Prometheus is installed on some server and is configured to scrape (= read) all the systems to be monitored, that is http://<each system's ip>:10445/metrics (see the scrape config sketch after this list)

Prometheus itself runs on its own port (the default is 9090)

3. Grafana is installed on some server with a connection to the Prometheus database (through whatever port Prometheus uses)

Grafana exposes port 3000; that is server side. In my case it happens to be Kubernetes port-forwarded to localhost:3000, so it's not actually running on localhost but in my Kubernetes cluster.


So the dashboard can be viewed in any web browser that can connect to Grafana on that port 3000 and log in to access the system.
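
For step 2, a minimal scrape-config sketch could look like the following (the job name and IPs are placeholders; /metrics is Prometheus' default path, so it doesn't need to be spelled out):

YAML:
# prometheus.yml - scrape one or more PromDapter systems
scrape_configs:
  - job_name: 'hwinfo'
    scrape_interval: 15s
    static_configs:
      - targets: ['192.168.1.10:10445', '192.168.1.11:10445']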
 
Took a few hours, but it finally works - although e.g. the combined CPU+GPU Power does not work automatically; I will have to check and adapt everything during the next days. Without any knowledge of Prometheus or Grafana it was a little difficult, but still possible thanks to your help regarding the communication structure. ;):) Currently the monitored PC is also hosting the Prometheus and Grafana servers, which I can log in to via my tablet.
 

(Attachment: dashboard screenshot)
Nice! The CPU + GPU Power graph just combines the metrics from the previous two graphs into the same graph; odd that it doesn't work. I checked the metrics in my own currently-used graph and it's just that combination.

CPU graph alone: hwi_cpu_package_power_w
GPU graph alone: hwi_gpu_power_w

Combined graph:
Metric A: hwi_cpu_package_power_w
Metric B: hwi_gpu_power_w

And then Grafana itself adds the graphs up as totals... the dots in your graph look exactly right, but for some reason that graph is not updating along the timeline; they're just the most recent value.

Below is the query that I have (it should be as-is in the provided dashboard JSON).

(Attachment: screenshot of the Grafana query)
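
For reference, in the dashboard JSON those two queries end up as two targets on the same panel, roughly like this (a sketch, not the exact exported JSON):

JSON:
"targets": [
  { "expr": "hwi_cpu_package_power_w", "refId": "A" },
  { "expr": "hwi_gpu_power_w", "refId": "B" }
]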
 