smartctl can read SMART data from drives behind an Areca controller. Any chance HWiNFO will?
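
For reference, smartmontools addresses drives behind an Areca HBA with its -d areca,N device type (or -d areca,N/E on the SAS models, where E is the enclosure number). Below is a minimal sketch of polling those drives from a script; the /dev/arcmsr0 path and the enclosure/slot numbers are placeholders, not values from this system:

```
# Sketch: poll SMART health for drives behind an Areca controller via smartctl.
# Assumptions: smartmontools is installed and the controller is /dev/arcmsr0 on
# Windows (or /dev/sg<n> on Linux); enclosure/slot numbers are placeholders.
import subprocess

CONTROLLER = "/dev/arcmsr0"   # first Areca controller as seen by smartmontools

def areca_smart_health(enclosure: int, slot: int) -> str:
    """Return smartctl's health output for one drive behind the Areca HBA."""
    # SAS-capable Areca models use "-d areca,N/E" (N = drive#, E = enclosure#).
    device_type = f"areca,{slot}/{enclosure}"
    result = subprocess.run(
        ["smartctl", "-H", "-d", device_type, CONTROLLER],
        capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Example: check the first 8 slots of enclosure 1.
    for slot in range(1, 9):
        print(f"Enclosure 1, slot {slot}:")
        print(areca_smart_health(1, slot))
```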

That's great :)
Let me know if you see any issues. I'll probably add some more improvements to the code and might need a few more tests.
But now it's time for bed ;)
 
So this latest version no longer picks up the drives behind the Areca controller. The result is the same in or out of debug mode. This is what I get:

hwinfoareca-05.JPG


Debug file:

http://www.cstone.net/~dk/HWiNFO64-05.DBG
 
It does.

HWiNFO-09.JPG


However, it is picking up 3 'disks' that don't exist. They look to be SATA connectors on the motherboard with no drives attached to them.

Debug:

http://www.cstone.net/~dk/HWiNFO64-09.DBG

In the left tab, under Drives, would it be possible to show how and where each drive is connected? Something like:

drivelocationinfo.JPG


At the moment, in the detail pane, Drive Controller is showing as "Serial ATA 6Gb/s @ 6Gb/s" for all drives behind the Areca. Showing the actual controller, enclosure and slot would be very useful. If I have a SMART error, for example, it would be nice to know which enclosure and slot the drive is located in.
 
Thanks for so many tests.
Those 3 invalid drives seem to come from invalid responses to other standard queries; I'll have a look at filtering those out.
I have just found out how to decode the controller information, so I should be able to display detailed information like serial number, firmware versions, controller type, CPU/DRAM information, etc. But this will take some time...
Not sure yet about detailed device information, will have a look at that.

Now a new build: www.hwinfo.com/beta/hw64_535_2944.zip
I have reduced the number of scanned devices per enclosure to 32 and max enclosures to 8 to improve startup time, as I think these might be the controller's limits. Let me know if I'm wrong. Moreover, this build will give me more information about particular device properties in the DBG file, which might allow me to show more details. It should also filter out those invalid devices.
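
To illustrate why those caps matter, here is a hypothetical scan loop; it is not HWiNFO's actual code, and probe_slot() stands in for whatever pass-through query the controller answers. Every probe of an empty position typically costs a command timeout, so bounding the loops bounds the worst-case startup time.

```
# Hypothetical enclosure/slot scan, not HWiNFO's code: shows how capping the
# limits (8 enclosures x 32 devices here) bounds worst-case startup time when
# every probe of an empty position costs a command timeout.
from typing import Callable, Optional

MAX_ENCLOSURES = 8
MAX_DEVICES_PER_ENCLOSURE = 32

def scan(probe_slot: Callable[[int, int], Optional[dict]]) -> list[dict]:
    """probe_slot(enclosure, slot) returns device info, or None if the slot is empty."""
    found = []
    for enclosure in range(1, MAX_ENCLOSURES + 1):
        hits_in_enclosure = 0
        for slot in range(1, MAX_DEVICES_PER_ENCLOSURE + 1):
            info = probe_slot(enclosure, slot)
            if info is not None:
                found.append({"enclosure": enclosure, "slot": slot, **info})
                hits_in_enclosure += 1
        if hits_in_enclosure == 0:
            break  # assume no devices past the first empty enclosure

    return found

def demo_probe(enclosure: int, slot: int):
    # Demo only: pretend enclosure 1, slots 1-8 are populated.
    return {"model": "Example HDD"} if enclosure == 1 and slot <= 8 else None

print(len(scan(demo_probe)))  # -> 8
```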
 
Yes, it does filter out those invalid devices.

I believe the max number of supported devices is 36 per expander. See here:

http://www.avagotech.com/products/server-storage/sas-expanders/sas-2x36

I also ran the latest version on my 2nd server. The output looks good here as well:

HWiNFO-10.PNG


Going back to my primary server, here are a couple of screenshots from the controller's web interface.

Hardware info:

ArecaBramaHardwareInfo.JPG


SAS expander info:

ArecaBramaSASInfo.JPG


Drive location info:

The 8 SSD drives are directly connected to the 1882 controller via SFF-8087 to SATA fan-out cables. The other 24 drives are connected via the LSI SAS2 expander on the SuperMicro backplane.

ArecaBramaDriveInfo.JPG


Same info for my 2nd server as follows:

SAS expander info:

Note that this server consists of 3 chassis, all of which have an LSI SAS2 x36 expander in them. Chassis/expander 1 is connected via the internal SFF-8087 connector. Chassis 2 is connected via the external SFF-8088 connector. Chassis 3 is daisy-chained through chassis 2.

ArecaArgonSASInfo.JPG


Drive location info:

Note that in the device column, the labels are Slot 01 - Slot 24 for all 3 chassis. In my other server, the drives hanging off the SAS expander are labeled PHY#12 - PHY#35. This is due to that particular SAS2 backplane being a slightly older hardware revision. My friend with a pair of Areca 1880 controllers and a very similar setup with 4 SuperMicro 846 chassis is noticing the exact same behavior. He has been running these beta versions as well and is seeing the same improvements as I am.

ArecaArgonDriveInfo.JPG
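
A side note on those two labeling schemes: if the PHY numbers map one-to-one onto backplane bays, which is only my reading of the screenshots, normalizing both formats to a common bay index is straightforward. The offset below is an assumption derived from the PHY#12 - PHY#35 range.

```
# Sketch: normalize the two device labels seen on these backplanes
# ("Slot 01".."Slot 24" vs "PHY#12".."PHY#35") to a 1-based bay number.
# The PHY offset is an assumption based on the observed PHY#12-PHY#35 range.
import re

PHY_OFFSET = 11  # PHY#12 -> bay 1, PHY#35 -> bay 24 (assumed mapping)

def bay_number(label: str) -> int:
    m = re.fullmatch(r"Slot\s*(\d+)", label)
    if m:
        return int(m.group(1))
    m = re.fullmatch(r"PHY#(\d+)", label)
    if m:
        return int(m.group(1)) - PHY_OFFSET
    raise ValueError(f"unrecognized drive label: {label!r}")

assert bay_number("Slot 07") == 7
assert bay_number("PHY#12") == 1
assert bay_number("PHY#35") == 24
```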
 
Thanks for all the details. I think I should currently be able to display System Information similar to what you posted.
Not sure about the others yet, some information is missing in the data I have.
Could you please post a new DBG file using the latest Beta? That should give me more clues about drive PHY information and how to implement that in HWiNFO.
 
DBG file for server 1 (Brama):

http://www.cstone.net/~dk/HWiNFO64-11-Brama.DBG

DBG file for server 2 (Argon):

http://www.cstone.net/~dk/HWiNFO64-12-Argon.DBG

I have the drives configured to spin down if the array has not been accessed for 30 minutes. That was the case for the drives on Argon (backup server). The drives take a while to spin up (60 drives configured for staggered spin-up). I went ahead and accessed the array by opening a folder on the shared drive. I did this after HWiNFO started scanning, because I saw it had hung. I'm not sure whether HWiNFO will spin up the drives on its own when scanning for them. I guess I can test that and see what happens.
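
On the spin-up question, smartctl has a -n standby option that skips the query when a drive reports it is in standby, so a polling script can avoid waking drives; whether the power state is visible through the Areca pass-through presumably depends on the controller firmware. A rough sketch, reusing the placeholder device path from earlier:

```
# Sketch: poll SMART without waking spun-down drives, using smartctl's
# "-n standby" option (skip the query if the drive reports standby).
# Whether the power state is visible through the Areca pass-through is an
# open question here; the device path and slot numbers are placeholders.
import subprocess
from typing import Optional

CONTROLLER = "/dev/arcmsr0"

def poll_if_awake(enclosure: int, slot: int) -> Optional[str]:
    """Return SMART attributes if the drive is awake, None if the query was skipped."""
    result = subprocess.run(
        ["smartctl", "-A", "-n", "standby",
         "-d", f"areca,{slot}/{enclosure}", CONTROLLER],
        capture_output=True, text=True,
    )
    # When "-n standby" skips a sleeping drive, smartctl says so in its output
    # (and exits non-zero), leaving the drive asleep.
    if "STANDBY" in result.stdout.upper():
        return None
    return result.stdout

if __name__ == "__main__":
    out = poll_if_awake(1, 1)
    print(out if out is not None else "Drive in standby, not polled.")
```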
 
Thanks.
I'm not sure either what happens when the drives are down; I guess that depends on whether the controller firmware is able to provide the required information without spinning them up.
 
Ok. I will check it out after the drives spin down again.

Here's a screenshot showing the device info currently being reported for the 1882:

1882HWiNFO.JPG
 
So here is another build: www.hwinfo.com/beta/hw64_535_2945.zip
In this build:
+ Areca drives should now be placed under their respective Areca node
+ The Areca node will show additional controller information
+ All drive names under that controller should be prefixed with "[E:N:R]", where E=Enclosure#, N=Disk#, R=Raid Set#

Not sure if all will be OK, so please let me know.
 
Looking really good.

HWiNFO-20.JPG


There are a few minor issues however:

1. The drives are at the same 'level' as the controller, not under it.
2. There are about 80 RAID Array devices that, when expanded, contain a duplicate of all the drives behind the Areca controller.

I also checked under sensors, and I now see the E:N:R assignments there as well.

Getting really close!

DBG file:

http://www.cstone.net/~dk/HWiNFO64-20.DBG
 
OK, that's not how it was supposed to look ;)
So we have a new build: www.hwinfo.com/beta/hw64_535_2946.zip
Now:
+ There should be a node for each Areca Controller
+ Under that node, there should be new nodes for each RAID Set and they should list all drives belonging to that set
+ Changed the address in the drive name from a prefix to a suffix and removed the RAID Set#, as I think it is not needed. Also, this will be shown only in the tree and sensors
+ The full address will also be shown for each drive in the details view

I haven't yet found a way to read RAID Set details; if I do, I could add some more details like status, etc.

Please let me know how it works...
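
The layout described above amounts to grouping drives by controller and RAID Set, with the enclosure/disk address kept as a suffix on each name. A rough sketch of that grouping with invented example data; the exact suffix formatting is a guess:

```
# Sketch of the tree layout described above: Areca controller -> RAID Set
# nodes -> member drives, with an "(E:N)" enclosure/disk suffix on each name.
# The drive list below is invented example data, not from the actual servers.
from collections import defaultdict

drives = [
    # (model, enclosure, disk_no, raid_set)
    ("Example SSD 256GB", 1, 1, 0),
    ("Example SSD 256GB", 1, 2, 0),
    ("Example HDD 4TB",   2, 1, 1),
    ("Example HDD 4TB",   2, 2, 1),
]

tree: dict[int, list[str]] = defaultdict(list)
for model, enclosure, disk_no, raid_set in drives:
    tree[raid_set].append(f"{model} ({enclosure}:{disk_no})")

print("Areca ARC-1882")
for raid_set in sorted(tree):
    print(f"  RAID Set #{raid_set}")
    for name in tree[raid_set]:
        print(f"    {name}")
```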
 
Thanks Martin. I will try this out when I get home (not working from my home office today).

I'll see if I can get my friend with the 1880 controller to check out this new version.
 
Another question: I'm using Aquasuite and HWiNFO together. It appears that I have to keep the HWiNFO sensor window open in order for the sensor readings that I'm mapping over to Aquasuite to refresh in real time. Is that the expected behavior?
 