Nah, I live alone. Just me, two cats, and my robots. I can turn everything off if I want.
I pulled the rails off today, packed those up. You know, so I don’t slice my leg open walking by them.
I’ll plug one in this week and get started with it.
Ok cool. Ya, we closed our office, and I work from home now, so my only bench is my living room table. It’s gonna be an interesting moment when I power the first one up, but I’m glad to hear they will probably run.
We run 120V in our colos. Standard plugs. I haven’t plugged it in yet, but I don’t think the cable or voltage will be an issue.
The drives … well, I’ve got at least 50 drives here. (There were a few spares too.)
They’re all WD Reds, 4TB and above. Great for this application, but my homelab servers both have several TB of unused redundant storage, including SSDs on each.
So I don’t need em, but that many drives … hell, if I sold just the drives as refurbs on eBay, I think I could make like $2k. I do anticipate selling the two devices with the drives as a complete kit.
I hope I can sell them locally (Portland OR). These beasts would be a b!tch to ship.
Thanks! So you don’t think I’m gonna blow my breakers? Alright, we will see.
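Just to talk myself down before I flip the switch, here’s the napkin math I have in mind. All the wattage numbers are guesses until I put a meter on it:

```python
# Back-of-napkin breaker math -- every wattage here is a guess, not a measurement.
volts = 120            # standard US circuit
breaker_amps = 15      # typical residential breaker (some circuits are 20A)
safe_amps = breaker_amps * 0.8   # NEC 80% rule of thumb for continuous loads

drives = 24            # guessing at the bay count; adjust for the real chassis
est_watts = drives * 10 + 200    # ~10W per spinning HDD, ~200W for board/CPUs

amps = est_watts / volts
print(f"~{est_watts}W is roughly {amps:.1f}A vs {safe_amps:.0f}A safe on a {breaker_amps}A circuit")
```

Spin-up surge is the wildcard (drives pull way more current while spinning up), so I won’t treat that as gospel, but it suggests one box on its own circuit should be fine.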
“TrueNAS or ProxMox … triage the issues. … set the drive controllers to HBA mode or flash an HBA firmware to them.”
- Right. I’ve installed TrueNAS on em a couple times previously. They were running ZFS software RAID. So … maybe just use the hardware RAID controller instead? Honestly, I’ve not tried that yet.
- I’ve installed a couple different Supermicro firmware versions on them and got em up to date with the HTML5 (not Java) remote console. That did not fix the crashes. Supermicro’s driver download site is a bit weird; perhaps I missed something they need.
- All of my prior troubleshooting has been from 1,200 miles away. Yes, I’ll do my best to triage: spin up an OS, then check each drive and bay one by one (something like the sketch after this list).
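For that drive-by-drive pass, this rough sketch is roughly what I’m picturing. It assumes a Linux live environment with smartmontools installed, and the device names are a guess:

```python
#!/usr/bin/env python3
"""One-by-one drive pass -- a rough sketch, not tested on these boxes."""
import glob
import subprocess

# Whole disks only: /dev/sda, /dev/sdb, ... (skip partitions like /dev/sda1).
disks = sorted(d for d in glob.glob("/dev/sd*") if not d[-1].isdigit())

for disk in disks:
    # `smartctl -H` prints the drive's overall SMART health verdict.
    result = subprocess.run(["smartctl", "-H", disk],
                            capture_output=True, text=True)
    verdict = "PASSED" if "PASSED" in result.stdout else "CHECK ME"
    print(f"{disk}: {verdict}")
```

Anything that comes back “CHECK ME” gets pulled, and the bay gets a second look with a known-good drive.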
I’m gonna enjoy working with them, but I have a couple Dell Gen13 (Broadwell) servers in my lab already. My main host, running Proxmox, is a Dell R230 (8 vCPU, 64GB). I run up to 8 VMs there, and it’s really all I need.
I never run my R430 (80 vCPU, 180GB). No need for that much juice. I really enjoyed upgrading it to the max, and now I don’t use it. After I finish shopping out these new Supermicro monsters, I’m gonna be happy to sell em off to somebody who wants a big chassis with a bunch of disks.



Those were a couple really good vids. I’ve never been a storage specialist, but I do manage all the storage for a small MSP, so I’m not ignorant. Like, I know ZFS pretty darn well, and I apparently collect storage servers for fun.
That Wendell guy tho, he really knows his shit.
I don’t know that I got any final answers from him, but it left me with a lot to consider.
Honestly, a good chunk of what he had to say had me questioning my build with my Highpoint SSD7540 (PCIe 4.0 x16, 8x M.2 NVMe) card … on a completely different machine, a build I was quite satisfied with until now. (It’s in my gamer/server, my main box.)
I put a lot of research and performance testing into the Highpoint build. It’s an 8-slot card supporting Gen 4 NVMe in an (actually) 16-lane slot. I populated 4 bays, so each stick gets 4 lanes, which is great for Gen 4. (I figured some day, when Gen 4 NVMe is dirt cheap, I’ll fill the rest, and each stick will just get 2 lanes.) After some testing, I decided to use the hardware RAID controller on the card.

Considering what old Wendell had to say, I suspect it should perhaps be software RAID instead … still, that would mean relying on Windows to run the RAID, and I don’t trust Windows. And then there’s the fact that, after reviewing all the spec sheets, I’ve realized there’s a lot I don’t know about the card. But the Highpoint smokes, and I mostly just store video games there, so maybe bit-rot isn’t a big deal anyway.
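If the bit-rot worry ever gets to me, the lazy answer without ZFS underneath is a checksum sweep: hash everything once, re-run it later, and diff. A minimal sketch; the path and manifest name are invented for the example:

```python
#!/usr/bin/env python3
"""Dumb bit-rot sweep: baseline every file's hash, then diff on later runs."""
import hashlib
import json
from pathlib import Path

ROOT = Path("D:/games")         # hypothetical volume to watch
MANIFEST = Path("hashes.json")  # hypothetical baseline file

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read 1MB at a time
            h.update(chunk)
    return h.hexdigest()

current = {str(p): sha256(p) for p in ROOT.rglob("*") if p.is_file()}

if MANIFEST.exists():
    baseline = json.loads(MANIFEST.read_text())
    changed = [f for f, digest in current.items()
               if baseline.get(f) not in (None, digest)]
    print(f"{len(changed)} file(s) differ from the baseline")
else:
    MANIFEST.write_text(json.dumps(current))
    print(f"Baseline written for {len(current)} files")
```

Crude, since game updates would show up as “rot” too, but it would catch silent corruption on a volume with no checksumming filesystem under it.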
All very interesting stuff. Thanks.