It’s been a while, let’s go! Any major fuckups lately or smooth sailing?
I had to change the local DNS setup yesterday. I finally installed my wife Linux Mint and wanted to set her up for Vaultwarden real quick, which turned into an hour-long debug session since apparently CNAME entries for hostnames don’t work the way I thought. It never came up this past year, as all my machines accepted them, but resolved refused to, so I eventually deleted the entries in the Pi-hole and created them as A records pointing to the VM with the reverse proxy, hoping I won’t need to change the IP anytime soon. It’s always DNS!
In other news, I think I’ve moved all my local Dockerized services to Forgejo + Komodo now, and applying updates by merging Renovate MRs still feels super smooth. I just updated my Calibre-Web Automated with a single click. The only exception is Home Assistant, where I have yet to find a good split between what to throw in a Docker volume and what to check into git and bind-mount.
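For anyone who wants to copy the Renovate part: the config is small. Something in this shape (a simplified sketch, not my exact file; the automerge rule is optional) is enough to get compose-file update MRs:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "enabledManagers": ["docker-compose"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "digest"],
      "automerge": true
    }
  ]
}
```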
Somehow, rewiring my drives and removing two cables has stopped all ZFS errors, and it’s running 200% quieter.
I finally figured out it was a bad stick of RAM in my server that had been causing random freezes, and not some stupid mistake on my part. Thankfully it’s DDR3, so I can keep both of my kidneys and still afford the replacement.
Thankfully it’s DDR3
It’s one of the benefits of having older equipment. I use these guys for RAM purchases: https://www.memorystock.com/
Got hit with this recently
https://github.com/jellyfin/jellyfin/issues/15148
Just restored an old backup. Everything is behind a VPN and is working, so I’ll give it a while and see if it gets sorted before resorting to swapping out the SQLite version for each update.
Ouchy!
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| CGNAT | Carrier-Grade NAT |
| DHCP | Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network |
| DNS | Domain Name Service/System |
| HTTP | Hypertext Transfer Protocol, the Web |
| IP | Internet Protocol |
| MQTT | Message Queue Telemetry Transport point-to-point networking |
| NAS | Network-Attached Storage |
| NAT | Network Address Translation |
| NFS | Network File System, a Unix-based file-sharing protocol known for performance and efficiency |
| NVR | Network Video Recorder (generally for CCTV) |
| Plex | Brand of media server package |
| RAID | Redundant Array of Independent Disks for mass storage |
| SATA | Serial AT Attachment interface for mass storage |
| SSD | Solid State Drive mass storage |
| VNC | Virtual Network Computing for remote desktop access |
| VPN | Virtual Private Network |
| VPS | Virtual Private Server (opposed to shared hosting) |
| nginx | Popular HTTP server |
[Thread #51 for this comm, first seen 1st Feb 2026, 10:01]
I’ve been thinking about infrastructure-as-code tools. Skimmed the very surface of OpenTofu and looked at the list of alternatives.

I’m in need of something that is both deployment automation and (implicit) documentation of the thing that I call “the zoo”. Namely:
- network definition
- machine definitions (VMs, containers) and their configuration
- inventory: keeping track of third party resources
Now I’m thinking about which tool would be the right one for the job while I’m still not 100% sure what the job is. I don’t like added complexity; it’s quite possible this could become a dead end for me if I spend more time wrangling the tool than I gain in the end.
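To make the “implicit documentation” part concrete, this is the kind of thing I imagine, sketched in OpenTofu’s HCL with made-up names and no provider wired up yet; the file itself becomes the inventory:

```hcl
# Inventory as data: even before any provider manages these machines,
# the file documents what exists and why.
locals {
  machines = {
    vm1 = { ip = "192.168.1.10", role = "reverse proxy" }
    vm2 = { ip = "192.168.1.11", role = "services host" }
  }
}

output "inventory" {
  value = local.machines
}
```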
PS: If you haven’t already, please take a look at your OpenSSL packages. As of this week there are two new CVEs rated High: https://openssl-library.org/news/vulnerabilities/index.html
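A quick way to check where you stand (Debian/Ubuntu-flavoured commands; adjust for your distro):

```sh
openssl version                                      # what you're running
apt list --upgradable 2>/dev/null | grep -i openssl  # is a fix pending?
```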
Recently obtained a free circa-2017 Mac mini, which I installed Linux on to create a Docker hosting environment. Currently have Jellyfin, SearXNG, and Forgejo.
My much older NAS serves as the NFS drive for the Jellyfin media (formerly I ran Plex directly on the NAS, but this was slow/unreliable as the NAS has only dual 1 GHz ARM cores).
One of the drives in the NAS died Thursday night, but no serious issue as it’s RAID 1. I wonder if the new load on it pushed it over the edge. (Also, I wonder if I could use the Mac mini’s SSD as a sort of cache in front of the NAS to reduce wear on it, if that would even help…)
Luckily I had some gift cards from recycling old tablets and phones, so I could get a replacement drive at minimal cost. I went with a cheap WD Blue drive instead of the 2.5x more expensive Seagate IronWolf drives I had used in the past. We will see how that fares over the next few years.
Upon replacing the drive yesterday, I found the one that failed had a 2017 manufacture date, so its life was 8 years (from when I initially populated the NAS). The other drive was replaced in 2021 (but it actually failed in 2020; I just left the NAS unused for a year at that time, so it had a life of 3 years). Some insight into the life span of the IronWolf drives.
Things I’d like to add soon:
- kiwix instance
- normalize my ebook/magazine collection
- set up something to download my YouTube subscriptions into Jellyfin’s media directory so I can avoid the YouTube app/website (see the sketch after this list)
- something for music to ditch that subscription
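For the YouTube one, I’m picturing a yt-dlp run on a timer, something like this untested sketch (paths and the channel-list file are placeholders):

```sh
# Grab recent uploads for each channel in channels.txt into Jellyfin's
# media directory, skipping anything already in the download archive.
yt-dlp \
  --batch-file ~/channels.txt \
  --download-archive /media/youtube/archive.txt \
  --playlist-end 10 \
  -o '/media/youtube/%(channel)s/%(title)s [%(id)s].%(ext)s'
```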
Finally killed my Discord account and moved my monitoring notifications to a self-hosted ntfy server. Works well.
Some of the things in my house were set up so long ago, and running so smoothly, I haven’t looked at them in years (other than auto updates). Now I’m afraid I’ve accidentally left some security hole open without realizing it.
For example, I set up certbot 10 years ago, and back then there was no DNS challenge, so I had to open my webserver on port 80 to renew… well, since everything was running over HTTPS/443, I decided to block port 80.
So I edited the systemd unit for certbot to temporarily open port 80 for the renewal and close it right after…
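From memory, the override looked something like this (the exact firewall commands depend on your setup; the iptables rules here are illustrative):

```ini
# systemctl edit certbot.service: open 80 for the renewal, close it after
[Service]
ExecStartPre=/usr/sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT
ExecStopPost=/usr/sbin/iptables -D INPUT -p tcp --dport 80 -j ACCEPT
```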
It was only 5 years later that I realized I’d made a mistake: port 80 had been open to the open internet that entire time.
Probably no harm since it’s a public server anyway… defense in depth is the key.
Wait, I don’t understand how changing your CNAME records to A records solved your problem. Did your wife’s computer simply not resolve the CNAME records?
So I have my VMs behind an OPNsense box with DHCP; the OPNsense also creates local DNS records like vm1.opnsense. The Pi-hole has conditional forwarding for .opnsense to the firewall, so I can resolve those domains everywhere in the LAN.
I had CNAME records in the Pi-hole for my actual domain (e.g. lemmy.nocturnal.garden) pointing to vm1.opnsense, so I can take a shortcut from inside the LAN and avoid going “outside” via the public IP.
Mint/resolved resolves the .opnsense domains when I look them up directly, but for a reason I didn’t fully understand, it does not work with a CNAME entry pointing at them. So I gave up on the CNAME approach and created A records for each service, pointing directly to the VM’s IP.
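For reference, the two shapes on the Pi-hole side (file locations are from the older Pi-hole releases I’m on; the IP is a placeholder):

```
# CNAME approach (the one resolved refused), dnsmasq syntax in
# /etc/dnsmasq.d/05-pihole-custom-cname.conf:
cname=lemmy.nocturnal.garden,vm1.opnsense

# A-record workaround, plain hosts format in /etc/pihole/custom.list:
192.168.1.10 lemmy.nocturnal.garden
```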
I’m curious as to why you decided to set up Pi-hole when you already have OPNsense, more so that your records are in the Pi-hole and not OPNsense.
I’ve had the Pi-hole since years before the OPNsense, but also, the OPNsense is not the main router; it just sits in front of my homelab. The WiFi etc. is a FritzBox, which also acts as the WAN for the OPNsense.
That way, everything else in the house still works if my homelab/OPNsense is down. The Pi-hole is on a Pi in the FritzBox LAN.
That sounds overly complicated; why not have it all on the OPNsense instead of 3 different devices?
Is your OPNsense unstable? Otherwise, regarding network availability, you are just introducing unnecessary failure points into the network.
The point of the OPNsense is that I can tinker with it without risking our home WiFi, which needs to stay up for my wife, for our MQTT devices/Home Assistant, etc.
I don’t introduce points of failure to our home network, which is the critical part. If something on the OPNsense misbehaves, it only impacts my lab stuff. The FritzBox + Pi-hole combination has proven pretty stable over the years, though I’m considering getting a second Pi-hole device for high availability.
Ah right, I thought you were doing it like this
Internet -> Fritzbox + Pihole -> Opnsense -> Home Network
It makes sense now :D
Yeah that would be a bit convoluted :D
At home, smooth sailing. At “work/uni”, migrating everything to Ceph, and it’s been a pain in the arse installing openSUSE with software RAID for some reason.
So much has been going on
I moved recently and had to change ISPs. I went from 2 Gbps symmetrical fiber to 90/3 Mbps satellite behind CGNAT.
Fastest place to get the WAN cable into the house was through the attic and into my guest room / office. But that caused some serious heat and noise issues.
Ran some structural Cat6, installed a new electrical outlet, put in some keystone jacks, wired a new patch panel, then moved the rack to the basement.
Bought and installed a UPS which has already saved me twice in a month.
Up speeds were too slow and the high latency to the satellite constellation was causing issues, so I spun up a small VPS. But that means I have to sync content back to my local.
I’ve been wrestling with `rsync` for over a month… fiddling with flags to get the best results. I think I finally settled on a config yesterday, and the service and timer are working well.
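These aren’t my literal flags, but the general shape is roughly this (host and paths anonymized):

```sh
# Pull new content down from the VPS; keep partial transfers so
# interrupted files resume, and cap bandwidth below the satellite link.
rsync -avz --partial --bwlimit=2m vps:/srv/media/ /srv/media/
```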
CGNAT is messing with remote access, so I set up Cloudflare Tunnels, but the tunneling is not well suited for streaming; I was only getting ~100 Kbps on remote connections. Ran some `iperf3` testing over Tailscale and it was slightly better.

My preferred audiobook app `Prologue` released a major update to v4.0 which broke Plex libraries on launch, so I had to quickly pivot to `Audiobookshelf`. To achieve remote streaming and access for Prologue, I had to explain the Tailscale setup and create new user accounts. Only halfway through my user base; not looking forward to explaining it to my parents.
Finally, I’m trying to set up `claude` to run on my server rather than my locked-down enterprise laptop. That’ll allow more tooling access, like git, rather than before, when I was spending a lot of time downloading and uploading files manually. I need to figure out how to keep my session open. I’ll probably run `tmux` inside a Docker container, then run `claude` inside the tmux window. Hopefully that works.

Oh, I also want to look into using a Tailscale exit node to use a Proton VPN WireGuard route so I don’t have to switch between two separate VPNs.
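The keep-the-session-open part is probably just tmux doing what tmux does; a sketch of the plan (not tested yet):

```sh
# Start a detached session on the server and launch claude inside it;
# it survives SSH disconnects and can be reattached from anywhere.
tmux new-session -d -s claude
tmux send-keys -t claude 'claude' C-m

# Later, from a fresh SSH session:
tmux attach -t claude
```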
I also want to look into the exit node stuff.
As someone chronically behind CGNAT, you have my condolences
Currently dealing with extraordinarily slow network interface speeds on my NAS. Did a quick IO test with dd, and the results were great. I’d troubleshot this before to no avail; that time, letting the device power cycle brought network speeds back. No dice this time, so I’m just replacing most of the hardware aside from the drive pool, since I’d planned to anyway. Will troubleshoot my router’s network card as well, for sanity’s sake.
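For anyone doing similar triage, the checks I mean look roughly like this (paths and hostnames are placeholders), to separate disk from NIC:

```sh
# Raw sequential write on the pool, bypassing the page cache:
dd if=/dev/zero of=/mnt/pool/ddtest bs=1M count=4096 oflag=direct status=progress

# Raw NIC throughput with iperf3; -R flips the direction:
iperf3 -s               # on the NAS
iperf3 -c nas.lan       # from another machine
iperf3 -c nas.lan -R
```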
I got zfs-zed working again after hours spent on vanishing notifications; they had worked fine until a kernel update replaced a config file.
Turns out I missed a $ in a bash function call.
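For anyone who hasn’t been bitten by this one, a contrived example of the same mistake:

```sh
#!/usr/bin/env bash
notify() { echo "sending: $1"; }

msg="pool degraded"
notify msg      # bug: sends the literal string "msg"
notify "$msg"   # fix: sends "pool degraded"
```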
I finally installed my wife
Man…technology has come a long way.
Nothing here to write home about. A couple of minor tweaks to the network, and blocking even more unnecessary traffic. I’ve been on a mission to reduce costs in consumables such as electricity. I have a cron job that shuts everything down at a certain time in the evening, and am working on a WOL routine, fired by a cron from my standalone pfSense box to the server, to crank it back up in the morning just before I get up. It seemed to be the lowest-hanging fruit, so I have it on priority.

It just didn’t make sense to run the server idle for 10-12 hours. I don’t have any midnight mass downloads of Linux ISOs, nor do I make services available to other users, so it seemed a good place to start. I guess, by purists’ standards, it’s not a server anymore but an intermittent service, but it seems to be working for me. Will check consumption totals at the end of the month.
Other than that, I haven’t added anything new to the lineup, and I am just enjoying the benefits.
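The wake-up leg is just a cron entry sending a magic packet; a sketch with a made-up MAC, where `wakeonlan` stands in for whatever WOL utility the pfSense box actually ships:

```sh
# 06:30 on weekdays: wake the server's NIC by MAC address.
30 6 * * 1-5 wakeonlan 00:11:22:33:44:55
```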
If you want to go all in, get one of those plugs that measures energy! It also lets you directly see the effects of turning stuff on/off. My last server went up 3 W when I started using the second network interface! Let drives go to sleep, play with C-states, etc.
I had a post a while back about what I was doing to cut costs.
- TLP: adjusts CPU frequency scaling, PCIe ASPM, SATA link power management
- Powertop: used to profile power consumption; also has a tune feature: `sudo powertop --auto-tune`
- cpufrequtils: used to manage the CPU governor directly
- logind.conf: can be used to put the whole server to sleep when idle
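Handy commands for seeing what each of those is actually doing (standard for each package, nothing custom):

```sh
sudo tlp-stat -s        # TLP: summary of currently applied settings
sudo powertop           # live per-device power/idle report
cpufreq-info            # cpufrequtils: current governor and frequency
grep -i idle /etc/systemd/logind.conf   # IdleAction / IdleActionSec
```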
After doing all of that, which does help during operational hours, I decided to save 10-12 hours of consumption by just shutting it down. The old ‘turn the light out if you’re not in the room’ concept. Right now I am manually booting the server, and it doesn’t take that long to resume operations. However, why not employ some automation and magic packets to fire it back up in the morning?
ETA: I do have a watt meter on the server.
Sounds good! Are you on SSD or HDD?
The OS lives on an SSD and I have two aux drives. One is an HDD, but it’s a Samba share for Navidrome, so it’s not like it’s spinning constantly. Everything gets a 3-2-1 backup.
ETA: Now that you mention it, I guess I could employ a park(?) for the HDD before shutting down.
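If the drive supports it, hdparm can do that (device path is a placeholder; most drives also park heads on a clean shutdown anyway):

```sh
sudo hdparm -Y /dev/sdX      # spin down / park immediately
sudo hdparm -S 120 /dev/sdX  # or auto-standby after 10 min idle (120 × 5 s)
```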
Moved all my Unraid ‘apps’ to Dockhand, and linked my Pangolin VPS with the Hawser agent. I had Dockge for a while on newer container deployments, but wanted something a bit more playful, and Dockhand is it.
I degoogled my Gmail last year to Infomaniak, which was OK, but moved to Fastmail last week, which I now love! Setting up the custom domain pulled in the site’s favicon for the Fastmail account header, which made me smile too much for such a simple thing. Think I’ll be on Fastmail for the future. (Background syncing with the new Bichon email archiver.)