

Good thing I decided against switching to it, though my main reason is that my weird book organisation scheme currently isn't feasible with anything but Calibre or manual organisation, as far as I know.


I use a MikroTik router, and while I do love the amount of power it gives me, I very quickly realized that I had jumped in at the deep end. Deeper than I can deal with, unfortunately.
I did get everything running after a week or so, but I absolutely had to fight the router to do so.
Sometimes less is more, I guess.
That was my exact setup as well, until I switched to a different router that supported both custom DNS entries and blocklists, making the Pi-hole redundant.
Not OP, but a lot of people probably use Pi-hole, which doesn't support wildcard local DNS records through its web interface for some inane reason.
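In case it helps anyone: pihole-FTL is built on dnsmasq, so the usual workaround is dropping in a custom dnsmasq config file with a wildcard `address=` line (Pi-hole v5 reads /etc/dnsmasq.d by default; I believe v6 needs that enabled in its settings first). The file name, domain and IP below are just placeholders:

```
# /etc/dnsmasq.d/02-wildcard.conf  (example name)
# answer example.lan and every *.example.lan with this IP
address=/example.lan/192.168.1.10
```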


I typically use EndeavourOS because I enjoy how well documented and organized the Arch wiki is.
I tried switching to Fedora on my laptop recently but ran into issues with software that was apparently only distributed through the AUR or as an AppImage (which I could have used, I know).
When I also had issues setting up the VPN to my home network again, I caved and restored the disk from a backup I had taken before attempting the switch. The VPN thing almost definitely wasn't Fedora's fault, since I remember running into the same issue on EndeavourOS, but after my fix from last time didn't work I was out of patience.
My servers run on either Debian or Ubuntu LTS though.


Why not skip ahead in time a little and call it farmarr?


I know you didn't mention video, but if you think you might want to host Jellyfin in the future, make sure your CPU supports hardware decoding for modern formats.
For example, my Lenovo mini PC with an i5-6500 supports H.265, but not H.265 10-bit or AV1, which makes playing those formats on some devices basically impossible without re-encoding the files.
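If you're on Linux and not sure what your iGPU can decode, vainfo (usually in a package called libva-utils) lists the supported VA-API profiles; something like the line below should show whether 10-bit HEVC and AV1 decode are there (no output means no support):

```
# list VA-API hardware decode profiles and filter for the modern formats
vainfo | grep -E 'HEVCMain10|AV1'
```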
I remember building something vaguely related in a university course on AI, back before ChatGPT was released and the whole LLM thing took off.
The user could enter a couple of movies (as long as they were present in the weird semantic database our professor told us to use), and we calculated a similarity matrix between them and every other movie in the database, based on their tags and on running the descriptions through a natural language processing pipeline.
The result was the user getting a couple of surprisingly accurate recommendations.
Considering we had to calculate this similarity score for every movie in the database, it was obviously not very efficient, but I wonder how it would stack up against current LLMs, both in terms of accuracy and energy efficiency.
One issue, if you want to call it that, is that our approach was deterministic: enter the same movies, get the same results. I don't think an LLM is as predictable.
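For anyone curious, the core idea fits in a few lines. Here's a rough toy sketch (made-up movie data, tags folded into the description text, and scikit-learn's TF-IDF standing in for whatever NLP pipeline we actually used back then):

```python
# Content-based recommender sketch: vectorize each movie's tags + description
# with TF-IDF, build a movie-to-movie cosine similarity matrix, then rank
# everything against the user's picks. Toy data, not the original pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

movies = {
    "Blade Runner": "neo-noir dystopia android detective questions what it means to be human",
    "Ghost in the Shell": "cyberpunk anime cyborg major hunts a hacker and questions her own identity",
    "Alien": "sci-fi horror crew of a space freighter stalked by a deadly creature",
    "Paddington": "family comedy a polite bear moves in with a London family",
}

titles = list(movies)
matrix = TfidfVectorizer(stop_words="english").fit_transform(movies.values())
similarity = cosine_similarity(matrix)  # full N x N similarity matrix

def recommend(liked, top_n=2):
    """Average the similarity rows of the liked movies and return the best matches."""
    scores = similarity[[titles.index(t) for t in liked]].mean(axis=0)
    candidates = [t for t in titles if t not in liked]
    return sorted(candidates, key=lambda t: scores[titles.index(t)], reverse=True)[:top_n]

print(recommend(["Blade Runner"]))  # deterministic: same input, same output
```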