

All the billionaires watched a few too many 2077 YouTube videos.
Why do you think they’re so specifically interested in space datacenters? And AR glasses? And AGI, and corpo states? Just to start.


since it’s needed to store training data.
Again, I don’t buy this. The training data isn’t actually that big, and full-scale training runs aren’t nearly frequent enough to justify it.
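Some back-of-envelope arithmetic, with illustrative numbers (15T tokens is in the ballpark of reported frontier-scale corpora; 4 bytes/token is a rough average for English text, neither figure is from any specific vendor):

```python
# Rough size of a frontier-scale text corpus.
# Assumed numbers, for illustration only:
tokens = 15e12        # ~15 trillion training tokens
bytes_per_token = 4   # ~4 bytes of raw text per token

corpus_tb = tokens * bytes_per_token / 1e12
print(f"~{corpus_tb:.0f} TB of raw text")  # ~60 TB
```

A few dozen terabytes fits on a handful of drives, nothing that explains fleet-scale flash purchases.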


Aside: WTF are they using SSDs for?
LLM inference in the cloud happens almost entirely in VRAM. Occasionally a stale K/V cache gets spilled to system RAM, but newer attention architectures should minimize even that. And large-scale training, contrary to popular belief, is a rare event that most data centers and businesses aren’t equipped for anyway.
…So what do they do with so much flash storage!? Is it literally just FOMO server buying?
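To put a number on the inference working set, here’s a sketch of the K/V cache size for a hypothetical 70B-class model with grouped-query attention. All parameters are illustrative assumptions, not any model’s exact specs:

```python
# Back-of-envelope K/V cache size (hypothetical 70B-class model).
layers = 80           # assumed transformer layer count
kv_heads = 8          # grouped-query attention K/V heads (assumed)
head_dim = 128        # per-head dimension (assumed)
bytes_per_elem = 2    # fp16

# Factor of 2 covers both keys and values.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
context = 128_000
cache_gb = kv_bytes_per_token * context / 1e9
print(f"{kv_bytes_per_token} B/token, ~{cache_gb:.0f} GB "
      f"for a {context}-token context")  # ~42 GB
```

Tens of gigabytes per long-context session: VRAM and RAM territory, nowhere near the capacity profile of a flash fleet.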


Playing devil’s advocate, I understand one point of pressure: Plex doesn’t want to be perceived as a “piracy app.”
See: Kodi. https://kodi.expert/kodi-news/mpaa-warns-increasing-kodi-abuse-poses-greater-video-piracy-risk/
To be blunt, that’s a huge chunk of their userbase. And they run the risk of being legally pounded to dust once that image takes hold.
So how do they avoid that? Add a bunch of other stuff, for plausible deniability. And it seems to have worked, as the anti-piracy gods haven’t singled them out like they have past software projects.
To be clear, I’m not excusing Plex. But I can sympathize.


The pathological need to find something to use LLMs for is so bizarre.
It’s like the opposite of classic ML: relatively tiny, special-purpose models trained for something critical, out of desperation, because it just can’t be done well conventionally.
But this:
AI-enhanced tab groups. Powered by a local AI model, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
Take out the word AI.
Enhanced tab groups. Powered by a local algorithm, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
If this feature took, say, a gigabyte of RAM and a bunch of CPU, it would be laughed out. But somehow it ships because it has the word AI in it? That makes no sense.
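For what it’s worth, a serviceable version of “related tab” grouping needs no model at all. A sketch, clustering by hostname (the tab URLs and the naming heuristic are made up for illustration):

```python
# "Tab grouping" with zero ML: cluster open tabs by hostname and
# derive a suggested group name from the domain. Purely illustrative.
from collections import defaultdict
from urllib.parse import urlparse

tabs = [
    "https://github.com/rust-lang/rust/issues/1",
    "https://github.com/rust-lang/cargo/pull/2",
    "https://docs.python.org/3/library/json.html",
    "https://docs.python.org/3/library/csv.html",
]

groups = defaultdict(list)
for url in tabs:
    host = urlparse(url).hostname
    # Crude group name: second-to-last label of the hostname.
    name = host.split(".")[-2].capitalize()
    groups[name].append(url)

for name, urls in groups.items():
    print(name, len(urls))  # Github 2 / Python 2
```

Runs in microseconds, no RAM footprint worth mentioning, and would ship without a press release.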
I am a massive local LLM advocate. I like “generative” ML, within reason and ethics. But this is just stupid.
Joke’s on you.
They don’t actually make any money. Not unless they’re a monopoly that’s captured regulators, anyway.