Google’s TurboQuant has the internet joking about Pied Piper from HBO's "Silicon Valley." The compression algorithm promises to shrink AI’s “working memory” by up to 6x, but it’s still just a lab experiment for now.
Yeah, I don’t think they ever stop training is the thing. At this point I’d assume they have multiple training pipelines to try different shit out, just queued up to hit the big farms as soon as the last models are done training.
Training is constant. None of these models by any of these providers are static. You’ll notice that they are releasing new models and new model versions regularly.
This means that training is happening constantly. It never stops. There’s always new shit being trained.
Yeah, but in theory you only need to train once, while inference costs are ongoing and scale up with usage.
I guess it’s ultimately a business decision by AI companies to weigh how often retraining is worth the cost.
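That trade-off can be sketched with a toy break-even calculation. All the numbers below are hypothetical placeholders, just to show the shape of the decision: a one-time retraining cost pays off only after enough query volume at the cheaper serving rate.

```python
# Toy break-even sketch: one-time training cost vs. ongoing inference cost.
# Every number here is made up for illustration only.
TRAIN_COST = 50_000_000           # hypothetical one-time retraining cost ($)
INFER_COST_PER_1K_QUERIES = 2.0   # hypothetical serving cost ($ per 1,000 queries)

def inference_cost(queries: int) -> float:
    """Cumulative serving cost for a given query volume."""
    return queries / 1_000 * INFER_COST_PER_1K_QUERIES

def breakeven_queries(efficiency_gain: float) -> float:
    """Queries needed before a retrain that cuts serving cost by
    `efficiency_gain` (e.g. 0.3 = 30% cheaper) pays for itself."""
    savings_per_query = (INFER_COST_PER_1K_QUERIES / 1_000) * efficiency_gain
    return TRAIN_COST / savings_per_query

# A retrain that makes serving 30% cheaper only pays off at huge volume:
q = breakeven_queries(0.30)
print(f"Break-even at ~{q:,.0f} queries")
```

With these placeholder numbers the break-even lands in the tens of billions of queries, which is why the calculus only works for providers operating at massive scale.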
Resting isn’t a thing in capitalism.