

I think it’s the first time it’s happened since I upgraded my hardware over a year ago. 64 gigs of RAM and I rarely use more than 30% of it.


The reset button not working while the power button still works is quite odd.
Yeah, makes me think it’s something at the hardware level.
Are the server and chargers close to each other? Can you reliably trigger it with the car charger?
No. The car charges every night. This is the first time this has happened.


I assume the ID should also reflect your hair color and height at birth?


I used to work for a consultancy that tried to bill themselves as experts in VR/AR. This is back in 2017 or so. We helped a client make a 3D tracking system with VR/AR applications, and this client let us kind of run with it.
Anyway, I was sort of head of this AR/VR thing, and we were always desperate for free advertising, so I somehow got pulled to provide my thoughts on the impact of VR/AR on the grocery store industry for an article in “The Grocer” or some other industry mag.
Leading up to the call, I was trying to think of what I’d say. My thoughts were on building out virtual grocery stores to test customer reactions before building them for real. Bring in some test subjects, see how they plan their route, how they react to different placements of goods. Track their eye movements to see if the new end-cap design is working. Time how long they spend in the store, etc. Are the aisles too narrow and claustrophobic? I got the idea from another client who was using VR to test out new detergent bottle concepts (apparently a one-off of a blow-molded bleach bottle is crazy expensive).
Well my consultancy had been purchased by a multinational conglomerate a year or so prior, so I got a phone call from some C-suite ass who wanted to brief me on what they wanted me to say to the magazine.
His idea was a service where you could have a store employee wear some kind of camera rig so the customer could sit at home in VR and pilot the employee around the store. This would essentially replace curbside pickup, but with the added benefit of “allowing the customer to pick which apple they want out of the bunch.”
I resolved to ignore that advice, but the whole magazine thing ended up falling through anyway. I quit within the year.


But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries. The backend security bug
I feel like “bug” is doing a looot of heavy lifting here.


I’m mostly trying to describe a feeling I don’t hear named very often


Thanks for the link. I was gonna ask if you were a writer, heh.
I agree. The tone of the ads this year felt almost like lampshading. Like if we acknowledge the problem, we’re wise to what the audience is feeling, but we’re not going to do a damn thing to address it. It’s just something that needs to be done to make the ad feel remotely relevant.
AI is scary, but don’t be afraid of our surveillance device because we acknowledged that AI is scary
AI will sell you ads. Anyway, you’re watching an ad for AI
Work sucks amirite? Why not let us unemploy you?
There’s a wealth gap. Spend money on our stuff.
And I’m not going to even link the He Gets Us ads.


Thanks for taking the time.
So I’m not using a CLI. I’ve got the intelanalytics/ipex-llm-inference-cpp-xpu image running and hosting LLMs to be used by a separate open-webui container. I originally set it up with Deepseek-R1:latest per the tutorial to get the results above. This was straight out of the box with no tweaks.
The interface offers some control settings (screenshot below). Is that what you’re talking about?




Well, not off to a great start.
To be clear, I think getting an LLM to run locally at all is super cool, but saying “go self-hosted” sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.


Any suggestions on how to get these to gguf format? I found a GitHub project that claims to convert, but wondering if there’s a more direct way.


Go self-hosted,
So yours and another comment I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server’s Intel A750 GPU with 8 GB of VRAM. Not exactly top-of-the-line, but not bad.
I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn’t even thought to try, and it worked.
But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.
8B is the highest parameter count I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?


That’s not even close to the worst of it.


Just missed Bandcamp Friday. Also, get u some flac.


Yeah, that happens fairly frequently. I don’t have an Amazon account, so I personally roll with the punches.
What’s really fun is when you have to return one of those items and they don’t know what to do.


Interesting. Didn’t know about the Google Books case. I agree that it applies here.


Try eBay. You’re much more likely to find a small business selling whatever widget you need.


I think it’s critically important to be very specific about what LLMs are “able to do” vs what they tend to do in practice.
The argument is that the initial training data is sufficiently altered and “transformed” so as not to infringe copyright. If the model is capable of reproducing the majority of a book unaltered, then we know that isn’t the case. Whether or not it’s easy to access is irrelevant. The fact that the people performing the study had to “jailbreak” the models to get past checks tells you that the models’ creators are well aware that the models can produce an un-transformed version of the copyrighted work.
From the end user’s perspective, if the model is sufficiently gated from distributing copyrighted works, it doesn’t matter what it’s inherently capable of. But the argument shouldn’t be “the model isn’t breaking the law”; it should be “we have a staff of people working around the clock to make sure the model doesn’t try to break the law.”


That study is six months old. The one I linked is from three weeks ago.


No it isn’t. Read.
In ECC memory?