

IDK, a person could stop being a billionaire, continuing their miserable existence as a millionaire, while a Jew could not stop being a Jew.
Redemption is the key difference here.


Hallucination is not just a mistake, if I understand it correctly. LLMs make mistakes, and that is the primary reason I don't use them for my coding job.
About a year ago, ChatGPT made up a Python library, with a made-up API, to solve the particular problem I asked about. The last hallucination I can recall was it claiming that manual is a keyword in PostgreSQL, which it is not.
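For what it's worth, that particular claim is cheap to verify: PostgreSQL exposes its parser keywords through the system function pg_get_keywords(). A minimal sketch, assuming psycopg2 is installed and a local Postgres is running (the connection string here is hypothetical, adjust it for your setup):

```python
import psycopg2  # assumes psycopg2 and a reachable Postgres instance

# Hypothetical connection string; change dbname/user/host as needed.
conn = psycopg2.connect("dbname=postgres")
with conn.cursor() as cur:
    # pg_get_keywords() lists every keyword the PostgreSQL parser knows.
    cur.execute(
        "SELECT word FROM pg_get_keywords() WHERE word = %s",
        ("manual",),
    )
    print(cur.fetchone())  # None: 'manual' is not a PostgreSQL keyword
conn.close()
```

Running the same query with a real keyword like "select" returns a row, so an empty result is a reliable negative.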


Especially if you’re asking about something you’re not educated or experienced with
That’s the biggest problem for me. When I ask about something I’m well educated in, it produces either the right answer, a very opinionated point of view, or clear bullshit. When I use it for something I’m not educated in, I’m afraid I’ll receive bullshit, and I have no way of knowing whether what I’m holding is bullshit or not.


I don’t use LLMs often, but I haven’t had a single clear example of a hallucination in six months now. So I’m inclined to believe the claim that the recursive calls work.
designing with my kids,
Wtf
Ah yeah, the system has stopped itself on its own. Ask Ezhov and Yagoda how realistic that is.