• 0 Posts
  • 15 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • I want to preface my response by saying that I appreciate the thought and care put into your comments, yours as well as the others’, even though I don’t agree with them.

    The difference between a human hallucination and an AI hallucination is pretty stark. A human’s hallucinations are false information perceived by one’s senses: seeing or hearing things that aren’t there. An AI hallucination is false information invented by the AI itself. It had good information in its training data but invents something that is misinformation at best and an outright lie at worst. A person who is experiencing hallucinations or a manic episode can lose their sense of self-awareness temporarily, but it returns with a normal mental state.

    On the topic of self-awareness, we have tests to determine it in animals, such as the ability to recognize oneself in a mirror. Only a few animals pass that test: some birds, apes, and mammals like orcas and elephants. Notably, very small children would not pass it, but they eventually grow into recognizing that their reflection is them and not another being.

    I think the test about the seahorse emoji went over your head. The point isn’t that the LLM can’t experience it; it’s that there is no seahorse emoji. The LLM knows there isn’t a seahorse emoji and can’t reproduce it, but it tries over and over again because its training data points to there being one when there isn’t. It fundamentally can’t learn, can’t self-reflect on its experiences. Even with an expanded context window, once it starts a lie it may admit that the information was false, but nine times out of ten, when called out on a hallucination, it will just generate another, slightly different lie. In my anecdotal experience at least, once an LLM starts lying, the conversation is no longer useful.

    You reference reasoning models, and they do a better job of avoiding hallucinations by breaking prompts down into smaller problems and allowing the LLM to “check its work” before revealing the response to the end user. That’s not the same as thinking, in my opinion; it’s just more complex prompting. It’s not a single intelligence pondering the prompt; it’s different parts of the model tackling the prompt in different ways before being piped to the full model for a generative reply. A different approach, but at the end of the day it’s just an unthinking pile of silicon and various metals running a computer program.
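
    To make the “more complex prompting” point concrete, here is a minimal sketch of that reason-then-verify loop. The generate() function below is a hypothetical stand-in for any LLM completion API, and the prompts are invented for illustration; this shows the general pattern, not how any particular vendor actually implements it.

    ```python
    # Minimal sketch of a "reason then verify" loop. generate() is a
    # hypothetical stand-in for any LLM completion API (an assumption,
    # not a real library call), and the prompts are invented examples.

    def generate(prompt: str) -> str:
        """Placeholder for a single LLM completion call."""
        raise NotImplementedError

    def answer_with_verification(question: str, max_rounds: int = 3) -> str:
        # Step 1: prompt the model to break the problem into pieces.
        draft = generate(f"Break this into sub-problems and solve each:\n{question}")
        for _ in range(max_rounds):
            # Step 2: prompt the model again to critique its own draft
            # before anything reaches the user. Still just prompting.
            critique = generate(
                f"Question: {question}\nDraft answer: {draft}\n"
                "List any errors in the draft, or reply exactly: OK"
            )
            if critique.strip() == "OK":
                return draft
            # Step 3: fold the critique back into yet another prompt.
            draft = generate(
                f"Question: {question}\nDraft answer: {draft}\n"
                f"Revise the draft to fix these issues:\n{critique}"
            )
        return draft
    ```

    Each step is an ordinary completion call; nothing inside the model changes between them.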

    I do like your analogy of the 7-year-old compared to the LLM. I find the main distinction to be that the 7-year-old will grow and learn from its experience; an LLM can’t. Its “experience”, through prompt history, can give it additional information to apply to the current prompt, but that’s not really learning so much as just more tokens to help it generate a specific response. LLMs react to prompts according to their programming; emergent and novel responses come from unexpected inputs, not from learning or otherwise deviating from that programming.
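
    As a rough illustration of that point, a chat loop amounts to something like the sketch below, where model() is a hypothetical stand-in for a frozen, already-trained LLM. All of its “memory” is just the conversation text re-sent with every turn.

    ```python
    # Minimal sketch of why prompt history isn't learning. model() is a
    # hypothetical stand-in for a fixed, already-trained LLM (an
    # assumption, not a real library call).

    def model(context: str) -> str:
        """Placeholder for a frozen LLM; its weights never change here."""
        raise NotImplementedError

    history: list[str] = []

    def chat(user_message: str) -> str:
        history.append(f"User: {user_message}")
        # The model's entire "memory" is this re-sent text. Nothing
        # inside the model updates; the same fixed weights just
        # process a longer prompt each turn.
        reply = model("\n".join(history))
        history.append(f"Assistant: {reply}")
        return reply
    ```

    Restart with an empty history and every trace of that “experience” is gone.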

    I apologize that I probably didn’t fully address or rebut everything in your post; it was just too good a post to succinctly address it all on a mobile app. Thanks for sharing your perspective.

  • Jonathan Ross should never have been put back on duty after his first incident. He fought in Iraq while serving in the Indiana National Guard and had a long, high-stress career with Border Patrol. He should have been recognized as needing PTSD therapy and a career change after his June incident. He didn’t have to be a murderer; he was set up to be one by his superiors at ICE.

    Not to absolve him of personal responsibility, because he is a murderer. But I have to think it was known that he could do this, and that he was pushed to do it.

  • Parental Controls are a marketing gimmick.

    Did you know that you can’t stop strangers on Roblox from sending your kid friend requests? And you can’t delete those requests from your child’s account; you can only block the sender entirely.

    On PlayStation, you can’t restrict communication features to friends only. If you turn them on so your kid can chat in-game with their cousin, then strangers can send them unsolicited voice calls.

    Children should not be on internet-connected devices unless you are willing to monitor their usage 100% of the time, or you are willing to hope and pray you’ve taught them enough about the dangers and risks (and even if you have, they aren’t developed enough to fully understand them anyway).