• athairmor@lemmy.world · 4 days ago

    See, the quickest way to get AI banned is for it to start telling the truth about those in power.

    • falseWhite@lemmy.world · 4 days ago

      They’ll just switch to Grok, which will encourage them to commit even more war crimes. Currently it’s based on Google’s Gemini.

  • brown567@sh.itjust.works · 5 days ago

    I vaguely remember a movie where the government makes an AI intended to defend the USA, and it starts killing off politicians because it sees them as the greatest threat to national security.

  • Warl0k3@lemmy.world · 5 days ago

    The most infuriating part is that they’re so bad at this, yet they’re still getting away with it. I mean, they’re just So. Dumb. And yet…

  • melsaskca@lemmy.ca · 5 days ago

    The Pentagon AI immediately notified the DOJ AI and Hegseth’s avatar was imprisoned for war crimes.

  • Xenny@lemmy.world · 5 days ago

    An LLM advisor that takes REAL CASES AND LAWS, NOT ONES IT MADE UP!!! and sorts through them to advise on a legal direction THAT CAN THEN BE VERIFIED BY LEGAL PROFESSIONALS WITH HUMAN EYES!!! might not be too bad an idea. But we’re really just remaking search engines, only worse.
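    For what it’s worth, a toy sketch of that retrieve-then-verify flow is below. Everything in it is a hypothetical stand-in (the two-case corpus, the keyword scorer, the review flag); a real system would sit on an actual case-law index.

    ```python
    # Toy sketch: retrieve real cases, draft advice from them, flag for human review.
    # The corpus and the overlap scorer are hypothetical stand-ins, not a real database.
    from dataclasses import dataclass

    @dataclass
    class Case:
        citation: str
        text: str

    CORPUS = [
        Case("Smith v. Jones, 123 F.3d 456", "sets the standard for summary judgment"),
        Case("Doe v. Roe, 456 U.S. 789", "due process requires notice and a hearing"),
    ]

    def retrieve(query: str, k: int = 2) -> list[Case]:
        """Rank stored cases by naive keyword overlap; a real system would use a search index."""
        terms = set(query.lower().split())
        return sorted(CORPUS, key=lambda c: len(terms & set(c.text.lower().split())), reverse=True)[:k]

    def draft_advice(query: str) -> dict:
        """Every line of the draft carries its citation, so a lawyer can check it by hand."""
        sources = retrieve(query)
        return {
            "draft": [f"Per {c.citation}: {c.text}" for c in sources],
            "needs_human_review": True,  # nothing goes out without legal sign-off
        }

    print(draft_advice("what is the standard for summary judgment"))
    ```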

    • Coriza@lemmy.world · 4 days ago

      You may already know this, but just to make it clear for other readers: it is impossible for an LLM to behave as described. What an LLM algorithm does is generate text: it does not search, it does not sort, it only makes things up. There is nothing that can be done about that, because an LLM is a specific type of algorithm, and that is what the program does. Sure, you can train it on good-quality data and only real cases and such, but it will still make things up by mixing all the training data together. The same mechanism that lets it “find” relationships in the data it was trained on is the one that generates nonsense.
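      A toy illustration of that point: generation is just repeated next-token prediction over patterns in the training text. The bigram model below is a deliberately crude stand-in, not how a transformer works internally, but it shows the same splicing effect.

      ```python
      # Crude bigram "language model": learn which word follows which,
      # then generate by repeatedly sampling a next token.
      import random
      from collections import defaultdict

      training_text = "the court held that the statute was valid and the court affirmed"

      follows = defaultdict(list)
      words = training_text.split()
      for prev, nxt in zip(words, words[1:]):
          follows[prev].append(nxt)

      def generate(start: str, length: int = 8) -> str:
          out = [start]
          for _ in range(length):
              options = follows.get(out[-1])
              if not options:
                  break
              out.append(random.choice(options))  # sample the next token
          return " ".join(out)

      # Can emit word sequences that never appeared in the training text:
      # the same mixing that finds patterns is the one that invents things.
      print(generate("the"))
      ```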

      • MiddleAgesModem@lemmy.world · 4 days ago

        Whole lot of unsupported assumptions and falsehoods here.

        A standalone model predicts tokens, but deployed LLM systems retrieve real documents, rank and filter results, and use search engines. Anyone who has used these things would know it’s not just “making stuff up”.

        It both searches and sorts.

        In short, you have no fucking idea what you’re talking about.
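        To make the distinction concrete, here is a minimal sketch of that retrieve/rank/generate loop. The `call_llm` function is a placeholder for wherever a hosted model API would go; the index and the scorer are toy stand-ins.

        ```python
        # Retrieval-augmented sketch: search first, then hand the model only
        # the retrieved documents. Index, scorer, and call_llm are stand-ins.

        def search(query: str, index: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
            """Rank/filter step: score documents by term overlap, keep the top k."""
            terms = set(query.lower().split())
            ranked = sorted(
                index.items(),
                key=lambda item: len(terms & set(item[1].lower().split())),
                reverse=True,
            )
            return ranked[:k]

        def call_llm(prompt: str) -> str:
            # Placeholder for a real model call; just echoes for demonstration.
            return f"[model answer grounded in]\n{prompt}"

        def answer(query: str, index: dict[str, str]) -> str:
            docs = search(query, index)
            context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in docs)
            # The model is told to answer only from the retrieved context.
            return call_llm(f"Answer using only these sources:\n{context}\n\nQ: {query}")

        index = {"doc1": "grok is an llm by xai", "doc2": "gemini is an llm by google", "doc3": "cats are mammals"}
        print(answer("which llm is by google", index))
        ```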

      • MiddleAgesModem@lemmy.world · 4 days ago

        So much better than that. Always amusing how much people will distort or ignore facts if it “feels right”.