A talk from the hacker conference 39C3 on how AI-generated content was identified via a simple ISBN checksum calculator (in English).
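For context, the check itself is trivial to reproduce: ISBN-13 weights the digits 1,3,1,3,… and requires the total to be divisible by 10, while ISBN-10 weights them 10 down to 1 with a mod-11 check. Here is a minimal sketch in Python (illustrative only; this is not the tool from the talk, and the function names are made up):

```python
# Illustrative sketch of standard ISBN checksum validation -- not the tool
# from the talk; function names are invented for this example.

def is_valid_isbn13(isbn: str) -> bool:
    """ISBN-13: weight digits 1,3,1,3,...; total must be divisible by 10."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

def is_valid_isbn10(isbn: str) -> bool:
    """ISBN-10: weight digits 10 down to 1 ('X' counts as 10); total divisible by 11."""
    chars = [c for c in isbn.upper() if c.isdigit() or c == "X"]
    if len(chars) != 10 or "X" in chars[:-1]:
        return False
    values = [10 if c == "X" else int(c) for c in chars]
    return sum(v * w for v, w in zip(values, range(10, 0, -1))) % 11 == 0

print(is_valid_isbn13("978-3-16-148410-0"))  # True: well-known example ISBN
print(is_valid_isbn13("978-3-16-148410-1"))  # False: check digit is wrong
```

An invented ISBN passes only by chance (a random ISBN-13 check digit is right about one time in ten), which is why such a simple calculator works as a detector for hallucinated citations.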

  • Saapas@piefed.zip · 23 hours ago

    But it did create a working tool to identify AI contributions with fake ISBNs, didn’t it? Are we assuming the tool from OP wasn’t working?

    • Passerby6497@lemmy.world · 23 hours ago

      Ok, but how do you know that other Wikipedia edits aren’t AI generated but made by users who actually validated the output? And can you explain the difference between the users who validated the AI output before updating Wikipedia, and the researcher who validated his AI output before giving his talk?

      The point you’re missing is that both sides are using the same crappy tool, but you’re only seeing an example of one side doing it wrong and the other doing it right, and drawing a conclusion from that which can’t be validated. You appear to be saying AI is better at code than language because of the example in front of us, naively extrapolating that it works better at one task than the other, when the difference is how the user handled the output, not the output itself.

      • Saapas@piefed.zip · 23 hours ago

        I mean in this specific case it did make a working tool to identify hallucinated sources.

        And yes, it’s definitely better at code than at the sort of thinking Wikipedia edits require. I’m not drawing that from this one case but from studies of the correctness of AI output. Coding is sorta “easy” for a computer compared to freer-form language and information tasks.

        • Passerby6497@lemmy.world · 23 hours ago

          I mean in this specific case it did make a working tool to identify hallucinated sources.

          Right, but it also outputs really bad code, so pointing at this one example says nothing about the overall point. Both code and language output have to be validated and reworked, because an AI is just a stochastic parrot. The quality of the final product depends on the person using what the AI gives them.

          it’s definitely better at code than at the sort of thinking Wikipedia edits require

          Lol. Lmao even. Vibe coding is arguably worse than vibe talking; just ask the users of Tea. Or the devs of NX who had to stop an AI PR that would have run unsanitized shell commands. Or that idiot who fired everyone to vibe code his app, which was immediately hacked.

          If you think language requires more thinking than coding, I don’t know what to tell you, considering that code is applied thinking/logic. AI is good at speeding up blocking things out, but it absolutely sucks at producing a workable product. Just like with language processing.

          But I’m done with this conversation, considering you’ve ignored the point I’m making to point back at this example multiple times now. Good day.

          • Saapas@piefed.zip · 23 hours ago

            I mean it’s not really my take, it’s just what shows up when they’ve tested it on different tasks. It does a lot better at coding than at most other tasks.