• felixwhynot@lemmy.world
    6 days ago

    It’s problematic imho because the “advice” is often incomplete, missing context, or just wrong. So you end up having to verify it yourself anyway. But if you don’t, you could end up acting on harmful advice.

    • frongt@lemmy.zip
      6 days ago

      Which, to be fair, is no different from a lawyer. They’re not perfect either.

      The difference is that a lawyer can be held responsible for malpractice. When a chatbot gives harmful advice, who is responsible?

      (Obviously, whoever is running it, but so far that hasn’t been established in court.)

    • AmbitiousProcess (they/them)@piefed.social
      5 days ago

      True, but that also depends on the circumstance.

      Again, a lot of people now use LLMs as their primary search engine. Google is an afterthought; ChatGPT is their source of choice. If they ask a simple question with legal or medical implications, one that tons of sources cover, and the LLM answers it just as accurately as those publications would, should whoever runs it be sued?

      I think it would be a lot better to allow people to sue when it provides false advice that causes some material harm, because at the end of the day, a lot of stuff can be considered “medical.”

      Maybe a trans person asks what gender affirming care is. Is that medical? I’d say it is. Should that not get discussed through an LLM if a person wants to ask it?

      I’m not saying I wholeheartedly oppose the idea of banning them from giving this type of advice. But I do have real concerns about how many people it would actually benefit, versus just cutting people off from information they might not bother to look up elsewhere, or worse, pushing them toward less reputable, more fringe sites with fewer safeguards and less accountability.

      • felixwhynot@lemmy.world
        5 days ago

        At the end of the day, I don’t trust Google, OpenAI, Anthropic, etc. to tell a person what they need to hear if the company can make $1 by telling them something else.