• lennybird@lemmy.world
    19 hours ago

How about instead you provide your prompt and its response. Then you and I shall have a discussion on whether that prompt was biased and you were hallucinating when writing it, or whether the LLM was indeed at fault — shall we?

At the end of the day, you still have not elucidated why — especially within the purview of my demonstration of its usage in conversation elsewhere and its success in a similar implementation — it cannot simply be used as a double-checker of sorts, since ultimately the human doctor would go, “well now, this is just absurd,” since after all, they are the expert to begin with — you following?

    So, naturally, if it’s a second set of LLM eyes to double-check one’s work, either the doctor will go, “Oh wow, yes, I definitely blundered when I ordered that and was confusing charting with another patient” or “Oh wow, the AI is completely off here and I will NOT take its advice to alter my charting!”

Somewhat ironically, I gather the impression that you have a particular prejudice against these emergent GPTs, and that it is in fact biasing your perception of their potential.

    EDIT: Ah, just noticed my tag for you. Say no more. Have a nice day.