Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

  • Zeth0s@lemmy.world · 2 years ago

    Hallucinations are common for humans as well. It’s just people who believe they know stuff they really don’t.

    We have alternative safeguards in place. It’s true, however, that the current generation of LLMs has its limitations.

    • alvvayson@lemmy.world · 2 years ago

      Not just common. If you look at kids, hallucinations come first in their development.

      Later, they learn to filter what is real and what is not real. And as adults, we have weird thoughts that we suppress so quickly that we hardly remember them.

      And those with less developed filters have more difficulty distinguishing fact from fiction.

      Generative AI is good at generating. What needs to be improved is the filtering aspect of AI.
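
      To make that generate-then-filter idea concrete, here’s a toy sketch in Python. Everything in it is made up for illustration (the hard-coded fact table and the generate_candidates/filter_candidates helpers stand in for a real model and a real verifier); it only shows the shape of the split.

      ```python
      # Toy illustration of separating generation from filtering.
      # All names and data below are hypothetical stand-ins, not a real LLM API.

      TRUSTED_FACTS = {"capital of France": "Paris"}  # stand-in for a grounded source

      def generate_candidates(question):
          """Stand-in generator: fluent but unreliable, mixes truth with hallucinations."""
          return ["Paris", "Lyon", "Marseille"]

      def filter_candidates(candidates):
          """Stand-in filter: keep only answers groundable in the trusted source."""
          return [c for c in candidates if c in TRUSTED_FACTS.values()]

      raw = generate_candidates("What is the capital of France?")
      print(filter_candidates(raw))  # ['Paris'] (the ungrounded guesses get dropped)
      ```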

      • nous@programming.dev · 2 years ago

        Hell, just look at various public personalities, especially those with extreme views. Most of what some of them say is “hallucinated”, far more so than what ChatGPT is doing.

    • Dark Arc@lemmy.world · 2 years ago

      Sure, but these things exist as fancy storytellers. They understand language patterns well enough to write convincing text, but they don’t understand what they’re saying at all.

      The metaphorical human equivalent would be having someone write a song in Spanish when they barely understand the language. You can get something that sure sounds convincing, sounds good even, but to someone who actually speaks Spanish it’s nonsense.

      • Zeth0s@lemmy.world · 2 years ago

        Calculators don’t understand maths, but they are good at it.

        LLMs speak many languages correctly. They don’t know the referents and they don’t understand concepts, but they know how to associate them correctly.

        What they write can be wrong sometimes, but it absolutely makes sense most of the time.

        • Dark Arc@lemmy.world · 2 years ago

          but it absolutely makes sense most of the time

          I’d contest that; it shouldn’t be taken for granted. I’ve tried several questions in these things, and rarely do I find an answer entirely satisfactory (though it normally sounds convincing and is grammatically correct).

          • Zeth0s@lemmy.world · 2 years ago

            This is the reply to your message from our common friend:

            I understand your perspective and appreciate the feedback. My primary goal is to provide accurate and grammatically correct information. I’m constantly evolving, and your input helps in improving the quality of responses. Thank you for sharing your experience. - GPT-4

            I’d say it does make sense

    • rambaroo@lemmy.world · 2 years ago

      Humans can recognize and account for their own hallucinations. LLMs can’t and never will.

      • Zeth0s@lemmy.world · 2 years ago

        They can’t… Most people strongly believe they know many things when they actually have no idea what they’re talking about. The best-known cases are flat earthers, QAnon believers, and anti-vaxxers.

        But all of us are absolutely convinced we know something until we find out we don’t.

        That’s why double-blind tests exist, why memories are not always trusted in trials, and why Twitter is such an awful place.