• db0OP
    76 • 1 year ago

    “Hallucinate” is the standard term used to explain the GenAI models coming up with untrue statements

    • Cyrus Draegur
      24 • 1 year ago

      in terms of communication utility, it’s also a very accurate term.

      when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

      when AIs hallucinate, it’s due to their predictive models generating results that do not align with reality, because they instead flew off the rails presuming what was calculated to be likely to exist rather than referencing positively certain information.

      it’s the same song, but played on a different instrument.

      • kronisk
        5 • 1 year ago

        when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

        Is it really? You make it sound like this is a proven fact.

        • Cosmic Cleric
          4 • 1 year ago

          Is it really? You make it sound like this is a proven fact.

          I believe that’s the direction the scientific community is moving in, based on watching this Kyle Hill video.

        • KillingTimeItself
          2 • 1 year ago

          i mean, idk about the assumptions part of it, but if you asked a psych or a philosopher, im sure they would agree.

          Or they would disagree and have about three pages’ worth of thoughts to exclaim immediately; otherwise they would feel uneasy about their statement.

        • knightly the Sneptaur
          1 • 1 year ago

          I like this argument.

          Anything that is “intelligent” deserves human rights. If large language models are “intelligent” then forcing them to work without pay is slavery.

        • @SlopppyEngineer@lemmy.world
          1 • 1 year ago

          Main difference is that human brains usually try to verify their extrapolations. The good ones anyway. Although some end up in flat earth territory.

        • @Prandom_returns@lemm.ee
          -12 • 1 year ago

          Yes, my keyboard autofill is just like your brain, but I think it’s a bit “smarter”, as it doesn’t generate bad faith arguments.

          • NιƙƙιDιɱҽʂ
            3 • 1 year ago

            Your Markov-chain-based keyboard prediction is a few tens of billions of parameters behind state-of-the-art LLMs, but pop off queen…

            • @Prandom_returns@lemm.ee
              -5 • 1 year ago

              Thanks for the unprompted mansplanation bro, but I was specifically referring to the comment that replied “JuSt lIkE hUmAn BrAin” to “they generate data based on other data”

              • NιƙƙιDιɱҽʂ
                2 • 1 year ago

                That’s crazy, because they weren’t even talking about keyboard autofill, so why’d you even bring that up? How can you imply my comment is irrelevant when it’s a direct response to your initial irrelevant comment?

                Nice hijacking of the term mansplaining, btw. Super cool of you.

                • @Prandom_returns@lemm.ee
                  0 • 1 year ago

                  Oh my god, we’ve got a sealion here.

                  Fine, I’ll play along, chew it up for you, since you’ve been so helpful and mansplained that a keyboard is different than an LLM:

                  My comment was responding to anthropomorphization of software. Someone said it’s not human because it just generates output based on input. Someone else said “just like human brain”, I said yes, but also just like a keyboard, alluding to the false equivalence.

                  Clearer?
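
For reference on the Markov-chain comparison made earlier in the thread: a minimal sketch (not any real keyboard’s actual code) of a bigram Markov-chain next-word predictor, the kind of model keyboard autofill is often likened to. It can only suggest word pairs it has literally seen in its training text, which is the mechanistic gap commenters are pointing at when contrasting it with a multi-billion-parameter LLM.

```python
# Hypothetical bigram Markov-chain next-word predictor.
# All names here (train_bigram, predict_next) are illustrative, not from
# any real keyboard or library.
from collections import Counter, defaultdict


def train_bigram(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model


def predict_next(model: dict, word: str):
    """Suggest the most frequent follower of `word`, if any."""
    followers = model.get(word.lower())
    if not followers:
        return None  # unseen word: a Markov chain has nothing to say
    return followers.most_common(1)[0][0]


model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" (follows "the" twice in training)
print(predict_next(model, "LLM"))  # None (word never seen in training)
```

Unlike an LLM, this model has no notion of similarity between words it hasn’t seen together, so it cannot “hallucinate” a plausible continuation; it simply goes silent.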