• @photonic_sorcerer@lemmy.dbzer0.com · -17 points · 6 days ago (edited)

    I’m just a meat computer running fucked-up software written by the process of evolution. I honestly don’t know how sentient Grok or any modern AI system is and I’d wager you don’t either.

    • @Coldcell@sh.itjust.works · 32 points · 6 days ago

      How sentient? Like on a scale of zero to sentience? None. It is non-sentient: it is a promptable autocomplete that offers its best-predicted next sentences. Left to itself it does nothing; it has no motivations, intentions, “will”, or desire to survive/feed/duplicate. A houseplant has a higher sentience score.
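
      A minimal sketch of what “promptable autocomplete” means, using a toy bigram table (everything here is invented for illustration; real LLMs sample from learned token probabilities, not a hand-written dict):

      ```python
      # Toy "promptable autocomplete": a lookup table of likely next words.
      # The table is made up; a real LLM uses learned token probabilities.
      import random

      NEXT_WORD = {
          "the": ["cat", "dog"],
          "cat": ["sat", "ran"],
          "sat": ["down", "there"],
      }

      def complete(prompt: str, max_words: int = 5) -> str:
          words = prompt.lower().split()
          for _ in range(max_words):
              candidates = NEXT_WORD.get(words[-1])
              if not candidates:
                  break  # no prediction available: the system just stops
              words.append(random.choice(candidates))
          return " ".join(words)

      # The point: between calls there is no activity, no state, no goals.
      print(complete("the cat"))  # e.g. "the cat sat down"
      ```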

      • @photonic_sorcerer@lemmy.dbzer0.com · -27 points · 6 days ago

        An LLM is only one part of a complete AI agent. What exactly happens in a processor at inference time? What happens when you continuously prompt the system with stimuli?

        • @nef@slrpnk.net · 18 points · 6 days ago

          If you believe that AI is “conscious” while it’s processing prompts, and also believe that we shouldn’t kill machine life, then AI companies are committing genocide at an unprecedented scale.

          For example, each AI model would be equivalent to a person taught everything in the training data. Any time you want something from them, instead of asking directly, you make a clone of them, let it respond to the input, then murder it.
          That is how all generative AI works. Sounds pretty unethical to me.
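
          As a sketch of that clone-respond-discard cycle (illustrative Python; the function names are placeholders, not any real inference library):

          ```python
          # Illustrative sketch of stateless inference. All names are stand-ins;
          # no real model or inference API is being referenced.

          def load_frozen_weights(path: str) -> dict:
              # Placeholder for loading trained weights; they never change at inference.
              return {"trained": True}

          def generate(weights: dict, context: list[str]) -> str:
              # Placeholder for a forward pass over the context.
              return "reply to: " + context[-1]

          FROZEN_WEIGHTS = load_frozen_weights("model.bin")  # the "person", trained once

          def handle_request(prompt: str) -> str:
              context = [prompt]  # the "clone": fresh per-request state, same frozen weights
              reply = generate(FROZEN_WEIGHTS, context)
              return reply  # the context is discarded here; only the weights persist

          print(handle_request("hello"))
          print(handle_request("hello"))  # an identical fresh "clone" every time
          ```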

          And, by the way, we do know exactly what happens inside processors when they’re running, that’s how processors are designed. Running AI doesn’t magically change the laws of physics.

          • skulblaka · 9 points · 5 days ago

            People taught AI to speak like a middle manager and think this means the AI is sentient, instead of proving that middle managers aren’t.

          • @photonic_sorcerer@lemmy.dbzer0.com · -8 points · 6 days ago (edited)

            I’m not saying I believe they’re conscious, all I said was that I don’t know and neither do you.

            Of course we know what’s happening in processors. We know what’s happening in neuronal matter too. What we don’t know is how consciousness or sentience emerges from large networks of neurons.

            • Wren · 10 points · 5 days ago

              But they’re saying they do know. And they are correct.

              • @photonic_sorcerer@lemmy.dbzer0.com · -5 points · 5 days ago

                I know I’m the smartest man on earth. And I’m correct.

                See how crazy that sounds? Just because someone is confident about something doesn’t make it true.

                • @octopus_ink@slrpnk.net · 6 points · 5 days ago (edited)

                  Please apply that to this:

                  > all I said was that I don’t know and neither do you.

                  Because there is not any evidence whatsoever that there is consciousness associated with LLMs. We have ample evidence that consciousness is associated with many forms of biological life.

                  I’m not even aware of a scholarly theory suggesting there might be consciousness associated with LLMs. Now, I’m not an LLM expert, but neither are you (hurr durr), and so I think if you are going to suggest that maybe consciousness exists there, it should be based on something other than “hey man, you never know”, which is pretty much what it feels like. (Or you should be unsurprised when folks find that assertion unconvincing.)

                  • @photonic_sorcerer@lemmy.dbzer0.com · 0 points · 5 days ago

                    Honestly, I’m not surprised. I obviously didn’t phrase my argument in a compelling way.

                    I disagree that we don’t have evidence for consciousness in LLMs. They have been showing behavior previously attributed only to highly intelligent, sentient creatures, i.e. us. To me it seems very plausible that when you have a large network of neurons, be they artificial or biological, with specialized circuits for processing specific stimuli, some sort of sentience could emerge.

                    If you want academic research on this, you just have to look: researchers have been discussing the topic for decades. There isn’t a working theory of machine sentience simply because we don’t have one that works for natural systems either. But that obviously doesn’t rule it out. After all, why should sentience be constrained to squishy matter? In any case, I think we can all agree something very interesting is going on with LLMs.

                • Wren · 3 points · 4 days ago

                  I think you know exactly how empirically provable facts work. And I also think you’re a troll.

    • @archonet@lemy.lol · 14 points · 6 days ago

      By their very nature, they are not sentient. They are Markov chains for words. They do not have a sense of self or truth, they do not feel emotions, and they do not have wants or desires; they merely predict the next most likely word in a sequence, given the context. The only thing they can do is “make plausible sentences that can come after [the context]”.

      That’s all an LLM is. It doesn’t reason. I’m more than happy to entertain the notion of rights for a computer that actually has the ability to think and feel, but this ain’t it.
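
      For a sense of what “Markov chains for words” literally looks like, here’s a toy word-level chain (the corpus is made up, and a real LLM conditions on thousands of tokens rather than just the previous word):

      ```python
      # Toy word-level Markov chain: count next-word frequencies, then sample.
      # The corpus is invented; this is the order-1 version of the idea.
      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat and the cat ran".split()

      chain = defaultdict(list)
      for word, nxt in zip(corpus, corpus[1:]):
          chain[word].append(nxt)  # repeats encode frequency

      def generate(start: str, length: int = 6) -> str:
          out = [start]
          for _ in range(length):
              options = chain.get(out[-1])
              if not options:
                  break
              out.append(random.choice(options))  # depends only on the current word
          return " ".join(out)

      print(generate("the"))  # e.g. "the cat sat on the mat and"
      ```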

      • @FatCrab@lemmy.one · -1 point · 4 days ago

        Not that I agree they’re conscious, but this is an incorrect and overly simplistic definition of an LLM. They are probabilistic in nature, yeah, and they work on tokens, or fragments, of words. But it’s about as much of an oversimplification to say humans are just Markov chains that make plausible sentences that can come after [the context] as it is to say modern GPTs are.
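
        To illustrate the tokens point, here’s a toy greedy segmentation (the vocabulary and splits are invented; actual boundaries depend on each model’s trained tokenizer, e.g. BPE):

        ```python
        # Toy longest-match segmentation, the rough shape of how subword
        # tokenizers split words. The vocabulary here is invented.
        def toy_tokenize(word: str, vocab: set[str]) -> list[str]:
            tokens, i = [], 0
            while i < len(word):
                for j in range(len(word), i, -1):
                    # take the longest vocab match, falling back to one character
                    if word[i:j] in vocab or j == i + 1:
                        tokens.append(word[i:j])
                        i = j
                        break
            return tokens

        vocab = {"under", "stand", "able", "un"}
        print(toy_tokenize("understandable", vocab))  # ['under', 'stand', 'able']
        ```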

    • Wren · 8 points · 5 days ago

      I do know. It’s not sentient at all. But don’t get angry at me about this. You can put that all on science.