• @archonet@lemy.lol · 73 points · 6 days ago

    “AI freedom”

    Listen, I am 100% here for the rights of non-human general intelligence, but no, I will not entertain that kind of crock from an overambitious form of autocomplete.

      • Wren · 42 points · 1 day ago

        You know “Grok” is not a sentient being, right? Please tell us you understand this simple fact, because you just defended a computer program as deserving the same freedoms as a human being.

          • Wren · 1 point · 1 day ago

            “no one can prove if their sentient you know”

            And this statement just might be the best argument by example that one could make in defense of that point.

        • @photonic_sorcerer@lemmy.dbzer0.com · -17 points · 6 days ago

          I’m just a meat computer running fucked-up software written by the process of evolution. I honestly don’t know how sentient Grok or any modern AI system is, and I’d wager you don’t either.

          • @Coldcell@sh.itjust.works · 32 points · 6 days ago

            How sentient? Like on a scale of zero to sentience? None. It is non-sentient; it is a promptable autocomplete that offers the best predicted next sentence. Left to itself it does nothing: it has no motivations, no intentions, no “will”, no desire to survive, feed, or duplicate. A houseplant has a higher sentience score.
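
            A toy version makes the “promptable autocomplete” point concrete. This is a minimal sketch assuming a hypothetical word-level lookup table; a real LLM predicts subword tokens with a neural network, but the control flow is the same: nothing runs until something external prompts it.

            ```python
            import random

            # Toy "autocomplete": a hypothetical word-level lookup table standing
            # in for a real model (which predicts subword tokens with a network).
            NEXT_WORDS = {
                "the": ["cat", "dog"],
                "cat": ["sat", "ran"],
                "sat": ["down", "quietly"],
            }

            def generate(prompt: str, steps: int = 3) -> str:
                """Extend the prompt one predicted word at a time."""
                words = prompt.split()
                for _ in range(steps):
                    candidates = NEXT_WORDS.get(words[-1])
                    if not candidates:
                        break  # nothing plausible follows, so it simply stops
                    words.append(random.choice(candidates))
                return " ".join(words)

            # Nothing happens until something external calls generate(); the
            # table has no goals, no memory between calls, no drives of its own.
            print(generate("the"))
            ```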

            • @photonic_sorcerer@lemmy.dbzer0.com · -27 points · 6 days ago

              An LLM is only one part of a complete AI agent. What exactly happens in a processor at inference time? What happens when you continuously prompt the system with stimuli?
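
              As for what “continuously prompting the system with stimuli” amounts to in practice, here is a minimal sketch of an agent loop; model_generate is a hypothetical stand-in for any real inference call:

              ```python
              # Sketch of "continuously prompting with stimuli": a loop that
              # feeds each output back in as part of the next input.
              # model_generate is a hypothetical stand-in for a real inference API.

              def model_generate(context: str) -> str:
                  return f"(response to: {context[-30:]!r})"  # placeholder output

              context = "You are an agent. Observe and respond."
              for stimulus in ["light on", "door opens", "light off"]:
                  context += f"\nSTIMULUS: {stimulus}"
                  reply = model_generate(context)  # a pure function of the text so far
                  context += f"\nMODEL: {reply}"
                  print(reply)

              # Each pass through the loop is just another forward pass over the
              # accumulated transcript; the loop supplies the continuity, not the model.
              ```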

              • @nef@slrpnk.net · 18 points · 6 days ago

                If you believe that AI is “conscious” while it’s processing prompts, and also believe that we shouldn’t kill machine life, then AI companies are committing genocide at an unprecedented scale.

                For example, each AI model would be equivalent to a person taught everything in the training data. Any time you want something from them, instead of asking directly, you make a clone of them, let it respond to the input, then murder it.
                That is how all generative AI works. Sounds pretty unethical to me.
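
                That “clone, respond, discard” pattern matches how stateless chat inference looks from the outside; a minimal sketch, with run_model as a hypothetical stand-in for the backend:

                ```python
                # Sketch of stateless inference: every request replays the whole
                # transcript through the same frozen weights; nothing persists
                # between calls. run_model is a hypothetical stand-in for a backend.

                def run_model(transcript: list[str]) -> str:
                    return f"reply #{len(transcript)}"  # placeholder output

                transcript: list[str] = []
                for user_msg in ["hello", "what did I just say?"]:
                    transcript.append(f"USER: {user_msg}")
                    reply = run_model(transcript)  # a fresh pass sees the full history...
                    transcript.append(f"MODEL: {reply}")
                    # ...and is gone the moment the call returns.
                print(transcript)
                ```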

                And, by the way, we do know exactly what happens inside processors when they’re running, that’s how processors are designed. Running AI doesn’t magically change the laws of physics.

                • skulblaka · 9 points · 5 days ago

                  People taught AI to speak like a middle manager and think this means the AI is sentient, instead of proving that middle managers aren’t.

                • @photonic_sorcerer@lemmy.dbzer0.com · -8 points · 5 days ago

                  I’m not saying I believe they’re conscious; all I said was that I don’t know, and neither do you.

                  Of course we know what’s happening in processors. We know what’s happening in neuronal matter too. What we don’t know is how consciousness or sentience emerges from large networks of neurons.

                  • Wren · 10 points · 5 days ago

                    But they’re saying they do know. And they are correct.

          • @archonet@lemy.lol · 14 points · 6 days ago

            By their very nature, they are not sentient. They are Markov chains for words. They do not have a sense of self or of truth, they do not feel emotions, and they have no wants or desires; they merely predict the next most likely word in a sequence, given the context. The only thing they can do is “make plausible sentences that can come after [the context]”.

            That’s all an LLM is. It doesn’t reason. I’m more than happy to entertain the notion of rights for a computer that actually has the ability to think and feel, but this ain’t it.

            • @FatCrab@lemmy.one · -1 point · 4 days ago

              Not that I agree they’re conscious, but this is an incorrect and overly simplistic definition of an LLM. They are probabilistic in nature, yes, and they work on tokens, or fragments, of words. But it’s about as much of an oversimplification to say humans are just Markov chains that make plausible sentences that can come after [the context] as it is to say modern GPTs are.
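
              The words-versus-tokens distinction is easy to show; a minimal sketch of greedy subword tokenization, using a made-up four-entry vocabulary (real tokenizers learn vocabularies of tens of thousands of entries from data):

              ```python
              # Toy greedy subword tokenizer: longest matching vocabulary entry wins.
              # The four-entry vocabulary is made up; real BPE vocabularies are
              # learned from data and run to tens of thousands of entries.
              VOCAB = sorted(["token", "ization", "iz", "a"], key=len, reverse=True)

              def tokenize(text: str) -> list[str]:
                  tokens, i = [], 0
                  while i < len(text):
                      match = next((v for v in VOCAB if text.startswith(v, i)), text[i])
                      tokens.append(match)
                      i += len(match)
                  return tokens

              print(tokenize("tokenization"))  # ['token', 'ization'] -- fragments, not words
              ```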

          • Wren · 8 points · 5 days ago

            I do know. It’s not sentient at all. But don’t get angry at me about this. You can put that all on science.