Looks so real!

  • Thorry@feddit.org · 2 months ago

    Ah but have you tried burning a few trillion dollars in front of the painting? That might make a difference!

  • LuigiMaoFrance@lemmy.ml · 2 months ago

    We don’t know how consciousness arises, and digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution. There are huge economic and ethical incentives to deny consciousness in non-humans. We do the same with animals to justify murdering them for our personal benefit.
    We cannot know who or what possesses consciousness. We struggle to even define it.

    • UnderpantsWeevil@lemmy.world · 2 months ago

      digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution

      No they don’t. Digital networks don’t act in any way like an electro-chemical meat wad programmed by DNA.

      Might as well call a helicopter a hummingbird and insist they could both lay eggs.

      We cannot know who or what possesses consciousness.

      That’s sophistry. You’re functionally asserting that we can’t tell the difference between someone who is alive and someone who is dead.

      • yermaw@sh.itjust.works · 2 months ago

        I don’t think we can currently prove that anyone other than ourselves is even conscious. As far as I know, I’m the only one. The people around me look and act and appear conscious, but I’ll never know.

  • Lightfire228@pawb.social · 2 months ago

    I suspect Turing Complete machines (all computers) are not capable of producing consciousness.

    If they were, then theoretically a game of Magic: The Gathering (or any similar physical system that can emulate a Turing machine) could experience consciousness.

  • qyron@sopuli.xyz · 2 months ago

    It’s achievable if enough alcohol is added to the subject looking at said painting. And with some exotic chemistry they may even start to taste or hear the colors.

  • MercuryGenisus@lemmy.world · 2 months ago

    Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:

    Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 “passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative”.

    The best way to differentiate computers from people is that we haven’t taught AI to be an asshole all the time. Maybe it’s a good thing they aren’t like us.

    • Sconrad122@lemmy.world · 2 months ago

      An alternative way to phrase it: we don’t train humans to be ego-satiating brown-nosers; we train them to be (often poor) judges of character. AI would be just as nice to David Duke as it is to you. Also, “they” anthropomorphizes LLM AI much more than it deserves; it’s not even a single identity, let alone a set of multiple identities. It is a bundle of hallucinations, loosely tied together by suggestions and patterns taken from stolen data.

  • mhague@lemmy.world · 2 months ago

    It’s like how most of you consume things that are bad and wrong. Hundreds of musicians that are really just a couple of dudes writing hits. Musicians that pay to have their music played on stations. Musicians that feed talent into humongous pipelines and churn out content. And it’s every industry, isn’t it?

    So much flexing over what conveyor belt you eat from.

    I’ve watched 30+ years of this slop. And now there’s AI. And now people who have very little soul, who put little effort into tuning their consumption, get to make a bunch of noise about the lack of humanity in content.

      • peopleproblems@lemmy.world · 2 months ago

        Agents have debated whether the new phenomenon constitutes a new designation. While some have reported the painting following them, the same agents will later report that nothing seems to occur. The agents who report a higher frequency of the painting following them also report a higher frequency of unexplained injury. The injuries can be attributed to cases of self-harm, leading scientists to believe these SCP agents were predisposed to mental illness that was not caught during new-agent screening.

  • Ex Nummis@lemmy.world · 2 months ago

    As long as we can’t even define sapience in biological life, where it resides and how it works, it’s pointless to try to apply those terms to AI. We don’t know how natural intelligence works, so using what little we know about it to define something completely different is counterproductive.

    • daniskarma@lemmy.dbzer0.com · 2 months ago

      We don’t know what causes gravity, or how it works, either. But you can measure it, define it, and even formulate laws that give a very precise approximation of what will happen when gravity is involved.

      I don’t think LLMs will create intelligence, but I don’t think we need to solve everything about human intelligence before having machine intelligence.

      • Perspectivist@feddit.uk · 2 months ago

        Though in the case of consciousness - the fact of there being something it is like to be - not only do we not know what causes it or how it works, but we have no way of measuring it either. There’s zero evidence for it in the entire universe outside of our own subjective experience of it.

  • nednobbins@lemmy.zip · 2 months ago

    I can define “LLM”, “a painting”, and “alive”. Those definitions don’t require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it’s alive.

    I’m not aware of any such definition of consciousness, nor am I aware of any universal test of consciousness. Without that definition, it’s like Ebert claiming that “Video games can never be art”.

    • khepri@lemmy.world · 2 months ago

      Absolutely everything requires assumptions. Even our most objective, “laws of the universe” type observations rely on sets of axioms or first principles that must simply be accepted as true-though-unprovable if we are going to get anywhere at all, even in math and the hard sciences, let alone philosophy or the social sciences.

      • nednobbins@lemmy.zip · 2 months ago

        Defining “consciousness” requires much more handwaving and many more assumptions than any of the other three. It requires so much that I claim it’s essentially an undefined term.

        With such a vague definition of what “consciousness” is, there’s no logical way to argue that an AI does or does not have it.

        • 2xar@lemmy.world · 2 months ago

          Your logic is critically flawed. By your logic you could argue that there is no “logical way to argue a human has consciousness”, because we don’t have a precise enough definition of consciousness. What you wrote is just “I’m 14 and this is deep” territory, not real logic.

          In reality, you CAN very easily decide whether AI is conscious or not, even if the exact limit of what you would call “consciousness” can be debated. You wanna know why? Because if you have a basic understanding of how AI/LLMs work, then you know that in every possible, conceivable aspect relevant to consciousness they sit somewhere between your home PC and a plankton. Nobody would call either of those conscious, by any definition. Therefore, no matter what vague definition you use, current AI/LLMs definitely do NOT have it. Not by a long shot. Maybe in a few decades they could get there. But current models are basically over-hyped thermostat control electronics.

    • arendjr@programming.dev · 2 months ago

      I think the reason we can’t define consciousness beyond intuitive or vague descriptions is because it exists outside the realm of physics and science altogether. This in itself makes some people very uncomfortable, because they don’t like thinking about or believing in things they cannot measure or control, but that doesn’t make it any less real.

      But yeah, given that an LLM is very much measurable and exists within the physical realm, it’s relatively easy to argue that such technology cannot achieve consciousness.

      • nednobbins@lemmy.zip · 2 months ago

        This definition of consciousness essentially says that humans have souls and machines don’t. It’s unsatisfying because it just kicks the definition question down the road.

        Saying that consciousness exists outside the realm of physics and science is a very strong statement. It claims that none of our normal analysis and measurement tools apply to it. That may be true, but if it is, how can anyone defend the claim that an AI does or does not have it?

        • arendjr@programming.dev · 2 months ago

          This definition of consciousness essentially says that humans have souls and machines don’t.

          It does, yes. Fwiw, I don’t think it’s necessarily exclusive to humans though, animals and nature may play a role too.

          It’s unsatisfying because it just kicks the definition question down the road.

          Sure, but I have an entire philosophy set up to answer the other questions further down the road too 😂 That may still sound unsatisfying, but feel free to follow along: https://philosophyofbalance.com/

          It claims that none of our normal analysis and measurement tools apply to it.

          I believe that to be true, yes.

          That may be true, but if it is, how can anyone defend the claim that an AI does or does not have it?

          In my view, machines and AI can never create consciousness, although it’s not ruled out they can become vessels for it. But the consciousness comes from outside the perspective of the machines.

          • nednobbins@lemmy.zip · 2 months ago

            I think this is likely an insurmountable point of difference.

            The problem is that once we eliminate measurability we can’t differentiate between reality and fantasy. We can imagine anything we want and believe in it.

            The Philosophy of Balance has “believe in the universal God” as its first core tenet. That makes it more like a religion than a philosophy.

            • arendjr@programming.dev · 2 months ago

              Yeah, I think I see where you’re coming from. It’s a fair point, and we need to be very careful not to lose sight of reality indeed.

              The idea of the Universal God is very tolerant towards “fantasy” insofar as it exists in the minds of people, yet it also prescribes aligning such belief with a scientific understanding. So the thing I’m trying to say is: believe what you want to believe, and as long as it’s a rational and tolerant belief, it’s fine. But it does explicitly recognise that there are limits to what science can do for us, so it offers the idea of the Universal God as kind of a North Star for those in search, without really prescribing what this Universal God must look like. I don’t see it as a religious god, but more as a path towards a belief in something beyond ourselves.

              In the book I also make an effort to describe how this relates to Buddhism, Taoism, and the Abrahamic religions, and attempt to show that they are all efforts to describe similar concepts; whether we call this Nature, Tao, or God doesn’t really matter in the end. As long as we don’t fall into nihilism and believe in something, I believe we can find common ground as a people.

              • nednobbins@lemmy.zip · 2 months ago

                I can understand a desire to find something beyond ourselves but I’m not driven by it.

                That’s exactly where Descartes lost me. I was with him on the whole “cogito ergo sum” thing but his insistence that his feelings of a higher being meant that it must exist in real form somewhere made no sense to me.

  • BC_viper@lemmy.world · 2 months ago

    Since we don’t actually know what consciousness is or how it starts, that’s a pretty dumb way to look at things. It may not come from LLMs, but who knows when or if it will pop up on one AI chain or another.

  • ji59@hilariouschaos.com · 2 months ago

    Except… being alive is well defined. But consciousness is not. And we do not even know where it comes from.