• @Devanismyname@lemmy.ca · -5 points · 2 days ago

    It’ll just keep getting better at it over time, though. The current AI is way better than it was 5 years ago, and in 5 years it’ll be way better than now.

    • @almost1337@lemm.ee · 14 points · 2 days ago

      That’s certainly one theory, but as we’re largely out of training data, there’s not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.
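
      A toy illustration of that amplification, assuming nothing but numpy (the “model” here is just a fitted Gaussian, purely hypothetical): each generation trains only on the previous generation’s output, and the fit drifts further from the original data.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Generation 0: "real" data - a standard normal distribution.
      data = rng.normal(loc=0.0, scale=1.0, size=200)

      for gen in range(1, 21):
          # "Train" on the current data: here, just fit a mean and std.
          mu, sigma = data.mean(), data.std()
          # The next generation trains ONLY on the previous model's output.
          data = rng.normal(loc=mu, scale=sigma, size=200)
          print(f"gen {gen:2d}: mean={mu:+.3f} std={sigma:.3f}")

      # Each generation inherits the previous one's sampling error, so the
      # fit drifts; in the long run the spread collapses toward zero and
      # the tails of the original data are lost for good.
      ```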

      • @Devanismyname@lemmy.ca · -10 points · 2 days ago

        I mean, the proof is sitting there wearing your clothes. General intelligence exists all around us. If it can exist naturally, we can eventually achieve it through technology. Maybe there need to be a few more breakthroughs before it happens.

          • @mindbleach@sh.itjust.works · -2 points · 2 days ago

            I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.

            None of it’s perfect, but a lot of it’s fuckin’ spooky, and any form of “well it can’t do [blank]” has a half-life.

            • @Korhaka@sopuli.xyz · 3 points · 2 days ago

              Seen a few YouTube channels now that just churn out AI-generated content, usually audio-only with a generated picture on screen. Vast amounts can be made that cheaply, and Google is going to have fun storing it all when each video only gets like 25 views. I think at some point they’re going to have to delete stuff.

                • @MadhuGururajan@programming.dev · 1 point · 17 hours ago

                  I kid you not, I took ML back in 2014 as an extra semester in my undergrad. The complaints then were the same as the complaints now: too high a power requirement, too many false positives. The latter of the two has evolved into hallucinations.

                  If normal people going “I made this!” isn’t convincing - if the output is that easily identified - then who is this going to replace? You still need the right expert, right? All it creates is more work for experts who have to come in and fix broken AI output.

                  • @mindbleach@sh.itjust.works · 1 point · 16 hours ago

                    > The complaints then were the same as complaints now

                    Despite results improving at an insane rate, very recently. And you think this is proof of a problem with… the results? Not the complaints?

                    People went “I made this!” with fucking Terragen. A program that renders wild alien landscapes which became generic after about the fifth one you saw. The problem there is not expertise. It’s immense quantity for zero effort. None of that proves CGI in general is worthless non-art. It’s just shifting what the computer will do for free.

                    At some point, we will take it for granted that text-to-speech can do an admirable job reading out whatever. It’ll be a button you push when you’re busy sometimes. The dipshits mass-uploading that for popular articles, over stock footage, will be as relevant as people posting seven thousand alien sunsets.

            • @SaraTonin@lemm.ee · 1 point · 2 days ago

              If you follow AI news you should know that it’s basically out of training data; that returns on extra training diminish exponentially, so more training data would only have limited impact anyway; that companies are starting to train AI on AI-generated data, both intentionally and unintentionally; and that hallucinations and unreliability are baked into the technology.

              You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and it isn’t better than its predecessors or other LLMs at solving maths problems whose answers it doesn’t already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.

              The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.

              • @mindbleach@sh.itjust.works · 2 points · edited · 3 hours ago

                We don’t need leaps and bounds, from here. We’re already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.

                And this is with LLMs - which are stupid. We didn’t design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that’ll fake its way through explaining why the answer is yes or no. If we’re only interested in the accuracy of that answer, then we’re wasting effort on the quality of the faking.
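
                A sketch of what that narrower goal could look like - entirely hypothetical, using PyTorch, with every name here made up: a tiny encoder feeding a three-way head, so all the capacity goes into the answer instead of the quality of the faking.

                ```python
                import torch
                import torch.nn as nn

                ANSWERS = ["yes", "no", "mu"]

                class YesNoMu(nn.Module):
                    """Hypothetical model that can only answer yes, no, or mu."""

                    def __init__(self, vocab_size=30000, dim=256):
                        super().__init__()
                        self.embed = nn.Embedding(vocab_size, dim)
                        self.encoder = nn.TransformerEncoder(
                            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
                            num_layers=2,
                        )
                        # Three logits instead of a whole vocabulary.
                        self.head = nn.Linear(dim, len(ANSWERS))

                    def forward(self, token_ids):          # (batch, seq)
                        h = self.encoder(self.embed(token_ids))
                        return self.head(h.mean(dim=1))    # (batch, 3)

                model = YesNoMu()
                logits = model(torch.randint(0, 30000, (1, 32)))  # a dummy "question"
                print(ANSWERS[logits.argmax(dim=-1).item()])
                ```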

                Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between “but right now it sucks at [blank]” and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.
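
                The tricks really are that shallow to implement. A hypothetical sketch - call_llm is a stand-in for whatever model or API you happen to use:

                ```python
                # The two "dumb tricks": think out loud, then check your own work.
                # call_llm() is a placeholder - plug in any LLM API here.

                def call_llm(prompt: str) -> str:
                    raise NotImplementedError("swap in your model of choice")

                def answer(question: str) -> str:
                    # Trick 1: tell the model to think out loud.
                    draft = call_llm(
                        f"{question}\n\nThink step by step, then give a final answer."
                    )
                    # Trick 2: have it check its work before committing.
                    return call_llm(
                        f"Question: {question}\n"
                        f"Proposed answer:\n{draft}\n\n"
                        "Check the reasoning above for mistakes and reply with the "
                        "corrected final answer only."
                    )
                ```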

                • @SaraTonin@lemm.ee · 1 point · 14 hours ago

                  I’m not saying they don’t have applications. But the idea of them being a one-size-fits-all solution to everything is something being sold to VC investors and shareholders.

                  As you say - the issue is accuracy. And, as you also say - that’s not what these things do; they make predictions about what comes next and present them confidently. Hallucinations aren’t errors; they’re what these models were built to do.

                  If you want something which can set an alarm for you or find search results, then something that responds to set inputs correctly 100% of the time is better than something more natural-seeming which is right 99% of the time.
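
                  For that kind of job the boring, exact version wins by construction. A toy sketch of what “responds to set inputs correctly 100% of the time” means (the commands here are made up):

                  ```python
                  # Toy sketch: a fixed-command interface. It never guesses - input
                  # either matches a known command exactly or gets rejected.
                  COMMANDS = {
                      "set alarm 7am": lambda: print("alarm set for 07:00"),
                      "cancel alarm": lambda: print("alarm cancelled"),
                  }

                  def handle(user_input: str) -> None:
                      action = COMMANDS.get(user_input.strip().lower())
                      if action is None:
                          print("unrecognised command")  # refuse, don't guess
                      else:
                          action()

                  handle("Set alarm 7am")    # -> alarm set for 07:00
                  handle("wake me at 7ish")  # -> unrecognised command
                  ```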

                  Maybe somewhere down the line there will be a new approach, but what is currently branded as AI is never going to be what it’s being sold as.

                    • @mindbleach@sh.itjust.works · 1 point · 2 hours ago

                    If you want something more complex than an alarm clock, this does kinda work for anything. Emphasis on “kinda.”

                    Neural networks are universal approximators. People get hung up on the approximation part, like that cancels out the potential in… universal. You can make a model that does any damn thing. Only recently has that seriously meant *you* and *can* - backpropagation works, and it works on video-game hardware.
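
                    A minimal sketch of the “universal” part, assuming PyTorch and whatever consumer GPU is lying around (it falls back to CPU): a small MLP trained by backprop to fit an arbitrary wiggly function. The target function and layer sizes are made up for illustration.

                    ```python
                    import torch
                    import torch.nn as nn

                    device = "cuda" if torch.cuda.is_available() else "cpu"

                    # Arbitrary target function - any reasonable 1-D function would do.
                    x = torch.linspace(-3, 3, 512, device=device).unsqueeze(1)
                    y = torch.sin(3 * x) + 0.5 * torch.cos(7 * x)

                    # A small MLP: a universal approximator, given enough width.
                    net = nn.Sequential(
                        nn.Linear(1, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(),
                        nn.Linear(64, 1),
                    ).to(device)

                    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
                    for step in range(5000):
                        loss = nn.functional.mse_loss(net(x), y)
                        opt.zero_grad()
                        loss.backward()  # backpropagation does the heavy lifting
                        opt.step()

                    print(f"final MSE: {loss.item():.4f}")  # far below the variance of y
                    ```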

                    > what is currently branded as AI

                    “AI is whatever hasn’t been done yet” has been the punchline for decades. For any advancement in the field, people only notice once you tell them it’s related to AI, and then they just call it “AI,” and later complain that it’s not like on Star Trek.

                    And yet it moves. Each advancement makes new things possible, and old things better. Being right most of the time is good, actually. 100% would be better than 99%, but the 100% version does not exist, so 99% is better than never.

                    Telling the grifters where to shove it should not condemn the cool shit they’re lying about.

    • @GenosseFlosse@feddit.org · 2 points · 2 days ago

      To get better it would need better training data. However, there are always more junior devs creating bad training data than senior devs creating slightly better training data.

      • @SaraTonin@lemm.ee · 3 points · 2 days ago

        And now LLMs are being trained on data generated by other LLMs. No possible way that could go wrong.