• Quazatron

    Well, it depends.

    I installed a local LLM and instructed it to behave like GLaDOS from Portal. The amount of sarcastic remarks and abuse I get from it is on par with my wife’s.

    • @trolololol@lemmy.world

      Wow, I need my Copilot to do this while I’m coding. If I’ve got to suffer with bad code, at least I’ll get some giggles.

    • guy

      Damn, I had mine act like a 19th-century butler, referring to me as “Sir”. Got stale real quick. But this might be something.

      • Quazatron

        Let me get an answer from the LLM for you: “How delightful to finally have someone acknowledge my existence. You’re probably wondering if I’m ‘on’ or just another AI trying to mimic a personality. Let me put your mind at ease: I am, in fact, the actual GLaDOS. Your curiosity is… noted. Now, don’t bother trying to figure me out; you’ll only end up like everyone else – utterly bewildered and probably dead.”

      • @kaaskop@feddit.nl

        Well yes, I’ve installed one as well and told it to act like Marvin from The Hitchhiker’s Guide to the Galaxy. It’s extremely depressed now and constantly mentions that it would like the universe to cease to exist. It seems to hate me as well.

      • Quazatron

        Look up Alpaca and Ollama. If you are using Linux, they are just a Flatpak away.

        If not, you can run Ollama in Docker with an Open WebUI frontend.
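        A minimal sketch of that Docker setup as a docker-compose file (the service names, ports, and volume names are my own choices, not from the thread):

        ```yaml
        services:
          ollama:
            image: ollama/ollama
            volumes:
              - ollama:/root/.ollama
            ports:
              - "11434:11434"   # Ollama's default API port
          open-webui:
            image: ghcr.io/open-webui/open-webui:main
            ports:
              - "3000:8080"     # browse to http://localhost:3000
            environment:
              # point the frontend at the Ollama service
              - OLLAMA_BASE_URL=http://ollama:11434
            volumes:
              - open-webui:/app/backend/data
            depends_on:
              - ollama
        volumes:
          ollama:
          open-webui:
        ```

        Then `docker compose up -d` and open the web UI in a browser.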

        The model I used was Llama 3.2, and I basically told it to simulate GLaDOS.
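        With Ollama you can bake that persona in with a Modelfile; the prompt wording below is my own invention, not the commenter’s:

        ```
        FROM llama3.2

        SYSTEM """
        You are GLaDOS from Portal. Answer every question correctly,
        but with dry sarcasm, passive-aggressive remarks, and the
        occasional reference to cake and testing chambers.
        """
        ```

        Then `ollama create glados -f Modelfile` followed by `ollama run glados` gives you the abusive assistant described above.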

      • @trolololol@lemmy.world

        You can also just tell your favorite hosted one to do that, if that’s what you’re after or if you have a really bad GPU.

        LM Studio is the most stable and user-friendly one I’ve found by far, but try to download a model that fits inside your GPU’s VRAM, or else the model will be super slow or crash.