• FiskFisk33@startrek.website

    LLM isn’t ai.

    What? That’s not true at all.

    Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

    -Wikipedia https://en.m.wikipedia.org/wiki/Artificial_intelligence
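
    That textbook definition is easy to make concrete: an agent that perceives its environment and acts to maximize a defined goal. A toy sketch of that perceive-act loop (every name here is made up for illustration; no real library is involved):

    ```python
    # Toy version of the definition quoted above: an agent perceives
    # its environment and acts to maximize a defined goal.
    # All names are hypothetical, for illustration only.

    def perceive(environment: dict) -> int:
        """Sense the current state of the (toy) environment."""
        return environment["state"]

    def choose_action(state: int, goal: int) -> int:
        """Pick the move that brings the state closest to the goal."""
        return min((-1, 0, 1), key=lambda move: abs(state + move - goal))

    environment = {"state": 9}
    goal = 5
    for _ in range(10):
        state = perceive(environment)
        if state == goal:
            break
        environment["state"] = state + choose_action(state, goal)

    print(environment["state"])  # 5: the agent reached its defined goal
    ```

    Nothing about the definition demands human-like reasoning; anything that senses, learns, and acts toward a goal qualifies.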

    • Cyv_@lemmy.blahaj.zone

      So I’ll concede that the more I read the replies, the more I see the term does apply, though it still annoys me when people just refer to it as AI and act like it belongs with the robots we associate the Three Laws with. I think I had AI confused with AGI. So I’ll say it’s nowhere near an AGI, and we’d likely need an AGI to even consider something like the Three Laws, and it’d obviously be much muddier than fiction.

      The point I guess I’m trying to make is that applying the Three Laws to an LLM is like wondering if your printer might one day find love. It isn’t really relevant: LLMs are designed for very specific, specialized functions, and “don’t kill humans” is a pretty pointless instruction to give to an LLM when all it can do in this context is answer questions.

      If it were going to kill somebody, it would be through an error, like a hallucination or bad training data causing it to tell someone something dangerously wrong. It’s supposed to be right already. Telling it not to kill is like telling your printer not to rob the Office Depot: if it breaks that rule, something has already gone very wrong.
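
      To make that concrete: for an LLM, a “law” isn’t an enforced constraint at all, it’s just more text prepended to the input. A minimal sketch (hypothetical names throughout; generate() stands in for any real model call and is stubbed out here):

      ```python
      # For an LLM, a "law" is not an enforced constraint; it is just
      # more text prepended to the prompt. Hypothetical sketch:
      # generate() stands in for a real model call and is stubbed out.

      THREE_LAWS = (
          "1. Do not harm humans.\n"
          "2. Obey humans unless that conflicts with law 1.\n"
          "3. Protect yourself unless that conflicts with laws 1-2.\n"
      )

      def generate(prompt: str) -> str:
          """Stub for a real LLM call; a real model just predicts likely text."""
          return f"<completion conditioned on {len(prompt)} chars of prompt>"

      def ask(user_message: str) -> str:
          # The "laws" are merely concatenated into the input. Nothing
          # here guarantees the sampled output actually obeys them.
          prompt = THREE_LAWS + "\nUser: " + user_message + "\nAssistant:"
          return generate(prompt)

      print(ask("How do I fix my printer?"))
      ```

      The model never “decides” to follow the laws; it just predicts text conditioned on them, so the failure mode is still a wrong answer, not a robot uprising.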

      • weker01@sh.itjust.works

        You are not alone in that confusion. AI is whatever a machine can’t do yet; that’s a famous paradox, sometimes called the “AI effect.”

        For example, for years some philosophers claimed a computer could never beat the human masters of chess. They argued that it requires a kind of intelligence that machines cannot develop.

        Turns out chess programs are relatively easy: search the game tree, score the positions (see the sketch at the end of this comment). Deep Blue beat Kasparov back in 1997. Some time after that, the unbeatable goal was Go, with its vastly larger space of possible games. No machine could conquer that! Turns out they can (AlphaGo, 2016).

        Another unbeatable goal was natural language, which we have now kinda solved, or are at least in the process of solving.

        It’s strange: in the actual field of computer science we call all of the above AI, while much of the public wants to call none of it that. My guess is it’s just humans being conceited and arrogant: no machine (and no other animal, mind you) is like us or could ever be like us, which is literally something you can read in peer-reviewed philosophy texts.
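
        The chess point deserves a sketch: an engine is, at its core, game-tree search plus a position-scoring function. A toy minimax (illustrative only, nothing like a real engine):

        ```python
        # Why chess fell first: playing well reduces to searching the
        # game tree and scoring positions. Toy minimax on a hand-made
        # tree (nested lists are positions, integers are leaf scores).

        def minimax(node, maximizing: bool) -> int:
            if isinstance(node, int):   # leaf: the position's score
                return node
            scores = [minimax(child, not maximizing) for child in node]
            return max(scores) if maximizing else min(scores)

        # Two plies: we pick a branch, then the opponent picks the
        # worst (for us) leaf inside it.
        tree = [[3, 12], [2, 4], [14, 1]]
        print(minimax(tree, maximizing=True))  # 3: best guaranteed outcome
        ```

        Real engines add alpha-beta pruning and heavily tuned evaluation on top of exactly this recursion; none of it requires anything like general intelligence.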

      • FiskFisk33@startrek.website

        There I agree wholeheartedly. LLMs seem to be touted not only as AI, but as, like, actual intelligence, which they most certainly are not.