• qyron · +8/−1 · 14 hours ago (edited)

    Somebody please explain to me, in very simple words, why I need an AI-capable chip in my personal computer. And under Linux, most of all.

    • Mwa@lemm.ee · +1 · 23 minutes ago

      IKR, I’m fine with using the CPU and GPU to run LLMs locally (even though I’m trying to avoid using LLMs), but NPUs? Seriously?

    • TheGrandNagus@lemmy.world · +9 · 13 hours ago

      Offline translation is pretty great. Some image editing tools are pretty great. Games may utilise them in the future. Offline image recognition for searching for images (e.g. “show me pics of grandma”), etc.

      It’s not particularly widely used now, but the same was true for hardware video encode/decode, hardware accelerated encryption/decryption, etc.

      • Justin@lemmy.jlh.name · +3 · 7 hours ago

        Image processing is pretty intense and would likely be handled by the GPU. Efficient embedded NN accelerators like this are meant for more passive things, like noise cancellation or, like you mentioned, translation.

        • KingRandomGuy@lemmy.world · +1 · 3 hours ago

          I don’t know the architecture of the AI accelerator in Ryzen processors, but I do know a fair number of image deblurring and denoising tools run on the neural engine on Apple Silicon. The neural engine is good enough for a lot of tasks, provided that your model only uses relatively simple operators and doesn’t need full precision.
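
          The “doesn’t need full precision” point can be made concrete: accelerators like these typically run FP16 (or INT8) rather than FP32. A purely illustrative, stdlib-only sketch using Python’s `struct` half-float format (`'e'`) to show how small the FP16 rounding error usually is — the weight value here is an arbitrary example, not from any real model:

          ```python
          import struct

          def to_fp16(x: float) -> float:
              """Round-trip a Python float through IEEE-754 half precision."""
              return struct.unpack('e', struct.pack('e', x))[0]

          w = 0.1234567                      # arbitrary example weight
          w16 = to_fp16(w)                   # value after FP16 rounding
          rel_err = abs(w - w16) / abs(w)    # relative rounding error
          print(f"fp32 ~ {w}, fp16 = {w16}, relative error ~ {rel_err:.2%}")
          ```

          The relative error is on the order of 0.01%, which is why models whose operators tolerate reduced precision map well onto these accelerators, while full-precision workloads don’t.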

    • N.E.P.T.R@lemmy.blahaj.zone · +1 · 12 hours ago (edited)

      This isn’t for you, nor for me. I don’t need an AI-capable chip; I could just use my GPU if for some reason I wanted to run a local transformer model.