Image description:

Shopping for a laptop as a Linux user:

Screenshot from The Simpsons in which Otto, standing next to a window in Marge and Homer’s house, says: “Oh wow, windows! … I don’t think I can afford this place.”

  • dan@upvote.au · 26 points · 10 months ago

    These days it’s not uncommon to have a powerful GPU just for AI acceleration.

    • Aux@lemmy.world · 18 points · 10 months ago

      Or for photo editing. Or video editing. Or CAD work. Or a lot more stuff.

      • dan@upvote.au · 5 points · 10 months ago

        Are modern iGPUs not powerful enough for these tasks? The UHD 770 is pretty powerful, especially for video encoding/decoding (it can transcode more than eight 4K streams simultaneously).

        • I Cast Fist@programming.dev · 2 points · 10 months ago

          For photo editing, I suspect it should be more than enough. For video editing, a beefy graphics card can make the render/encode significantly faster, though as I don’t dabble in that, I can’t tell how much of a speed improvement it’d be going from an integrated Intel GPU to anything equivalent to or stronger than a GTX 1650.

        • Aux@lemmy.world · 1 point · 10 months ago

          iGPUs are pretty useless for the most part.

          1. Shared memory. Regular DDR is low latency but low throughput; GDDR trades latency for much higher throughput. Not only are you sharing memory bandwidth with the CPU and other apps, you’re also penalising yourself in terms of performance.
          2. iGPUs are very slow at computation. Yes, they have codecs built in, but if you want to run custom math they are not much better than running it on the CPU.
          3. CUDA is not available. OpenCL is, but some apps are locked to CUDA.
          4. The old GTX 1080 is about 5.5 times faster than a brand-new Iris Xe at computation, and the RTX 4080 is roughly 3 times faster than the GTX 1080. That’s an order of magnitude difference between a modern GPU and a modern iGPU.
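          Point 3 (CUDA lock-in) is easy to check on a given machine. A minimal sketch, assuming a Linux system where driver libraries are discoverable through the standard loader path; the library names here are the usual ones (`libcuda` from NVIDIA’s driver, the vendor-neutral `libOpenCL` ICD loader), not anything stated in the thread:

```python
import ctypes.util


def has_cuda_driver() -> bool:
    # libcuda ships with NVIDIA's proprietary driver; on an
    # iGPU-only machine this lookup usually returns None.
    return ctypes.util.find_library("cuda") is not None


def has_opencl() -> bool:
    # libOpenCL is the vendor-neutral ICD loader, so Intel iGPUs
    # are typically reachable through it even without NVIDIA hardware.
    return ctypes.util.find_library("OpenCL") is not None


print("CUDA driver present:", has_cuda_driver())
print("OpenCL loader present:", has_opencl())
```

          An app that hard-requires CUDA simply won’t run where the first probe fails, regardless of how capable the iGPU’s OpenCL path is.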
        • ⸻ Ban DHMO 🇦🇺 ⸻@aussie.zone · 1 point · 10 months ago

          Could be a matter of CUDA-specific optimisations in the software. Also, an iGPU shares RAM with the CPU, so while it looks good on paper, memory access and availability will vary.