• adam@kbin.pieho.me · 1 year ago

    ITT people who don’t understand that generative ML models for imagery take up TB of active memory and TFLOPs of compute to process.

    • hotdoge42@feddit.de · 1 year ago (edited)

      That’s wrong. You can do it on your home PC with Stable Diffusion.

      • ᗪᗩᗰᑎ@lemmy.ml · 1 year ago

        And a lot of those require models that are multiple gigabytes in size, which then need to be loaded into memory and processed on a high-end video card that would generate enough heat to ruin your phone’s battery, if you could somehow shrink it to fit inside a phone. This just isn’t feasible on phones yet. Is it technically possible today? Yes, absolutely. Are the tradeoffs worth it? Not for the average person.
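
        [Editor’s note: a back-of-the-envelope sketch of where the “multiple gigabytes” figure comes from, assuming fp16/fp32 weights and the commonly cited rough parameter counts for Stable Diffusion 1.x; the counts below are assumptions for illustration, not measured values.]

```python
# Rough memory estimate for loading a diffusion model's raw weights.
# Parameter counts are the commonly cited approximate figures for
# Stable Diffusion 1.x (assumed here for illustration).

def weight_size_gb(num_params: int, bytes_per_param: int) -> float:
    """Size of the raw weights in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

unet_params = 860_000_000       # UNet, the main denoising network
total_params = 1_070_000_000    # UNet + text encoder + VAE, roughly

fp32 = weight_size_gb(total_params, 4)  # full precision (4 bytes/param)
fp16 = weight_size_gb(total_params, 2)  # half precision, common on GPUs

print(f"fp32 weights: ~{fp32:.1f} GB")
print(f"fp16 weights: ~{fp16:.1f} GB")
```

        That is just the weights; activations, the sampling loop, and any upscaler add to peak memory on top of it.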

    • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.worldM · 1 year ago (edited)

      You can, for example, run some upscaling models on your phone just fine (I mentioned the SuperImage app in the photography tips megathread). Yes, the most powerful and memory-hungry models will need more RAM than your phone can offer, but it’s a bit misleading if Google doesn’t say that those are being run in the cloud.