• adam@kbin.pieho.me · 53 points · 2 years ago

    ITT people who don’t understand that generative ML models for imagery take up TB of active memory and TFLOPs of compute to process.

      • ᗪᗩᗰᑎ@lemmy.ml · 23 points · 2 years ago

        And a lot of those require models that are multiple gigabytes in size, which then need to be loaded into memory and processed on a high-end video card — one that would generate enough heat to ruin your phone's battery if it could somehow be shrunk to fit inside a phone. This just isn't feasible on phones yet. Is it technically possible today? Yes, absolutely. Are the tradeoffs worth it? Not for the average person.
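The "multiple gigabytes" claim above is easy to sanity-check with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter. A minimal sketch (the parameter counts and fp16 precision are illustrative assumptions, not figures from the thread):

```python
def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in GB, assuming fp16 (2 bytes/param).

    Ignores activations, optimizer state, and runtime overhead, so this
    is a lower bound on what must fit in RAM just to load the model.
    """
    return num_params * bytes_per_param / 1e9

# Illustrative model sizes (hypothetical, for scale only):
for name, params in [("1B-param model", 1e9), ("7B-param model", 7e9)]:
    print(f"{name}: ~{model_memory_gb(params):.1f} GB at fp16")
```

Even the smaller example already rivals the free RAM on many phones, which is the tradeoff the comment is pointing at.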

        • diomnep@lemmynsfw.com · 1 point · edited · 2 years ago

          “He’s off by multiple orders of magnitude, and he doesn’t even mention the resource that GenAI models require in large amounts (GPU), but he’s not wrong”

    • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world (mod) · 4 points · edited · 2 years ago

      You can, for example, run some upscaling models on your phone just fine (I mentioned the SuperImage app in the photography tips megathread). Yes, the most powerful and memory-hungry models need more RAM than your phone can offer, but it's a bit misleading if Google doesn't say that those are being run in the cloud.
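Why upscaling in particular gets memory-hungry can be sketched with similar arithmetic: intermediate feature maps scale with output resolution. A rough illustration (the 64-channel width, fp32 precision, and photo dimensions are assumptions for scale, which is why on-device upscalers typically process the image in tiles):

```python
def feature_map_gb(width: int, height: int,
                   channels: int = 64, bytes_per_elem: int = 4) -> float:
    """Memory in GB for ONE intermediate feature map of a conv upscaler,
    assuming fp32 activations. Real networks hold several such maps."""
    return width * height * channels * bytes_per_elem / 1e9

# Hypothetical 4x upscale of a 12 MP (4000x3000) photo -> 16000x12000 output
full_frame = feature_map_gb(16000, 12000)
tile = feature_map_gb(512, 512)  # per-tile processing keeps memory bounded
print(f"full frame: ~{full_frame:.1f} GB; one 512x512 tile: ~{tile:.3f} GB")
```

The full-frame number is far beyond any phone's RAM, while the per-tile figure is trivial — which is how apps like the one mentioned can run such models locally at all.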