• Black History Month@lemmy.world
    4 points · 1 day ago

    AI is a sledgehammer for this walnut of a problem. It’s supposed to sound badass or something, but in tech parlance it’s literal insanity. Nothing about computers should involve endlessly repeating things hoping for better results. This is the opposite of technology. DLSS sucks anyway, so who’s it for? Content creators have to deal with encoding, which wipes any of that detail out, and only paid youtubers seem to mention it in passing.

  • circuitfarmer@lemmy.sdf.org
    67 points · 2 days ago

    This may be a hot take downvoted to oblivion, but I think DLSS and all similar AI-dependent frame generation type stuff is a band-aid on a problem that won’t (or shouldn’t) exist for long, in the grand scheme of things.

    If you have performance improvements, you ultimately don’t need such things once that performance reaches an acceptable level.

    So two things may be happening:

    1. Performance improvements are not possible anymore. That seems false, because we still see them. Costs are high, but they’re there.

    2. Things like DLSS allow corps to give you less performance while still maintaining an illusion of a good experience. It ultimately reduces hardware costs, which the corpos ultimately just pocket.

    I lean strongly towards 2 at the moment. Notice how nvidia continues to push DLSS as an exclusive feature, notably different from FSR in that regard: FSR is openly positioned as a tech that allows better framerates on lower-end hardware.

    For nvidia, it’s a selling point, and it allows them to sell you less hardware with fewer actual improvements. It is the same snake that just wants you to (eventually) stream games instead of processing them locally, because it enhances corporate control.

    • Black History Month@lemmy.world
      2 points · edited · 1 day ago

      A lot of games released today ‘feel’ like games that came out 20 years ago; there are exceptions, and I’m including them too. Most of the growth came in graphics and visuals, and people would buy any game if it looked cool. Now we’re on the diminishing-returns side of things, and investors are trying to maintain the charade: pushing out half-baked products and selling out, leaving old heads and new heads alike holding the bag of expensive but useless products. Think SLI on steroids.

    • xep@fedia.io
      14 points · 2 days ago

      DLSS isn’t just frame generation; the SS in the name stands for Super Sampling. It’s the best solution we have to graphical issues like subpixel shimmering and moiré effects, which are especially prevalent in modern titles due to contemporary graphical effects and expectations, and it might be quite a while before we invent something better.
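
      To make the shimmering point concrete, here’s a toy sketch (names and numbers are mine, not from any real renderer): a thin sub-pixel feature drifting across one pixel pops hard between frames at one sample per pixel, but changes smoothly when the pixel is supersampled.

```python
def shade(samples, pos, width=0.4):
    """Average `samples` point samples across one pixel spanning [0, 1)."""
    hits = sum(1 for i in range(samples)
               if pos <= (i + 0.5) / samples < pos + width)
    return hits / samples

# A thin feature (0.4 px wide) drifting 0.05 px per frame across the pixel.
positions = [p * 0.05 for p in range(12)]
aliased = [shade(1, p) for p in positions]  # 1 sample/px: pops between 0 and 1
ssaa = [shade(8, p) for p in positions]     # 8 samples/px: changes gradually

max_step = lambda xs: max(abs(a - b) for a, b in zip(xs, xs[1:]))
print(max_step(aliased), max_step(ssaa))  # supersampling shrinks frame-to-frame jumps
```

      The single-sample pixel jumps the full 0-to-1 range from one frame to the next (that’s the shimmer), while the supersampled pixel can only move by one sample’s worth per frame.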

      • circuitfarmer@lemmy.sdf.org
        23 points · 2 days ago

        And the other side of that coin is: personally I’ll happily accept shimmering and moire effects if it means I don’t lock myself into yet one more corporate ecosystem.

        FSR also combats those things, but can run on any GPU.

        • xep@fedia.io
          10 points · 2 days ago

          I agree, I was referring to ML supersampling and antialiasing in general.

        • Fades@lemmy.world
          2 points · edited · 2 days ago

          if it means I don’t lock myself into yet one more corporate ecosystem

          Nobody is forcing you to use DLSS; in every single game I’ve played that utilizes it, it is optional, and there are alternatives like FSR, XeSS, etc.

          If you really feel that strongly, stop using anything remotely associated with nvidia altogether, since it’s such an inescapable corpo ecosystem otherwise. I’m sure you’re just as willing to cut all of that out just so you aren’t faced with the horrors of DLSS and cOrPoRaTE ecosystems lmao.

          Stop trying to sound like a victim; you’re blaming a technology and condemning its existence because lazy devs misuse it.

    • Grey Cat@lemmy.world
      1 point · 1 day ago

      There is a way for performance improvements and DLSS-type technologies to both exist. We are not at the pinnacle of graphical quality. Studios will not suddenly stop putting better graphics in games and allot the higher performance they get out of new cards to more frames instead.

      Even with better, newer GPUs, DLSS-type stuff will always allow you to get more performance out of your rig. I’m not even talking about frame generation, since I haven’t tested that yet; just the upscaling.

  • BaroqueInMind
    14 points · 2 days ago

    Where is the model file stored after non-production training is completed? Do we all download it with the driver update? If so, and I don’t use DLSS, how can I remove that gigantic model checkpoint?

    • brucethemoose@lemmy.world
      13 points · 2 days ago

      It’s probably not big if it’s included in the driver download and runs in real time this quickly. Not big enough to worry about, anyway.
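
      Back-of-envelope, with hypothetical numbers (the real parameter count isn’t public as far as I know): even a 5-million-parameter network stored in fp16 is only about 10 MB of weights, which disappears inside a multi-gigabyte driver package.

```python
def model_size_mb(params, bytes_per_param=2):
    """Raw weight storage for a network, in MiB (fp16 = 2 bytes per parameter)."""
    return params * bytes_per_param / (1024 ** 2)

# Hypothetical 5M-parameter upscaling net in fp16:
size = model_size_mb(5_000_000)
print(round(size, 2))  # about 9.54 MiB
```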

      • addie@feddit.uk
        24 points · 2 days ago

        DLSS 2.0 is “temporal anti-aliasing on steroids”. TAA works by jiggling the camera a tiny amount, less than a pixel, every frame. If nothing on screen is moving and the camera’s not moving, then you could blend the last dozen or so frames together, and it would appear to have high resolution and smooth edges without doing any extra work. If the camera moves, then you can blend from “where the camera used to be pointing” and get most of the same benefits. If objects in the scene are moving, then you can use the information on “where things used to be” (it’s a graphics engine, we know where things used to be) and blend the same way. If everything’s moving quickly then it doesn’t work, but in that case you won’t notice a few rough edges anyway. Good quality and basically “free” (you were rendering the old frames anyway), especially compared to other ways of doing anti-aliasing.
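
        The jitter-and-blend idea can be sketched in a few lines (a toy model of a static scene, not engine code; the scene and constants are made up): point-sample one pixel with a random sub-pixel jitter each frame and blend each sample into a history value; the history converges toward the true area average that brute-force supersampling would give.

```python
import random

def scene(x):
    # One pixel spanning [0, 1); an edge covers the left 30% of it.
    return 1.0 if x < 0.3 else 0.0

def taa_pixel(frames=4000, alpha=0.02, seed=1):
    """Temporally accumulate jittered point samples of one static pixel."""
    rng = random.Random(seed)
    history = scene(0.5)  # frame 0: unjittered centre sample (aliased: 0.0)
    for _ in range(frames):
        sample = scene(rng.random())                      # jittered sub-pixel sample
        history = (1 - alpha) * history + alpha * sample  # blend into history
    return history

print(scene(0.5), round(taa_pixel(), 2))  # aliased 0.0 vs roughly 0.3 (true coverage)
```

        A single centre sample says the pixel is fully dark (0.0), which is wrong; the accumulated history settles near the true 30% coverage, with no extra samples per frame.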

        Nvidia have a honking big supercomputer that renders “perfect very-high resolution frames”, and then tries out untold billions of different possibilities for “the perfect camera jiggle”, “the perfect amount of blending”, “the perfect motion reconstruction” to get the correct result out of lower-quality frames. It’s not just an upscaler, it has a lot of extra information - historic and screen geometry - to work from, and can sometimes generate more accurate renders than rendering at native resolution would do. Getting the information on what the optimal settings are is absolute shitloads of work, but the output is pretty tiny - several thousand matrix operations - which is why it’s cheap enough to apply on every frame. So yeah, not big enough to worry about.

        There’s a big fraction of AAA games that use Unreal engine and aim for photorealism, so if you’ve trained it up on that, boom, you’re done in most cases. Indie games with indie game engines tend not to be so demanding, and so don’t need DLSS, so you don’t need to tune it up for them.

      • paraphrand@lemmy.world
        6 points · edited · 2 days ago

        Isn’t it also tuned for each game individually? So it would be different iterations for every supported game.

        I swear when it came out that they said devs would have to submit their games for training.

        Reading the “article”, it doesn’t seem like that’s actually the case. It sounds more generic.

  • ShadowRam@fedia.io
    8 points · 2 days ago

    Imagine if an entire model’s weights were lost.

    Imagine you have a personal AI that you’ve been training for years, and it’s learning from you, and there’s a backup failure. It might be like losing a pet…