• diffuselight@lemmy.world · 1 year ago

    That’s a fundamental misunderstanding of how diffusion models work. These models extract concepts and can effortlessly combine them into new images.

    If it learns woman + crown = queen

    and queen - woman + man = king

    then it is able to combine any such concepts.
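
    The woman/queen/king analogy above is the classic vector-arithmetic trick from word embeddings. A toy sketch of the idea, using made-up 4-dimensional vectors (real models learn embeddings with hundreds of dimensions; the words and numbers here are purely illustrative):

```python
import math

# Made-up toy embeddings, constructed so that queen ~ woman + crown
# and king ~ man + crown. Real learned embeddings are only approximately
# structured like this.
embeddings = {
    "woman": [1.0, 0.0, 0.2, 0.1],
    "man":   [0.0, 1.0, 0.2, 0.1],
    "crown": [0.0, 0.0, 1.0, 0.0],
    "queen": [1.0, 0.0, 1.2, 0.1],
    "king":  [0.0, 1.0, 1.2, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def analogy(pos1, neg, pos2):
    # Nearest concept to pos1 - neg + pos2, excluding the query words.
    target = [p - n + q for p, n, q in
              zip(embeddings[pos1], embeddings[neg], embeddings[pos2])]
    candidates = {w: v for w, v in embeddings.items()
                  if w not in (pos1, neg, pos2)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("queen", "woman", "man"))  # king
```

    With these toy vectors, queen − woman + man lands exactly on the king vector, which is the point of the argument: once the directions for "royalty" and "gender" exist in the model, any combination of them is reachable, whether or not that combination appeared in training.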

    As Stability has noted, any model that has the concept of "naked" and the concept of "child" in it can be used like this. They tried to remove nudity for Stable Diffusion 2, and nobody used it.

    Nobody trained these models on CSAM, and the problem is a dilemma in the same way a knife is a dilemma. We all know a malicious person can use a knife for murder, including of children. Yet society has decided that knives have sufficient other uses that we still allow their sale pretty much everywhere.

    • grepe@lemmy.world · 1 year ago

      “This can be used by pedophiles” is used as an argument to ban cryptography… I wonder if someone will apply that to generative AI.

      • piecat@lemmy.world · 1 year ago

        Depends how profitable it is.

        If it can replace workers, no; if it threatens the big players like Disney, yes.

    • 0ddysseus@lemmy.world · 11 months ago

      Editing this reply to say that I was in fact right and did not have any fundamental misunderstanding of anything. The database in question here is called LAION and contains some 6 billion images scraped from the web, including CSAM images.

      Thanks for that. As I said, I’m not big into how AI works, so I'm not surprised I got that wrong. The databases of everything that has come across the clear web are still there, though, and remain available to people with access.