Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology aimed at exposing unreported CSAM at scale.

  • JackbyDev@programming.dev · 8 days ago

    Alright, I found the name of what I was thinking of that sounds similar to what they’re suggesting: generative adversarial network (GAN).

    The core idea of a GAN is based on the “indirect” training through the discriminator, another neural network that can tell how “realistic” the input seems, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.
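
    To make that concrete, here is a minimal GAN training sketch (assuming PyTorch; the network sizes and the random "real" data are toy placeholders, not anything from the system in the article):

    # Minimal GAN loop: the generator never sees a target image; it is only
    # scored by how well it fools the discriminator, which is updated in turn.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64  # hypothetical sizes

    generator = nn.Sequential(
        nn.Linear(latent_dim, 128), nn.ReLU(),
        nn.Linear(128, data_dim),
    )
    discriminator = nn.Sequential(
        nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1),  # raw logit: "how real does this input look?"
    )

    loss_fn = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(1000):
        real = torch.randn(32, data_dim)   # stand-in for a batch of real data
        fake = generator(torch.randn(32, latent_dim))

        # Discriminator update: learn to separate real from generated samples.
        d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
                  + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator update: no pixel-level target, only "fool the discriminator".
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()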

    • Railcar8095@lemm.ee · 8 days ago

      Applying a GAN won’t work. If used for filtering, the results would be skewed toward younger-looking subjects, but it won’t show the body of a 9 year old unless the model could already do that from the beginning (see the sketch after this comment).

      If used to “tune” the original model, it will result in massive hallucinations and aberrations that can lead to false positives.

      In both cases, decent results will be rare and time-consuming. Anybody with the dedication to attempt this already has pictures and can build their own model.

      Source: I’m a data scientist
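
      One way to read "used for filtering" above is reusing a trained GAN discriminator as a scorer and thresholding its output. The sketch below only illustrates that mechanic under that assumption; the network, threshold, and toy batch are hypothetical, not anything from Thorn or Hive:

      # Reuse a (pre-trained) discriminator as a filter by thresholding its score.
      import torch
      import torch.nn as nn

      data_dim = 64  # assumed feature size

      # Stand-in for a discriminator trained elsewhere.
      discriminator = nn.Sequential(
          nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
          nn.Linear(128, 1),
      )

      def flag_batch(batch: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
          # Sigmoid maps the raw logit to a pseudo-probability; items above the
          # threshold get flagged. Any skew in what the scorer learned shows up
          # directly in these scores.
          with torch.no_grad():
              scores = torch.sigmoid(discriminator(batch)).squeeze(1)
          return scores > threshold

      print(flag_batch(torch.randn(8, data_dim)))  # toy batch of 8 items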