It feels like a new privacy threat has emerged in the past few years, and this year especially. I kind of think of the privacy threats of the past few decades as coming in waves:

  1. First we were concerned about governments spying on us. The way we fought back (and continue to fight back) was through encrypted and secure protocols.
  2. Then we were concerned about corporations (Big Tech) taking our data and selling it to advertisers to target us with ads, or otherwise manipulate us. This is still a hard battle being fought, but we’re fighting it mostly by avoiding Big Tech (“De-Googling”, switching from social media to communities, etc.).
  3. Now we’re in a new wave. Big Tech is building massive GPTs (ChatGPT, Google Bard, etc.), and they’re all trained on our data: our reddit posts, our Stack Overflow posts, and maybe even our Mastodon or Lemmy posts! Unlike with #2, avoiding Big Tech doesn’t help, since they can access our posts no matter where we post them.

So for that third one…what do we do? Anything that’s online is fair game to be used to train the new crop of GPTs. Is this a battle that you personally care a lot about, or are you okay with GPTs being trained on stuff you’ve provided? If you do care, do you think there’s any reasonable way we can fight back? Can we poison their training data somehow?
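
To make that question concrete: the only kind of “poisoning” I can even picture at the individual level is mangling your own text before (or after) you post it, for example swapping letters for look-alike Unicode homoglyphs so the scraped copy no longer matches what a human reads. Here is a toy Python sketch of what I mean (the mapping is just something I made up, and for all I know the trainers normalize Unicode and this accomplishes nothing):

```python
# Toy illustration only: replace some ASCII letters with visually similar
# Cyrillic homoglyphs. The text looks the same to a human but differs
# byte-for-byte from the original. This is NOT a tested or proven defense.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "c": "\u0441",  # Cyrillic small es
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u043e",  # Cyrillic small o
    "p": "\u0440",  # Cyrillic small er
}

def obfuscate(text: str) -> str:
    """Return text with every mapped character swapped for its homoglyph."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

if __name__ == "__main__":
    print(obfuscate("anything online is fair game for training"))
```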

  • curioushom · 2 years ago

    The issue is that most of the content posted is archived fairly quickly, so deleting or rewriting it only hurts the humans who might have gone looking for it. The way I look at it: if the data is searchable/indexable by search engines (as a proxy for all other tools) at any point in its life cycle, then it’s essentially permanent.

    • unfazedbeaver · 2 years ago

      That’s all true. The idea isn’t to remove yourself from the internet; once you post to the internet, it’s there forever. No, what I’m proposing is to hurt reddit’s chances of being a viable first-party resource for training AI.
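
      Concretely, I’m picturing something like the PRAW loop below: log in as yourself and overwrite your own comment history with junk. This is an untested sketch with placeholder credentials, and as you said the originals almost certainly survive in third-party archives, so at best it degrades reddit’s own “live” copy:

      ```python
      # Untested sketch: overwrite your own reddit comments using PRAW.
      # The credentials are placeholders for a "script"-type reddit app.
      import praw

      REPLACEMENT = "This comment has been overwritten."

      reddit = praw.Reddit(
          client_id="YOUR_CLIENT_ID",
          client_secret="YOUR_CLIENT_SECRET",
          username="YOUR_USERNAME",
          password="YOUR_PASSWORD",
          user_agent="comment-overwriter by u/YOUR_USERNAME",
      )

      # Walk the account's comment history (newest first) and edit each one.
      # Note: reddit listings are capped, so very old comments may not appear.
      for comment in reddit.user.me().comments.new(limit=None):
          comment.edit(body=REPLACEMENT)
          print(f"Overwrote comment {comment.id}")
      ```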

      • curioushom · 2 years ago

        Unless you’re able to compel a platform to remove your data through something like the EU’s right to be forgotten, the data will remain (in training sets or otherwise). If third parties are able to archive your data, reddit will surely have access to its own archival data as well, and it will use both the original and the edited content for training and let machine learning sort it out.

        I’m not saying this to be defeatist; we need better data ownership and governance laws. Retroactively obfuscating the data will not serve the purpose and provides a false sense of control, which I contend is worse.