cross-posted from: https://lemmy.ml/post/5400607

This is a classic case of tragedy of the commons, where a common resource is harmed by the profit interests of individuals. The traditional example of this is a public field that cattle can graze upon. Without any limits, individual cattle owners have an incentive to overgraze the land, destroying its value to everybody.

We have commons on the internet, too. Despite all of its toxic corners, it is still full of vibrant portions that serve the public good — places like Wikipedia and Reddit forums, where volunteers often share knowledge in good faith and work hard to keep bad actors at bay.

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

  • FaceDeer
    15 points · 1 year ago

    But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

    This analogy falls apart when you note that “overgrazing” these resources does absolutely nothing to harm them.

    They’re still there. They haven’t been affected in any way by the fact that a machine somewhere has read them and learned a bunch of stuff from them. So what?

    • @Edgelord_Of_Tomorrow@lemmy.world
      13 points · 1 year ago

      This analogy falls apart when you note that “overgrazing” these resources does absolutely nothing to harm them.

      Only if you consider AI-supercharged misinformation to not be harmful.

      Only if you consider the entropy of human interaction on the internet to not be harmful.

      Only if you consider being unable to know who is real to not be harmful.

      • FaceDeer
        11 points · 1 year ago

        None of those things directly harm the resources being “grazed”, and none of them are inevitable consequences of AI. If you think they are then you’re actually arguing against AI in general and not the specific way in which they’ve been trained.

        • @Edgelord_Of_Tomorrow@lemmy.world
          3 points · 1 year ago

          You think the internet being flooded with articles, comments etc. all being written by AI whose only goals are selling shit, disseminating misinformation, and manipulating elections and opinions - with no way to know what is human and what is AI - is going to be a great environment to continue to train your AI?

          You might be interested to read about Model Autophagy Disorder.
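
          The feedback loop that Model Autophagy Disorder describes can be sketched with a toy simulation (hypothetical illustration, not code from the MAD work): treat a "model" as something that memorizes a corpus and generates by sampling from it, then train each new generation only on the previous generation's output. Diversity can only shrink, because anything the model fails to emit in one generation is gone for good.

```python
import random

def train_and_generate(corpus, rng):
    # Toy "model": memorize the corpus, then "generate" a new corpus by
    # sampling from it with replacement. Any item the model never emits
    # is lost to every later generation.
    return [rng.choice(corpus) for _ in range(len(corpus))]

rng = random.Random(42)
human_data = [f"post_{i}" for i in range(100)]  # generation 0: 100 distinct human-written posts

generation = human_data
for _ in range(50):  # each generation trains only on the previous generation's output
    generation = train_and_generate(generation, rng)

print(f"distinct items, gen 0:  {len(set(human_data))}")
print(f"distinct items, gen 50: {len(set(generation))}")
# One bootstrap round keeps roughly 63% of distinct items on average,
# so after 50 rounds almost nothing unique survives.
```

          The real phenomenon concerns generative models losing the tails of their output distribution, but the sketch shows the same underlying mechanism: a self-consuming training loop discards information it never regenerates.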

          • FaceDeer
            6 points · edited · 1 year ago

            That is not a problem caused by “overgrazing” those open resources. It’s a separate problem with AI training that needs to be addressed anyway. You’re just throwing out random AI-related challenges regardless of whether they’re relevant to what’s being discussed.

            Simply put, quality control is always important.

    • @sudneo@lemmy.world
      2 points · 1 year ago

      While the analogy is not perfect, you can think of the harm as getting lost in the noise. If the “overgrazing” of internet content (content whose purpose is to be read, listened to, etc., often produced as part of someone’s job) spawns a huge amount of AI-generated content derived from it, then the original is damaged by being drowned out in that noise.