• gabe [he/him]
    36 · 10 months ago

    Remember to block Meta’s IP ranges and not just defederate. Make data scraping as hard as possible for these weirdos.
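For the instance-side blocking the comment suggests, here is a minimal sketch in Python of checking client IPs against a CIDR blocklist, using only the stdlib ipaddress module. The ranges below are placeholders for illustration, not a vetted list of Meta’s networks:

```python
import ipaddress

# Placeholder CIDR ranges -- a real blocklist would be built from Meta's
# published network announcements, not hard-coded like this.
BLOCKED_RANGES = [
    ipaddress.ip_network("157.240.0.0/16"),
    ipaddress.ip_network("2a03:2880::/29"),
]

def is_blocked(client_ip: str) -> bool:
    """Return True if client_ip falls inside any blocked CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    # Membership tests across IP versions simply return False.
    return any(addr in net for net in BLOCKED_RANGES)
```

In practice a check like this would live in the reverse proxy or firewall rather than in application code, since application-level blocks are easier to bypass or forget.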

    • FaceDeer
      34 · 10 months ago

      You think Meta can’t pick up some random new IP address just for this?

    A better solution would be to either stop fretting about trivialities like this, or, if you can’t do that, stop putting your data up on an open protocol that is specifically designed to spread it around and show it to anyone who wants to see it.

      • MoogleMaestro
        12 · 10 months ago

        Companies need to stop ignoring copyright on data they don’t own and never have owned.

        • FaceDeer
          21 · 10 months ago

          There is nothing against copyright law in reading data that a person has put online in a public, unrestricted manner for the purpose of having it be read.

          • @pup_atlas@pawb.social
            7 · 10 months ago

            That’s not what’s happening, though: they are using that data to train their AI models, which pretty irreparably embeds identifiable aspects of it into the model. The only way to remove that data from the model would be an incredibly costly retrain. It’s not literally embedded verbatim anywhere, but it’s almost as if you took a photograph of a book: the data is in a different form, but if you read it (i.e., make the right prompts, or enough of them), there’s the potential to get parts of the original data back.

            • FaceDeer
              15 · 10 months ago

              > which pretty irreparably embeds identifiable aspects of it into their model.

              No, it doesn’t. The model doesn’t contain any copyright-significant amount of the original training data; it physically can’t, because the model isn’t large enough. The model only contains concepts it learned from the training data - ideas and patterns, but not literal snippets of the data.

              The only time you can dredge a significant snippet of training data out of a model is when a particular piece of data was present hundreds or thousands of times in the training set - a condition called “overfitting” that is considered a flaw and that AI trainers work hard to prevent by de-duplicating the data before training. Nobody wants overfitting; it defeats the whole point of generative AI to use it as a hugely inefficient “copy and paste” function. It’s very hard to find actual examples of overfitting in modern models.
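The de-duplication step described above can be sketched as exact-match hashing. Real training pipelines also do near-duplicate detection (e.g. MinHash or suffix-array methods), so this function is an illustrative simplification, not anyone’s actual pipeline:

```python
import hashlib

def dedupe(examples: list[str]) -> list[str]:
    """Drop exact duplicates so no sample appears many times in training.

    Normalizing (strip + lowercase) before hashing catches trivial
    near-duplicates; anything fuzzier needs dedicated tooling.
    """
    seen: set[str] = set()
    unique: list[str] = []
    for text in examples:
        digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique
```

The first occurrence of each sample is kept, so ordering of the corpus is preserved.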

              > It’s not literally embedded verbatim anywhere

              And that’s all that you need to make this copyright-kosher.

              Think of it this way. Draw a picture of an apple. When you’re done drawing it, think to yourself - which apple did I just draw? You’ve probably seen thousands of apples in your life, but you didn’t draw any specific one, or piece together the picture from various specific bits of apple images you memorized. Instead you learned what the concept of an apple is like from all those examples, and drew a new thing that represents that concept of “appleness.” It’s the same way with these AIs: they don’t have a repository of training data that they copy from whenever they’re generating new text.

              • @pup_atlas@pawb.social
                3 · 10 months ago

                I’m aware the model doesn’t literally contain the training data, but for many models and applications the training data is by nature small enough, and the application restrictive enough, that it is trivial to get snippets of almost-verbatim training data back out.

                One of the primary models I work on involves code generation, and in those applications we’ve actually observed the model outputting code verbatim from the training data, even though it was trained on a fair amount of data. This has spurred concerns about license violations on the open-source code it was trained on.
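One crude way to check for the kind of verbatim leakage described here is to scan generated output for long token runs that appear word-for-word in the training corpus. This is an illustrative sketch (whitespace tokenization, naive search), not any vendor’s actual tooling; memorization research uses more robust variants of the same idea:

```python
def longest_verbatim_overlap(generated: str, corpus: list[str]) -> int:
    """Length (in tokens) of the longest run of generated tokens that
    appears verbatim in any training document -- a crude leak detector.

    Note: plain substring matching can over-count on very short tokens;
    a real implementation would match on token boundaries.
    """
    gen_tokens = generated.split()
    best = 0
    for doc in corpus:
        doc_text = " ".join(doc.split())  # normalize whitespace
        # Try progressively shorter windows until one matches this doc.
        for size in range(len(gen_tokens), best, -1):
            found = False
            for start in range(len(gen_tokens) - size + 1):
                window = " ".join(gen_tokens[start:start + size])
                if window in doc_text:
                    best = size
                    found = True
                    break
            if found:
                break
    return best
```

A high overlap count on held-out generations is the kind of signal that would trigger the license-violation concern described above.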

                There’s also the concept of less-verbatim but “copied” style. Sure, making a movie in the style of Wes Anderson is legitimate artistic expression, but what about a graphic designer making a logo in the “style of McDonald’s”? The law is intentionally pretty murky in this department; in the US, even some colors are trademarked for certain product categories. There’s no clear line here, and LLMs are well positioned to challenge what’s already on the books. IMO this is not an AI problem; it’s a legal one that AI just happens to exacerbate.

                • FaceDeer
                  5 · 10 months ago

                  You’re conflating a bunch of different areas here. Trademark is an entirely different category of IP. As you say, “style” cannot be copyrighted. And the sorts of models being trained on social-media chatter are quite different from code-generation models.

                  Sure, there are going to be a bunch of lawsuits and new legislation coming down the pipe to clarify this stuff. But it’s important to bear in mind that none of that has happened yet. Things are not illegal by default; you need a law or precedent that makes them illegal. There’s none of that now, and no guarantee that things will pan out that way in the end.

                  People are acting incensed at AI trainers using public data to train AI as if they’re doing something illegal. Maybe they want it to be illegal, but it isn’t yet and may never be. Until that happens people should keep in mind that they have to debate, not dictate.

                  • @pup_atlas@pawb.social
                    2 · 10 months ago

                    The law is (in an ideal world) the reflection of our collective morality. It is supposed to codify what is “right” and “wrong”. That said, I see too many folks believing that it works the other way too: that what is illegal must be wrong, and what is legal must be okay. This is (decisively) not the case.

                    In AI terms, I do believe some of the things that LLMs and the companies behind them are doing now may turn out to be illegal under certain interpretations of the law. But further, I think a lot of the things companies are doing to train these models are seen as “immoral” (me included), and that the law should be changed to reflect that.

                    Sure, that may mean that “stuff these companies are doing now is legal”, but that doesn’t mean we don’t have the right to be upset about it. Tons of things large corporations have done were fully legal until public outcry forced the government to legislate against them. The first step toward many laws being passed is the public demonstrating a vested interest. I believe the same is happening here.

        • @anlumo@feddit.de
          2 · 10 months ago

          Short messages usually aren’t creative enough to be protected by copyright. Exceptions might be poems and similar texts.

    • BlueÆther
      8 · 10 months ago

      Then all they need to do is spin up a Mastodon instance on any random VPS and scrape away, if they really want the data.

      But the 1,000,000-odd Mastodon users would pale in comparison to their user base.