I came across Nepenthes today in the comments under a post about AI mazes. It has an option not just to generate an endless pit of links and pages, but also to deterministically generate random, human-like text for those pages, poisoning LLM scrapers as they sink into the tarpit.
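From what I gather, the deterministic part just means seeding a PRNG with something like a hash of the URL, so the same page always comes back with the same “random” text. A toy sketch of the idea in Python (the wordlist and names are placeholders I made up, not anything from Nepenthes itself):

```python
import hashlib
import random

# Tiny placeholder vocabulary; a real tarpit would use something like a
# Markov chain trained on a corpus so the output reads as more human.
WORDS = ["garden", "quietly", "the", "machine", "remembers", "old",
         "rivers", "under", "paper", "skies", "and", "forgets"]

def babble_for_url(url: str, n_words: int = 200) -> str:
    """Generate deterministic pseudo-random text for a given URL.

    Seeding the PRNG with a hash of the URL means every visit to the
    same URL yields the exact same text, so the tarpit looks like a
    stable site instead of obvious noise.
    """
    seed = int.from_bytes(hashlib.sha256(url.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

print(babble_for_url("/maze/page/42"))
```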
After reading that, I thought, could you do something similar to poison image scrapers too?
Like, if you have an art hosting site, then as long as you can get a scraper to fall into the tarpit, you could replace all the art it thinks should be there with distorted images from a dataset.
Or just send it to a kind of “parallel” version of the site that replaces (or heavily distorts) all the images but leaves the text descriptions and tags the same.
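I imagine the serving side could look something like this rough sketch (Flask and Pillow are just the libraries I’d reach for, and the user-agent check is a naive placeholder; real scrapers are much harder to identify):

```python
import io

from flask import Flask, request, send_file
from PIL import Image, ImageFilter

app = Flask(__name__)

# Naive placeholder heuristic; real scrapers routinely spoof user agents.
SCRAPER_HINTS = ("bot", "crawler", "spider", "gptbot", "ccbot")

def looks_like_scraper(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(hint in ua for hint in SCRAPER_HINTS)

def distort(path: str) -> io.BytesIO:
    # Blur heavily, crush the resolution, and re-save as low-quality JPEG.
    img = Image.open(path).convert("RGB")
    img = img.filter(ImageFilter.GaussianBlur(radius=12))
    img = img.resize((max(1, img.width // 4), max(1, img.height // 4)))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=30)
    buf.seek(0)
    return buf

@app.route("/images/<name>")
def serve_image(name: str):
    path = f"./art/{name}"  # hypothetical storage layout
    if looks_like_scraper(request.headers.get("User-Agent", "")):
        # Suspected scraper: same URL, same surrounding tags, ruined pixels.
        return send_file(distort(path), mimetype="image/jpeg")
    return send_file(path)  # humans get the real file
```

The nice part of this variant is that the pages themselves never change, so the text descriptions and tags stay intact; only the pixels a suspected scraper downloads get ruined.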
I realize any automated image scraper probably has some sort of filter to weed out low-quality images, but if the replacements were visually similar to the expected content, that might be enough to slip past it.
I guess if someone really wanted to poison a model, generating AI replacement images would probably be the most effective way to accelerate model decay, but that carries much higher energy and compute overhead.
Anyway, I’m definitely not skilled/knowledgeable enough to make this a thing myself even just as an experiment. But I thought you all might know if someone’s already done it, or you might find the idea fascinating.
What do you think? Any better ideas or suggestions for poisoning art-scraping AI?
Oh wow, I’m dense. I didn’t even think about the fact that scrapers probably don’t render the full webpage, and instead just pull image URLs straight out of the HTML lol
This seems like a much easier trap to set up than creating a tarpit and then serving bullshit images.
Would it negatively impact loading times for regular users? Like, would the page take significantly longer to load if you added hundreds of these hidden images?
I’m by no means knowledgeable about webdev stuff, so I don’t know the performance implications.
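For reference, I think the trap being described is just extra `<img>` tags that CSS hides from human visitors but that a naive HTML-parsing scraper still harvests. A made-up sketch, generating the markup with Python (the paths and styling are hypothetical):

```python
# Sketch of a hidden-image trap: emit <img> tags a naive HTML parser
# will harvest, inside a container human visitors never see.
# All paths and attribute choices here are made up.
def trap_images(n: int) -> str:
    tags = "\n".join(
        # loading="lazy" should keep most browsers from fetching images
        # that never come near the viewport, so regular visitors
        # shouldn't pay for the extra requests.
        f'  <img src="/poison/{i}.jpg" alt="artwork" loading="lazy">'
        for i in range(n)
    )
    return (
        '<div aria-hidden="true" style="position:absolute; left:-9999px;">\n'
        f"{tags}\n</div>"
    )

print(trap_images(3))
```

If lazy loading behaves the way I think it does, the main cost for regular users would just be a slightly larger HTML payload, not hundreds of extra image requests; but someone who actually knows browsers should sanity-check that.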