- cross-posted to:
- technews@radiation.party
All 50.
At the same time.
In lockstep.
Of course. lol
Idk, I don’t like it, but God dammit, I’d rather someone use a fucking AI to make it than a real child.
It’s not even about protecting children. These people are some of the last people who would care about protecting children.
…This probably won’t stop here, unfortunately. Once they start trying to regulate this stuff, they’ll keep pushing for more restrictions, especially if you can generate images/video of celebrities/our political elite saying ‘nigger.’ But yeah, think of the fictional children!
To address child exploitation, we already have laws on the books that make it illegal to possess child pornography, so if such material is used in or to generate a model, you’re already getting in trouble for that. There’s no reason to make more unnecessary laws, which goes right back to what I said earlier in this post: pure trojan horse, plain and simple.
Once they start trying to regulate this stuff, they’ll keep pushing for more restrictions, especially if you can generate images/video of celebrities/our political elite saying ‘nigger.’
Okay, to be fair, that is a real problem. I, and I imagine most people, are very uncomfortable with indistinguishable-from-reality deepfakes. The implications for the spread of misinformation are only a small part of the problem.
There’s nothing you can do about it, though, short of banning it or making it so that only people with special licenses for academic/business use can generate that stuff. If someone builds their software to embed a watermark or some kind of metadata that helps you verify that what you’re watching is fake, someone else will just release their own software/model that doesn’t do that (see the sketch below).
This is either an all or nothing kind of deal.
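For illustration, here’s a minimal sketch of the kind of tagging scheme being described, assuming Pillow; the metadata field names (“ai-generated”, “generator”) are hypothetical, not any real standard:

```python
# Minimal sketch (assumes Pillow: `pip install Pillow`).
# Field names below are hypothetical, not part of any real standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A cooperating generator could tag its output with PNG text chunks:
img = Image.open("generated.png")
meta = PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("generator", "example-model-v1")
img.save("tagged.png", pnginfo=meta)

# Anyone can then check for the tag:
print(Image.open("tagged.png").text)  # {'ai-generated': 'true', ...}

# The catch: a plain re-save silently drops the text chunks.
Image.open("tagged.png").save("stripped.png")
print(Image.open("stripped.png").text)  # {}
```

Those last two lines are the whole problem: the tag only survives as long as everyone cooperates.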
Yeah, I don’t have a good solution. You could just make it illegal to make deepfakes of real people, but that is pretty restrictive (not to mention impossible to enforce).
Malicious deepfakes would probably fall under defamation.
Remember the days when it was considered common sense to never share your real life information online?
Those days are going to come back because of this.
Tinfoil hat tiem!
It’s all to protect the viability of blackmail. Anyone who has been blackmailed via photography, video, or audio will be able to use AI-generated content to plausibly deny the legitimacy of the blackmail material. Those who rely on blackmail for power, wealth, and influence are incentivized to buy as much time as they can. Blackmailed individuals can be leveraged to buy time in various ways, one of which is on display in the headline of this article. Now that many past social taboos have been normalized, what can you blackmail people with in the current year?
I can’t imagine this going poorly!
Now is probably the time to start scraping every model, LoRA, etc. you might want to use, just in case.
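If anyone wants to do that programmatically, here’s a minimal sketch using the huggingface_hub library; the repo IDs are placeholders, swap in whatever you actually use:

```python
# Minimal sketch (assumes `pip install huggingface_hub`).
# Repo IDs below are placeholders/examples, not recommendations.
from huggingface_hub import snapshot_download

repos = [
    "stabilityai/stable-diffusion-2-1",  # example base model
    "some-user/some-lora",               # placeholder LoRA repo
]

for repo_id in repos:
    # Downloads every file in the repo to a local folder you control.
    path = snapshot_download(repo_id, local_dir=f"archive/{repo_id}")
    print(f"saved {repo_id} -> {path}")
```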
I’d be surprised if they actually cared about the kids. They’re always trying to push regulations in the name of “protecting the kids”. I really hope this won’t be the start of open-source AI’s death.
“Study the impact of AI on children.”
I will laugh myself breathless if they do and find out that AI generated fakes reduce the prevalence of the real thing.
Do you think they’ll even release the results if that happens, or just quietly stop talking about it?
Have you seen some of the literal fake news that the Cato Institute has pushed? The studies will say whatever the lawmakers (actually, whatever the lawmakers’ lobbyists) want them to say.
Actually, there is already some evidence that CP itself reduces child molestation: https://scholar.google.com/scholar?cluster=8400329088513934403
Of course, the author argues that CP is immoral and that we should therefore stick to “virtual child pornography”. Though chances are these lawmakers don’t care either way and won’t listen to the evidence.
They will p-hack the results, as usual, until they reach a conclusion they favor. See my other comment about why they’re doing this to begin with.