- cross-posted to:
- technology@lemmit.online
Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos
Never mind that. One of those people isn’t even looking at the viewer!
Of course! Tech companies are just as evil as any other company.
No no, you misunderstood. Everyone gave their explicit consent by clicking “agree” to the 80-page terms of service!
This tool will have some really disgusting consequences if a LLaMA-style leak happens 🤦‍♂️
Shame that nobody wants to stop Facebook!
What happened to them with underage users? I think I’m out of the loop on this one.
I guess so. Why?
Minors can’t legally agree to give away their data.
This is the best summary I could come up with:
Previously, Meta’s version of this technology—using the same data—was only available in messaging and social networking apps such as Instagram.
Images include a small “Imagined with AI” watermark logo in the lower left-hand corner.
We put Meta’s new AI image generator through a battery of low-stakes informal tests using our “Barbarian with a CRT” and “Cat with a beer” image-synthesis protocol and found aesthetically novel results, as you can see above.
(As an aside, when generating images of people with Emu, we noticed many looked like typical Instagram fashion posts.)
The generator appears to filter out most violence, curse words, sexual topics, and the names of celebrities and historical figures (no Abraham Lincoln, sadly), but it allows commercial characters like Elmo (yes, even “with a knife”) and Mickey Mouse (though not with a machine gun).
It doesn’t seem to render text well at all, and it handles different media styles such as watercolor, embroidery, and pen-and-ink with mixed results.
The original article contains 510 words, the summary contains 159 words. Saved 69%. I’m a bot and I’m open source!