The idea of using a picture upload for automated verification is completely unviable. A much more commonly used system would be something like telling you to perform a random gesture on camera on the spot, like “turn your head slowly” or “open your mouth slowly”, which would be trivial for a human to perform but near impossible for AI generators.
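For what it’s worth, the mechanics of such a challenge-response check are simple; here’s a minimal sketch (the gesture list, time window, and function names are all invented for illustration):

```python
import random
import time

# Hypothetical challenge-response liveness check: the server picks a
# random gesture at request time, so nothing pre-generated can match it.
GESTURES = ["turn your head slowly", "open your mouth slowly",
            "raise your left hand", "blink twice"]

def issue_challenge():
    """Pick a random gesture and record when it was issued."""
    return {"gesture": random.choice(GESTURES), "issued_at": time.time()}

def accept_response(challenge, responded_at, max_window_s=30):
    """A response only counts inside a short window; the tight deadline
    is what makes offline generation impractical."""
    return responded_at - challenge["issued_at"] <= max_window_s

challenge = issue_challenge()
print(challenge["gesture"])
# ...user performs the gesture on camera; the footage gets reviewed...
print(accept_response(challenge, time.time()))
```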
It’s not that difficult to identify if you have a good understanding of photography principles. The lighting in this image is the biggest tell for me personally: I can’t visualize any lighting setup that could cast shadows in the directions shown in this picture, so it instinctively looked wrong to me at first sight because of the impossible light sources.
That’s the reason the picture looks WRONG, even if you can’t put your finger on why it looks wrong.
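To put that intuition in concrete terms: under a single distant light source, cast shadows on a flat surface all point roughly the same way (ignoring perspective convergence), so two shadows that diverge sharply imply light sources that can’t coexist. A toy check, with made-up example vectors:

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2D shadow directions."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_theta = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# Two cast-shadow directions traced off an image (hypothetical values).
shadow_a = (1.0, 0.2)
shadow_b = (-0.4, 1.0)

# Under one distant light these should be near-parallel; a large angle
# means the image implies lighting that no single setup could produce.
print(f"shadows disagree by {angle_between(shadow_a, shadow_b):.0f} degrees")
```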
I only focused on the nonsensical background clutter because I think it’s the easier tell for people who don’t work around cameras all day.
This is what makes this technology anxiety-inducing at best…
So you yourself have no issue seeing the artificiality of the image, thanks to your extensive exposure to and knowledge of photographic principles. That’s fair… that said, I’ve read your earlier comment about the various issues with the photo as well as this one about light sources, and I keep going back to scrutinize those elements, and… for the life of me… I cannot pick out anything in the image that, to me, absolutely screams artificial.
I’m fairly sure most people who look at these verification photos would be in a similar boat to me. Unless there’s something glaringly obvious (malformed hands, eyes in the wrong place, a sudden Cthulhu-esque eldritch thing unnaturally prowling the background holding a stuffed teddy bear), I feel most people would accept an image like this at face value. Alternatively, you’ll get those same people so paranoid about AI-generated fakes that they’ll falsely flag a real image because of one or two elements they can’t see clearly or have never seen before.
And this is only the infancy of AI-generated art. Every year it gets better. In a decade, unless there are some heavy limitations on how the AI is trained (and only public models would ever really have those limitations, since private models would be trained on whatever their developers saw fit… to shreds with what artists and copyright say), there would probably be no real way to tell a real image apart from a fake at all… photographic principles and all.
That’s not really the case, but more to the point, the gap is closing at a blistering pace. Roughly two years ago this stuff was in the distant future. One year ago the lid was blown open. Today we’re seeing real-time frame generation. Rallying against the tech is misguided. It needs to be embraced and understood. Trying to do otherwise is great folly: we’ll only fall even further behind, which leads to even larger misunderstandings. This isn’t theoretical. It’s already here. We can’t bury our heads in the sand.
If you look at Gaussian splatting and diffusion morphs/videos, this is merely in the territory of “not broadly on Hugging Face yet”; it’s not impossible, or even particularly difficult, depending on the gesture.
We’re months away from fully posable and animatable 3D models built from these AI images. It already exists in demos and on arXiv; it runs on consumer hardware, just not in real time. So a video upload would work, but a live stream would require renting a cloud GPU ($$$).
Having an AI act out random gestures is really not that different from generating an image from a prompt, if you think about it. The temporal element has already been done; the biggest factor right now is probably that it’s too computationally heavy to do in real time, but I can’t see that remaining a problem for more than a year.
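To put rough numbers on “too heavy for real time” (every figure below is an assumption for the sake of the arithmetic, not a benchmark):

```python
# Back-of-envelope: can a generator keep up with a live video challenge?
# All numbers are illustrative assumptions, not measurements.
fps_required = 24                      # typical webcam frame rate
frame_budget_ms = 1000 / fps_required  # ~42 ms per frame

gen_time_ms = 500                      # assumed per-frame generation time

shortfall = gen_time_ms / frame_budget_ms
print(f"budget {frame_budget_ms:.0f} ms/frame, generation {gen_time_ms} ms/frame")
print(f"generator is ~{shortfall:.0f}x too slow to fake a live stream")
```

That gap of roughly one order of magnitude is why a pre-rendered upload already works while a live challenge still doesn’t.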
More than that: these systems will eventually figure out how to not botch the background so obviously. Then what? As others have said, we could switch to verification videos. That buys an extra year or two.
I think so. I don’t think there would be more than a few dozen verifications to do every day; with a dozen mods, it seems doable in this context. It’s not like millions of users are requesting verification every day.
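The arithmetic is modest (plugging in “a few dozen” as 36, an assumed figure):

```python
# Rough per-moderator workload, using the figures from the comment above.
verifications_per_day = 36   # "a few dozen" -- assumed value
mods = 12                    # "a dozen mods"
print(verifications_per_day / mods, "photos per mod per day")  # 3.0
```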
The point isn’t that you can spot it.
The point is that the automated system can’t spot it.
Or are you telling me there’s a person looking at every verification photo, and that they thoroughly scan each one for imperfections?
…I feel like this isn’t the first time I’ve heard that kind of “near impossible for AI” claim.
Interesting times :D
The system doesn’t even need to get better at backgrounds; you just generate more images until one looks good.
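That’s just rejection sampling, and the odds stack up quickly. Even if only 1 in 10 generations had a clean background (an assumed rate), a handful of retries makes a convincing result near-certain:

```python
# Chance that at least one of n generated images has no obvious flaws,
# given a per-image "clean" probability p (both values are assumptions).
def pass_probability(p, n):
    return 1 - (1 - p) ** n

for n in (1, 5, 10, 20):
    print(f"{n:>2} attempts: {pass_probability(0.1, n):.0%}")
# 1: 10%, 5: 41%, 10: 65%, 20: 88% -- retrying beats improving the model.
```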