I never understood how they were useful in the first place. But that’s kind of beside the point. I assume this is referencing AI, but due to the fact that you’ve only posted one photo out of apparently four, I don’t really have any idea what you’re posting about.
The point of verification photos is to ensure that nsfw subreddits only host posts made with consent. Many posts were just random nudes someone had found, where the subject was not ok with having them posted.
The verification photos show an intention to upload to the sub. A former partner wanting to upload revenge porn would not have access to a verification photo. They often require the paper be crumpled to make it infeasible to photoshop.
If an AI can generate a photorealistic verification picture, it cannot be used to verify anything.
I didn’t realize they originated with verifying nsfw content. I’d only ever seen them in otherwise text-based contexts. It seemed to me the person in the photo didn’t necessarily represent the account owner just because they were holding up a piece of paper showing the username. But if you’re matching the verification against other photos, that makes more sense.
It’s been used way before the nsfw stuff and the advent of AI.
Back in the day, if you were doing an AMA with a celeb, the picture proof was the celeb telling us this was the account they were using. It didn’t need to be their account, and it was only useful for people with an identifiable face. If you were doing an AMA because you were some specialist or professional, giving your face and username didn’t do anything; you needed to provide paperwork to the mods.
This is a poor way to police fake nudes though, I wouldn’t have trusted it even before AI.
It used to be tits or GTFO on /b/.
From now on I’ll have amazing tits.
Was it really that hard to Photoshop well enough to bypass mods who are not experts at photo forensics?
Probably not, but it would still reduce the amount considerably.
I think it takes a considerable amount of work to photoshop something written on a sheet of paper that has been crumpled up and flattened back out.
If you have experience with the program, it’s piss easy.
However most people do not have experience.
You also have to include the actual person holding something that can be substituted for the paper.
Sort of. You just need the elbow/shoulder in vaguely the correct position while facing the camera. You can get away with photoshopping in different arms, and most people wouldn’t notice if you do it correctly.
So you need a guy with such experience on your social engineering team.
It’s mostly about filtering the low-hanging fruit, aka the low effort trolls, repost bots, and random idiots posting revenge porn.
As in most things. I don’t have security cameras to capture video of someone breaking in. I have them so my neighbour’s house looks like an easier target.
There are a lot of tools to verify whether something was photoshopped… you don’t need to be an expert to use them.
I tried some once and didn’t find any that are actually easy for a noob. I remember having to check resolution, contrast, spatial frequency disruption, etc., and none of it looked easy to interpret without proper training.
I wouldn’t just go around telling people that…
Can you share more? Never had to use one.
You can check for resolution changes across a video or photo. This can be overcome by downsampling everything to match the lowest-resolution item in the picture, but most people go with what looks best instead of a flat level.
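For anyone curious what these tools actually do under the hood: one of the simpler techniques is Error Level Analysis (ELA), which re-saves the photo as JPEG and amplifies the difference against the original; regions pasted in after the last save often compress differently and stand out. This is just a minimal sketch using Pillow, not any specific tool from the thread, and the function name is my own:

```python
# Hedged sketch of Error Level Analysis (ELA) using Pillow.
# Function name and parameters are illustrative, not from any real tool.
from PIL import Image, ImageChops
import io

def error_level_analysis(path, quality=90):
    """Re-save the image as JPEG and diff it against the original.
    Pasted-in regions often show a different error level than the
    rest of the image, so they pop out once amplified."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")
    diff = ImageChops.difference(original, resaved)
    # The raw differences are tiny; scale them up so they are visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * 255 // max_diff))
```

As the comment above notes, though, interpreting the output (which bright regions are edits vs. normal compression noise) is the part that takes training.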
I was going to suggest using an artifact overlay to suggest all the images were shot by the same lens on the same camera
On a side note, they are also used all the time for online selling and trading, as a means to verify that the seller is a real person who is in fact in possession of the object they wish to sell.
How does traditional - as in before AI - photo verification know the image was not manipulated? In this post the paper is super flat, and I’ve seen many others like it.
From reading the verification rules from /r/gonewild they require the same paper card to be photographed from different angles while being bent slightly.
Photoshopping a card convincingly may be easy. Photoshopping a bent card held at different angles that reads as the same in every image is much more difficult.
That last thing will still be difficult with AI. You can generate one image that looks convincing, but generating multiple images that are consistent? I doubt it.
The paper is real. The person behind it is fake.
Curious how long it’ll be until we start getting AI 3D models of this quality.
I feel like you could do this right now by hand (if you have experience with 3d modelling) once you’ve generated an image. 3d modelling often includes creating a model from references, be they drawn or photographs.
Plus, I just remembered that creating 3d models of everyday objects/people via photos from multiple angles has been a thing for a long time. You can make a setup that uses just your phone and some software to make 3d-printable models of real objects. Nothing prevents someone from using a series of AI-generated images instead of photos they took, as long as the series is consistent enough to get a base model; you can then do some touch-up by hand to fix anything the software messed up. I remember a famous lady in the 3d printing space who I think used this sort of process to make a complete 3d model of her (naked) body, and then sold copies of it on her Patreon or something.
Just ask for multiple photos of the person in the same place. AI has a hard time with consistency between images, so in each picture the room items will change, the face will change a bit (maybe a lot), hairstyles will change, etc.
I found this singular screenshot floating around elsewhere, but yes r/stablediffusion is for AI images.
I had some trouble figuring out what exactly was going on as well, but the Stable Diffusion subreddit gave away that it was at least AI related, as that’s one of the popular AI programs. It wasn’t until I saw the tag, though, that I really understood: Workflow Included, meaning that the person included the steps they used to create the photo in question. Which means the person in the photo was created with the AI program and is fake.
The implications of this sort of stuff are massive too. How long until people are using AI to generate incriminating evidence to get people arrested on false charges, or the opposite - creating false evidence to get away with murder?
If you look closely, her freckles are smeared and her background furniture is abstract.
Pretty sure it started because nsfw subreddit mods realized they could demand naked pictures of women that nobody else had access to, and it made their little mod become a big mod.
Verification posts go back further than Reddit.
They were used extensively on 4chan, because they were the only way to prove that the person posting was in fact that person. And yes, it was mostly people posting nudes, but it was more that they wanted credit.
The reason it carried on to Reddit was because people were using the accounts to advertise patreon and onlyfans, and mods mostly wanted the people making money off the pictures to be the people who took those pictures.
Also it was useful for AMA posts and other such threads where a celebrity was involved.
4chan was a bit different in that it was anonymous to begin with - and more to the point, it was self-volunteered verification, not a mod-driven requirement.
As for reddit, mods were requiring private verification photos LONG before patreon and onlyfans even existed in the first place.
AMAs, agreed.
“No no it’s not about consent it’s about someone being horny” is such a bad take… and bad taste.
I hate to break this to you, but there were in fact subreddits that publicly stated they required you to privately DM mods full-body, full-face nudes in poses of the mod’s choice for verification.
That ain’t me being in bad taste, it’s just me doing basic observation. For some subreddits it was about verification, yes. For some it was about consent. For some it was about the mods being horny. And for most of them, it was some combination of the three.
To pretend that it didn’t happen is… well, casual erasure of sexual misconduct of the mods, frankly.
Her freckles are smeared and her background furniture is abstract.
Look at this verification
Every night it even makes me legitimize
Ha perfect!
Is that Chad Kroger from Nickelback?
The “look at this photograph” meme is based on a Nickelback song, yes.
Thank you.
Is that Joey with something on his head?
How did his eyes get so red?
Due to having so many people trying to impersonate me on the internet, I’ve become somewhat of an expert on verification pictures.
You can still easily tell that this is fake because, if you look closely, the details, especially the background clutter, are utterly nonsensical.
- The object over her right shoulder (your left), for example, looks like someone blended a webcam with a TV and a nightstand.
- Over her left shoulder (your right), her chair exists only on that one side and blends into the counter in the background.
- Is it a table lamp or a wall-mounted light?
- The doorframe in the background behind her head isn’t even aligned.
- Her clavicles are asymmetrical; I’ve never seen that on a real person.
- Her wispy hair strands. Real hair doesn’t appear out of thin air in loops.
The point isn’t that you can spot it.
The point is that the automated system can’t spot it.
Or are you telling me there’s a person looking at every verification photo, and that they’d thoroughly scan each one for imperfections?
The idea of using a picture upload for automated verification is completely unviable. A much more commonly used system would be something like telling you to perform a random gesture on camera on the spot, like “turn your head slowly” or “open your mouth slowly” which would be trivial for a human to perform but near impossible for AI generators.
but near impossible for AI generators.
…I feel like this isn’t the first time I heard that statement before.
It’s not that difficult to identify if you have a good understanding of photography principles. The lighting in this image is the biggest tell for me personally, since I can’t visualize any lighting setup that could cast shadows in the directions shown in this picture. It just instinctively looks wrong at first sight because of the impossible light sources.
That’s the reason the picture looks WRONG, even if you can’t identify the reason why it looks wrong.
I only focused on the nonsensical background clutter because I think it’s easier for people who don’t work around cameras all day.
This is what makes this technology anxiety inducing at best…
So, for yourself, you have no issues seeing the artificiality of the image due to your extensive exposure to and knowledge of photographic principles. This is fair… that said, I have read your earlier comment about the various issues with the photo as well as this one about light sources, and I keep going back to scrutinize those elements, and… for the life of me… I cannot pick out anything in the image that, to me, absolutely screams artificial.
I’m fairly sure most people who look at these verification photos would be in a similar boat to me. Unless there’s something glaringly obvious (malformed hands, eyes in the wrong place, a sudden Cthulhu-esque eldritch thing unnaturally prowling the background holding a stuffed teddy bear), I feel most people would accept an image like this at face value. Alternatively, you’ll get those same people so paranoid about AI-generated fakes that they’ll falsely flag a real image as fake because of one or two elements they can’t see clearly or have never seen before.
And this is only the infancy of AI-generated art. Every year it gets better. In a decade, unless there are some heavy limitations on how the AI is trained (which only public models would ever really have, as private models would be trained on whatever their developers saw fit… to shreds with what artists and copyright holders say), there would probably be no real way to tell a real image from a fake at all… photographic principles and all.
Interesting times :D
near impossible for AI generators
That’s not really the case, and moreover the gap is closing at a blistering pace. Approximately two years ago this stuff was in the distant future. One year ago the lid was blown open. Today we’re seeing real-time frame generation. This rallying against the tech is misguided. It needs to be embraced and understood. Trying to do otherwise is great folly, as everything will fall even further behind and lead to even larger misunderstandings. This isn’t theoretical. It’s already here. We can’t bury our heads in the sand.
If you look at Gaussian splatting and diffusion morphs/videos, this is merely in the space of “not broadly on Hugging Face yet” and not impossible, or even difficult depending on the gesture.
We’re months away from fully posable and animatable 3d models of these AI images. It already exists in demos and on arxiv, it runs on consumer hardware but not in realtime, so a video upload would work but a live stream would require renting a cloud GPU ($$$).
Having an AI act out random gestures is really not that different from generating an image based on a prompt if you think about it. The temporal element has already been done, the biggest factor right now is probably that it’s too computationally heavy to do in real time, but I can’t see that being a problem for more than a year.
More than that - these systems will eventually figure out how to not botch the background so obviously. Then what? As others have said, we could switch to verification videos. That will buy an extra year or two.
The system doesn’t even need to get better at backgrounds, you just generate more images until one looks good.
I think so. I don’t think there would be more than a few dozen verifications to do every day; with a dozen mods, it seems doable in this context. It’s not like millions of users are asking for verification every day.
Margot Robbie
Due to having so many people trying to impersonate me on the internet
Uh huh.
That’s esteemed Academy Award nominated verification picture expert/character actress Margot Robbie to you!
Now watch me win my Golden Globe tonight. (Still no best actress… sigh)
So Margot Robbie is obsessed with Android News eh?
Acting is only her secondary passion.
Shitposting on obscure Internet technology forums is my true passion.
You sure you’re as dedicated to the awards as an actress as you are about posting to lemmy about android tech?
I’m hoping you win, as for best actress, they’re fools to not award you with that. So talented.
Keep doing what you’re doing.
Two wins for Barbie is still winning, I guess.
On to the Oscars!
Her clavicles are asymmetrical, never seen that on a real person.
Shit, are you telling me that every time I see myself in the mirror I’m actually looking at a string of AI generated images, generated in real-time? The matrix is real. 😱
It’s either that, or my clavicles are actually very asymmetric. ☹️
What I meant is that her right clavicle (your left) is about an inch higher than her left.
I could be wrong, of course, but I imagine if that condition actually exists, then it would be extremely painful.
You’re reaching. I don’t think this is “easy” to tell as you’re making it at all. You’re benefiting enormously from knowing the results before you begin extrapolating.
I was agreeing with everything you’ve said, but I was in a pretty nasty bike accident years back which dislocated my clavicle. It now sits about half an inch higher, mainly on the neck side. I was freaked out at first, but the doctor said to just live with it, so it can happen.
I’m sorry. ☹️
No worries. I was trying to say that it doesn’t hurt anymore at all, it’s mainly just a cool story to tell and show the height difference between the two.
Yeah, I see what you mean, but my shoulders look almost exactly like that. Doesn’t hurt at all, just very annoying when carrying a backpack as the straps will always tend to slide off from my ‘drooping’ shoulder.
But I agree with your comments about the background, that looks like a fever dream. And of course my situation isn’t the norm, so the shoulders/clavicles can be treated as a red flag, it’s just not definite proof and care should be taken to realise some people might actually just be built weird.
Due to so many people trying to impersonate me on the Internet
Yeah see, now I am not really sure if you’re the real Margot Robbie.
Could you send me a verification picture?
But then how will I astroturf (I mean, organically market) my current and future movies, like Golden Globe winning summer blockbuster, Barbie, now available on Blu-Ray and select streaming services, here if I get verified?
There’s already an AI generated one in this post (you didn’t specify that it be her or legitimate).
Every time I’d seen this photo, I only focused on the subject in the foreground. If I were the one verifying that the person in the photo is real, I’d have fallen for it. To me, the subject is entirely convincing. The issues you mentioned about the clavicles and hair, I think, kind of make it a bit more convincing. Nobody is completely symmetrical, for one, so seeing something like that, while not common, wouldn’t necessarily be uncommon. The hair, to me, just looks like normal person hair. Sometimes hair do be like that.
Dude same, before I even read anything I was thinking ‘that’s a cute girl I didn’t know they started doing verifications on lemmy’ then I read and saw the whole hullabaloo.
I’m not seeing the levitating hair
Me neither. There’s clearly more pictures that aren’t included here, so maybe on one of those?
The odd thing about the hair in that picture, to me, is that on the left side of the photo there’s one piece that takes a nearly 90-degree bend mid-air for seemingly no reason. I don’t generally see hair get… kinked like that. I suppose it’s not outside the realm of possibility, but it’s odd at least.
The rest of the hair seems fine to me, but I’m no expert.
I will note however that the object(s) in the background on the left side of the photo look like a gigantic (novelty sized) point and shoot camera from the 90’s. The box on top is the viewfinder and there’s the impression of a circle below that which would be the lens.
Just makes me giggle at the thought of such a large disposable camera.
Curly hair can look like that when it’s curling tightly towards/away from you. It looks fairly natural to me.
Didn’t get the 5th point, there’s only one clavicle visible, am I missing something?
Even so clavicles can be asymmetrical due to previous injury. We are pretty asymmetrical overall if you look closely enough.
Well she’s Margot Robbie, so her clavicles are symmetrical af. She probably just assumes the rest of us are like that too 😓
An easy ‘solution’ to fix the background is to just use a mild blurring tool. They’re verifying you, not your house; it wouldn’t be sus to just have a mild messy blur around you.
The bokeh effect is surprisingly hard to fake, actually, because it has to do with the physical properties of the camera lens. I think with a light Gaussian blur it would be even less convincing.
The “holes” on her cheeks are easy to miss but seriously unsettling close up. They’re not like freckles or blackheads but more like what termite tunnels look like in wood.
Nah. They just look like big pores. There are a few giveaways here that it’s AI generated, but the pores aren’t one of them.
Source: have big pores. Also, google images.
I’m pretty sure we can just switch to a verification video chat which will buy us a year.
One year? I’m guessing six months, what a time to be alive!?!
two more papers down the line!
Nope, 1 month ago: https://humanaigc.github.io/animate-anyone/static/videos/demo11.mp4
Her number of fingers changes during rapid movement, the heels on her shoes continuously change width, and her eyes also distort randomly in a creepy way.
Yes these are all normal human traits that all us humans have.
If you’re still distressed or confused please visit your local station and let them know you’re worried about distorted fingers and eyes, they’ll fix you <3
Yes, it’s not perfect, and neither is OP’s (nonsensical background), but it’s already very impressive. I have little doubt it will become indistinguishable soon.
I think AI can already do videos with people in them. Not with it looking completely natural, though, so there will be some discrepancies.
AI video still looks like a fever dream. The AI can’t keep details consistent, especially in the background, from frame to frame. There are always parts that morph and look like they were conjured up by Van Gogh during a maniacal delirium. Maybe in a couple of years, and with some human grooming in the middle.
Not in real time with responsive replies.
Its purpose is also to reduce the chances.
If somebody is going to go to all the trouble of fooling a human, they probably aren’t going to just start spamming random pictures on the community for an instant moderator ban.
At some point the only way to verify someone will be to do what the Klingons did to rule out changelings: Cut them and see if they bleed.
Don’t worry, companies like 23andMe and Ancestry have been banking DNA records, so mimicking blood won’t be too hard, either.
I think that stopped working as of Picard season 3.
Wait really? Haven’t seen Picard, what happened?
Two seasons of stupid garbage followed by Season 3 which just ignored the garbage and made things right.
Well, changelings bleed for example
Yeah but isn’t it just the changelings that Starfleet did all the fucked up experiments on that bleed?
technically yes, but IIRC they were pretty much the only changelings remaining since the others were wiped out in the war? I missed a few Star Trek shows/alternate timelines etc so I might be wrong.
In the ending of DS9, Odo went to the changeling homeworld and rejoined the great link and cured them of their disease. And he kinda made them more mellowed out or something like that. In Picard S3, Worf mentions something about how his friend (obviously Odo) sent word it wasn’t those changelings involved in the plot. So Odo and all the Gamma quadrant changelings are fine, it’s just the ones that were experimented on by Starfleet that we see in Picard S3.
So they’ve gotten better at mimicking other life forms? Is that the canonical reason in the show or is Picard just going against established lore?
I only watched a little of Picard before I gave up. But if I had to guess, it’s that the writers never watched any Star Trek, so they get a lot wrong.
He got old… really old
The Thing migrated to Star Trek?
Ye, good thing I already have spychecking as a persistent habit!
Pyro has entered the chat.
Can confirm, I made some random Korean dude on DALL-E to send to Instagram after it threatened to close my fake account, and it passed.
The eyes are in the exact same spot in every photo
I got one where they were facing to the right, so not always
It’s gotten a lot better with teeth. Last I looked at that site they were very misaligned. It was very Uncanny Valley.
Edit: ok, this one’s a bit whack:
Image: Close up of a man’s mouth. The teeth look 2D, and continue endlessly in a straight row behind his lips; there is too little curvature to indicate they are connected to a jawbone.
Why would they require a photo? What the heck?
Social media companies must have to charge less if several of the “people” seeing ads are actually bots.
Hahaha, those jerks!
Ooooh, an actually interesting use of AI: preserving anonymity.
Y’all just trying to recreate the idea of digital avatars in here
In the dark future, an underground market has formed to preserve the anonymity and privacy of the average person using holographic disguises of anthropomorphic figures that were in the distant past sometimes known as “furries.”
Ah yes, even in the dark future, furries are making super advanced and useful technologies to be more furry.
There are projects that already exist for this sort of purpose; one I came across a while ago was DeepPrivacy, which uses deepfakes to replace your face and body in an image with ones that are AI generated.
This is making me think of A Scanner Darkly. Check out the movie if you haven’t.
Agreed but skip the movie and read the book. 1000x better.
That movie temporarily cured my insomnia.
Same here. I’ve tried to watch it a few times, I want to like it, but I just can’t get through it.
I’ve had an AI-generated mix between my face and an actor’s as my Facebook profile pic for a little over a year now, I think, or close to it, and only my sister has called me out on it.
I mean I wouldn’t bring it up to most people with how much some people shop their shit before uploading.
Yeah I wouldn’t either. I still think it’s pretty funny though
Bro there are people who post entire timelines of themselves in completely anime ai generated imagery
And they are living better than I am.
I’m in the same boat. I basically want to wear an ai mask. I don’t like cartoon face trackers or similar. I don’t have the hardware to render a video though, and I’m not going to buy server time.
People use some Snapchat filters like this, like the anime face one I see a lot.
Google automatic1111, it’s the program to run if you want to generate AI images. You can put in the original photo, use the built in editor and request the face of a pretty man/woman/elephant (for all I care) and it’ll generate a face and merge it with the surrounding image perfectly.
Requires a graphics card with a few gigabytes of vram though, so there is a certain hardware requirement if you want to do this locally.
Smart! Just today I took a very funny photo, but don’t want to break the anonymity of anyone in there.
I really like “Bitmoji” on my iPhone as an interesting start in that direction. I can create my avatar, whether as similar to me or not, and use it as a filter on FaceTime where it follows a lot of my actual movement and expressions
Once again everyone on the internet is a cute girl if they want to be.
Or a cute cat.
Or Elvis.
And then there is me. I’m all of the above.
Ah, an Nyanvis.
If this is what we get, fine. She’s hot.
It’s the return of 16/f/cali
What am I looking at here?
A genAI-made image of a verification post. The point, I guess, is that with genAI photos anyone can easily make a fake verification post, making them less useful as a means to verify identity.
The post originally is from reddit (https://www.reddit.com/r/StableDiffusion/s/fEle6uaiR7)
Thanks, I thought that’s what it was but wasn’t sure.
Why are people making verification posts on social media?
I remember it being used on the “roast me” sub, so the person verified they were actually the person in the image asking to be roasted.
Example: When famous people do an AMA. E.g. Obama did an AMA on Reddit and he was verified with a photo that would be very easy to fake today using AI.
So you can prove that you’re black to post on BlackPeopleTwitter
I believe it is most often for nsfw purposes, but what do I know about such things *nervous*
There are plenty of people making Instagram and OF accounts of fully AI girls. They are hilariously fake, but looking at some of the thirsty comments their posts get, I’m inclined to say that subreddits like /r/AmIUgly and /r/rateme are likely to end up with lots of verification posts that result in lots of scams.
Although, as already pointed out, verification posts have always been easy for people to scam with Photoshop.
I can finally realise my dream of commenting on r/blackpeopletwitter
Yup. This is already a thing
Even before Stable Diffusion or other publicly available AI generators, there was https://www.thispersondoesnotexist.com which generates a random photo of a human every time you reload the page.
The text on the paper is pretty important. Before the new tech you’d still need decent photoshop skills to make it work (or pay someone to do it for you).
… and come to think of it, with photoshop you could just find a real picture of some random person holding paper and change the text
Very rapidly the basis of truth in any discussion is going to get eroded.
Micro-communities based on pre-(post-truth) connections. Only allowing people into the community who can be confirmed by others?
I’ve been thinking of starting a Matrix community to get away from Discord and its inevitable botting.
Isn’t there a trick where you can ask someone to do a specific hand gesture to get photos verified? That’ll still work, especially because AI makes fingers look wonky.
AI has been able to do fingers for months now. It’s moving very rapidly so it’s hard to keep up. It doesn’t do them perfectly 100% of the time, but that doesn’t matter since you can just regenerate it until it gets it right.
“For your verification please close left eye and run two fingers through your hair while eating a cauliflower with whipped cream. Attach a paperclip to your left ear and write your username on your forehead using an orange marker.”
And then if the user has done everything requested, you know the photo is generated, because nobody in their sane mind would do all that 😂
Exactly! If you reply “Fuck you”, they know you’re a human.
How to eradicate your userbase…
You could probably just set up a time for the person to send a photo, and then give them a keyword to write on the paper, and they must send it within a very short time. Combine that with a weird gesture and it’s going to be hard to get a convincing AI replica. Add another layer of difficulty and require photos from multiple angles doing the same things.
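The timed keyword part of that scheme is easy to sketch: the mod issues a random phrase with a short deadline, and a forger would have to produce a convincing image containing that exact phrase before it expires. A minimal stdlib-only sketch (the function names, word list, and TTL are all my own illustrative choices):

```python
# Hedged sketch of a timed challenge-response for photo verification.
# Names, word list, and TTL are illustrative assumptions.
import secrets
import time

WORDS = ["otter", "maple", "quartz", "violet", "ember", "drift"]

def issue_challenge(ttl_seconds=120):
    """Issue a random phrase the user must write on the paper,
    plus a deadline measured on a monotonic clock."""
    phrase = "-".join(secrets.choice(WORDS) for _ in range(3))
    return {"phrase": phrase, "expires": time.monotonic() + ttl_seconds}

def check_response(challenge, submitted_phrase):
    """Accept only if the phrase matches and the deadline hasn't passed."""
    if time.monotonic() > challenge["expires"]:
        return False
    return submitted_phrase == challenge["phrase"]
```

The deadline is what does the work: as generation gets faster, the TTL has to shrink, which is exactly the arms race the rest of the thread describes.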
LoRAs can be supplied to the AI. These are add-on models trained on specific ideas, like certain hand gestures or lighting levels; whatever style you need, you can fine-tune the general model with LoRAs.
I have the minimum requirements to produce art and HQ output takes 2 minutes. Low-quality only takes seconds. I can fine-tune my art on a LQ level, then use the AI to upscale it back to HQ. This is me being desperate, too, using only local software and my own hardware.
Do this through a service or a gpu farm and you can spit it out much quicker. The services I’ve used are easy to figure out and do great work for free* in a lot of cases.
I think these suggestions will certainly be barriers, and I can think of some more stop-gaps, but they won’t stop everyone from slipping through the cracks, especially as passionate individuals hyper-focus on technology the rest of us only think about in passing and keep working on it.
Simpler thing is to just have the user take a video. I’ve already seen that in practice.
With a shoe on their head and a sharpie up their ass.
A sharpie is a poor and dangerous anal simulator. It is too easy to be sucked in.
Never put things into your bum unless they have a flange
I think the real problem with this as anal simulation is it looks and feels nothing like an anus
But can’t you just poop it out again?
I feel like there’s a way to get around that… like, if you really wanted, some sort of system to Photoshop the keyword onto the piece of paper. This would allow you to generate the image but also not have to worry about the AI generating that.
Edit: also does anyone remember that one paper that had to do with a new AI architecture where you could put in some sort of negative image to additionally prompt an AI for a specific shape, output, or position.
Just write on paper and overlay it via Photoshop. Photopea literally has a one-click function for that; very easy to do. Just blank paper and a picture with enough light. Very easy.
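To illustrate just how low the bar for this overlay is: even without Photoshop or Photopea, a few lines of Pillow will stamp a keyword onto the paper region of a photo. This is a toy sketch (my own function name and coordinates; a real forger would match handwriting, lighting, and paper texture), but the principle is the same one-step overlay:

```python
# Toy sketch of the keyword-overlay trick using Pillow.
# Function name and coordinates are illustrative assumptions.
from PIL import Image, ImageDraw

def overlay_keyword(photo_path, keyword, xy, out_path):
    """Draw the requested keyword over the blank-paper region at `xy`.
    Uses Pillow's default bitmap font, so the result is obviously
    typed text; matching handwriting is the forger's remaining work."""
    img = Image.open(photo_path).convert("RGB")
    ImageDraw.Draw(img).text(xy, keyword, fill=(40, 40, 60))
    img.save(out_path)
```

Which is why the bent-paper, multiple-angle requirements mentioned earlier in the thread exist: a flat overlay like this falls apart once the paper has to deform consistently across several shots.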
And it’ll get better if loads of verification posts are doing hand signs
“Can you hold up 7 fingers in front of the camera?”
Photo with one hand up
Show us a picture of your hands with the fingers made out of penises
Some AI models have already nailed fingers, so this won’t do anything. We need something that we can verify without having to trust the other person. I hate to say it, but the blockchain might be one of the best ways to authenticate users to avoid bots.
Block chains have no inherent capability to perform user authentication.
Blockchains aren’t exactly the best at proof of personhood. Usually all they can do is make masquerading as multiple people (a Sybil Attack) more expensive.
That’s not to say interesting approaches haven’t come out of blockchain-adjacent work, like https://passport.gitcoin.co/.
How would the blockchain help?
They were always useless
That’s why you need a video with movement. AI still can’t do video right.
It’s getting close, now you can provide a picture of someone and an animated skeleton, and it outputs the person moving according to the reference.
Where do I get an animated skeleton?
Home Depot sells them around October
I mean, you can’t argue with the realism
It’s a seasonal product, you have to wait for October.
Here’s an example by Animate Anyone https://humanaigc.github.io/animate-anyone/static/videos/demo11.mp4
Midjourney will start producing videos in 2024.
https://ymcinema.com/2024/01/04/midjourney-will-start-to-create-ai-videos/
This has been published a month ago https://humanaigc.github.io/animate-anyone/static/videos/demo11.mp4
It can https://youtu.be/8PCn5hLKNu4
It is, however, still cutting-edge research.
Until it can
An arms race is inevitable until someone invents a perfect automated Turing test.
Yet
Never trust your eyes or ears again in this modern digital hellscape! https://youtube.com/shorts/55hr7Tx_7So?si=db5hROJWYjdQRMTD
In ‘Stranger In A Strange Land’ there’s an interesting profession; Fair Witnesses are sworn to provide a disinterested examination of any situation.
I’ve been thinking how much we need this for eight years now, and since the AI explosion it only seems more dire.