Video evidence is relatively easy to fix: you just need camera ICs to cryptographically sign their outputs. If the image/video is tampered with (or even re-encoded), the signature won’t match. Since the private key is (hopefully!) stored securely in the hardware IC taking the photo/video, generated images or videos can’t be signed with it.
So however the camera output is being signed, what’s stopping you from signing an altered video with your own private key and then saying “you can all trust that my video is real because I have the private key for it”?
The doubters will have to concede that the video did indeed come from you because it pairs with your key, but why would anyone trust that the key came from the camera step instead of coming from the editing step?
You, the end user, don’t have access to your camera’s private key. Only the camera IC does. When your phone / SD card first receives the image/video, it’s already been signed by the hardware.
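Verification then just needs the camera’s public key. Here’s a minimal sketch of what that check could look like, using Ed25519 via Python’s `cryptography` library; the key handling and bare-signature format are my own assumptions for illustration. In a real deployment the public key would come with a certificate chain proving it actually belongs to a camera IC, which is the part the doubters above are really asking about.

```python
# Minimal sketch of verifying a signed photo/video file.
# Assumptions (illustrative, not a real vendor scheme): the camera ships a
# per-device Ed25519 public key and stores the signature alongside the file.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_capture(media_bytes: bytes, signature: bytes, camera_pubkey_bytes: bytes) -> bool:
    pubkey = Ed25519PublicKey.from_public_bytes(camera_pubkey_bytes)
    try:
        # Raises InvalidSignature if even one bit of the file changed,
        # or if the signature was made with a different private key.
        pubkey.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False  # tampered, re-encoded, or not signed by this camera
```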
You can enter the camera itself as evidence and prove that it was used for other footage. Each camera should have a unique key for this to be effective.
So if you create a new key, it won’t match the one on an existing camera. If you steal the key, then once that’s discovered, the camera should generate a new one.
Actually, polls show that most people are not fond of AI-generated content and want it to be labelled or don’t want it at all.
As for generating your own entertainment at home, see interactive movies. They did not take off because people don’t want to be “working” for their entertainment. That’s their time to relax and not make decisions.
All in all, we’re not as careless as it may seem.
Not to mention those interactive-movie games from the early 90s, which also didn’t take off because they were sorely lacking in the game department
I’d be very interested in these polls if you have some to link!
Just swap security cameras back over to analog, problem solved for video evidence
You could just as easily burn a deepfake onto tape.
Shit
I wonder if personal websites with links to each other, like in the olden days, will start growing in popularity again because of how trust is slowly eroding for anything not in your direct control, and search engines are becoming more and more useless 🤔
But, but, how will we monetize it? How!?
/s
I long for the early 2k internet. So much potential positivity for humanity.
Gotta spin me up a Neocities page…
Same here! This will be the way for me.
Reading this made my eye twitch.
Part of the fun of watching stuff isn’t that it’s “customised to me”, it’s sharing an experience with the creator(s) and friends, family, etc.
I see genAI being used as a tool for creators but not as an automation of content creation.
I don’t think everyone is into that link tho (/j)
I think this vastly overestimates the average person’s ability to recognise or even care to recognise what is AI and what is not.
You’ve got all those videos on Facebook which are BLATANTLY AI and the comment section is split between “wow, amazing!” and “it’s AI you fucking morons”
The latter will eventually leave the platform and the former will be all that’s left.
Then new people will grow up in an environment where it’s only the “wow, amazing” people, and they never hear from the “it’s AI, you morons” people.
We’re a dying breed. There are people alive today who will never know anything other than the post truth world.
Interesting times.
And it is not going to take 10 years. It is right around the corner.
email. gmail already summarizes every mail by default in the US. most emails are bot spam. ppl start using ai bots to answer emails. is that the internet of things?
in fact, this green text was made purely from asking chatgpt what ai will look like in 10 years
Unironically the best greentext I ever read was the bottomless pit one written by AI
That was like 3 years ago when generative AI was fun and whimsical
The first AI green texts made me laugh so much. They managed to perfectly capture the essence of a green text, but because they were dumb they would create the weirdest situations.
The last time I had fun with LLMs was back when GPT2 was cutting-edge: I fine-tuned GPT2-Medium on Twitch chat logs, and it alternates between emote spam, complete incoherence, blatantly unhinged comments, and suspiciously normal ones. The bot is still in use as a toy, specifically because it’s deranged and unpredictable. It’s like a kaleidoscope for the slice of internet subculture it was trained on, much more fun than a plain flawless mirror.
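For anyone curious, that kind of fine-tune is only a few lines with HuggingFace transformers nowadays. This is a rough sketch of the general setup, not my actual script; the file name, output dir, and hyperparameters are made up:

```python
# Rough sketch: causal-LM fine-tune of GPT-2 Medium on a plain text dump
# of chat logs (one message per line). Paths and hyperparameters are
# placeholders, not the setup described above.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          TextDataset, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# TextDataset is the legacy helper; it chunks the file into fixed-size blocks.
dataset = TextDataset(tokenizer=tokenizer, file_path="twitch_chat.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-twitch",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```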
> much more fun than a plain flawless mirror.
yeah agreed! Back in the day I used to generate text for fun with n-grams, and I never went higher than bigrams because anything higher got boring without those unexpected disfluencies. I thought of it as being like an electric guitar: you want it to sound a little raw.
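If anyone wants to mess around with that, a bigram toy is maybe a dozen lines of Python. This is a generic sketch (the corpus path and names are placeholders, not my old script):

```python
# Toy bigram text generator: learn word-to-next-word transitions from a
# corpus, then sample a chain starting from a seed word.
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)  # keeping duplicates preserves frequencies
    return model

def generate(model: dict, start: str, length: int = 30) -> str:
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: the last word never appeared mid-corpus
        out.append(random.choice(choices))
    return " ".join(out)

# Example usage (corpus.txt is a placeholder):
# model = train_bigrams(open("corpus.txt", encoding="utf-8").read())
# print(generate(model, "the"))
```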
Link?
🤯
This has already happened, many years ago. I know this because everyone but me is actually a highly sophisticated robot that resembles a member of my species. I’m onto you.
When I was a kid I had a theory that I’m the only conscious being in the world, and that everyone else is some sort of robot.
I couldn’t share it with anyone, because obviously no one was real but me.
He figured it out. Time to shut it down.
Finally. This iteration was starting to become weird anyway.
You can’t trick me, machine. You can’t convince me that I am the robot and you are the conscious one. It can’t be possible.
This guy bought so many rare monkey tokens. AI is impressive in some respects, but it’s not nearly as impressive as the marketing that drives the massive amounts of investment into it.
The US economy is doing anything it can to create growth, which is causing investors to create a bubble around AI that is “too big to fail”.
you seem to underestimate just how fast ai is growing.
It’s growing fast, yes, but it’s nowhere near actually intelligent or hyperrealistic to the point it’s fooling anyone familiar with the tech.
“Growth for the sake of growth is the ideology of a cancer cell.” - Edward Abbey.
I made every one of these predictions years ago
I personally doubt that will happen, since current models require a lot more data to get better, and we don’t actually have that data. The real danger is what happens once we figure out how to make models without an absurd amount of data.
As well as that, the internet is less reliable since there’s a lot more botshit on it.
TIHI