Skanky@lemmy.world to No Stupid Questions@lemmy.world · 3 years ago
From a legal/evidence perspective, what is going to happen when it becomes impossible to tell the difference between a video generated by AI and the real thing?
BlameThePeacock@lemmy.ca · 3 years ago
No need to stream the whole video externally. You could just send the checksums every X minutes and then provide the video with that checksum later. It doesn't entirely stop the problem, though, as you could still insert faked video into the stream. You just couldn't do it retroactively.
fubo@lemmy.world · 3 years ago
Sure, but robbing a store and simultaneously hacking their video feed is harder than robbing a store and retroactively creating fake footage.
BlameThePeacock@lemmy.ca · 3 years ago
I'm not particularly worried about robbery. There are far more sophisticated ways to attack an organization or person.
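The periodic-checksum idea above can be sketched as a hash chain: each chunk's digest folds in the previous digest, so swapping any earlier chunk breaks every later link. This is a minimal illustration with made-up chunk bytes; the off-site channel that receives the digests is assumed, not shown.

```python
import hashlib

def chunk_checksum(chunk: bytes, prev_digest: str = "") -> str:
    """Hash a video chunk together with the previous digest (a hash chain),
    so no chunk can be replaced later without invalidating all later digests."""
    return hashlib.sha256(prev_digest.encode() + chunk).hexdigest()

# Simulated video chunks recorded every X minutes (placeholder bytes).
chunks = [b"chunk-0-frames", b"chunk-1-frames", b"chunk-2-frames"]

# At recording time: send only these short digests off-site.
digests = []
prev = ""
for chunk in chunks:
    prev = chunk_checksum(chunk, prev)
    digests.append(prev)

# Later, in court or an audit: verify the retained full video
# against the digests that were published while it was recorded.
prev = ""
for chunk, expected in zip(chunks, digests):
    prev = chunk_checksum(chunk, prev)
    assert prev == expected, "footage does not match the timestamped digests"
```

As the thread notes, this only prevents retroactive forgery: a faked chunk inserted live would be hashed and published like any other.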