The more complex technology becomes, the easier it is to write these scaremongering stories.
Apple's Face ID is a very different technology from the one used in railway stations to track people's movements.
One stores at the very least 30,000 data points about your face to unlock a device that contains all your money, conversations and other dirty secrets. It's been around for nearly 10 years and it's nigh impossible to fool unless you have an identical twin.
The other uses tech to differentiate faces from each other just well enough, based on (often more than just) the face of one person and the others in the frame, often cross-matched between multiple live camera feeds. This lets security provide the very useful information that before the bombing suspect boarded the 7.03 train, they went to the loo for 78 s, then bought a ticket and something from the vending machine, and walked straight to the train. None of that data can be used as a "key" to unlock anything. At best it can analyse crowds and detect suspicious walking patterns.
The professor who wrote this article is either ignorant or clickbaiting. The supermarket face scanner he describes, storing a template that could be stolen, is not only unlikely to be of a quality good enough for biometrics; it's unlikely that it even happens at all.
Between these articles and all the AI fearmongering posts on Lemmy, we'll soon need to hire debunkers in permanent positions.
Which is why biometrics shouldn't be considered a serious authentication method anymore.