It has become the norm for companies building facial recognition algorithms to scour the web for pictures of people they can use to train their software.
Many of them have built large databases of images for this purpose, drawing on public-domain photos as well as images that people unknowingly allow apps and web pages to publish (because they don't read the clause in the terms of service that waives their right to privacy).
Companies in the facial recognition business use these gathered images to improve the accuracy of their algorithms, subtly eroding personal privacy in the process.
People will always want to take selfies and share them online, but researchers have found a way of cloaking those images so that facial recognition algorithms are left flummoxed when they analyse them.
A group of researchers at the University of Chicago's SAND Lab has developed a technique for tweaking photos so that facial recognition algorithms cannot make heads or tails of them.
The program is called Fawkes, after the Guy Fawkes mask worn by the protagonist of the graphic novel and film V for Vendetta.
Fawkes uses artificial intelligence to subtly alter photos in order to trick facial recognition systems.
While Fawkes makes no visible changes to a photo, it tweaks just enough detail that the photo looks entirely different to facial recognition algorithms, which in turn prevents those algorithms from matching the photo to you.
In practice, when you run your photo through Fawkes, the software makes changes at the pixel level that are invisible to the human eye but sufficient to confuse facial recognition algorithms and stop them from finding a match.
“Our distortion or ‘cloaking’ algorithm takes the user’s photos and computes minimal perturbations that shift them significantly in the feature space of a facial recognition model (using real or synthetic images of a third party as a landmark),” the researchers noted in their report.
“Any facial recognition model trained using these images of the user learns an altered set of ‘features’ of what makes them look like them.”
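The optimization the researchers describe can be illustrated with a toy sketch. This is not the actual Fawkes implementation: the real system perturbs images against a deep face-embedding network, whereas here a fixed random linear map stands in for the feature extractor, and all names (`features`, `cloak`, the dimensions, the step sizes) are hypothetical. The sketch searches for a small, bounded per-pixel perturbation that pulls a photo's embedding toward that of a third-party "landmark" image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "feature extractor": a fixed random linear map.
# (Fawkes targets a deep face-embedding model; this stand-in just lets
# the optimization loop be shown in a few lines.)
D_PIX, D_FEAT = 64, 8
W = rng.normal(size=(D_FEAT, D_PIX))

def features(x):
    return W @ x

def cloak(x, target_feat, eps=0.05, steps=200, lr=0.005):
    """Find a perturbation d with |d_i| <= eps that pulls the embedding
    of x toward target_feat (projected gradient descent on ||f(x+d)-t||^2)."""
    d = np.zeros_like(x)
    for _ in range(steps):
        diff = features(x + d) - target_feat   # error in feature space
        grad = 2.0 * W.T @ diff                # gradient of squared error wrt d
        d -= lr * grad
        d = np.clip(d, -eps, eps)              # keep the pixel changes tiny
    return x + d

user_img = rng.uniform(0.0, 1.0, size=D_PIX)   # the user's photo (flattened)
decoy_img = rng.uniform(0.0, 1.0, size=D_PIX)  # a third-party landmark image
cloaked = cloak(user_img, features(decoy_img))

before = np.linalg.norm(features(user_img) - features(decoy_img))
after = np.linalg.norm(features(cloaked) - features(decoy_img))
print(f"max pixel change: {np.max(np.abs(cloaked - user_img)):.3f}")
print(f"feature distance to decoy: {before:.2f} -> {after:.2f}")
```

The key property, as in the quoted description, is the asymmetry between the two spaces: the perturbation is capped at a small per-pixel budget (here 0.05 on a 0–1 scale), yet the image's position in feature space moves substantially toward the decoy, so a model trained on the cloaked photo learns the wrong "features" for that person.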
The researchers also wrote:
“Fawkes has been tested extensively and proven effective in a variety of environments and is 100% effective against state-of-the-art facial recognition models (Microsoft Azure Face API, Amazon Rekognition, and Face++).”
Although Fawkes is said to be 100% effective against facial recognition algorithms, it does not retroactively alter the photos that have already been added to the databases behind those systems.
The use of Fawkes, the researchers say, protects only future data, not data that has already been processed.