The Strategist

McAfee researchers find vulnerability in face recognition systems

08/12/2020 - 06:43

McAfee researchers say they have found a potential vulnerability in face recognition and identification algorithms. Using specially trained neural networks, they generated an image of one person's face that a recognition system mistook for the face of another person.

In a number of developed countries, human rights defenders and members of the public are already trying to restrict the use of such systems by authorities, who often deploy them with little or no publicity. The McAfee study, however, is not about how to preserve one's anonymity; it shows that facial recognition systems themselves can be vulnerable to attackers.

The paper’s authors experimented with their own faces and managed to get a face recognition system to mistake the face of one of them for the face of the other. To do this, they built a pipeline based on the generative adversarial networks CycleGAN and StyleGAN, which are used to transfer the style of one image onto a photograph or computer graphic. CycleGAN analyzes a source image and can alter its background, texture and other features, transforming it into a different, highly believable image.
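The key idea behind CycleGAN's domain translation can be illustrated with its cycle-consistency loss: translating an image into the target domain and back should recover the original. A minimal numeric sketch, using made-up stand-in functions for the two generators (the real model uses trained convolutional networks):

```python
# Illustrative sketch of CycleGAN's cycle-consistency idea.
# G maps domain A -> B (e.g. horse -> zebra); F maps B -> A.
# Real CycleGAN generators are trained CNNs; these toy stand-ins
# exist only so the loss computation itself is visible.

def G(x):           # stand-in "A -> B" generator
    return [v + 0.5 for v in x]

def F(y):           # stand-in "B -> A" generator
    return [v - 0.5 for v in y]

def cycle_consistency_loss(x):
    """Mean L1 distance between x and F(G(x)); a perfect cycle gives ~0."""
    reconstructed = F(G(x))
    return sum(abs(a - b) for a, b in zip(x, reconstructed)) / len(x)

image = [0.1, 0.4, 0.9]                # a tiny "image" as a pixel vector
print(cycle_consistency_loss(image))   # ~0: these stand-ins are exact inverses
```

During training, this loss is minimized alongside the adversarial losses, which is what forces the translated image to stay believable while preserving the content of the original.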

For example, the system can transfer a character from one computer game into the virtual landscape of another, or turn a photo of a horse into a photo of a zebra so convincing that, without seeing the source, you might take it for a photo of a real zebra.

The StyleGAN network, in turn, can generate highly believable faces of non-existent people from several photographs of real ones.

Researchers at McAfee did something similar with a facial recognition system. They photographed their faces about 1,500 times with different expressions, from different angles and under different lighting, fed these images to the open-source FaceNet neural network, which analyzes anthropometric data, and used CycleGAN and StyleGAN to morph the image of one of their faces into the image of the other, so that the recognition system mistook one researcher for the other.
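A FaceNet-style system maps each face image to a numeric embedding vector and declares two faces the same person when the distance between their embeddings falls under a threshold. The attack works by crafting an image whose embedding lands close to the target's. A hedged sketch with made-up toy 4-dimensional embeddings and an illustrative threshold (real FaceNet embeddings are 128-dimensional, and the threshold is tuned on data):

```python
import math

# Hypothetical sketch of a FaceNet-style verification decision.
# All embedding values and the threshold below are invented for
# illustration, not taken from the McAfee study.

THRESHOLD = 1.0  # illustrative; production systems tune this empirically

def distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_same_person(emb_a, emb_b, threshold=THRESHOLD):
    return distance(emb_a, emb_b) < threshold

victim      = [0.1, 0.9, 0.3, 0.5]    # embedding of the target's real photo
attacker    = [0.9, 0.1, 0.7, 0.2]    # embedding of the attacker's real photo
adversarial = [0.15, 0.85, 0.3, 0.5]  # GAN-crafted image steered toward the victim

print(is_same_person(victim, attacker))     # False: honest photos do not match
print(is_same_person(victim, adversarial))  # True: adversarial image matches
```

The adversarial image still looks like the attacker to a human, but its embedding sits within the match threshold of the victim's, which is exactly the confusion the researchers produced.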

Thus, the researchers argue, an attacker could deceive a face recognition system, for example at automated passport control, by loading another person's face onto the chip of his own passport. Such a trick would not, of course, fool a live security officer, but that was not the goal of the project. The authors wanted to demonstrate that over-reliance on automated systems can be harmful and can harbor vulnerabilities exploitable by attackers.