
AI fake-face generators can be rewound to reveal the real faces they trained on.

However, that assumes you can get hold of the training data, Kautz says. He and his colleagues at Nvidia have come up with another way to expose private data, including images of faces and other objects, medical data, and more, that does not require access to the training data at all.

Instead, they developed an algorithm that can recreate the data a trained model was exposed to by reversing the steps the model goes through when processing that data. Take a trained image-recognition network: to identify what's in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from abstract edges to shapes to more recognizable features.
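To make that layered processing concrete, here is a minimal sketch (my own illustration, not NVIDIA's code) that runs a pretrained torchvision ResNet-18 one stage at a time and prints the shape of each intermediate activation. The choice of model and the stage boundaries are assumptions made for the example.

```python
# Illustration only: step through a trained classifier stage by stage,
# exposing the intermediate activations each layer produces.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Early stages respond to low-level structure (edges, textures);
# later stages encode progressively more abstract, class-like features.
stages = torch.nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool,
    model.layer1, model.layer2, model.layer3, model.layer4,
)

x = torch.randn(1, 3, 224, 224)  # stand-in for a real photo
with torch.no_grad():
    for i, stage in enumerate(stages):
        x = stage(x)
        print(f"stage {i}: activation shape {tuple(x.shape)}")
```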

Kautz's team found that they could interrupt the model in the middle of these steps and reverse its direction, recreating the input image from the model's internal data. They tested the technique on a variety of common image-recognition models and GANs. In one test, they showed that they could accurately recreate images from ImageNet, one of the best-known image-recognition datasets.
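As a rough illustration of what such a reversal can look like, the sketch below uses a generic feature-inversion approach, not the team's actual algorithm: it truncates a pretrained ResNet-18 at an arbitrary intermediate layer and optimizes a guess image until its internal activations match those produced by a "secret" input. The model, split point, and optimizer settings are all assumptions made for the example.

```python
# Generic feature-inversion sketch (not NVIDIA's method): recover an
# approximation of the input image from intermediate activations.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Truncate the network partway through its layers, i.e. "interrupt the
# model in the middle of these steps".
trunk = torch.nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool,
    model.layer1, model.layer2,
)

secret = torch.rand(1, 3, 224, 224)   # the private input we pretend not to know
target = trunk(secret).detach()       # the internal data the attacker observes

guess = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([guess], lr=0.05)

for step in range(300):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(trunk(guess), target)
    loss.backward()
    opt.step()
    guess.data.clamp_(0.0, 1.0)       # keep the reconstruction a valid image
    if step % 50 == 0:
        print(f"step {step}: feature-matching loss {loss.item():.4f}")
```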

Images from ImageNet (top) alongside recreations of those images made by rewinding a model trained on ImageNet (bottom)
NVIDIA

As in Webster's work, the recreated images closely resemble the real ones. "We were surprised by the final quality," says Kautz.

The researchers argue that such an attack is not merely hypothetical. Smartphones and other small devices are starting to use AI more and more. Because of battery and memory constraints, models are sometimes only half-processed on the device itself, with the partially processed data sent to the cloud for the final computation, an approach known as split computing. Most researchers assume that split computing reveals no private data from a person's phone, Kautz says, because only the model itself is shared. But his attack shows that this is not the case.
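For context, split computing looks roughly like the following toy sketch, in which a standard torchvision MobileNetV2 is cut in two: the first half runs on the device, and only its intermediate tensor is sent to a "cloud" half for the final prediction. The model and the split point are illustrative choices, not details from the article.

```python
# Toy split-computing illustration: device half + cloud half of one model.
import torch
import torchvision.models as models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT).eval()

# "Device half": the first few feature blocks run on the phone.
device_half = model.features[:7]
# "Cloud half": the remaining blocks plus the classifier run on a server.
cloud_half = torch.nn.Sequential(
    model.features[7:],
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    model.classifier,
)

photo = torch.rand(1, 3, 224, 224)        # private image on the device
with torch.no_grad():
    intermediate = device_half(photo)     # only this tensor leaves the phone
    logits = cloud_half(intermediate)

print("tensor sent to the cloud:", tuple(intermediate.shape))
print("predicted class index:", logits.argmax(dim=1).item())
# The attack described above shows that 'intermediate' can be inverted to
# approximate 'photo', so shipping it off-device is less private than it looks.
```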

Kautz and his colleagues are now working on ways to prevent models from leaking private data. "We wanted to understand the risks so we can minimize vulnerabilities," he says.

Although they use very different techniques, Kautz believes that his work and Webster's complement each other well. Webster's team showed that private data can be found in a model's outputs; Kautz's team showed that private data can be revealed by going in the other direction, recreating the inputs. "Exploring both directions is important to better understand how to prevent attacks," says Kautz.
