CISPA researcher develops technique to safeguard against deepfakes

Visualization for the paper “UnGANable: Defending Against GAN-based Face Manipulation”
(c) CISPA

The omnipresence of images on the internet on the one hand and the rapid progress of AI image generators on the other heighten the risk of malicious image manipulation. CISPA researcher Zheng Li and his colleagues have developed and tested a technique that can partially prevent this. The results of their research have been published in the paper “UnGANable: Defending Against GAN-based Face Manipulation” at the renowned USENIX Security conference.

A defining feature of contemporary online communication is the exchange of images on social networks. Once uploaded, these images often remain online for a very long time, which paves the way for manipulative uses. In addition to identity theft, where real images are used to create fake accounts, images can also be fed into AI image generators and used for deepfakes. Deepfakes are image manipulations that the human eye cannot detect. Politicians and celebrities are especially at risk. “Most of the time, the people in these images are not even aware of the manipulation and cannot do anything about it,” CISPA researcher Zheng Li explains. This opens the door to disinformation: “This is why deepfakes pose a real threat to democracy.” Deepfakes can be generated by a number of AI-based mechanisms, for example by so-called GANs.

GAN is short for Generative Adversarial Network and refers to a machine learning model. GANs consist of two artificial neural networks that compete with one another. Simply speaking, one of the two networks generates new data, images for example, while the other compares this new data to an existing data set and assesses the differences between the two. This assessment is fed back to the generating network, which uses it to improve, for example by increasing the similarity between the generated image and a real image. Through this feedback loop, the model as a whole gets better. As image resolution improves and results become ever more photorealistic, more possibilities emerge to manipulate images of real people. This primarily concerns “face manipulation”, which tampers with individual characteristics such as facial expressions or hair color. An important method for the generation of deepfakes is GAN inversion, a special image-processing technique in AI image generators.
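The feedback loop between the two networks can be sketched in a toy one-dimensional setup. This is only an illustration, not the architecture from the paper: the "generator" is a single linear function, the "discriminator" a logistic scorer, and all names and hyperparameters are assumptions chosen for the sketch.

```python
import numpy as np

# Toy GAN sketch (assumed setup, not the paper's): a linear generator
# turns random noise into samples, a logistic discriminator scores
# samples as "real" vs. "generated", and the discriminator's feedback
# is used to improve the generator, exactly as described above.
rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-np.clip(s, -30, 30)))

wg, bg = 1.0, 0.0          # generator:     x = wg * z + bg
wd, bd = 0.1, 0.0          # discriminator: D(x) = sigmoid(wd * x + bd)
mu, lr, n = 4.0, 0.05, 64  # real data ~ N(mu, 1); learning rate; batch size

for _ in range(2000):
    real = rng.normal(mu, 1.0, n)
    z = rng.normal(0.0, 1.0, n)
    fake = wg * z + bg

    # Discriminator step: cross-entropy with real labeled 1, fake labeled 0.
    ds_real = sigmoid(wd * real + bd) - 1.0   # d(loss)/d(score) on real
    ds_fake = sigmoid(wd * fake + bd)         # d(loss)/d(score) on fake
    wd -= lr * np.mean(ds_real * real + ds_fake * fake)
    bd -= lr * np.mean(ds_real + ds_fake)

    # Generator step: use D's feedback to make fakes look real (label 1).
    ds = sigmoid(wd * fake + bd) - 1.0        # d(loss)/d(score) on fake
    dx = ds * wd                              # backpropagate through D's input
    wg -= lr * np.mean(dx * z)
    bg -= lr * np.mean(dx)

print(bg)  # the generator's offset drifts toward the real data's mean
```

After training, the generator's output distribution has moved toward the real data, purely because of the discriminator's assessments being fed back to it.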

Developing UnGANable

“Our starting point was the realization that up to now it has been impossible to prevent deepfakes that are based on GAN inversion,” Li says. “That is why we decided to call our technique UnGANable. Simply speaking, UnGANable tries to protect images of faces against deepfakes.” GANs can only process images after converting them into mathematical vectors, so-called “latent code”. This conversion is called GAN inversion and amounts to a kind of image compression. Using the latent code of a real image, the generator is able to create new images that are deceptively similar to their real-life precursor.
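The idea of GAN inversion, recovering the latent code that reproduces a given image, can be illustrated with a deliberately simplified stand-in: here the "generator" is just a fixed linear map (an assumption for the sketch; real GAN generators are deep networks), and the latent code is found by gradient descent on the reconstruction error.

```python
import numpy as np

# Toy stand-in for a GAN generator: a fixed linear map from a
# 4-dimensional latent code to a 16-dimensional "image" (assumed
# dimensions, chosen only for illustration).
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 4))      # "generator" weights
G = lambda z: A @ z                   # generate an image from a latent code

z_true = rng.standard_normal(4)       # unknown latent code
target = G(z_true)                    # the real image we want to invert

# GAN inversion by optimization: start from a blank latent code and
# run gradient descent on the squared reconstruction error ||G(z) - target||^2.
z = np.zeros(4)
lr = 0.01
for _ in range(2000):
    grad = 2 * A.T @ (G(z) - target)  # gradient of the squared error w.r.t. z
    z -= lr * grad

print(np.linalg.norm(G(z) - target))  # reconstruction error, near zero
```

Once such a latent code is found, feeding slightly modified versions of it back through the generator yields images deceptively similar to the original, which is exactly what makes inversion useful for manipulation.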

The procedure developed by Li and his colleagues obstructs GAN inversion and thus hinders attempts at forgery. UnGANable adds deviations (“noise”) to an image that are invisible to the human eye but maximally disrupt the conversion into latent code. The GAN effectively runs dry because it cannot find any data that it might use to create new images: if no copies of the original image can be created from the latent code, then image manipulation is simply not possible. Test runs of UnGANable against different GAN inversion techniques yielded satisfactory results. Li and his colleagues were also able to show that their method offers better protection than alternative techniques such as the program Fawkes. Fawkes, developed by a research group at the SAND Lab of the University of Chicago, is based on a distortion algorithm that makes changes at the pixel level which are invisible to the human eye.
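The core trick, a small, bounded perturbation that maximally throws off inversion, can be sketched with the same toy linear "generator" as above. This is a minimal illustration of the general idea under assumed names and dimensions, not the paper's actual algorithm: projected gradient ascent pushes the inverted latent code of the perturbed image away from the true one while the perturbation stays within an invisibly small budget.

```python
import numpy as np

# Sketch of the cloaking idea on a toy linear model (assumed setup):
# perturb the image within a tiny budget so that GAN inversion
# recovers a wrong latent code.
rng = np.random.default_rng(1)
A = rng.standard_normal((16, 4))    # toy "generator" weights
P = np.linalg.pinv(A)               # toy inversion: image -> latent code

z_true = rng.standard_normal(4)
image = A @ z_true                  # clean image; P @ image recovers z_true

eps, alpha = 0.05, 0.01             # perturbation budget and step size
delta = rng.uniform(-eps, eps, 16)  # random start inside the budget

# Projected gradient ascent: maximize the latent-code error caused by
# the perturbation, clipping delta back into the budget each step.
for _ in range(100):
    shift = P @ (image + delta) - z_true  # latent error of the cloaked image
    grad = 2 * P.T @ shift                # gradient of ||shift||^2 w.r.t. delta
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)

cloaked = image + delta
print(np.linalg.norm(P @ cloaked - z_true))  # inversion now lands far off
print(np.abs(delta).max())                   # perturbation stays within eps
```

The perturbation never exceeds the budget `eps`, mirroring the requirement that the noise be invisible, yet the latent code recovered from the cloaked image no longer matches the original, so the generator cannot reproduce it.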

Areas of application

Li’s work is an important step toward new defense systems against face manipulation. “Protecting people against malicious manipulation of their images matters to me,” Li says. The code of UnGANable is open source and available to other researchers. Those well-versed in handling code can already use it to protect their images against misuse. The general public, however, will have to wait until corresponding software has been developed. Li has already turned to other projects, but some of his colleagues continue their research on related topics. They want to ascertain, for example, whether the technique behind UnGANable can also be applied to other AI-based procedures that generate images from text input. “Perhaps the technique can also be applied to videos in the future,” Li hopes. In any case, the rapid progress of GAN-based techniques for image generation and manipulation will make the development of such defense systems ever more necessary.

Original publication:

Li, Zheng; Yu, Ning; Salem, Ahmed; Backes, Michael; Fritz, Mario; Zhang, Yang (2023): UnGANable: Defending Against GAN-based Face Manipulation. In: Proceedings of the 32nd USENIX Security Symposium (USENIX Security 23).

https://cispa.de/en/zhengli-unganable

Media Contact

Felix Koltermann, Corporate Communications
CISPA Helmholtz Center for Information Security
