Spot the Fake: Artificial Intelligence Can Produce Lifelike Photographs

By pitting AIs against one another, tech companies are creating realistic computer-generated images

A team of researchers from Nvidia used an artificial neural network to create progressively lifelike images based on hundreds of thousands of photographs of actual celebrities. Credit: NVIDIA

Fraudulent images have been around for as long as photography itself. Take the famous hoax photos of the Cottingley fairies or the Loch Ness monster. Photoshop ushered image doctoring into the digital age. Now artificial intelligence is poised to lend photographic fakery a new level of sophistication, thanks to artificial neural networks whose algorithms can analyze millions of pictures of real people and places—and use them to create convincing fictional ones.

These networks consist of layers of interconnected nodes, or artificial neurons, arranged in a system loosely modeled on the structure of the human brain. Google, Facebook and others have been using such arrays for years to help their software identify people in images. A newer approach involves so-called generative adversarial networks, or GANs, which pair a “generator” network that creates images with a “discriminator” network that evaluates their authenticity.

“Neural networks are hungry for millions of example images to learn from. GANs are a [relatively] new way to automatically generate such examples,” says Oren Etzioni, chief executive officer of the Seattle-based Allen Institute for Artificial Intelligence.
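
For readers who want to see that division of labor in code, here is a minimal sketch of the two networks, written in Python with the PyTorch library. It is an illustration only, not code from Nvidia, Google or Facebook; the layer sizes, the 28-by-28 image shape and the variable names are arbitrary choices made for this example.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100  # size of the random noise vector the generator starts from

# Generator: maps random noise to a small 28x28 grayscale image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, 28 * 28),
    nn.Tanh(),                     # pixel values scaled to [-1, 1]
    nn.Unflatten(1, (1, 28, 28)),  # reshape the flat vector into an image
)

# Discriminator: maps an image to a single "realness" score between 0 and 1.
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)

noise = torch.randn(16, LATENT_DIM)    # a batch of 16 random starting points
fake_images = generator(noise)         # shape: (16, 1, 28, 28)
realness = discriminator(fake_images)  # shape: (16, 1), one score per image
```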


Yet GANs can also enable AI to quickly produce realistic fake images. The generator network uses machine learning to study massive numbers of pictures, which essentially teach it how to make deceptively lifelike ones of its own. It sends these to the discriminator network, which is simultaneously learning what images of real people look like. The discriminator rates each of the generator's images based on how realistic it is. Over time the generator gets better at producing fake images, and the discriminator gets better at detecting them; hence the term “adversarial.”
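
That feedback loop can be sketched as follows, reusing the toy generator and discriminator defined above. The losses are the standard binary cross-entropy objectives from the original 2014 GAN formulation; the random real_images tensor merely stands in for a batch drawn from a dataset of genuine photographs.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()  # binary cross-entropy: "real" (1) versus "fake" (0)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Random stand-in for a batch of genuine photographs, scaled to [-1, 1].
real_images = torch.rand(16, 1, 28, 28) * 2 - 1

for step in range(1000):
    # Discriminator turn: learn to score real images high and fakes low.
    noise = torch.randn(16, LATENT_DIM)
    fake_images = generator(noise).detach()  # freeze the generator this turn
    d_loss = (bce(discriminator(real_images), torch.ones(16, 1))
              + bce(discriminator(fake_images), torch.zeros(16, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator turn: learn to make images the discriminator scores as real.
    noise = torch.randn(16, LATENT_DIM)
    g_loss = bce(discriminator(generator(noise)), torch.ones(16, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```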

GANs have been hailed as an AI breakthrough because after their initial training, they continue to learn without human supervision. Ian Goodfellow, a research scientist now at Google Brain (the company's AI project), was the lead author of a 2014 study that introduced this approach. Dozens of researchers worldwide have since experimented with GANs for a variety of uses, such as robot control and language translation.

Developing these unsupervised systems is a challenge, however. GANs sometimes fail to improve over time: if the generator cannot learn to produce increasingly realistic images, the discriminator stops getting better as well, and training stalls.

Chipmaker Nvidia has developed a way of training adversarial networks that helps to avoid such arrested development. The key is training both the generator and the discriminator progressively: starting with low-resolution images and then adding new network layers that introduce higher-resolution details as the training progresses. This progressive machine-learning tactic also cuts training time in half, according to a paper the Nvidia researchers plan to present at an international AI conference this spring. The team demonstrated its method by using a database of more than 200,000 celebrity images to train its GANs, which then produced realistic, high-resolution faces of people who do not exist.
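
The fade-in mechanic behind that progressive scheme can be rendered roughly as the following toy sketch. It is a loose illustration of the idea, not the Nvidia team's code: the layer widths, the resolution schedule and the GrowingGenerator class are invented for this example, and the discriminator, which grows symmetrically in the real method, is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GrowingGenerator(nn.Module):
    """Toy generator that starts at 4x4 and grows by doubling resolution."""
    def __init__(self):
        super().__init__()
        self.base = nn.Parameter(torch.randn(1, 32, 4, 4))  # learned 4x4 canvas
        self.blocks = nn.ModuleList()                        # one block per doubling
        self.to_rgb = nn.ModuleList([nn.Conv2d(32, 3, 1)])   # feature maps -> RGB

    def grow(self):
        # Append a new block that doubles the output resolution.
        self.blocks.append(nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(32, 32, 3, padding=1),
            nn.LeakyReLU(0.2)))
        self.to_rgb.append(nn.Conv2d(32, 3, 1))

    def forward(self, alpha=1.0):
        x = self.base
        for block in self.blocks[:-1]:
            x = block(x)
        if not self.blocks:                       # still at the 4x4 stage
            return torch.tanh(self.to_rgb[0](x))
        new = self.to_rgb[-1](self.blocks[-1](x))                # freshly added layer
        old = F.interpolate(self.to_rgb[-2](x), scale_factor=2)  # previous stage
        return torch.tanh(alpha * new + (1 - alpha) * old)      # fade the new layer in

gen = GrowingGenerator()
real_photos = torch.rand(16, 3, 64, 64)  # stand-in for the celebrity dataset
for resolution in (8, 16, 32, 64):
    gen.grow()
    for alpha in (0.0, 0.5, 1.0):  # in practice alpha ramps up smoothly
        fake = gen(alpha)          # image at the current resolution
        real = F.interpolate(real_photos, size=fake.shape[-2:])
        # ...run the usual adversarial update against `real` here...
```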

A machine does not inherently know whether an image it creates is lifelike. “We chose faces as our prime example because it is very easy for us humans to judge the success of the generative AI model—we all have built-in neural machinery, additionally trained throughout our lives, for recognizing and interpreting faces,” says Jaakko Lehtinen, an Nvidia researcher involved in the project. The challenge is getting the GANs to mimic those human instincts.

Facebook sees adversarial networks as a way to help its social media platform better predict what users want to see based on their previous behavior and, ultimately, to create AI that exhibits common sense. The company's head of AI research, Yann LeCun, and research engineer Soumith Chintala have described their ideal system as being “capable of not only text and image recognition but also higher-order functions like reasoning, prediction and planning, rivaling the way humans think and behave.” LeCun and Chintala tested their generator's predictive capabilities by feeding it four frames of video and having it generate the next two frames. The result was a synthetic continuation of the action, whether a person simply walking or making head movements.
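
A schematic version of that frame-prediction experiment looks like the following, again as an illustration rather than Facebook's actual model. Stacking the four known frames along the channel axis turns video prediction into an image-to-image mapping; the published work trains the predictor adversarially, while this sketch uses a plain pixel-difference loss to show the shape of the task.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IN_FRAMES, OUT_FRAMES = 4, 2  # condition on four frames, predict the next two

# Predictor: frames are stacked along the channel axis, so video prediction
# becomes an image-to-image mapping from 4 input channels to 2 output channels.
predictor = nn.Sequential(
    nn.Conv2d(IN_FRAMES, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, OUT_FRAMES, 3, padding=1), nn.Tanh(),
)

# Toy batch of 8 six-frame grayscale clips, 64x64 pixels, scaled to [-1, 1].
clips = torch.rand(8, IN_FRAMES + OUT_FRAMES, 64, 64) * 2 - 1
past, future = clips[:, :IN_FRAMES], clips[:, IN_FRAMES:]

predicted = predictor(past)           # shape: (8, 2, 64, 64)
loss = F.mse_loss(predicted, future)  # plain pixel loss for the sketch
# In the adversarial version, a discriminator is also trained to tell real
# continuations from predicted ones, and its judgment is added to the loss.
```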

Highly realistic AI-generated images and video hold great promise for filmmakers and video-game creators needing relatively inexpensive content. But although GANs can produce images that are “realistic-looking at a glance,” they still have a long way to go before achieving true photo-realism, says Alec Radford, a researcher at AI company OpenAI and lead author of a study (presented at an international AI conference in 2016) that Facebook's work builds on. High-quality AI-generated video is even further away, Radford adds.

It remains to be seen whether online mischief makers—already producing fake viral content—will use AI-generated images or videos for nefarious purposes. At a time when people increasingly question the veracity of what they see online, this technology could sow even greater uncertainty.