At a glance, the images featured on the website This Person Does Not Exist might seem like random high school portraits or vaguely inadvisable LinkedIn headshots. But every single photo on the site has been created using a type of artificial intelligence algorithm called a generative adversarial network (GAN).
Every time the site is refreshed, a shockingly realistic — but totally fake — picture of a person’s face appears. Former Uber software engineer Phillip Wang created the page to demonstrate what GANs are capable of, and then posted it to the public Facebook group “Artificial Intelligence & Deep Learning” on Tuesday.
The underlying code that made this possible, called StyleGAN, was written by Nvidia and featured in a paper that has yet to be peer-reviewed. This type of neural network has the potential to revolutionize video game and 3D-modeling technology, but, as with almost any kind of technology, it could also be used for more sinister purposes. Deepfakes, or computer-generated images superimposed on existing pictures or videos, can be used to push fake news narratives or other hoaxes. That’s precisely why Wang chose to create the mesmerizing but also chilling website.
“I have decided to dig into my own pockets and raise some public awareness for this technology,” he wrote in his post. “Faces are most salient to our cognition, so I’ve decided to put that specific pre-trained model up. Each time you refresh the site, the network will generate a new facial image from scratch from a 512 dimensional vector.”
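Wang’s description can be sketched in a few lines of code. The `toy_generator` below is a hypothetical stand-in for the real trained StyleGAN network (which is far larger and outputs 1024x1024 color images); the point is only to show what “generate a new facial image from a 512 dimensional vector” means mechanically.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def generate_face(generator, latent_dim=512):
    """Sample a random 512-dimensional latent vector and map it to an image."""
    z = rng.standard_normal(latent_dim)  # one random point in latent space
    return generator(z)                  # the network turns that point into a face

# Hypothetical toy generator: a random linear map producing a tiny
# 8x8 grayscale "image" -- a real StyleGAN would output 1024x1024x3.
W = rng.standard_normal((8 * 8, 512)) * 0.01
toy_generator = lambda z: (W @ z).reshape(8, 8)

image = generate_face(toy_generator)
print(image.shape)  # (8, 8)
```

Every page refresh corresponds to drawing a fresh `z`: a different random point in the 512-dimensional space yields a different face.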
How Do GANs Work?
The concept of GANs was first introduced in 2014 by esteemed computer scientist Ian Goodfellow, and ever since, Nvidia has been at the forefront of the technology. Tero Karras, a principal research scientist for the company, has led multiple GANs studies.
At their core, GANs consist of two networks: a generator, which creates images, and a discriminator, which judges whether those images look real. These computer programs compete against each other millions upon millions of times, each round sharpening the generator’s output until it’s good enough to create full-fledged, convincing pictures.
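The adversarial loop can be illustrated with a deliberately tiny example. This is an assumption-laden sketch, not Nvidia’s code: instead of images, the “real data” here is just numbers centered at 4.0, the generator is a single learnable shift `mu`, and the discriminator is a one-variable logistic classifier. The structure of the loop — discriminator update, then generator update — is the same shape as in a full GAN.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

mu = 0.0          # generator parameter: G(z) = z + mu
w, b = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(1000):
    real = rng.normal(4.0, 1.0, 64)        # a batch of "real" samples
    fake = rng.normal(0.0, 1.0, 64) + mu   # a batch of generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: nudge mu so the discriminator scores fakes as real.
    d_fake = sigmoid(w * (rng.normal(0.0, 1.0, 64) + mu) + b)
    mu += lr * np.mean(1 - d_fake) * w

print(mu)  # drifts toward the real mean of 4.0 as the two networks compete
```

As the generator’s output distribution approaches the real one, the discriminator can no longer tell the two apart — the same equilibrium that, at vastly larger scale, yields faces indistinguishable from photographs.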
Researchers weren’t able to create high-quality, 1024x1024 images using this method until fairly recently — late 2017 — when Nvidia cracked the code using a technique described in its famous ProGAN paper. StyleGAN builds on this concept by giving the researchers more control over specific visual features.
Why Is Nvidia So Good at GANs?
Nvidia’s first line of business is designing and selling graphics processing units (GPUs), also known as graphics cards. GPUs are the engines of machine learning, used to train algorithms like StyleGAN for hours on end. In short, GPUs excel at rapidly multiplying massive matrices of numbers, which is essentially what’s happening under the hood when A.I. gets trained.
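To make that concrete, here is a minimal sketch (the layer sizes are illustrative assumptions, not StyleGAN’s actual architecture) of why training reduces to matrix multiplication: one layer of a neural network processes an entire batch of inputs as a single matrix product, which is exactly the kind of operation a GPU’s thousands of cores parallelize.

```python
import numpy as np

rng = np.random.default_rng(0)

batch = rng.standard_normal((64, 512))      # 64 latent vectors, 512 numbers each
weights = rng.standard_normal((512, 1024))  # one layer's parameters

# The forward pass for the whole batch is one big matrix multiply
# followed by a nonlinearity (ReLU here). Training repeats operations
# like this, forward and backward, millions of times.
activations = np.maximum(batch @ weights, 0.0)
print(activations.shape)  # (64, 1024)
```

On a CPU this multiply-add work runs largely serially; a GPU performs the many independent multiply-adds in parallel, which is why the same training job can take hours instead of weeks.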
The company has ready access to its own most advanced GPUs, giving its researchers a built-in edge when training neural networks.
The Future of GANs
Nvidia, Facebook, Google, and many other tech companies have squadrons of researchers developing versions of this A.I. technique. The end goal is to use it to generate fully fleshed out virtual worlds, potentially in VR, using automated methods instead of hard coding. But in the meantime, GANs are already being used to develop the budding market for virtual social media influencers.
A myriad of computer-generated characters advertising fashion brands and lifestyle companies have already amassed millions of followers across the internet. Venture capital firms have invested millions in the concept, and GANs could serve to make these 3D models more realistic with less labor.
Until then, you’ll be able to find us periodically refreshing This Person Does Not Exist, gazing into the eyes of its misleadingly soulful fake faces. It’s an exciting, yet chilling, example of just how realistic the fake worlds of the future are about to become.