The first reported use of the term “deepfake” appeared in a Vice article published in December 2017. The likeness of actress Gal Gadot (of Wonder Woman fame) had been expertly spliced onto the body of an adult actress. Deepfake technology uses artificial intelligence (AI) to realistically map one face onto another with a vast database of source images and video (which is why actors are a primary target: there is an abundance of source data to feed the AI).
The concept itself, though, has been around a while. Consider the 1994 Robert Zemeckis film Forrest Gump, in which we all got to see Tom Hanks’s character engage with historical figures like John F. Kennedy and Richard Nixon. And 1987’s Arnold Schwarzenegger vehicle The Running Man was based on the then-science-fiction premise of a man falsely imprisoned because of a deepfake-style video.
On the verge of a new era
While deepfake videos may seem trivial at this point, future uses of an improved version of the technology may have more of an impact than a “simple” salacious video of your favorite actress.
We are truly entering an age in which we can no longer trust our own eyes when it comes to video. As the technology advances, it will be harder and harder to detect a fake before it goes viral.
Author and trendwatcher Jarno Duursma is at the forefront of these sorts of digital developments. The much-requested speaker is an alumnus of Singularity University in Silicon Valley and is frequently quoted in Dutch media.
“The most important thing for people to do is to stop trusting their eyes and ears,” he told Scality via email. “Soon reality will be manipulated in such a way that you can no longer see with your own senses whether something is real or fake.”
He also cautions citizens to improve their own journalistic skills, such as investigating sources and understanding how anything can be taken out of context. Duursma counts on governments, technology companies, and people’s common sense to stay on top of trends like deepfake videos.
“At the moment,” Duursma said, “celebrities are already working to capture their personality in an avatar, a digital doppelganger and a three-dimensional human manifestation of [themselves] in order to keep in touch with thousands of people at the same time, probably long after their retirement.”
The presence of easily created fake video will have effects far beyond entertainment as well. Video footage, for example, could become inadmissible as evidence in courts around the world.
“It is indeed a threat that the evidence can be dismissed as deepfake technology,” Duursma said. “But it’s never really the case that only a single video can be a reason to convict someone.”
Just like medicine and law, said Duursma, the field of computer science should have its own code of ethics. At the end of the day, he said, leaders in technology should refer to their own moral compass and ethical integrity.
While deepfake videos seem to offer us less upside than, say, smartphones, every technology can have a downside. Duursma remains optimistic, however.
“[Technology] has already brought us many benefits,” Duursma said. “Take a closer look at your smartphone to see what benefits it brings you on a daily basis. At the same time: there are also strong negative consequences such as smartphone addiction, privacy scandals, mass surveillance by companies and governments, extreme manipulation in marketing, and a decrease in our autonomy, privacy and intimacy.”