Becoming Terminator

Video-tampering technology is likely to take fake news manufacturing to a whole new level. For journalism, that is a game-changer.

Imagine watching your local MP, your favorite singer or the mayor of your city, all on the TV set in your living room. But they’re not in control of their faces anymore. Imagine you are in control instead. When you sneer, they sneer. When you grin, they grin. If you laugh, they laugh.

This has been made possible by Face2Face, software developed by a team of American and German researchers.

Playing with people’s faces in videos downloaded from YouTube or other video-sharing sites, be they the faces of public officials or of friends next door, can be fun. But imagine such technologies in the hands of nefarious governments or propaganda channels with large follower bases.

Technologically empowered, they can create virtually any content imaginable: they can use faces and sound to make anybody say anything.

For journalism, that is the ultimate misinformation frontier. The advance of video-altering technologies will be the biggest challenge for journalism in 2018, according to a survey run by MediaPowerMonitor among 50 technology journalists in 10 Western European countries.

“[The year] 2018 will be only the beginning. Mass disinformation technologies will be our biggest problem for the next decade,” said one respondent in the survey, which was conducted in November 2017.

Volte-faces

Word about Face2Face was already out in 2016. In March of that year, a team of researchers from Stanford University in California, Germany’s University of Erlangen-Nuremberg, and the Max Planck Institute for Informatics published an academic paper describing their invention.

They present “a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor.”

Unlike previous attempts at expression-replacing technologies, Face2Face renders new facial expressions seamlessly. In technology lingo, that is called “real-time face capture and reenactment.” The software works with any commodity webcam.
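
For readers curious about the mechanics: the paper models a face parametrically, as a neutral identity shape deformed by a small set of expression coefficients, and reenactment then amounts to driving the target actor’s model with coefficients tracked live from the source actor. The sketch below illustrates only that core transfer step; the toy mesh, the numbers and the function names are assumptions for illustration, not the Face2Face implementation.

```python
# Toy sketch of parametric expression transfer, the core idea behind
# "reenactment" systems such as Face2Face. Everything here (the 4-vertex
# mesh, the coefficient values, the helper names) is an illustrative
# assumption, not the researchers' actual code or face model.
import numpy as np

NUM_VERTICES = 4      # a toy face mesh; real models use tens of thousands
NUM_EXPRESSIONS = 3   # expression blendshapes; real models use dozens

# Each blendshape is a per-vertex displacement added to a neutral face:
#   deformed_shape = identity_shape + blendshapes @ expression_coeffs
rng = np.random.default_rng(seed=0)
blendshapes = rng.normal(size=(NUM_VERTICES * 3, NUM_EXPRESSIONS))

def deform(identity_shape: np.ndarray, expression_coeffs: np.ndarray) -> np.ndarray:
    """Apply expression coefficients to a neutral identity mesh."""
    return identity_shape + blendshapes @ expression_coeffs

# Two different people: different neutral geometry, same expression space.
source_identity = np.zeros(NUM_VERTICES * 3)
target_identity = np.ones(NUM_VERTICES * 3)

# Per-frame expression coefficients, standing in for what a face tracker
# would estimate from the source actor's webcam feed in real time.
tracked_frames = [
    np.array([0.8, 0.1, 0.0]),  # e.g. a grin
    np.array([0.0, 0.9, 0.2]),  # e.g. a sneer
]

for coeffs in tracked_frames:
    # The transfer step: keep the target's identity, but drive it with
    # the source actor's expression coefficients, frame by frame.
    reenacted = deform(target_identity, coeffs)
    print(reenacted.round(2))
    # A full system would now re-render this deformed face with the
    # target video's pose and lighting and composite it into the frame.
```

The real pipeline layers dense face tracking, lighting estimation and mouth-interior synthesis on top of this step, which is what makes the results so hard to spot.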

But face control is not all. By skillfully clipping soundbites from elsewhere and pasting them over “reenacted” faces, one can create entire narratives from scratch. You can make presidents declare wars or MPs announce new laws. With voice and motion under your full control, you can essentially create anything, and that anything can go viral before the owners of the “reenacted” faces cobble together a denial.

That is likely to take fake news to a totally new level. Once bitten, twice shy after more than a decade of technological disruption, journalists and media outlets are looking seriously into what these new video-editing technologies will bring. “We are looking into what we think will hit us in 2018 and beyond,” said a Munich-based respondent in the MediaPowerMonitor survey. “Video will be the game changer. You can create your own source, and it will be there, in flesh and bone: this is the last frontier in journalism.”

In January 2017, Vanity Fair wrote that the era of text-based misinformation was over: “[…] advancements in audio and video technology are becoming so sophisticated that they will be able to replicate real news—real TV broadcasts, for instance, or radio interviews—in unprecedented, and truly indecipherable, ways.”

A Digital Arms Race?

Technologists spurn the alarmist rhetoric. Matthias Niessner, a member of the team that developed Face2Face, sees potential for the technology to be used in the movie industry, for example, to improve film dubbing, or in booming virtual-reality devices to reconstruct faces. A professor at the Technical University of Munich, Mr Niessner rejected the idea that Face2Face creates immense potential for manipulation.

He told The Washington Post in October 2016: “So say for instance you have a government who wants to deceive you—our technology doesn’t make a difference to them, because they have the resources anyway, right? But we’re opening that editing ability they have to everybody.” He went on to say that the technology should become accessible to everybody. “If you didn’t know about our program, and somebody else edited a video and sent it to you, that would be pretty bad! But if you know that every video is fake then it doesn’t make a real difference,” he told the Post.

Nonetheless, computer scientists have already tested expression-replacing technologies on YouTube videos featuring politicians such as Russian President Vladimir Putin and former American President Barack Obama. The results are staggering: one can hardly tell that the videos have been tampered with.

One way to fight video manipulation, somewhat similar to ongoing efforts to debunk fake news on the internet, will be to identify fakery and make audiences aware of altered, or what some experts call “video-shopped”, content. Media can do that only if they team up with technologists and technology firms. Over two-thirds of the respondents in the MediaPowerMonitor survey said that they “already cooperate” or “aim to cooperate in the near future” with technology experts to better cope with deceitful content made possible by technology.

Initiatives aimed at counterattacking advanced misinformation technologies are springing up. In the United States, MediFor, a program of the Defense Advanced Research Projects Agency (DARPA) whose name stands for “media forensics,” will monitor online content over the next five years in a bid to spot fake videos. The program’s experts are building a system that will be able to analyze hundreds of thousands of images a day.

Some liken these latest advances in digital manipulation to an arms race.

In journalism, they mark the rise of a totally new set of challenges. To some journalists and editors, playing a never-ending technological game is not the answer.

“We might simply have to change the way we talk to our readers,” a France-based journalist wrote in answer to our survey. “We might have to do it old-school: small operations where your readers see you, trust you and read you.”

Real-life rather than digital face-to-face engagement could indeed be the answer.

Photo: Matthias Niessner Lab