The U.S. White House (Public Domain photo by Black and White at en.wikipedia, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=3786245)
Researchers worry so-called deepfake videos are a new type of misinformation tool that could give one presidential candidate an advantage over another in future elections.
Raising awareness about deepfakes is critical for getting ahead of the new wave of misinformation, Omer Ben-Ami, co-founder of an Israeli startup called Canny AI, told the Daily Caller News Foundation. His group seeks to create an imperceptible deepfake, or a perfect lip-sync between video and audio.
Ben-Ami’s group posted a deepfake video on Instagram June 9 portraying Facebook CEO Mark Zuckerberg saying: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.” Reporters pressured Instagram to remove that video.
WATCH: A post shared by Bill Posters (@bill_posters_uk) on Instagram, June 7, 2019
“One of the reasons that we collaborated with the artists was exactly to show what an early stage start-up can achieve in this field, and I think it’s obvious we don’t have the amount of funding any government has,” Ben-Ami said, referring to the Zuckerberg fake. Most of the overhead of generating deepfakes comes from the cost of powerful hardware (high-end CPUs and GPUs) and varies with the desired quality and production time, he added.
These videos are made with what artificial intelligence researchers call a generative adversarial network, or GAN: two neural networks trained against each other, one generating fake imagery and the other trying to spot it. Videos generated this way have lip movements synchronized with the audio and exhibit facial tics such as blinks and eyebrow raises. The AI can quickly create moving images from nothing more than a video still or photograph.
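For readers unfamiliar with the term, the sketch below shows the adversarial setup in miniature: a generator network invents samples while a discriminator network tries to tell them apart from real data, and each is trained against the other. It is a toy illustration in PyTorch on made-up numeric data, not code from Canny AI or any deepfake system; every name, size and number in it is assumed for the example.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator: turns random noise into candidate "fake" samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how real a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in for real training data
    fake = G(torch.randn(64, latent_dim))          # generator's attempt

    # Train the discriminator to label real samples 1 and fakes 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to make the discriminator answer "real" for its fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

In a video deepfake the same tug-of-war plays out over frames of a face rather than toy numbers, which is why the output keeps improving as the two networks push against each other.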
Ben-Ami is among a handful of researchers producing such impressive-looking deepfakes. Dmitry Ulyanov, a PhD student at the Skolkovo Institute of Science and Technology, a private graduate research institute in Russia, posted examples of the technology in a tweet in May. He also linked to a YouTube video showing deepfakes of former Republican New Jersey Gov. Chris Christie, along with a handful of celebrities.
WATCH:
Another great paper from Samsung AI lab! @egorzakharovdl et al. animate heads using only few shots of target person (or even 1 shot). Keypoints, adaptive instance norms and GANs, no 3D face modelling at all.
▶️ https://t.co/Xk5D4WccpD
https://t.co/SxnVfY72TT pic.twitter.com/GjVrJbejT0
Dmitry Ulyanov (@DmitryUlyanovML), May 22, 2019
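The “adaptive instance norms” mentioned in the tweet refer to adaptive instance normalization (AdaIN), a technique for imposing one image’s style statistics onto another image’s features. The snippet below is a generic, assumed illustration of that operation in PyTorch, not code from the Samsung paper; the tensor shapes and variable names are made up for the example.

import torch

def adain(content, style_mean, style_std, eps=1e-5):
    # content: (batch, channels, height, width) feature maps for the pose being animated.
    mean = content.mean(dim=(2, 3), keepdim=True)
    std = content.std(dim=(2, 3), keepdim=True) + eps
    normalized = (content - mean) / std
    # Re-scale and re-shift using statistics derived from the target person's photo(s).
    return normalized * style_std + style_mean

pose_features = torch.randn(1, 64, 32, 32)     # e.g. features computed from facial keypoints
identity_mean = torch.randn(1, 64, 1, 1)       # per-channel shift predicted from the photo
identity_std = torch.rand(1, 64, 1, 1) + 0.5   # per-channel scale predicted from the photo
stylized = adain(pose_features, identity_mean, identity_std)

The idea is that the pose (where the mouth and eyes are) comes from one source while the identity (whose face it looks like) is injected through these per-channel statistics, which is part of how a single photo can be animated.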
Samsung’s AI research center in the U.K. recently worked with Imperial College London to create an AI that animates a still photograph and syncs its facial movements to an audio clip. One clip from the two research groups uses a black-and-white photo of Russian mystic Grigori Rasputin to make the self-proclaimed monk appear to sing a Beyoncé song.
Deepfake videos are already being used for nefarious purposes, according to researchers, who point to examples of the technology being used to splice people’s faces into pornographic videos. Lawmakers are also worried these videos hand bad actors a powerful tool of deception.
“What is a proportionate response should the Russians release a deepfake of Joe Biden to try to diminish his candidacy?” Democratic Rep. Adam Schiff of California asked during a June 13 congressional hearing. He was addressing several experts in the field, along with fellow lawmakers. Discussion about deepfakes increased after former New York City Mayor Rudy Giuliani circulated a manipulated video of House Speaker Nancy Pelosi in May.
The Pelosi video, which has been labeled a “cheapfake,” was slowed down to make Pelosi sound and look drunk. YouTube removed the video, but Facebook and Twitter left it up. Members of the media soon joined lawmakers in condemning Facebook for leaving the video up, arguing it could distort the public’s perception of Pelosi.
Tech experts have also created tools capable of producing phony articles. OpenAI, an artificial intelligence research group, published software called GPT-2 that can generate a fake news story from just a couple of sentences; such pieces are being dubbed deepfake articles. Like the videos, they look deceptively real but are highly manipulated phonies.
GPT-2 is fed text and asked to write sentences based on learned predictions of which words are likely to come next. Access to GPT-2 was provided to select media outlets, one of which was Axios, whose reporters fed words and phrases into the text generator and produced an entire fake news story. OpenAI ultimately decided not to release the most powerful version of the model out of concern that bad actors might misuse it.
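OpenAI did release smaller versions of the model publicly, so the next-word-prediction loop described above can be reproduced in a few lines of Python. The sketch below uses the Hugging Face transformers library and the small public “gpt2” checkpoint as an assumed stand-in for the tooling Axios was given; the prompt and sampling settings are purely illustrative.

from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small, publicly released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Officials confirmed late Tuesday that"    # illustrative seed text
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    # The model repeatedly predicts a plausible next token and appends it to the text.
    output = model.generate(
        input_ids,
        max_length=80,
        do_sample=True,
        top_k=50,
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))

Each run continues the prompt differently, which is part of what makes machine-written “deepfake articles” cheap to mass-produce.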