Deep Fakes

Imagine that in the future you run for political office, and your opposition releases a video from 10 years ago of you making offensive comments, costing you all political trust and support. What if the video was not even real? What if you were framed for a crime?

“The problem with fake videos goes deeper: they affect people even if they are later told that they are fake, and there always will be people that will believe they are real, despite any evidence to the contrary.” – Bruce Schneier

Beyond deep fakes: Transforming video content into another video’s style, automatically
Researchers have devised a way to automatically transform the content of one video into the style of another, making it possible to transfer the facial expressions of comedian John Oliver to those of a cartoon character, or to make a daffodil bloom in much the same way a hibiscus would.

[youtube] [/youtube]

[youtube] [/youtube]

Experts warn digitally-altered ‘deepfakes’ videos of Donald Trump, Vladimir Putin, and other world leaders could be used to manipulate global politics by 2020
Top tech firms have been cracking down on savvy internet users who continue to create fake, AI-assisted porn of many of Hollywood’s leading ladies. But now it seems the videos, also known as ‘deepfakes,’ have taken an even darker turn. ‘Deepfakes’ hobbyists have begun using the technology to create digitally-altered videos of world leaders, including President Donald Trump, Russian president Vladimir Putin, former president Barack Obama and former first lady and presidential candidate Hillary Clinton.

Experts warn the videos could be realistic enough to manipulate future elections and global politics as soon as 2020.

Face2Face: Real-time Face Capture and Reenactment of RGB Videos
[youtube] [/youtube]

World’s first AI news anchor unveiled in China
The ‘tireless’ artificial news readers simulate the voice, facial movements, and gestures of real-life broadcasters, and can report all day, every day, from anywhere in the country. Chinese viewers were greeted with a digital version of a regular Xinhua news anchor named Qiu Hao. The anchor, wearing a red tie and pin-striped suit, nods his head in emphasis, blinking and raising his eyebrows slightly.

“Not only can I accompany you 24 hours a day, 365 days a year; I can be endlessly copied and present at different scenes to bring you the news,” he says. Xinhua also presented an English-speaking AI, based on another presenter, who adds: “The development of the media industry calls for continuous innovation and deep integration with the international advanced technologies … I look forward to bringing you brand new news experiences.”

[youtube] [/youtube]

How hard is it to make a believable deepfake?
We set out to see if we could fake Malcolm Turnbull.

Researchers are warning that a new wave of artificial intelligence technology could make it so easy to create fake videos that it will undermine the public’s ability to trust what they see.

Fake news is about to get so much more dangerous
The most powerful false-news weapon in history is around the corner. The media industry has only a short time to get ahead of it.

If technology continues its current advance, we may soon face totally convincing videos showing events that never happened — created so effectively that even experts will have trouble proving they’re fakes. “Deep fake” video will be able to show people saying, with the authentic ring of their own voices, things they never said. It will show them doing things they never did, by melding their images with other video or creating new images of them from scratch.

At a political level, deftly constructed video could show a political leader advocating for the reverse of what she stands for, or portray bloody events that never happened. It could trigger riots, swing elections, and sow panic and despair. At a business and personal level, it could be equally dangerous. Fake statements by chief executives or banking officials could throw financial markets into turmoil. False videos could be created about anyone’s private life, with devastating effects.

The Threat of Deep Fakes
Deep fakes are a profoundly serious problem for democratic governments and the world order. A combination of technology, education, and public policy can reduce their effectiveness.

You thought fake news was bad? Deep fakes are where truth goes to die
Technology can make it look as if anyone has said or done anything. Is it the next wave of (mis)information warfare?

In May, a video appeared on the internet of Donald Trump offering advice to the people of Belgium on the issue of climate change. “As you know, I had the balls to withdraw from the Paris climate agreement,” he said, looking directly into the camera, “and so should you.” The video was created by a Belgian political party, Socialistische Partij Anders, or sp.a, and posted on sp.a’s Twitter and Facebook. It provoked hundreds of comments, many expressing outrage that the American president would dare weigh in on Belgium’s climate policy.

The speech, it was later revealed, was nothing more than a hi-tech forgery. Sp.a claimed that they had commissioned a production studio to use machine learning to produce what is known as a “deep fake” – a computer-generated replication of a person, in this case Trump, saying or doing things they have never said or done.

This is fake news! China’s ‘AI news anchor’ isn’t intelligent at all
AI mimicry: The digitally synthesized anchor was created by Sogou, a search company based in Beijing, in collaboration with China’s state press agency, Xinhua. Sogou used some cutting-edge machine learning to copy and re-create a real person’s likeness and voice. The company fed its algorithms footage of a real anchor, plus corresponding text, and trained it to reproduce a decent facsimile that will say whatever you want.

[youtube] [/youtube]

Deepfake, a portmanteau of “deep learning” and “fake”,[1] is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and videos onto source images or videos. A machine learning technique called a “generative adversarial network” (GAN) can be used to create fake videos.[2]
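To make the GAN idea concrete, here is a deliberately tiny sketch of the adversarial training loop: a one-dimensional linear generator tries to mimic Gaussian data while a logistic discriminator learns to tell real samples from fakes. All hyperparameters and the toy data are my own assumptions for illustration; real deepfake systems use deep convolutional networks, not anything this simple.

```python
import numpy as np

# Toy 1-D GAN: a linear generator g(z) = a*z + b tries to mimic
# samples from N(4, 1.25), while a logistic discriminator
# D(x) = sigmoid(w*x + c) tries to tell real samples from fakes.
# Purely illustrative of the adversarial loop, not of any real tool.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    x_real = rng.normal(4.0, 1.25)   # one real sample
    z = rng.normal()                 # generator's noise input
    x_fake = a * z + b               # one generated sample

    # Discriminator ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascent on log D(fake) (the "non-saturating" loss):
    # nudge x_fake in whatever direction raises the discriminator's score
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w
    a += lr * grad_x * z
    b += lr * grad_x

fakes = a * rng.normal(size=1000) + b
print(f"mean of generated samples: {fakes.mean():.2f} (real data mean: 4.0)")
```

The two players improve in lockstep: every discriminator update sharpens the gradient signal the generator trains against, which is exactly the arms-race dynamic the detection articles below describe.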

Deepfakes may be used to create fake celebrity pornographic videos or revenge porn.[3] Deepfake pornography surfaced on the Internet in 2017, particularly on Reddit,[4] and has been banned by sites including Reddit, Twitter, and Pornhub.[5][6][7] Deepfakes can be used to create fake news and malicious hoaxes.[8][9]

Non-pornographic deepfakes can easily be found on popular online video streaming sites such as YouTube or Vimeo. A popular program is FakeApp,[10] which makes use of TensorFlow.[11] Techniques for faking facial gestures and rendering them onto a target video as a look-alike of the target person were presented in 2016 and allow near real-time counterfeiting of facial expressions in existing 2D video.[12]
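FakeApp-style face swapping is commonly described as a shared encoder with one decoder per identity: both people’s faces are compressed into a common latent space, and the swap comes from decoding person A’s encoding with person B’s decoder. The sketch below shows only that data flow, with linear layers and random vectors standing in for face crops; the dimensions, learning rate, and variable names are all assumptions, and real tools use deep convolutional autoencoders on aligned face images.

```python
import numpy as np

# Shared-encoder / two-decoder sketch of the face-swap idea:
# one encoder learns a common "face space"; each identity gets its
# own decoder. Swapping = encode A's face, decode with B's decoder.

rng = np.random.default_rng(42)
dim, latent = 16, 4

faces_a = rng.normal(size=(200, dim))   # stand-in for person A's face crops
faces_b = rng.normal(size=(200, dim))   # stand-in for person B's face crops

enc = rng.normal(scale=0.1, size=(dim, latent))    # shared encoder weights
dec_a = rng.normal(scale=0.1, size=(latent, dim))  # decoder for identity A
dec_b = rng.normal(scale=0.1, size=(latent, dim))  # decoder for identity B

lr = 0.01
for _ in range(500):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc          # encode into the shared latent space
        recon = z @ dec          # decode back to "pixels"
        err = recon - faces      # reconstruction error
        # gradient descent on mean squared reconstruction error;
        # in-place updates mutate dec_a / dec_b through the alias
        dec -= lr * (z.T @ err) / len(faces)
        enc -= lr * (faces.T @ (err @ dec.T)) / len(faces)

# The "deepfake" step: person A's faces through person B's decoder.
swapped = (faces_a @ enc) @ dec_b
print("swap output shape:", swapped.shape)  # → swap output shape: (200, 16)
```

Because the encoder is trained on both identities, it captures pose and expression in a form either decoder can render, which is why the swapped output keeps A’s expression while wearing B’s appearance.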

Detecting Fake Videos
This story nicely illustrates the arms race between technologies to create fake videos and technologies to detect fake videos.

I don’t know who will win this arms race, if there ever will be a winner. But the problem with fake videos goes deeper: they affect people even if they are later told that they are fake, and there always will be people that will believe they are real, despite any evidence to the contrary.

DARPA is funding new tech that can identify manipulated videos and ‘deepfakes’
The Menlo Park-based nonprofit research group SRI International has been awarded three contracts by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) to wage war on the newest front in fake news. Specifically, DARPA’s Media Forensics program is developing tools capable of identifying when videos and photos have been meaningfully altered from their original state in order to misrepresent their content.

The most infamous form of this kind of content is the category called “deepfakes” — usually pornographic video that superimposes a celebrity or public figure’s likeness into a compromising scene. Though the software that makes deepfakes possible is inexpensive and easy to use, existing video analysis tools aren’t yet up to the task of identifying what’s real and what’s been cooked up.

[youtube] [/youtube]

Deep fake technology outpacing security countermeasures
Rubio warned, “I believe that this is the next wave of attacks against America and Western democracies … the ability to produce fake videos that … can only be determined to be fake after extensive analytical analysis,” which poses a formidable threat going forward. As Bobby Chesney, the Charles I. Francis Professor in Law and Associate Dean for Academic Affairs at the University of Texas School of Law and Director of UT-Austin’s Robert S. Strauss Center for International Security and Law, and Danielle Citron, the Morton & Sophia Macht Professor of Law at the University of Maryland Carey School of Law and author of Hate Crimes in Cyberspace, note, citing the Pew Research Center: “As of August 2017, two-thirds of Americans (67 percent) reported … that they get their news at least in part from social media. This is fertile ground for circulating deep fake content. Indeed, the more salacious, the better.”

These Full-Body Deepfakes are Like Nothing We’ve Ever Seen
The simulations are still clearly fake, but better versions are on the way.
That’s thanks to a new technique out of Heidelberg University that was recently published to GitHub and presented at September’s European Conference on Computer Vision — a step forward for deepfakes that has escaped mainstream scrutiny, but that could lead to the next generation of altered footage.

Putin developing fake videos to foment 2020 election chaos: ‘It’s going to destroy lives’
Policy insiders and senators of both parties believe the Russian president or other actors hostile to the U.S. will rely on “deep fakes” to throw the 2020 presidential election cycle into chaos, taking their campaign to influence American voters and destabilize society to a new level.

last updated 26th December, 2018