WEST LAFAYETTE, Ind. — A video on social media shows a high-ranking U.S. legislator declaring his support for a sweeping tax increase. You react accordingly, because the video looks like him and sounds like him, so surely it must be him.
The term “fake news” continues to take a much more literal turn as new technology is making it easier to manipulate the faces and audio in videos. The videos, called deepfakes, can then be posted online with little indication they are not the real thing.
Edward Delp, director of the Video and Imaging Processing Laboratory at Purdue University, says deepfakes are a growing danger with the next presidential election fast approaching.
“It’s possible that people are going to use fake videos to make fake news and insert these into a political election,” said Delp, the Charles William Harrison Distinguished Professor of Electrical and Computer Engineering. “There’s been some evidence of that in other elections throughout the world already.”
He expects people will create deepfakes throughout this election year, making it harder for voters to know what candidates actually said.
He said deepfakes can be found wherever a user can post content online. That said, action by industry leaders like Facebook and Google speaks volumes about the severity of the issue at hand.
“One interesting trend is that both Facebook and Google recently announced they are working on the deepfakes problem, and Google has even released a toolkit for manipulated images,” Delp said.
Delp and a doctoral student have spent three years studying video tampering as part of a broader research effort in media forensics. They used machine-learning techniques based on artificial intelligence to create an algorithm that detects deepfakes.
Delp and his team’s algorithm previously won a contest held by the Defense Advanced Research Projects Agency (DARPA), an agency of the U.S. Department of Defense.
“By analyzing the video, the algorithm can see whether the face is consistent with the rest of the information in the video,” Delp said. “If it’s inconsistent, we detect these subtle inconsistencies. It can be as small as a few pixels; it can be coloring inconsistencies; it can be different types of distortion.”
Systems developed by Delp and his team are data driven and look for any anomalies, ranging from a lack of blinking by the person to varying illumination within the deepfake videos. The systems will continue to get better at detecting deepfakes as more examples to learn from emerge.
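The illumination-consistency idea can be illustrated with a toy sketch. This is not the Purdue team’s actual algorithm, and the function name, threshold, and sample values below are hypothetical; it only shows the general intuition that in a genuine video the lighting on the face tracks the lighting of the scene, while a spliced-in face can drift out of sync.

```python
# Toy illustration of one anomaly check (illumination consistency).
# NOT the Purdue team's system -- a hypothetical sketch of the idea.

def illumination_anomalies(face_brightness, scene_brightness, threshold=0.15):
    """Return indices of frames where the face's mean brightness deviates
    from the scene's by more than `threshold` (relative difference)."""
    anomalies = []
    for i, (face, scene) in enumerate(zip(face_brightness, scene_brightness)):
        if scene == 0:
            continue  # avoid dividing by zero on an all-black frame
        if abs(face - scene) / scene > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical per-frame mean-brightness values (0-255 scale):
scene = [120, 121, 119, 122, 120, 118]
face  = [118, 120, 121,  90, 119, 117]   # frame 3 is out of sync

print(illumination_anomalies(face, scene))  # → [3]
```

A real detector would learn such thresholds from data and combine many signals (blink rate, color statistics, pixel-level distortions) rather than relying on a single hand-set rule.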
Deepfakes also can be used to create fake pornographic videos and images, using the faces of celebrities or even children.
Delp said early deepfakes were easier to spot. The techniques couldn’t recreate eye movement well, resulting in videos in which the person never blinked. But advances have made the technology better and more widely available.
News organizations and social media sites have concerns about the future of deepfakes. Delp foresees both having tools like his algorithm in the future to determine what video footage is real and what is a deepfake.
“It’s an arms race,” he said. “Their technology is getting better and better, but I like to think that we’ll be able to keep up.”