Deepfake detection software developed at the University of California at Berkeley analyzed head tilt and facial mannerisms to determine that this video of President Donald Trump was faked. (Berkeley Video)

Facebook says it’s working with Microsoft, the Partnership on AI and an international team of academics to create the Deepfake Detection Challenge, a competition to develop better tools for flagging faked videos.

“We are also funding research collaborations and prizes for the challenge to help encourage more participation,” Facebook’s chief technology officer, Mike Schroepfer, said today in a blog post. “In total, we are dedicating more than $10 million to fund this industry-wide effort.”

Deepfakes use voice impersonation and video manipulation to make it appear as if notable figures, especially politicians, are saying whatever the manipulator wants them to say. For example, researchers at the University of Washington have demonstrated a technique for putting words in former President Barack Obama’s mouth.

Video manipulation has already sparked political controversies: One such video, widely distributed via Facebook and Twitter in May, was slowed down to make House Speaker Nancy Pelosi appear to slur her words during a news conference. Experts say deepfakes are almost certain to play a role in disinformation campaigns leading up to the 2020 presidential election.

Fortunately, researchers have already been working on tools to detect deepfakes: In June, computer scientists at the University of California at Berkeley showed off a software tool that analyzed characteristic facial expressions and head movements to distinguish real videos from fakes.
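
The article doesn’t include the Berkeley team’s code, but the approach it describes, measuring a person’s characteristic facial movements and head pose over a clip and flagging footage that deviates from that pattern, can be sketched briefly. The Python sketch below is an illustration under stated assumptions: the per-frame measurements and the training data are random placeholders, and a real pipeline would extract these signals with a face-analysis toolkit.

```python
# Minimal sketch of the general approach described above: summarize how a
# speaker's facial movements and head pose co-vary over a clip, then flag
# clips whose motion "signature" deviates from the real person's. The
# per-frame measurements here are random placeholders, not real features.
import numpy as np
from sklearn.svm import OneClassSVM

def clip_signature(features: np.ndarray) -> np.ndarray:
    """features: (n_frames, n_signals) array of per-frame measurements
    (e.g., head tilt, brow raise, lip-corner movement). Returns the upper
    triangle of the pairwise correlation matrix as a fixed-length vector
    capturing how the signals move together in this clip."""
    corr = np.corrcoef(features.T)        # (n_signals, n_signals)
    iu = np.triu_indices_from(corr, k=1)  # unique signal pairs only
    return corr[iu]

# Train a one-class model on clips known to be genuine footage of the
# subject, then score new clips; low scores suggest manipulation.
rng = np.random.default_rng(0)
real_clips = [rng.normal(size=(300, 8)) for _ in range(50)]  # placeholder data
X_train = np.stack([clip_signature(c) for c in real_clips])

detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)

suspect = rng.normal(size=(300, 8))  # placeholder "new" clip to test
score = detector.decision_function([clip_signature(suspect)])[0]
print("authenticity score:", score)  # below 0 => flagged as atypical
```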

Facebook’s challenge is modeled on other artificial intelligence contests, on subjects ranging from common-sense reasoning to finishing a sentence. Facebook will create a standard data set, using paid actors who have given consent for their video to be used in the exercise. No Facebook user data will be used. The videos and the procedures for the challenge will be vetted next month during a working session at the International Conference on Computer Vision in South Korea, and released in December for teams to work on.

There’ll be a leaderboard for the contestants, and the competition will be overseen by the Partnership on AI’s Steering Committee on AI and Media Integrity. Facebook and Microsoft will be represented on the committee, along with WITNESS and other organizations focusing on civil society, technology, media and academic research.
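
The article doesn’t say how leaderboard entries will be scored, but challenges like this one commonly rank submissions by binary log loss on a hidden, labeled test set. The sketch below assumes that setup; the metric, team names and numbers are hypothetical, not details from Facebook’s announcement.

```python
# Minimal sketch of how a leaderboard might rank submissions, assuming the
# common setup of binary log loss over a hidden labeled test set. The metric
# is an assumption for illustration; the article does not specify one.
import math

def log_loss(labels, probs, eps=1e-15):
    """labels: 1 for fake, 0 for real; probs: predicted P(fake) per video.
    Lower is better; confident wrong answers are penalized heavily."""
    total = 0.0
    for y, p in zip(labels, probs):
        p = min(max(p, eps), 1 - eps)  # avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)

# Rank two hypothetical teams against the same hidden labels.
hidden_labels = [1, 0, 1, 1, 0]
submissions = {"team_a": [0.9, 0.2, 0.7, 0.6, 0.1],
               "team_b": [0.6, 0.4, 0.5, 0.9, 0.5]}
leaderboard = sorted(submissions,
                     key=lambda t: log_loss(hidden_labels, submissions[t]))
print(leaderboard)  # best (lowest loss) first
```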

Academic partners include Berkeley, Cornell Tech, MIT, the University of Oxford, the University of Maryland at College Park and the University at Albany-SUNY.

In a statement, Oxford Professor Philip H.S. Torr said the threat from deepfakes isn’t limited to the United States.

“Manipulated media being put out on the internet, to create bogus conspiracy theories and to manipulate people for political gain, is becoming an issue of global importance, as it is a fundamental threat to democracy, and hence freedom,” Torr said. “I believe we urgently need new tools to detect and characterize this misinformation, so I am happy to be part of an initiative that seeks to mobilize the research community around these goals — both to preserve the truth whilst pushing the frontiers of science.”

Schroepfer said Facebook will take part in the challenge but won’t accept any prizes. The contest will end next March, just as the presidential campaign is heating up.
