Facebook is trying to fight deepfakes by making its own deepfakes

Facebook is offering $10 million in rewards and grants to researchers who can build better deepfake-detection tools.

Mikael Thalen

Posted on Sep 5, 2019, 4:12 pm CDT | Updated on May 20, 2021, 4:42 am CDT

Facebook is spearheading an effort to combat deepfakes by aiding researchers in developing better techniques for detecting fake videos.

In a blog post from Facebook’s artificial intelligence division Thursday, the social media company announced that it would team up with other major tech companies and universities to create the “Deepfake Detection Challenge (DFDC).”

“We want to catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes,” the company said. “That’s why Facebook, the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany-SUNY are coming together to build the Deepfake Detection Challenge (DFDC).”

The challenge is offering $10 million in rewards and grants to help researchers tip the scales against AI-generated misinformation.

Facebook will be releasing a dataset containing the faces of paid actors and has stressed that no Facebook user data will be included.

Once the dataset is built, Facebook plans to release it and officially launch the DFDC in December at the Conference on Neural Information Processing Systems (NeurIPS). Although Facebook itself will also enter the competition, the company noted that it will not accept any prize money.

“This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress,” Facebook added.

The project has been endorsed by numerous academics, engineers, and scientists, including Antonio Torralba, a professor and director of the MIT Quest for Intelligence.

“People have manipulated images for almost as long as photography has existed. But it’s now possible for almost anyone to create and pass off fakes to a mass audience,” Torralba said in the press release. “The goal of this competition is to build AI systems that can detect the slight imperfections in a doctored image and expose its fraudulent representation of reality.”

The project comes as deepfake videos grow more realistic at a quickening pace. The technology can now be used with just the click of a button through popular apps available to the public.

Although much attention has been given to the possibility that deepfakes could be used to disrupt an election, the technology has already been used to blackmail and harass women by placing them into pornographic videos.
