
Microsoft says DOD needs to be better at detecting synthetic identities. And do it sooner

AI and cybersecurity are converging, according to Microsoft’s chief scientist, and one result will be the long-term inability of the U.S. Defense Department to reliably detect deepfakes using algorithmic tools.

Eric Horvitz testified this week before the cybersecurity subcommittee of the Senate Armed Services Committee. Horvitz says AI is getting better at detecting manipulated and synthetic identities, including deepfakes, but it is a losing effort.

Instead, he argues, software developers have to turn that approach on its head: rather than trying to detect what is fake, prove what is genuine.

Offensive AI is improving the effectiveness of cyberattacks, and defensive algorithms are, in turn, becoming more vulnerable to attack, Horvitz says.

It is starting to spook a lot of people. Europol’s concerns, for example, are mounting.

The first experimental and commercial software tools designed to spot synthetic identities are arriving, including Microsoft’s anti-cyberattack products. (Many of the world’s militaries are working on their own defenses.)

New research shows promise in spotting manipulated expressions in videos as a way of flagging deepfakes, and new commercial software claims to detect synthetic-identity fraud.

Researchers at University of California, Riverside, say their Expression Manipulation Detection framework can detect and then spotlight the emoting areas of a face that have been changed. Their paper is here.

Last month, Unite.AI reported a less unwieldy way to detect deepfakes using biometrics.

Meanwhile, a company called Early Warning Services says its newest AI-based software, Verify Identity, enables a business to determine in real time whether a presented identity is valid or synthetic.

All that might be good for now, but Horvitz’s message is that none of it will win out.

He says the world needs to speed development of technology that guarantees digital-content provenance — a way to put a figurative reality watermark on recorded events, including the actions and words of individual people.
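The provenance idea can be illustrated with a minimal sketch: bind recorded content to its origin with a cryptographic tag at capture time, so anyone can later check that the content is unmodified. This is an assumption-laden toy, not the tooling Horvitz refers to; real provenance systems use public-key signatures and signed metadata, while an HMAC with a hypothetical device-held key stands in here.

```python
# Toy sketch of a content-provenance "reality watermark": a capture
# device tags content when it is recorded; verification later proves
# the content has not been altered since. DEVICE_KEY is hypothetical.
import hashlib
import hmac

DEVICE_KEY = b"secret-key-provisioned-to-the-camera"  # hypothetical

def stamp(content: bytes) -> bytes:
    """Produce a provenance tag over the content at capture time."""
    return hmac.new(DEVICE_KEY, content, hashlib.sha256).digest()

def verify(content: bytes, tag: bytes) -> bool:
    """Check the content is unmodified since it was stamped."""
    return hmac.compare_digest(stamp(content), tag)

frame = b"raw video frame bytes"
tag = stamp(frame)
genuine_ok = verify(frame, tag)            # True: untouched recording
tampered_ok = verify(frame + b"edit", tag) # False: any edit breaks the tag
```

The asymmetry is the point: a detector has to win against every new generator, while a provenance check only has to confirm one signature.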

Few of Horvitz’s recommendations to the defense establishment are surprising: Invest in its own research and development, follow security hygiene best practices, train employees, create its own networks to share information and experiences, prepare for the worst.

Legislation focused on provenance efforts in the civilian world — the Deepfake Task Force Act — was introduced in the Senate last summer. It would seek mechanisms for determining who created, and who subsequently manipulated, deepfake content.

 

This post was updated at 10:07am Eastern on May 6, 2022 to clarify that Horvitz does not think AI techniques will be reliable in the fight against deepfakes, and that digital content provenance tools will prove better. Also, Horvitz says he was not suggesting that the government should have a role in defending civilian systems against deepfakes; rather, it should be able to assure people in and out of government that its claims about what is genuine information are trustworthy.


Comments

One Reply to “Microsoft says DOD needs to be better at detecting synthetic identities. And do it sooner”

  1. In reality though, it seems like an approach akin to the one used for detecting Photoshop edits wouldn’t go awry – compression artifact ratios differ significantly between organically captured frames and generated frames. If you perform error level analysis on the frames of a deepfake, the generated portions show drastically differing levels.

    Accommodating for that in the generation stage would require a LOT of work, so I think this technique could at least be used as a stopgap under traditional development.
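The error-level-analysis idea the commenter describes can be sketched with a toy model. Assumptions to note: lossy compression is approximated here by rounding pixel values to a quantization grid, and the pixel data is a synthetic 1-D array; a real ELA pass would re-encode actual JPEG frames and compare the results.

```python
# Toy illustration of error level analysis (ELA): regions with a
# different compression history show different re-compression error.
# Lossy compression is simulated by rounding to a quantization grid.

STEP = 16  # simulated quantization step size

def quantize(pixels, step=STEP):
    """Simulate one pass of lossy compression by snapping to a grid."""
    return [step * round(p / step) for p in pixels]

def error_levels(pixels, step=STEP):
    """Per-pixel error introduced by compressing one more time."""
    return [abs(p - q) for p, q in zip(pixels, quantize(pixels, step))]

# An "organic" frame: already compressed once, so values sit on the grid.
organic = quantize(list(range(0, 128)))

# Splice in a "generated" patch that never went through that pipeline.
patch = [37, 91, 143, 200, 55, 78, 111, 13]
tampered = organic[:32] + patch + organic[40:]

levels = error_levels(tampered)
organic_err = sum(levels[:32]) / 32  # zero: re-compression is lossless here
patch_err = sum(levels[32:40]) / 8   # nonzero: the splice stands out
```

The untouched region re-compresses with no error, while the spliced region lights up — the same contrast ELA exposes between organically captured and generated portions of a deepfake frame.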
