
How to spot a deepfake, according to experts who clocked the fake persona behind the Hunter Biden dossier

Samantha Lee/Business Insider

  • Deepfakes — highly convincing computer-generated imagery and video — pose a growing problem to democracy.
  • Consider the attempts to sow doubt about President-elect Joe Biden's suitability: a mysterious dossier alleging dubious ties between his son Hunter Biden and China began circulating in September.
  • A researcher noticed in October that the main author of the report was a made-up person, whose image had been generated by artificial intelligence.
  • We spoke to the researcher, Elise Thomas, and other experts about how you can spot deepfakes.
  • Visit Business Insider's homepage for more stories.

A shocking dossier intended to detonate a bomb under Joe Biden's presidential campaign was defused after a researcher spotted its author was a computer-generated deepfake.

A document penned by Typhoon Investigations began circulating in right-wing circles in September, alleging compromising ties between Biden's son Hunter Biden and China.

But "Martin Aspen", the document's purported author, isn't real. His likeness was produced by a generative adversarial network (GAN), a branch of artificial intelligence, and the report's allegations were baseless.

martin aspen
Martin Aspen, the AI-generated author of a fake Hunter Biden dossier. Twitter

Disinformation researchers have warned that deepfake personas like Martin Aspen pose a threat to democracy, though until now the threat has been minimal. We've seen convincing deepfakes of Trump and Obama, though neither was used for nefarious political purposes.


The Martin Aspen incident is something else — if political fakery is really on the rise, how do we protect ourselves?

There are tell-tale signs when a neural network has produced a fake image

First, it's helpful to understand how these images are created.

In a GAN, two neural networks compete against each other: a generator produces fake images, while a discriminator tries to tell the generator's output apart from real images. Each network learns from the other's successes, until the generator's fakes become indistinguishable from the real thing.
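That adversarial tug-of-war can be sketched with a toy example. In this minimal, hypothetical Python sketch, the "real" data is just numbers drawn from a normal distribution, the generator is a linear map of noise, and the discriminator is logistic regression; real face GANs use deep convolutional networks and millions of photos, but the training loop has the same shape. All names and hyperparameters here are our own choices for illustration.

```python
# Toy 1-D GAN sketch (illustrative only; real image GANs use deep conv nets).
# "Real" data: samples from N(4, 1.25). Generator: g(z) = a*z + b on noise z.
# Discriminator: d(x) = sigmoid(w*x + c). Both trained with plain SGD using
# hand-derived gradients of the standard GAN objectives.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for step in range(3000):
    real = rng.normal(4.0, 1.25, batch)   # samples from the real distribution
    z = rng.normal(0.0, 1.0, batch)       # noise fed to the generator
    fake = a * z + b                      # generated ("fake") samples

    # Discriminator step: ascend log d(real) + log(1 - d(fake))
    s_r = sigmoid(w * real + c)
    s_f = sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Generator step: ascend log d(fake), i.e. try to fool the discriminator
    s_f = sigmoid(w * fake + c)
    dfake = (1 - s_f) * w                 # gradient of log d(fake) w.r.t. fake
    a += lr * np.mean(dfake * z)
    b += lr * np.mean(dfake)

gen_mean = float(np.mean(a * rng.normal(0, 1, 10000) + b))
print(round(gen_mean, 2))  # should drift toward the real mean of 4.0
```

After a few thousand alternating updates, the generator's output distribution should drift toward the real data's mean: it has learned to produce samples the discriminator can no longer confidently reject, which is exactly what happens, at vastly greater scale, with faces.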

GANs have become very good at creating lifelike images of people — but they're not infallible. Check out this weird "dog ball" generated by a trio of researchers in 2019:

DogBall
A weird "dogball", produced by a GAN. Andrew Brock/Github

But GANs have improved significantly, to the extent where the technology can generate fairly convincing human faces:

AI bloke
This man is not real. This Person Does Not Exist

"While these generative adversarial networks can be really good, and they learn from their own 'mistakes' so they get better over time, there are certain contextual things they cannot understand," said Agnes Venema, a Marie Curie research fellow, working on a project at the Romanian National Intelligence Academy and at the Department of Information Policy and Governance of the University of Malta.

Here's how to spot when an image isn't exactly a real person.

Background details can be telling

Martin Aspen clothing
Look for details, like clothing, being vague. Twitter

"Key giveaways for GAN-created faces tend to be vague, out of focus backgrounds, or weird textures," said Elise Thomas, the researcher at the Australian Strategic Policy Institute who first outed Aspen as an AI fraud.


"Sometimes they look like they're borrowed from other things," she added. "Like a shirt which looks like it has the texture of a plant." Aspen's odd green clothes were a dead giveaway.

It's all in the eyes

Martin Aspen eye
Martin Aspen's eye contained a second pupil, visible if you zoom in. Twitter

The key tell that Aspen was the product of computer code, rather than a real person, was simple once you zoomed into the eyes. "You do sometimes see the irregular irises, as the Martin Aspen picture had," said Thomas.

The irises get close to being realistic, but often bleed or blur in a way that isn't natural. In the case of the faked image of Martin Aspen, there's a second pupil in one iris, which is only visible when you zoom in and analyze the image in detail.

Check the ears, too

Martin Aspen ear
Computers struggle to generate convincing ears. Twitter

Computers don't have ears, and when confronted with the ear's curious mix of cartilage and skin, they struggle to understand what's going on anatomically. "Sometimes there are areas of a deepfake that the GAN has not been able to train so well on to make it look natural," said Venema.


Ears are often partially hidden by hair in photographs, so there's less training data to get them right. The wonky ears were the giveaway in Aspen's photograph, though with images of women it's often an inability to render earrings logically that makes it obvious something's amiss.

Hairlines are often a giveaway

Martin Aspen hair
Martin Aspen's receding hairline cast weird shadows. Twitter

For those focused on the ravages of ageing, the hairline is the first thing they look at – and it can help identify deepfaked images of people, too. "It's the inconsistencies that are very difficult to spot, but can be there, like fuzzy hairlines," said Venema.

GANs often struggle with shadows too, and the image of the fake intelligence analyst had issues there as well. There's an odd patch by Aspen's left temple where thinning grey hair casts dark brown shadows that it shouldn't.

How to become better at spotting deepfakes

If you're keen to stay away from disinformation in the coming days ahead of the election – or in the coming years, come what may – then Thomas recommends visiting Which Face is Real, a website that shows you a real and a computer-generated face, and tries to help train people to spot the common issues with AI-generated ones.


"It really helps to get your eye in for GAN faces," she said. "It's pretty incredible how good it's become in the last couple of years, given that this technology didn't exist until quite recently and is now available to almost anyone."

However, Thomas mixes that awe with fear. "I also question whether we really want it to get so good that no one can tell whether it's real or not," she said. "It's hard to see how the benefits of that would outweigh the inevitable misuse of it."
