Tech Lab

Kate Middleton’s photo fumble shows humans aren’t always that easy to fool

Kate, Princess of Wales, smiled as she spoke to a woman during her visit to Sebby's Corner in north London on Nov. 24. (Frank Augstein/Associated Press)

Catherine, Princess of Wales, is learning the hard way that digital photo editing is harder than it looks.

That’s too bad for Kate, but good news for those who fear that the latest digital tools have made it too easy to deceive the public with fake videos, photos, and audio files. It turns out humans aren’t quite that easy to fool.

You know the story. The British royal — also known as Kate Middleton — sought to quell concerns about her health after an extended period out of the public eye. So on Sunday she released a photo of herself and her family that was shared by millions across the internet. Then sharp-eyed viewers realized that the image had been digitally altered, and rather badly. Soon, the image and its creator were the butt of jokes from Liverpool to Leominster, and Kate was forced to apologize for a bad digital editing job.

Here’s hoping people will be equally attentive this election year, as partisans of the left and the right attempt to deceive us with fake visuals, hand-tailored via Photoshop or mass-produced through artificial intelligence services like DALL-E or Stable Diffusion.

It’s already begun. Last year, Donald Trump-haters posted phony images of the former president wrestling with police as they tried to arrest him. And this month, the BBC reported on a spate of fake online images showing Trump surrounded by admiring Black people.

Some of this stuff may come from foreign adversaries looking to undermine our way of life. But much of it is generated by domestic actors. Some are looking to sway voters, while others will do it just for laughs. It’s got many election watchers worried that fake imagery and audio could alter the outcome of this year’s campaigns.

And yet it’s far from obvious that people will be fooled. The Trump arrest photo was obvious nonsense; had it really happened, the story would have dominated the news worldwide. The pictures of Trump with Black supporters are more plausible; he really does have Black supporters, after all. But the BBC investigators quickly spotted the telltale flaws, like too-shiny skin and missing fingers. Soon, they tracked down the people who created the fake images.

You don’t have to be a pro to spot phony images. They’ve become so common that many of us can recognize them at a glance, making us instantly suspicious of whatever message the artist is trying to convey.

Of course, the technology will get steadily better, and spotting AI-generated fakes will become ever more difficult. The AI vendors can help by building in digital watermarks that identify computer-generated photos. But for now, we’re mostly on our own. There are obvious defenses that sensible people should always use when evaluating videos, audio clips, or still images that show something unexpected or shocking.
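
For readers curious how an invisible watermark can work at all, here is a toy sketch in Python, using the Pillow imaging library, that hides a short marker in the least-significant bits of an image’s pixels. To be clear, this is not how any actual AI vendor marks its images; real provenance schemes, such as C2PA content credentials, are far more robust and are built to survive cropping and compression. The marker string and function names here are invented for illustration.

# Toy illustration of an invisible watermark: hide a short tag in the
# least-significant bit of each pixel's red channel. This is NOT how any
# real AI vendor marks images; it only sketches the concept.
from PIL import Image

TAG = "AI-GEN"  # hypothetical marker string

def embed_tag(in_path, out_path):
    img = Image.open(in_path).convert("RGB")
    px = img.load()
    bits = "".join(f"{b:08b}" for b in TAG.encode())
    width = img.size[0]
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the lowest red bit
    img.save(out_path, "PNG")  # must be lossless, or the hidden bits are destroyed

def read_tag(path, length=len(TAG)):
    img = Image.open(path).convert("RGB")
    px = img.load()
    width = img.size[0]
    bits = "".join(str(px[i % width, i // width][0] & 1) for i in range(length * 8))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()

A marker this naive is erased simply by re-saving the file as a JPEG, which is exactly why the watermarks vendors are actually building take much more durable forms.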

First, consider the source. A picture that originated with the Associated Press will be a lot more trustworthy than a posting from some stranger in Jersey City. Next, ask yourself, “How likely is this to be true?” Last month, when a computer-generated imitation of Joe Biden’s voice urged people in New Hampshire not to vote in the state’s primary, anyone with common sense should have smelled a rat. Finally, ask, “Is anyone else reporting this?” Any really significant event will be covered by multiple sources. If only one website or X account is displaying an image, it’s best to reserve judgment and avoid sharing it with others.
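
One more habit worth forming: glance at a file’s metadata before trusting it. A “Software” field naming a photo editor is a clue, never proof, and metadata can be stripped or forged, so a clean result proves nothing. As a sketch of how a technically inclined reader might do this check with Pillow (the file name below is hypothetical):

# Quick first-pass check: list common EXIF fields from an image file.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path):
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

info = exif_summary("family_photo.jpg")  # hypothetical file name
for field in ("Software", "DateTime", "Make", "Model"):
    print(field, "->", info.get(field, "(not present)"))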

And of course, before sharing that photo or video that “proves” Trump is a Russian agent or Joe Biden is in the pay of China, take a second, closer look. You might be amazed at how easily you can spot the flaws when you pay attention.

Just ask Kate.


Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him @GlobeTechLab.