Analysis

Kate royal photo: Furore around Princess of Wales photo hints at looming ethical quagmire surrounding AI images

As it becomes easier to edit and manipulate photos thanks to Artificial Intelligence (AI), the question of what is real and what is fake is becoming increasingly difficult to answer

“Pictures, or it didn’t happen.” It is a familiar refrain uttered by those seeking proof to verify a story or event. But the proof is proving increasingly problematic.

The extraordinary, self-inflicted public relations disaster by Kensington Palace last week is a case in point. What was intended as an innocent and cheerful photograph, capturing the Princess of Wales and her children marking Mother’s Day, quickly turned into a full-blown crisis, as one respected picture agency after another pulled the photo, citing concerns over its manipulation.


Beyond Catherine’s admission that she “occasionally experiments with editing”, it remains unclear how the photograph was edited, and what tools or software were used to assist with the creation of the final image that was subsequently circulated around the world. Kensington Palace has so far declined to release the original, unedited image – a decision that has invited all manner of investigative work, both by specialists and excitable amateur sleuths.

Some experts have pointed to inconclusive but suggestive signs that AI might have been used in finalising the apparently innocuous image. Dev Nag, founder of the AI chatbot system QueryPal, singled out the left arm of the top worn by Princess Charlotte, noting there seemed to be a strange texture “floating ahead of the top” of the sleeve. That kind of anomaly, he said, was consistent with the use of the generative AI tool natively integrated into Adobe’s Photoshop software.

Others, however, have suggested the inconsistencies within the picture were more likely to have been the result of some sloppy editing, and dismissed any indication of generative AI’s visibility. “I think it is unlikely that this is anything more than a relatively minor photo manipulation,” said Hany Farid, a professor at the University of California, Berkeley, who specialises in digital forensics and image analysis, and was among several experts to run checks on the royal photograph. “There is no evidence that this image is entirely AI-generated.”

Amid an ongoing firestorm of conspiracy theories, it is unlikely that even the most considered, professional rebuttals will prove to be the final word on the matter. Ultimately, it is a story about trust, credibility and authenticity. One element of that has focused on the impact on the monarchy – an institution where visibility is everything – and its uncertain attempts to reassure the public about Catherine’s health while maintaining her privacy. It has also raised searching questions around how the media sources and verifies images.

But such issues also feed into wider concerns about the authenticity of photographs, and the ease with which they can be manipulated. Even if AI was not used in the photograph taken by the Prince of Wales, its rapid growth and integration means that such fears are becoming more prevalent, raising a fundamental question – can we really trust what we see with our own eyes?

The Princess of Wales admitted after this photograph had been released that she had made some changes to the image (Picture: Prince of Wales/Kensington Palace/PA Wire)

Thanks to modern technology, it is harder than ever before to discern which images are real, which ones are fake, and which occupy an ever-shifting middle ground. Some specialists who research machine learning believe advances in AI mean it is only a matter of time before the kind of clumsy edits made to the royal photo can be improved upon with ease by algorithms.

“People are asking the right questions,” said David Bau, an assistant professor at the Khoury College of Computer Sciences at Northeastern University in the US, whose work includes researching AI and ‘deep’ networks. “People have caught inconsistencies in that photo. But they’re the kind of inconsistencies that would show up if you use traditional photo editing software to manipulate the image in Photoshop.

“Some of these inconsistencies are the kinds of things that an AI might be able to do better. And in the future, people using AI tools may actually be able to make edits, without too much effort, that are less detectable. So I think that the kind of concern that is being raised is how can we trust photos if they might be manipulated?”


There was a time, not so long ago, when such tinkering would require expensive software and even hardware. But the ability to call on AI to make edits is becoming an increasingly common feature packaged with everyday consumer technology. One of the hallmarks of Google’s latest Pixel smartphone range is ‘Best Take’, a feature driven by a combination of different AI models. Together, they analyse images, check timestamps to find sequential photos, search for signals such as poses and facial expressions, and then suggest the combination that creates the best composite. The resultant image may not be entirely fake, but it uses real photographs to create something that isn’t real.

Samsung has also gone to considerable lengths to explain the generative AI used in its Galaxy phone cameras, explaining how the feature helps with filtering, modification and optimisation to remove unwanted shadows and reflections. Defending its use of AI, Patrick Chomet, the tech firm’s head of customer experience, told TechRadar earlier this year there is “no such thing” as a real picture. “You can try to define a real picture by saying ‘I took that picture’, but if you used AI to optimise the zoom, the autofocus, the scene – is it real?” he stressed. “Or is it all filters? There is no real picture, full stop.”

Similar controversies surround Photoshop, software used by tens of millions of creative professionals worldwide. Some have heralded the generative AI features introduced last year as a game-changing advance that makes it easier to edit images. But their pitfalls were exposed earlier this year when an Australian news network used an image of a female MP that had been edited to reveal her midriff and make her breasts look bigger. The programme’s news director blamed Photoshop’s “automation”, although Adobe later stressed the changes in question would have required “human intervention” and approval.

Some have speculated that such unfortunate edits could be, in part, the consequence of inherent biases in the AI systems, which perpetuate harmful stereotypes and encourage sexualised content. It is among a host of other ethical concerns surrounding the use of AI to digitally manipulate pictures, such as copyright infringement, privacy breaches, fake news, and the impact on the employment opportunities available to photographers and editors.

However, some of the concerns are more pressing than others. Over the past year, there have been instances of some firms deliberately using AI to create explicit, non-consensual pornographic images. A slew of so-called deepfake photographs that circulated in the town of Almendralejo in southern Spain used images of school-age children taken from their Instagram accounts, altering them to make it appear as if they were naked. The fake photos were created using ClothOff, an app that has been linked to a similar case in New Jersey. Such horrific and extreme cases may be rare, but they point to how dangerous the ever-improving AI tech can be if wielded by those intent on causing harm.

So what, if anything, can be done? The use of AI is here to stay, even while there are growing calls for greater government regulation of the space. In the meantime, major players in the photography industry are taking steps to bolster public confidence. Sony, Canon, and Nikon have all promised that a feature known as image authentication will soon be rolled out across some of their professional camera ranges, with the firms having agreed upon a global standard for digital signatures, which make it easier to identify how and when a photograph was taken, and by whom.

That innovation, though welcome, will not solve everything, especially at a time when most people use mid-range smartphones to both take pictures and edit them. Farid is among those who believe the use of such credentialing protocols should be more widespread, and pointed to a metadata-like scheme known as ‘content credentials’ developed by the Coalition for Content Provenance and Authenticity. He described it as the equivalent of a food label, which can help people understand where an image came from, and how it was created.

“The same technology is already part of Photoshop and other editing programs, allowing edit changes to a file to be logged and inspected,” he told Time. “All that information pops up when the viewer clicks on the ‘cr’ icon, and in the same clear format and plain language as a nutrition label on the side of a box of cereal.” Significantly, he also said that were the technology fully in use today, photo editors across newsrooms in media outlets around the world could have instantly reviewed the credentials of the royal photograph.


But even so, would that necessarily have settled the debate about whether the image of the Princess of Wales and her children was real or fake, and the point at which editing becomes manipulation? The line between the two is growing ever more blurred, and not just thanks to the tech – some AI developers are trying to change the rhetoric around its use. Take Google’s Magic Editor, for example, which promises users the ability to “reimagine” their images.

The focus at the moment may still be on the Princess of Wales and her wayward editing skills, but as the ability to change images becomes easier by the day, she will not be the last public figure to find herself at the centre of a debate over whether we can trust what we see.
