The philosophy of deepfakes: Has the camera always lied?

Massacre des dominicains d'Arcueil, route d'Italie no. 38, le 26 mai 1871 à 4 heures et demie (Massacre of the Dominicans of Arcueil, Route d'Italie no. 38, 26 May 1871 at half past four), photomontage produced by Eugène Appert, 1871. Credit: Synthese (2023). DOI: 10.1007/s11229-023-04097-3

Fake footage is a problem more than a century old that began long before digital technology, according to a Leeds academic.

A new research paper published in the philosophy journal Synthese argues that deepfakes aren't as unprecedented as they may seem, as photos and videos have been the subject of manipulation for more than a century.

Instead of seeing deepfakes as a sign of a digital apocalypse, philosopher Dr. Joshua Habgood-Coote says we should be focusing on the insidious social problems behind the videos, such as misogyny and racism.

Dr. Habgood-Coote, Research Fellow at the University of Leeds' School of Philosophy, Religion and History of Science and author of the paper, said, "There's been a lot of public debate about deepfakes, which has become increasingly apocalyptic, pushing the idea that reality and truth are being lost in a technological dystopia."

"We should think about deepfakes as a social rather than technological problem and focus on the way deepfakes will be used against marginalized groups."

In recent weeks, images of Pope Francis wearing a puffer jacket and Donald Trump being arrested have flooded social media, with many showing telltale signs of digital manipulation, from extra limbs to distorted hands.

The camera lies

Photo manipulation didn't begin with the release of Photoshop in 1990, but more than a century earlier, in the late 1800s.

Dr. Habgood-Coote said, "The idea of a golden period in which recordings unproblematically represented the world is a fiction."

The research highlights several examples of this:

  • Famous photos from the 1860s claimed to capture real spirits, including the ghost of Abraham Lincoln. The photographer, William Mumler, was accused of fraud, and after a sensational trial in 1869 a columnist for the New York World wrote: "Who, henceforth, can trust the accuracy of a photograph?"


  • Footage of the Spanish-American War in 1898 involved widespread fakery, including staged cavalry charges and the use of toy boats to represent naval engagements.

  • The New York Evening Graphic notoriously produced staged photos to illustrate news stories between 1924 and 1932, often printing sexualized pictures of women.

  • Until the 1980s, film and development processes were calibrated to accurately depict only white skin, often distorting the skin tones of anyone with darker skin and hiding their faces in artificial shadows.

Social vs. technological

Dr. Habgood-Coote argues that, because deepfakes are not unprecedented, the panic around the technology is unjustified.

He said, "Making a deepfake isn't as simple as pressing a button—to create a convincing deepfake requires a significant level of expertise, large image sets of the target and considerable computing power."

We should instead be thinking about media reform and institutional political change to combat the social problems behind deepfakes, according to his research.

Deepfake pornography has become a huge cause for concern, with celebrities frequently being edited into sexual videos without their consent. As this disproportionately affects women, Dr. Habgood-Coote says this form of pornography says more about misogyny than technology.

Dr. Habgood-Coote said, "We can see the threat of pornographic deepfakes is not that they persuade anyone that they are a genuine depiction of the target woman... but is rather their ability to effectively spread sexist propaganda."

We should also be concerned about the effect of deepfakes on racialized groups, according to the research.

For example, a Black "virtual influencer," Shudu Gram, sparked controversy in 2018 after it was revealed that all images of "Shudu" were computer-generated by a white creator, according to the New Yorker. With nearly a quarter of a million Instagram followers, "Shudu" is often shown advertising products for huge brands including Samsung, Louis Vuitton and Fila.

In recent days it has also been announced that Levi's will begin testing AI-generated models to diversify the shopping experience on its website, showing clothes on models of different body types, ages, sizes and skin tones.

"These examples raise important questions about how deepfakes will be used to fetishize, commodify and dehumanize marginalized groups," Dr. Habgood-Coote said.

More information: Joshua Habgood-Coote, Deepfakes and the epistemic apocalypse, Synthese (2023). DOI: 10.1007/s11229-023-04097-3

