Critics Say Apple Built a 'Backdoor' Into Your iPhone With Its New Child Abuse Detection Tools

Privacy advocates worry the new features could be a slippery slope.

Photo: STR/AFP (Getty Images)

Apple’s plans to roll out new features aimed at combating child sexual abuse material (CSAM) on its platforms have caused no small amount of controversy.

The company is basically trying to pioneer a solution to a problem that, in recent years, has stymied law enforcement officials and technology companies alike: the large, ongoing crisis of CSAM proliferation on major internet platforms. As recently as 2018, tech firms reported as many as 45 million photos and videos constituting child sex abuse material—a terrifyingly high number.

Yet while this crisis is very real, critics fear that Apple’s new features—which involve algorithmic scanning of users’ devices and messages—constitute a privacy violation and, more worryingly, could one day be repurposed to search for different kinds of material other than CSAM. Such a shift could open the door to new forms of widespread surveillance and serve as a potential workaround for encrypted communications—one of privacy’s last, best hopes.

To understand these concerns, we should take a quick look at the specifics of the proposed changes. First, the company will be rolling out a new tool that scans photos uploaded to iCloud from Apple devices for signs of child sex abuse material. According to a technical paper published by Apple, the new feature uses a “neural matching function,” called NeuralHash, to assess whether images on a user’s iPhone match known “hashes,” or unique digital fingerprints, of CSAM. It does this by comparing images shared with iCloud against a database of hashes of known CSAM imagery compiled by the National Center for Missing and Exploited Children (NCMEC). If enough matching images are discovered, they are flagged for review by human operators, who then alert NCMEC (which presumably tips off the FBI).
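
To make the threshold idea concrete, here is a minimal sketch in Swift of the matching logic as described above. Every name and value here is hypothetical: Apple’s actual system computes NeuralHash with a neural network and performs the comparison under cryptographic protections (private set intersection and threshold secret sharing), not with a plain set lookup like this.

```swift
// Hypothetical, simplified illustration only -- not Apple's implementation.
struct CSAMMatcher {
    let knownHashes: Set<String>   // stand-in for the NCMEC-derived hash database
    let reviewThreshold: Int       // how many matches before human review kicks in

    /// Returns the matching hashes, but only once the count crosses the threshold;
    /// below the threshold, nothing is surfaced at all.
    func matchesRequiringReview(uploadedHashes: [String]) -> [String]? {
        let matches = uploadedHashes.filter { knownHashes.contains($0) }
        return matches.count >= reviewThreshold ? matches : nil
    }
}

// Example: a single match alone is never reported; two matches cross the threshold.
let matcher = CSAMMatcher(knownHashes: ["hash-a", "hash-b"], reviewThreshold: 2)
if let flagged = matcher.matchesRequiringReview(uploadedHashes: ["hash-a", "hash-x", "hash-b"]) {
    print("Flag \(flagged.count) matching images for human review")   // -> 2 images flagged
}
```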

Some people have expressed concerns that their phones may contain pictures of their own children in a bathtub or running naked through a sprinkler or something like that. But, according to Apple, you don’t have to worry about that. The company has stressed that it does not “learn anything about images that do not match [those in] the known CSAM database”—so it’s not just rifling through your photo albums, looking at whatever it wants.

Meanwhile, Apple will also be rolling out a new iMessage feature designed to “warn children and their parents when [a child is] receiving or sending sexually explicit photos.” Specifically, the feature cautions children when they are about to send or receive an image that the company’s algorithm has deemed sexually explicit. The child gets a notification explaining that they are about to view a sexual image and assuring them that it is OK not to look at it (the incoming image remains blurred until the user consents to viewing it). If a child under 13 breezes past that warning and sends or receives the image anyway, an alert is subsequently sent to the child’s parent about the incident.
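
Laid out as logic, the flow the company describes looks roughly like the sketch below (Swift, with hypothetical names; the real feature relies on an on-device image classifier rather than a boolean flag): the image is blurred and the child warned first, and a parent is alerted only if a child under 13 chooses to go ahead anyway.

```swift
// Hypothetical sketch of the decision flow described above, not Apple's code.
enum ParentalAlert {
    case none
    case notifyParent
}

/// A flagged image is blurred and the child is warned before anything else happens;
/// the parent is alerted only if a child under 13 sends or views it anyway.
func alertAfterChildDecision(childAge: Int, childProceeded: Bool) -> ParentalAlert {
    return (childProceeded && childAge < 13) ? .notifyParent : .none
}

// Example: a 12-year-old taps through the warning, so the parent is notified.
print(alertAfterChildDecision(childAge: 12, childProceeded: true))   // notifyParent
// A 15-year-old who does the same triggers no parental notification.
print(alertAfterChildDecision(childAge: 15, childProceeded: true))   // none
```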

Suffice it to say, news of both of these updates—which will begin rolling out later this year with the release of iOS 15 and iPadOS 15—has not been met kindly by civil liberties advocates. The specific concerns vary, but in essence, critics worry that deploying such powerful new technology presents a number of privacy hazards.

In terms of the iMessage update, the concerns center on how encryption works, the protection it is supposed to provide, and what the update does to effectively circumvent that protection. Encryption protects the contents of a user’s message by scrambling it into unreadable ciphertext before it is sent, so intercepting the message is pointless because it can’t be read. However, because of the way Apple’s new feature is set up, communications with child accounts will be scanned for sexually explicit material before a message is encrypted. Again, this doesn’t mean that Apple has free rein to read a child’s text messages—it’s just looking for what its algorithm considers to be inappropriate images.
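
The objection becomes clearer when the ordering is spelled out. The sketch below (Swift, with entirely hypothetical function names) shows the sequence critics are pointing at: the scan runs on the plaintext, on the device, before the message is ever encrypted, so nothing about the encryption in transit is technically broken, yet the content has still been inspected on one “end.”

```swift
import Foundation

// Illustrative ordering only -- not Apple's real code. The point is simply that
// client-side scanning inspects content before the encryption step ever applies.
func sendMessage(plaintextImage: Data,
                 looksExplicit: (Data) -> Bool,
                 encrypt: (Data) -> Data,
                 transmit: (Data) -> Void) {
    // 1. The scan happens on the unencrypted content, on-device.
    if looksExplicit(plaintextImage) {
        print("Warn the child and blur the preview (hypothetical hook)")
    }
    // 2. Only afterward is the payload encrypted and handed off for delivery.
    //    Encryption still protects the message in transit, but the content has
    //    already been examined on the sender's "end" of the conversation.
    transmit(encrypt(plaintextImage))
}

// Example wiring with stub closures, just to show the call shape:
sendMessage(plaintextImage: Data(),
            looksExplicit: { _ in true },
            encrypt: { $0 },
            transmit: { _ in })
```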

However, the precedent set by such a shift is potentially worrying. In a statement published Thursday, the Center for Democracy and Technology took aim at the iMessage update, calling it an erosion of the privacy provided by Apple’s end-to-end encryption: “The mechanism that will enable Apple to scan images in iMessages is not an alternative to a backdoor—it is a backdoor,” the Center said. “Client-side scanning on one ‘end’ of the communication breaks the security of the transmission, and informing a third-party (the parent) about the content of the communication undermines its privacy.”

The plan to scan iCloud uploads has similarly riled privacy advocates. Jennifer Granick, surveillance and cybersecurity counsel for the ACLU’s Speech, Privacy, and Technology Project, told Gizmodo via email that she is concerned about the potential implications of the photo scans: “However altruistic its motives, Apple has built an infrastructure that could be subverted for widespread surveillance of the conversations and information we keep on our phones,” she said. “The CSAM scanning capability could be repurposed for censorship or for identification and reporting of content that is not illegal depending on what hashes the company decides to, or is forced to, include in the matching database. For this and other reasons, it is also susceptible to abuse by autocrats abroad, by overzealous government officials at home, or even by the company itself.”

Even Edward Snowden chimed in to criticize the plan on Twitter.

The concern here obviously isn’t Apple’s mission to fight CSAM; it’s the tools that it’s using to do so—which critics fear represent a slippery slope. In an article published Thursday, the privacy-focused Electronic Frontier Foundation noted that scanning capabilities like Apple’s could eventually be repurposed to hunt for other kinds of images or text—which would essentially amount to a workaround for encrypted communications, one designed to police private interactions and personal content. According to the EFF:

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.
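
To put the EFF’s point in concrete terms: in a system like the simplified matcher sketched earlier, nothing in the matching code itself is specific to CSAM; what gets scanned, and whose accounts get scanned, is set entirely by configuration. (Again, this is a hypothetical illustration, not Apple’s code.)

```swift
// Hypothetical configuration, continuing the earlier sketch. The matching logic
// never changes; only these inputs would need to, under outside pressure.
struct ScanConfiguration {
    var hashDatabase: Set<String>     // today: NCMEC CSAM hashes; tomorrow: any hash list
    var scanChildAccountsOnly: Bool   // flip to false and every account is in scope
}
```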

Such concerns become especially germane when it comes to the features’ rollout in other countries—with some critics warning that Apple’s tools could be abused and subverted by corrupt foreign governments. In response to these concerns, Apple confirmed to MacRumors on Friday that it plans to expand the features on a country-by-country basis. When it does consider distribution in a given country, it will do a legal evaluation beforehand, the outlet reported.

In a phone call with Gizmodo on Friday, India McKinney, director of federal affairs for the EFF, raised another concern: because neither tool can be audited, it’s impossible to independently verify that they work the way they’re supposed to.

“There is no way for outside groups like ours or anybody else—researchers—to look under the hood to see how well it’s working, is it accurate, is this doing what it’s supposed to be doing, how many false positives are there,” she said. “Once they roll this system out and start pushing it onto the phones, who’s to say they’re not going to respond to government pressure to start including other things—terrorism content, memes that depict political leaders in unflattering ways, all sorts of other stuff.” Relatedly, in its article on Thursday, the EFF noted that one of the technologies “originally built to scan and hash child sexual abuse imagery” was recently retooled to create a database run by the Global Internet Forum to Counter Terrorism (GIFCT), which now helps online platforms search for and moderate or ban “terrorist” content centered on violence and extremism.

Because of all these concerns, a cadre of privacy advocates and security experts has written an open letter to Apple, asking that the company reconsider its new features. As of Sunday, the letter had over 5,000 signatures.

However, it’s unclear whether any of this will have an impact on the tech giant’s plans. In an internal company memo leaked Friday, Apple software VP Sebastien Marineau-Mes acknowledged that “some people have misunderstandings and more than a few are worried about the implications” of the new rollout, but said the company will “continue to explain and detail the features so people understand what we’ve built.” Meanwhile, NCMEC sent a letter to Apple staff in which it referred to the program’s critics as “the screeching voices of the minority” and championed Apple for its efforts.
