Apple Takes a Bite Out of User Privacy
Apple will use a new scanning tool to detect images of child sexual abuse, but experts warn of its ramifications for user privacy.
Tech giant Apple will use a new tool, neuralMatch, to scan photo libraries on US iPhones for known images of child sexual abuse.
In a move that has surprised many, Apple will scan images on iPhones and compare them against a database of known child abuse imagery before they are uploaded to the company’s iCloud Photos online storage. If a match is found, the image will be automatically flagged and then manually reviewed by Apple staff. If child abuse material is confirmed, Apple will disable the user’s account and notify the National Center for Missing and Exploited Children (NCMEC).
This measure would also mark the first time the company examines the content of end-to-end encrypted messages sent via iMessage.
Apple has assured its users that ordinary children's photos, such as a child taking a bath or swimming in a pool, will not be flagged, because neuralMatch only flags images that match ones already in the NCMEC database.
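Apple's actual NeuralHash algorithm is proprietary, but the general idea of fingerprint-based matching can be sketched with a toy perceptual "average hash." Everything below is a simplified illustration, not Apple's implementation; the function and variable names are hypothetical.

```python
# Toy perceptual "average hash": NOT Apple's NeuralHash, only a sketch
# of matching images by compact fingerprint rather than raw pixels.

def average_hash(pixels):
    """Hash a flat list of grayscale pixel values: one bit per pixel,
    set if the pixel is brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def matches_database(image_pixels, known_hashes):
    """Flag the image only if its fingerprint is in the known set."""
    return average_hash(image_pixels) in known_hashes

# Hypothetical set of fingerprints of known abuse imagery (NCMEC-style).
known_hashes = {average_hash([12, 240, 8, 250])}

print(matches_database([12, 240, 8, 250], known_hashes))  # True: in database
print(matches_database([200, 10, 220, 5], known_hashes))  # False: not flagged
```

The point of the design is that only fingerprints, not the photos themselves, need to be compared, which is why an ordinary family photo with no counterpart in the database produces no match.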
The company’s move comes years after other tech companies, such as Microsoft, Google, and Facebook, began sharing digital fingerprints of known child sexual abuse images with the government. Yet Apple’s decision to scan images directly on users’ devices, rather than only on its cloud services, is unprecedented, and it has raised concerns about user privacy.
O Privacy, Where Art Thou?
Following Apple’s move, many have expressed concerns about its ramifications for user privacy.
Researchers are concerned about the other purposes to which the matching tool could be turned. For instance, it could supply governments with a backdoor to monitor dissidents and protesters, scanning their phones’ messages and reporting their contents without any bureaucratic hurdles.
Another alarming possibility is framing innocent people. Matthew Green, a Johns Hopkins University researcher, criticized the system for its vulnerability to manipulated images: seemingly harmless pictures could be engineered to trigger matches with child abuse images once received, and he stressed that researchers have already demonstrated such attacks against similar systems. Such practices could prove catastrophic in the long run and threaten the livelihoods of many.
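The attack Green describes rests on hash collisions: any lossy perceptual hash maps many images to the same fingerprint, so an attacker could craft an innocuous-looking image whose fingerprint matches a flagged one. A toy illustration with a simplistic "average hash" (again, not Apple's actual algorithm; the example images are invented):

```python
# Collision demo with a toy perceptual "average hash" (NOT NeuralHash):
# two different images, one identical fingerprint.

def average_hash(pixels):
    """One bit per grayscale pixel: set if brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

flagged_image = [10, 200, 10, 200]   # hypothetical image in the database
crafted_image = [50, 180, 60, 170]   # different pixel values entirely

# Both hash to (0, 1, 0, 1), so the crafted image would trigger a match.
print(average_hash(flagged_image) == average_hash(crafted_image))  # True
```

Real perceptual hashes are far harder to collide than this toy, but researchers have published working collisions against deployed systems, which is the basis of Green's warning.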
Apple, previously one of the world’s leading companies in data and privacy protection, was among the first major tech companies to deploy end-to-end encryption at scale, scrambling messages so that only the sender and recipients can read them. Yet the company has long been under government pressure to increase surveillance of encrypted data. Now, it seems, Apple has finally cracked under that pressure, despite its reassurances that these measures will not be used for mass surveillance or monitoring.
The Electronic Frontier Foundation, an online civil liberties pioneer, described Apple’s decision as "a shocking about-face for users who have relied on the company’s leadership in privacy and security."