CUPERTINO, Calif. — Apple briefed academics this week about its plans to install software on iPhones sold in the U.S. “to scan for child abuse imagery.”
The initiative was reported today by the Financial Times, which spoke to people briefed on Apple’s plans and said the move is “raising alarm among security researchers who warn that it could open the door to surveillance of millions of people’s personal devices.”
Apple’s proposed system, neuralMatch, would “proactively alert a team of human reviewers if it believes illegal imagery is detected, who would then contact law enforcement if the material can be verified.”
The Financial Times noted that “the scheme will initially roll out only in the US.”
Security researchers told the financial publication that while they may be “supportive of efforts to combat child abuse,” they are nevertheless “concerned that Apple risks enabling governments around the world to seek access to their citizens’ personal data, potentially far beyond its original intent.”
‘An Absolutely Appalling Idea’
Ross Anderson, professor of security engineering at the University of Cambridge, called neuralMatch “an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of […] our phones and laptops.”
According to the report, “although the system is currently trained to spot child sex abuse, it could be adapted to scan for any other targeted imagery and text, for instance, terror beheadings or anti-government signs at protests, say researchers. Apple’s precedent could also increase pressure on other tech companies to use similar techniques.”
Matthew Green, a security professor at Johns Hopkins University, warned about the expansive implications of such a technology.
“This will break the dam — governments will demand it from everyone,” Green noted.
The Financial Times described how intrusive the new technology would be: “Apple’s neuralMatch algorithm will continuously scan photos that are stored on a US user’s iPhone and have also been uploaded to its iCloud back-up system. Users’ photos, converted into a string of numbers through a process known as ‘hashing,’ will be compared with those on a database of known images of child sexual abuse.”
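To illustrate the hash-and-compare step the FT describes, here is a minimal sketch in Python. It is not Apple’s implementation: the `KNOWN_HASHES` set, the use of a SHA-256 cryptographic hash, and the function names are all assumptions made for illustration. Apple’s system reportedly relies on a perceptual hash, which matches visually similar images rather than exact byte-for-byte copies.

```python
import hashlib

# Illustrative stand-in for a database of hashes of known abuse imagery.
# In the reported system this database comes from child-safety organizations;
# the entry below is a placeholder, not a real record.
KNOWN_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}


def hash_photo(photo_bytes: bytes) -> str:
    """Convert a photo into a fixed-length string of numbers ("hashing").

    A cryptographic hash is used here only for simplicity; a perceptual
    hash, as reportedly used by neuralMatch, also matches near-duplicates.
    """
    return hashlib.sha256(photo_bytes).hexdigest()


def is_suspect(photo_bytes: bytes) -> bool:
    """Compare the photo's hash against the database of known images."""
    return hash_photo(photo_bytes) in KNOWN_HASHES
```

A design note on the simplification: an exact cryptographic hash like the one above is defeated by trivially re-encoding or resizing an image, which is why real systems in this space use perceptual hashing instead.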
How the Algorithm Was Trained
Whether this private system deems an image on a user’s device legal or illegal depends on how Apple has configured the algorithm. In this case, the FT reported, “the system has been trained on 200,000 sex abuse images collected by the US non-profit National Center for Missing and Exploited Children.”
“According to people briefed on the plans, every photo uploaded to iCloud in the US will be given a ‘safety voucher’ saying whether it is suspect or not,” the report added. “Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and, if apparently illegal, passed on to the relevant authorities.”
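The “safety voucher” and threshold mechanism described above can likewise be sketched in a few lines of Python. This is a hypothetical illustration: the threshold value, the data structure, and the function names are assumptions, since the report says only that review is triggered “once a certain number of photos are marked as suspect.”

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical threshold; the report only says "a certain number" of
# suspect photos triggers review, without giving a value.
SUSPECT_THRESHOLD = 10


@dataclass
class SafetyVoucher:
    photo_id: str
    suspect: bool  # whether the photo's hash matched the known-image database


def issue_vouchers(photos: dict[str, bytes],
                   is_suspect: Callable[[bytes], bool]) -> list[SafetyVoucher]:
    """Attach a safety voucher to every photo uploaded to the cloud backup."""
    return [SafetyVoucher(pid, is_suspect(data)) for pid, data in photos.items()]


def should_escalate(vouchers: list[SafetyVoucher]) -> bool:
    """Enable decryption and human review only once enough photos are flagged."""
    return sum(v.suspect for v in vouchers) >= SUSPECT_THRESHOLD
```

The point of the threshold, as the report describes it, is that no single flagged photo triggers action; only an accumulation of suspect vouchers allows Apple to decrypt and review the material.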
The report does not specify what safeguards would apply in the case of a mistake or “false positive,” when the algorithm identifies a piece of legal content as child sexual abuse material (CSAM) and law enforcement is compelled to act, or who would bear liability for a life- and reputation-destroying misidentification.