Apple will scan iPhones and iPads for child sex abuse images

Apple has announced plans to start scanning iPhones and iPads in the US for evidence of child abuse imagery.

The tech giant will use on-device machine learning technology to scan users’ pictures and match them against a database of known child sexual abuse material (CSAM).

The matching happens automatically on the device, without Apple itself seeing a customer’s pictures.

However, if the system flags a potential match, a human reviewer will examine it and, if confirmed, pass it on to the relevant authorities.

In a statement on its website, Apple clarified:

New technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
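
In simplified terms, the system compares a fingerprint of each photo against a list of fingerprints of already-known abuse images before the photo is uploaded. The sketch below illustrates only that basic idea; it is not Apple’s implementation, which uses a perceptual image hash and the private set intersection protocol described above rather than a plain hash lookup, and the database entries here are invented placeholders.

```python
# Illustrative sketch only: a plain hash-set lookup standing in for the
# on-device matching step. Apple's real system uses a perceptual image hash
# and private set intersection, neither of which is implemented here, and
# the device never learns the match result the way this function does.
import hashlib

# Hypothetical on-device database of known-image fingerprints (placeholders).
KNOWN_CSAM_FINGERPRINTS = {
    "placeholder_fingerprint_1",
    "placeholder_fingerprint_2",
}

def fingerprint(image_bytes: bytes) -> str:
    """Fingerprint an image. A real system would use a perceptual hash that
    survives resizing and re-encoding, not a cryptographic digest."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_database(image_bytes: bytes) -> bool:
    """Check a photo against the on-device database before it is uploaded."""
    return fingerprint(image_bytes) in KNOWN_CSAM_FINGERPRINTS
```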

Apple goes on to say that iCloud accounts will remain private unless they cross a certain threshold of known CSAM content.

Once that threshold is passed, the cryptographic protection on the safety vouchers is lifted, allowing Apple to examine the images in the user’s iCloud account that matched the known CSAM material.

If Apple determines there is a match, it will disable the user’s account and inform the authorities.

‘The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account,’ Apple explained.
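
The practical effect of the threshold is that nothing about an account is readable, or queued for human review, until enough matching safety vouchers have accumulated. The following is a minimal sketch of that policy only; in Apple’s design the match results stay hidden by the cryptography until the limit is crossed, and the threshold value used here is invented for illustration.

```python
# Minimal sketch of the review-threshold policy described above. In Apple's
# design the match results are cryptographically protected, so no voucher can
# be read until enough matches accumulate; this function models only the
# policy, not the cryptography.
MATCH_THRESHOLD = 30  # hypothetical value; the article does not state the real figure

def account_flagged_for_review(num_matching_vouchers: int) -> bool:
    """True once an account holds enough matching safety vouchers that the
    corresponding images become readable and are queued for human review."""
    return num_matching_vouchers >= MATCH_THRESHOLD
```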

The Silicon Valley tech giant is also putting more safeguards in place when it comes to its iMessage app.

New tools will warn children and their parents if sexually explicit photos are sent or received.

‘When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo,’ Apple wrote.

‘As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it.’

As with the photo scanning tool, the feature uses on-device intelligence to determine whether a photo being sent is sexually explicit. Apple says it has been designed so that the company itself cannot see the content of the message.
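
As a rough sketch of the flow Apple describes, the logic below runs entirely on the device; the classifier function is a hypothetical stand-in for Apple’s on-device model, which has not been published.

```python
# Hypothetical sketch of the iMessage child-safety flow described above.
# Everything runs on the device, so message content never reaches Apple.
# `looks_sexually_explicit` is a placeholder for the unpublished on-device model.

def looks_sexually_explicit(photo_bytes: bytes) -> bool:
    """Placeholder for the on-device classifier; always returns False here."""
    return False

def handle_incoming_photo(photo_bytes: bytes, is_child_account: bool,
                          parental_alerts_enabled: bool) -> dict:
    if not (is_child_account and looks_sexually_explicit(photo_bytes)):
        return {"blurred": False, "warned": False, "parent_notified_if_viewed": False}
    return {
        "blurred": True,                  # photo is shown blurred by default
        "warned": True,                   # child is warned and offered resources
        "parent_notified_if_viewed": parental_alerts_enabled,
    }
```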

Jake Moore, former Head of Digital Forensics at Dorset Police and cybersecurity specialist at global security firm ESET, applauded Apple’s move but said it may also have wider implications.

‘Scanning for known indecent images on cloud-based accounts has been a staple tool used by law enforcement for years to locate offenders,’ Moore said.

‘However, Apple has taken this one step further by finding a way to scan devices before such images hit the cloud. The potential worry when technology firms make advances such as this is that it will drive CSAM further underground, but this is likely to catch those at the early stages of delving into indecent material and hopefully bring them to justice before their problem gets out of control.

‘Even if this feature takes time to roll out across the world’s law enforcement agencies, it sends a clear message to those thinking of storing such material on their device.’

He added: ‘Another huge step made by Apple here is their claim to be able to locate known images after they have been slightly edited.

‘Previously it would have been impossible to find a known image if even one pixel was changed, as it affects the hash value dramatically, which is the factor used to find them in the database.’
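
Moore’s point is the difference between a cryptographic hash, which changes completely when a single pixel changes, and a perceptual hash, which is designed to survive small edits. The toy comparison below illustrates that difference using a nine-pixel ‘image’ and a very crude average hash; it bears no resemblance to the hashing Apple actually uses.

```python
# Toy comparison of a cryptographic hash with a crude perceptual (average)
# hash, to illustrate why a one-pixel edit defeats the former but not the latter.
import hashlib

def sha256_hex(pixels):
    """Cryptographic digest of the raw pixel bytes."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is brighter
    than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

original = [10, 200, 35, 180, 90, 220, 15, 170, 60]   # 3x3 grayscale "image"
edited = list(original)
edited[0] += 1                                         # change a single pixel

print(sha256_hex(original) == sha256_hex(edited))      # False: completely different digests
print(average_hash(original) == average_hash(edited))  # True: the small edit survives
```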

At present, Apple is rolling these features out only in the US. It remains to be seen whether they will be extended to iPhones and iPads in the rest of the world at a later date.

Here in the UK, the police’s secure Child Abuse Image Database (CAID) holds about 20 million unique images of child sexual abuse, a figure that is growing by approximately 250,000 every month.

Nine years ago, the Child Exploitation and Online Protection Centre was responsible for coordinating 192 arrests in a year, but now police working in partnership with the National Crime Agency (NCA) are dealing with 850 offenders a month.
