Apple delays child safety feature after criticism

Apple DELAYS the roll-out of its controversial plan to scan iPhones for child abuse images and report ‘flagged’ owners to the police, following a furious backlash regarding customer privacy

  • The tech firm revealed plans to scan US user phones and computers last month
  • It said it would scan devices for child abuse images and report ‘flagged’ owners
  • But now the firm has indefinitely delayed the roll-out following fierce criticism  

Apple has indefinitely delayed the roll-out of controversial child safety features following a furious backlash from its users.

The contentious plans, revealed by the tech giant on August 5, involve scanning iPhones for child abuse images and reporting ‘flagged’ owners to the police. 

It had planned to roll out the features for iPhones, iPads, and Macs with software updates later this year in the US. 

But Apple said on Friday it would take more time to collect feedback and improve the proposed features, after criticism of the system on privacy and other grounds from both inside and outside the company.

However, child protection agencies have expressed their disappointment regarding Apple’s decision today, with one criticising the assumption that ‘child safety is the trojan horse for privacy erosion’.

 Apple has indefinitely delayed its plans for features intended to help protect children from predators

APPLE’S PLANS TO SCAN YOUR PHOTOS 

The new features, which will come with iOS 15, iPadOS 15, watchOS 8 and macOS Monterey later this year, will allow Apple to: 

1. Flag images to the authorities, after they have been manually checked by staff, if they match images already identified as child sexual abuse material by the US National Center for Missing and Exploited Children 

2. Scan images that are sent and received by minors in the Messages app. If nudity is detected, the photo will be automatically blurred and the child will be warned that the photo might contain private body parts 

3. Allow Siri to ‘intervene’ when users try to search topics related to child sexual abuse 

4. Notify parents if a child under the age of 13 sends or receives a suspicious image, provided the child’s device is linked to Family Sharing.  

As of Friday, Apple’s original statement announcing the plans, posted on its website last month, now has a short but important amendment at the top. 

‘Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of child sexual abuse material [CSAM],’ it says. 

‘Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.’ 

Apple plans to automatically scan iPhones and cloud storage for child abuse images and report ‘flagged’ owners to the police after a company employee has looked at their photos.

The new safety tools will also be used to look at photos sent by text message to protect children from ‘sexting’, automatically blurring images Apple’s algorithms detect as sexually explicit. 

The iPhone maker said last month that the detection tools had been designed to protect user privacy and wouldn’t allow the tech giant to see or scan a user’s photo album. 

Instead, the system will look for matches, securely on the device, based on a database of ‘hashes’ – a type of digital fingerprint – of known CSAM images provided by child safety organisations.
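
In very rough terms, and leaving aside the cryptography Apple describes, the on-device check amounts to comparing a fingerprint of each photo against a list of known fingerprints. The sketch below is only an illustration of that idea: it uses an ordinary file hash as a stand-in for Apple’s perceptual ‘NeuralHash’, and the database values are placeholders, not real data.

```python
# Simplified illustration only: Apple's real system uses a perceptual
# 'NeuralHash' and private set intersection, neither of which is
# implemented here. This sketch just shows the basic idea of checking
# an image's fingerprint against a database of known hashes, entirely
# on the device, before upload.

import hashlib

# Hypothetical database of fingerprints of known illegal images,
# supplied to the device as opaque hash values (placeholders here).
KNOWN_CSAM_HASHES = {
    "placeholder-hash-1",
    "placeholder-hash-2",
}

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in fingerprint: an ordinary SHA-256 of the file bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_database(image_bytes: bytes) -> bool:
    """On-device check: does this image's fingerprint appear in the database?"""
    return fingerprint(image_bytes) in KNOWN_CSAM_HASHES
```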

As well as looking for photos on the phone, cloud storage and messages, Apple’s personal assistant Siri will be taught to ‘intervene’ when users try to search topics related to child sexual abuse.     

The new tools were set to be introduced later this year as part of the iOS and iPadOS 15 software update due in the autumn.

They were initially set to be introduced in the US only, but with plans to expand further over time. 

Critics had argued the entire set of tools could be exploited by repressive governments looking to find other material for censorship or arrests.

If and when the system is implemented, it would also be impossible for outside researchers to verify whether Apple was checking only a small set of on-device content.        

Apple’s plans sparked a global backlash from a wide range of rights groups, with employees also criticising the plan internally.

Greg Nojeim of the Center for Democracy and Technology in Washington DC said: ‘Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship.’ 

Using ‘hashes’ or digital fingerprints, images in a CSAM database will be compared to pictures on a user’s iPhone. Any match would then be sent to Apple and, after being reviewed by a human, passed on to the National Center for Missing and Exploited Children

HOW APPLE WILL SCAN YOUR PHONE FOR CHILD ABUSE PHOTOS 

The new image-monitoring feature is part of a series of tools heading to Apple mobile devices. 

Here is how it works:

1. Users’ photos are compared with ‘fingerprints’ from America’s National Center for Missing and Exploited Children (NCMEC), drawn from its database of known child abuse videos and images, which allow the technology to detect, stop and report them to the authorities. 

Those images are translated into ‘hashes’, a type of code that can be ‘matched’ to an image on an Apple device to see if it could be illegal.

2. Before an iPhone or other Apple device uploads an image to iCloud, the ‘device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image’s NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image.’  

3. Apple’s ‘system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content,’ the firm has said. A simplified sketch of this threshold check appears after step 5 below.  

At the same time Apple’s texting app, Messages, will use machine learning to recognise and warn children and their parents when receiving or sending sexually explicit photos, Apple said.

‘When receiving this type of content, the photo will be blurred and the child will be warned,’ Apple said.

‘As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it.’

Similar precautions are triggered if a child tries to send a sexually explicit photo, according to Apple. Personal assistant Siri, meanwhile, will be taught to ‘intervene’ when users try to search topics related to child sexual abuse.

4. Apple says that if the ‘voucher’ threshold is crossed and the images are deemed suspicious, its staff ‘manually reviews all reports made to NCMEC to ensure reporting accuracy’.

Users can ‘file an appeal to have their account reinstated’ if they believe it has been wrongly flagged. 

5. If the image is a child sexual abuse image, NCMEC can report it to the authorities with a view to a prosecution.
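
Putting steps 2 to 4 together, the logic is, in essence, ‘count how many uploaded images matched a known fingerprint, and only act once that count crosses a threshold’. The toy sketch below illustrates that idea only: it does not reproduce Apple’s actual cryptography (private set intersection and threshold secret sharing), and the threshold value used is an illustrative assumption, not a confirmed Apple figure.

```python
# Toy model of the 'safety voucher' and threshold idea described in
# steps 2-4 above. It is NOT Apple's protocol: the real design uses
# NeuralHash, private set intersection and threshold secret sharing so
# that the server learns nothing until the threshold is crossed.

from dataclasses import dataclass

THRESHOLD = 30  # illustrative value, not a confirmed Apple parameter

@dataclass
class SafetyVoucher:
    image_id: str
    matched_known_hash: bool  # result of the on-device comparison

def account_exceeds_threshold(vouchers: list[SafetyVoucher]) -> bool:
    """Server-side check: only once enough vouchers record a match
    would any human review of the flagged images be triggered."""
    match_count = sum(1 for v in vouchers if v.matched_known_hash)
    return match_count >= THRESHOLD

# Example: an account with only a handful of matches stays below the
# threshold, so nothing is surfaced for review.
vouchers = [SafetyVoucher(f"img{i}", matched_known_hash=(i < 3)) for i in range(100)]
print(account_exceeds_threshold(vouchers))  # False
```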

Security researcher Alec Muffett said Apple was ‘defending its own interests, in the name of child protection’ with the plans and ‘walking back privacy to enable 1984’. 

Muffett raised concerns the system will be deployed differently in authoritarian states, asking ‘what will China want [Apple] to block?’ 

Matthew Green, a top cryptography researcher at Johns Hopkins University, also warned that the system could be used to frame innocent people by sending them seemingly innocuous images designed to trigger matches for child pornography. 

That could fool Apple’s algorithm and alert law enforcement. 

‘Researchers have been able to do this pretty easily,’ Green said of the ability to trick such systems.

Other abuses could include government surveillance of dissidents or protesters. ‘What happens when the Chinese government says, “Here is a list of files that we want you to scan for”,’ Green asked. 

‘Does Apple say no? I hope they say no, but their technology won’t say no.’  

‘This will break the dam — governments will demand it from everyone,’ Green said. 

‘The pressure is going to come from the UK, from the US, from India, from China. I’m terrified about what that’s going to look like’, he told WIRED. 

Ross Anderson, professor of security engineering at Cambridge University, branded the plan ‘absolutely appalling’. 

‘It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of our phones and laptops’, he said. 

However, other experts welcomed Apple’s plans. Dr Rachel O’Connell, founder and CEO of verification consultancy Trust Elevate, called Apple’s child protections proposal ‘a scalable solution that does not break encryption’.

‘[It] respects user privacy while at the same time significantly bearing down on certain types of criminal behaviour, in this case terrible crimes which harm children,’ she said. 

‘The idea that child safety is the Trojan horse for privacy erosion is a trope that privacy advocates expound. 

‘This creates a false dichotomy and shifts the focus away from the children and young people at the front line of dealing with adults with a sexual interest in children, who often engage in grooming children and soliciting them to produce child sexual abuse material.’   

Meanwhile, Andy Burrows, the head of child safety online policy at NSPCC, called Apple’s decision ‘an incredibly disappointing delay’. 

‘Apple were on track to roll out really significant technological solutions that would undeniably make a big difference in keeping children safe from abuse online and could have set an industry standard,’ he said. 

‘They sought to adopt a proportionate approach that scanned for child abuse images in a privacy preserving way, and that balanced user safety and privacy. 

Apple previously said: ‘We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM)’ 

‘We hope Apple will consider standing their ground instead of delaying important child protection measures in the face of criticism.’ 

Apple had been playing defence on the plan for weeks, and had already offered a series of explanations and documents to show that the risks of false detections were low. 

Apple boasted that ‘the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year’. 
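
How such a small number could be arrived at is easiest to see with a back-of-the-envelope calculation: if individual false matches are rare, and an account is only flagged after many of them, the chance of an innocent account crossing the line collapses. The figures in the sketch below (per-image error rate, photo count and threshold) are illustrative assumptions, not numbers published by Apple.

```python
# Back-of-the-envelope sketch of why a match threshold makes an
# account-level false flag extremely unlikely. The per-image false
# match rate, photo count and threshold below are illustrative
# assumptions, not figures published by Apple.

import math

p = 1e-6                 # assumed chance a single innocent photo falsely matches
photos_per_year = 10_000 # assumed uploads per account per year
threshold = 30           # assumed number of matches needed before review

# Expected false matches per account per year (Poisson approximation).
lam = p * photos_per_year

# Probability of reaching the threshold: tail of the Poisson distribution,
# computed in log space to avoid underflow of individual terms.
tail = sum(
    math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
    for k in range(threshold, threshold + 50)
)
print(f"Expected false matches per year: {lam}")
print(f"P(account falsely flagged in a year): about {tail:.1e}")
```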

Craig Federighi, Apple’s senior vice president of software engineering, told The Wall Street Journal in August that the AI-driven program will be protected against misuse through ‘multiple levels of auditability’. 

‘We, who consider ourselves absolutely leading on privacy, see what we are doing here as an advancement of the state of the art in privacy, as enabling a more private world,’ Federighi said. 

Key areas for concern about Apple’s new plan to scan iPhones for child abuse images

‘False positives’

The system will look for matches, securely on the device, based on a database of ‘hashes’ – a type of digital fingerprint – of known CSAM images provided by child safety organizations. 

These fingerprints do not look only for identical copies of known child abuse images, because paedophiles would only have to crop an image differently, rotate it or change its colours to avoid detection.

As a result, the technology used to stop child abuse images will be less rigid, making it more likely to flag perfectly innocent files. 

In the worst cases, the police could be called in, disrupting the life or job of a person falsely accused, perhaps simply for having sent a picture of their own child.
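
To see why ‘fuzzy’ fingerprints both survive simple edits and risk catching innocent look-alikes, consider the toy ‘difference hash’ sketched below. It is only a stand-in, far simpler than Apple’s NeuralHash, but it illustrates the trade-off critics point to.

```python
# Minimal 'difference hash' on tiny grayscale grids, as a stand-in for
# perceptual fingerprints such as NeuralHash (which is a neural-network
# hash and far more sophisticated). It shows why a fuzzy fingerprint
# still matches a brightened copy of an image - and why that same
# fuzziness can also match unrelated images that merely look similar.

def dhash(pixels: list[list[int]]) -> list[int]:
    """Hash = sign of each horizontal brightness gradient."""
    return [
        1 if row[x] > row[x + 1] else 0
        for row in pixels
        for x in range(len(row) - 1)
    ]

def hamming(a: list[int], b: list[int]) -> int:
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x5 'image' and a uniformly brightened copy of it.
original   = [[10, 40, 20, 80, 60],
              [90, 30, 70, 20, 50],
              [15, 25, 95, 45, 35],
              [60, 10, 80, 30, 70]]
brightened = [[p + 40 for p in row] for row in original]

# The brightness change leaves every gradient's sign intact, so the
# hashes are identical: the match survives the edit.
print(hamming(dhash(original), dhash(brightened)))  # 0
```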

The program currently only looks at photos and videos, but there are concerns that the tech could be used to scan usually encrypted messages 

Misuse by governments

There are major concerns that the technology could be adapted by an authoritarian government to pursue other alleged offences and infringe human rights.

For example, in countries where homosexuality is illegal, private photos could be used against an individual in court, experts warn.

Expansion into texts

The new Apple system will look only at photos and videos, but there are concerns that the technology could be extended to let companies see normally encrypted messages on services such as iMessage or WhatsApp.  

‘Collision attacks’

There are concerns that somebody could send another person a seemingly innocent photograph, knowing it will get them in trouble.

If the person, or a government, has knowledge of the algorithm or ‘fingerprint’ being used, they could use it to fit someone up for a crime.
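
A toy example of the worry is sketched below, again using a simple difference hash as a stand-in for a real perceptual fingerprint: two images that look nothing alike can share a fingerprint, which is the kind of ‘collision’ an attacker with knowledge of the scheme could try to engineer.

```python
# Toy illustration of the 'collision' concern, using a simple
# difference hash as a stand-in for a real perceptual fingerprint.
# Two very different-looking grids share the same hash because only
# the direction of each brightness gradient is recorded, so a crafted,
# harmless-looking image could in principle collide with a flagged one.

def dhash(pixels: list[list[int]]) -> list[int]:
    """Hash = sign of each horizontal brightness gradient."""
    return [
        1 if row[x] > row[x + 1] else 0
        for row in pixels
        for x in range(len(row) - 1)
    ]

# A dark, low-contrast gradient and a bright, high-contrast one:
# visually unrelated, yet every gradient points the same way.
image_a = [[5, 4, 3, 2, 1]] * 4
image_b = [[250, 180, 120, 60, 10]] * 4

print(dhash(image_a) == dhash(image_b))  # True: a hash 'collision'
```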

Backdoor through privacy laws

If the policy is rolled out worldwide, privacy campaigners fear that the tech giants will soon be allowed unfettered access to files via this backdoor.  
