
Apple Says It Will Implement a System to Check iPhones for Child Sexual Abuse Images

Apple on Thursday said it will implement a system that checks photos on iPhones in the United States before they are uploaded to its iCloud storage service, to ensure the uploads do not match known child sexual abuse images.

Apple said that if enough matching uploads are detected to guard against false positives, a human review would be triggered and the user reported to law enforcement. It says the system is designed to reduce the rate of false positives to one in one trillion.

Apple’s new system is an attempt to meet law enforcement requests to help stop child sexual abuse while respecting the privacy and security practices that are central to the company’s brand. But some privacy advocates said the system could open the door to monitoring political speech or other content on iPhones.

Most other major technology providers, including Alphabet’s Google, Facebook and Microsoft, already check images against a database of known child sexual abuse images.

“With so many people using Apple products, these new security measures could save the lives of children who are lured online and whose horrific images are circulated in child sexual abuse material,” said John Clark, executive director of the National Center for Missing and Exploited Children, in a statement. “The reality is that privacy and child protection can coexist.”

This is how Apple’s system works: law enforcement officials maintain a database of known child sexual abuse images and translate those images into “hashes” – numeric codes that uniquely identify each image but cannot be used to reconstruct it.

Apple implemented this database using a technology called NeuralHash, which is also designed to catch edited images that resemble the originals. The database will be stored on iPhones.
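The key property of such a perceptual hash – unlike a cryptographic hash – is that small edits to an image leave the hash unchanged. NeuralHash itself is proprietary, so the sketch below uses a classic “average hash” purely to illustrate the idea; the toy pixel values and function name are illustrative assumptions, not Apple’s implementation.

```python
# Illustrative sketch of a perceptual "average hash" (NOT Apple's
# NeuralHash, whose details are not public): downscale an image to a
# tiny grayscale grid, then record which cells are brighter than the
# mean. A mild edit (e.g. slight brightening) shifts every pixel and
# the mean by the same amount, so the hash stays identical, whereas a
# cryptographic hash like SHA-256 would change completely.

def average_hash(pixels):
    """pixels: flat list of grayscale values (0-255) from a downscaled image."""
    mean = sum(pixels) / len(pixels)
    bits = ["1" if p > mean else "0" for p in pixels]
    return int("".join(bits), 2)

original = [10, 200, 30, 220, 15, 210, 25, 230, 12]  # toy 3x3 "image"
brightened = [p + 5 for p in original]               # mild edit

assert average_hash(original) == average_hash(brightened)
```

Real perceptual hashes (and NeuralHash in particular) are far more robust, surviving crops, recompression and color changes, but the matching principle is the same: similar images should map to the same code.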

When a user uploads an image to Apple’s iCloud storage service, the iPhone generates a hash of the image to be uploaded and compares it against the database.

Photos stored only on the phone are not checked, according to Apple, and human review before an account is reported to law enforcement is designed to confirm that any matches are genuine before the account is suspended.
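The flow described above – an on-device hash database, per-upload comparison, and a match threshold that must be crossed before human review – can be sketched as follows. The hash values, threshold and function names here are illustrative assumptions, not Apple’s actual values or API; Apple has not published its exact threshold.

```python
# Hedged sketch of the described matching flow. The hashes and the
# threshold are made-up placeholders for illustration only.

KNOWN_HASHES = {0xA1B2, 0xC3D4, 0xE5F6}  # stand-in for the on-device database
MATCH_THRESHOLD = 3                      # illustrative; real value unpublished

def check_upload(upload_hash, match_count):
    """Compare one upload's hash to the database; return the updated
    match count and whether the account should go to human review."""
    if upload_hash in KNOWN_HASHES:
        match_count += 1
    # Review is triggered only after enough matches accumulate,
    # and it happens before any report to law enforcement.
    needs_review = match_count >= MATCH_THRESHOLD
    return match_count, needs_review

count, review = 0, False
for h in [0xA1B2, 0x1111, 0xC3D4, 0xE5F6]:   # one non-matching upload
    count, review = check_upload(h, count)
# after the loop: count == 3 and review is True
```

Requiring several matches before any human looks at an account is what lets Apple claim a very low false-positive rate: a single accidental hash collision is not enough to flag anyone.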

Apple said that users who believe their account has been improperly blocked can appeal to have it restored.

The Financial Times previously reported some aspects of the program.

A distinctive feature of Apple’s system is that it checks photos stored on phones before they are uploaded, rather than checking them after they arrive on the company’s servers.

On Twitter, some privacy and security experts expressed concerns that the system could eventually be expanded to scan phones in general for prohibited content or political speech.

Apple “sent a very clear signal. In their (very influential) opinion, it is safe to create systems that scan users’ phones for prohibited content,” warned Matthew Green, a security researcher at Johns Hopkins University.

“This will break the dam – governments will demand it from everyone.”

Other privacy researchers, such as India McKinney and Erica Portnoy of the Electronic Frontier Foundation, wrote in a blog post that outside researchers may not be able to verify whether Apple keeps its promise to check only a small subset of the content on a device.

The move is “a shocking about-face for users who have relied on the company’s leadership in privacy and security,” they wrote.

“After all, even a carefully documented, elaborate and narrowly constrained backdoor is still a backdoor,” wrote McKinney and Portnoy.
