Apple to Scan Every Device for Child Abuse Content -- But Experts Fear for Privacy

#artificialintelligence

Apple on Thursday said it's introducing new child safety features in iOS, iPadOS, watchOS, and macOS as part of its efforts to limit the spread of Child Sexual Abuse Material (CSAM) in the U.S. To that effect, the iPhone maker said it intends to begin client-side scanning of images on every Apple device for known child abuse content as they are uploaded to iCloud Photos, in addition to using on-device machine learning to vet all iMessage images sent or received by minor accounts (aged under 13) and warn parents of sexually explicit photos shared over the messaging platform. Apple also plans to update Siri and Search to stage an intervention when users try to perform searches for CSAM-related topics, alerting users that "interest in this topic is harmful and problematic." "Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit," Apple noted. "The feature is designed so that Apple does not get access to the messages." The feature, called Communication Safety, is an opt-in setting that must be enabled by parents through Family Sharing. Detection of known CSAM images involves on-device matching against a database of known CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organizations before the photos are uploaded to the cloud.
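
The on-device matching step described above can be pictured, in very rough terms, as looking up each photo's perceptual hash in a local database of known hashes before the upload proceeds. The sketch below is a minimal illustration under that assumption; the type names and the plain set lookup are hypothetical stand-ins, since Apple's actual design uses blinded hashes and cryptographic safety vouchers rather than a readable on-device list.

```swift
import Foundation

// Hypothetical sketch of client-side matching before upload, as described in
// the excerpt above. Names and the plain set lookup are illustrative only;
// Apple's real system blinds the hash database and defers any decision to a
// server-side threshold, which this sketch does not model.

struct PerceptualHash: Hashable {
    let bits: UInt64   // assume a 64-bit perceptual hash for illustration
}

struct KnownHashDatabase {
    private let knownHashes: Set<PerceptualHash>

    init(hashes: [PerceptualHash]) {
        self.knownHashes = Set(hashes)
    }

    // Plain lookup; a production system would use a blinded /
    // private-set-intersection style protocol instead of a readable set.
    func contains(_ hash: PerceptualHash) -> Bool {
        knownHashes.contains(hash)
    }
}

// Called on-device as part of the upload path to iCloud Photos.
func shouldAttachMatchVoucher(imageHash: PerceptualHash,
                              database: KnownHashDatabase) -> Bool {
    database.contains(imageHash)
}
```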


Apple to tune CSAM system to keep one-in-a-trillion false positive deactivation threshold

ZDNet

When Apple announced its plans to tackle child abuse material on its operating systems last week, it said the threshold it set for false-positive account disabling would be one in a trillion per year. Some of the workings of how Apple arrived at that number were revealed in a document [PDF] that provided more detail about the system. The most contentious component of Cupertino's plans was its on-device child sexual abuse material (CSAM) detection system. It will involve Apple devices matching images on the device against a list of known CSAM image hashes provided by the US National Center for Missing and Exploited Children (NCMEC) and other child safety organisations before an image is stored in iCloud. When a reporting threshold is reached, Apple will inspect metadata uploaded alongside the encrypted images in iCloud, and if the company determines it is CSAM, the user's account will be disabled and the content handed to NCMEC in the US.
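
The one-in-a-trillion figure is an account-level rate, so it depends on both the per-image false-match rate and the number of matches required before anything is reported. A rough way to see the relationship, assuming independent per-image errors and illustrative parameter values (neither the independence assumption nor the numbers below come from Apple's published analysis):

```swift
import Foundation

// Back-of-the-envelope sketch: if each image falsely matches independently
// with probability p, the chance that an account with n images crosses a
// reporting threshold of t matches is a binomial tail probability.
// All parameter values below are illustrative, not Apple's published numbers.

func logBinomialCoefficient(_ n: Int, _ k: Int) -> Double {
    lgamma(Double(n) + 1) - lgamma(Double(k) + 1) - lgamma(Double(n - k) + 1)
}

/// Probability that at least `threshold` of `n` images falsely match.
func accountFalsePositiveProbability(images n: Int,
                                     perImageRate p: Double,
                                     threshold: Int) -> Double {
    var tail = 0.0
    for k in threshold...n {
        let logTerm = logBinomialCoefficient(n, k)
            + Double(k) * log(p)
            + Double(n - k) * log(1 - p)
        tail += exp(logTerm)
    }
    return tail
}

// Example: 100,000 photos, a one-in-a-million per-image false-match rate,
// and a threshold of 30 matches before human review.
let accountRate = accountFalsePositiveProbability(images: 100_000,
                                                  perImageRate: 1e-6,
                                                  threshold: 30)
print(accountRate)  // roughly 3e-63 here: far below one in a trillion
```

The point of the exercise is that even a fairly loose per-image rate can be driven to a negligible account-level rate by raising the reporting threshold, which is the lever the document describes Apple tuning.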


Apple's new feature scans for child abuse images

Mashable

Apple is officially taking on child predators with new safety features for iPhone and iPad. One scans for child sexual abuse material (CSAM), which sounds like a good thing. But it has several privacy experts concerned. So, how does it work? The feature, available on iOS 15 and iPadOS 15 later this year, uses a new proprietary technology called NeuralHash to detect known CSAM images.
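
NeuralHash is a perceptual hash: it is designed so that visually similar images (resized, recompressed, lightly edited) hash to the same or nearly the same value, unlike a cryptographic hash, where any change scrambles the output. The excerpt does not spell out how matches are decided, but perceptual hashes are commonly compared by Hamming distance; the sketch below illustrates that general idea with an assumed 64-bit hash and an arbitrary cutoff, not NeuralHash's actual format.

```swift
// Minimal sketch of perceptual-hash comparison in general, not Apple's
// NeuralHash implementation. The 64-bit width and the distance cutoff are
// illustrative assumptions.

func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount   // number of differing bits
}

func isLikelyMatch(_ a: UInt64, _ b: UInt64, maxDistance: Int = 4) -> Bool {
    hammingDistance(a, b) <= maxDistance
}

// Example: a re-encoded copy that differs in two bits still matches,
// while an unrelated image (differing in many bits) does not.
let original: UInt64  = 0xDEAD_BEEF_CAFE_F00D
let reencoded: UInt64 = original ^ 0b0101     // flips two low bits
let unrelated: UInt64 = 0x0123_4567_89AB_CDEF

print(isLikelyMatch(original, reencoded))  // true
print(isLikelyMatch(original, unrelated))  // false
```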


Apple announces new protections for child safety: iMessage features, iCloud Photo scanning, more

#artificialintelligence

Apple is today announcing a trio of new efforts to bring new protections for children to iPhone, iPad, and Mac. These include new communication safety features in Messages, enhanced detection of Child Sexual Abuse Material (CSAM) in iCloud, and updated guidance for Siri and Search. One thing Apple is emphasizing is that its new program is ambitious, but that "protecting children is an important responsibility." With that in mind, Apple says its efforts will "evolve and expand over time." The first announcement today is a new communication safety feature in the Messages app.


Apple Walks a Privacy Tightrope to Spot Child Abuse in iCloud

WIRED

For years, tech companies have struggled between two impulses: the need to encrypt users' data to protect their privacy and the need to detect the worst sorts of abuse on their platforms. Now Apple is debuting a new cryptographic system that seeks to thread that needle, detecting child abuse imagery stored on iCloud without, in theory, introducing new forms of privacy invasion. In doing so, it's also driven a wedge between privacy and cryptography experts who see its work as an innovative new solution and those who see it as a dangerous capitulation to government surveillance. Today Apple introduced a new set of technological measures in iMessage, iCloud, Siri, and Search, all of which the company says are designed to prevent the abuse of children. A new opt-in setting in family iCloud accounts will use machine learning to detect nudity in images sent in iMessage.
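
The opt-in Messages feature, as described across these excerpts, amounts to a purely on-device gate: the check only runs for child accounts whose family organizer has enabled it, the classification happens locally, and the outcome is a warning (with possible parental notification) rather than anything sent to Apple. A minimal sketch of that control flow, with hypothetical type and function names rather than Apple's API:

```swift
import Foundation

// Hedged sketch of the opt-in gating described in these excerpts. The
// settings struct, the classifier call, and the 0.9 score cutoff are all
// hypothetical stand-ins, not Apple's implementation.

struct FamilySharingSettings {
    let communicationSafetyEnabled: Bool   // opt-in, set by a parent
    let accountAgeYears: Int
}

enum IncomingImageAction {
    case deliverNormally
    case blurAndWarn   // warn the child and, per the excerpts, notify parents
}

// Placeholder for an on-device ML model scoring an image as explicit.
func onDeviceExplicitScore(_ imageData: Data) -> Double {
    return 0.0  // real inference would run locally; nothing leaves the device
}

func handleIncomingImage(_ imageData: Data,
                         settings: FamilySharingSettings) -> IncomingImageAction {
    // Applies only when opted in and, per the excerpts, to accounts under 13.
    guard settings.communicationSafetyEnabled, settings.accountAgeYears < 13 else {
        return .deliverNormally
    }
    return onDeviceExplicitScore(imageData) > 0.9 ? .blurAndWarn : .deliverNormally
}
```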