On-device approaches have a contested history, and any deployment must be genuinely privacy-preserving by design. But technology companies must weigh these objections against the harms that continue to occur. A safety-by-design approach is needed.
On-device safety measures have already been demonstrated at scale: Apple performs on-device nudity detection for images sent or received via Messages, AirDrop and FaceTime. And a 2025 study demonstrated high-accuracy grooming detection using a Meta AI model designed specifically for on-device deployment on mobile phones.
Recently, both Apple and Google have begun taking steps towards app store-based age verification in some jurisdictions. The highest-profile real-world deployment of these measures is Apple's device-level, privacy-preserving age verification in the UK.
Social media and private messaging companies, along with operating system vendors (Microsoft, Apple, and Google), all have a role to play in ensuring harmful content is detected, whether or not end-to-end encryption is used. Progress has been slow. But we, as a community, need to demand more from these companies.
Joel Scanlan is an Adjunct Associate Professor at the School of Law and Academic Co-Lead of the CSAM Deterrence Centre at the University of Tasmania. This article first appeared in The Conversation.
