A recent legal defeat for Meta should provide a new impetus for governments to crack down on the scourge of online child abuse. A New Mexico jury determined not only that the company’s platforms have exposed children to predators and sexually explicit material, but that it also misled the public about such risks.
Yet despite this landmark case, the debate about children’s safety online may continue to move from scandal to scandal without arriving at truly effective solutions. Public outrage tends to peak after major court cases or in response to disturbing news stories. Politicians duly speak out, even promising meaningful regulation, only for public attention to fade without any action. And all the while, the technology continues to evolve and bring fresh challenges.
But it does not have to be this way, because we already understand how to prevent abuse online. The problem is not a lack of knowledge. It is that our response has not kept pace with the evidence. A recent systematic review of the academic literature and real-world interventions to tackle abuse, published by the International Panel on the Information Environment (IPIE), came to a stark conclusion: while we already have the tools we need, the global response remains fragmented and incomplete.
For example, digital platforms and investigators rely heavily on systems that scan the web for known illegal images and identify grooming behaviour. AI is increasingly being used to improve these tools and analyse huge volumes of platform data. Yet these tools' effectiveness depends on the data used to train the underlying models, and researchers often lack secure access to the data sets they need.
Moreover, these methods all deal with abuse after the fact. As important as it is to improve the detection of perpetrators, preventing harms before they happen must be the goal. Here, the IPIE identifies serious blind spots.
Digital payment systems, advertising tools, and other widely used platform features can help offenders organise and profit from abuse. Yet policy interventions rarely target these mechanisms. If the financial infrastructure that supports exploitation remains untouched, efforts to combat abuse will always be reactive.
Then there is public awareness. Education campaigns warn minors about grooming risks and explain the harms of illegal content. But education cannot replace responsible platform design or effective enforcement. Children should not be asked to navigate systems that enable their exploitation. The responsibility must remain with the companies that build and operate these platforms.
The major technology companies operate globally, whereas enforcement remains largely national. Without stronger legal frameworks that operate across borders, offenders will exploit regulatory gaps. Some policymakers already have the power to apply laws beyond their territory by requiring companies to report abuse, preserve evidence, and cooperate with investigations. Still, as the IPIE review shows, much more could be done to improve international coordination so that companies and criminals face clearer and more consistent enforcement.
Taken together, the IPIE's findings describe a global response to online child abuse that is both essential and insufficient. Detection tools routinely identify abuse that is already known, law-enforcement agencies pursue offenders where they can, and education programmes try to call attention to persistent risks. But the system still struggles to keep pace with ever-evolving digital technologies. Financial systems that support exploitation remain poorly understood; researchers lack access to the data needed to improve detection; and legal frameworks do not account for the global nature of the problem.
For years, policymakers argued that stronger action required more evidence. But that is no longer a credible excuse. The IPIE has established that the evidence is already there. We know which interventions help reduce harm, and we know where the major gaps lie. What remains unknown is whether political leaders will do what is necessary to keep children safe.
(Andy Phippen is Professor of Digital Rights at Bournemouth University and a visiting professor at the University of Suffolk.)
Project Syndicate