

OpenAI has released a 'Child Safety Blueprint' to combat AI-enabled child sexual exploitation, focusing on legal reforms, improved reporting coordination, and technical safeguards. The framework was developed with input from various stakeholders, including child safety groups and nonprofit organizations.
Aiming to address the rise of AI-enabled child sexual exploitation, OpenAI on Wednesday published a policy blueprint outlining new safety measures the industry can take to help curb the use of AI in creating child sexual abuse material.
In the framework, OpenAI lists legal, operational, and technical measures aimed at strengthening protections against AI-enabled abuse and improving coordination between technology companies and investigators.
“Child sexual exploitation is one of the most urgent challenges of the digital age,” the company wrote. “AI is rapidly changing both how these harms emerge across the industry and how they can be addressed at scale.”
OpenAI said the proposal incorporates feedback from organizations working in child protection and online safety, including the National Center for Missing and Exploited Children and the Attorney General Alliance and its AI task force.
“Generative AI is accelerating the crime of online child sexual exploitation in deeply troubling ways: lowering barriers, increasing scale, and enabling new forms of harm,” Michelle DeLaune, president and CEO of the National Center for Missing & Exploited Children, said in a statement. “But at the same time, the National Center for Missing & Exploited Children is encouraged to see companies like OpenAI reflect on how these tools can be designed more responsibly, with safeguards built in from the start.”
OpenAI said the framework combines legal standards, industry reporting systems, and technical safeguards within AI models. The company said these measures aim to help identify exploitation risks earlier and improve accountability across online platforms.
The blueprint identifies areas for action, including updating laws to address AI-generated or altered child sexual abuse material, improving how online providers report abuse signals and coordinate with investigators, and building safeguards into AI systems designed to prevent misuse.
“No single intervention can address this challenge alone,” the company wrote. “This framework brings together legal, operational, and technical approaches to better identify risks, accelerate responses, and support accountability, while ensuring that enforcement authorities remain strong as technology evolves.”
The blueprint comes as child safety advocates have raised concerns that generative AI systems capable of producing realistic images could be used to create manipulated or synthetic depictions of minors. In February, UNICEF called on world governments to pass laws criminalizing AI-generated child abuse material.
In January, the European Commission launched a formal investigation into whether X, formerly known as Twitter, violated EU digital rules by failing to prevent the platform’s native AI model, Grok, from generating illegal content. Regulators in the United Kingdom and Australia have also opened investigations.
Noting that laws alone will not stop the scourge of AI-generated abuse material, OpenAI said stronger industry standards will be necessary as AI systems become more capable.
“By interrupting exploitation attempts sooner, improving the quality of signals sent to law enforcement, and strengthening accountability across the ecosystem, this framework aims to prevent harm before it happens and help ensure faster protection for children when risks emerge,” OpenAI said.





