
ChatGPT developer OpenAI is calling for world leaders to plan now for a world dominated by advanced artificial intelligence.
In the paper “Industrial Policy for the Intelligence Age: Ideas to Keep People First,” released on Monday, OpenAI argues that rapid advances in AI could reshape economies and may require new approaches to taxation, labor policy, and social protections as society prepares for the possibility of superintelligence.
“No one knows exactly how this transition will unfold,” the company wrote. “At OpenAI, we believe we should navigate it through a democratic process that gives people real power to shape the AI future they want, and prepare for a range of possible outcomes while building the capacity to adapt.”
While OpenAI claims AI could significantly increase productivity and accelerate scientific discovery, it also warns that the technology could disrupt labor markets and concentrate wealth if policies do not adapt. The paper says governments should begin preparing now for possible changes in work, income, and economic growth.
The document outlines several policy ideas, including treating access to AI as a foundational economic resource for “participation in the modern economy, similar to mass efforts to increase global literacy,” modernizing tax systems to account for automation, and creating mechanisms that allow citizens to share in the economic gains produced by AI-driven industries.
“The promise of advanced AI is not just technological progress, but a higher quality of life for all. Everyone should have the opportunity to participate in the new opportunities AI creates,” OpenAI wrote. “Living standards should rise, and people should see material improvements through lower costs, better health and education, and more security and opportunity.”
It also proposes strengthening worker protections and expanding social support if technological change leads to sudden job losses. The paper further calls for oversight tools, including audits of frontier models, incident reporting systems, and “model-containment playbooks” for scenarios in which dangerous AI systems cannot easily be recalled once deployed.
“If AI winds up controlled by, and benefiting only a few, while most people lack agency and access to AI-driven opportunity, we will have failed to deliver on its promise,” the company wrote.
This policy push comes at a difficult time for OpenAI CEO Sam Altman, who is facing fresh scrutiny following an extensive investigation by The New Yorker. The report reveals that in 2023, OpenAI’s co-founder and then-chief scientist, Ilya Sutskever, wrote internal memos accusing Altman of being deceptive about the company’s safety protocols and other key operations.
According to the magazine, these trust issues led the OpenAI board to fire Altman, concluding that he hadn’t been “consistently candid” with them. The firing set off a firestorm at the company, with employees threatening to resign in protest, while powerful investors like Josh Kushner threatened to withhold funding unless Altman was reinstated.
The report underscored the deep internal divisions over governance and safety, with some former insiders—including Sutskever and Anthropic co-founder Dario Amodei—arguing that Altman prioritized growth and product expansion over the company’s original safety-focused mission.
OpenAI did not immediately respond to Decrypt’s request for comment.