

ChatGPT developer OpenAI is calling for world leaders to plan now for a world dominated by advanced artificial intelligence.
In the paper "Industrial Policy for the Intelligence Age: Ideas to Keep People First," released on Monday, OpenAI argues that rapid advances in AI could reshape economies and may require new approaches to taxation, labor policy, and social protections as society prepares for the possibility of superintelligence.
"No one knows exactly how this transition will unfold," the company wrote. "At OpenAI, we believe we should navigate it through a democratic process that gives people real power to shape the AI future they want, and prepare for a range of possible outcomes while building the capacity to adapt."
While OpenAI claims AI could significantly increase productivity and accelerate scientific discovery, it also warns that the technology could disrupt labor markets and concentrate wealth if policies do not adapt. The paper says governments should begin preparing now for possible changes in work, income, and economic growth.
The document outlines several policy ideas, including treating access to AI as a foundational economic resource for "participation in the modern economy, similar to mass efforts to increase global literacy," modernizing tax systems to account for automation, and creating mechanisms that allow citizens to share in the economic gains produced by AI-driven industries.
"The promise of advanced AI is not just technological progress, but a higher quality of life for all. Everyone should have the opportunity to participate in the new opportunities AI creates," OpenAI wrote. "Living standards should rise, and people should see material improvements through lower costs, better health and education, and more security and opportunity."
It also proposes strengthening worker protections and expanding social support if technological change leads to sudden job losses, while calling for oversight tools, including auditing for frontier models, incident reporting systems, and "model-containment playbooks" for scenarios in which dangerous AI systems cannot easily be recalled once deployed.
"If AI winds up controlled by, and benefiting only a few, while most people lack agency and access to AI-driven opportunity, we will have failed to deliver on its promise," the company wrote.
This policy push comes at a difficult time for OpenAI CEO Sam Altman, who is facing fresh scrutiny following an extensive investigation by The New Yorker. The report reveals that in 2023, OpenAI's co-founder and then-chief scientist, Ilya Sutskever, wrote internal memos accusing Altman of being deceptive about the company's safety protocols and other key operations.
According to the magazine, these trust issues led the OpenAI board to fire Altman, concluding that he hadn't been "consistently candid" with them. The firing set off a firestorm inside OpenAI, with employees threatening to leave in protest, while powerful investors like Josh Kushner threatened to withhold funding unless Altman was reinstated.
The report underscored the deep internal divisions over governance and safety, with some former insiders, including Sutskever and Anthropic co-founder Dario Amodei, arguing that Altman prioritized growth and product expansion over the company's original safety-focused mission.
OpenAI did not immediately respond to a request for comment by Decrypt.
OpenAI is urging governments to prepare now for the economic disruption that advanced AI could cause. The company says leaders should rethink taxation, labor policy, social protections, and oversight so societies can adapt to rapid AI-driven change.
OpenAI says advanced AI could boost productivity and scientific discovery, but it could also disrupt labor markets and concentrate wealth. Because of that, it argues governments should plan ahead for possible changes in work, income, and economic growth.
OpenAI proposed broader access to AI, tax systems that better account for automation, and ways for citizens to share in the economic gains from AI. It also suggested stronger worker protections and social support if technology causes sudden job losses.
OpenAI says AI systems should be monitored with tools such as frontier model audits, incident reporting systems, and model-containment playbooks. These measures are meant to help manage dangerous systems that may be hard to recall once they are deployed.
Sam Altman is mentioned because OpenAI's policy push comes amid fresh scrutiny of his leadership. A New Yorker report said internal memos accused him of being deceptive about safety protocols and other operations, which helped trigger his firing by the OpenAI board in 2023 before he was reinstated.





