Sam Altman tells OpenAI staff government decides AI use, not programmers

Sam Altman now says governments, not his own programmers, will decide how AI is used, a marked shift from an era in which engineers expected a say.

Sam Altman has informed OpenAI staff that the power to decide how artificial intelligence functions in the field belongs to the state, not the programmers. During a private internal meeting, Altman clarified that employees lack the standing to dictate how governments use the company’s models once they are integrated into official systems. This shift formalizes a hard boundary between the technical construction of the "safety stack" and the messy, lethal reality of operational decisions.


The Mechanics of the Yield

OpenAI is moving toward a total integration with the Department of Defense (DOD), a move signaled by an arrangement finalized just hours before U.S. and Israeli strikes against Iran. While the company seeks to maintain a grip on the technical framework—the code that prevents "hallucinations" or unintended glitches—it has surrendered the steering wheel.


  • Altman argues that "democracy is messy" but requires commitment, implying that elected or appointed officials, rather than tech workers in San Francisco, must carry the moral burden of use.

  • The Pentagon expects OpenAI to provide input on where models fit, but insists on the right to deploy these tools in all "lawful use cases."

  • This policy effectively ends the era of the "tech veto," where engineers could block their work from being used in war.

A Fracture in the Frontier: OpenAI vs. Anthropic

The industry is split by a deep ideological wedge. While Altman seeks to "de-escalate" tensions with the military, Anthropic CEO Dario Amodei remains in a deadlock with the Pentagon.

| Feature | OpenAI Approach | Anthropic Approach |
| --- | --- | --- |
| Control | Technical input; yields to the state | Strict red lines on surveillance/autonomy |
| Military status | Actively negotiating/collaborating | Tense negotiations; "God-complex" accusations |
| Surveillance | Defers to "lawful use" | Blocks mass surveillance of Americans |
| Philosophy | Pragmatic integration | Guarded containment |

"Amodei wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk." — Emil Michael, Under Secretary of Defense.

The Retreat from Regulation

Altman’s current posture marks a sharp pivot from 2023, when he appeared before Congress pleading for AI regulation. By May 2025, that tone had soured. Altman warned that requiring government approval before releasing software would be "disastrous," suggesting that while the government should control the use of AI, it should not slow down its production.


The signal is clear: the tech industry provides the engine; the State chooses the target.

Deep-Rooted Instability

Behind this outward cooperation with the state lies a history of internal friction over safety. The "OpenAI Files" and the exit of key researchers like Jan Leike suggest a lab at a crossroads, where safety work was sidelined in favor of speed.


  • Altman predicts superintelligence could arrive by 2028, capable of outperforming even him in his role as CEO.

  • He admits that the benefits of AI may not be distributed evenly, suggesting a future of sharp economic divides.

  • Entire job categories, notably in customer service and medical diagnostics, are expected to be wiped out.

The transition from "AI as a tool for humanity" to "AI as an instrument of statecraft" is no longer a theoretical debate. It is the current operating procedure of the world’s most powerful AI laboratory.

Frequently Asked Questions

Q: What did Sam Altman tell OpenAI staff about AI use?
Sam Altman told OpenAI staff that governments, not the programmers, have the final say on how the company's AI models are used, especially when integrated into official systems.
Q: Why is OpenAI changing its stance on AI use decisions?
Altman explained that democracy is complex and requires commitment, meaning elected officials, not tech workers, must bear the responsibility for how AI is deployed, particularly in sensitive areas like military operations.
Q: How does OpenAI's new policy affect its work with the Department of Defense?
OpenAI is moving towards closer integration with the Department of Defense, with the Pentagon expecting to deploy AI tools in all lawful use cases, effectively ending the era where engineers could block military use of their technology.
Q: What is the difference between OpenAI's and Anthropic's approach to AI and the government?
OpenAI is yielding to state control over AI use while focusing on technical input, whereas Anthropic has stricter rules and refuses to allow AI use for mass surveillance or autonomous weapons, leading to tense negotiations with the Pentagon.
Q: Has Sam Altman always supported this approach to AI regulation?
No, Altman's current stance is a shift from 2023 when he asked Congress for AI regulation. He now believes the government should control AI use but not slow down its production, warning that government approval before release would be disastrous.
Q: What are the predicted future impacts of AI according to Sam Altman?
Altman predicts superintelligence could arrive by 2028. He also admits AI benefits might not be shared equally and expects AI to eliminate entire job categories, such as customer service and medical diagnostics.