
Pentagon Blacklists Anthropic, Anoints OpenAI—Same Safety Deal, Different Rules

On February 27, 2026, the Trump administration did something that should terrify everyone paying attention to how AI policy actually gets made: it designated Anthropic a "Supply-Chain Risk to National Security" and ordered all federal agencies to stop using the company's models immediately. The same day, Sam Altman announced that OpenAI had reached an agreement with the Department of Defense to deploy its models on classified networks.

Both companies had negotiated identical safety principles with the Pentagon. Both insisted on the same two red lines: prohibitions on domestic mass surveillance and human responsibility for autonomous weapons systems. The DoD accepted OpenAI's terms. It rejected Anthropic's identical demands.

This wasn't a technical decision. It was a political one. And it reveals something much darker than a corporate rivalry—it shows that AI policy in America is now being decided by political alignment rather than merit, safety, or even coherent principle.

The Setup: Two Companies, One Negotiation

Let's start with what actually happened. According to reporting from CNBC, both Anthropic and OpenAI engaged with the Department of Defense on the same fundamental question: how do you deploy cutting-edge AI models on classified networks while maintaining meaningful safety guardrails?

Both companies came to the same conclusion about what mattered most. Sam Altman put it plainly in a memo to employees: "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

Anthropic's statement echoed the same commitments. Both companies were saying the same thing: yes, we'll work with the Pentagon, but not on mass surveillance, and not on autonomous weapons without a human in the loop.

These aren't radical positions. They're the bare minimum of what responsible AI deployment should look like. Yet somehow, one company got a classified network contract and a massive PR win. The other got blacklisted as a national security threat.

The Blacklist: When Safety Becomes Liability

Defense Secretary Pete Hegseth designated Anthropic a "Supply-Chain Risk to National Security"—a designation typically reserved for companies with ties to hostile foreign governments or compromised supply chains. Not for companies that negotiated too hard on safety principles.

Anthropic's response was measured but devastating: the company said it was "deeply saddened" by the Pentagon's decision and intends to challenge the supply-chain risk designation. Translation: we just got crushed for doing exactly what the government said it wanted us to do.

The timing matters. The Trump administration moved against Anthropic just hours before announcing OpenAI's deal. This wasn't a delayed decision based on new information. It was a coordinated move—one door slammed shut, another opened wide.

The market reacted immediately. US uninstalls of ChatGPT surged 295% after the Pentagon deal announcement, while Claude downloads rose 88% and topped the US App Store charts. Users were voting with their feet, but not for technical reasons. They were responding to a political signal: the government had just told them which AI company to trust.

The Revenue Question: What This Actually Costs

This matters in real dollars. OpenAI hit $25B in annualized revenue by late February, up 17% from $21.4B at year-end 2025. Anthropic had surpassed $19B in run-rate revenue. Both companies are massive, but Anthropic was closing the gap. A Pentagon blacklist doesn't just block government contracts; it sends a signal to enterprise customers, investors, and international partners: this company is radioactive.

For context, OpenAI just launched GPT-5.4 with native computer-use capabilities, and Cursor topped $2B in annualized revenue, doubling in three months with enterprise clients driving 60% of sales. The AI infrastructure space is moving fast, and momentum matters. A supply-chain risk designation doesn't just cost you government revenue—it costs you enterprise deals, partnership opportunities, and investor confidence.

Anthropic will challenge this designation, but the damage is done. The government has signaled that it prefers OpenAI, and in a market where regulatory favor increasingly determines winners and losers, that signal is worth billions.

The Real Story: When Policy Becomes a Weapon

Here's what everyone's missing: this isn't about safety. Both companies negotiated the same safety principles. Both insisted on the same red lines. Both were trying to do responsible AI deployment at scale.

The difference is political alignment. OpenAI has Sam Altman, who has cultivated relationships with Trump's administration. Anthropic has Dario Amodei and Daniela Amodei, who have been more skeptical of government overreach and more vocal about AI safety concerns that sometimes conflict with national security interests.

In a previous era, that wouldn't have mattered. Policy would have been made on the merits. But we're not in that era anymore. We're in an era where the government uses regulatory authority as a tool to pick winners and losers, and AI companies are learning that technical excellence matters less than political favor.

This is how industrial policy works in authoritarian systems. You don't ban a company outright—you designate it a security risk. You don't say "we prefer this company"—you say the other one is a threat. The outcome is the same, but it sounds neutral.

The irony is that Anthropic fought harder than any major AI company for responsible development principles. Constitutional AI, red-teaming, safety-first roadmaps, transparency reports—these weren't marketing moves. They were core to how the company built its models. And it just got blacklisted by a government that claims to care about those exact principles.

What This Means for the Rest of the Industry

If you're building an AI company right now, the lesson is clear: technical merit matters less than political alignment. Build the best model you can, but understand that your regulatory fate depends on whether you're aligned with whoever's in power.

This creates perverse incentives. Companies will optimize for government favor rather than genuine safety. They'll hire the right people, cultivate the right relationships, and make sure their safety principles align with whoever's in charge. The real losers aren't Anthropic or OpenAI—they're both billion-dollar companies. The losers are startups that can't afford to play the political game, and users who will increasingly see AI policy made by fiat rather than principle.

As we covered in our analysis of AI's labor reckoning, the AI industry is entering a new phase where explicit power dynamics matter more than they used to. This Pentagon decision is that dynamic playing out in real time.

The classified network contract is valuable, but it's not the real prize. The real prize is the signal that the government prefers OpenAI. Every enterprise customer will see that signal. Every investor will see it. Every international partner will see it. And they'll all adjust their bets accordingly.

The Question Nobody's Asking

Here's what should keep you up at night: if the government can blacklist a company for negotiating too hard on safety principles, what happens when safety principles conflict with national security interests? What happens when the Pentagon wants surveillance capabilities that both OpenAI and Anthropic refused to build?

Will OpenAI stick to its principles, or will it rationalize away the red lines it just drew? Will the government pressure it to compromise? And if it does, will we even know?

The Pentagon deal is presented as a partnership between equals. But it's not. The government just demonstrated that it can destroy a competitor with a regulatory designation. OpenAI is now operating under that knowledge. Every conversation with the DoD happens with that leverage in the room.

This is the moment where AI competition stops being about who builds better models and becomes about who has better political alignment. That's not good for innovation, not good for safety, and not good for the future of AI development in America.

But it's very good for whoever wins the political game. And right now, that's OpenAI.