The most popular AI coding tool in the world is made by a company the US government just labeled a national security threat.
On February 27, Anthropic — maker of Claude and Claude Code, the tool that 46% of developers now call their favorite — was designated a “supply chain risk to national security” by the Pentagon. Defense contractors were ordered to sever all ties. Federal agencies got six months to rip out Claude from every workflow, every pipeline, every internal tool.
The designation had previously been reserved for foreign adversaries like Huawei. Now it applies to a San Francisco startup valued at $380 billion.
Today, March 9, is also the public comment deadline for NIST’s first-ever AI Agent Security framework. The government is simultaneously trying to regulate autonomous AI agents and punishing the one company that drew hard lines on how those agents should be used.
The irony is not subtle.
What Actually Happened
The story is simple. Anthropic held a $200 million Pentagon contract. It was the first frontier AI company to deploy models on classified government networks. But Anthropic maintained two red lines:
- No mass domestic surveillance
- No fully autonomous lethal weapons without human oversight
The Pentagon wanted “all lawful purposes.” Anthropic said no. Trump posted on Truth Social directing all agencies to “immediately cease” use of Anthropic’s technology. Defense Secretary Hegseth simultaneously labeled them a supply chain risk.
“Threats do not change our position,” Anthropic CEO Dario Amodei said. “We cannot in good conscience accede to their request.”
The next day, Claude hit #1 on the Apple App Store.
The Developer Tools Angle No One Is Talking About
Most coverage has focused on the geopolitics. What’s gotten less attention is the developer tools earthquake this creates.
Claude Code isn’t just popular. It’s dominant. According to the Pragmatic Engineer 2026 survey:
- Claude Code is the #1 AI coding tool, going from zero to market leader in eight months
- 46% of developers name it their most-loved tool (Cursor: 19%, Copilot: 9%)
- 4% of all public GitHub commits are now generated by Claude Code
- Its VS Code extension sees 29 million daily installs
Claude Code alone generates over $2.5 billion in annualized revenue. It went from zero to that in nine months.
Now imagine you’re a defense contractor who built your entire dev workflow around it.
The Contractor Scramble
CNBC reports that at least ten defense tech companies backed by J2 Ventures have already stopped using Claude and are actively replacing it. The GSA terminated its “OneGov” agreement. Treasury, NASA, State, HHS — all dropping Claude.
For engineering teams at these organizations, this means:
- Refactoring internal tooling, pipelines, and prompts to target alternative models
- Retraining developers on different coding assistants
- Losing the muscle memory, custom instructions, and MCP integrations they’ve built
- Absorbing the productivity hit during transition
This isn’t swapping one SaaS tool for another. AI coding tools are deeply embedded in workflows. Your CLAUDE.md files, your MCP server configs, your custom slash commands, your hook scripts — none of that ports to GPT or Copilot. You’re starting over.
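To get a feel for the blast radius, here’s a minimal inventory sketch. It assumes the conventional Claude Code file layout (CLAUDE.md instruction files, a .claude/ directory holding settings, slash commands, and hooks, and .mcp.json for project-scoped MCP servers); the patterns are illustrative, so adjust them to your setup.

```python
#!/usr/bin/env python3
"""Rough inventory of Claude Code-specific artifacts in a repo tree.

Assumes the conventional layout: CLAUDE.md files, a .claude/ directory
(settings, custom slash commands, hook configs), and .mcp.json for
project-scoped MCP servers. Patterns are illustrative defaults.
"""
import sys
from pathlib import Path

VENDOR_PATTERNS = {
    "project instructions": "**/CLAUDE.md",
    "settings / slash commands / hooks": "**/.claude/**/*",
    "project-scoped MCP servers": "**/.mcp.json",
}

def inventory(repo: Path) -> dict[str, list[Path]]:
    """Map each artifact category to the matching files under repo."""
    return {
        label: sorted(p for p in repo.glob(pattern) if p.is_file())
        for label, pattern in VENDOR_PATTERNS.items()
    }

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for label, files in inventory(root).items():
        print(f"{label}: {len(files)} file(s)")
        for f in files:
            print(f"  {f}")
```

Note the asymmetry in what the script finds: anything reachable through .mcp.json ports to any MCP-compatible client, because MCP is an open protocol, while the CLAUDE.md instructions, slash commands, and hook configs are rewrites.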
OpenAI’s “Trust Us”
Within hours of the Anthropic ban, OpenAI announced a Pentagon deal for classified networks.
This raises an obvious question: if the government banned Anthropic for refusing to permit mass surveillance and fully autonomous weapons, how did OpenAI win essentially the same contract without running into the same problem?
The Intercept investigated this yesterday. OpenAI’s answer boils down to: trust us. Sam Altman claims they negotiated “stricter protections around domestic surveillance.” But the details are classified. There are no public red lines. No contractual guarantees anyone can verify.
The contrast is stark:
| Dimension | Anthropic | OpenAI |
|---|---|---|
| Red lines | Public, explicit, non-negotiable | Private, unverifiable |
| Military use stance | Restricted: no autonomous weapons, no mass surveillance | “All lawful purposes” (details classified) |
| Government result | Banned as supply chain risk | Awarded Pentagon contract |
| Consumer result | #1 App Store, record signups | Steady |
| Developer trust signal | Company willing to lose $200M contract over principles | Company willing to accept terms Anthropic refused |
As former OpenAI researcher Todor Markov — who now works at Anthropic — described Altman in a 2025 court filing: “a person of low integrity who had directly lied to employees.”
The Paradox: Banned by Government, Beloved by Developers
Here’s where it gets interesting. The ban didn’t hurt Anthropic commercially. It helped.
- Claude hit #1 on the Apple App Store the day after the ban
- Free user signups jumped 60%+
- ARR surged from $14B to nearly $20B in weeks
- Claude Code business subscriptions quadrupled since January 1
- Anthropic raised a $30B round at $380B valuation on February 12
The market is sending a clear message: developers trust the company that said no to the Pentagon more than the one that said yes.
But there’s a darker thread. Despite the ban, multiple reports confirm the military has continued using Claude for operations against Iran. Hegseth’s own department can’t quit the tool it banned. The ban is performative and punitive, not operational.
Vendor Risk Is Now Political Risk
For developers, the practical lesson is brutal: your AI coding tool can become a political weapon overnight.
This isn’t theoretical anymore. If you’re building on Claude Code — and statistically, many of you are — you now know that a presidential Truth Social post can force your employer to rip it out. Not because the tool stopped working. Not because a better alternative shipped. Because of politics.
This changes the risk calculus for AI tool adoption:
| Risk Factor | Before Feb 27 | After Feb 27 |
|---|---|---|
| Model quality | Primary concern | Still matters |
| Pricing | Primary concern | Still matters |
| API stability | Primary concern | Still matters |
| Political exposure | Not considered | Existential for gov/defense |
| Vendor ethics stance | Nice to have | Can get you banned or rewarded |
| Multi-vendor strategy | Optional optimization | Required risk mitigation |
The NIST Deadline
That NIST’s AI Agent Security RFI closes for public comment today only sharpens the irony. The framework covers exactly the capabilities Anthropic refused to leave unguarded: autonomous agent actions, behavioral hijacking, cascade failures, and agent registration.
NIST is asking industry for “concrete examples, best practices, case studies, and actionable recommendations” for securing autonomous AI agents. Meanwhile, the government just punished the company that provided the most concrete example imaginable: saying “no, this agent should not autonomously decide to kill people.”
The AI Agent Standards Initiative will define how autonomous coding agents, security agents, and operational agents are governed. The Anthropic precedent casts a long shadow over those standards. If having red lines gets you banned, who’s going to implement guardrails?
What Developers Should Do
If you work at a company with any federal exposure:
- Audit your Claude Code dependency now. Know where Claude is embedded in your CI/CD, your code review, your internal tools. The inventory sketch in the contractor section above is a starting point.
- Build model-agnostic abstractions. If your prompts, tools, and workflows are hardcoded to one provider, you’re exposed. MCP helps here: your tool servers work with any MCP-compatible client. A minimal sketch of the abstraction layer follows this list.
- Watch the legal challenge. Anthropic is fighting the designation in court. If they win, the supply chain risk label gets lifted. If they lose, the precedent affects every AI vendor.
- Don’t panic if you’re not in government. Amodei clarified the actual terms are narrower than threatened: “The vast majority of our customers are unaffected. It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War.”
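On the second point, the cheapest insurance is a thin completion interface that application code depends on, with one adapter per vendor. What follows is a minimal sketch under that assumption, not a drop-in implementation: the class and method names are hypothetical, and the adapter bodies are placeholders for whatever Anthropic and OpenAI SDK calls your stack actually makes.

```python
"""A thin provider-agnostic completion layer (sketch, not a library).

Application code depends only on CompletionProvider; each adapter
wraps one vendor SDK. Adapter bodies are placeholders: wire them to
the real anthropic / openai calls you already use.
"""
from typing import Protocol

class CompletionProvider(Protocol):
    def complete(self, system: str, prompt: str) -> str:
        """Return the model's text for a single-turn prompt."""
        ...

class AnthropicProvider:
    def complete(self, system: str, prompt: str) -> str:
        # Placeholder: call the Anthropic SDK here.
        raise NotImplementedError

class OpenAIProvider:
    def complete(self, system: str, prompt: str) -> str:
        # Placeholder: call the OpenAI SDK here.
        raise NotImplementedError

# One config key picks the vendor; nothing else in the codebase imports
# a vendor SDK directly, so a forced migration becomes a config change
# plus one new adapter class.
PROVIDERS: dict[str, type] = {
    "anthropic": AnthropicProvider,
    "openai": OpenAIProvider,
}

def get_provider(name: str) -> CompletionProvider:
    return PROVIDERS[name]()
```

The plumbing itself isn’t the point. The point is that when a Truth Social post forces a migration, you swap one adapter instead of grepping every service that imports a vendor SDK.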
The Bigger Picture
We are watching the AI coding tools market split along political lines in real time. OpenAI has the Pentagon. Anthropic has the developers. DeepSeek V4 is about to drop as an open-source alternative optimized for Chinese hardware, allegedly trained on distilled Claude and GPT outputs.
The question for developers is no longer just “which model writes better code?” It’s “which model’s vendor can survive the political environment?”
Anthropic’s $19B ARR suggests they can. Their consumer surge after the ban suggests developers are voting with their wallets. But the fact that a single presidential social media post can force enterprise tool migration — that’s a new kind of vendor risk that no one’s procurement process was designed to handle.
The #1 AI coding tool just became a case study in what happens when technology meets politics. Every developer choosing tools today is making a political choice whether they realize it or not.
And the government? They’re still using Claude for military operations. They just don’t want to admit it.