News recently broke that the Pentagon had used Anthropic’s Claude AI tool in the operation to capture Venezuela’s Nicolás Maduro. Cue the memes about Secretary Hegseth prompting the AI with something like “hey Claude, go seize Maduro without US casualties, make no mistakes”.
But this was all apparently news to Anthropic itself, which reportedly sought clarification from the Pentagon. The AI firm denies making any such enquiries, but the Pentagon seems to have read the reports as implied disapproval and is now “reviewing its relationship with Anthropic”, adding that “our nation requires that our partners be willing to help our warfighters win in any fight.”
So how’d we get here?
Anthropic, the AI pioneer founded by former OpenAI executives, is barely five years old, but bagged a massive $200M defence contract last July in a deal it highlighted as proof of the firm’s commitment to US national security.
From DC’s perspective, it all reflects Anthropic’s reputation as not only capable, but also controllable — a critical feature for any NatSec client.
But really, Anthropic has had a NatSec role since at least 2024, when Palantir incorporated the Claude AI model into its systems. You might know Palantir through its colourful chair (Peter Thiel), its CEO (Alex Karp), its public filings, or its clients like ICE. Turns out Claude’s role in the Maduro operation was also via this older Palantir arrangement.
Either way, you’ve got Anthropic going deeper in its US national security work at a time when Trump 2.0 is throwing more US weight around, all while Anthropic’s own CEO (Dario Amodei) is getting more publicly antsy about the risks around AI.
So maybe the only surprise here is how long this spat took to blow up.
Where’s the puck now?
While the Maduro operation really flicked on the lights, this spat had been brewing in the shadows for months via stalled contract negotiations over that big $200M deal.
Military planners argue any AI tool they buy should remain fully available for all lawful operations: arms development, intelligence, battlefield ops, and beyond. They don’t want nerds suddenly refusing tasks or imposing their own ethical guardrails — elected lawmakers are the ones who decide what is and isn’t lawful.
Anthropic, however, is insisting on hard limits against autonomous weapons and mass domestic surveillance. Both red lines could be tighter than the law itself: the Foreign Intelligence Surveillance Act, for example, allows eavesdropping on non-citizens abroad, including when they’re contacting Americans, resulting in incidental collection of Americans’ communications.
And Amodei is not budging, which seems to be why the Pentagon has now sensationally started warning it could designate Anthropic a Supply Chain Risk, a label usually reserved for foreign adversaries! In practice, that’d mean Anthropic losing not only lucrative government contracts, but also contracts with other US government contractors.
So what happens next?
Anthropic insists (against all evidence) the talks are still productive, but Amodei’s public stance might’ve now painted his own firm into a corner. And the Pentagon does have other options: OpenAI’s Sam Altman recently said “never say never” when asked about building weapons, while Google rolled back its own bans early last year, and Musk’s xAI never really had these kinds of red lines to begin with. All three already have US NatSec contracts, and will gladly take Anthropic’s market share if Amodei doubles down.
As for the Pentagon, there’s probably a broader precedent risk at play here: they won’t want others imposing guardrails that limit the government’s access to top tech, given the impact this could have on US competitiveness against rivals with no such limits.
Anyway, regardless of how the cards play out, Washington is making sure the message to Silicon Valley is clear: we make the guardrails, you make the technology.
Intrigue’s Take
We’ve seen this kind of spat before, whether it was Google’s 2018 employee revolt over Project Maven or Apple’s 2016 refusal to unlock the San Bernardino terrorist’s iPhone for the FBI.
So what makes this Anthropic fight different?
First, tech has advanced beyond recognition since then. Anthropic didn’t even exist until 2021.
Second, our politics have changed, too: Trump 2.0 acts less constrained, places a bigger premium on loyalty, and has shown way more willingness to intervene in the economy.
And third, US-China competition has changed too, escalating from an initial, semi-transactional trade spat to almost a full spectrum tech war.
But maybe China is where we wrap, because arguably a) it’s China that benefits if the US cuts off its own AI nose with this ‘supply chain risk’ designation; b) it’s really China that comes to mind when Amodei draws the line at AI making us “more like our autocratic adversaries”; and c) it’s also China where Google arguably learned a relevant lesson two decades ago, trading its principles for market share only to end up with neither.
Sound even smarter:
- Anthropic’s Dario Amodei and OpenAI’s Sam Altman are now wrapping up an India visit for Prime Minister Modi’s big AI summit. In a fitting snapshot of their well-known rivalry, the two tech leaders avoided holding hands during the group pic.

