2.5 million people quit ChatGPT because OpenAI took a Pentagon contract

OpenAI agreed to deploy AI on classified Department of Defense networks. Within 72 hours, 2.5 million people signed on to #QuitGPT, ChatGPT uninstalls surged 295%, and Anthropic closed a funding round at $380 billion. I've watched tech boycotts fizzle before. Delete Facebook. Delete Uber. They trend for a weekend and evaporate. This one feels different, and the reason is structural, not emotional.

OpenAI's founding promise vs. the DoD deal

OpenAI was founded in 2015 with a charter that prioritized "benefit to all of humanity." The nonprofit structure was the point. The safety research was the point. When the initial funding came in, the pitch was: we'll build the most powerful AI in the world, and we'll do it responsibly, in the open, for everyone.

That charter has been eroding for years. The pivot to capped-profit in 2019, the closed-source shift with GPT-4, the full for-profit conversion announced last year. Each step had critics, but each step also had defensible business logic. You can't compete with Google on nonprofit margins.

The DoD classified network deal is different. This isn't "we need revenue." This is a company whose founding mission was open, safe AI for humanity deploying systems on networks designed to be secret, supporting operations that are by definition not open. The cognitive dissonance finally exceeded what the user base would tolerate.

Why 295% isn't just a number

ChatGPT has roughly 400 million weekly active users. A 295% spike in uninstalls sounds dramatic, but a percentage jump off an undisclosed baseline tells you little on its own. What matters is the base: at 400 million weekly users, even a sustained 5% churn, 20 million people, would be catastrophic to OpenAI's growth narrative. And the real damage isn't the people leaving. It's the people who stay but stop paying for Plus. It's the enterprise customers quietly evaluating alternatives.
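The back-of-envelope math is worth making explicit. A quick sketch, where the weekly uninstall baseline is a hypothetical number I've picked purely for illustration (only the 400 million WAU figure and the 295% spike come from the reporting above):

```python
# Back-of-envelope churn math. The uninstall baseline is an assumption
# for illustration; only WAU and the 295% spike are from the reporting.
WAU = 400_000_000                # reported weekly active users
baseline_uninstalls = 1_000_000  # HYPOTHETICAL weekly uninstall baseline

# A 295% surge means uninstalls rise to (1 + 2.95)x the baseline.
spike = baseline_uninstalls * (1 + 2.95)

# Compare that to a sustained 5% churn of the whole user base.
churn_5pct = WAU * 0.05

print(f"uninstalls during spike: {spike:,.0f}")      # 3,950,000
print(f"5% sustained churn:      {churn_5pct:,.0f}")  # 20,000,000
```

The point the numbers make: even a 295% spike off a plausible baseline is a one-week blip of a few million, while a quiet, sustained 5% churn is an order of magnitude larger.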

Anthropic is the obvious beneficiary. Their $380 billion valuation makes them worth more than Goldman Sachs. Claude Opus 4.6 leads for agentic coding tasks. Enterprise migration from ChatGPT to Claude was already happening, and #QuitGPT just poured gasoline on it.

Microsoft, OpenAI's largest investor, is hedging. They launched three proprietary "MAI" models this quarter to reduce their OpenAI dependency. When your biggest investor is building escape routes, the signal is clear.

The part where I'm not sure what to think

I think the boycott is right on the merits. If you built your reputation on "AI safety for humanity" and then signed a classified military contract, you earned the backlash.

But the outrage is selective. Google had Project Maven and quietly resumed military AI work. Amazon's AWS hosts classified government workloads. Palantir's entire business model is government intelligence. The tech industry's relationship with the defense establishment is deep and old. OpenAI just made the mistake of doing it while still pretending to be the good guys.

The harder question: is it even possible to build frontier AI without government money? The compute costs are staggering. OpenAI raised $122 billion in Q1 alone. That money comes with strings. Maybe the real lesson isn't that OpenAI sold out. Maybe it's that building AI at this scale was never compatible with the nonprofit idealism they started with.

What actually changes

The consumer AI market is fragmenting. ChatGPT's dominance was never guaranteed, and #QuitGPT accelerates the shift toward a multi-model world. Claude, Gemini, and open-weight models like Gemma 4 and Llama 4 all benefit from every user who reconsiders their default.

The "safety" brand is now a competitive weapon. Anthropic's positioning as the responsible AI company went from marketing copy to market share. Their $380B valuation proves investors believe safety sells.

And enterprise buyers are voting with their feet. Customers evaluating Claude aren't following a hashtag. They're reading EU AI Act compliance requirements and noticing Anthropic's documentation is further along. They're watching OpenAI's trust deficit widen in real time.

I uninstalled ChatGPT about six months ago, for reasons that had nothing to do with the Pentagon. The product just wasn't as good anymore. But watching 2.5 million people reach the same conclusion in a single week, over values rather than features, is new. Whether it lasts longer than a news cycle is the real question. History says it won't. The scale of this one says maybe it will.

