Karen Hao Warns of AI’s “Empire Mentality” and the High Cost of the AGI Race

Last Updated: September 15, 2025

At the heart of every empire lies an ideology — a belief system powerful enough to justify expansion, often at odds with its declared purpose. For European colonial powers, it was Christianity and the promise of salvation amid resource extraction. Today, according to journalist and bestselling author Karen Hao, the driving ideology is artificial general intelligence (AGI) — marketed as a tool “for the benefit of all humanity.” And OpenAI, she says, has become the chief evangelist of this new empire.

Speaking on TechCrunch’s Equity podcast, Hao reflected on her book Empire of AI, which examines the rise of AI companies through the lens of power, ideology, and consequence. She likens OpenAI’s global influence to that of an empire with immense economic and political sway.

“The only way to understand the scope of OpenAI’s behavior is to see that it has already grown more powerful than most nation-states,” Hao explained. “They’re rewiring our geopolitics and reshaping lives worldwide. That’s empire-level power.”

OpenAI defines AGI as a highly autonomous system capable of outperforming humans in most economically valuable work. The company claims this will “elevate humanity” by boosting abundance, driving economic growth, and unlocking new scientific frontiers. Yet these promises remain largely aspirational, while the industry’s relentless pursuit of AGI continues to demand enormous resources, exploit massive datasets, and strain global energy grids.

Hao argues that this trajectory was not inevitable. AI progress, she said, could have advanced by refining algorithms to use less data and computational power. But OpenAI’s framing of AGI as a “winner-takes-all” race made speed the ultimate priority.

“Speed over efficiency, speed over safety, speed over exploration — that became the mantra,” Hao noted. According to her, this has led companies to pour unprecedented resources into scaling existing techniques, rather than innovating new ones.

The consequences of this strategy ripple across the industry. With top researchers absorbed into corporate labs, the discipline itself has been reshaped around the agendas of private companies. And the financial scale is staggering. OpenAI has projected it will burn through $115 billion in cash by 2029. Meta expects to spend $72 billion on AI infrastructure this year, while Google projects $85 billion in capital expenditures for 2025, most of it dedicated to AI and cloud expansion.

Despite these astronomical costs, the promised “benefits to humanity” remain elusive. Instead, harms are piling up: job displacement, widening wealth concentration, and mental health risks linked to untested AI systems. Hao’s book documents low-wage workers in countries like Kenya and Venezuela who endured exposure to disturbing content while labeling data and moderating toxic material — often for as little as $1 to $2 per hour.

Hao contrasts these harms with AI projects like DeepMind’s AlphaFold, which predicts 3D protein structures from amino acid sequences and has already transformed drug discovery. “That’s the kind of AI we need,” she said. “It doesn’t create mental health crises, environmental damage, or exploitative labor practices. It shows that useful AI doesn’t have to come at such a destructive cost.”

The AGI race has also been framed as a geopolitical contest with China, but Hao believes this narrative has backfired. “Rather than liberalizing the world, Silicon Valley has had an illiberalizing effect,” she argued. “The U.S.-China gap has only narrowed — and the biggest winner has been Silicon Valley itself.”

OpenAI’s dual structure — part nonprofit, part for-profit — further complicates questions about accountability. With its deepening ties to Microsoft and the prospect of going public, concerns are growing that the company’s commitment to “benefiting humanity” is being overshadowed by commercial ambition.

Two former OpenAI safety researchers told TechCrunch they fear the lab increasingly confuses popularity with impact, pointing to ChatGPT’s success as evidence that OpenAI equates product adoption with fulfilling its mission. Hao shares this concern, warning of the dangers of ideology overriding reality.

“Even as evidence grows that their systems are harming people, the mission is used to paper over it all,” she said. “There’s something dark about being so consumed by a belief system that you lose touch with reality.”

Source: TechCrunch
