Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI is the landmark 2025 exposé by award-winning journalist Karen Hao, published by Penguin Press on May 20, 2025.
It is more than a biography—it’s a meticulous, deeply human dissection of the rise of OpenAI and its charismatic yet controversial CEO, Sam Altman. Hao, a seasoned reporter formerly of MIT Technology Review, The Wall Street Journal, and The Atlantic, brings seven years of on-the-ground reporting and over 300 interviews to build a multi-dimensional picture of the AI empire now shaping our global future.
This book sits at the intersection of investigative journalism, tech biography, and sociopolitical critique. It explores how OpenAI’s mission to “benefit humanity” became entangled with secrecy, ambition, and unprecedented influence.
The narrative traverses the fault lines of idealism versus capitalism, ambition versus accountability, and technological utopianism versus global equity. While the book is grounded in the specifics of OpenAI’s culture and governance, it aims to interrogate larger questions about power, control, and the mythos of Artificial General Intelligence (AGI).
Karen Hao is uniquely qualified for this inquiry. She was one of the first journalists to gain deep access to OpenAI, documenting its internal culture and ambitions years before ChatGPT’s explosive debut in late 2022.
As someone who traveled to five continents to document the human cost of AI—data annotators in Kenya, resource-hungry data centers in Chile, and exploited labor networks in the Philippines—Hao broadens the scope of the narrative from corporate boardrooms to marginalized communities worldwide.
At its core, “Empire of AI” argues that OpenAI—under the leadership of Sam Altman—has evolved into a modern-day empire. Like the colonial empires of the 1800s, OpenAI extracts resources (data, labor, water, power) from the global south and underprivileged communities to fuel a vision of AGI that primarily benefits the elite. Hao writes:
“The empires of AI are not engaged in the same overt violence and brutality that marked this history. But they, too, seize and extract precious resources… for spinning into lucrative AI technologies.”
The book’s central thesis is chillingly clear: OpenAI has become a technocratic force operating with the power of a state, the wealth of a conglomerate, and the religious fervor of a cult. And the rest of the world? Merely the stage upon which this empire is built.
Background
To understand the rise of OpenAI as depicted in Karen Hao’s Empire of AI, we must rewind to the roots of artificial intelligence itself, the techno-optimism of Silicon Valley, and the personal ambitions of Sam Altman and Elon Musk. The book frames this journey not as a mere evolution of a company, but as a clash of ideologies that gave birth to an AI empire disguised as a nonprofit.
The Philosophical Foundations
At the heart of OpenAI’s creation was a bold ambition—one that would make even science fiction blush. The goal wasn’t just to build smarter tools but to create Artificial General Intelligence (AGI)—a machine intelligence that could outperform humans at virtually any cognitive task.
The idea, rooted in the writings of thinkers like Nick Bostrom, took hold among a cabal of Silicon Valley elites who believed that humanity stood on the brink of its most profound transformation.
AGI wasn’t just a technological aspiration; it was a moral mission. As Hao notes, OpenAI marketed itself as a safeguard for humanity:
“From the beginning, OpenAI had presented itself as a bold experiment… to develop artificial general intelligence… not for the financial gains of shareholders but for the benefit of humanity.”
But that mission, according to Hao, was ultimately “more branding than belief.” The dream of AGI became a justification for unprecedented scale, secrecy, and power consolidation.
The Musk-Altman Nexus: From Heroes to Rivals
The founding dinner that birthed OpenAI is almost cinematic. In the summer of 2015, Elon Musk and Sam Altman gathered a small group of thinkers and engineers at the Rosewood Hotel on Sand Hill Road—Silicon Valley’s power corridor. The mission? To counter Google’s dominance in AI by creating a nonprofit AGI lab that would serve humanity.
Musk, deeply paranoid about the existential risks of AI, saw OpenAI as a necessary counterbalance to DeepMind, the UK-based lab Google had acquired. He feared what would happen if Google’s profit-driven motives guided the trajectory of superintelligence.
“The future of AI,” Musk said during a frantic Skype call, “should not be controlled by Larry [Page].”
Sam Altman, then the 30-year-old president of Y Combinator, was already building a reputation as Silicon Valley’s next kingmaker. Musk viewed him as a fellow visionary. Altman admired Musk’s intensity. But that alliance wouldn’t last.
Soon after founding OpenAI, Musk tried to take control of the organization. When Altman refused, Musk left—and so did his money. What began as a cooperative idealistic venture quickly devolved into a power struggle.
The “Capped Profit” Pivot and Microsoft’s Infiltration
The most pivotal shift in OpenAI’s history, as exposed in Empire of AI, came in 2019 when the organization broke with its original nonprofit mission. Under Altman’s leadership, OpenAI created a “capped profit” structure—an exotic legal construct that allowed it to accept billions in private investment while maintaining a nonprofit parent company.
In reality, it opened the floodgates to Microsoft. With a $1 billion investment (later ballooning to $10 billion), Microsoft gained exclusive rights to OpenAI’s models. This raised a red flag for anyone who believed in the original mission of transparency and openness.
Hao doesn’t mince words:
“OpenAI became everything that it said it would not be. It turned into a nonprofit in name only.”
The partnership with Microsoft wasn’t just a financial pivot—it marked the beginning of what Hao calls the “commodification of cognition.” From this point forward, OpenAI’s research, once open-source and collaborative, became cloaked in corporate secrecy.
From Open Source to Open Secrets
One of the central betrayals that Empire of AI explores is how OpenAI abandoned its promise of openness. Initially, it had vowed to share its research with the world and even pledged to stop developing AGI if another project surpassed them. That promise now rings hollow.
By 2023, OpenAI’s models were black boxes. Training data, compute resources, model architecture—all became proprietary. Hao frames this as more than a technical decision; it was a political one.
“Gone were notions of transparency and democracy, of self-sacrifice and collaboration. OpenAI executives had a singular obsession: to be the first to reach artificial general intelligence.”
This obsession, she argues, created a culture where internal dissent was punished, ethical concerns were sidelined, and the public was fed an illusion of benevolent innovation.
The Global Context: AI Colonialism
Perhaps the most powerful contribution Hao makes is her framing of OpenAI within the larger context of what she calls “AI colonialism.” Just as European empires once extracted labor, land, and wealth from the Global South, OpenAI and its competitors now extract data, electricity, and human labor under the guise of innovation.
From underpaid data annotators in Kenya to massive data centers draining water in drought-prone areas, the costs of this AI boom are not evenly distributed.
“Rarely have they seen any ‘trickle-down’ gains of this so-called technological revolution; the benefits of generative AI mostly accrue upward.”
This isn’t just a story of Silicon Valley ambition. It’s a global reckoning with who pays—and who profits—in the age of intelligent machines.
Empire of AI Summary
Part I: “The Divine Right to Scale”
The first part of Empire of AI sets the ideological and historical foundation for OpenAI, framing its journey not as a conventional startup story but as the formation of a modern empire, powered by ambition, myth, and a new kind of divine mandate—the belief in Artificial General Intelligence (AGI) as salvation.
1. The Myth of Benevolence
Karen Hao opens by dissecting OpenAI’s 2015 founding, where Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman pitched a nonprofit lab to counterbalance Google’s dominance in AI. It promised to build AGI safely and “benefit all of humanity.”
But Hao reveals this mission had a messianic undertone—a quasi-religious belief in AGI as a godlike force.
“They sought to build god, and they wanted to control it.”
This ideology wasn’t just metaphorical—it shaped hiring practices, board design, and even research direction.
2. The Elite Roots
The early OpenAI team was not diverse or global—it was overwhelmingly white, male, elite, and Silicon Valley-rooted. Despite the “open” name, OpenAI published selectively, withheld key code, and built secrecy into its DNA early on.
Hao shows how power and control were always more than side effects—they were design principles.
3. The Financial Contradiction
The first major contradiction appears in this part: to fulfill its mission, OpenAI needed money—lots of it. But being a nonprofit with no equity made it hard to attract billions in funding.
Altman’s solution: a radical dual structure. OpenAI would remain a nonprofit “governance layer,” but create a for-profit subsidiary (OpenAI LP) in 2019 with “capped returns” for investors. Microsoft’s $1 billion investment followed.
“From that moment, OpenAI’s spiritual aspirations merged with financial ones.”
The rest of Part I documents how this structure began a gradual mission drift, as profit incentives started to outweigh principles.
Part II: “Ascension to Disaster Capitalism”
If Part I is the ideological build-up, Part II is the expansion and erosion. It charts OpenAI’s global rise after GPT-2 and GPT-3, and the labor, political, and environmental costs that powered it.
1. The GPT Boom
GPT-2’s release in 2019—initially withheld for fear of misuse—marked OpenAI’s coming out. GPT-3 in 2020, with 175 billion parameters, shocked the world. Companies, researchers, and the public began seeing OpenAI as the tip of the AGI spear.
Sam Altman became a global figurehead. Interviews, think tanks, and World Economic Forum speeches made him the high priest of AGI ideology.
Hao notes:
“He was no longer just a CEO. He was a prophet of the future.”
2. The Real Cost of AI
Behind the language models was an invisible empire of:
- Low-wage data laborers in Kenya labeling toxic content to make ChatGPT safe.
- Water-hungry data centers in Chile and Arizona causing environmental distress.
- Global South digital extractivism, where data, labor, and energy were sourced with little regulation or benefit-sharing.
This was AI colonialism in action, Hao argues. Just like the British Empire extracted cotton and spices, OpenAI extracted labor and data.
“Their mission claimed to benefit all humanity. But the benefits flowed one way—upward.”
3. The Microsoft Integration
In 2023, Microsoft embedded OpenAI’s models into Bing and Office. The $10 billion investment gave Microsoft nearly exclusive rights to OpenAI’s commercial products, despite OpenAI’s nonprofit identity.
Hao shows that this effectively turned OpenAI into a “shadow arm of Microsoft’s AI strategy.”
Even worse, OpenAI laid off safety researchers while doubling down on deployment. Internal dissent grew, and safety teams were marginalized.
4. Altman’s Power Consolidation
Altman’s control became increasingly absolute:
- He chaired the board.
- Made all key appointments.
- Controlled fundraising and PR.
The governance structure meant no external shareholders could hold him accountable, and the nonprofit board had no real power to stop profit-driven moves.
Altman often bypassed them with charm, pressure, or political triangulation.
Part III: “Gods and Demons” to “Deliverance”
In this pivotal section, Hao moves from structural critique to human drama—highlighting the psychological, spiritual, and interpersonal breakdowns at the heart of OpenAI’s implosion.
1. AGI as Religion—Not Metaphor
OpenAI’s founders, particularly Ilya Sutskever and Sam Altman, didn’t just believe in AGI. They worshipped it.
- Sutskever reportedly believed AGI was inevitable and potentially conscious.
- Altman began referring to AGI in metaphysical terms: a power to save or destroy humanity.
Hao writes: “It was no longer just research. It was prophecy.”
This belief led to bizarre consequences. Safety researchers were marginalized for raising concerns. Some executives began making key decisions based on faith in the model’s destiny, not hard data.
2. Safety vs Speed—The Great Schism
Tensions reached a breaking point between two camps:
- Accelerationists (Altman, Brockman): Push AGI forward as fast as possible.
- Safety Advocates (Sutskever, Jan Leike, Helen Toner): Pause and think about long-term consequences.
A series of internal resignations, clashes over secrecy, and censorship of dissenting views created a “toxic split.”
“The gods demanded loyalty. The demons warned of consequences.”
3. The November Coup
On November 17, 2023, the board (led by Sutskever and Toner) fired Sam Altman, citing loss of trust and fears over AGI safety.
The backlash was immediate:
- Microsoft backed Altman.
- Over 700 of 770 OpenAI employees threatened to quit.
- Investors panicked.
- Media painted the board as reckless.
Within five days, Altman was reinstated. The board was ousted. Sutskever disappeared from public life.
This moment, Hao argues, was the collapse of OpenAI’s experiment in nonprofit governance.
“A coup attempted to save humanity from AGI. The empire struck back.”
4. Deliverance or Surrender?
Post-coup, Altman had more power than ever:
- A new, more compliant board.
- Full backing of Microsoft.
- A team scared into silence.
AGI development resumed at full speed. No new safety guardrails were added.
“It was a deliverance for Altman. It was a surrender for everyone else.”
Part IV: “The Gambit” to “A Formula for Empire”
In this closing section, Hao steps back and synthesizes her core thesis: OpenAI is not just a company. It is a blueprint for a new kind of empire—built not on territory, but on compute, belief, and labor.
1. The New Colonialism
OpenAI’s model is extractive, centralized, and unchecked:
- Labor: Outsourced to the Global South (data labeling, ethics testing).
- Resources: Massive carbon and water usage in developing countries.
- Data: Scraped from global sources with little consent or compensation.
Just like European empires once extracted spices, sugar, and rubber—OpenAI extracts language, labor, and logic.
2. The Ideological Engine
The OpenAI brand runs on hope and fear:
- Hope in AGI solving cancer, climate change, and war.
- Fear of AGI killing us all—unless OpenAI builds it first.
This ideology justifies everything:
- Closed-source models
- Profit-first partnerships
- Silencing critics
- Dismissing risk
Altman frames AGI as inevitable. Hao argues: Nothing about AGI is inevitable. It’s a political choice.
3. The Real-World Stakes
Hao travels to:
- Kenya, where data workers suffer PTSD from filtering violent content.
- Chile, where OpenAI’s Azure partner is draining water from farms.
- The EU, where regulators are racing to catch up with language models.
She documents a pattern: The benefits of AI go to the center; the costs go to the margins.
“Empire is not a metaphor. It’s a map.”
4. A Formula for Empire
The book concludes with a chilling formula:
Faith + Compute + Labor + Secrecy = Empire
OpenAI is no longer just an AI lab. It is a sovereign-like force shaping:
- How we work
- What we learn
- What we believe
Without accountability, it risks becoming a monopoly over truth itself.
Final Recap: The Full Arc of Empire of AI
| Part | Focus | Main Argument |
|---|---|---|
| Part I: The Divine Right to Scale | Ideology & origins | OpenAI began as a spiritual and technical mission |
| Part II: Ascension to Disaster Capitalism | Expansion & contradiction | Profit overtook principles; exploitation began |
| Part III: Gods and Demons to Deliverance | Internal collapse | Governance failed; Altman consolidated power |
| Part IV: The Gambit to Empire | Global impact | OpenAI mirrors colonial empires in structure and effect |
Critical Analysis
Karen Hao’s Empire of AI stands apart not only for its reporting but for its clear moral courage. It is rare to find a tech book that combines the intellectual rigor of a historian, the curiosity of a journalist, and the emotional intelligence of a citizen bearing witness. In this section, we’ll analyze the book’s content, structure, evidence, accessibility, and relevance—with a special focus on its implications for AI governance and global equity.
Evaluation of Content
✅ Evidence and Reasoning
One of the book’s greatest strengths lies in the sheer volume and quality of its sources. Hao draws from:
- 300+ interviews, including over 90 current/former OpenAI staff.
- Internal documents, Slack messages, emails, legal filings, and board meeting minutes.
- Field reporting across five continents—interviewing marginalized data workers, politicians, and tech stakeholders worldwide.
She rigorously cross-references accounts:
“Every scene, every number, every name… is corroborated by at least two people,” Hao notes in the Author’s Note.
This methodological transparency strengthens the book’s credibility, especially in light of OpenAI CEO Sam Altman’s public criticism of the book. When corporate leaders attack a journalist’s integrity without addressing specific factual errors, it tends to underscore the book’s potency rather than undermine it.
Fulfilling Its Purpose?
Unquestionably. The book’s central thesis—that OpenAI has become a neo-colonial empire exploiting the language of progress to consolidate wealth and power—is supported by extensive reporting and historical analogy. Hao draws a provocative comparison:
“The empires of AI are not engaged in the same overt violence and brutality… but they, too, seize and extract.”
She uses the East India Company and European colonialism not as rhetorical flair but as structural analogies. This allows her to show how today’s data empires rely on similar patterns: extractive labor, commodification of human experience, and top-down control.
Style and Accessibility
Hao’s writing is lucid, propulsive, and often lyrical. Her background in both data science and global reporting helps bridge two difficult terrains: the technical and the human.
Compare these two lines:
- From a Stanford AI paper: “GPT-4 exhibits emergent properties under scaling laws.”
- From Hao’s summary of the same reality: “GPT-4 is fifteen thousand times larger than GPT-1… consuming labor, data, and water at an unprecedented rate.”
This humanizing of scale, cost, and impact is what makes Empire of AI readable for both general audiences and policymakers.
Her use of literary devices—parallels, repetition, symbolism—is also notable. For instance, she frames Sutskever’s Slack message (“Visualize the size of the cluster in your mind’s eye”) not as a motivational line but as emblematic of OpenAI’s descent into mysticism over clarity.
- Tone: Calm but forceful
- Readability: High school and up
- Pace: Fast, with moments of stillness to reflect on human cost
- Use of Transitions: Excellent; every chapter flows into the next with narrative tension and thematic cohesion
Themes and Relevance
1. Power and Ideology
OpenAI’s founders promised self-sacrifice in the name of humanity. But when stakes rose, those ideals fell:
“What was once unprecedented has become the norm.”
Hao critiques the messianic belief that AGI will save the world—used as a shield against criticism, dissent, and democratic oversight.
2. Colonialism Reimagined
The analogy of AI as empire is more than metaphor. Hao argues that OpenAI and its competitors extract from the global periphery to serve elite, central nodes of power—like Silicon Valley and Wall Street:
“The benefits of generative AI mostly accrue upward.”
The scale of OpenAI’s growth mirrors imperial expansion—through centralized control, exploitation of external resources, and language that masks harm as help.
3. Ethics in the Age of AGI
What’s “ethical” when the technology is built on labor exploitation? Hao forces readers to confront difficult truths:
- Labelers in Kenya earned less than $2/hour to moderate toxic content for ChatGPT.
- AI development is draining water from rural communities to cool data centers.
- Artists are being replaced by AI models trained on their own work—without consent or compensation.
4. Governance Breakdown
The Altman firing saga serves as a microcosm of AI governance failure:
“Such a consequential decision was made behind closed doors… even OpenAI’s own employees were in the dark.”
The board had the legal power to remove Altman, but not the political capital. When investors and Microsoft intervened, democracy collapsed. The nonprofit board resigned, and Altman returned stronger than ever.
Author’s Authority
Karen Hao is arguably one of the most qualified journalists on this topic:
- Former AI reporter at MIT Technology Review, The Wall Street Journal, and The Atlantic.
- First journalist with deep internal access to OpenAI (since 2019).
- Only reporter to combine insider interviews with grassroots global reporting on AI’s real-world impact.
Her credentials matter because they give her authority—but her humility and transparency (e.g., noting where she paraphrases vs. quotes, where evidence is indirect) give the book its integrity.
Even where Hao speculates, she does so responsibly. She marks assumptions, differentiates between direct quotes and reconstructed dialogue, and cross-checks her data.
Intellectual Contribution
Hao’s greatest contribution is not exposing Sam Altman’s ambition—it’s in offering a new frame through which to understand AI: not as inevitable, not as neutral, but as political.
She reframes AGI as ideology. She reframes progress as a commodity. And she reframes OpenAI as a global power broker—not unlike colonial empires or monopolies of the Gilded Age.
Summary of Critical Assessment
| Criteria | Assessment |
|---|---|
| Evidence | ✅ Meticulously sourced, cross-verified |
| Writing Style | ✅ Human, clear, even poetic at times |
| Structure | ✅ Cohesive and well-paced |
| Themes | ✅ Timely, global, urgent |
| Accessibility | ✅ Suitable for general readers, students, and policymakers |
| Innovation | ✅ Reframes AI history with originality |
Strengths and Weaknesses
Every significant work, even the most masterfully crafted, comes with both triumphs and tensions. Karen Hao’s Empire of AI is no exception. While it delivers a revelatory, deeply researched, and emotionally potent narrative, it also raises some challenges—particularly in the space between access and interpretation, and idealism and urgency.
This section highlights the book’s major strengths and weaknesses, helping readers critically engage with its impact and limitations.
✅ Strengths
1. Unprecedented Scope and Depth
Few books on artificial intelligence have ever attempted what Empire of AI does—and even fewer succeed. Hao not only maps out the internal workings of OpenAI but stretches her lens across continents, making visible the global costs of AI. The book is not just an insider tech exposé—it’s a geopolitical and moral reckoning.
“In Arizona and Chile, I met with local politicians and activists worried about… data centers guzzling their homes’ precious water resources.”
This broad investigative reach gives the book unmatched credibility and depth.
2. Human Voices and Personalization
The real power of this book lies in its human stories. From underpaid data annotators in Kenya to burned-out engineers at OpenAI, Hao constantly centers real people—not just executives and algorithms.
These aren’t statistics. These are lives:
“At the thought of losing all of their equity, a person at the party began to cry.”
This narrative choice elevates the book emotionally and ethically, bridging the gap between complex technology and its lived impacts.
3. Systemic Framing: AI as Empire
One of the boldest and most effective intellectual contributions of the book is its structural framing: the idea that today’s AI companies, especially OpenAI, mirror the expansionist logic of colonial empires.
Hao does not draw this parallel lightly or metaphorically. She supports it with detailed analysis, historical comparison, and a taxonomy of behavior:
- Resource extraction
- Ideological superiority
- Suppression of dissent
- Elite consolidation
“In the simplest terms, empires amassed extraordinary riches… through imposing a colonial world order, at great expense to everyone else.”
This gives the book a larger purpose beyond tech criticism: it becomes a theory of modern power.
4. Writing That’s Personal, Not Preachy
Karen Hao writes like someone who has seen too much to be neutral but still respects her readers enough to let them decide. She uses first-person reflection, field notes, and moments of introspection without centering herself. Her voice is intelligent, humble, and occasionally aching with concern.
Her presence is felt most strongly at the end of the prologue:
“To me, these events were not just some frivolous Silicon Valley power moves. The drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence?”
This gives the work emotional resonance—something missing in most AI journalism.
5. Accessibility Without Oversimplification
The book is written for laypeople, journalists, students, and decision-makers—not just technologists. But it never dumbs things down. Instead, Hao explains difficult topics—like neural scaling laws or model alignment—in plain but precise language. This makes the book an ideal resource for anyone trying to make sense of the AI boom.
❌ Weaknesses
1. Some Narratives May Appear One-Sided
While the book is deeply reported and fair in its sourcing, it is not neutral—and that’s intentional. Hao sets out to expose power, not appease it.
That said, some readers may feel the portrayal of Altman, OpenAI, and the AI industry at large leans heavily toward criticism. There is little exploration of OpenAI’s possible positive impacts (e.g., educational access, automation of routine tasks, or AI for scientific discovery).
Altman’s defenders might argue that the book minimizes the good while overemphasizing the flaws. Yet this isn’t a failure of balance—it’s a refusal to water down a truth many prefer to ignore.
Note: Hao does attempt to reach out to Altman and OpenAI, but they declined to participate. She transparently acknowledges this.
2. Repetition of Themes
The colonial analogy—though powerful—is repeated frequently throughout the book. Some readers may feel it begins to lose its potency over time. This is a challenge with any single organizing metaphor sustained across more than 400 pages.
However, Hao mitigates this by expanding the metaphor through examples: from data extractivism to environmental costs to cultural appropriation.
3. The AGI Argument Remains Abstract
While the book critiques the ideology of AGI (artificial general intelligence) as myth-making, it doesn’t spend much time addressing counterarguments—such as whether AGI is even technically feasible or how alternative AI futures might look in detail.
This may leave some readers wondering: If not AGI, then what? What model of AI governance can actually work?
That said, this may have been beyond the scope of the book—which focuses on the present failure more than the future solution.
4. Limited Economic Alternatives Presented
The book critiques the tech industry’s concentration of wealth but spends little time analyzing economic alternatives—like publicly funded research labs, AI cooperatives, or open-source models. These could have added more vision to the critique.
Overall Assessment: Does It Matter?
Yes—immensely.
Despite minor limitations, Empire of AI is a vital contribution to the public conversation on technology, ethics, and governance. Its value lies not in perfection but in its courage to name the stakes and connect the dots between Silicon Valley, the Global South, and the futures we are blindly marching into.
This is not a book that flatters readers—it challenges them. And in the age of AI, that’s exactly what we need.
At a Glance: Strengths vs. Weaknesses
| Strengths | Weaknesses |
|---|---|
| Unmatched investigative depth | Limited exploration of OpenAI’s benefits |
| Bold, systemic framing | Heavy reliance on colonial metaphor |
| Accessible yet rigorous writing | AGI critique lacks deep technical exploration |
| Human-centric narratives | Lacks detailed alternative economic frameworks |
| Moral clarity and global reach | Some sections feel repetitive |
Reception, Criticism, and Influence
Since its release on May 20, 2025, Empire of AI by Karen Hao has sparked intense conversations across media, academia, and tech circles. Described by some as the “most important AI book of the decade” and by others as a “cynical hit job,” the book has been impossible to ignore. It has polarized readers and reviewers alike, which only underlines how central it has become to current debates on artificial intelligence.
This section explores how the world responded—both positively and critically—and why the book’s influence may be long-lasting.
Media Reviews: Rave, Reluctant, and Ruthless
✅ Praise from Journalists and Reviewers
The New York Times called the book “broader and more critical” than any other published account of OpenAI to date. In a joint review of Hao’s book and The Optimist by Keach Hagey, the Times wrote:
“Hao’s reporting dispels any doubt that OpenAI’s belief in ushering in AGI to benefit all of humanity had messianic undertones.”
MIT Technology Review, Hao’s former employer, described Empire of AI as “urgent and unsettling,” highlighting its deep research and global reporting:
“It forces us to ask not just who is building AI, but why, how, and at whose expense.”
Mashable praised Hao’s courage in challenging one of Silicon Valley’s most powerful figures. The site reported:
“This is not just a book about Altman. It’s about the systems that let people like Altman make god-like decisions with no democratic input.”
NPR’s Steve Inskeep, in an in-depth interview with Hao, emphasized the importance of her work:
“This is journalism at its most consequential: illuminating the gap between what a company claims to be and what it is.”
❌ Criticism from Silicon Valley Insiders
Not surprisingly, Empire of AI didn’t sit well with OpenAI’s leadership. Sam Altman refused to cooperate with Hao and later criticized the book online. While he didn’t cite specific factual inaccuracies, he implied that the book was biased, sensational, and motivated by resentment.
Altman allegedly told colleagues that Hao had an “agenda” and warned the public to “read it with caution.”
This kind of pushback, ironically, only fueled curiosity—and sales.
Academic and Policy Circles
Universities, think tanks, and regulatory bodies have been quick to add Empire of AI to reading lists. At Stanford, MIT, and the Oxford Internet Institute, the book is being used in seminars on ethics, governance, and the political economy of AI.
Policymakers in the EU and Global South have cited the book when advocating for stronger international regulation and labor protections.
“Hao’s work shows how data and labor are being extracted globally without informed consent or fair compensation. This cannot be the future of digital progress,” said a UN representative during a Geneva roundtable on AI equity in June 2025.
Commercial Success and Cultural Impact
- #1 New York Times Bestseller (Non-fiction)
- Translated into 19 languages within 3 months
- Featured on major podcasts like Hard Fork, The Ezra Klein Show, and Your Undivided Attention
- Documentary rights reportedly acquired by a major streaming platform (rumored: Netflix or HBO)
The book’s resonance comes not only from what it reveals, but when it was published. Released at the height of global AI hysteria—amid ChatGPT integrations into classrooms, newsrooms, and hospitals—it forced a cultural pause.
As one reviewer wrote:
“Empire of AI is not just about OpenAI. It’s a story about us, our blind faith in technology, and our abdication of responsibility.”
Backlash from AI Hype Community
The so-called “effective accelerationist” (e/acc) crowd—techno-optimists who believe AGI should be built as fast as possible—has called the book fear-mongering.
On X (formerly Twitter), some prominent VCs and founders derided Hao as “bitter,” “anti-progress,” and “out of touch.”
But Hao expected this. She addresses it in interviews:
“This book was never meant to make everyone comfortable. It was meant to challenge the narrative.”
Lasting Influence
Whether you agree with Hao’s conclusions or not, it’s impossible to deny that Empire of AI has shaped the way the public now talks about:
- AI colonialism
- Governance failures
- Data labor exploitation
- Environmental impact of AI
- Power asymmetry in tech
Her phrase “empire of AI” has already entered the academic and media lexicon. Much like “surveillance capitalism” or “platform monopoly,” it may come to define an entire era of critique.
Summary: A Book That Shifted the Discourse
| Area | Response |
| --- | --- |
| Mainstream Media | Largely positive, praised for courage and depth |
| Tech Industry | Defensive, some backlash from Altman and VCs |
| Academia & Policy | Widely respected and referenced |
| Cultural Impact | Bestseller, film rights sold, public discourse shifted |
Conclusion
Karen Hao’s Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI is more than a book. It’s a mirror, a megaphone, and a map. It reflects the state of modern tech power with startling clarity. It amplifies the often-muted voices of the global majority exploited by AI’s current systems. And it offers a compass to help us chart a different course—before it’s too late.
This isn’t just a book for AI experts or Silicon Valley insiders. It’s for anyone asking questions like:
- Who controls the future of AI?
- What are the costs—human, environmental, ethical—of rapid AI development?
- Can technology be built to serve the many, not the few?
A Global Reckoning, Not Just a Silicon Valley Story
While Empire of AI focuses heavily on OpenAI and Sam Altman, its real scope is planetary. It connects the dots between:
- Kenyan data workers paid $2/hour to train ChatGPT
- Chilean communities losing water to cool AI data centers
- Microsoft layoffs despite soaring valuations
- And elite boardroom coups that destabilize billion-dollar institutions
This book reminds us that AI is not abstract. It is built on people, energy, and ecosystems. And unless we shift the trajectory, its benefits will be reserved for a global elite.
A Cautionary Tale of Failed Governance
Perhaps the most chilling thread in the book is the collapse of OpenAI’s nonprofit governance model. What began as a bold experiment to build AGI safely and openly turned into a corporate empire driven by secrecy, profit, and mysticism.
“What was once unprecedented has become the norm.”
The November 2023 firing and reinstatement of Altman wasn’t just a tech drama—it was a stress test for the future of AI accountability. And it failed.
A Book That Changes Language and Landscape
Hao popularizes terms like:
- AI Colonialism
- AGI Messianism
- Techno-Empire
- Disaster Capitalism in AI
These terms are now being used in policy debates, university syllabi, and regulatory hearings. That’s impact.
“OpenAI is nothing without its people,” a phrase chanted by its employees during the coup, now doubles as a haunting summary of Hao’s central point: AI must serve people—not just power.
Should You Read It?
| Reader Type | Why It’s a Must-Read |
| --- | --- |
| Students & Scholars | To understand AI’s political economy and ethical dilemmas |
| Policymakers | For grounding in labor rights, regulatory gaps, and global asymmetries |
| Tech Professionals | To reflect on industry accountability and long-term impact |
| Activists & Advocates | To amplify calls for justice, transparency, and equitable innovation |
| General Readers | Because the decisions being made in tech today will shape your future |
A Note of Hope
Despite its dark revelations, Empire of AI ends on a deeply human note. Karen Hao still believes in the possibility of a better path—a more just, democratic, and sustainable approach to artificial intelligence.
“Just as empires of old eventually fell… we, too, can shape the future of AI together.”
But that future won’t be shaped by accident. It requires vigilance, resistance, and vision.
This book provides all three.