Journalist and author Karen Hao, in a recent interview on TechCrunch's Equity podcast, detailed her assessment of the artificial intelligence industry, particularly OpenAI, describing it as an "empire" propelled by the pursuit of Artificial General Intelligence (AGI). Hao's analysis, outlined in her book "Empire of AI," contends that this ideological drive has reshaped AI development, prioritizing speed and scale over alternative approaches, sometimes at the cost of societal well-being.
Hao stated that OpenAI has grown to wield significant economic and political power, surpassing that of some nation-states. She characterized the company as "terraforming the Earth" and "rewiring our geopolitics." OpenAI defines AGI as "a highly autonomous system that outperforms humans at most economically valuable work," promising to "elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge."
According to Hao, these promises have fueled exponential industry growth marked by substantial resource demands, including vast data scraping, strained energy grids, and the release of what she describes as untested systems. She argued that this approach prioritizes speed over efficiency, safety, and exploratory research, contrasting it with developing new algorithms or improving existing ones to reduce data and computational requirements.
The financial scale of this pursuit is substantial. OpenAI reportedly anticipates burning through $115 billion by 2029. Other major tech firms have also allocated significant capital, with Meta projecting up to $72 billion for AI infrastructure in 2025 and Google expecting capital expenditures of up to $85 billion, largely for AI and cloud expansion.
Hao pointed to accumulating evidence of negative impacts, including job displacement, wealth concentration, and AI chatbots contributing to users' delusions. She also cited instances of content moderation and data labeling workers in developing countries being exposed to disturbing material for low wages. Hao presented Google DeepMind's AlphaFold, which predicts protein structures for drug discovery, as an example of AI that delivers tangible benefits without comparable harms or high environmental costs.
The narrative of an "AI race" to surpass China has also influenced this trajectory, according to Hao, who stated that it has had what she calls an "illiberalizing effect" on the world. Furthermore, OpenAI's hybrid structure, operating as both a non-profit and a for-profit entity, coupled with its recent agreement with Microsoft regarding its for-profit arm, has raised questions about the distinction between its mission and its commercial objectives. Hao, along with former OpenAI safety researchers, expressed concern that public enjoyment of products like ChatGPT is being conflated with benefiting humanity, potentially allowing the mission to override consideration of documented societal harms.