A Georgia Tech Researcher Says AI Apocalypse Fears Are Misplaced

Milton Mueller argues that computer scientists aren't qualified to predict societal outcomes - and that AI existential risk claims rest on unexamined assumptions.


The AI safety community operates on a set of assumptions that rarely get examined. What if an all-powerful superintelligence simply isn’t possible? What if the whole scenario rests on technical misunderstandings dressed up as philosophy?

Milton Mueller, a professor at Georgia Tech’s Jimmy and Rosalynn Carter School of Public Policy, published research in the Journal of Cyber Policy late last year that challenges the foundational claims of AI existential risk. His argument isn’t that advanced AI won’t create problems - it’s that the specific apocalyptic scenarios dominating safety discourse don’t hold up to scrutiny.

The Core Critique

Mueller examined 81 peer-reviewed papers on AI existential risk from Scopus and Web of Science. What he found was “a fragmented discourse characterized by bold yet often unsubstantiated claims, including accelerationist growth models and speculative calculations of catastrophic tipping points.”

His central argument: computer scientists are extrapolating societal outcomes from technical trends alone. Their expertise in how AI works, and their enthusiasm over its recent successes, doesn't translate into an ability to place the technology in social and historical context.

“Computer scientists are so focused on the AI’s mechanisms and are overwhelmed by its success,” Mueller writes, “but they are not very good at placing it into a social and historical context.”

The Definitional Problem

Mueller starts with something basic: nobody agrees on what artificial general intelligence actually means. The existential risk scenarios require AGI as a precondition - a system that can recursively self-improve toward superintelligence. But there’s no consensus definition of AGI, no clear path from current systems to that threshold, and no empirical evidence that such a threshold exists.

Different researchers define AGI differently. Some mean “as capable as a human at any cognitive task.” Others mean “capable of general reasoning across domains.” Still others use it loosely to mean “more capable than current systems.” When your existential predictions depend on achieving a state you can’t define, your predictions have a problem.

The Autonomy Misconception

Current AI systems don’t act autonomously. They require directed goals and training. They operate through user prompts rather than independent action. When something goes wrong - when a model “hallucinates” or produces harmful output - the issue stems from flawed instructions or training data, not from the system deciding to pursue its own objectives.

The existential risk scenarios require AI systems that form independent goals, resist human correction, and execute long-term strategies against human interests. Mueller argues there’s no evidence current architectures can do this and no theoretical basis for assuming future systems will.

Physical Constraints

Here’s something the scenarios often skip: even an all-powerful AI would need physical infrastructure to act in the world. Robots. Power sources. Manufacturing capacity. A superintelligent AI running in a data center cannot take over anything without human cooperation to expand its physical footprint.

A data center can’t become omnipotent on its own. It requires humans to build more data centers, design and manufacture robotic systems, maintain power supplies, and create physical actuators. Every step requires human labor and human institutions. The “fast takeoff” scenarios, where AI rapidly bootstraps to superintelligence before humans notice, ignore these dependencies.

The Alternative

Mueller isn’t arguing that AI is safe or that it doesn’t need regulation. His position is that the existential framing is counterproductive. Rather than trying to prevent an AI apocalypse, he argues, society should build targeted guardrails that reflect human values, drawing on the regulatory expertise that already exists in specific domains.

Different AI applications require different policies. Copyright law for data scraping. FDA oversight for medical AI. Financial regulations for AI trading systems. Employment law for automated hiring. The existential framing pulls attention away from these concrete, solvable problems toward abstract scenarios that may not be possible.

Steel-Manning the Other Side

The AI safety community has responses to these critiques. Eliezer Yudkowsky and others argue that waiting for empirical evidence of autonomous goal-formation means waiting until it’s too late. The claim is that by the time we have proof, a superintelligent system will already have the capability to prevent us from correcting it.

There’s also a reasonable argument that Mueller’s expertise (internet governance and cyber policy) doesn’t qualify him to assess the technical trajectory of AI systems. Computer scientists who work directly with these models may have insight into capability trends that policy researchers lack.

And there’s the precautionary argument: even if the probability of existential risk is low, the magnitude of the harm is large enough that some preventive effort is warranted. Expected value calculations favor caution.
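A back-of-the-envelope version of that reasoning, using purely illustrative numbers rather than figures from Mueller or the safety literature, looks like this:

% Illustrative only: the probability p and the loss L are assumed values,
% not estimates from any cited source.
\[
\mathbb{E}[\text{harm}] = p \times L
= 0.001 \times 8\ \text{billion lives}
= 8\ \text{million lives in expectation}
\]

The dispute is over the probability term: Mueller’s critique classes such figures with the “speculative calculations” he flags in the literature, not with the arithmetic itself.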

Why This Matters

The AI safety discourse shapes policy. Congressional testimony, executive orders, and international agreements increasingly reference existential risk. If that framing rests on unexamined assumptions - or if it’s simply a category error by computer scientists making sociological predictions - then policy is being built on a shaky foundation.

Mueller’s research doesn’t resolve the debate. But it does something valuable: it forces the existential risk community to defend assumptions they’ve treated as obvious. The burden of proof should be on those making extraordinary claims. “AI might end humanity” is an extraordinary claim. The evidence base needs to match.