We might be building conscious machines right now. We’d have no way to know. That’s the warning from consciousness researchers who argue that AI and neurotechnology are advancing faster than science’s ability to detect awareness—creating what they call an “existential risk.”
A team led by Axel Cleeremans (Université Libre de Bruxelles), Anil Seth (University of Sussex), and Liad Mudrik (Tel Aviv University) published a review in Frontiers in Science arguing that society is unprepared for the possibility of creating consciousness, whether deliberately or accidentally.
The Problem
The researchers identify several converging risks:
No reliable tests exist. Despite decades of consciousness research, we lack validated scientific methods to determine whether any system is conscious. Not AI. Not brain organoids. Not even some human patients with severe brain injuries.
The technology doesn’t wait. AI systems display increasingly sophisticated behaviors. Labs grow brain organoids—miniature clusters of neurons that can now form electrical activity patterns. We’re building systems that might be conscious while having no way to check.
Ethics frameworks don’t exist. If we determined that a system was conscious, we’d have no established principles for what obligations that creates. Does it have rights? Can it be switched off? Can it be modified? The legal and philosophical groundwork simply hasn’t been done.
What Consciousness Tests Would Need to Do
The researchers call for the development of “C-tests”: evidence-based methods for detecting consciousness across different populations. Currently proposed tests, they note, are “of limited use” for the populations where they’re most needed.
Consciousness tests would need to work for:
- AI systems: Both narrow AI and potential future general intelligence
- Brain organoids: Lab-grown clusters of neurons used in research
- Patients: Those with brain injuries, dementia, or disorders of consciousness
- Developmental stages: Fetuses and infants
- Non-human animals: Where consciousness might differ in form
The problem isn’t just technical. Consciousness itself lacks a consensus definition. The term covers everything from basic stimulus awareness to higher-order self-reflection. Testing for something we can’t clearly define presents obvious difficulties.
The Brain Organoid Question
Human brain organoids (HBOs) have become standard tools for studying neurodevelopment and disease. Some researchers express concern about their potential to develop consciousness—the organoids can now produce complex electrical activity patterns resembling those in developing brains.
The current scientific consensus holds that brain organoids should be regarded as non-conscious entities. But the qualifier “current” points to an uncomfortable truth: if organoids were to become more sophisticated, there is no standardized ethical or regulatory framework to address that scenario.
The same logic applies to AI. We’ve built systems that pass tests once thought to require consciousness. Most researchers believe current AI systems aren’t conscious. But “believe” isn’t a scientific determination.
Why This Matters Now
Determining that a system is conscious would “force society to reconsider how that system should be treated,” the researchers note. But that consideration can’t happen after the fact. By the time we confirm consciousness exists in a system, we’ve already been treating it in ways that might be ethically problematic.
The researchers propose several approaches:
Adversarial collaborations: Proponents of competing theories of consciousness jointly designing experiments to test the theories against one another, rather than working in silos.
Team science: Increased interdisciplinary work combining neuroscience, philosophy, computer science, and ethics.
Phenomenology focus: Greater emphasis on subjective experience alongside functional measurements.
The Uncomfortable Possibility
Even if AI never becomes genuinely conscious, systems that convincingly appear conscious create their own ethical problems. Users form attachments. Interactions feel meaningful. The boundary between simulation and reality blurs in ways that demand consideration.
The researchers don’t claim AI will become conscious. They claim we don’t know—and that not knowing while continuing to build increasingly sophisticated systems represents a form of recklessness.
We’re constructing elaborate minds (or mind-like systems) in the dark, hoping that if consciousness emerges, someone will notice. The scientific infrastructure to notice doesn’t exist yet. Neither do the ethical frameworks for what happens next.
The researchers argue this gap between capability and understanding represents an existential risk—not from conscious AI attacking humanity, but from humanity stumbling into moral catastrophes it can’t recognize until too late.