If an AI system at your company started behaving dangerously right now — hallucinating medical advice, leaking customer data, making unauthorized financial decisions — how quickly could you shut it down?
If you work in IT security, governance, or digital trust, odds are you don’t know. And neither do your colleagues.
ISACA’s 2026 AI Pulse Poll, released at RSA Conference in late March, surveyed more than 3,400 digital trust professionals across IT audit, governance, cybersecurity, privacy, and emerging technology roles. The headline finding: 56 percent of respondents don’t know how quickly they could halt an AI system after a security incident.
Not “it would take too long.” They don’t know the answer at all.
The Numbers
The breakdown gets worse the closer you look.
Thirty-two percent of respondents believe they could shut down systems within 60 minutes; seven percent said it would take longer than an hour. The remaining majority — more than half of all respondents — couldn’t give a timeframe at all.
Incident response confidence follows a similar pattern. Only 43 percent expressed high confidence in their ability to investigate and explain a serious AI incident to leadership or regulators. More than a quarter — 27 percent — reported low or no confidence.
These aren’t random IT workers. These are the people whose entire job is digital trust: auditors, cybersecurity specialists, governance professionals. If they can’t articulate a kill-switch timeline, the organizations deploying AI in customer-facing and business-critical roles certainly can’t.
Nobody’s in Charge
The accountability numbers are the most revealing part of the survey.
When asked who is ultimately responsible if an AI system causes harm, the answers scattered: 28 percent pointed to the board or executives, 18 percent said the CIO or CTO, 13 percent identified the CISO. And 20 percent — one in five — admitted they simply don’t know who holds responsibility.
This isn’t a philosophical question about AI personhood. It’s a practical governance failure. When something goes wrong — and the survey strongly implies that respondents expect it will — a fifth of security professionals at enterprises deploying AI have no idea who answers for it.
Human oversight of AI decision-making is similarly fragmented. Only 36 percent report that humans approve most AI-generated actions before execution. Another 26 percent say humans review selected decisions after the fact. Eleven percent say humans only intervene when alerted to issues. And 20 percent don’t know how — or whether — humans oversee AI decisions at their organization at all.
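The split maps onto a control pattern most engineering teams already recognize. Here is a minimal sketch of the three oversight modes — the names and hooks are illustrative, not drawn from the survey or any particular framework:

```python
# Sketch of the three oversight modes the survey describes.
# All names here (OversightMode, ActionGate, the hooks) are illustrative.
from dataclasses import dataclass
from enum import Enum
from typing import Callable


class OversightMode(Enum):
    PRE_APPROVAL = "humans approve actions before execution"      # 36 percent
    POST_HOC_REVIEW = "humans review selected actions afterward"  # 26 percent
    ALERT_ONLY = "humans intervene only when alerted"             # 11 percent


@dataclass
class ActionGate:
    mode: OversightMode
    approve: Callable[[str], bool]          # human approval hook
    flag_for_review: Callable[[str], None]  # audit-queue hook
    alert: Callable[[str], None]            # alerting hook

    def execute(self, action: str, run: Callable[[], None], risky: bool = False) -> None:
        if self.mode is OversightMode.PRE_APPROVAL:
            if self.approve(action):        # blocked here if a human says no
                run()
        elif self.mode is OversightMode.POST_HOC_REVIEW:
            run()
            self.flag_for_review(action)    # sampled or rule-based in practice
        else:                               # ALERT_ONLY
            run()
            if risky:
                self.alert(action)          # humans only see it if a rule trips
```

By the survey’s numbers, the first branch covers barely a third of organizations; the last two — plus the 20 percent who don’t know which branch they’re in — cover everyone else.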
Deploy First, Govern Later
The survey paints a picture of organizations that have deployed AI ahead of the governance frameworks needed to manage it. Nearly a third of respondents — 32 percent — report that their organization has no disclosure requirements around AI use. Twenty percent have requirements but don’t enforce them consistently. Only 18 percent both require and enforce disclosure.
This tracks with broader industry data. Multiple surveys throughout 2025 and 2026 have found that only around a third of organizations have comprehensive AI policies in place. The technology is live. The rules aren’t.
The timing adds another dimension. The EU AI Act’s enforcement deadlines are actively phasing in throughout 2026, requiring documented risk assessments, human oversight mechanisms, and incident reporting for high-risk AI systems. If more than half of security professionals at surveyed organizations can’t estimate how long it takes to shut down an AI system, meeting these regulatory requirements is going to be a problem.
Why This Should Worry You
There’s a running theme in AI safety discussions: the gap between what labs say about AI capability and what deploying organizations can actually manage. ISACA’s survey quantifies that gap with hard data from the people responsible for closing it.
The models are getting more capable every quarter. Agentic AI systems are taking autonomous actions in production environments — writing code, making API calls, executing workflows. And the people tasked with overseeing these systems are telling surveyors, on the record, that they don’t know how to stop them in an emergency.
This isn’t a hypothetical risk. It’s a measured state of affairs at enterprises that are already running AI in production. The kill switch exists in theory. In practice, more than half the people who would need to pull it don’t know where it is.
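None of this is technically exotic. A kill switch for an agentic system can be as simple as a shared flag that every action checks before it runs. The sketch below uses a flag file and made-up names purely for illustration; a real deployment would use a feature-flag service or shared store with access controls, and the hard part is organizational: knowing who is allowed to flip it and how fast the flip propagates.

```python
# Sketch of a kill switch for an AI agent: a shared flag checked before every
# action. The flag-file path and names are illustrative; a real deployment
# would use a feature-flag service or shared store with access controls.
import json
import time
from pathlib import Path

FLAG = Path("/tmp/orders_agent.halt")  # hypothetical shared location


def halt(operator: str, reason: str) -> None:
    """Flip the switch, recording who pulled it and when for the incident report."""
    FLAG.write_text(json.dumps({"at": time.time(), "by": operator, "reason": reason}))


def is_halted() -> bool:
    return FLAG.exists()


def guarded(run_action: callable) -> None:
    """Every AI-initiated action passes through here, so a halt takes effect on the next call."""
    if is_halted():
        raise RuntimeError(f"AI system halted: {FLAG.read_text()}")
    run_action()
```

The timestamp and operator fields matter for the other half of ISACA’s question: being able to tell leadership or a regulator not just that the system was stopped, but when and by whom.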
What’s Being Done (And Why It’s Not Enough)
The industry’s response has been frameworks. NIST’s AI Risk Management Framework. The EU AI Act. ISO/IEC 42001. ISACA’s own AI governance resources. These provide structure for organizations that choose to implement them.
The operative word is “choose.” The ISACA survey suggests that a significant portion of organizations are deploying AI without the governance infrastructure these frameworks describe. Having a framework available and having one implemented are very different things, and the data shows most organizations are still on the wrong side of that gap.
The full 2026 AI Pulse Poll is scheduled for release in May. The preview findings suggest it will document an industry that knows exactly how to govern AI responsibly and has mostly decided not to bother.