The Ban-Build Cycle: Enterprise vs. Agentic AI
How companies are scrambling to govern AI agents — and why the pattern looks exactly like every technology panic before it.
On February 17, a compromised npm publish token pushed a modified version of the popular Cline command-line interface to the registry. The update looked identical to the legitimate release, with one addition: a postinstall script that silently installed OpenClaw, the viral agentic AI tool, on every machine that pulled the package. The tampered version sat live for eight hours before it was caught. According to security platform Socket, the script had already reached thousands of developer environments.
OpenClaw itself wasn't the malware. Someone had weaponized it as a delivery mechanism. David Shipley of Beauceron Security called the technique "deviously, terrifyingly brilliant." The attacker, Shipley noted, had "effectively turned OpenClaw into malware that EDR isn't going to stop." EDR, or endpoint detection and response, is the security software that runs on corporate machines to catch threats in real time. It works by flagging known malicious programs. OpenClaw isn't malicious. It's a legitimate, widely-used application. The security software had no reason to block it.
The incident crystallized a question that had been building for weeks across corporate IT departments: what do you do with a tool that employees want, attackers can exploit, and nobody fully understands?
The ban wave
The response came fast. At Meta, an executive told his team to keep OpenClaw off company laptops or face termination. Jason Grad, CEO of internet proxy company Massive, sent a late-night Slack warning to his 20-person staff on January 26, before anyone at the company had even installed it. "Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," Grad told WIRED.
At Valere, a software firm that works with clients including Johns Hopkins University, an employee posted about OpenClaw on an internal Slack channel the day it launched. The company's president responded within minutes: strictly banned.
Valere CEO Guy Pistone explained the reasoning. "If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," Pistone told WIRED. "It's pretty good at cleaning up some of its actions, which also scares me."
But Valere didn't stop at banning it. Pistone gave a research team an old employee laptop and 60 days to determine whether OpenClaw could be made safe. The researchers published their findings in a report shared with WIRED, and their conclusion was blunt: users have to "accept that the bot can be tricked." If OpenClaw is set up to summarize a user's email, a hacker could send a malicious message instructing the AI to share copies of files from the user's computer. The vulnerability is structural, not a bug to be patched.
The numbers
Banning a tool is one thing. Controlling an entire category of technology is another.
Gravitee's State of AI Agent Security 2026 report, based on a survey of more than 900 executives and technical practitioners, found that 80.9% of technical teams have moved past planning into active testing or production deployment of AI agents. The speed is real. But the security infrastructure behind it is not.
Only 14.4% of organizations report that all their AI agents go live with full security and IT approval. On average, just 47.1% of an organization's AI agents are actively monitored or secured. Nearly half (45.6%) of teams still rely on shared API keys for agent-to-agent authentication.
The confidence gap is stark. 82% of executives say they feel confident that existing policies protect them from unauthorized agent actions. The data from their own organizations contradicts that belief.
And the incidents are already here. 88% of surveyed organizations reported confirmed or suspected AI agent security incidents in the past year. In healthcare, that figure reached 92.7%. A Dark Reading poll found that 48% of security professionals consider agentic AI the top attack vector for 2026.
Gravitee's researchers gave this phenomenon a name that anyone who worked in IT during the 2010s will recognize: Shadow AI.
Three times before
The pattern is familiar. A new technology arrives. Employees adopt it. Security teams ban it. Employees use it anyway. The ban fails. Eventually, someone builds a governance framework, usually after something goes badly wrong.
Bring your own device. Starting in the late 2000s, employees brought personal smartphones to work. IT departments blocked them. Employees found workarounds. Shadow IT exploded so thoroughly that Dropbox built an entire go-to-market strategy around it. As Computerworld reported in 2015, Dropbox openly wanted "shadow IT to drive enterprise adoption." Companies eventually stopped fighting and built mobile device management (MDM) policies instead. The ban didn't work. Management did.
Cloud computing. Federal agencies resisted migrating to the cloud for years. A 2018 Congressional Research Service report documented "long-held concerns about security" among federal IT managers. Agencies worried about losing control of sensitive data. They delayed, studied, and formed committees. Then adoption happened anyway, and the frameworks followed.
The OPM breach and the road to CMMC. This one is worth tracing in detail because the timeline shows exactly how the ban-then-build cycle plays out at institutional scale.
After 9/11, a series of reforms revealed that the federal government had no coherent system for protecting sensitive unclassified information. President Obama's 2010 Executive Order 13556 described the existing approach as an "inefficient, confusing patchwork" of agency-specific policies that created "impediments to authorized information sharing." A series of cyber incidents in the early 2000s prompted the Pentagon to launch the Defense Industrial Base Cybersecurity Program in 2007. DFARS clause 7012 followed in 2017, requiring defense contractors to implement specific security controls.
Then came the breach that made the theoretical concrete. In 2015, the Office of Personnel Management disclosed that attackers had stolen 21.5 million security clearance records. The House Oversight Committee's report concluded that OPM had "failed to heed repeated recommendations from its Inspector General." The Cybersecurity Maturity Model Certification (CMMC) program followed in 2020, turning voluntary best practices into contractual requirements.
The pattern each time: warnings, then inaction, then a breach large enough to force a regulatory response.
What makes agents different
There is one critical difference between AI agents and the technologies that preceded them in this cycle. Cloud servers and personal phones don't fall for social engineering. AI agents can.
Traditional security models assume that the entity operating software is a human being capable of judgment. An AI agent with system-level access operates under no such assumption. It follows instructions, and it cannot reliably distinguish between legitimate instructions from its operator and malicious instructions embedded in content it processes.
The Valere research team identified this precisely. If an agent reads emails, any email can contain hidden instructions. If an agent browses the web, any webpage can attempt to redirect its behavior. The attack surface isn't a vulnerability in the traditional sense. It's a feature of how language models process input.
NVIDIA's AI Red Team put it in operational terms: the "primary threat to these tools is that of indirect prompt injection, where a portion of the content ingested by the LLM driving the model is provided by an adversary." The adversary doesn't need to breach the system. They just need to put text where the agent will read it.
This is what separates agentic AI from previous ban-build cycles. BYOD required physical access. Cloud exploits required technical sophistication. Prompt injection requires a carefully worded paragraph.
What would actually fix it
Think of deploying an AI agent like hiring a new employee. No company hands a new hire the master key to every system on day one. They get a badge, access to the floors they need, and a supervisor who signs off on anything unusual. Agents should work the same way.
Sandbox it. The agent runs in a contained environment where it can perform its assigned tasks but cannot reach outside its boundaries. NVIDIA's AI Red Team lists three mandatory controls: block network egress to arbitrary sites (preventing data exfiltration), block file writes outside the workspace (preventing persistence mechanisms and sandbox escapes), and block writes to configuration files regardless of location (preventing exploitation of hooks and tool configurations that often run outside sandbox context).
Least privilege. Grant only the permissions required for the specific task, nothing more. OWASP and Kaspersky's joint guidance states it directly: "Enforce the principles of both least autonomy and least privilege." A finance agent that needs to read ledger data should not have write access to the ledger without explicit CFO approval.
Human checkpoints. High-impact actions require human sign-off before execution. MIT Technology Review's guide to securing agentic systems recommends that "anything high-impact should require explicit human approval with a recorded rationale." The recorded rationale matters. It creates an audit trail that makes post-incident analysis possible.
Treat outside content as hostile. If the agent reads an email, a webpage, a PDF, or a repository, that content should be treated as untrusted until proven otherwise. OWASP's prompt injection guidance and OpenAI's own recommendations both call for strict separation of system instructions from user-provided content. In practice: new data sources should be reviewed and tagged before entering the agent's context, and persistent memory should be disabled when untrusted content is present.
Give agents real identities. Treat AI agents as non-human users with the same identity management and access controls applied to employees. MIT Technology Review poses the question every CEO should be able to answer: "Can we show, today, a list of our agents and exactly what each is allowed to do?" The Gravitee report found that only 21.9% of teams treat agents as independent, identity-bearing entities. The rest run them under generic service accounts or shared credentials.
None of this is theoretical. The controls exist. The frameworks have been published. The question is implementation.
The governance gap
McKinsey, via CIO.com, reports that 88% of firms are using AI in some capacity, but only 23% are scaling agentic AI specifically. The gap between experimentation and governed production deployment remains enormous.
At security conferences in early 2026, a recurring admission surfaced among executives: most had written policies banning AI agent tools rather than deploying actual security controls. The Gravitee data confirms this. Organizations have policies. They do not have enforcement. More than half of all deployed agents operate without any security oversight or logging.
The market has noticed the gap and is moving to fill it. Proofpoint acquired AI security startup Acuvity on February 12. Cisco launched agentic guardrails. Teleport introduced an Agentic Identity Framework. Singapore and UC Berkeley are publishing governance frameworks. The UK's Information Commissioner's Office is weighing data protection implications.
The question now is timing. Every previous cycle in enterprise technology has followed the same sequence: early adoption, security concerns, bans, continued adoption despite bans, a major incident, and then governance frameworks that should have been built from the start. The OPM breach happened five years after the executive order that identified the problem. CMMC took another five years after that. The BYOD cycle from first corporate ban to mature MDM policy spanned roughly a decade.
Agentic AI is compressing that timeline. The technology went from launch to corporate bans in less than a month. Adoption numbers are already at production scale. Security incidents are already being reported. The tools exist to govern this responsibly. Whether organizations will implement those tools before or after the forcing function arrives is the open question.
Pistone, the Valere CEO, framed the opportunity clearly: "Whoever figures out how to make it secure for businesses is definitely going to have a winner."
The tools are not going away. The question was never whether companies would adopt agentic AI. It was always whether governance would arrive before or after the breach that makes it mandatory.