Why Agentic AI Is Security's Next Blind Spot

The Hacker News T2 clear 12 May 2026 1951 words ORIGINAL
Classification
SEV 6/10
Why Agentic AI Is Security's Next Blind Spot  The Hacker News  May 12, 2026 Artificial Intelligence / Threat Detection Agentic AI is already running in production environments across many organizations today. It is executing tasks, consuming data, and taking actions — most likely without meaningful involvement from the security team. The industry conversation has largely framed this as a question of policy: allow it, restrict it, or monitor it?
CONFIDENCE53%
Categories
vulnerabilityiot_ot_securitysupply_chain
Threat Actors
ContiPlay
Target Sectors
manufacturinggovernment

Why Agentic AI Is Security's Next Blind Spot  The Hacker News  May 12, 2026 Artificial Intelligence / Threat Detection Agentic AI is already running in production environments across many organizations today. It is executing tasks, consuming data, and taking actions — most likely without meaningful involvement from the security team. The industry conversation has largely framed this as a question of policy: allow it, restrict it, or monitor it?

However, that framing misses the point. The more urgent question is whether security professionals actually understand what they are dealing with. In most organizations, they don't right now. And that gap is compounding by the week. You cannot secure what you do not understand The foundational principle of information security has not changed: genuine fluency in a technology must come before you can meaningfully defend it.

Think about firewalls. You cannot configure one well without understanding networking. When cloud computing arrived, organizations that skipped the foundational work ended up with environments they could not reason about — tools purchased, policies written, and still no real control. We have cloud security as its own discipline today precisely because the technology demanded that practitioners develop deep familiarity with it before security could follow.

The same dynamic is playing out with AI, at a faster pace and with higher stakes. The practical consequence of being behind on agentic AI goes beyond technical exposure. Security teams that cannot speak the language of AI engineering — that cannot challenge design decisions, propose workable controls, or ask informed questions — get bypassed. Business units move forward without them, not out of bad faith, but because a security team that cannot engage substantively with the technology is not a useful partner for decisions about it.

This has played out with every major technology shift over the past two to three decades. AI will be no different. The starting point is engagement. Try building an agent. Experiment with the tools your developers are already using. This hands-on familiarity is where real understanding begins, and real understanding is what makes everything else possible. Three categories of agents, three categories of risk The agentic AI landscape is broad, and the risk profile varies significantly across it.

Three categories are worth understanding distinctly. The first is general-purpose coding and productivity agents — tools like Claude Code and GitHub Copilot. These are already embedded in developer and engineering workflows across your organization. Whether they have been formally approved or not, they are being used. What data they can access, how they interact with codebases, and what actions they can take is baseline security knowledge at this point.

The second is vendor-built agents powered by the Model Context Protocol , or MCP. MCP is the integration layer that allows agents to connect to external services and act on their behalf. Nearly every major vendor either has an MCP server in production or is actively building one. In practice, this means an agent managing a user's calendar, email, or internal ticketing system can receive input from those channels and act on it.

A malicious calendar invite carrying hidden instructions in the event description is a real attack vector — the agent reads it, interprets the embedded prompt, and executes. This is a live attack surface that requires deliberate configuration and security review. The third category is custom agents built by individual users , and this is where the dynamic gets particularly interesting. For years, a real barrier existed between security practitioners who understood risk and the code that ran in their environments.

Most security professionals are not programmers. Building custom tooling required development skills that were not widely distributed across security teams. That barrier is gone. With agentic AI, anyone in the organization can build functional tools — automations, workflows, agents with real system access — without writing traditional code. For security teams, this is genuinely valuable. Incident investigation, forensic triage, threat hunting workflows — these can be accelerated when practitioners can build the tools they actually need.

But that same capability extends to every other team. Marketing, finance, operations — everyone can build agents now. Many will. Most of those agents will not go through a security review before they go live. This is a supply chain problem in a different form. The cost of arriving late When security teams lag behind on a major technology shift, the pattern is consistent. First, the rest of the organization moves forward without security input.

Developers deploy, business units adopt, and security is consulted as a formality — or not at all. Second, the exposure compounds. The more powerful the agents an organization deploys, the more access those agents require. Broad permissions are what make agents useful: access to calendars, communication platforms, file systems, code repositories, internal APIs. That access is also what makes the blast radius significant when something goes wrong.

An agent with access to both a terminal and an email inbox can be manipulated through either channel to act in the other. That is a lateral movement path an attacker will look for. Reasoning about it requires understanding how the agent was built — the kind of understanding that only comes from genuine engagement with the technology. The skills that matter right now Building competency in agentic AI security requires two distinct layers of knowledge. understanding how AI applications are architected — from a practitioner's perspective, not a data scientist's.

What are the components of an AI application? How do agents consume inputs, chain tools together, and produce outputs? What does a session with an MCP-connected agent actually look like from an access control standpoint? This is the foundation that makes everything else actionable. The second layer is currency . The tooling and threat landscape around AI is moving fast. Vendors are building security controls for AI systems, though most are still maturing.

Open-source frameworks are emerging. OWASP and others are publishing threat taxonomies that evolve week to week. Once the foundational layer is in place, staying current becomes the ongoing discipline — knowing which tools are worth evaluating, which frameworks are gaining traction, and what questions to ask when vendors come in with solutions. That second point matters more than it might seem. Security teams are already being approached by vendors selling AI security products.

Without foundational knowledge of how these applications are built, those conversations are almost impossible to navigate well. You cannot distinguish a well-designed control from a marketing wrapper if you don't understand what you're trying to control. Configuration as a security control Many agentic AI deployments carry risk because they were stood up without security-conscious configuration — not because the underlying tools are fundamentally broken.

Take a self-hosted AI assistant connected to a communication channel like Telegram, which can be common. without proper controls, the agent could respond to anyone who messages it. That is a wide-open entry point. A simple configuration change — pairing the agent with a single trusted account — closes most of that exposure. One decision, made early, with a meaningful security outcome. The broader principle is scope.

An agent built to manage your calendar should not have access to your terminal. An agent processing incoming requests should not have write access to your code repository. Scoping agents to their intended function limits the blast radius and reduces the attack surface available for exploitation. The tension is real: powerful agents need broad access to be useful. That is the trade-off organizations will push back on.

Finding the right balance requires security involvement early in the design process — before the architecture is set and before the permissions are already in place. Getting ahead of it at SANSFIRE 2026 The organizations building genuine AI security fluency now will be positioned to shape how these systems are deployed. Those who arrive late will find themselves, once again, applying controls to an architecture that was already decided without them.

This July, I will be teaching SEC545: GenAI and LLM Application Security at SANSFIRE 2026. The course covers how AI applications are actually built, how agentic systems work in practice, the real attack surfaces security teams need to understand, and the tools and controls available to address them — including hands-on work with techniques like model scanning to detect compromised models before they run in your environment.

For practitioners who want to engage with AI systems from a foundation of real understanding, this is where to start. Register for SANSFIRE 2026 here. Note: This article has been expertly written and contributed by Ahmed Abugharbia, SANS Certified Instructor. Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News , Twitter LinkedIn to read more exclusive content we post.  Tweet  Share  Share  Share Agentic AI , artificial intelligence , Claude Code , Cloud security , cybersecurity , GitHub Copilot , MCP , OWASP , threat detection ⚡ Top Stories This Week 30,000 Facebook Accounts Hacked via Google AppSheet Phishing Campaign Trellix Confirms Source Code Breach With Unauthorized Repository Access ⚡ Weekly Recap: AI-Powered Phishing, Android Spying Tool, Linux Exploit, GitHub RCE and More Progress Patches Critical MOVEit Automation Bug Enabling Authentication Bypass Microsoft Details Phishing Campaign Targeting 35,000 Users Across 26 Countries Critical Apache HTTP/2 Flaw (CVE-2026-23918) Enables DoS and Potential RCE Palo Alto PAN-OS Flaw Under Active Exploitation Enables Remote Code Execution The Hacker News Launches 'Cybersecurity Stars Awards 2026' — Submissions Now Open ThreatsDay Bulletin: Edge Plaintext Passwords, ICS 0-Days, Patch-or-Die Alerts and 25+ New Stories PAN-OS RCE Exploit Under Active Use Enabling Root Access and Espionage Linux Kernel Dirty Frag LPE Exploit Enables Root Access Across Major Distributions New Linux PamDOORa Backdoor Uses PAM Modules to Steal SSH Credentials Quasar Linux RAT Steals Developer Credentials for Software Supply Chain Compromise 2026: The Year of AI-Assisted Attacks Day Zero Readiness: The Operational Gaps That Break Incident Response We Scanned 1 Million Exposed AI Services.

Here's How Bad the Security Actually Is ⭐ Featured Resources [Webinar] Learn How Autonomous Validation Keeps Pace With AI Attacks [Guide] Get Practical AI SOC Insights to Improve Threat Detection [Demo] Discover How to Control Autonomous Identity Risks Effectively [Demo] Stop Email Attacks and Protect Cloud Workspace Data Faster Cybersecurity Webinars Building Stronger Defenses Stop Patient Zero Attacks Before They Bypass Detection Learn how to stop patient zero attacks before they bypass detection and compromise your systems at entry points.

Register Reduce AppSec Risk Validate Real Attack Paths Before Attackers Exploit Them Learn how to validate real attack paths and reduce exploitable risk with continuous agentic security validation. ⚡ Latest News Cybersecurity Resources Build Security Strategy That Earns Executive Buy-In — SANS LDR514, NYC SANS LDR514 in NYC, Aug 10–15: policy, risk frameworks, board communication, and strategic leadership.

Your VPN is Helping Attackers Move as Fast as AI AI collapsed human response window and turned remote access into fastest path to breach. Earn a Master's in Cybersecurity Risk Management Lead the future of cybersecurity risk management with an online Master’s from Georgetown. Expert Insights Articles Videos From Phishing to Recovery: Breaking the Ransomware Attack Chain  May 04, 2026 Read ➝ Mythos is Coming: What the Next Six Months Require Your Biggest Security Risk Isn’t Malware — It's What You Already Trust CTM360 Exposes Global GovTrap Campaign With 11,000+ Fake Government Portals Targeting Citizens Worldwide  April 27, 2026 Get the Latest News in Your Inbox Get the latest news, expert insights, exclusive resources, and strategies from industry leaders, all for free.

Extracted Entities (1)
CVEs
CVE-2026-23918
ID: 316Lang: enType: article