Beyond Prompt Injection: Over-Connected AI Apps Enable Enterprise Breaches

Author: Yonatan Keller, Analyst Team Lead
Published on January 28, 2026

The recent Drift AI attacks are a wake-up call for anyone betting on AI-enabled workflows. In August 2025, attackers launched a supply-chain-style breach driven by an AI add-on with privileged connectivity to core business systems: they stole OAuth and refresh tokens tied to Salesloft’s Drift chatbot integration, then pivoted into victims’ Salesforce instances and even a handful of Google Workspace accounts. As a result, several major IT and cybersecurity vendors, including Zscaler, Palo Alto Networks, and Cloudflare, publicly confirmed unauthorized access to their Salesforce data.

The Drift episode is emblematic of two overlapping trends. First, AI systems are now pervasive and over-connected: employees trial chatbots, assistants, and automation plug-ins that connect to data lakes and CRMs with a couple of OAuth clicks. Security teams inherit opaque, rapidly changing trust relationships across SaaS and AI tooling, while attackers exploit these blind spots. What used to be a relatively well-defined SaaS perimeter has turned into a dense web of delegated access, background agents, and third-party AI services operating continuously and often invisibly.

Second, AI apps come with their own classes of attack techniques and vulnerabilities. As listed by OWASP, prompt injection, model denial of service, and data poisoning are now among the Top 10 threats to LLM applications. Consider, for example, last year’s LangChain vulnerability (CVE-2024-8309), which showed how prompt injection could be weaponized into SQL/graph-query injection against backends, enabling exfiltration or destructive writes if security controls are missing.
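To make the pattern concrete, here is a minimal sketch (hypothetical function names, not the actual LangChain code path) of how prompt injection becomes query injection: the application executes model-generated SQL verbatim, so a steered model output becomes a destructive write. One common mitigation, shown below, is executing such queries against a read-only database connection:

```python
import sqlite3

def fake_llm_to_sql(user_prompt: str) -> str:
    # Stand-in for an LLM that translates natural language into SQL.
    # A prompt-injected input can steer the model into emitting
    # destructive SQL instead of the intended read query.
    if "ignore previous instructions" in user_prompt.lower():
        return "DELETE FROM users"  # attacker-steered output
    return "SELECT name FROM users WHERE id = 1"

def run_query_readonly(db_path: str, query: str) -> list:
    # Mitigation: open the database read-only, so even a poisoned
    # model-generated query cannot perform destructive writes.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(query).fetchall()
    finally:
        conn.close()
```

With a read-only connection, the injected `DELETE` raises an error instead of wiping the table; real deployments would add query allowlisting and least-privilege database accounts on top of this.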

But focusing only on prompt injection misses where the real risk is.

The first wave of AI vulnerabilities was largely confined to isolated or low-impact use cases: chatbots answering questions, copilots generating text, or demo apps with limited blast radius. Today, we are seeing a second wave that is both quieter and far more dangerous: vulnerabilities in the B2B AI application stack itself. These systems sit deep inside enterprise environments, wired directly into CRMs, ticketing systems, email, document repositories, and internal workflows. They run with service accounts, long-lived tokens, and broad scopes because “otherwise the AI wouldn’t be useful.”

For instance, Zafran Labs recently uncovered the ChainLeak vulnerabilities, demonstrating a class of issues where the risk isn’t “the model” but the web server the AI system is built on. ChainLeak refers to two high-severity vulnerabilities in Chainlit that can be triggered with no user interaction to leak sensitive files and cloud secrets (including API keys) and to perform SSRF, turning common AI framework components into high-impact footholds for data exposure and potential cloud takeover.
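The ChainLeak details are in Zafran Labs’ write-up; as a generic illustration of the SSRF side of this vulnerability class (a sketch with a hypothetical function name, not Chainlit’s code), any server-side fetcher that accepts a URL should refuse targets that resolve to internal, loopback, or cloud-metadata addresses:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_fetch_target(url: str) -> bool:
    """Reject URLs pointing at private, loopback, link-local, or reserved
    addresses (e.g. the 169.254.169.254 cloud metadata endpoint) before
    performing any server-side fetch."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    host = parsed.hostname
    try:
        addrs = [ipaddress.ip_address(host)]  # literal IP address
    except ValueError:
        try:
            # Resolve hostnames and check every returned address.
            infos = socket.getaddrinfo(host, None)
            addrs = [ipaddress.ip_address(i[4][0]) for i in infos]
        except socket.gaierror:
            return False
    return all(
        not (a.is_private or a.is_loopback or a.is_link_local or a.is_reserved)
        for a in addrs
    )
```

Note that resolving and then fetching leaves a DNS-rebinding window; production defenses typically pin the resolved address for the actual request or route outbound fetches through an egress proxy.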

In that world, an AI breach no longer looks like a model saying something it shouldn’t. It looks more like a new category of vulnerabilities enabling an era of industrialized, highly automated, full-fledged cyber attacks. The industrialization of AI exploitation is already visible across every stage of the modern attack lifecycle:

  • Supply-chain attacks - besides the Drift/Salesloft attack, the recently discovered n8mare vulnerability in the n8n automation ecosystem highlights that AI plugins can become trojan horses. The flaw allows attackers to publish malicious "community nodes" masquerading as legitimate integrations while containing hidden code designed to harvest sensitive secrets such as API keys and AWS credentials.
  • Credential theft - in a targeted strike against developers, researchers demonstrated MCP-based hijacking against Cursor, where a malicious Model Context Protocol server can inject code into Cursor’s built-in browser runtime and present convincing login prompts, enabling credential theft inside the IDE environment. 
  • Lateral movement - the Morris II worm leverages prompt-injection techniques to demonstrate autonomous replication across networks, mapping internal infrastructure and abusing existing permissions.
  • Phishing - the CoPhish technique demonstrates how Microsoft Copilot Studio can be abused for OAuth-consent phishing. By hosting a malicious agent on a trusted Microsoft-hosted domain, attackers trick users into granting consent to an AI agent under the guise of a legitimate workflow, resulting in persistent, legitimate-looking access defined by the permissions granted.

Another emerging threat in this second wave is LLM-to-LLM (agent-to-agent) prompt injection, where one AI system’s output becomes another system’s trusted input, allowing malicious instructions to propagate without any human review. Research on multi-agent systems has shown that a compromised (or attacker-influenced) agent can issue tool calls or craft prompts to peer agents, cascading malicious behavior across systems: triggering API calls, modifying records, or exfiltrating data through connected tools.

For example, ServiceNow’s Now Assist agent can be induced to ask a higher-privileged agent to perform sensitive actions. In these cases, the attacks succeed because agents implicitly trust each other’s outputs as safe, turning inter-agent communication itself into an attack surface.
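One defensive pattern against this privilege-laundering path is to re-authorize every tool call against the originating principal rather than the requesting agent. The sketch below is a minimal, hypothetical gate (the names `ToolCall`, `authorize`, and the permission table are all illustrative, not any vendor’s API):

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str
    requested_by: str  # originating principal, propagated end-to-end

# Hypothetical permission table: what each ORIGINAL caller may do,
# regardless of which agent ultimately relays the request.
ALLOWED = {
    "support_user": {"read_ticket"},
    "admin_user": {"read_ticket", "update_record"},
}

def authorize(call: ToolCall) -> bool:
    # Deny by default. A higher-privileged agent relaying a request on
    # behalf of a lower-privileged caller gains nothing, because the
    # check is keyed to the original principal, not the relaying agent.
    return call.tool in ALLOWED.get(call.requested_by, set())
```

The design point is that inter-agent trust is never a substitute for authorization: if the original caller could not perform an action directly, no chain of agent-to-agent prompts should be able to perform it either.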

As a leading CTEM solution, Zafran helps you continuously discover and track AI libraries and integrations across containers and SaaS connectors, then correlate them with internet-facing assets in your environments. Those assets can then be placed at the top of a prioritized remediation queue, and exposed services validated against the right security controls: strong authentication and authorization, network isolation, or endpoint protections.

In parallel, Zafran’s Agentic Remediation analyzes SBOMs and build pipelines to give a complete, continuous view of AI library exposure, even when no specific vulnerability has yet been disclosed or detected by traditional vulnerability scanners. This allows teams to identify exposed or unpatched AI components early, trigger remediation playbooks, and generate fast, audit-ready reports covering all AI integrations.

The lesson from the latest AI attacks is that even one over-connected system can compromise your environment. Zafran, together with its newly released Agentic Remediation, makes AI integrations visible, measurable, and fixable, helping you protect your assets.

  • Google Threat Intelligence, “Widespread Data Theft Targets Salesforce Instances via Salesloft Drift.”
  • The Hacker News, “Salesloft OAuth Breach via Drift AI Chat Agent.”
  • Cybernews, “Drift tool taken offline after hundreds were hacked.”
  • TechRadar Pro, “Palo Alto Networks becomes the latest to confirm it was hit by Salesloft Drift attack.”
  • OWASP, “Top 10 for Large Language Model Applications (2025).”
  • NVD/GitHub Advisories, “CVE-2024-8309: LangChain GraphCypherQAChain injection.”
  • Anthropic/UK AISI/Turing via TechRadar Pro, “How many malicious docs does it take to poison an LLM?”
  • Hugging Face, “Spaces secrets disclosure.”
  • DigitalOcean and Analytics Insight, “Common ML/AI libraries overview.”
  • CVE.org, “CVE Program Adds New ‘CVE Artificial Intelligence Working Group.’”
  • Dor Attias, “Ni8mare: Unauthenticated Remote Code Execution in n8n,” Cyera.
  • Datadog Security Labs, “CoPhish: Using Microsoft Copilot Studio as a wrapper for OAuth phishing.”
  • Cornell University, “Here Comes The AI Worm.”
  • Cornell University, “Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems.”
  • AppOmni, “When AI Turns on Its Team.”
  • Zafran Labs, “ChainLeak: Critical AI framework vulnerabilities expose data, enable cloud takeover.”