
The recent Drift AI attacks are a wake-up call for anyone betting on AI-enabled workflows. In August 2025, attackers launched a supply-chain-style breach driven by an AI add-on with privileged connectivity to core business systems: they stole OAuth and refresh tokens tied to Salesloft’s Drift chatbot integration, then pivoted into victims’ Salesforce instances and even a handful of Google Workspace accounts. As a result, several major IT and cybersecurity vendors, including Zscaler, Palo Alto Networks, and Cloudflare, publicly confirmed unauthorized access to their Salesforce data.
The Drift episode is emblematic of two overlapping trends. First, AI systems are now pervasive and over-connected: employees trial chatbots, assistants, and automation plug-ins that connect to data lakes and CRMs with a couple of OAuth consent clicks. Security teams inherit opaque, rapidly changing trust relationships across SaaS and AI tooling, while attackers exploit these blind spots. What used to be a relatively well-defined SaaS perimeter has turned into a dense web of delegated access, background agents, and third-party AI services operating continuously and often invisibly.
Second, AI apps come with their own classes of attack techniques and vulnerabilities. Prompt injection, model denial of service, and data poisoning now rank among the OWASP Top 10 threats to LLM applications. Consider last year’s LangChain vulnerability (CVE-2024-8309), which showed how prompt injection can be weaponized into SQL/graph-query injection against backends, enabling exfiltration or destructive writes when security controls are missing.
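To make the pattern concrete, here is a minimal sketch of this class of bug, not the LangChain code path itself. The `llm_to_sql` stub is a hypothetical stand-in for a real LLM call; the point is what happens to its output downstream:

```python
import sqlite3

def llm_to_sql(question: str) -> str:
    # Hypothetical stand-in for an LLM chain that writes SQL from natural
    # language. Whoever influences `question` influences the output; with a
    # real model, an injected instruction can make the generated statement
    # destructive or exfiltrating.
    return f"SELECT name FROM users WHERE name LIKE '%{question}%'"

def run_unsafe(question: str, conn: sqlite3.Connection):
    # VULNERABLE: the model's output is trusted and executed verbatim.
    return conn.execute(llm_to_sql(question)).fetchall()

def run_guarded(question: str, conn: sqlite3.Connection):
    # Mitigation sketch: treat generated SQL like any other untrusted input.
    query = llm_to_sql(question).strip().rstrip(";")
    if not query.lower().startswith("select") or ";" in query:
        raise ValueError("generated query rejected: single SELECT only")
    return conn.execute(query).fetchall()
```

The guarded path rejects an injected multi-statement query while the unsafe path hands it straight to the engine; read-only database roles and parameterized access are the more robust fixes.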
But focusing only on prompt injection misses where the real risk is.
The first wave of AI vulnerabilities was largely confined to isolated or low-impact use cases: chatbots answering questions, copilots generating text, or demo apps with limited blast radius. Today, we are seeing a second wave that is both quieter and far more dangerous: vulnerabilities in the B2B AI application stack itself. These systems sit deep inside enterprise environments, wired directly into CRMs, ticketing systems, email, document repositories, and internal workflows. They run with service accounts, long-lived tokens, and broad scopes because “otherwise the AI wouldn’t be useful.”
For instance, Zafran Labs recently uncovered the ChainLeak vulnerabilities, demonstrating a class of issues where the risk isn’t “the model” but the web server the AI system is built on. ChainLeak refers to two high-severity vulnerabilities in Chainlit that can be triggered with no user interaction to leak sensitive files and cloud secrets (including API keys) and to perform server-side request forgery (SSRF), turning common AI framework components into high-impact footholds for data exposure and potential cloud takeover.
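The SSRF half of that class has a well-known mitigation shape. The sketch below is an illustrative guard, not Chainlit’s code: it resolves a user-supplied URL and refuses to fetch anything that lands on a private, loopback, or link-local address, the usual route to cloud metadata endpoints and internal services:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def assert_public_url(url: str) -> None:
    # Reject anything that is not plain http(s) with a hostname.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        raise ValueError(f"unsupported URL: {url!r}")
    # Resolve the host and check every returned address: SSRF payloads
    # often point at 169.254.169.254 (cloud metadata) or 127.0.0.1.
    for info in socket.getaddrinfo(parsed.hostname, parsed.port or 80):
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            raise ValueError(f"refusing to fetch internal address {addr}")

assert_public_url("https://example.com/")           # passes
assert_public_url("http://169.254.169.254/latest")  # raises ValueError
```

Note that resolve-then-check still leaves a DNS-rebinding window; production code should pin the resolved address and use it for the actual request.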
In that world, an AI breach no longer looks like a model saying something it shouldn’t. It looks like a new category of vulnerabilities enabling industrialized, highly automated, full-fledged cyber attacks, and that industrialization of AI exploitation is already visible across every stage of the modern attack lifecycle.
Another emerging threat in this second wave is LLM-to-LLM (agent-to-agent) prompt injection, where one AI system’s output becomes another system’s trusted input, allowing malicious instructions to propagate without any human review. Research on multi-agent systems has shown that a compromised (or attacker-influenced) agent can issue tool calls or craft prompts to peer agents, cascading malicious behavior across systems: triggering API calls, modifying records, or exfiltrating data through connected tools.
For example, ServiceNow’s Now Assist agent can be induced to ask a higher-privileged agent to perform sensitive actions. These attacks succeed because agents implicitly trust each other’s outputs as safe, turning inter-agent communication itself into an attack surface.
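A minimal sketch of the missing control, using hypothetical agent names and tool scopes: authorize each tool call against the privileges of the agent that originated the request, not the higher-privileged peer that relays it.

```python
from dataclasses import dataclass

# Hypothetical per-agent tool scopes; a real deployment would load policy.
AGENT_SCOPES = {
    "triage_agent": {"read_ticket"},
    "admin_agent":  {"read_ticket", "update_record", "send_email"},
}

@dataclass
class ToolCall:
    tool: str
    origin: str  # the agent whose output ultimately requested this call

def dispatch(call: ToolCall, requested_by: str) -> None:
    # VULNERABLE pattern: checking only `requested_by` (the higher-privileged
    # peer relaying the request) lets a low-privileged agent's injected
    # instruction inherit admin scopes.
    # Safer pattern: authorize against the *originating* agent's scopes.
    if call.tool not in AGENT_SCOPES.get(call.origin, set()):
        raise PermissionError(f"{call.origin} may not invoke {call.tool}")
    print(f"executing {call.tool} on behalf of {call.origin}")

# A prompt-injected triage_agent asks admin_agent to update a record:
try:
    dispatch(ToolCall(tool="update_record", origin="triage_agent"),
             requested_by="admin_agent")
except PermissionError as e:
    print(f"blocked: {e}")
```

Tracking provenance through the chain of agent messages is the hard part in practice; without it, every inter-agent hop is a privilege-escalation opportunity.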
As a leading CTEM solution, Zafran helps you continuously discover and track AI libraries and integrations across containers and SaaS connectors, then correlate them with internet-facing assets in your environment. You can then place those assets at the top of a prioritized remediation queue and validate that exposed services use the right security controls: strong authentication and authorization, network isolation, or endpoint protection.
In parallel, Zafran’s Agentic Remediation analyzes SBOMs and build pipelines to give a complete, continuous view of AI library exposure, even when no specific vulnerability has been disclosed or picked up by traditional vulnerability scanners. This allows teams to identify exposed or unpatched AI components early, trigger remediation playbooks, and quickly generate audit-ready reports covering all AI integrations.
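As a rough illustration of the SBOM side of that idea (not Zafran’s implementation), a scanner can walk a CycloneDX inventory and flag AI components by name, independent of whether any CVE is attached to them yet. The watchlist below is a hypothetical sample:

```python
import json

# Illustrative watchlist; a real program would track many more AI packages.
AI_LIBRARIES = {"langchain", "chainlit", "llama-index", "transformers", "openai"}

def flag_ai_components(sbom_path: str) -> list[dict]:
    """Scan a CycloneDX JSON SBOM and report AI-related components,
    whether or not a vulnerability has been published for them."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for comp in sbom.get("components", []):
        name = comp.get("name", "").lower()
        if name in AI_LIBRARIES:
            findings.append({"name": name, "version": comp.get("version", "?")})
    return findings

if __name__ == "__main__":
    for hit in flag_ai_components("sbom.cdx.json"):
        print(f"AI component in inventory: {hit['name']} {hit['version']}")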
The lesson from the latest AI attacks is clear: even one over-connected system can compromise your environment. Zafran, together with its newly released Agentic Remediation, makes AI integrations visible, measurable, and fixable, helping you protect your assets.
Traditional vulnerability management must change. Too many teams are drowning in detections yet still lack insight, while the time-to-exploit window sits at five days. Implementing a Continuous Threat Exposure Management (CTEM) program is the path forward, and moving from vulnerability management to CTEM doesn't have to be complicated. This guide outlines steps you can take to begin, continue, or refine your CTEM journey.
