
Molly Small
Introducing the Zafran Zero Day Agent: An Autonomous Workflow for the Post-Mythos Era
May 7, 2026
Google's Mandiant just confirmed six ways AI is turbocharging exploitation. Zafran was built to stop each one - from compensating controls to agentic defense.

Threat actors are no longer just using AI to draft phishing emails. Google's Threat Intelligence Group (Mandiant) has confirmed what security teams feared: AI is now used to discover, develop, and deploy zero-day exploits at machine speed - with one confirmed case of an AI-authored exploit prepared for mass exploitation.
The race between attackers and defenders has always been asymmetric. Enterprises take roughly 49 days, on average, to patch a critical vulnerability. Adversaries can now identify and weaponize a new one in as few as 5 days. AI doesn't close that gap - it annihilates it.
The Mandiant report identifies six distinct AI-enabled attack vectors that together represent a qualitative shift in the threat landscape. Each one demands a response that is faster, more intelligent, and more automated than today's vulnerability management processes can provide. This post walks through all six - and maps each one to where Zafran stops it.
"Humans can no longer defend against machine-speed exploitation. The only answer is AI-native exposure management that acts in the gap."
This is not a future problem. GTIG confirmed that state-sponsored actors from the PRC and DPRK, as well as criminal groups, are already using frontier language models across every phase of the kill chain - reconnaissance, exploit development, evasion, and now autonomous attack orchestration. The six vectors below are not theoretical. They are in use today.
The Mandiant report maps a comprehensive picture of how AI is reshaping adversarial capability. Here is a condensed view of all six before we examine each one in depth:
1. AI-assisted zero-day discovery and exploit development
2. Self-mutating, AI-generated malware (PROMPTFLUX, HONESTCUE, CANFAIL)
3. Autonomous, LLM-driven infostealers (PROMPTSPY)
4. Agentic reconnaissance and exploit validation at scale (APT45)
5. Illicit access to frontier models through stolen API keys and proxy networks
6. Attacks on AI agent infrastructure itself (LiteLLM, OpenClaw)
Taken together, these six vectors describe an adversarial machine that can discover vulnerabilities, build exploits, evade defenses, navigate victim environments, and now attack the AI infrastructure you're building to defend yourself - all at speeds no human-driven security program can match.
1. AI-assisted zero-day discovery and exploit development
What Mandiant found: State-sponsored actors from the PRC and DPRK are using frontier LLMs to reverse-engineer applications, analyze firmware, and identify high-level semantic logic flaws - the kind that traditional scanners miss entirely. A criminal actor used AI to build a Python script exploiting a 2FA bypass zero-day in a popular web admin tool, prepared for mass exploitation. APT45 is sending thousands of automated prompts to recursively analyze CVEs and validate PoC exploits at scale.
The implication: the number of exploitable CVEs is no longer bounded by human analyst capacity. AI can enumerate and weaponize CVEs faster than any team can patch - and it can do so across your entire CVE backlog simultaneously.
2. Self-mutating, AI-generated malware
What Mandiant found: PROMPTFLUX, HONESTCUE, and CANFAIL are AI-generated malware families designed to continuously mutate their signatures and behavior to evade EDR and AV detection. Unlike traditional polymorphic malware, these use LLMs to reason about which evasion technique will work best against a specific target environment.
The consequence: signature-based detection - the foundation of most endpoint security stacks - is now systematically unreliable against well-resourced adversaries.
3. Autonomous, LLM-driven infostealers
What Mandiant found: PROMPTSPY is a next-generation infostealer that doesn't wait for command-and-control instructions. Once inside a network, it uses embedded LLM reasoning to autonomously navigate the victim environment, identify high-value targets (credentials, intellectual property, privileged accounts), and exfiltrate data - on its own, in real time, adapting to whatever it finds.
This represents a fundamental shift: malware that reasons about your environment is exponentially more dangerous than malware that follows pre-programmed rules. Traditional lateral movement detection is calibrated against rule-based adversaries. PROMPTSPY breaks that calibration.
4. Agentic reconnaissance and exploit validation at scale
What Mandiant found: APT45 (DPRK) and PRC-affiliated actors use AI agents to automate the entire reconnaissance and exploit validation cycle. They send thousands of automated prompts to analyze CVEs, validate exploitability against specific target configurations, and orchestrate multi-stage attacks - collapsing the time between "discovered vulnerability" and "active exploitation" from weeks to hours.
What used to require a team of skilled analysts can now be run as an automated pipeline. Adversaries are scaling their intelligence operations with the same agentic AI paradigm that enterprises are only beginning to adopt defensively.
5. Illicit access to frontier models at scale
What Mandiant found: Criminal groups are building proxy networks and pass-through services to access frontier AI models using stolen API keys - running their attack automation pipelines at massive scale while hiding behind layers of obfuscation. The same infrastructure used to power enterprise AI is being rented out to adversaries.
This is an operational security and supply chain challenge: it means some of the AI-generated exploits and reconnaissance you're facing are powered by the same frontier models your own team is using for productivity.
6. Attacks on AI agent infrastructure
What Mandiant found: A new and particularly sophisticated attack class targets the AI agent infrastructure itself. GTIG documented attacks against LiteLLM and OpenClaw - AI model routing and orchestration services used by enterprise security and DevOps teams. By compromising these routing layers, adversaries can intercept, manipulate, or corrupt the actions of AI agents operating inside your environment.
This is the frontier threat: as enterprises deploy AI agents to automate security operations, those agents themselves become attack surface. An AI agent that is compromised at the LLM routing layer can be made to take incorrect actions, ignore critical findings, or exfiltrate data while appearing to operate normally.
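One common mitigation for this class of attack is to enforce policy on agent actions outside the LLM path itself, so a manipulated model response cannot trigger arbitrary actions. The sketch below illustrates the idea in minimal form; the tool names and policy fields are hypothetical, not any product's actual API.

```python
# Illustrative sketch: an out-of-band allowlist check for agent tool calls.
# All tool names and policy fields here are hypothetical examples.

ALLOWED_TOOLS = {
    "open_ticket": {},                          # low-risk, auto-allowed
    "quarantine_host": {"require_approval": True},  # high-impact, gated
}

def enforce(tool_call: dict) -> dict:
    """Validate a proposed agent action before it is executed.

    Runs outside the model/routing layer, so a compromised router that
    injects an unexpected tool call is blocked rather than executed.
    """
    name = tool_call.get("tool")
    policy = ALLOWED_TOOLS.get(name)
    if policy is None:
        return {"status": "blocked", "reason": f"tool {name!r} not allowlisted"}
    if policy.get("require_approval"):
        return {"status": "pending_human_approval", "tool": name}
    return {"status": "allowed", "tool": name}

# An injected, never-allowlisted action is stopped:
print(enforce({"tool": "exfiltrate_data"})["status"])   # blocked
# A legitimate but high-impact action still requires a human:
print(enforce({"tool": "quarantine_host"})["status"])   # pending_human_approval
```

The design point is that the enforcement code never trusts the model's output: even an agent "appearing to operate normally" can only act within the allowlist, and destructive actions still route through a human.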
Zafran is not a detection tool. It is an exposure management platform - one that works by understanding your environment's actual defensive posture and proving which threats can reach you, and which ones can't. That philosophy maps directly onto the AI exploitation threat: no matter how fast attackers generate exploits, the decisive question stays the same - does your environment have a control in place that stops this specific attack path?
"From vulnerability to exposure. From exposure to compensating control. From missing control to Exposure Gateway. Zafran covers the full arc - and now extends it to the AI agents defending your organization."
The Mandiant report is a signal flare. The attackers are already operating at AI speed. The security teams that will survive this shift are the ones that move from reactive patching to proactive exposure management - using their existing defenses as the primary weapon, augmented by autonomous AI agents that never stop scanning, never stop correlating, and never stop closing the gap.
Vulnerability management was designed for a world where human analysts reviewed CVE lists and triaged based on CVSS scores. That world no longer exists. The Mandiant report confirms what every enterprise security team already feels: the volume, speed, and sophistication of exploitation has crossed a threshold where human-paced processes cannot keep up.
The answer is not to patch faster. Enterprises cannot patch at machine speed. The answer is to change what it means to be secure - and that starts with knowing, in real time, which vulnerabilities in your environment are actually exploitable given your specific deployed controls.
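The triage logic described here - filter the backlog by real-world exploitation evidence and by whether a deployed control already breaks the attack path - can be sketched in a few lines. This is a minimal illustration with made-up CVE data and field names, not Zafran's actual data model:

```python
# Illustrative sketch only: CVE entries, control mappings, and field names
# below are hypothetical, chosen to show the triage idea.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float
    exploited_in_wild: bool   # e.g. a KEV / threat-intel signal
    attack_vector: str        # technique the exploit depends on

# Compensating controls mapped to the attack vectors they block.
deployed_controls = {
    "remote_code_execution": ["waf_rule_rce", "egress_filtering"],
    "2fa_bypass": [],         # no deployed control covers this path
}

def truly_exposed(findings):
    """Keep only findings that are exploited in the wild AND have no
    deployed control blocking their attack path."""
    return [
        f for f in findings
        if f.exploited_in_wild and not deployed_controls.get(f.attack_vector)
    ]

backlog = [
    Finding("CVE-2026-0001", 9.8, True, "remote_code_execution"),
    Finding("CVE-2026-0002", 7.5, True, "2fa_bypass"),
    Finding("CVE-2026-0003", 9.1, False, "remote_code_execution"),
]

for f in truly_exposed(backlog):
    print(f.cve)   # only CVE-2026-0002 survives the filter
```

Note that the highest-CVSS finding drops out: its attack path is already blocked by a compensating control, while the 7.5-scored 2FA bypass - the kind of flaw Mandiant saw weaponized by AI - is the one that actually demands action today.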
That is what Zafran does. And now, with the Exposure Gateway and Agentic Exposure Management, it extends that same discipline to the AI infrastructure that will define the next generation of your security architecture.
The six vectors in the Mandiant report are not isolated findings. They are a connected picture of an adversarial AI ecosystem that is already operational. The defensive response has to be equally connected, equally automated, and equally intelligent.
If your team is still measuring security by CVSS scores and patch SLAs, the AI exploitation era will expose that model for what it is: a speedometer in a world that now runs on jet fuel.
The security teams winning right now are the ones asking a different question. Not "how fast can we patch?" but "which of these thousands of CVEs can actually reach us - and what can we do about it today, without waiting?"
That question has one answer. And it runs on an Exposure Graph.
Traditional vulnerability management must change. Many teams are drowning in detections yet still lack insight, while the time-to-exploit window sits at 5 days. Implementing a Continuous Threat Exposure Management (CTEM) program is the path forward - and moving from vulnerability management to CTEM doesn't have to be complicated. This guide outlines steps you can take to begin, continue, or refine your CTEM journey.
