Get a Demo

Required fields are marked with an asterisk *

No CVE, No Problem? The Dangerous Blind Spot in AI Security

Author:
Yonatan Keller
,
Analyst Team Lead
Published on
January 13, 2026
Blog

For decades, the industry has relied on a common language to talk about software flaws: CVE IDs. That lingua franca lets defenders correlate advisories, scanners, and patches across vendors and tools. With AI systems, however, we don’t yet have an equivalent, internationally adopted standard for “AI vulnerabilities.” The result is inconsistent reporting and uneven remediation, even as organizations deploy AI across critical workflows.

Why is it hard to define an “AI vulnerability” in the first place? Traditional vulnerability classifications assume a product defect with a clear root cause, reproducible steps, affected versions, and a vendor responsible for a fix. However, AI failures often don’t fit this mold. Model behavior is probabilistic and context-dependent: the same prompt can yield different outputs, and safety depends on external data, tools, and permissions. Attacks like prompt injection weaponize natural-language inputs or embedded instructions in web pages and files to steer downstream actions, without exploiting a conventional code bug. The issue might lay in the model weights, the retrieval pipeline, the tool-calling logic, the dataset, or the hosting configuration. And indeed, that ambiguity makes it hard to assign a single identifier that behaves like a classic CVE. 

This gap is why the current CVE program only partially covers AI issues. CVE works well for conventional software defects in the AI stack (inference servers, SDKs, drivers…) where you can publish a patch and enumerate affected versions. Recent NVIDIA Triton Inference Server bugs, for example, received standard CVE entries and patches. However, model poisoning and prompt-injection cases - such as the latest seven issues in ChatGPT which might be exploited for leaking personal data - are typically not assigned CVE IDs because they concern system behavior rather than a discrete, patchable defect.

The lack of a unified standard carries practical risks. Without a common registry, findings scatter across blogs, repos, and individual advisories; security teams struggle to normalize severity, deduplicate signals, or automate prioritization. Asset owners may miss exposures because scanners don’t know what to look for; and suppliers use different labels for the same issue. Cross-organizational coordination also suffers: if a data-poisoning vector is described differently by each researcher, it’s harder to write detections, craft mitigations, or set remediation SLAs. In short, the absence of CVE-like identifiers for AI-specific problems slows down the cycle from discovery to fix. 

On a more positive note, there are constructive efforts underway: OWASP’s LLM Top 10 captures AI-specific risks, such as prompt injection, model denial of service, and data/model poisoning; MITRE ATLAS maps AI-specific TTPs; the AI Vulnerability Database (AVID) builds a taxonomy for AI failures at different stages of development; NIST released an AI Risk Management Framework; and ENISA tries to develop AI cybersecurity standardization based upon ISO controls.

More important are the activities of the CVE-AI Working Group. Formed in August 2024 by both AI and vulnerability specialists, the group is developing guidance on what aspects of AI systems should be considered “CVE-able.” For instance, among its early conclusions, it suggests to distinguish prompt-injection cases that are essentially detection bypasses (generally not CVE-able) from those that produce classic security weaknesses (such as hidden functionality or exposure of personal data) which can be CVE-able. In parallel, a sister CWE AI working group is drafting AI-relevant weakness families so future AI CVEs can reference stable CWE terminology. None of these efforts is, by itself, a universal registry, but together they move the ecosystem in the right direction.

What should organizations do while the standard matures? Treat AI-specific weaknesses as real exposures even if they lack CVE IDs, track them in internal registries and map them to OWASP/ATLAS categories for consistency. Organizations should also expect that some issues will never receive a vendor patch and instead require control-layer mitigations—tightening authorization on tool use, validating inputs and outputs, or monitoring abuse patterns. Pending upon the establishment of a CVE-like standard for AI infrastructure, disciplined internal taxonomy, shared community references, and risk-management standards are the best way to keep AI vulnerabilities visible and fixable. 

A Practical Guide: Evolving from VM to CTEM

Traditional vulnerability management must change. So many are drowning in detections, and still lack insights. The time-to-exploit window sits at 5 days. Implementing a Continuous Threat Exposure Management (CTEM) program is the path forward. Moving from vulnerability management to CTEM doesn't have to be complicated. This guide outlines steps you can take to begin, continue, or refine your CTEM journey.

Download Now
Discover how Zafran Security can streamline your vulnerability management processes.
Request a demo today and secure your organization’s digital infrastructure.
Discover how Zafran Security can streamline your vulnerability management processes.
Request a demo today and secure your organization’s digital infrastructure.
Request Demo
On This Page
Share this article: