CVE-2026-25580

GHSA-2jrp-274c-jhv3 HIGH
Published February 6, 2026
CISO Take

Any Pydantic AI application accepting message history from external users is exposed to SSRF attacks that can pivot to cloud metadata services and steal IAM credentials. Patch to pydantic-ai >= 1.56.0 immediately and treat this as critical in cloud environments where IMDS is accessible. Until patched, disable or sanitize external message history inputs — cloud credential theft via SSRF is a lateral movement multiplier with consequences far beyond the AI layer.

Affected Systems

Package           Ecosystem   Vulnerable Range       Patched
pydantic-ai       pip         >= 0.0.26, < 1.56.0    1.56.0
pydantic-ai-slim  pip         >= 0.0.26, < 1.56.0    1.56.0
pydantic_ai       -           -                      No patch
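The vulnerable range above can be checked mechanically against an installed version string. A minimal sketch, assuming plain MAJOR.MINOR.PATCH version strings (no pre-release tags; production tooling should use a proper version library):

```python
def parse(version: str) -> tuple:
    # Naive MAJOR.MINOR.PATCH parse; does not handle pre-release suffixes.
    return tuple(int(p) for p in version.split(".")[:3])

VULN_START, PATCHED = parse("0.0.26"), parse("1.56.0")

def is_vulnerable(version: str) -> bool:
    """True if a pydantic-ai / pydantic-ai-slim version falls inside
    the advisory's affected range: >= 0.0.26, < 1.56.0."""
    return VULN_START <= parse(version) < PATCHED
```

For example, `is_vulnerable("1.55.0")` returns True while `is_vulnerable("1.56.0")` returns False.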

Severity & Risk

CVSS 3.1
8.6 / 10
EPSS
0.0%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

  1. PATCH: Upgrade pydantic-ai or pydantic-ai-slim to >= 1.56.0 immediately; this is the only permanent fix.
  2. WORKAROUND: If patching is not immediately possible, block or sanitize all message history inputs from untrusted sources; do not pass external conversation history directly to Pydantic AI.
  3. NETWORK CONTROLS: Enforce egress filtering on AI agent deployments; explicitly block 169.254.169.254 (cloud IMDS) and RFC1918 addresses from application server outbound traffic.
  4. DETECTION: Monitor outbound HTTP requests from AI agent services for connections to private IP ranges, loopback, or cloud metadata endpoints; any hit from the application tier is a strong indicator of exploitation.
  5. AUDIT: Scan the codebase for all call sites where message_history or equivalent parameters accept user-controlled data and feed into Pydantic AI.
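The sanitization workaround in step 2 can be sketched as a pre-filter that rejects any history entry containing a URL resolving to internal address space. This is an illustrative sketch, not the library's API: the simplified dict-based message shape and the function names are assumptions, and real pydantic-ai message objects differ.

```python
import ipaddress
import re
import socket
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def url_targets_internal_host(url: str) -> bool:
    """True if the URL points at loopback, link-local (including the cloud
    IMDS at 169.254.169.254), or RFC1918 private address space."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable -> treat as unsafe
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        try:
            # Hostname, not an IP literal: resolve before deciding.
            addr = ipaddress.ip_address(socket.gethostbyname(host))
        except OSError:
            return True  # unresolvable -> reject rather than fetch
    return addr.is_private or addr.is_loopback or addr.is_link_local

def sanitize_history(messages: list[dict]) -> list[dict]:
    """Drop any message whose text contains a URL resolving internally.
    Message shape here is hypothetical; adapt to the real history format."""
    return [
        msg for msg in messages
        if not any(url_targets_internal_host(u)
                   for u in URL_RE.findall(msg.get("content", "")))
    ]
```

Deny-by-default is deliberate here: an unparseable or unresolvable URL is dropped rather than fetched, which is the safer failure mode for an SSRF guard.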

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness and Cybersecurity
ISO 42001
A.6.2 - AI-Related Roles and Responsibilities — Supply Chain
A.6.2.6 - Security in AI System Development
A.9.4 - AI System Technical Security Controls
NIST AI RMF
GOVERN 1.7 - Processes and procedures are in place for decommissioning and phase out of AI systems
MANAGE 2.2 - Mechanisms are in place to maintain the AI system
MEASURE 2.6 - Evaluation of AI Risk
OWASP LLM Top 10
LLM06 - Sensitive Information Disclosure
LLM07 - Insecure Plugin Design
LLM08 - Excessive Agency

Technical Details

NVD Description

Pydantic AI is a Python agent framework for building applications and workflows with Generative AI. From 0.0.26 to before 1.56.0, a Server-Side Request Forgery (SSRF) vulnerability exists in Pydantic AI's URL download functionality. When applications accept message history from untrusted sources, attackers can include malicious URLs that cause the server to make HTTP requests to internal network resources, potentially accessing internal services or cloud credentials. This vulnerability only affects applications that accept message history from external users. This vulnerability is fixed in 1.56.0.

Exploitation Scenario

An attacker submits a crafted conversation message to a public-facing AI assistant built on Pydantic AI (e.g., a customer support chatbot or developer tool exposed via API). The attacker's message history payload contains a URL pointing to http://169.254.169.254/latest/meta-data/iam/security-credentials/ targeting the AWS IMDS. The Pydantic AI URL download handler fetches this URL server-side, the response containing live IAM role credentials (AccessKeyId, SecretAccessKey, SessionToken) flows back through the agent, and the attacker harvests them. With active AWS credentials, the attacker escalates to full cloud environment access — S3 buckets, RDS instances, other services — entirely from a message history injection. No authentication, no AI expertise, and no user interaction required.
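The scenario above leaves a clear network signature: the application tier contacting a metadata endpoint or private address. A hedged detection sketch follows; the watchlist of metadata hosts is illustrative (AWS/Azure-style IMDS and GCP shown), not exhaustive, and non-IP hostnames would need DNS-aware tooling in practice.

```python
import ipaddress

# Illustrative watchlist: well-known cloud metadata endpoints.
METADATA_HOSTS = {"169.254.169.254", "metadata.google.internal"}

def is_suspicious_egress(host: str) -> bool:
    """Flag outbound destinations an AI agent tier should never contact:
    cloud IMDS endpoints, private, loopback, or link-local addresses."""
    if host in METADATA_HOSTS:
        return True
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Non-IP hostname not on the watchlist; resolve before judging.
        return False
    return addr.is_private or addr.is_loopback or addr.is_link_local
```

Wired into egress logs or a proxy, any True result from the application tier maps directly to the "strong indicator of exploitation" called out in the recommended actions.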

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:N/A:N

Timeline

Published
February 6, 2026
Last Modified
February 20, 2026
First Seen
February 6, 2026