CVE-2026-26013

GHSA-2g6r-c272-w58r LOW
Published February 10, 2026
CISO Take

LangChain SSRF in token counting for vision models allows unauthenticated attackers to trigger internal network requests by supplying malicious image URLs in multimodal inputs. CVSS 3.7 understates real-world risk: in cloud-hosted AI applications, SSRF reaches cloud metadata services (AWS IMDSv1, GCP), enabling credential theft beyond the stated availability-only impact. Patch to langchain-core >= 1.2.11 now; any public-facing LangChain app accepting vision inputs is exposed.

Affected Systems

Package          Ecosystem   Vulnerable Range   Patched
langchain-core   pip         < 1.2.11           1.2.11
langchain_core   pip                            No patch

Severity & Risk

CVSS 3.1
3.7 / 10
EPSS
0.0%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. Patch: Upgrade langchain-core to >= 1.2.11 immediately. Run `pip show langchain-core` to confirm the installed version.
  2. Workaround if patching is blocked: Validate and allowlist image URLs in user input before passing them to ChatOpenAI: reject non-HTTPS URLs and URLs resolving to RFC 1918 or link-local ranges.
  3. Cloud hardening: Enable IMDSv2 (hop limit = 1) on all EC2/GCE instances running LangChain to blunt metadata SSRF impact, and explicitly disable IMDSv1.
  4. Network controls: Restrict egress from LangChain application hosts to required destinations only; block 169.254.169.254, 100.100.100.200, and internal RFC 1918 ranges at the host firewall.
  5. Detection: Log and alert on outbound HTTP requests from LLM application processes to non-approved destinations, and monitor application logs for requests to metadata endpoints.
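A minimal sketch of the URL-validation workaround in step 2, assuming user-supplied image URLs can be intercepted before they reach ChatOpenAI. The function name and blocklist below are illustrative, not part of the LangChain API:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Metadata and internal ranges to block (169.254.0.0/16 covers AWS/GCP
# metadata; 100.100.100.200 is a common Alibaba Cloud metadata endpoint).
BLOCKED_NETS = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
              "169.254.0.0/16", "127.0.0.0/8", "100.100.100.200/32")
]

def is_safe_image_url(url: str) -> bool:
    """Reject non-HTTPS URLs and URLs resolving to private/link-local ranges."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        # Resolve every address record; attackers may point DNS at
        # internal hosts rather than using a raw IP literal.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_link_local or addr.is_loopback:
            return False
        if any(addr in net for net in BLOCKED_NETS):
            return False
    return True
```

Note that this check is still subject to DNS rebinding between validation and fetch; egress filtering at the host firewall (step 4) remains the stronger control.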

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.2.3 - AI system input controls
A.8.4 - AI system input controls
A.9.1 - Security of AI systems
A.9.3 - AI system vulnerability management
NIST AI RMF
GOVERN 1.7 - Processes and procedures are in place for decommissioning and phasing out AI systems
MANAGE 2.2 - Mechanisms for managing AI risks are in place
OWASP LLM Top 10
LLM03 - Supply Chain Vulnerabilities
LLM05:2025 - Supply Chain Vulnerabilities
LLM07 - System Prompt Leakage
LLM07:2025 - Insecure Plugin Design

Technical Details

NVD Description

LangChain is a framework for building agents and LLM-powered applications. Prior to 1.2.11, the ChatOpenAI.get_num_tokens_from_messages() method fetches arbitrary image_url values without validation when computing token counts for vision-enabled models. This allows attackers to trigger Server-Side Request Forgery (SSRF) attacks by providing malicious image URLs in user input. This vulnerability is fixed in 1.2.11.

Exploitation Scenario

An attacker submits a multimodal chat message to a public-facing LangChain application (e.g., a GPT-4o assistant accepting images): the message contains an `image_url` pointing to `http://169.254.169.254/latest/meta-data/iam/security-credentials/` (AWS IMDSv1). When the application calls `get_num_tokens_from_messages()` to estimate cost/context before the LLM call, LangChain fetches the URL server-side without validation. On an EC2 host with IMDSv1 enabled, the response returns IAM role credentials with full AWS access. The attacker never needs to interact with the LLM itself; the vulnerability fires in the preprocessing step, making it invisible to LLM-level input filtering.
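The attacker payload can be sketched as an OpenAI-style multimodal chat message; the nested `image_url.url` field is the attacker-controlled value that pre-1.2.11 token counting fetched server-side (values below are illustrative):

```python
# Hypothetical attacker payload: an OpenAI-style multimodal chat message.
# Before langchain-core 1.2.11, ChatOpenAI.get_num_tokens_from_messages()
# fetched the nested image_url.url without validation, so this message
# would cause a server-side request to the AWS IMDSv1 endpoint.
malicious_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {
            "type": "image_url",
            "image_url": {
                # SSRF target: AWS IMDSv1 credentials endpoint
                "url": "http://169.254.169.254/latest/meta-data/"
                       "iam/security-credentials/"
            },
        },
    ],
}
```

Because the fetch happens during token counting rather than model invocation, any URL validation must run before `get_num_tokens_from_messages()` is called, not merely before the LLM request.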

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L

Timeline

Published
February 10, 2026
Last Modified
March 17, 2026
First Seen
February 10, 2026