CVE-2024-58340

HIGH
Published January 12, 2026
CISO Take

Any LangChain-based application running MRKL agents on version 0.3.1 or earlier is vulnerable to a DoS attack delivered via prompt injection, with no authentication required. An attacker who can influence LLM output (e.g., through user-supplied prompts in a downstream app) can stall your agent service with a single crafted string. Upgrade to a fixed LangChain release (newer than 0.3.1) as soon as one is available; until then, wrap MRKLOutputParser calls with a timeout and sanitize LLM output before parsing.

Affected Systems

Package     Ecosystem   Vulnerable Range   Patched
langchain   pip         ≤ 0.3.1            No patch

Do you use langchain 0.3.1 or earlier with MRKL agents? You're affected.

Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. PATCH: Upgrade langchain to the first version past 0.3.1 that includes the fixed regex; verify with `pip show langchain`.
  2. WORKAROUND (if patch is not immediate): Wrap MRKLOutputParser.parse() calls with a signal-based or thread-based timeout (e.g., 2–5 seconds); raise a parsing error and abort on timeout.
  3. INPUT HYGIENE: Truncate LLM output to a reasonable maximum length (e.g., 4 KB) before passing to the parser; reject outputs with suspicious repetitive patterns.
  4. RATE LIMITING: Apply per-user/session rate limits on agent invocations to reduce DoS throughput.
  5. DETECTION: Alert on sustained high CPU usage in agent worker processes; log parsing duration and alert on outliers >500 ms.
  6. INVENTORY: Audit all internal and customer-facing apps that import langchain.agents.mrkl.
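Steps 2 and 3 above can be sketched as a single guard function. This is a minimal illustration, not LangChain's own API: `parse_fn` stands in for the real parser callable, and in a deployment you would pass `MRKLOutputParser().parse` (adapt to your LangChain version). The 4 KB cap and 3-second timeout are the example values from the list, not vendor guidance.

```python
import concurrent.futures

MAX_OUTPUT_BYTES = 4096   # step 3: cap output length before parsing
PARSE_TIMEOUT_S = 3.0     # step 2: hard cap on parse time

def safe_parse(parse_fn, llm_output: str, timeout_s: float = PARSE_TIMEOUT_S):
    """Truncate LLM output, then run the parser under a hard timeout.

    parse_fn is any callable taking the text to parse; in a real
    deployment this would be MRKLOutputParser().parse (assumption --
    adapt to your LangChain version's API).
    """
    text = llm_output[:MAX_OUTPUT_BYTES]
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(parse_fn, text)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # Caveat: the worker thread keeps burning CPU until the regex
        # gives up; a process pool gives a harder kill if you need one.
        raise ValueError("output parsing exceeded timeout; rejecting input")
    finally:
        pool.shutdown(wait=False, cancel_futures=True)

print(safe_parse(str.upper, "Action: search"))  # ACTION: SEARCH
```

A thread-based timeout bounds request latency but does not stop the stuck regex thread itself, which is why the inventory and rate-limiting steps still matter.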

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness, and Cybersecurity
ISO 42001
A.6.1.4 - AI Risk Assessment
A.6.2.6 - AI System Availability and Resilience
A.9.2 - AI Incident Handling
A.9.3 - AI System Risk Treatment
NIST AI RMF
MANAGE-2.2 - Mechanisms to Sustain the Deployed AI System
MAP-5.1 - Likelihood and Impact of AI Risks
OWASP LLM Top 10
LLM01:2025 - Prompt Injection
LLM07:2025 - System Prompt Leakage / Insecure Output Handling
LLM10:2025 - Unbounded Consumption

Technical Details

NVD Description

LangChain versions up to and including 0.3.1 contain a regular expression denial-of-service (ReDoS) vulnerability in the MRKLOutputParser.parse() method (libs/langchain/langchain/agents/mrkl/output_parser.py). The parser applies a backtracking-prone regular expression when extracting tool actions from model output. An attacker who can supply or influence the parsed text (for example via prompt injection in downstream applications that pass LLM output directly into MRKLOutputParser.parse()) can trigger excessive CPU consumption by providing a crafted payload, causing significant parsing delays and a denial-of-service condition.
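The backtracking behavior described above can be reproduced in miniature. The pattern below is an illustrative textbook ReDoS regex, not the actual expression from output_parser.py (which the advisory does not quote): nested quantifiers over an ambiguous group, anchored so a near-match forces the engine to try every partition of the input.

```python
import re
import time

# Illustrative backtracking-prone pattern (assumption: NOT the exact
# regex from LangChain's output_parser.py). The nested quantifier
# (a+)+ plus the end anchor makes failed matches exponential.
EVIL = re.compile(r"^(a+)+$")

def time_failed_match(n: int) -> float:
    payload = "a" * n + "!"          # trailing '!' guarantees failure
    start = time.perf_counter()
    assert EVIL.match(payload) is None
    return time.perf_counter() - start

# Each extra 'a' roughly doubles the work: catastrophic backtracking.
for n in (14, 18, 22):
    print(f"n={n}: {time_failed_match(n):.4f}s")
```

Running it shows parse time climbing steeply with only a few extra characters, which is why even a modest length cap on parser input defangs this class of bug.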

Exploitation Scenario

An attacker targets a public-facing AI assistant built on LangChain MRKL agents. They craft a user prompt designed to cause the underlying LLM to produce output containing a pathological string — for example, a long sequence of spaces or repeated characters that exploits the backtracking in the MRKL action-extraction regex (e.g., `Action: ` followed by thousands of repeated ambiguous characters). The application passes the LLM's raw output directly to MRKLOutputParser.parse() without sanitization. The regex engine enters catastrophic backtracking, pegging one CPU core at 100% for tens of seconds per request. An attacker automating dozens of such requests can exhaust worker threads and render the service unavailable for all users within minutes.
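A cheap pre-parse filter can catch the payload shape this scenario describes before it ever reaches the vulnerable regex. The sketch below is an assumption-laden heuristic, not part of LangChain: the 64-character run threshold is a placeholder to tune against your own legitimate traffic.

```python
import re

# Assumption: 64 chars is the longest benign run of a single repeated
# character in legitimate MRKL-formatted output; tune for your traffic.
MAX_RUN = 64

# Linear-time check: a long run of one repeated character (thousands of
# spaces after "Action: ", in the scenario above) is a strong
# ReDoS-payload signal. DOTALL lets the check also catch newline runs.
_LONG_RUN = re.compile(r"(.)\1{%d,}" % MAX_RUN, re.DOTALL)

def looks_pathological(llm_output: str) -> bool:
    """Return True if the output should be rejected before parsing."""
    return _LONG_RUN.search(llm_output) is not None

print(looks_pathological("Action: search\nAction Input: NYC weather"))  # False
print(looks_pathological("Action: " + " " * 5000))                      # True
```

Rejecting flagged outputs with a parsing error (rather than retrying) also starves the automated-request loop the attacker relies on.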

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
January 12, 2026
Last Modified
January 21, 2026
First Seen
January 12, 2026