CVE-2025-68665

CRITICAL
Published December 23, 2025
CISO Take

CVE-2025-68665 is a critical deserialization injection in LangChain JS (CVSS 9.1) requiring zero authentication and zero user interaction — patch immediately. Any LangChain JS application that processes user-controlled input through kwargs is potentially exploitable for arbitrary object instantiation, which can escalate to RCE or data exfiltration. Upgrade @langchain/core to ≥0.3.80 or ≥1.1.8, and langchain to ≥0.3.37 or ≥1.2.3 before end of day.

Affected Systems

Package Ecosystem Vulnerable Range Patched
langchain         npm    < 0.3.37             0.3.37
langchain         npm    ≥ 1.0.0, < 1.2.3     1.2.3
@langchain/core   npm    < 0.3.80             0.3.80
@langchain/core   npm    ≥ 1.0.0, < 1.1.8     1.1.8

Severity & Risk

CVSS 3.1
9.1 / 10
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. PATCH: Upgrade @langchain/core to ≥0.3.80 (stable) or ≥1.1.8 (v1 branch); upgrade the langchain package to ≥0.3.37 or ≥1.2.3. Verify via package.json and lock files.
  2. AUDIT: Inventory all services using LangChain JS — check CI/CD pipelines, serverless functions, and containerized microservices.
  3. WORKAROUND (if patch not immediately possible): Sanitize or reject user-controlled input containing top-level 'lc' keys before it reaches LangChain serialization methods. Implement input validation middleware.
  4. DETECT: Add WAF/API gateway rules to flag requests with JSON payloads containing 'lc' key structures in unexpected positions. Monitor LangChain application logs for deserialization errors or unexpected object types.
  5. VERIFY: Review commit e5063f9 to understand the exact sanitization applied and validate your patch is complete.
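The workaround in step 3 can be sketched as a small validation helper that rejects any payload smuggling LangChain's internal 'lc' marker. This is a minimal illustration, not LangChain API: the function names (`containsLcKey`, `assertSafeUserInput`) are hypothetical, and production middleware should sit in front of every path that feeds user input into LangChain serialization.

```typescript
// Hypothetical input-validation sketch: recursively reject payloads that
// carry LangChain's reserved 'lc' serialization marker at any depth.
// All names here are illustrative, not part of the LangChain API.

function containsLcKey(value: unknown): boolean {
  if (Array.isArray(value)) {
    return value.some(containsLcKey);
  }
  if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    if ("lc" in obj) return true; // LangChain's internal serialization marker
    return Object.values(obj).some(containsLcKey);
  }
  return false;
}

// Reject untrusted input before it reaches any serialization method.
function assertSafeUserInput(payload: unknown): void {
  if (containsLcKey(payload)) {
    throw new Error("Rejected: payload contains reserved 'lc' key");
  }
}

// A benign payload passes; a crafted constructor-shaped payload is rejected.
assertSafeUserInput({ input: "What is the weather?" }); // ok
let rejected = false;
try {
  assertSafeUserInput({
    input: { lc: 1, type: "constructor", id: [], kwargs: {} },
  });
} catch {
  rejected = true;
}
console.log(rejected); // true
```

Note this blocks the 'lc' key anywhere in the payload, which is deliberately conservative: legitimate user data rarely needs that key name, and the cost of a false positive is far lower than arbitrary object instantiation.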

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness and Cybersecurity
Article 9 - Risk Management System
ISO 42001
6.1.2 - AI Risk Assessment
8.4 - AI System Lifecycle — Development Controls
A.8.2 - AI system input data
A.9.3 - Third-party and supply chain
NIST AI RMF
GOVERN 1.1 - Policies and accountability for AI risk
GOVERN 1.7 - Processes for AI risk identification and response
MEASURE 2.5 - AI system robustness and security evaluation
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities
LLM05:2025 - Improper Output Handling

Technical Details

NVD Description

LangChain is a framework for building LLM-powered applications. Prior to @langchain/core versions 0.3.80 and 1.1.8, and prior to langchain versions 0.3.37 and 1.2.3, a serialization injection vulnerability exists in LangChain JS's toJSON() method (and subsequently when stringifying objects using JSON.stringify()). The method did not escape objects with 'lc' keys when serializing free-form data in kwargs. The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data. This issue has been patched in @langchain/core versions 0.3.80 and 1.1.8, and langchain versions 0.3.37 and 1.2.3.

Exploitation Scenario

An adversary targeting a LangChain JS-powered chatbot or API crafts a JSON payload containing a nested object with the 'lc' key structure used internally by LangChain (e.g., {"input": {"lc": 1, "type": "constructor", "id": ["langchain", "...TargetClass"], "kwargs": {...}}}). When this user-controlled data flows into LangChain's toJSON() or is passed through JSON.stringify(), the framework treats it as a legitimate serialized LangChain object rather than plain user data. During subsequent deserialization, LangChain instantiates the attacker-specified class with attacker-controlled kwargs. Depending on available gadget classes in the runtime context, this can achieve arbitrary file reads, environment variable exfiltration, or code execution — all via a single crafted HTTP request to a public API endpoint.
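The type confusion at the heart of the scenario can be sketched as follows. This is an illustration of the shape ambiguity only, not a working exploit: the class name `SomeTargetClass` and both object literals are hypothetical, standing in for whatever an unpatched serializer would emit versus what an attacker submits.

```typescript
// Sketch of the confusion: the 'lc' marker is what LangChain's serializer
// emits for real objects, but an attacker can supply the identical shape
// as "plain" user data. All names below are illustrative.

// Simplified stand-in for what a legitimate serialized object looks like.
const legitimateSerialized = {
  lc: 1,
  type: "constructor",
  id: ["langchain", "chat_models", "SomeTargetClass"],
  kwargs: { apiKey: "from-config" },
};

// Attacker-supplied "plain data" arriving in an HTTP request body.
const userControlled = JSON.parse(
  '{"lc":1,"type":"constructor",' +
    '"id":["langchain","chat_models","SomeTargetClass"],' +
    '"kwargs":{"apiKey":"attacker-value"}}'
);

// After an unpatched toJSON()/JSON.stringify() round-trip, a deserializer
// has no structural way to tell the two apart: both look like
// constructor records, so the attacker's kwargs drive instantiation.
const shapeOf = (o: object) => Object.keys(o).sort().join(",");
console.log(shapeOf(legitimateSerialized) === shapeOf(userControlled)); // true
```

The patch closes this gap by escaping 'lc'-keyed structures found inside free-form kwargs at serialization time, so they round-trip as inert data rather than constructor records.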

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N

Timeline

Published
December 23, 2025
Last Modified
January 13, 2026
First Seen
December 23, 2025