CVE-2025-49655

GHSA-cvhh-q5g5-qprp CRITICAL
Published October 17, 2025
CISO Take

A critical deserialization RCE in Keras 3.11.0–3.11.2 bypasses safe mode entirely, meaning the protection your ML engineers may have been trusting is worthless on affected versions. Any pipeline that loads Keras model files from external or user-supplied sources is exposed to full arbitrary code execution. Patch to 3.11.3 now and treat any model loading from untrusted sources as an uncontrolled code execution path until verified.

Affected Systems

Package Ecosystem Vulnerable Range Patched
keras pip >= 3.11.0, < 3.11.3 3.11.3

Do you use Keras 3.11.0–3.11.2? You're affected.
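A quick way to check exposure fleet-wide is to compare the installed version against the advisory's affected range. A minimal sketch, using only the stdlib and the version bounds stated above (the parsing helper is illustrative and ignores pre-release suffixes):

```python
# Sketch: flag a vulnerable Keras install per this advisory's
# affected range (>= 3.11.0, < 3.11.3). Stdlib only.
from importlib.metadata import version, PackageNotFoundError

VULN_MIN = (3, 11, 0)
FIXED = (3, 11, 3)

def parse(v: str) -> tuple:
    # Keep the numeric release segment only, e.g. "3.11.1" -> (3, 11, 1).
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def is_vulnerable(v: str) -> bool:
    return VULN_MIN <= parse(v) < FIXED

if __name__ == "__main__":
    try:
        installed = version("keras")
    except PackageNotFoundError:
        print("keras is not installed")
    else:
        verdict = "VULNERABLE - upgrade to 3.11.3" if is_vulnerable(installed) \
            else "not in affected range"
        print(f"keras {installed}: {verdict}")
```

Run across hosts (or fold into CI) alongside the `pip show keras` sweep recommended below.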

Severity & Risk

CVSS 3.1
9.8 / 10
EPSS
0.0%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. PATCH IMMEDIATELY: upgrade all Keras installations to 3.11.3; this is the only complete fix. Run `pip show keras` across your ML infrastructure to identify affected versions.
  2. AUDIT model loading: catalog every place your code calls `keras.models.load_model()` or equivalent and identify the trust level of the source file.
  3. DO NOT rely on `safe_mode=True` as a security control on any Keras version until you've confirmed 3.11.3 is deployed.
  4. IMPLEMENT model provenance controls: cryptographic signing and hash verification of model files before loading, even from internal registries.
  5. ISOLATE model loading: run model deserialization in sandboxed environments (containers with no network, read-only filesystems, minimal privileges) as a defense-in-depth measure.
  6. DETECT: monitor for unexpected process spawning from Python/ML processes, outbound connections from training/inference nodes, and anomalous file access patterns after a model load.
  7. CHECK shared model stores: audit any Keras model files pulled from external sources (HuggingFace, S3, third parties) since Keras 3.11.0 was released (October 2025).
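The provenance control in step 4 can be sketched as a hash-allowlist gate that refuses to hand a file to the deserializer unless its digest is known. The `TRUSTED_DIGESTS` registry and the fail-closed policy are assumptions for illustration; `keras.models.load_model()` is the real API named above:

```python
# Sketch of a model-provenance gate: verify a model file's SHA-256
# against an allowlist before it is ever deserialized. The allowlist
# itself (how digests are signed and distributed) is assumed to come
# from a trusted registry process, which is outside this sketch.
import hashlib
from pathlib import Path

# Hypothetical allowlist: filename -> expected SHA-256 hex digest.
TRUSTED_DIGESTS: dict = {}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_verified_model(path: str):
    p = Path(path)
    expected = TRUSTED_DIGESTS.get(p.name)
    if expected is None or sha256_of(p) != expected:
        # Fail closed: unknown or tampered files never reach the loader.
        raise PermissionError(f"refusing to load unverified model: {path}")
    import keras  # deferred: the gate runs even where keras is absent
    return keras.models.load_model(path)
```

Verification happens before deserialization, so even on a vulnerable Keras build a tampered file never reaches the code path this CVE abuses, provided the digest source itself is trustworthy.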

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.2.6 - AI system components and data provenance
A.9.3 - AI system testing and validation
NIST AI RMF
GOVERN-6.2 - Policies and procedures are in place for AI supply chain risk management
MANAGE-2.2 - Mechanisms are in place to respond to risks from AI system components
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Technical Details

NVD Description

Deserialization of untrusted data can occur in versions of the Keras framework running versions 3.11.0 up to but not including 3.11.3, enabling a maliciously uploaded Keras file containing a TorchModuleWrapper class to run arbitrary code on an end user’s system when loaded despite safe mode being enabled. The vulnerability can be triggered through both local and remote files.

Exploitation Scenario

An adversary identifies a target organization using Keras for model serving or fine-tuning workflows. They craft a malicious .keras model file embedding executable Python code within a serialized TorchModuleWrapper class payload. The file is uploaded to a shared model registry (internal or public like HuggingFace), submitted as a 'fine-tuned' model via a partner API, or delivered through a compromised ML data pipeline. When an ML engineer or automated serving system calls `keras.models.load_model('malicious.keras', safe_mode=True)` on an affected version, the deserialization triggers arbitrary code execution, establishing persistence, exfiltrating training data and credentials, or pivoting to adjacent GPU/compute infrastructure. The `safe_mode=True` argument gives false confidence and introduces no actual barrier.
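As a defense-in-depth pre-flight check (a sketch, not a substitute for upgrading to 3.11.3): a .keras v3 file is a zip archive whose `config.json` describes every serialized layer, so the payload class named in this advisory can be detected before `load_model()` is ever called. The helper name and the walk logic are illustrative:

```python
# Sketch: scan a .keras archive's config.json for TorchModuleWrapper
# before deserializing. Detection-only; a determined attacker may find
# other vectors, so this complements, not replaces, patching.
import json
import zipfile

def references_torch_wrapper(model_path: str) -> bool:
    with zipfile.ZipFile(model_path) as zf:
        config = json.loads(zf.read("config.json"))

    def walk(node) -> bool:
        # Recursively search the nested layer config for the class name.
        if isinstance(node, dict):
            if node.get("class_name") == "TorchModuleWrapper":
                return True
            return any(walk(v) for v in node.values())
        if isinstance(node, list):
            return any(walk(v) for v in node)
        return False

    return walk(config)
```

A serving pipeline could run this (inside the sandbox from the recommended actions) and quarantine any file that reports a match.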

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
October 17, 2025
Last Modified
October 21, 2025
First Seen
October 17, 2025