CVE-2025-14924

UNKNOWN
Published December 23, 2025
CISO Take

Any team loading Megatron-GPT2 checkpoints via Hugging Face Transformers is exposed to arbitrary code execution at model-load time. No fixed release is available yet, so restrict checkpoint ingestion immediately and patch the moment one ships. The real danger is not direct attacks but poisoned model files distributed via Hugging Face Hub, internal model registries, or third-party model repositories that your ML pipelines load automatically. Audit all automated checkpoint-loading workflows and enforce allowlists of trusted model sources before resuming normal operations.

Affected Systems

Package: transformers
Ecosystem: pip
Vulnerable Range: (not specified)
Patched: No patch

Do you use transformers? You're affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. PATCH: Upgrade Hugging Face Transformers to the patched version as soon as ZDI-25-1141 discloses the fixed release; monitor the Transformers GitHub releases page.
  2. RESTRICT: Block loading of checkpoints from unverified sources in all automated pipelines; implement SHA-256 hash verification of checkpoint files against a trusted manifest before deserialization.
  3. SANDBOX: Run checkpoint loading in isolated environments (containers with no network egress, restricted filesystem mounts) to limit post-exploit blast radius.
  4. AUDIT: Review all pipeline code that calls megatron_gpt2 loading functions; grep for `torch.load`, `pickle.load`, and equivalent calls without `weights_only=True`.
  5. DETECT: Alert on unexpected outbound network connections or filesystem writes originating from training/inference processes; these are canary indicators of post-exploit activity.
  6. POLICY: Enforce a model provenance policy: only load checkpoints from internal registries with signed provenance records.
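The hash-verification step above can be sketched with the Python standard library alone; the manifest shape and function names here are illustrative assumptions, not part of any Transformers or MLOps API:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so multi-GB checkpoints
    never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_checkpoint(path: Path, manifest: dict) -> None:
    """Refuse to proceed (and thus to deserialize) unless the file's
    SHA-256 matches the trusted manifest entry exactly."""
    expected = manifest.get(path.name)
    actual = sha256_of(path)
    if expected is None or actual != expected:
        raise RuntimeError(f"untrusted checkpoint {path.name}: {actual}")
```

Only after `verify_checkpoint` returns without raising should the pipeline hand the file to any loading API.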

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.1.4 - AI supply chain
A.6.1.5 - AI system supply chain management
A.9.1 - AI system operation, monitoring and review
A.9.4 - AI system security
NIST AI RMF
GOVERN 6.1 - Policies and procedures for AI supply chain risk
MANAGE 2.2 - Mechanisms to sustain and manage identified AI risks
MANAGE 2.4 - Mechanisms to sustain treatment of identified risks
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Technical Details

NVD Description

Hugging Face Transformers megatron_gpt2 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of checkpoints. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-27984.
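The flaw class is easy to reproduce outside Transformers: pickle lets any object define `__reduce__`, and unpickling invokes whatever callable it returns. A minimal, harmless sketch (a real payload would substitute `os.system` or similar; none of this touches the Transformers codebase):

```python
import pickle

class Payload:
    """Stand-in for a malicious object embedded in a checkpoint file.
    A real exploit would return (os.system, ("<shell command>",));
    here the callable is eval on a harmless string literal."""
    def __reduce__(self):
        # pickle records this callable and its arguments; merely
        # loading the file invokes eval(...) in the victim process.
        return (eval, ("'attacker code ran at load time'",))

blob = pickle.dumps(Payload())   # bytes that ship inside a poisoned checkpoint
result = pickle.loads(blob)      # "loading the model" executes the callable
```

This is why calling a pickle-based loader on an untrusted checkpoint is equivalent to running the publisher's code.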

Exploitation Scenario

An adversary publishes a seemingly legitimate Megatron-GPT2 fine-tuned checkpoint to Hugging Face Hub, embedding a malicious pickle payload in the checkpoint file. They promote it via AI community forums or social media targeting ML engineers. A data scientist at a target organization downloads and loads the checkpoint using the standard Transformers API. The deserialization step triggers the embedded payload, executing a reverse shell or credential-harvesting script in the context of the training process — which typically runs with broad permissions on GPU infrastructure. Alternatively, an attacker who has compromised a model registry or S3 bucket used by an automated MLOps pipeline can inject the malicious checkpoint, achieving RCE without any direct user interaction beyond the pipeline's normal execution.
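One way to triage a suspect file before any deserialization happens is to walk its pickle opcode stream with the standard library's `pickletools`; the opcode set and helper below are an illustrative heuristic, not an official scanner. Note that legitimate PyTorch checkpoints also contain GLOBAL/REDUCE opcodes for tensor reconstruction, so matches warrant review rather than automatic rejection:

```python
import pickletools

# Opcodes that import callables (GLOBAL, STACK_GLOBAL) or invoke them
# (REDUCE, INST, OBJ, NEWOBJ) while the stream is being loaded.
SUSPECT_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def suspect_opcodes(blob: bytes) -> list:
    """List potentially dangerous opcodes without deserializing anything."""
    return [op.name for op, _arg, _pos in pickletools.genops(blob)
            if op.name in SUSPECT_OPS]
```

Paired with hash verification against a trusted manifest, this gives a pipeline two independent gates before any checkpoint reaches a deserializer.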

Weaknesses (CWE)

CWE-502 - Deserialization of Untrusted Data

Timeline

Published
December 23, 2025
Last Modified
January 15, 2026
First Seen
December 23, 2025