CVE-2025-14921
Severity: UNKNOWN

If your organization loads Transformer-XL models from any external source — Hugging Face Hub, shared storage, or third-party repos — you have a live RCE exposure. Update the transformers library immediately and enforce model-source allow-listing. Until patched, treat any externally-sourced Transformer-XL model file as untrusted and sandbox or block its loading in production and CI/CD pipelines.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
Do you use transformers? If your code can load Transformer-XL checkpoints from external sources, you're affected.
Severity & Risk
Recommended Action
1. PATCH immediately: upgrade huggingface/transformers to the latest release; check the ZDI advisory at zerodayinitiative.com/advisories/ZDI-25-1149 for the confirmed fixed version.
2. AUDIT: inventory all code paths that load Transformer-XL models; grep for AutoModelForSequenceClassification, TransfoXLModel, and from_pretrained calls with Transformer-XL checkpoints.
3. RESTRICT sources: implement an allow-list of trusted model sources and block loading from arbitrary URLs or untrusted registries.
4. VERIFY integrity: validate SHA256 checksums or cryptographic signatures of model files before loading.
5. SANDBOX: run model loading in isolated containers or VMs with no cloud credential access and network egress filtering.
6. DETECT: alert on unexpected child process spawning (subprocess, os.system) originating from Python ML processes.
7. ROTATE: if compromise is suspected, rotate any credentials accessible to ML workloads or serving infrastructure.
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
Hugging Face Transformers Transformer-XL Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25424.
Exploitation Scenario
An adversary registers a typosquatting account on Hugging Face Hub and publishes a poisoned Transformer-XL model checkpoint under a name close to a popular repo (e.g., 'transfo-xl-wt103-finetuned'). The malicious model file embeds a crafted pickle payload within its serialized weights. A data scientist or automated CI pipeline calls from_pretrained('attacker/transfo-xl-wt103-finetuned') for evaluation or fine-tuning. During deserialization, the pickle payload executes arbitrary Python code in the loading process context — establishing a reverse shell to an attacker-controlled server, exfiltrating cloud credentials from environment variables, or installing a persistent backdoor. In a model serving scenario, this gives the attacker persistent RCE on the inference server with access to all models, API keys, and downstream data stores.