# CVE-2025-14920
CVE-2025-14920 is a deserialization remote code execution (RCE) vulnerability in the Hugging Face Transformers Perceiver model loader: an attacker who can get a victim to load a malicious model file gains arbitrary code execution as the loading user. Organizations that pull models from the HuggingFace Hub, shared drives, or other external sources are directly exposed. Immediate actions: audit where Transformers is deployed, restrict model loading to verified or signed sources, and patch or pin to a fixed version once one is available.
## Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
If you use `transformers` and load Perceiver model files from untrusted sources, assume you are affected until a patched release is available.
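As a first inventory step, the check below reports whether `transformers` is installed in the current environment and at which version, using only the standard library. Since no patched release exists at the time of writing, any installed version should be treated as vulnerable; the wording of the returned message is illustrative.

```python
# Minimal inventory check for the vulnerable package, stdlib only.
from importlib import metadata


def transformers_status() -> str:
    """Report whether `transformers` is installed and which version."""
    try:
        version = metadata.version("transformers")
    except metadata.PackageNotFoundError:
        return "not installed"
    # No fixed release exists yet, so any version is treated as vulnerable.
    return f"installed ({version}) -- treat as vulnerable until a patch ships"
```

Running this across all Python environments (virtualenvs, containers, CI images) gives a quick map of exposure before deeper auditing.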
## Severity & Risk
## Recommended Action
1. **PATCH:** Monitor HuggingFace Transformers releases for a fix targeting the Perceiver model file parser; pin to the patched version immediately.
2. **INVENTORY:** Identify all environments with `transformers` installed that load Perceiver models (`grep -r 'Perceiver' --include='*.py'`).
3. **RESTRICT MODEL SOURCES:** Enforce allowlisting — only load models from internal artifact registries with SHA-256 hash verification. Reject models from arbitrary URLs or unverified Hub accounts.
4. **SANDBOX:** Run model loading in isolated environments (separate containers/VMs with no network egress) and inspect model files with tools like `fickling` before loading in production.
5. **DETECT:** Alert on unexpected network connections or child process spawns from Python ML processes.
6. **WORKAROUND:** If Perceiver models are not in use, block their loading at the framework level or remove the model class from the import path.
7. **REVIEW:** Audit recent downloads of Perceiver model files from public registries for tampering.
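Step 3 above can be sketched as a pre-load gate: compute the SHA-256 digest of a downloaded model file and refuse to hand it to the framework unless the digest is on an internal allowlist. The `ALLOWED_DIGESTS` set and file paths here are illustrative placeholders, not real artifact digests.

```python
# Sketch of an allowlist gate run before any framework code touches the file.
import hashlib
from pathlib import Path

# Digests of vetted internal copies of approved model artifacts.
# (The entry below is the SHA-256 of empty input, used purely as an example.)
ALLOWED_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def verify_model_file(path: Path) -> bool:
    """Return True only if the file's SHA-256 digest is on the allowlist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in ALLOWED_DIGESTS
```

Wiring this check into the download pipeline means a tampered or attacker-substituted model file is rejected before deserialization ever runs, which is the point where this CVE triggers.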
## Classification
## Compliance Impact
This CVE is relevant to:
## Technical Details
### NVD Description
Hugging Face Transformers Perceiver Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25423.
### Exploitation Scenario
An adversary crafts a malicious Perceiver model file containing a serialized Python object that executes arbitrary commands upon deserialization (classic pickle exploit pattern: `__reduce__` returning `os.system` or `subprocess`). The attacker uploads this model to HuggingFace Hub under a plausible name (e.g., 'perceiver-base-finetuned-images-v2'). A data scientist or automated pipeline calls `PerceiverModel.from_pretrained('attacker/perceiver-base-finetuned-images-v2')`, the Transformers library deserializes the model file without validation, and the payload executes — establishing a reverse shell, exfiltrating API keys from environment variables, or pivoting to internal infrastructure. In CI/CD contexts where models are pulled during training jobs, this achieves server-side RCE with no further attacker interaction.
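The `__reduce__` pattern described above can be shown with a harmless stand-in. When pickle deserializes an object, it calls the callable that `__reduce__` returned, with the supplied arguments — before any "model" object exists. A real exploit would return `os.system` or `subprocess.call` with a shell command; this sketch substitutes `eval` on a benign expression to demonstrate that attacker-chosen code runs at load time.

```python
# Benign demonstration of the pickle __reduce__ exploit pattern.
import pickle


class MaliciousPayload:
    """Stand-in for the attacker-controlled object inside a model file."""

    def __reduce__(self):
        # pickle.loads calls the returned callable with these args.
        # A real exploit would use os.system / subprocess.call here.
        return (eval, ("21 * 2",))


blob = pickle.dumps(MaliciousPayload())  # what ships inside the "model" file
result = pickle.loads(blob)              # attacker code fires right here
# The expression has already executed; result holds its value (42).
```

This is why scanning tools such as `fickling` and sandboxed loading matter: the payload runs during deserialization itself, so no amount of post-load validation can catch it.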