CVE-2025-14926
CVE-2025-14926 is a code injection flaw in Hugging Face Transformers that allows remote code execution when a maliciously crafted model checkpoint is converted. Any team that downloads and converts third-party models, a common MLOps practice, is at risk. Immediately audit Transformers usage across your ML pipelines, upgrade as soon as a fixed version is released, and enforce a model provenance policy restricting checkpoint sources to verified publishers.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
If your pipelines use transformers, assume you are affected: no vulnerable-range boundary or patched release has been published yet.
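A quick way to inventory exposure is to check whether transformers is present in each environment. A minimal sketch using only the standard library (the helper name is ours, not part of any official tooling):

```python
# Quick inventory check: is transformers installed in this environment, and
# at what version? With no patched release identified yet, any installed
# version should be treated as potentially affected.
from importlib.metadata import PackageNotFoundError, version

def transformers_version():
    """Return the installed transformers version string, or None if absent."""
    try:
        return version("transformers")
    except PackageNotFoundError:
        return None

if __name__ == "__main__":
    v = transformers_version()
    print(f"transformers: {v or 'not installed'}")
```

Run this in every virtualenv, container image, and CI runner your ML workloads touch, not just developer laptops.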
Severity & Risk
Recommended Action
1. PATCH: Upgrade Hugging Face Transformers to the patched version once released; monitor https://github.com/huggingface/transformers/security/advisories for the fix.
2. RESTRICT SOURCES: Enforce a model allowlist: only load checkpoints from verified organizational accounts or cryptographically signed sources. Block unauthenticated Hugging Face Hub downloads in production pipelines.
3. SANDBOX: Run all model conversion and checkpoint loading operations inside isolated containers or VMs with no network access and minimal filesystem permissions.
4. AUDIT CODE: Search the codebase for calls to convert_config and any eval/exec patterns in Transformers-dependent code.
5. DETECT: Alert on unexpected child process spawning from Python ML processes, outbound connections from training/inference hosts, and anomalous file writes in model storage directories.
6. INTERIM WORKAROUND: Disable or gate the convert_config workflow until patched; require security review before introducing new model checkpoints into any pipeline.
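The audit step above can be started with a simple pattern scan. This is a sketch, not a taint analysis: it flags candidate lines for human review, and the pattern list is an assumption you should extend for your codebase:

```python
# Minimal audit sketch for the AUDIT CODE step: flag calls to convert_config
# and dynamic-execution primitives (eval/exec) in a Python source tree.
# Matches are leads for manual review, not confirmed vulnerabilities.
import re
from pathlib import Path

SUSPECT = re.compile(r"\b(convert_config|eval|exec)\s*\(")

def audit_tree(root):
    """Yield (path, line_number, line) for each suspicious call under root."""
    for path in Path(root).rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if SUSPECT.search(line):
                yield str(path), lineno, line.strip()

if __name__ == "__main__":
    for hit in audit_tree("."):
        print("%s:%d: %s" % hit)
```

Note that a plain regex will also flag legitimate uses (e.g. `ast.literal_eval`), so triage each hit rather than bulk-deleting.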
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
Hugging Face Transformers SEW convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before using it to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-28251.
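The advisory does not publish the vulnerable code, so the following is an illustrative sketch of the flaw class it describes, not the actual Transformers source: a string taken from a checkpoint's config is executed as Python, versus the safer pattern of parsing it as a literal:

```python
# Illustrative only: NOT the actual Transformers source. This sketches the
# class of flaw described in the advisory, where a config-supplied string
# is executed as Python code without validation.
import ast

def parse_config_value_unsafe(raw):
    # "(64, 64)" evaluates as expected, but so does
    # "__import__('os').system('id')" -- arbitrary code in the user's context.
    return eval(raw)

def parse_config_value_safe(raw):
    # ast.literal_eval accepts only Python literals (numbers, strings,
    # tuples, lists, dicts) and raises ValueError on calls or attributes.
    return ast.literal_eval(raw)
```

The general remediation for this class of bug is exactly this substitution: never pass attacker-influenced strings to eval/exec; parse them with a literal-only or schema-validating parser.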
Exploitation Scenario
An adversary creates a Hugging Face account and publishes a SEW model checkpoint with a maliciously crafted configuration file. The config contains a payload exploiting the lack of input validation in convert_config — for example, a string value that is passed directly to exec() or eval() in the Python runtime. The attacker promotes the model on social media, AI forums, or submits it as a dependency update to an open-source project. A data scientist or automated MLOps pipeline downloads and calls convert_config on the checkpoint during model evaluation or fine-tuning preparation. Arbitrary code executes with the privileges of the ML process — potentially exfiltrating API keys from environment variables, pivoting to cloud storage containing proprietary training data, or establishing persistence on GPU training infrastructure.
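Host-level EDR rules on child processes and egress are the robust detection control, but a lightweight in-process tripwire is also possible via Python audit hooks (PEP 578). A sketch, assuming you control process startup for conversion jobs; the alert sink here is a placeholder for your logging pipeline:

```python
# In-process tripwire: PEP 578 audit hooks fire on sensitive runtime events.
# Recording process-spawn events from an ML worker gives an early signal of
# the post-exploitation behavior described above.
import sys

SUSPICIOUS_EVENTS = {"subprocess.Popen", "os.system", "os.exec"}
ALERTS = []  # placeholder: ship these to your real logging/alerting pipeline

def tripwire(event, args):
    if event in SUSPICIOUS_EVENTS:
        ALERTS.append(event)
        print(f"AUDIT ALERT: {event}", file=sys.stderr)

sys.addaudithook(tripwire)
```

Audit hooks cannot be removed once installed and run inside the (potentially compromised) process, so treat this as defense in depth alongside sandboxing, never as a substitute for it.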