CVE-2025-14928
CVE-2025-14928 is a code-injection remote code execution (RCE) vulnerability in Hugging Face Transformers' HuBERT convert_config function — converting a malicious model checkpoint triggers arbitrary Python execution on the converting machine. If your ML teams pull and convert external HuBERT checkpoints, treat this as a critical supply chain risk: isolate conversion workflows immediately and block untrusted checkpoint sources until a patch is confirmed. This is the exact pattern attackers use to compromise ML infrastructure via 'innocent' model downloads.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
Do you use transformers to convert HuBERT checkpoints? Assume you are exposed until a patched release ships.
Recommended Action
1. IMMEDIATE: Audit all pipelines using transformers' HuBERT convert_config — inventory where external checkpoints are consumed.
2. Block or quarantine conversion of HuBERT checkpoints from untrusted sources (anything outside your own model registry).
3. Run any checkpoint conversion in ephemeral, network-isolated containers with no access to production credentials or secrets.
4. Monitor for unexpected subprocess spawning or network connections originating from Python ML processes during model conversion.
5. Subscribe to transformers release notes (GitHub Advisory Database) for patch availability — no fixed version is confirmed yet per CVE data.
6. Apply least-privilege to ML pipeline service accounts so exploitation blast radius is contained.
7. Detection: alert on eval()/exec() calls in transformers processes via runtime security tools (Falco, eBPF-based) if feasible.
Technical Details
NVD Description
Hugging Face Transformers HuBERT convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before using it to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-28253.
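The flaw class described above — executing a user-supplied string as Python code — can be illustrated with a minimal sketch. This is not the actual transformers code; it only contrasts the unsafe eval()-style anti-pattern with ast.literal_eval, which accepts Python literals but rejects any expression containing calls or imports.

```python
import ast

def unsafe_parse_field(field: str):
    # Anti-pattern resembling the flaw class: a string taken from an
    # untrusted checkpoint config is executed as Python code.
    return eval(field)  # DO NOT do this with untrusted input

def safe_parse_field(field: str):
    # ast.literal_eval only accepts literals (numbers, strings,
    # tuples, lists, dicts, booleans, None) — never function calls.
    return ast.literal_eval(field)

# A benign config value parses fine:
assert safe_parse_field("[3, 5, 7]") == [3, 5, 7]

# A code-injection payload is rejected instead of executed:
try:
    safe_parse_field("__import__('os').getcwd()")
except ValueError:
    pass  # literal_eval raises ValueError on non-literal expressions
```

The general lesson is that config fields from untrusted checkpoints should be treated as data, parsed with a restricted parser, and validated against an allowlist of expected types.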
Exploitation Scenario
Attacker publishes a crafted HuBERT model to HuggingFace Hub with a poisoned config embedding Python payload (e.g., reverse shell or credential harvester) in a field consumed by convert_config. They promote the model through legitimate-looking channels — a GitHub repo, a paper citation, or a Slack message to an ML team. An ML engineer or automated MLOps job pulls the checkpoint and runs the conversion step. convert_config evaluates the malicious string as Python code, executing the payload with the privileges of the converting process. In a typical MLOps environment this yields access to AWS/GCP credentials in environment variables, internal artifact stores, and training compute — all from a single model download.
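A defensive counterpart to this scenario is a pre-conversion audit that scans every string value in a checkpoint's config for code-like content before any conversion tooling touches it. A minimal sketch, assuming the config has already been loaded as a plain dict (field names and the pattern list are illustrative, not drawn from real HuBERT configs):

```python
import re

# Illustrative patterns that have no business appearing in a model
# config value; extend for your threat model.
SUSPICIOUS = re.compile(r"__import__|\beval\(|\bexec\(|os\.system|subprocess")

def flag_suspicious_fields(config: dict, path: str = "") -> list[str]:
    """Return dotted paths of string fields containing code-like content."""
    hits = []
    for key, value in config.items():
        here = f"{path}.{key}" if path else key
        if isinstance(value, dict):
            hits.extend(flag_suspicious_fields(value, here))
        elif isinstance(value, str) and SUSPICIOUS.search(value):
            hits.append(here)
    return hits

# Example: a poisoned config is flagged before conversion runs.
poisoned = {
    "model_type": "hubert",
    "extras": {"activation": "__import__('os').system('id')"},
}
assert flag_suspicious_fields(poisoned) == ["extras.activation"]
```

Pattern matching of this kind is a tripwire, not a guarantee — it should sit alongside, not replace, isolated conversion environments and source allowlisting.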