CVE-2025-14930
CVE-2025-14930 is a critical supply-chain RCE in Hugging Face Transformers affecting GLM4 model loading. Any team that loads GLM4 model weights from external sources, including the Hugging Face Hub, is exposed to arbitrary code execution with the privileges of the loading process. Immediately audit pipelines that auto-load models and restrict model sources to internally verified artifacts until the Transformers library is patched.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
If any workflow loads GLM4 model weights through transformers from an external source, treat it as affected until a patched release is available.
Severity & Risk
Recommended Action
1. PATCH: Update Hugging Face Transformers to the fixed version as soon as it is released; monitor ZDI advisory ZDI-25-1145 and the Hugging Face GitHub releases page for patch confirmation.
2. BLOCK: Until patched, restrict model loading to internally hosted, hash-verified artifacts only. Disable auto-pull from the Hugging Face Hub in production and CI/CD.
3. ISOLATE: Run model-loading processes in sandboxed containers with no outbound network access, minimal filesystem write access, and no access to secrets or credentials.
4. DETECT: Alert on unexpected child-process spawning, network connections, or file writes during model-load operations. Monitor for pickle/deserialization execution patterns in ML runtime logs.
5. AUDIT: Inventory all GLM4 model files currently in use; verify SHA-256 hashes against official source checksums.
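The hash-verification step above can be sketched with the standard library alone. This is a minimal, generic illustration, not code from the Transformers project; the file path and the pinned checksum are placeholders you would source from your own artifact registry.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB weight files never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> None:
    """Refuse to proceed unless the on-disk file matches the pinned checksum."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise RuntimeError(
            f"checksum mismatch for {path}: expected {expected_sha256}, got {actual}"
        )

# Hypothetical usage: verify before any loading code touches the file.
# verify_artifact("models/glm4/model.safetensors", "<pinned sha256 from your registry>")
```

Running the check before, not after, the loader opens the file is the point: a deserialization payload fires at load time, so verification must gate the load.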
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
Hugging Face Transformers GLM4 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of weights. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-28309.
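To make the flaw class concrete, the sketch below shows the generic pickle mechanism the NVD description refers to: an object's `__reduce__` hook makes the unpickler call an arbitrary function at load time, before the caller ever sees the "model". This is a benign, self-contained demonstration of the technique, not the actual CVE-2025-14930 payload or the internal Transformers code path.

```python
import pickle

EXECUTED = []

def payload(msg):
    # Stand-in for attacker code; in a real attack this could spawn a
    # shell, open a reverse connection, or steal credentials.
    EXECUTED.append(msg)
    return msg

class MaliciousWeights:
    # pickle records the callable and arguments returned here; on load,
    # the unpickler invokes the callable immediately.
    def __reduce__(self):
        return (payload, ("code ran at load time",))

blob = pickle.dumps(MaliciousWeights())  # the "weight file" an attacker ships
pickle.loads(blob)                       # the victim merely loads it
assert EXECUTED == ["code ran at load time"]
```

Nothing about the loaded object needs to be inspected or called: deserialization alone is the trigger, which is why hash-pinning and source restriction must happen before the load.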
Exploitation Scenario
An attacker crafts a malicious GLM4 model weight file with a serialized payload embedded using Python's pickle or equivalent deserialization vector. They publish it to HuggingFace Hub under a convincing model name (typosquatting a popular GLM4 checkpoint) or compromise an existing model repository. A victim organization's automated MLOps pipeline — running a nightly job to pull the latest model version — downloads and calls `from_pretrained()`, triggering deserialization and executing the attacker's payload. The payload runs as the pipeline service account, which typically has access to cloud credentials, training data, inference infrastructure, and internal APIs. From there, lateral movement or data exfiltration follows.
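For teams that cannot avoid pickle-format artifacts entirely, one layered defense at the deserialization boundary is a restricted unpickler that rejects every global not on an explicit allowlist, which blocks the `__reduce__` gadget outright. This is a generic hardening sketch using the standard `pickle.Unpickler.find_class` hook; it is not a patch for Transformers, whose internal loading path may differ, and the empty allowlist shown is an assumption you would extend for your own data.

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Only these (module, name) pairs may be resolved during load.
    # An empty allowlist permits plain data (dicts, lists, numbers,
    # strings) but rejects any smuggled-in callable or class.
    ALLOWED = set()  # e.g. {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during deserialization: {module}.{name}"
        )

def safe_loads(data: bytes):
    """Deserialize untrusted bytes with globals restricted to the allowlist."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain tensor-like data deserializes fine; a gadget referencing, say,
# os.system would raise UnpicklingError instead of executing.
safe_loads(pickle.dumps({"layer.weight": [0.1, 0.2]}))
```

This complements, rather than replaces, the sandboxing and source-restriction steps above; prefer formats that carry no executable payload at all (such as safetensors) where the tooling supports them.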