Update picklescan to v0.0.33 immediately — any model file previously cleared by picklescan should be treated as untrusted and re-scanned. This bypass creates a critical false-negative gap: your security scanner gives a clean bill of health while a shell spawns on model load. If your ML pipeline loads third-party or user-supplied pickle-based models (PyTorch .pkl/.pth), assume you may have loaded malicious code undetected.
## Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.33 | 0.0.33 |
Do you run picklescan below 0.0.33 anywhere? You're affected.
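A minimal sketch of a version gate for CI or a deployment check. `is_patched` is an illustrative helper, not part of picklescan's API; it assumes plain dotted version strings:

```python
# Numeric version comparison so "0.0.9" < "0.0.33" is handled correctly
# (a plain string compare would get this wrong).
PATCHED = (0, 0, 33)

def is_patched(installed: str, patched: tuple = PATCHED) -> bool:
    """Return True if the installed dotted version is at or past the fix."""
    parts = tuple(int(p) for p in installed.split(".")[:3])
    return parts >= patched
```

Feed it the output of `importlib.metadata.version("picklescan")` and fail the pipeline when it returns `False`.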
## Severity & Risk
## Recommended Action
1. **Patch:** Upgrade picklescan to >= 0.0.33 across all environments (`pip install --upgrade picklescan`).
2. **Re-scan:** Re-validate all model files previously cleared by older versions; treat prior scan results as unreliable.
3. **Defense-in-depth:** Do not rely solely on picklescan; implement sandboxed model loading (isolated container/VM with no network and minimal filesystem access).
4. **Migrate format:** Prefer SafeTensors over pickle-based formats for model storage and distribution; enforce this in your model ingestion policy.
5. **Detect:** Monitor for unexpected subprocess spawning (especially `/bin/sh` and `pty`) from Python processes that handle model loading. Alert on outbound connections from model-loading workers.
6. **Governance:** Enforce model provenance controls — only load signed models from trusted registries; block loading of arbitrary community models in production.
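The defense-in-depth step can be sketched with the standard library alone: a restricted unpickler that refuses any global not on an explicit allowlist, so a payload referencing `pty.spawn` fails before it runs. The allowlist below is illustrative, not exhaustive:

```python
import io
import pickle

# Illustrative allowlist: only these (module, name) pairs may be resolved
# during unpickling. Anything else (e.g. pty.spawn) raises immediately.
ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}")

def restricted_loads(data: bytes):
    """Unpickle with the allowlist enforced; raises on any other global."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Real PyTorch checkpoints reference many more globals than this, so an allowlist for production use takes curation; the sandbox and provenance controls above still apply.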
## Classification
## Compliance Impact
This CVE is relevant to:
## Technical Details
### NVD Description
### Summary

The payload uses `pty.spawn`, a built-in Python library function, to execute arbitrary commands on the host system.

### Details

The attack executes in the following steps. First, the attacker crafts the payload by calling the `pty.spawn` function in the `__reduce__` method. Then the victim attempts to use picklescan to scan the pickle file for issues and sees this:

```
----------- SCAN SUMMARY -----------
Scanned files: 1
Infected files: 0
Dangerous globals: 0
```

The victim then loads the pickle file and executes the attacker-injected arbitrary code.

### PoC

```python
import pty

class PtyExploit:
    def __reduce__(self):
        return (pty.spawn, (["/bin/sh", "-c", "id; exit"],))
```

### Impact

**Who is impacted?** Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

**What is the impact?** Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded.

**Supply chain attack:** Attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Collaborators

- https://github.com/ajohnston9
- https://github.com/geo-lit
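To see why the PoC works, the sketch below uses the same `__reduce__` mechanism with a harmless stand-in for `pty.spawn` (all names here are illustrative): the callable fires inside `pickle.loads`, before any application code ever inspects the resulting object.

```python
import pickle

calls = []  # records side effects triggered purely by unpickling

def benign_stand_in(arg):
    # Stands in for pty.spawn(["/bin/sh", ...]) in the real payload.
    calls.append(arg)
    return arg

class PtyExploitDemo:
    def __reduce__(self):
        # Same shape as the PoC: (callable, (args,)), executed on load.
        return (benign_stand_in, ("executed-at-load-time",))

payload = pickle.dumps(PtyExploitDemo())
obj = pickle.loads(payload)  # benign_stand_in runs here, during loading
```

This is why "just loading" an untrusted pickle is already code execution; no attribute access or method call on the loaded object is needed.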
## Exploitation Scenario
An adversary uploads a weaponized PyTorch model to a public model hub or sends it via a targeted supply chain vector (e.g., a pull request to an internal model repository). The victim's MLOps pipeline runs picklescan on the file as a security gate — it returns 'Infected files: 0'. The model passes validation and is deployed to a model serving endpoint. When the serving process loads the model with `torch.load()`, `__reduce__` fires `pty.spawn(['/bin/sh', '-c', '...'])`, executing an attacker-controlled command. From there, the adversary can exfiltrate cloud credentials from the instance metadata service, establish persistence, or pivot laterally to the training data store or secrets manager.