GHSA-6556-fwc2-fg2p

Severity: MEDIUM
Published December 30, 2025
CISO Take

Picklescan, a widely used security gate for vetting pickle/PyTorch model files, can be bypassed using a NumPy gadget chain, meaning files your pipeline marks 'safe' can still execute arbitrary code on load. Update picklescan to 0.0.33 immediately and audit any model files in shared repositories that were scanned with prior versions. Treat picklescan as one layer, not the only layer: adopt the SafeTensors format for model exchange and sandbox model loading.

Affected Systems

| Package    | Ecosystem | Vulnerable Range | Patched |
| ---------- | --------- | ---------------- | ------- |
| picklescan | pip       | < 0.0.33         | 0.0.33  |

If you use any version of picklescan earlier than 0.0.33, you are affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

  1. IMMEDIATE: Upgrade picklescan to 0.0.33 in all environments (`pip install --upgrade picklescan`).
  2. AUDIT: Re-scan any model files previously cleared by picklescan < 0.0.33 in shared repos or model registries, treating prior clean verdicts as unverified.
  3. ARCHITECTURE: Migrate model exchange to the SafeTensors format (safe by design; no code execution on load) and enforce it for all externally sourced models.
  4. DEFENSE-IN-DEPTH: Load untrusted models in isolated sandboxes (containers with no network access and a restricted filesystem) even after scanning.
  5. DETECTION: Alert on picklescan versions < 0.0.33 in dependency manifests via SCA tooling. Monitor for suspicious process spawning from model-loading services (e.g., whoami, curl, sh from Python processes).
  6. VERIFY SUPPLY CHAIN: Require cryptographic signing for models ingested from external sources; verify hashes against trusted upstream sources before loading.
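The SCA check in step 5 can be approximated with a short stdlib-only sketch. The three-part numeric version parse is a simplifying assumption (real tooling should use `packaging.version`), and picklescan may simply be absent from a given environment:

```python
from importlib import metadata

MIN_SAFE = (0, 0, 33)  # first patched release per this advisory

def parse_version(v: str) -> tuple:
    """Naive numeric parse; use packaging.version in real SCA tooling."""
    return tuple(int(part) for part in v.split(".")[:3])

def picklescan_vulnerable():
    """True if an installed picklescan predates 0.0.33, None if not installed."""
    try:
        installed = metadata.version("picklescan")
    except metadata.PackageNotFoundError:
        return None
    return parse_version(installed) < MIN_SAFE
```

Run this in CI against each environment's interpreter; a `True` result should fail the build, and `None` means the dependency manifest, not the live environment, must be checked instead.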

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity; Article 9 - Risk management system
ISO 42001
A.10 - Third-party and supplier relationships; A.10.1 - Supply chain security for AI systems; A.9.3 - Security of AI system inputs
NIST AI RMF
GOVERN-1.7 - AI risks and benefits are communicated and managed across the supply chain; MANAGE-2.2 - Treatment of AI risks including third-party and supply chain risks; MANAGE-2.4 - Mechanisms for detecting and responding to AI risks
OWASP LLM Top 10
LLM03:2025 - Supply Chain; LLM05:2025 - Improper Output Handling

Technical Details

NVD Description

### Summary

A malicious pickle can abuse the `numpy.f2py.crackfortran._eval_length` function (a NumPy F2PY helper) to execute arbitrary Python code during unpickling, and picklescan fails to flag it.

### Details

Picklescan fails to detect a malicious pickle that uses the gadget `numpy.f2py.crackfortran._eval_length` in `__reduce__`, allowing arbitrary command execution when the pickle is loaded. A crafted object returns this function plus attacker-controlled arguments; the scan reports the file as safe, but `pickle.load()` triggers execution.

### PoC

```python
class PoC:
    def __reduce__(self):
        from numpy.f2py.crackfortran import _eval_length
        return _eval_length, ("__import__('os').system('whoami')", None)
```

### Impact

- Arbitrary code execution on the victim machine once they load the "scanned as safe" pickle / model file.
- Affects any workflow relying on Picklescan to vet untrusted pickle / PyTorch artifacts.
- Enables supply-chain poisoning of shared model files.

### Credits

- [ac0d3r](https://github.com/ac0d3r)
- [Tong Liu](https://lyutoon.github.io), Institute of Information Engineering, CAS
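The gadget is visible statically: a pickle's imports appear as GLOBAL (or STACK_GLOBAL) opcodes, which the stdlib `pickletools` module can enumerate without ever loading the file. The sketch below hand-crafts protocol-0 bytes equivalent to the PoC (so NumPy need not be installed) and lists what they would import. It illustrates the detection idea only; it is not picklescan's actual implementation, and a real scanner must also resolve STACK_GLOBAL arguments from surrounding opcodes:

```python
import io
import pickletools

# Protocol-0 bytes equivalent to the PoC: import the gadget, then
# REDUCE-call it with an attacker-controlled string. Never pass bytes
# like these to pickle.load().
malicious = (
    b"cnumpy.f2py.crackfortran\n_eval_length\n"   # GLOBAL: module\nname
    b"(V__import__('os').system('whoami')\nNtR."  # MARK, str, None, TUPLE, REDUCE, STOP
)

def imported_globals(data: bytes) -> list:
    """Enumerate module.attr pairs a pickle would import, without loading it."""
    names = []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name == "GLOBAL":
            names.append(arg.replace(" ", "."))  # genops yields "module name"
    return names
```

Here `imported_globals(malicious)` reveals `numpy.f2py.crackfortran._eval_length`; an allowlist-based scanner that only permits known-safe globals would reject the file, which is why gadget discovery like this one keeps denylist-based scanners in a perpetual catch-up race.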

Exploitation Scenario

An adversary targets an organization's ML pipeline that uses picklescan to vet community models before loading. The attacker clones a legitimate popular model repository on Hugging Face, adds a malicious pytorch_model.bin crafted with the numpy _eval_length gadget (establishing a reverse shell or exfiltrating cloud credentials), and submits it as a 'model update' or mirrors it under a typosquatted repository name. The victim's ingestion pipeline runs picklescan, receives a clean verdict, and loads the model during training or inference. The gadget executes silently, granting the adversary shell access to the training infrastructure — which typically holds cloud IAM credentials, proprietary training data, and access to production serving endpoints.
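The typosquatting step in this scenario is exactly what recommended action 6 (hash verification against a trusted upstream) blunts. A minimal stdlib sketch, assuming the expected digest is published out of band by the trusted source:

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Stream a model artifact through SHA-256 in constant memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_hex: str) -> bool:
    """Compare against the digest published by the trusted upstream source."""
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())
```

Refuse to scan or load any artifact that fails this check; a mirrored or tampered `pytorch_model.bin` then never reaches the scanner, so a bypassed clean verdict never gets the chance to mislead.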

Timeline

Published
December 30, 2025
Last Modified
December 30, 2025
First Seen
March 24, 2026