GHSA-r8g5-cgf2-4m4m HIGH
Published December 29, 2025
CISO Take

picklescan — the de facto security scanner used to validate ML model artifacts in CI/CD pipelines and on platforms like Hugging Face — can be completely bypassed by embedding numpy.f2py eval() calls in pickle payloads. Any pipeline relying on picklescan < 0.0.33 as a security gate is providing false assurance: malicious models pass the scan and execute arbitrary OS commands on load. Patch to 0.0.33 immediately and treat picklescan as a failed single point of defense until you add layered controls.

Affected Systems

Package Ecosystem Vulnerable Range Patched
picklescan pip < 0.0.33 0.0.33

Do you run picklescan below 0.0.33? You're affected.
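A quick way to confirm exposure is to compare the installed package version against the patched release. This is a minimal sketch using only the standard library; the `picklescan_status` helper is illustrative, not part of picklescan itself, and the naive version parsing assumes simple x.y.z version strings.

```python
from importlib import metadata

def picklescan_status(min_safe: str = "0.0.33") -> str:
    """Report whether the installed picklescan meets the patched version."""
    try:
        installed = metadata.version("picklescan")
    except metadata.PackageNotFoundError:
        return "picklescan is not installed"

    # Naive numeric comparison; sufficient for simple x.y.z versions.
    def parse(v: str):
        return tuple(int(p) for p in v.split("."))

    if parse(installed) >= parse(min_safe):
        return f"picklescan {installed} is patched (>= {min_safe})"
    return f"VULNERABLE: picklescan {installed} < {min_safe} -- upgrade now"

print(picklescan_status())
```

In a CI gate, a nonzero exit on the VULNERABLE branch would fail the build.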

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. PATCH: Upgrade picklescan to >= 0.0.33 immediately — this is the only remediation.
  2. AUDIT: If picklescan was your sole pickle safety control, treat all externally-sourced models loaded in the past 90 days as potentially compromised and investigate.
  3. ELIMINATE PICKLE: Enforce the safetensors format for model weights where possible — it eliminates the pickle attack surface entirely.
  4. SANDBOX: Load all third-party models in isolated containers with no network access, read-only filesystem mounts, and seccomp/AppArmor profiles.
  5. LAYER DEFENSES: Do not rely on a single scanner; combine picklescan with static analysis, behavioral sandboxing, and cryptographic signature verification of trusted model artifacts.
  6. DETECT: Add alerting for numpy.f2py imports and eval() calls in model-loading contexts; these are anomalous in normal inference and training workloads.
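The detection step above can be approximated statically: pickle opcodes can be walked without deserializing anything, and any global reference into numpy.f2py flagged. This is a minimal sketch using the standard-library pickletools module; the `suspicious_globals` helper and the denylist are illustrative assumptions, not part of picklescan, and a production scanner would need to cover more opcodes and encodings.

```python
import io
import pickletools

# Module prefixes that should never appear in a model pickle (assumed denylist).
DENYLIST = ("numpy.f2py",)

def suspicious_globals(data: bytes) -> list:
    """Statically walk pickle opcodes (never calling pickle.load) and
    flag any global reference into a denylisted module."""
    hits = []
    strings = []  # shadow stack of string constants feeding STACK_GLOBAL
    for op, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE",
                       "STRING", "SHORT_BINSTRING"):
            strings.append(arg)
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            ref = f"{strings[-2]}.{strings[-1]}"
            if ref.startswith(DENYLIST):
                hits.append(ref)
        elif op.name in ("GLOBAL", "INST") and arg:
            # pickletools renders the module/name pair as "module name".
            ref = arg.replace(" ", ".")
            if ref.startswith(DENYLIST):
                hits.append(ref)
    return hits

# A hand-built protocol-0 pickle referencing the vulnerable function;
# it is only parsed, never loaded, so nothing executes.
evil = b"cnumpy.f2py.crackfortran\ngetlincoef\n."
print(suspicious_globals(evil))
```

Because the scan never calls pickle.load, it is safe to run on untrusted artifacts before they reach a loader.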

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2 - AI system supply chain
NIST AI RMF
MANAGE 2.2 - Risk treatments are verified to be effective
OWASP LLM Top 10
LLM03 - Supply Chain

Technical Details

NVD Description

### Summary

An unsafe deserialization vulnerability allows an attacker to execute arbitrary code on the host when loading a malicious pickle payload from an untrusted source.

### Details

The `numpy.f2py.crackfortran` module exposes many functions that call `eval` on arbitrary strings of values. This is the case for `getlincoef` and `_eval_length`, and this list is probably not exhaustive. According to https://numpy.org/doc/stable/reference/security.html#advice-for-using-numpy-on-untrusted-data, the whole of `numpy.f2py` should be considered unsafe when loading a pickle.

### PoC

```python
from numpy.f2py.crackfortran import getlincoef

class EvilClass:
    def __reduce__(self):
        payload = "__import__('os').system('echo \"successful attack\"')"
        return getlincoef, (payload, [])
```

### Impact

Who is impacted? Any organization or individual relying on `picklescan` to detect malicious pickle files from untrusted sources.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Note

The problem was originally reported to the joblib project, but it was deemed unrelated to joblib itself. However, I verified that `picklescan` was indeed vulnerable.
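The PoC works because pickle's `__reduce__` protocol lets an object name any importable callable, plus arguments, to be invoked at load time. A benign stand-in makes the mechanism visible without numpy or any side effects: the hypothetical `record_call` below plays the role of `getlincoef`.

```python
import pickle

CALLS = []

def record_call(payload):
    """Benign stand-in for numpy.f2py.crackfortran.getlincoef: whatever
    callable __reduce__ names is invoked during pickle.loads."""
    CALLS.append(payload)
    return payload

class Demo:
    def __reduce__(self):
        # At load time, pickle calls record_call("ran at load time"),
        # just as the PoC makes it call getlincoef(payload, []).
        return record_call, ("ran at load time",)

blob = pickle.dumps(Demo())
obj = pickle.loads(blob)  # triggers record_call
print(obj, CALLS)
```

Note that the attacker-controlled call fires inside `pickle.loads` itself, before any application code sees the object, which is why a scanner must reason about the serialized bytes rather than the loaded result.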

Exploitation Scenario

An attacker publishes a 'fine-tuned Mistral' model to a public model hub. The model's pickle file uses numpy.f2py.crackfortran.getlincoef as the deserialization hook, passing a malicious eval() payload that runs os.system() to download and execute a reverse shell. A victim organization's CI/CD pipeline runs picklescan before ingesting the model — the scan returns clean. When the model is loaded in the training cluster or inference server, the payload fires, establishing persistent access or exfiltrating API keys and model weights. The attack is particularly damaging because the organization's own security gate (picklescan) provided explicit false assurance, likely bypassing additional human review.

Timeline

Published
December 29, 2025
Last Modified
December 29, 2025
First Seen
March 24, 2026