CVE-2025-33233
Severity: HIGH

NVIDIA Merlin Transformers4Rec contains a code injection flaw (CWE-94) that allows a local, low-privileged attacker to achieve full code execution with no user interaction. Organizations running Transformers4Rec in shared ML compute environments, GPU clusters, or multi-tenant data science platforms should patch immediately — local access in these environments is routine for many users. Audit all deployments and enforce least-privilege on ML workloads as an interim control.
Severity & Risk
Recommended Action
1. PATCH: Apply NVIDIA's fix referenced in advisory a_id/5761 immediately. Monitor NVIDIA Security Bulletins for updated package versions.
2. ISOLATION: Until patched, restrict Transformers4Rec execution to dedicated, single-tenant environments — no shared Jupyter servers or multi-user ML platforms.
3. LEAST PRIVILEGE: Enforce strict OS-level user isolation on ML compute nodes; run training jobs under dedicated service accounts with no write access to model artifact stores.
4. DETECTION: Monitor for anomalous process spawning from Python interpreter processes on ML nodes (e.g., unexpected shell invocations, network connections from training jobs). Alert on unexpected writes to model artifact directories.
5. SBOM AUDIT: Enumerate all internal pipelines and MLOps tooling that depend on Transformers4Rec via pip/conda dependency trees.
6. CONTAINER HARDENING: If running in containers, ensure seccomp/AppArmor profiles block unexpected syscalls from ML workloads.
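The SBOM audit step can be started with a small script run inside each Python environment. This is a minimal sketch assuming the distribution is published under the name "transformers4rec"; adjust the name if your internal index mirrors it differently.

```python
# Hedged sketch: check a Python environment for the vulnerable package.
# Assumes the distribution name is "transformers4rec" (adjust for internal mirrors).
from importlib import metadata

def installed_version(dist_name: str):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

if __name__ == "__main__":
    ver = installed_version("transformers4rec")
    if ver is None:
        print("transformers4rec: not installed in this environment")
    else:
        print(f"transformers4rec {ver} found -- compare against the fixed version in NVIDIA advisory a_id/5761")
```

Run this under every interpreter used by training jobs (conda envs, container images, CI runners); a single host often carries several independent environments.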
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
NVIDIA Merlin Transformers4Rec for all platforms contains a vulnerability where an attacker could cause code injection. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.
Exploitation Scenario
An attacker with low-privileged access to a shared GPU training server (e.g., a data scientist account, compromised CI/CD runner, or malicious insider) crafts a malicious serialized object or configuration input that Transformers4Rec processes without proper sanitization. The injected code executes in the context of the training process, which may run with elevated privileges to access GPU resources or network-attached storage. The attacker pivots to: (1) exfiltrate user behavioral training data from the dataset store, (2) modify model checkpoint files to embed a backdoor that activates on specific inputs in production, or (3) establish persistence on the ML server by modifying shared pipeline scripts. In Kubernetes-based MLOps platforms, the attacker may escape the training pod and access cluster secrets.
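NVIDIA has not published the exact injection vector for CVE-2025-33233, so the following is an illustration of the *class* of flaw only: CWE-94 code injection via an untrusted serialized object, a pattern common in ML checkpoint and config handling. The class and payload names are hypothetical; the benign `os.getcwd` stands in for an attacker's shell command.

```python
# Illustrative only -- not the actual CVE-2025-33233 vector.
# Python pickle lets attacker-controlled data invoke an arbitrary callable
# at load time via __reduce__, which is why deserializing untrusted input
# is treated as code execution.
import os
import pickle

class MaliciousPayload:
    # __reduce__ tells pickle how to rebuild the object; an attacker can
    # make it call any function with any arguments. Here the harmless
    # os.getcwd stands in for a shell command or reverse-shell launcher.
    def __reduce__(self):
        return (os.getcwd, ())

blob = pickle.dumps(MaliciousPayload())  # what an attacker hands the pipeline
result = pickle.loads(blob)              # "loading a checkpoint" runs the call
print(f"pickle.loads executed attacker-chosen code; returned: {result}")
```

The corresponding mitigation: never deserialize untrusted input with pickle; prefer restricted loading modes or data-only formats (e.g., safetensors) where the tooling supports them.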
Weaknesses (CWE)
CWE-94: Improper Control of Generation of Code ('Code Injection')
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H (Base Score: 7.8 HIGH)
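The vector can be verified against the CVSS v3.1 base score equations. This sketch uses the metric weights and Roundup function from the CVSS v3.1 specification for a scope-unchanged vector:

```python
# Compute the CVSS v3.1 base score for
# CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
# using the spec's metric weights and Roundup function.
import math

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest 1-decimal value >= x (spec Appendix A)."""
    n = round(x * 100000)
    return n / 100000.0 if n % 10000 == 0 else (math.floor(n / 10000) + 1) / 10.0

# Weights for this vector (Scope: Unchanged)
av, ac, pr, ui = 0.55, 0.77, 0.62, 0.85   # AV:L, AC:L, PR:L (S:U), UI:N
conf = integ = avail = 0.56                # C:H, I:H, A:H

iss = 1 - (1 - conf) * (1 - integ) * (1 - avail)
impact = 6.42 * iss                        # scope unchanged
exploitability = 8.22 * av * ac * pr * ui
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.8 -> HIGH
```

The Local attack vector (AV:L) is what keeps this out of the critical range; in shared ML environments, however, "local" access is effectively granted to every tenant, which is why the summary above treats the risk as urgent.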