CVE-2026-1777

GHSA-rjrp-m2jw-pv9c HIGH
Published February 2, 2026
CISO Take

If your team uses SageMaker Python SDK remote functions, patch to v3.2.0 (v3.x) or v2.256.0 (v2.x) immediately — this is not optional. An attacker with DescribeTrainingJob IAM permissions can extract the HMAC key, forge malicious serialized payloads, and achieve arbitrary code execution in your client environment when ML results are fetched, with no integrity validation error to tip you off. Audit IAM for overly permissive DescribeTrainingJob access and restrict S3 write permissions on training output buckets as a compensating control.

Affected Systems

Package     Ecosystem   Vulnerable Range       Patched
sagemaker   pip         >= 3.0, < 3.2.0        3.2.0
sagemaker   pip         < 2.256.0 (v2.x)       2.256.0

Do you use sagemaker in one of these version ranges? You're affected.
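A quick way to verify exposure is to compare the installed SDK version against the patch cutoffs above. A minimal sketch, assuming plain numeric version strings; `is_vulnerable` and `check_installed` are illustrative helpers, not part of the SDK:

```python
from importlib import metadata

def is_vulnerable(version: str) -> bool:
    """True if a sagemaker version string falls in a vulnerable range
    (v3 < 3.2.0, or v2 < 2.256.0). Assumes plain numeric versions."""
    nums = [int(p) for p in version.split(".")[:3]]
    parts = tuple(nums + [0] * (3 - len(nums)))  # pad "3.2" -> (3, 2, 0)
    if parts[0] == 3:
        return parts < (3, 2, 0)
    if parts[0] == 2:
        return parts < (2, 256, 0)
    return False  # other major versions are not covered by this advisory

def check_installed() -> None:
    """Report the locally installed sagemaker package's patch status."""
    try:
        installed = metadata.version("sagemaker")
    except metadata.PackageNotFoundError:
        print("sagemaker is not installed")
        return
    status = "VULNERABLE - upgrade now" if is_vulnerable(installed) else "patched"
    print(f"sagemaker {installed}: {status}")
```

Run `check_installed()` in every environment that executes remote functions — CI runners and Studio notebooks often pin different versions than local dev.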

Severity & Risk

CVSS 3.1
7.2 / 10
EPSS
0.0%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. PATCH NOW: Upgrade SageMaker Python SDK to v3.2.0 or v2.256.0 across all environments — CI/CD pipelines, SageMaker Studio notebooks, containerized training jobs, and local dev environments.
  2. AUDIT IAM: Treat DescribeTrainingJob as a sensitive API. Restrict permissions to only principals that operationally require it; remove it from broad developer or analyst roles.
  3. RESTRICT S3: Apply least-privilege bucket policies on training job output locations; enforce cross-tenant isolation in shared environments to prevent unauthorized writes.
  4. ROTATE SECRETS: For any training jobs that ran on unpatched SDK versions, assume HMAC keys were exposed — treat affected training environments as potentially compromised and rotate accessible secrets.
  5. DETECT: Enable CloudTrail alerts on DescribeTrainingJob calls from principals not owning the training job, and on unexpected S3 writes to training output paths. These are your primary detection signals pre-patch.
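The AUDIT IAM step can be bootstrapped by scanning already-fetched policy documents for overly broad DescribeTrainingJob grants. A heuristic sketch only: the function name and wildcard checks are illustrative, and it does not evaluate `Condition` blocks, `NotAction`, or resource-level ARN scoping, so treat hits as leads, not verdicts:

```python
def broad_describe_training_job_grants(policy: dict) -> list[str]:
    """Return findings where an IAM policy document allows
    sagemaker:DescribeTrainingJob (directly or via wildcard)
    against all resources. Heuristic sketch, not a full evaluator."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        hits = [a for a in actions
                if a in ("sagemaker:DescribeTrainingJob", "sagemaker:*", "*")]
        if not hits:
            continue
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if "*" in resources:
            findings.append(f"Allow {hits} on Resource '*'")
    return findings
```

Feed it each customer-managed policy attached to developer and analyst roles; any finding means that role can read training-job environment variables, including the HMAC key on unpatched SDKs.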

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Annex IV, Section 2(f) - Technical documentation (cybersecurity measures)
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
6.1.2 - AI risk assessment
8.4 - AI system security
A.6.1.2 - Information security within AI system development
A.9.4 - Integrity of AI system outputs
NIST AI RMF
GOVERN 1.7 - Processes and procedures are in place for decommissioning and phasing out AI systems
MANAGE 2.2 - Mechanisms to sustain the value of deployed AI systems are evaluated and applied
MANAGE 2.4 - Residual risks and mitigation responses
OWASP LLM Top 10
LLM05:2025 - Improper Output Handling

Technical Details

NVD Description

### Summary

SageMaker Python SDK is an open source library for training and deploying machine learning models on Amazon SageMaker. An issue has been identified where the HMAC secret key is stored in environment variables and disclosed via the DescribeTrainingJob API.

### Impact

- Function and Payload Tampering: Attackers with DescribeTrainingJob permissions may extract HMAC secret keys and forge serialized function payloads stored in S3. These tampered payloads would be processed and executed without triggering integrity validation errors, enabling unintended code substitution.
- Arbitrary Code Execution in the Training Environment: A third party with both DescribeTrainingJob permissions and write access to the job's S3 output location can extract the HMAC key, craft malicious Python objects, and achieve remote code execution in the client's Python process when the victim retrieves remote function results.
- Data and Credentials Handling: Arbitrary remote code execution may interact with sensitive data, model artifacts, environment variables, and potentially AWS metadata.
- Cross-Tenant or Shared Environment Risks: In multi-tenant environments with shared S3 buckets, a disclosed HMAC key could act as a pivot point for unauthorized actions against other users' remote function workloads, leveraging IAM permissions, shared S3 buckets, or VPC resources to compromise adjacent services or data.

### Impacted versions

- SageMaker Python SDK v3 < v3.2.0
- SageMaker Python SDK v2 < v2.256.0

### Patches

This issue has been addressed in SageMaker Python SDK versions [v3.2.0](https://github.com/aws/sagemaker-python-sdk/tree/22d30f577a6139431a1fb9154b7b88a0e2a1ace6) and [v2.256.0](https://github.com/aws/sagemaker-python-sdk/tree/a140cfcd12abfee10254cb4dea3bb10758e4321c). Upgrading to the latest version immediately and ensuring any forked or derivative code is patched to incorporate the new fixes is recommended.
### Workarounds

Customers using self-signed certificates for internal model downloads should add their private Certificate Authority (CA) certificate to the container image rather than relying on the SDK's previous insecure configuration. This opt-in approach maintains security while accommodating internal trusted domains.

### Resources

If there are any questions or comments about this advisory, contact AWS Security via the [vulnerability reporting page](https://aws.amazon.com/security/vulnerability-reporting) or directly via email to [aws-security@amazon.com](mailto:aws-security@amazon.com). Please do not create a public GitHub issue.
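The core weakness is that HMAC integrity checking only protects against parties who do not hold the key; once DescribeTrainingJob discloses the key, any payload the attacker signs verifies cleanly. A self-contained illustration using the stdlib `hmac` module — the key material, payload bytes, and signing layout here are hypothetical stand-ins, not the SDK's actual implementation:

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> str:
    """Compute an HMAC-SHA256 tag over a serialized payload (illustrative)."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(key: bytes, payload: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the payload under this key."""
    return hmac.compare_digest(sign(key, payload), tag)

# The client signs its serialized function payload with the secret key.
key = b"secret-read-from-job-environment-variables"  # hypothetical key
legit = b"serialized-function-bytes"
legit_tag = sign(key, legit)
assert verify(key, legit, legit_tag)

# An attacker who reads the same key via DescribeTrainingJob can sign a
# forged payload that verifies identically -- no integrity error is raised.
forged = b"malicious-serialized-bytes"
forged_tag = sign(key, forged)
assert verify(key, forged, forged_tag)
```

This is why key confidentiality, not the HMAC algorithm, is the failure point here: the patch moves the key out of the API-visible environment, and no amount of tag checking helps while attackers can read it.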

Exploitation Scenario

An insider or an attacker holding compromised AWS credentials with DescribeTrainingJob permission calls the API against an active or recently completed SageMaker training job. The API response includes the HMAC secret key in the job's environment variables. The attacker uses the key to craft a malicious pickled Python object — for example, one that exfiltrates IAM credentials via the instance metadata service or establishes a reverse shell to an attacker-controlled host. They upload this forged payload to the legitimate S3 output path for that job, overwriting or staging alongside the expected model output. When the data science team's pipeline or notebook fetches and deserializes the remote function result expecting model weights or metrics, the malicious payload executes silently with the client process's privileges — full ML engineering workstation compromise with no integrity error raised.
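Pre-patch, the detection signal in this scenario can be approximated by filtering CloudTrail records for DescribeTrainingJob calls made by principals other than the job's owner. A sketch over already-exported event dicts: the field names follow the standard CloudTrail record layout, while `job_owners` is an assumed local mapping of job name to owning principal ARN (a real pipeline would resolve ownership from job tags or an inventory store):

```python
def suspicious_describe_calls(
    events: list[dict], job_owners: dict[str, str]
) -> list[dict]:
    """Flag CloudTrail DescribeTrainingJob events whose caller ARN differs
    from the recorded owner of the referenced training job."""
    flagged = []
    for ev in events:
        if ev.get("eventName") != "DescribeTrainingJob":
            continue
        job = ev.get("requestParameters", {}).get("trainingJobName", "")
        caller = ev.get("userIdentity", {}).get("arn", "")
        owner = job_owners.get(job)
        if owner and caller != owner:  # skip jobs with unknown ownership
            flagged.append(ev)
    return flagged
```

Jobs missing from the ownership mapping are skipped rather than flagged, which keeps noise down but means inventory coverage directly bounds detection coverage.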

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
February 2, 2026
Last Modified
February 3, 2026
First Seen
March 24, 2026