CVE-2025-62372

GHSA-pmqf-x6x8-p7qw MEDIUM
Published November 21, 2025
CISO Take

If your organization runs vLLM for multimodal inference, patch to 0.11.1 immediately — any authenticated API user can crash the entire serving engine with a single malformed request, taking down all dependent services. This is a hard availability risk with no workaround other than restricting API access to fully trusted callers. Patch-or-restrict is the only acceptable posture.

Affected Systems

Package   Ecosystem   Vulnerable Range       Patched
vllm      pip         >= 0.5.5, < 0.11.1     0.11.1

Severity & Risk

CVSS 3.1
6.5 / 10
EPSS
0.1%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

  1. Patch: upgrade vLLM to >= 0.11.1 (pip install vllm==0.11.1).
  2. If patching is delayed, restrict vLLM API access to known trusted callers via network policy or API gateway; remove low-privilege or anonymous access.
  3. Add input validation at the API gateway layer to reject embedding payloads with unexpected shape dimensions before they reach vLLM.
  4. Implement process supervision (systemd, Kubernetes liveness probes) to auto-restart the vLLM engine on crash and alert on restart events.
  5. Monitor vLLM process crash logs for unexpected terminations as a detection signal for exploitation attempts.
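The gateway-side check in step 3 can be sketched as a pre-validation function. The payload layout and the hidden-size constant below are assumptions for illustration, not vLLM's actual request schema; adapt the field names and expected dimension to your deployment.

```python
# Hypothetical gateway-side pre-check: reject multimodal embedding
# payloads whose hidden dimension does not match the served model,
# before they ever reach the vLLM engine.

EXPECTED_HIDDEN_DIM = 4096  # hidden size of the served model (assumption)

def validate_embedding_payload(embedding, expected_dim=EXPECTED_HIDDEN_DIM):
    """Return True only if every row of a 2-D embedding has the expected width.

    A payload with the right number of dimensions but the wrong hidden
    size is exactly the malformed input this CVE describes.
    """
    if not isinstance(embedding, list) or not embedding:
        return False
    for row in embedding:
        if not isinstance(row, list) or len(row) != expected_dim:
            return False
    return True
```

Rejecting these requests at the gateway returns a clean 4xx to the caller instead of letting the mismatch reach, and crash, the engine.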

Classification

CWE-129 - Improper Validation of Array Index

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.6 - AI system input validation and robustness
NIST AI RMF
MANAGE-2.4 - Residual risks are managed
OWASP LLM Top 10
LLM04 - Model Denial of Service

Technical Details

NVD Description

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g. hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
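The "correct ndim but incorrect shape" distinction is the crux of the bug: a check on the number of dimensions alone passes both well-formed and malformed embeddings. A minimal pure-Python illustration (the hidden size is an assumption, not taken from any particular model):

```python
# Illustration of the flaw class: an ndim-only check cannot distinguish
# a well-formed embedding from one with the wrong hidden dimension.

HIDDEN = 4096  # served model's hidden size (illustrative)

def shape_of(nested):
    """Shape of a regular nested list, e.g. 16 rows of width d -> (16, d)."""
    shape = []
    while isinstance(nested, list):
        shape.append(len(nested))
        nested = nested[0] if nested else None
    return tuple(shape)

good = [[0.0] * HIDDEN for _ in range(16)]       # 16 rows, correct width
bad = [[0.0] * (HIDDEN - 1) for _ in range(16)]  # same ndim, wrong width

assert len(shape_of(good)) == len(shape_of(bad)) == 2  # ndim check passes both
assert shape_of(good)[-1] == HIDDEN
assert shape_of(bad)[-1] != HIDDEN  # only a full shape check catches this
```

The 0.11.1 fix amounts to validating the full shape, not just the dimensionality, before the tensor is handed to the model.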

Exploitation Scenario

An attacker with any level of API access to a vLLM multimodal endpoint, including a free-tier or internal dev account, crafts a POST request to the inference API that submits a multimodal embedding tensor with the correct number of dimensions (ndim) but the wrong hidden dimension size. vLLM's validation of the embedding shape is insufficient (CWE-129, Improper Validation of Array Index), so the mismatch goes uncaught and an unhandled exception crashes the engine process. The attacker can repeat this in a loop for sustained denial of service, or use it as a one-shot to disrupt a critical inference pipeline during a sensitive business window.
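The loop-style variant of this attack shows up operationally as a burst of engine restarts (recommended actions, steps 4 and 5). A generic detection helper, as a hedged sketch: wire it to restart timestamps from your supervisor (systemd, Kubernetes events) rather than to any specific vLLM log format, which this example does not assume.

```python
# Treat a burst of vLLM engine restarts as a possible exploitation
# signal. Thresholds are illustrative; tune to your deployment's
# baseline restart rate.

def restart_burst(restart_times, window_s=300, threshold=3):
    """True if `threshold` or more restarts fall within any `window_s` span.

    `restart_times` is a sequence of restart timestamps in seconds
    (any monotonic clock or epoch time).
    """
    ts = sorted(restart_times)
    for i in range(len(ts)):
        count = sum(1 for t in ts[i:] if t - ts[i] <= window_s)
        if count >= threshold:
            return True
    return False
```

A single crash may be an ordinary out-of-memory event; repeated crashes inside a short window correlate with the sustained denial-of-service pattern described above and warrant an alert.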

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
November 21, 2025
Last Modified
December 4, 2025
First Seen
November 21, 2025