vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate the "pattern" and "type" fields when the tools functionality is invoked, so a single malformed request can crash the inference worker.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| vllm | pip | >= 0.8.0, < 0.9.0 | 0.9.0 |
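To check whether a deployed environment falls inside the vulnerable range from the table above, a minimal sketch (the package name and range come from the table; the helper function itself is illustrative):

```python
# Illustrative check: is the installed vllm inside the vulnerable range [0.8.0, 0.9.0)?
from importlib.metadata import version, PackageNotFoundError

from packaging.version import Version  # third-party "packaging" package


def vllm_is_vulnerable() -> bool:
    """Return True if the installed vllm falls in [0.8.0, 0.9.0)."""
    try:
        installed = Version(version("vllm"))
    except PackageNotFoundError:
        return False  # vllm is not installed in this environment
    return Version("0.8.0") <= installed < Version("0.9.0")


if __name__ == "__main__":
    print("vulnerable to CVE-2025-48944:", vllm_is_vulnerable())
```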
Severity & Risk
CVSS 3.1 base score 6.5 (Medium), per the vector under Technical Details: a low-privileged remote attacker needs no user interaction, and the impact is limited to availability, since the inference worker crashes and stays down until restarted.
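The 6.5 figure follows mechanically from the CVSS:3.1 vector listed below; a minimal sketch of the arithmetic (the metric weights are the published CVSS 3.1 constants, not anything specific to this advisory):

```python
import math

# CVSS 3.1 metric weights for AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
AV, AC, PR, UI = 0.85, 0.77, 0.62, 0.85   # PR:L with unchanged scope = 0.62
C, I, A = 0.0, 0.0, 0.56                  # only availability is impacted


def roundup(x: float) -> float:
    """CVSS 'Roundup': smallest value with one decimal place >= x."""
    return math.ceil(x * 10) / 10


iss = 1 - (1 - C) * (1 - I) * (1 - A)       # 0.56
impact = 6.42 * iss                         # scope unchanged -> 3.5952
exploitability = 8.22 * AV * AC * PR * UI   # ~2.835
base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0

print(base)  # 6.5
```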
Recommended Action
Patch available
Update vllm to version 0.9.0 or later
Technical Details
NVD Description
vLLM is an inference and serving engine for large language models (LLMs). In version 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, causing a crash of the inference worker with a single request. The worker will remain down until it is restarted. Version 0.9.0 fixes the issue.
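The crash stems from compiling or parsing attacker-supplied schema fields without any guard. The actual fix shipped in 0.9.0 via the pull request referenced below; the sketch here is not that patch, only an illustration of the kind of pre-validation involved, assuming tool parameters arrive as JSON-Schema-style dicts with optional "pattern" and "type" keys:

```python
import re

# JSON Schema primitive types; anything else in "type" is rejected up front.
ALLOWED_TYPES = {"string", "number", "integer", "boolean", "object", "array", "null"}


def validate_tool_schema(schema: dict) -> None:
    """Reject malformed "pattern"/"type" fields before they reach the parser.

    Illustrative only: walks a JSON-Schema-like dict and raises ValueError
    instead of letting re.compile() or the schema parser fail later in the
    worker process.
    """
    if not isinstance(schema, dict):
        raise ValueError("tool parameter schema must be an object")

    type_field = schema.get("type")
    if type_field is not None and type_field not in ALLOWED_TYPES:
        raise ValueError(f"unsupported type: {type_field!r}")

    pattern = schema.get("pattern")
    if pattern is not None:
        if not isinstance(pattern, str):
            raise ValueError("pattern must be a string")
        try:
            re.compile(pattern)  # surface bad regexes as a 4xx, not a worker crash
        except re.error as exc:
            raise ValueError(f"invalid pattern: {exc}") from exc

    # Recurse into nested property schemas.
    for sub in (schema.get("properties") or {}).values():
        validate_tool_schema(sub)
```

With a guard like this in the request path, a malformed "pattern" or "type" becomes a client error on the single offending request rather than taking the inference worker down.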
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
References
- github.com/advisories/GHSA-vrq3-r879-7m65
- github.com/vllm-project/vllm/pull/17623
- github.com/vllm-project/vllm/security/advisories/GHSA-vrq3-r879-7m65
- nvd.nist.gov/vuln/detail/CVE-2025-48944