vLLM is an inference and serving engine for large language models (LLMs). Versions from 0.1.0 and before 0.10.1.1 contain an unauthenticated Denial of Service (DoS): a single HTTP GET request with an extremely large header can exhaust server memory, leaving the server crashed or unresponsive. Fixed in 0.10.1.1.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| vllm | pip | >= 0.1.0, < 0.10.1.1 | 0.10.1.1 |
Severity & Risk
High — CVSS 3.1 base score 7.5 (network attack vector, no privileges or user interaction required; availability impact only).
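The High rating follows directly from the CVSS vector listed under Technical Details. A short sketch of the CVSS 3.1 base-score arithmetic for that vector, using the metric weights from the CVSS 3.1 specification:

```python
import math

# Base-score computation for CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
# Weights are taken from the CVSS 3.1 specification.
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85  # Network / Low / None / None
c, i, a = 0.0, 0.0, 0.56                 # Conf: None / Integ: None / Avail: High

iss = 1 - (1 - c) * (1 - i) * (1 - a)
impact = 6.42 * iss                      # scope unchanged (S:U)
exploitability = 8.22 * av * ac * pr * ui

def roundup(x: float) -> float:
    # CVSS "round up to one decimal place"
    return math.ceil(x * 10) / 10

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.5
```

A 7.5 score with only the Availability metric set is typical for unauthenticated DoS findings.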
Recommended Action
Patch available
Update vllm to version 0.10.1.1 or later
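A quick self-check against the vulnerable range from the table above. This is a minimal sketch that handles plain numeric versions only; pre-release tags (e.g. rc builds) would need a full version parser such as `packaging.version`:

```python
def is_vulnerable(version: str) -> bool:
    # Vulnerable range from the advisory: >= 0.1.0, < 0.10.1.1
    parts = tuple(int(p) for p in version.split("."))
    # Pad to four components so e.g. 0.10.1 compares against 0.10.1.1
    parts = parts + (0,) * (4 - len(parts))
    return (0, 1, 0, 0) <= parts < (0, 10, 1, 1)

print(is_vulnerable("0.10.1"))    # True  (affected)
print(is_vulnerable("0.10.1.1"))  # False (patched)
```

Pair this with `importlib.metadata.version("vllm")` to check the installed package in place.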
Compliance Impact
Compliance analysis pending.
Technical Details
NVD Description
vLLM is an inference and serving engine for large language models (LLMs). From 0.1.0 to before 0.10.1.1, a Denial of Service (DoS) vulnerability can be triggered by sending a single HTTP GET request with an extremely large header to an HTTP endpoint. This results in server memory exhaustion, potentially leading to a crash or unresponsiveness. The attack does not require authentication, making it exploitable by any remote user. This vulnerability is fixed in 0.10.1.1.
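Until the upgrade is rolled out, a common interim mitigation for this class of issue is to cap request-header size at a reverse proxy in front of the vLLM server. A minimal nginx sketch — the listen port, upstream address, and buffer sizes are illustrative assumptions, not values from the advisory:

```nginx
server {
    listen 8080;

    # Requests whose headers exceed these buffers are rejected by nginx
    # before they reach vLLM. The values below are nginx's own defaults;
    # tighten them to match your clients' real header sizes.
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;

    location / {
        proxy_pass http://127.0.0.1:8000;  # assumed vLLM default port
    }
}
```

This does not fix the underlying allocation behavior in vLLM; it only prevents oversized headers from reaching the vulnerable endpoint.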
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
References
- github.com/advisories/GHSA-rxc4-3w6r-4v47 (GitHub Advisory)
- github.com/vllm-project/vllm/commit/d8b736f913a59117803d6701521d2e4861701944 (Patch)
- github.com/vllm-project/vllm/pull/23267 (Patch PR)
- github.com/vllm-project/vllm/security/advisories/GHSA-rxc4-3w6r-4v47 (Vendor advisory)
- nvd.nist.gov/vuln/detail/CVE-2025-48956 (NVD)