Chainlit deployments running versions prior to 2.8.5 expose an authorization bypass that lets any authenticated user read other users' AI conversation threads or hijack thread ownership. Patch immediately to 2.8.5—Chainlit threads routinely contain sensitive LLM prompts, business context, and RAG-retrieved data that users assume is private. Audit all Chainlit instances across your AI stack, including internal copilots and customer-facing chat interfaces.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| chainlit | pip | < 2.8.5 | 2.8.5 |
If you run any Chainlit version below 2.8.5, you are affected.
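A quick way to triage an environment is to compare the installed package version against the patched release. A minimal sketch using only the standard library; the naive numeric comparison is an assumption and does not handle pre-release suffixes (use the `packaging` library for full PEP 440 semantics):

```python
from importlib.metadata import version, PackageNotFoundError

PATCHED = (2, 8, 5)  # first fixed Chainlit release

def chainlit_is_vulnerable() -> bool:
    """Return True if the installed chainlit is older than 2.8.5."""
    try:
        installed = version("chainlit")
    except PackageNotFoundError:
        return False  # chainlit is not installed in this environment
    # Naive numeric compare on the first three dotted components.
    parts = tuple(int(p) for p in installed.split(".")[:3] if p.isdigit())
    return parts < PATCHED

if __name__ == "__main__":
    print("VULNERABLE" if chainlit_is_vulnerable() else "OK (>= 2.8.5 or not installed)")
```

Run this in each virtualenv or container image that ships Chainlit; any `VULNERABLE` result should be upgraded before further triage.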
Recommended Action
1. PATCH: Upgrade all Chainlit instances to 2.8.5 or later immediately; the fix is available and straightforward.
2. AUDIT: Query access logs for thread reads where the requesting user does not match the thread owner; flag anomalous enumeration patterns.
3. ISOLATE: If patching is delayed, restrict Chainlit behind a VPN or add WAF rules to block cross-user thread ID enumeration attempts.
4. DATA MINIMIZATION: Review what sensitive content is persisted in Chainlit threads; avoid storing API keys, PII, or system prompts in thread history.
5. DETECT: Implement application-layer alerting on thread access where the session user differs from the thread owner.
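The audit and detection steps above can be sketched against a window of access-log records. The record schema here (`user`, `thread_id`, `thread_owner` fields) is an assumption for illustration; real Chainlit data-layer logs will differ, so map your own fields accordingly:

```python
from collections import defaultdict

def audit_thread_access(records, enumeration_threshold=20):
    """Flag cross-user thread reads and users touching unusually many threads.

    `records` is an iterable of dicts with keys: user, thread_id, thread_owner
    (hypothetical schema). Returns (cross_user_reads, suspected_enumerators).
    """
    # Any read where the session user is not the owner is a red flag pre-patch.
    cross_user = [r for r in records if r["user"] != r["thread_owner"]]

    # Users hitting many distinct thread IDs may be enumerating.
    threads_per_user = defaultdict(set)
    for r in records:
        threads_per_user[r["user"]].add(r["thread_id"])
    enumerators = [u for u, t in threads_per_user.items()
                   if len(t) >= enumeration_threshold]
    return cross_user, enumerators
```

A non-empty `cross_user` list on a pre-2.8.5 deployment warrants incident response; tune `enumeration_threshold` to your users' normal thread counts.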
Technical Details
NVD Description
Chainlit versions prior to 2.8.5 contain an authorization bypass through user-controlled key vulnerability. If this vulnerability is exploited, threads may be viewed or thread ownership may be obtained by an attacker who can log in to the product.
Exploitation Scenario
An attacker creates a low-privilege account on a multi-user Chainlit deployment (e.g., an internal AI assistant or customer-facing LLM product). They observe that thread IDs in API requests to /thread/{id} are sequential, UUID-based but discoverable, or leaked via other endpoints. By iterating or guessing thread IDs with their authenticated session, they read conversation histories of other users—potentially exposing executive AI assistant sessions containing M&A context, HR queries, or embedded customer data. In a more targeted attack, the adversary obtains ownership of a specific high-value thread and injects adversarial context before the victim resumes their session, covertly manipulating the LLM's behavior through thread-context poisoning without any model access.
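The bug class behind this scenario (authorization decided solely by a user-controlled thread ID) can be illustrated with a generic handler sketch. This is not Chainlit's actual code; `ThreadStore` and the handler names are hypothetical:

```python
class ThreadStore:
    """Toy in-memory store standing in for a real data layer (hypothetical)."""
    def __init__(self):
        self._threads = {}  # thread_id -> {"owner": ..., "messages": [...]}

    def get(self, thread_id):
        return self._threads.get(thread_id)

store = ThreadStore()

# VULNERABLE pattern: the user-controlled key (thread_id) is the only input
# to the lookup, so any authenticated user can read any thread.
def get_thread_vulnerable(session_user: str, thread_id: str):
    return store.get(thread_id)

# FIXED pattern: verify ownership server-side before returning the thread.
def get_thread_fixed(session_user: str, thread_id: str):
    thread = store.get(thread_id)
    if thread is None or thread["owner"] != session_user:
        return None  # in a web handler, respond 403/404 instead
    return thread
```

The fix is an ownership check tied to the authenticated session rather than to anything the client supplies, which is the standard remediation for this weakness class.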
Weaknesses (CWE)
- CWE-639: Authorization Bypass Through User-Controlled Key
CVSS Vector
CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:U/C:L/I:L/A:N
References
- github.com/Chainlit/chainlit/commit/8f1153db439eca58ae5c50c8276ba6fdd311448e
- github.com/Chainlit/chainlit/pull/2637
- github.com/Chainlit/chainlit/releases
- github.com/Chainlit/chainlit/releases/tag/2.8.5
- github.com/advisories/GHSA-v492-6xx2-p57g
- jvn.jp/en/jp/JVN34964581
- nvd.nist.gov/vuln/detail/CVE-2025-68492