CVE-2025-63390

MEDIUM
Published December 18, 2025
CISO Take

If your organization runs AnythingLLM v1.8.5, assume your system prompts and full AI workspace configurations are publicly readable — no credentials required. This is a recon goldmine: attackers enumerate your prompts, model choices, and agent configurations before launching targeted prompt injection or social engineering attacks. Patch immediately or block unauthenticated access to /api/workspaces at the network/reverse-proxy layer.
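The reverse-proxy mitigation above can be sketched as an nginx location block. This is a minimal illustration, not a drop-in config: the upstream name `anythingllm_backend` is a placeholder, and checking merely for a non-empty `Authorization` header is a coarse screen, not real token validation.

```nginx
# Sketch: refuse requests to /api/workspaces that carry no Authorization
# header at all. Upstream name is a placeholder for your deployment.
location /api/workspaces {
    if ($http_authorization = "") {
        return 401;
    }
    proxy_pass http://anythingllm_backend;
}
```

Note that this only blocks trivially unauthenticated probes; token validity must still be enforced by the application once a fixed release is available.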

Affected Systems

Package       Ecosystem   Vulnerable Range   Patched
anythingllm   —           v1.8.5             No patch

Do you use anythingllm? You're affected.

Severity & Risk

CVSS 3.1
5.3 / 10
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

  1. PATCH: Upgrade AnythingLLM to the latest version — check https://github.com/Mintplex-Labs/anything-llm/releases for a fix addressing CWE-306 on /api/workspaces.
  2. IMMEDIATE WORKAROUND: Block unauthenticated access to /api/workspaces at the reverse proxy/WAF/firewall level — require valid session tokens before routing to this endpoint.
  3. AUDIT: Review all system prompts (openAiPrompt fields) for embedded credentials, internal URLs, sensitive instructions, or security bypass information that should now be considered compromised.
  4. ROTATE: If system prompts reference API keys, internal hostnames, or credentials, rotate them now.
  5. DETECT: Query logs for unauthenticated GET requests to /api/workspaces — any hits from external IPs indicate active exploitation.
  6. HARDEN: Apply network segmentation — AnythingLLM should not be internet-accessible unless explicitly required.
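Checking your own instance for exposure can be sketched as a short unauthenticated probe. This is a sketch under assumptions: the base URL is a placeholder, and the heuristic that a 200 response containing a JSON workspace list means the endpoint answered without credentials is inferred from the NVD description, not from vendor documentation.

```python
# Sketch: probe /api/workspaces without credentials and classify the
# response. Base URL and response shape are assumptions -- adapt as needed.
import json
import urllib.error
import urllib.request


def is_exposed(status: int, body: str) -> bool:
    """Heuristic: a 200 carrying a JSON workspace list means the endpoint
    answered an unauthenticated request -- treat the instance as vulnerable."""
    if status != 200:
        return False
    try:
        data = json.loads(body)
    except ValueError:
        return False
    # Some deployments may wrap the list as {"workspaces": [...]}.
    workspaces = data.get("workspaces", data) if isinstance(data, dict) else data
    return isinstance(workspaces, list)


def probe(base_url: str) -> bool:
    """Send one GET with no Authorization header and classify the result."""
    req = urllib.request.Request(f"{base_url}/api/workspaces", method="GET")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return is_exposed(resp.status, resp.read().decode("utf-8", "replace"))
    except urllib.error.HTTPError as err:
        # A 401/403 here is the desired outcome: auth is being enforced.
        return is_exposed(err.code, err.read().decode("utf-8", "replace"))
```

Run `probe("https://your-anythingllm-host")` from outside your network perimeter; a `True` result means the workaround or patch has not taken effect.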

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.2.6 - Access control to AI systems
A.8.4 - Protection of AI system information
NIST AI RMF
GOVERN-6.1 - Policies and procedures for AI risk
MANAGE-2.4 - Risk treatment and residual risk management
PROTECT-2.1 - AI system configuration and sensitive data protection
OWASP LLM Top 10
LLM02:2025 - Sensitive Information Disclosure
LLM07:2025 - System Prompt Leakage

Technical Details

NVD Description

An authentication bypass vulnerability exists in AnythingLLM v1.8.5 via the /api/workspaces endpoint. The endpoint fails to implement proper authentication checks, allowing unauthenticated remote attackers to enumerate and retrieve detailed information about all configured workspaces. Exposed data includes: workspace identifiers (id, name, slug), AI model configurations (chatProvider, chatModel, agentProvider), system prompts (openAiPrompt), operational parameters (temperature, history length, similarity thresholds), vector search settings, chat modes, and timestamps.
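Since any openAiPrompt content on an exposed instance must now be treated as public, auditing the returned workspace records for embedded secrets can be sketched as follows. The secret patterns are illustrative assumptions, not an exhaustive detector; extend them with whatever key formats and internal naming conventions your organization uses.

```python
# Sketch: flag workspaces whose system prompt appears to embed a secret.
# Patterns below are illustrative examples, not a complete list.
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),         # OpenAI-style API key
    re.compile(r"https?://[\w.-]+\.internal"),  # internal hostname
    re.compile(r"(?i)password\s*[:=]"),         # inline password assignment
]


def flag_leaky_prompts(workspaces):
    """Return slugs (or ids) of workspaces whose openAiPrompt matches
    any secret pattern -- candidates for immediate rotation."""
    flagged = []
    for ws in workspaces:
        prompt = ws.get("openAiPrompt") or ""
        if any(p.search(prompt) for p in SECRET_PATTERNS):
            flagged.append(ws.get("slug", ws.get("id")))
    return flagged
```

Any workspace this flags should feed directly into the ROTATE step above: treat the matched key, hostname, or credential as compromised.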

Exploitation Scenario

An attacker discovers an AnythingLLM instance via Shodan/Censys or targeted reconnaissance. They send a single unauthenticated HTTP GET to /api/workspaces and receive a JSON response listing every configured workspace with full metadata: the names and slugs reveal business context, chatProvider/chatModel reveal the exact LLM in use, and openAiPrompt exposes the system prompt verbatim — including security restrictions and persona instructions. The attacker uses the system prompt content to craft precise prompt injection payloads that bypass stated restrictions, knowing exactly what guardrails to circumvent. They also identify agentProvider settings to understand what tools the agent can invoke, planning further exploitation via agent tool abuse.
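The log review described in the DETECT recommendation can be sketched as a small scan over proxy access logs. The regex assumes an nginx/Apache combined-log-style format, which is an assumption about your deployment; the external/internal split relies on Python's `ipaddress` classification of globally routable addresses.

```python
# Sketch: flag access-log lines showing GET /api/workspaces from
# non-private source IPs. The log format regex is an assumption --
# adapt it to your proxy's actual format.
import ipaddress
import re

LOG_RE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "GET /api/workspaces')


def suspicious_hits(lines):
    """Return source IPs of globally routable clients that hit the
    vulnerable endpoint -- per the advisory, treat these as exploitation."""
    hits = []
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        try:
            ip = ipaddress.ip_address(m.group("ip"))
        except ValueError:
            continue  # hostname or malformed field; skip
        if ip.is_global:
            hits.append(str(ip))
    return hits
```

Internal RFC 1918 sources are deliberately skipped here; widen the filter if your threat model includes insider or lateral movement.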

Weaknesses (CWE)

CWE-306 - Missing Authentication for Critical Function

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

Timeline

Published
December 18, 2025
Last Modified
January 22, 2026
First Seen
December 18, 2025