Tool HIGH
Charoes Huang, Xin Huang, Amin Milani Fard
Prompt injection, listed as the number-one vulnerability class in the OWASP Top 10 for LLM Applications, can subvert LLM guardrails, disclose...
2 days ago cs.CR cs.SE
Tool LOW
Octavian Untila
An autonomous AI ecosystem (SUBSTRATE S3), generating product specifications without explicit instructions about formal methods, independently...
3 days ago cs.SE cs.AI
Tool MEDIUM
Uchi Uchibeke
AI agents today have passwords but no permission slips. They execute tool calls (fund transfers, database queries, shell commands, sub-agent...
3 days ago cs.CR cs.AI
Tool MEDIUM
Vincent Siu, Jingxuan He, Kyle Montgomery +4 more
Security in LLM agents is inherently contextual. For example, the same action taken by an agent may represent legitimate behavior or a security...
5 days ago cs.CR cs.AI
Tool HIGH
Md Takrim Ul Alam, Akif Islam, Mohd Ruhul Ameen +2 more
Large language models (LLMs) deployed behind APIs and retrieval-augmented generation (RAG) stacks are vulnerable to prompt injection attacks that may...