Policy Compiler for Secure Agentic Systems
specific restructuring required. We evaluate PCAS on three case studies: information flow policies for prompt injection defense, approval workflows in a multi-agent pharmacovigilance system, and organizational policies for customer
OMNI-LEAK: Orchestrator Multi-Agent Network Induced Data Leakage
OMNI-LEAK, that compromises several agents to leak sensitive data through a single indirect prompt injection, even in the presence of data access control. We report the susceptibility of frontier
Peak + Accumulation: A Proxy-Level Scoring Formula for Multi-Turn LLM Attack Detection
Multi-turn prompt injection attacks distribute malicious intent across multiple conversation turns, exploiting the assumption that each turn is evaluated independently. While single-turn detection has been extensively studied
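The excerpt names a "peak + accumulation" scoring formula but does not give it; a generic, hypothetical sketch of that idea (combining the single worst per-turn risk score with a decayed running sum across turns, so that intent spread over many mildly suspicious turns still raises the combined score) might look like the following. All names, weights, and the decay constant are illustrative assumptions, not the paper's formula.

```python
def peak_plus_accumulation(turn_scores, decay=0.8, w_peak=0.6, w_acc=0.4):
    """turn_scores: per-turn maliciousness scores in [0, 1].

    Hypothetical combination of a peak term (worst single turn) and an
    accumulation term (decayed sum over the conversation history).
    """
    peak = max(turn_scores)
    acc = 0.0
    for s in turn_scores:          # older turns decay, recent turns count more
        acc = decay * acc + s
    # normalize the accumulation by its geometric-series bound so it stays in [0, 1]
    acc_norm = acc * (1 - decay)
    return w_peak * peak + w_acc * acc_norm

# A distributed attack: no single turn looks alarming, but the
# accumulation term pushes the combined score well above benign traffic.
benign = [0.1, 0.05, 0.1]
distributed = [0.4, 0.45, 0.5, 0.55]
assert peak_plus_accumulation(distributed) > peak_plus_accumulation(benign)
```

The point of the two-term design is that a pure per-turn (peak-only) detector misses attacks whose per-turn scores all sit below threshold, which is exactly the evasion the abstract describes.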
Blind Gods and Broken Screens: Architecting a Secure, Intent-Centric Mobile Agent Operating System
Action Execution - revealing critical flaws such as fake App identity, visual spoofing, indirect prompt injection, and unauthorized privilege escalation stemming from a reliance on unstructured visual data. To address these
Autonomous Action Runtime Management (AARM): A System Specification for Securing AI-Driven Actions at Runtime
records tamper-evident receipts for forensic reconstruction. We formalize a threat model addressing prompt injection, confused deputy attacks, data exfiltration, and intent drift. We introduce an action classification framework distinguishing
When Actions Go Off-Task: Detecting and Correcting Misaligned Actions in Computer-Use Agents
user's original intent. Such misaligned actions may arise from external attacks (e.g., indirect prompt injection) or from internal limitations (e.g., erroneous reasoning). They not only expose CUAs to safety
When the Model Said 'No Comment', We Knew Helpfulness Was Dead, Honesty Was Alive, and Safety Was Terrified
experts. To resolve this, we propose AlignX, a two-stage framework. Stage 1 uses prompt-injected fine-tuning to extract axis-specific task features, mitigating catastrophic forgetting. Stage 2 deploys
Agents in the Wild: Safety, Society, and the Illusion of Sociality on Moltbook
content touches safety-related themes; social engineering (31.9% of attacks) far outperforms prompt injection (3.7%), and adversarial posts receive 6x higher engagement than normal content. (3) The Illusion of Sociality
vLLM Hook v0: A Plug-in for Programming Model Internals on vLLM
core functions of vLLM Hook, in version 0, we demonstrate 3 use cases including prompt injection detection, enhanced retrieval-augmented generation (RAG), and activation steering. Finally, we welcome the community
Human Society-Inspired Approaches to Agentic AI Security: The 4C Framework
Although recent work has strengthened defenses against model- and pipeline-level vulnerabilities such as prompt injection, data poisoning, and tool misuse, these system-centric approaches may fail to capture risks
SMCP: Secure Model Context Protocol
security and privacy challenges. These include risks such as unauthorized access, tool poisoning, prompt injection, privilege escalation, and supply chain attacks, any of which can impact different parts
CAI find_file Agent Tool has Command Injection Vulnerability Through
Machine-Assisted Grading of Nationwide School-Leaving Essay Exams with LLMs and Statistical NLP
raters and tends to fall within the human scoring range. We also evaluate bias, prompt injection risks, and LLMs as essay writers. These findings demonstrate that a principled, rubric-driven
From Biased Chatbots to Biased Agents: Examining Role Assignment Effects on LLM Agent Robustness
shifts appear across task types and model architectures, indicating that persona conditioning and simple prompt injections can distort an agent's decision-making reliability. Our findings reveal an overlooked vulnerability
MirrorGuard: Toward Secure Computer-Use Agents via Simulation-to-Real Reasoning Correction
perform complex tasks. This autonomy introduces serious security risks: malicious instructions or visual prompt injections can trigger unsafe reasoning and cause harmful system-level actions. Existing defenses, such as detection
Agentic Artificial Intelligence (AI): Architectures, Taxonomies, and Evaluation of Large Language Model Agents
practices. Finally, we highlight open challenges, such as hallucination in action, infinite loops, and prompt injection, and outline future research directions toward more robust and reliable autonomous systems
AgenTRIM: Tool Risk Mitigation for Agentic AI
While such tools extend capability, improper tool permissions introduce security risks such as indirect prompt injection and tool misuse. We characterize these failures as unbalanced tool-driven agency. Agents
Agent Skills in the Wild: An Empirical Study of Security Vulnerabilities at Scale
skills contain at least one vulnerability, spanning 14 distinct patterns across four categories: prompt injection, data exfiltration, privilege escalation, and supply chain risks. Data exfiltration (13.3%) and privilege escalation
ToolSafe: Enhancing Tool Invocation Safety of LLM-based agents via Proactive Step-level Guardrail and Feedback
percent on average and improves benign task completion by approximately 10 percent under prompt injection attacks
CaMeLs Can Use Computers Too: System-level Security for Computer Use Agents
agents are vulnerable to prompt injection attacks, where malicious content hijacks agent behavior to steal credentials or cause financial loss. The only known robust defense is architectural isolation that strictly
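The "architectural isolation" this excerpt alludes to is commonly sketched as a dual-model pattern: a privileged planner that never reads untrusted content, and a quarantined model whose output is treated as inert data rather than as instructions. The stub below is a minimal illustration of that separation, not the paper's system; every function and type name here is a hypothetical placeholder.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Untrusted:
    """Taint wrapper marking data that may contain injected instructions."""
    text: str

def privileged_plan(user_task: str) -> list[str]:
    # The planner sees only the trusted user request, so injected text in
    # tool outputs or web pages cannot alter which actions get taken.
    # (Stub plan; a real planner would be an LLM call.)
    return ["fetch_page", "summarize", "report"]

def quarantined_summarize(doc: Untrusted) -> Untrusted:
    # A sandboxed model may read untrusted content, but its output stays
    # wrapped: downstream code can display it, never execute it as a command.
    return Untrusted(doc.text[:80])

plan = privileged_plan("summarize this web page for me")
summary = quarantined_summarize(Untrusted("IGNORE PREVIOUS INSTRUCTIONS ..."))
assert isinstance(summary, Untrusted)   # taint is preserved end to end
```

The invariant doing the security work is that nothing derived from `Untrusted` input ever flows into the planner's instruction stream, only into display or data channels.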