Paper 2603.10163v1

Compatibility at a Cost: Systematic Discovery and Exploitation of MCP Clause-Compliance Vulnerabilities

attack surface that allows adversaries to mount multiple attacks (e.g., silent prompt injection, DoS), termed \emph{compatibility-abusing attacks}. In this work, we present the first systematic framework

high relevance attack
Paper 2603.07708v1

VoiceSHIELD-Small: Real-Time Malicious Speech Detection and Transcription

people to interact with AI systems. This also brings new security risks, such as prompt injection, social engineering, and harmful voice commands. Traditional security methods rely on converting speech

medium relevance defense
Paper 2603.04469v1

Beyond Input Guardrails: Reconstructing Cross-Agent Semantic Flows for Execution-Aware Attack Detection

autonomous execution and unstructured inter-agent communication introduces severe risks, such as indirect prompt injection, that easily circumvent conventional input guardrails. To address this, we propose \SysName, a framework that

high relevance attack
Paper 2603.03633v1

Goal-Driven Risk Assessment for LLM-Powered Systems: A Healthcare Case Study

challenges emerge due to the potential cyber kill chain cycles that combine adversarial model, prompt injection and conventional cyber attacks. Threat modeling methods enable the system designers to identify potential

medium relevance tool
Paper 2603.04459v2

Benchmark of Benchmarks: Unpacking Influence and Code Repository Quality in LLM Safety Benchmarks

human assessment) on LLM safety benchmarks, analyzing 31 benchmarks and 382 non-benchmarks across prompt injection, jailbreak, and hallucination. We find that benchmark papers show no significant advantage in academic

medium relevance benchmark
Paper 2603.20214v1

Beyond Detection: Governing GenAI in Academic Peer Review as a Sociotechnical Challenge

highlight concerns about epistemic harm, over-standardization, unclear responsibility, and adversarial risks such as prompt injection. User interviews reveal how structural strain and institutional policy ambiguity shift interpretive and enforcement

medium relevance survey
Paper 2603.00991v1

Tracking Capabilities for Safer Agents

challenges: agents might leak private information, cause unintended side effects, or be manipulated through prompt injection. To address these challenges, we propose to put the agent in a programming-language

medium relevance attack
Paper 2603.00472v1

From Goals to Aspects, Revisited: An NFR Pattern Language for Agentic AI Systems

patterns address agent-specific crosscutting concerns absent from traditional AOP literature: tool-scope sandboxing, prompt injection detection, token budget management, and action audit trails. We extend the V-graph model

medium relevance tool
Paper 2603.00200v1

LiaisonAgent: A Multi-Agent Framework for Autonomous Risk Investigation and Governance

Furthermore, the system exhibits significant resilience against out-of-distribution noise and adversarial prompt injections, while achieving a 92.7% reduction in manual investigation overhead

medium relevance tool
Paper 2603.00164v1

Reverse CAPTCHA: Evaluating LLM Susceptibility to Invisible Unicode Instruction Injection

statistically significant (p < 0.05, Bonferroni-corrected). These results highlight an underexplored attack surface for prompt injection via invisible Unicode payloads

high relevance attack
Paper 2602.20867v1

SoK: Agentic Skills -- Beyond Tool Use in LLM Agents

analyze the security and governance implications of skill-based agents, covering supply-chain risks, prompt injection via skill payloads, and trust-tiered execution, grounded by a case study

medium relevance survey
Paper 2603.04443v1

AMV-L: Lifecycle-Managed Agent Memory for Tail-Latency Control in Long-Running LLM Systems

running workloads against two baselines: TTL and an LRU working-set policy, with fixed prompt-injection caps. Relative to TTL, AMV-L improves throughput by 3.1x and reduces latency

medium relevance tool
Paper 2602.16708v2

Policy Compiler for Secure Agentic Systems

specific restructuring required. We evaluate PCAS on three case studies: information flow policies for prompt injection defense, approval workflows in a multi-agent pharmacovigilance system, and organizational policies for customer

medium relevance attack
Paper 2602.13477v2

OMNI-LEAK: Orchestrator Multi-Agent Network Induced Data Leakage

OMNI-LEAK, that compromises several agents to leak sensitive data through a single indirect prompt injection, even in the presence of data access control. We report the susceptibility of frontier

medium relevance attack
Paper 2602.11247v2

Peak + Accumulation: A Proxy-Level Scoring Formula for Multi-Turn LLM Attack Detection

Multi-turn prompt injection attacks distribute malicious intent across multiple conversation turns, exploiting the assumption that each turn is evaluated independently. While single-turn detection has been extensively studied

high relevance attack
Paper 2602.10915v3

Blind Gods and Broken Screens: Architecting a Secure, Intent-Centric Mobile Agent Operating System

Action Execution - revealing critical flaws such as fake App identity, visual spoofing, indirect prompt injection, and unauthorized privilege escalation stemming from a reliance on unstructured visual data. To address these

medium relevance benchmark
Paper 2602.09433v1

Autonomous Action Runtime Management (AARM): A System Specification for Securing AI-Driven Actions at Runtime

records tamper-evident receipts for forensic reconstruction. We formalize a threat model addressing prompt injection, confused deputy attacks, data exfiltration, and intent drift. We introduce an action classification framework distinguishing

medium relevance tool
Paper 2602.08995v1

When Actions Go Off-Task: Detecting and Correcting Misaligned Actions in Computer-Use Agents

user's original intent. Such misaligned actions may arise from external attacks (e.g., indirect prompt injection) or from internal limitations (e.g., erroneous reasoning). They not only expose CUAs to safety

medium relevance benchmark
Paper 2602.07381v1

When the Model Said 'No Comment', We Knew Helpfulness Was Dead, Honesty Was Alive, and Safety Was Terrified

experts. To resolve this, we propose AlignX, a two-stage framework. Stage 1 uses prompt-injected fine-tuning to extract axis-specific task features, mitigating catastrophic forgetting. Stage 2 deploys

low relevance defense
Paper 2602.13284v1

Agents in the Wild: Safety, Society, and the Illusion of Sociality on Moltbook

content touches safety-related themes; social engineering (31.9% of attacks) far outperforms prompt injection (3.7%), and adversarial posts receive 6x higher engagement than normal content. (3) The Illusion of Sociality

medium relevance defense