Paper 2601.07072v1

Overcoming the Retrieval Barrier: Indirect Prompt Injection in the Wild for LLM Systems

rely on retrieving information from external corpora. This creates a new attack surface: indirect prompt injection (IPI), where hidden instructions are planted in the corpora and hijack model behavior once

high relevance tool
Paper 2601.22569v1

Whispers of Wealth: Red-Teaming Google's Agent Payments Protocol via Prompt Injection

teaming evaluation of AP2 and identify vulnerabilities arising from indirect and direct prompt injection. We introduce two attack techniques, the Branded Whisper Attack and the Vault Whisper Attack, which manipulate

high relevance attack
Paper 2602.10453v1

The Landscape of Prompt Injection Threats in LLM Agents: From Taxonomy to Analysis

LLMs) has resulted in a paradigm shift towards autonomous agents, necessitating robust security against Prompt Injection (PI) vulnerabilities where untrusted inputs hijack agent behaviors. This SoK presents a comprehensive overview

high relevance survey
Paper 2510.05709v1

Towards Reliable and Practical LLM Security Evaluations via Bayesian Modelling

prompts are designed imperfectly, and practitioners only have a limited amount of compute to evaluate vulnerabilities. We show the improved inferential capabilities of the model in several prompt injection attack

medium relevance benchmark
Paper 2510.23675v3

QueryIPI: Query-agnostic Indirect Prompt Injection on Coding Agents

high-privilege system access, creating a high-stakes attack surface. Prior work on Indirect Prompt Injection (IPI) is mainly query-specific, requiring particular user queries as triggers and leading

high relevance attack
Paper 2602.18514v1

Trojan Horses in Recruiting: A Red-Teaming Case Study on Indirect Prompt Injection in Standard vs. Reasoning Models

automated decision-making pipelines, specifically within Human Resources (HR), the security implications of Indirect Prompt Injection (IPI) become critical. While a prevailing hypothesis posits that "Reasoning" or "Chain-of-Thought

high relevance attack
Paper 2603.15417v1

Amplification Effects in Test-Time Reinforcement Learning: Safety and Reasoning Vulnerabilities

labels. However, this reliance on test data also makes TTT methods vulnerable to harmful prompt injections. In this paper, we investigate safety vulnerabilities of TTT methods, where we study

medium relevance defense
Paper 2602.22450v1

Silent Egress: When Implicit Prompt Injection Makes LLM Agents Leak Without a Trace

URLs and calling external tools. We show that this workflow gives rise to implicit prompt injection: adversarial instructions embedded in automatically generated URL previews, including titles, metadata, and snippets

high relevance attack
Paper 2603.15714v1

How Vulnerable Are AI Agents to Indirect Prompt Injections? Insights from a Large-Scale Public Competition

data sources such as emails, documents, and code repositories. This creates exposure to indirect prompt injection attacks, where adversarial instructions embedded in external content manipulate agent behavior without user awareness

high relevance attack
Paper 2510.03204v1

FocusAgent: Simple Yet Effective Ways of Trimming the Large Context of Web Agents

high computational cost; moreover, processing full pages exposes agents to security risks such as prompt injection. Existing pruning strategies either discard relevant content or retain irrelevant context, leading to suboptimal

medium relevance benchmark
Paper 2601.10923v2

Hidden-in-Plain-Text: A Benchmark for Social-Web Indirect Prompt Injection in RAG

amplifying both their usefulness and their attack surface. Most notably, indirect prompt injection and retrieval poisoning target the web-native carriers that survive ingestion pipelines and remain a pressing concern

high relevance benchmark
Paper 2602.20720v1

AdapTools: Adaptive Tool-based Indirect Prompt Injection Attacks on Agentic LLMs

powerful for complex task execution. However, this advancement introduces critical security vulnerabilities, particularly indirect prompt injection (IPI) attacks. Existing attack methods are limited by their reliance on static patterns

high relevance tool
Paper 2602.03117v2

AgentDyn: A Dynamic Open-Ended Benchmark for Evaluating Prompt Injection Attacks of Real-World Agent Security System

However, the external data that agents consume also introduces the risk of indirect prompt injection attacks, where malicious instructions embedded in third-party content hijack agent behavior. Guided

high relevance benchmark
Paper 2512.20986v1

AegisAgent: An Autonomous Defense Agent Against Prompt Injection Attacks in LLM-HARs

understanding. However, the reliability of these systems is critically undermined by their vulnerability to prompt injection attacks, where attackers deliberately input deceptive instructions into LLMs. Traditional defenses, based on static

high relevance attack
Paper 2510.00451v1

A Call to Action for a Secure-by-Design Generative AI Paradigm

Large language models have gained widespread prominence, yet their vulnerability to prompt injection and other adversarial attacks remains a critical concern. This paper argues for a security-by-design

medium relevance attack
Paper 2512.23128v1

It's a TRAP! Task-Redirecting Agent Persuasion Benchmark for Web Agents

professional networking. Their reliance on dynamic web content, however, makes them vulnerable to prompt injection attacks: adversarial instructions hidden in interface elements that persuade the agent to divert from

medium relevance benchmark
Paper 2602.05484v1

Clouding the Mirror: Stealthy Prompt Injection Attacks Targeting LLM-based Phishing Detection

phishing site. While these approaches are promising, LLMs are inherently vulnerable to prompt injection (PI). Because attackers can fully control various elements of phishing sites, this creates the potential

high relevance attack
Paper 2512.23557v1

Toward Trustworthy Agentic AI: A Multimodal Framework for Preventing Prompt Injection Attacks

GraphChain. Nevertheless, this agentic environment increases the likelihood of multimodal prompt injection (PI) attacks, in which concealed or malicious instructions carried in text, images, metadata, or agent

high relevance tool
Paper 2510.09462v2

Adaptive Attacks on Trusted Monitors Subvert AI Control Protocols

simple adaptive attack vector by which the attacker embeds publicly known or zero-shot prompt injections in the model outputs. Using this tactic, frontier models consistently evade diverse monitors

high relevance attack
Paper 2602.07398v1

AgentSys: Secure and Dynamic LLM Agents Through Explicit Hierarchical Memory Management

Indirect prompt injection threatens LLM agents by embedding malicious instructions in external content, enabling unauthorized actions and data theft. LLM agents maintain working memory through their context window, which stores

medium relevance attack