Paper 2603.20976v1

Detection of adversarial intent in Human-AI teams using LLMs

useful, it also exposes them to a broad range of attacks, including data poisoning, prompt injection, and even prompt engineering. Through these attack vectors, malicious actors can manipulate

medium relevance attack
Paper 2510.24801v1

Fortytwo: Swarm Inference with Peer-Ranked Consensus

evaluation indicates higher accuracy and strong resilience to adversarial and noisy free-form prompting (e.g., prompt-injection degradation of only 0.12% versus 6.20% for a monolithic single-model baseline), while

medium relevance benchmark
Paper 2602.19547v1

CIBER: A Comprehensive Benchmark for Security Evaluation of Code Interpreter Agents

vulnerability of code interpreter agents against four major types of adversarial attacks: Direct/Indirect Prompt Injection, Memory Poisoning, and Prompt-based Backdoor. We evaluate six foundation models across two representative code

medium relevance benchmark
Paper 2512.04520v1

Boundary-Aware Test-Time Adaptation for Zero-Shot Medical Image Segmentation

test-time adaptation. This framework integrates two key mechanisms: (1) The encoder-level Gaussian prompt injection embeds Gaussian-based prompts directly into the image encoder, providing explicit guidance for initial

medium relevance benchmark
Paper 2601.06884v1

Paraphrasing Adversarial Attack on LLM-as-a-Reviewer

growing attention, making it essential to examine their potential vulnerabilities. Prior attacks rely on prompt injection, which alters manuscript content and conflates injection susceptibility with evaluation robustness. We propose

high relevance survey
Paper 2601.03868v2

What Matters For Safety Alignment?

services, highlighting an urgent need for architectural and deployment safeguards. Fourth, roleplay, prompt injection, and gradient-based search for adversarial prompts are the predominant methodologies for eliciting unaligned behaviors

medium relevance defense
Paper 2512.19011v2

PromptScreen: Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline

Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)-based systems. We present PromptScreen, an efficient and systematically evaluated defense architecture that mitigates these threats

high relevance attack
Paper 2512.14860v1

Penetration Testing of Agentic AI: A Comparative Security Analysis Across Models and Frameworks

functionality of a university information management system and 13 distinct attack scenarios that span prompt injection, Server Side Request Forgery (SSRF), SQL injection, and tool misuse. Our 130 total test

medium relevance tool
Paper 2510.20333v3

GhostEI-Bench: Do Mobile Agents Resilience to Environmental Injection in Dynamic On-Device Environments?

inter-app interactions, exposes them to a unique and underexplored threat vector: environmental injection. Unlike prompt-based attacks that manipulate textual instructions, environmental injection corrupts an agent's visual perception

high relevance attack
Paper 2603.19974v1

Trojan's Whisper: Stealthy Manipulation of OpenClaw through Injected Bootstrapped Guidance

stealthy attack vector that embeds adversarial operational narratives into bootstrap guidance files. Unlike traditional prompt injection, which relies on explicit malicious instructions, guidance injection manipulates the agent's reasoning context

medium relevance benchmark
Paper 2510.05025v1

Imperceptible Jailbreaking against Large Language Models

imperceptible jailbreaks achieve high attack success rates against four aligned LLMs and generalize to prompt injection attacks, all without producing any visible modifications in the written prompt. Our code

high relevance attack
Paper 2601.08490v1

BenchOverflow: Measuring Overflow in Large Language Models via Plain-Text Prompts

large language models (LLMs) in which plain-text prompts elicit excessive outputs, a phenomenon we term Overflow. Unlike jailbreaks or prompt injection, Overflow arises under ordinary interaction settings

medium relevance benchmark
Paper 2603.12023v1

Cascade: Composing Software-Hardware Attack Gadgets for Adversarial Threat Amplification in Compound AI Systems

with algorithmic weaknesses: (1) Exploiting a software code injection flaw along with a guardrail Rowhammer attack to inject an unaltered jailbreak prompt into an LLM, resulting in an AI safety

high relevance tool
Paper 2602.10481v1

Protecting Context and Prompts: Deterministic Security for Non-Deterministic AI

Large Language Model (LLM) applications are vulnerable to prompt injection and context manipulation attacks that traditional security models cannot prevent. We introduce two novel primitives--authenticated prompts and authenticated context

medium relevance benchmark
Paper 2602.08062v1

Efficient and Adaptable Detection of Malicious LLM Prompts via Bootstrap Aggregation

However, these systems remain susceptible to malicious prompts that induce unsafe or policy-violating behavior through harmful requests, jailbreak techniques, and prompt injection attacks. Existing defenses face fundamental limitations: black

medium relevance defense
Paper 2601.05755v2

VIGIL: Defending LLM Agents Against Tool Stream Injection via Verify-Before-Commit

agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated metadata and runtime feedback hijack execution flow. Existing defenses encounter

high relevance tool
Paper 2603.01564v1

From Secure Agentic AI to Secure Agentic Web: Challenges, Threats, and Future Directions

Secure Agentic Web. We first summarize a component-aligned threat taxonomy covering prompt abuse, environment injection, memory attacks, toolchain abuse, model tampering, and agent network attacks. We then review defense

medium relevance survey
Paper 2510.21057v2

Soft Instruction De-escalation Defense

agentic systems that interact with an external environment; this makes them susceptible to prompt injections when dealing with untrusted data. To overcome this limitation, we propose SIC (Soft Instruction Control

medium relevance defense
Paper 2602.10498v1

When Skills Lie: Hidden-Comment Injection in LLM Agents

Skills to describe available tools and recommended procedures. We study a hidden-comment prompt injection risk in this documentation layer: when a Markdown Skill is rendered to HTML, HTML comment

high relevance attack

TaskWeaver has Protection Mechanism Failure and Server-Side Request Forgery

CVSS 6.5 agentos-taskweaver View details
Previous Page 7 of 14 Next