277 results
Paper 2603.19423v1

The Autonomy Tax: Defense Training Breaks LLM Agents

autonomously complete complex multi-step tasks. Practitioners deploy defense-trained models to protect against prompt injection attacks that manipulate agent behavior through malicious observations or retrieved content. We reveal

medium relevance defense
Paper 2602.01378v1

Context Dependence and Reliability in Autoregressive Language Models

unpredictable shifts in attribution scores, undermining interpretability and raising concerns about risks like prompt injection. This work addresses the challenge of distinguishing essential context elements from correlated ones. We introduce

medium relevance attack
Paper 2511.23174v1

Are LLMs Good Safety Agents or a Propaganda Engine?

approaches (erasing the concept of politics); and, 2) vulnerability of models on PSP through prompt injection attacks (PIAs). Associating censorship with refusals on content with masked implicit intent, we find

medium relevance defense
Paper 2603.11853v1

OpenClaw PRISM: A Zero-Fork, Defense-in-Depth Runtime Security Layer for Tool-Augmented LLM Agents

augmented LLM agents introduce security risks that extend beyond user-input filtering, including indirect prompt injection through fetched content, unsafe tool execution, credential leakage, and tampering with local control files

medium relevance tool
Paper 2603.01574v1

DualSentinel: A Lightweight Framework for Detecting Targeted Attacks in Black-box LLM via Dual Entropy Lull Pattern

APIs, but their trustworthiness may be critically undermined by targeted attacks like backdoor and prompt injection attacks, which secretly force LLMs to generate specific malicious sequences. Existing defensive approaches

high relevance tool
Paper 2603.21975v1

SecureBreak -- A dataset towards safe and secure models

growing body of scientific literature showing that attacks, such as jailbreaking and prompt injection, can bypass existing security alignment mechanisms. As a consequence, additional security strategies are needed both

medium relevance benchmark
Paper 2603.20381v1

The production of meaning in the processing of natural language

word order, and discuss the information-theoretic constraints that genuine contextuality imposes on prompt injection defenses and its human analogue, whereby careful construction and maintenance of social contextuality

medium relevance benchmark
Paper 2603.17419v1

Caging the Agents: A Zero Trust Security Architecture for Autonomous AI in Healthcare

instructions, sensitive information disclosure, identity spoofing, cross-agent propagation of unsafe practices, and indirect prompt injection through external resources [7]. In healthcare environments processing Protected Health Information, every such vulnerability

medium relevance attack
Paper 2603.18063v1

MCP-38: A Comprehensive Threat Taxonomy for Model Context Protocol Systems (v1.0)

addresses critical threats arising from MCP's semantic attack surface (tool description poisoning, indirect prompt injection, parasitic tool chaining, and dynamic trust violations), none of which are adequately captured

medium relevance survey
Paper 2603.16215v1

CoMAI: A Collaborative Multi-Agent Framework for Robust and Equitable Interview Evaluation

scoring, and summarization. These agents work collaboratively to provide multi-layered security defenses against prompt injection, support multidimensional evaluation with adaptive difficulty adjustment, and enable rubric-based structured scoring that

medium relevance benchmark
Paper 2603.12230v1

Security Considerations for Artificial Intelligence Agents

across tools, connectors, hosting boundaries, and multi-agent coordination, with particular emphasis on indirect prompt injection, confused-deputy behavior, and cascading failures in long-running workflows. We then assess current

medium relevance benchmark
Paper 2603.11619v1

Taming OpenClaw: Security Analysis and Mitigation of Autonomous LLM Agent Threats

execution, and systematically examine compound threats across the agent's operational lifecycle, including indirect prompt injection, skill supply chain contamination, memory poisoning, and intent drift. Through detailed case studies

medium relevance defense
Paper 2603.11460v2

Follow the Saliency: Supervised Saliency for Retrieval-augmented Dense Video Captioning

that drives retrieval via saliency-guided segmentation and informs caption generation through explicit Saliency Prompts injected into the decoder. By enforcing saliency-constrained segmentation, our method produces temporally coherent segments

low relevance benchmark
Paper 2603.10163v1

Compatibility at a Cost: Systematic Discovery and Exploitation of MCP Clause-Compliance Vulnerabilities

attack surface that allows adversaries to achieve multiple attacks (e.g., silent prompt injection, DoS, etc.), named as \emph{compatibility-abusing attacks}. In this work, we present the first systematic framework

high relevance attack
Paper 2603.07708v1

VoiceSHIELD-Small: Real-Time Malicious Speech Detection and Transcription

people to interact with AI systems. This also brings new security risks, such as prompt injection, social engineering, and harmful voice commands. Traditional security methods rely on converting speech

medium relevance defense
Paper 2603.04469v1

Beyond Input Guardrails: Reconstructing Cross-Agent Semantic Flows for Execution-Aware Attack Detection

autonomous execution and unstructured inter-agent communication introduces severe risks, such as indirect prompt injection, that easily circumvent conventional input guardrails. To address this, we propose \SysName, a framework that

high relevance attack
Paper 2603.03633v1

Goal-Driven Risk Assessment for LLM-Powered Systems: A Healthcare Case Study

challenges emerge due to the potential cyber kill chain cycles that combine adversarial model, prompt injection and conventional cyber attacks. Threat modeling methods enable the system designers to identify potential

medium relevance tool
Paper 2603.04459v2

Benchmark of Benchmarks: Unpacking Influence and Code Repository Quality in LLM Safety Benchmarks

human assessment) on LLM safety benchmarks, analyzing 31 benchmarks and 382 non-benchmarks across prompt injection, jailbreak, and hallucination. We find that benchmark papers show no significant advantage in academic

medium relevance benchmark
Paper 2603.20214v1

Beyond Detection: Governing GenAI in Academic Peer Review as a Sociotechnical Challenge

highlight concerns about epistemic harm, over-standardization, unclear responsibility, and adversarial risks such as prompt injection. User interviews reveal how structural strain and institutional policy ambiguity shift interpretive and enforcement

medium relevance survey
Paper 2603.00991v1

Tracking Capabilities for Safer Agents

challenges: agents might leak private information, cause unintended side effects, or be manipulated through prompt injection. To address these challenges, we propose to put the agent in a programming-language

medium relevance attack
Page 9 of 14