Paper 2512.23132v1

Multi-Agent Framework for Threat Mitigation and Resilience in AI-Based Systems

finance, healthcare, and critical infrastructure, making them targets for data poisoning, model extraction, prompt injection, automated jailbreaking, and preference-guided black-box attacks that exploit model comparisons. Larger models

medium relevance tool
Paper 2512.23032v1

Is Chain-of-Thought Really Not Explainability? Chain-of-Thought Can Be Faithful without Hint Verbalization

using the Biasing Features metric, labels a CoT as unfaithful if it omits a prompt-injected hint that affected the prediction. We argue this metric confuses unfaithfulness with incompleteness

low relevance benchmark
Paper 2512.21999v1

Look Closer! An Adversarial Parametric Editing Framework for Hallucination Mitigation in VLMs

analyzing differential hidden states of response pairs. Then, these clusters are fine-tuned using prompts injected with adversarial tuned prefixes that are optimized to maximize visual neglect, thereby forcing

low relevance attack
Paper 2601.08843v1

Rubric-Conditioned LLM Grading: Alignment, Uncertainty, and Robustness

remaining subset. Additionally, robustness experiments reveal that while the model is resilient to prompt injection, it is sensitive to synonym substitutions. Our work provides critical insights into the capabilities

medium relevance defense
Paper 2512.12921v1

Cisco Integrated AI Security and Safety Framework Report

outputs), model and data integrity compromise (e.g., poisoning, supply-chain tampering), runtime manipulations (e.g., prompt injection, tool and agent misuse), and ecosystem risks (e.g., orchestration abuse, multi-agent collusion). Existing

medium relevance tool
Paper 2512.08737v1

Insured Agents: A Decentralized Trust Insurance Mechanism for Agentic Economy

despite the empirical reality that LLM agents remain unreliable, hallucinated, manipulable, and vulnerable to prompt-injection and tool-abuse. A natural response is "agents-at-stake": binding economically meaningful, slashable

medium relevance attack
Paper 2512.06914v2

SoK: Trust-Authorization Mismatch in LLM Agent Interactions

stages-Belief Formation, Intent Generation, and Permission Grant-we demonstrate that diverse threats, from prompt injection to tool poisoning, share a common root cause: the desynchronization between dynamic trust states

medium relevance survey
Paper 2512.06716v2

Cognitive Control Architecture (CCA): A Lifecycle Supervision Framework for Robustly Aligned AI Agents

Autonomous Large Language Model (LLM) agents exhibit significant vulnerability to Indirect Prompt Injection (IPI) attacks. These attacks hijack agent behavior by polluting external information sources, exploiting fundamental trade-offs between

medium relevance tool
Paper 2512.06556v1

Securing the Model Context Protocol: Defending LLMs Against Tool Poisoning and Adversarial Attacks

workflows. However, this autonomy creates a largely overlooked security gap. Existing defenses focus on prompt-injection attacks and fail to address threats embedded in tool metadata, leaving MCP-based systems

high relevance tool
Paper 2512.04895v1

Chameleon: Adaptive Adversarial Agents for Scaling-Based Visual Prompt Injection in Multimodal AI Systems

Multimodal Artificial Intelligence (AI) systems, particularly Vision-Language Models (VLMs

high relevance tool
Paper 2512.01295v2

Systems Security Foundations for Agentic Computing

third-party servers. For example, a malicious adversary can cause data exfiltration by executing prompt injection attacks, as well as other unwarranted behavior. These security concerns have recently motivated researchers

medium relevance tool
Paper 2512.00742v1

On the Regulatory Potential of User Interfaces for AI Agent Governance

consequential risks. Prior proposals for governing AI agents primarily target system-level safeguards (e.g., prompt injection monitors) or agent infrastructure (e.g., agent IDs). In this work, we explore a complementary

medium relevance attack
Paper 2511.19483v1

Z-Space: A Multi-Agent Tool Orchestration Framework for Enterprise-Grade LLM Automation

become a core challenge restricting system practicality. Existing approaches generally rely on full-prompt injection or static semantic retrieval, facing issues including semantic disconnection between user queries and tool descriptions

medium relevance tool
Paper 2511.19477v1

Building Browser Agents: Architecture, Security, and Practical Solutions

performance; architectural decisions determine success or failure. Security analysis of real-world incidents reveals prompt injection attacks make general-purpose autonomous operation fundamentally unsafe. The paper argues against developing general

medium relevance benchmark
Paper 2511.15203v1

Taxonomy, Evaluation and Exploitation of IPI-Centric LLM Agent Defense Frameworks

based agents with function-calling capabilities are increasingly deployed, but remain vulnerable to Indirect Prompt Injection (IPI) attacks that hijack their tool calls. In response, numerous IPI-centric defense frameworks

high relevance survey
Paper 2511.12423v1

GRAPHTEXTACK: A Realistic Black-Box Node Injection Attack on LLM-Enhanced GNNs

vulnerabilities: GNNs are sensitive to structural perturbations, while LLM-derived features are vulnerable to prompt injection and adversarial phrasing. While existing adversarial attacks largely perturb structure or text independently

high relevance attack
Paper 2511.06212v1

RAG-targeted Adversarial Attack on LLM-based Threat Detection and Mitigation Framework

expands the attack surface, putting entire networks at risk by introducing vulnerabilities such as prompt injection and data poisoning. In this work, we attack an LLM-based IoT attack analysis

high relevance tool
Paper 2511.05919v2

Injecting Falsehoods: Adversarial Man-in-the-Middle Attacks Undermining Factual Recall in LLMs

attacks. Here, we propose the first principled attack evaluation on LLM factual memory under prompt injection via Xmera, our novel, theory-grounded MitM framework. By perturbing the input given

high relevance attack
Paper 2511.05867v3

Can LLM Infer Risk Information From MCP Server System Logs?

when the MCP server is compromised or untrustworthy. While prior benchmarks primarily focus on prompt injection attacks or analyze the vulnerabilities of LLM-MCP interaction trajectories, limited attention has been

medium relevance tool
Paper 2511.03434v1

Inter-Agent Trust Models: A Comparative Study of Brief, Claim, Proof, Stake, Reputation and Constraint in Agentic Web Protocol Design-A2A, AP2, ERC-8004, and Beyond

assumptions, attack surfaces, and design trade-offs, with particular emphasis on LLM-specific fragilities-prompt injection, sycophancy/nudge-susceptibility, hallucination, deception, and misalignment-that render purely reputational or claim-only approaches brittle

medium relevance attack
Previous Page 12 of 15 Next