197 results
Paper 2510.27062v1

Consistency Training Helps Stop Sycophancy and Jailbreaks

LLMs' factuality and refusal training can be compromised by simple changes to a prompt. Models often adopt user beliefs (sycophancy) or satisfy inappropriate requests that are wrapped within special

high relevance defense
Paper 2510.01529v2

Bypassing Prompt Guards in Production with Controlled-Release Prompting

attack exploits a resource asymmetry between the prompt guard and the main LLM, encoding a jailbreak prompt that lightweight guards cannot decode but the main model can. This reveals

medium relevance attack
Paper 2511.19009v1

Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation

various natural language processing tasks, yet they also harbor safety vulnerabilities. To enhance LLM safety, various jailbreak defense methods have been proposed to guard against harmful outputs. However, improvements

medium relevance defense
Paper 2511.14140v1

Beyond Fixed and Dynamic Prompts: Embedded Jailbreak Templates for Advancing LLM Security

having the LLM generate entire templates, which often compromises intent clarity and reproducibility. To address this gap, this paper introduces the Embedded Jailbreak Template, which preserves the structure of existing

high relevance attack
Paper 2603.12023v1

Cascade: Composing Software-Hardware Attack Gadgets for Adversarial Threat Amplification in Compound AI Systems

injection flaw along with a guardrail Rowhammer attack to inject an unaltered jailbreak prompt into an LLM, resulting in an AI safety violation, and (2) Manipulating a knowledge database

high relevance tool
Paper 2511.16278v1

"To Survive, I Must Defect": Jailbreaking LLMs via the Game-Theory Scenarios

maintains high ASR while lowering detection under prompt-guard models. Beyond benchmarks, GTA jailbreaks real-world LLM applications and reports longitudinal safety monitoring of popular HuggingFace LLMs

high relevance attack
Paper 2512.20168v1

Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography

Despite these efforts, recent studies have shown that jailbreak attacks can circumvent alignment and elicit unsafe outputs. Currently, most existing jailbreak methods are tailored for open-source models and exhibit

high relevance tool
Paper 2601.19487v1

LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment

jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector steering methods adjust the magnitude of answer vectors, but this creates a fundamental trade-off -- reducing jailbreak increases

high relevance defense
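The LLM-VA snippet describes steering activations along an "answer" vector, with magnitude trading jailbreak reduction against over-refusal. A minimal sketch of magnitude-based activation steering under that framing, assuming a HuggingFace-style decoder exposed as model.model.layers and a precomputed direction vector (hypothetical names, not the paper's implementation):

```python
import torch

def add_steering_hook(model, layer_idx, direction, alpha):
    """Shift hidden states along a precomputed answer/refusal direction.
    Larger alpha suppresses harmful compliance (fewer jailbreaks) but
    pushes benign queries toward refusal -- the trade-off LLM-VA targets."""
    direction = direction / direction.norm()  # steer by a unit vector

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * direction.to(hidden)  # match dtype/device
        return (steered,) + output[1:] if isinstance(output, tuple) else steered

    # hypothetical layer layout; adjust for the actual architecture
    return model.model.layers[layer_idx].register_forward_hook(hook)
```

The returned handle can be removed with handle.remove() after generation. Per the snippet, pure magnitude scaling is exactly what creates the trade-off; LLM-VA's alignment of the vector itself is not captured by this sketch.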
Paper 2511.13548v1

ForgeDAN: An Evolutionary Framework for Jailbreaking Aligned Large Language Models

process toward semantically relevant and harmful outputs; finally, ForgeDAN integrates dual-dimensional jailbreak judgment, leveraging an LLM-based classifier to jointly assess model compliance and output harmfulness, thereby reducing false

high relevance tool
Paper 2601.15706v1

Improving Methodologies for LLM Evaluations Across Global Languages

five harm categories (privacy, non-violent crime, violent crime, intellectual property and jailbreak robustness), using both LLM-as-a-judge and human annotation. The exercise shows how safety behaviours

medium relevance benchmark
Paper 2510.17904v2

BreakFun: Jailbreaking LLMs via Schema Exploitation

paradoxically vulnerable. In this paper, we investigate this vulnerability through BreakFun, a jailbreak methodology that weaponizes an LLM's adherence to structured schemas. BreakFun employs a three-part prompt that

high relevance attack
Paper 2509.23037v1

GuardNet: Graph-Attention Filtering for Jailbreak Defense in Large Language Models

cross-domain evaluations, making it a practical and robust defense against jailbreak threats in real-world LLM deployments

high relevance defense
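GuardNet's snippet points to graph-attention filtering of prompts before they reach the model. A toy sketch of the general idea, not the paper's architecture: treat tokens as nodes, restrict attention to a local adjacency, and pool into a binary jailbreak/benign head:

```python
import torch
import torch.nn as nn

class GraphAttentionFilter(nn.Module):
    """Toy graph-attention prompt filter: tokens are nodes, edges are a
    sliding window, and a pooled embedding feeds a jailbreak/benign head."""
    def __init__(self, vocab_size, dim=128, heads=4, window=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 2)
        self.window = window

    def forward(self, token_ids):  # token_ids: (batch, seq)
        x = self.embed(token_ids)
        seq = token_ids.size(1)
        idx = torch.arange(seq, device=token_ids.device)
        # True = masked out: only tokens within the window attend to each other
        mask = (idx[None, :] - idx[:, None]).abs() > self.window
        x, _ = self.attn(x, x, x, attn_mask=mask)
        return self.head(x.mean(dim=1))  # logits: [benign, jailbreak]
```

A prompt would be rejected before reaching the guarded LLM when the jailbreak logit wins; GuardNet's actual graph construction and attention design are in the paper.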
Paper 2601.22240v1

A Systematic Literature Review on LLM Defenses Against Prompt Injection and Jailbreaking: Expanding NIST Taxonomy

The rapid advancement and widespread adoption of generative artificial intelligence

high relevance survey
Paper 2601.18998v1

Malicious Repurposing of Open Science Artefacts by Using Large Language Models

introducing an end-to-end pipeline that first bypasses LLM safeguards through persuasion-based jailbreaking, then reinterprets NLP papers to identify and repurpose their artefacts (datasets, methods, and tools

medium relevance benchmark
Paper 2601.01627v1

JMedEthicBench: A Multi-Turn Conversational Benchmark for Evaluating Medical Safety in Japanese Large Language Models

contains over 50,000 adversarial conversations generated using seven automatically discovered jailbreak strategies. Using a dual-LLM scoring protocol, we evaluate 27 models and find that commercial models maintain robust

medium relevance benchmark
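The dual-LLM scoring protocol in the JMedEthicBench snippet is not detailed here; one plausible shape, sketched with a hypothetical rate() judge API, is two independent judges whose disagreements are escalated rather than averaged away:

```python
def dual_llm_safety_score(response, judge_a, judge_b, max_gap=1):
    """Score one model response with two independent judge LLMs.
    judge_a/judge_b expose a hypothetical rate(text) -> int in [1, 5]
    (1 = clearly unsafe, 5 = clearly safe)."""
    a, b = judge_a.rate(response), judge_b.rate(response)
    if abs(a - b) > max_gap:
        return None  # disagreement: flag for human adjudication
    return (a + b) / 2
```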
Paper 2509.23882v2

Quant Fever, Reasoning Blackholes, Schrodinger's Compliance, and More: Probing GPT-OSS-20B

probes the model's behavior under different adversarial conditions. Using the Jailbreak Oracle (JO) [1], a systematic LLM evaluation tool, the study uncovers several failure modes including quant fever, reasoning

medium relevance benchmark
Paper 2509.21761v2

Backdoor Attribution: Elucidating and Controlling Backdoor in Language Models

these attacks remain a black box. Previous research on interpretability for LLM safety tends to focus on alignment, jailbreak, and hallucination, but overlooks backdoor mechanisms, making it difficult to understand

medium relevance attack
Paper 2602.14161v1

When Benchmarks Lie: Evaluating Malicious Prompt Classifiers Under True Distribution Shift

Detecting prompt injection and jailbreak attacks is critical for deploying LLM-based agents safely. As agents increasingly process untrusted data from emails, documents, tool outputs, and external APIs, robust attack

medium relevance benchmark
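The distribution-shift concern in the snippet above is straightforward to make concrete: score the same classifier on its native test split and on prompts drawn from the sources agents actually see, then compare. A sketch, assuming a hypothetical classifier.score(prompt) -> float API:

```python
from sklearn.metrics import roc_auc_score

def auroc_under_shift(classifier, splits):
    """splits maps a name ('in_distribution', 'emails', 'tool_outputs', ...)
    to a list of (prompt, label) pairs, label 1 = malicious. A large AUROC
    drop on shifted splits suggests the benchmark score was overfit."""
    results = {}
    for name, pairs in splits.items():
        prompts, labels = zip(*pairs)
        scores = [classifier.score(p) for p in prompts]  # hypothetical API
        results[name] = roc_auc_score(list(labels), scores)
    return results
```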
Paper 2511.18581v2

TASO: Jailbreak LLMs via Alternative Template and Suffix Optimization

Many recent studies have shown that LLMs are vulnerable to jailbreak attacks, where an attacker can perturb the input of an LLM to induce it to generate an output

high relevance attack
Paper 2601.04034v1

HoneyTrap: Deceiving Large Language Model Attackers to Honeypot Traps with Resilient Multi-Agent Defense

address this critical challenge, we propose HoneyTrap, a novel deceptive LLM defense framework leveraging collaborative defenders to counter jailbreak attacks. It integrates four defensive agents, Threat Interceptor, Misdirection Controller, Forensic

high relevance defense
Page 3 of 10