195 results in 62ms
Paper 2603.04859v1

Osmosis Distillation: Model Hijacking with the Fewest Samples

generated by dataset distillation methods, where an adversary can perform a model hijacking attack with only a few poisoned samples in the synthetic dataset. To reveal this threat, we propose

medium relevance benchmark
Paper 2603.12989v1

Test-Time Attention Purification for Backdoored Large Vision Language Models

defenses across diverse datasets and backdoor attack types, while preserving the model's utility on both clean and poisoned samples

medium relevance benchmark
Paper 2509.26032v2

Stealthy Yet Effective: Distribution-Preserving Backdoor Attacks on Graph Classification

semantic deviation caused by label flipping, both of which make poisoned graphs easily detectable by anomaly detection models. To address this, we propose DPSBA, a clean-label backdoor framework that

high relevance attack
Paper 2603.18034v1

Semantic Chameleon: Corpus-Dependent Poisoning Attacks and Defenses in RAG Systems

documents are preferentially retrieved at inference time, enabling targeted manipulation of model outputs. We study gradient-guided corpus poisoning attacks against modern RAG pipelines and evaluate retrieval-layer defenses that

high relevance attack
Paper 2602.11213v1

Transferable Backdoor Attacks for Code Models via Sharpness-Aware Adversarial Perturbation

software development but remain vulnerable to backdoor attacks via poisoned training data. Existing backdoor attacks on code models face a fundamental trade-off between transferability and stealthiness. Static trigger-based

high relevance attack
Paper 2602.06532v1

Dependable Artificial Intelligence with Reliability and Security (DAIReS): A Unified Syndrome Decoding Approach for Hallucination and Backdoor Trigger Detection

models, including Large Language Models (LLMs), are characterized by a range of system-level attributes such as security and reliability. Recent studies have demonstrated that ML models are vulnerable

medium relevance defense
Paper 2509.19921v2

On the Fragility of Contribution Score Computation in Federated Learning

alter the final scores. Second, we explore vulnerabilities posed by poisoning attacks, where malicious participants strategically manipulate their model updates to inflate their own contribution scores or reduce the importance

medium relevance benchmark
Paper 2603.01019v1

BadRSSD: Backdoor Attacks on Regularized Self-Supervised Diffusion Models

backdoor attack targeting the representation layer of self-supervised diffusion models. Specifically, it hijacks the semantic representations of poisoned samples with triggers in Principal Component Analysis (PCA) space toward those

high relevance attack
Paper 2601.05504v2

Memory Poisoning Attack and Defense on Memory Based LLM-Agents

Large language model agents equipped with persistent memory are vulnerable to memory poisoning attacks, where adversaries inject malicious instructions through query only interactions that corrupt the agents long term memory

high relevance attack
Paper 2509.21761v2

Backdoor Attribution: Elucidating and Controlling Backdoor in Language Models

Fine-tuned Large Language Models (LLMs) are vulnerable to backdoor attacks through data poisoning, yet the internal mechanisms governing these attacks remain a black box. Previous research on interpretability

medium relevance attack
Paper 2602.04899v1

Phantom Transfer: Data-level Defences are Insufficient Against Data Poisoning

data-level defences are insufficient for stopping sophisticated data poisoning attacks. We suggest that future work should focus on model audits and white-box security methods

medium relevance attack
Paper 2602.02629v1

Trustworthy Blockchain-based Federated Learning for Electronic Health Records: Securing Participant Identity with Decentralized Identifiers and Verifiable Credentials

patient data. Despite its potential, FL remains vulnerable to poisoning and Sybil attacks, in which malicious participants corrupt the global model or infiltrate the network using fake identities. While recent

medium relevance benchmark
Paper 2602.19547v1

CIBER: A Comprehensive Benchmark for Security Evaluation of Code Interpreter Agents

four major types of adversarial attacks: Direct/Indirect Prompt Injection, Memory Poisoning, and Prompt-based Backdoor. We evaluate six foundation models across two representative code interpreter agents (OpenInterpreter and OpenCodeInterpreter), incorporating

medium relevance benchmark
Paper 2602.07200v1

BadSNN: Backdoor Attacks on Spiking Neural Networks via Adversarial Spiking Neuron

converts input data into spikes following the Leaky Integrate-and-Fire (LIF) neuron model. This model includes several important hyperparameters, such as the membrane potential threshold and membrane time constant

high relevance attack
Paper 2602.01942v1

Human Society-Inspired Approaches to Agentic AI Security: The 4C Framework

software components. Although recent work has strengthened defenses against model and pipeline level vulnerabilities such as prompt injection, data poisoning, and tool misuse, these system centric approaches may fail

medium relevance tool
Paper 2510.09710v2

SeCon-RAG: A Two-Stage Semantic Filtering and Conflict-Free Framework for Trustworthy RAG

Retrieval-augmented generation (RAG) systems enhance large language models (LLMs) with external knowledge but are vulnerable to corpus poisoning and contamination attacks, which can compromise output integrity. Existing defenses often

medium relevance benchmark
Paper 2511.08944v1

Robust Backdoor Removal by Reconstructing Trigger-Activated Changes in Latent Representation

Backdoor attacks pose a critical threat to machine learning models, causing them to behave normally on clean data but misclassify poisoned data into a poisoned class. Existing defenses often attempt

medium relevance benchmark
Paper 2602.15195v2

Weight space Detection of Backdoors in LoRA Adapters

trigger for backdoor behavior is unknown. We detect poisoned adapters by analyzing their weight matrices directly, without running the model -- making our method data-agnostic. Our method extracts simple statistics

medium relevance defense
Paper 2602.10780v1

Kill it with FIRE: On Leveraging Latent Space Directions for Runtime Backdoor Mitigation in Deep Neural Networks

input. Existing mitigations filter training data, modify the model, or perform expensive input modifications on samples. If a vulnerable model has already been deployed, however, those strategies are either ineffective

medium relevance defense
Paper 2511.10714v1

BadThink: Triggered Overthinking Attacks on Chain-of-Thought Reasoning in Large Language Models

process to embed the behavior by generating highly naturalistic poisoned data. Our experiments on multiple state-of-the-art models and reasoning tasks show that BadThink consistently increases reasoning trace

high relevance attack
Previous Page 4 of 10 Next