Attack MEDIUM
Yuting Tan, Yi Huang, Zhuo Li
Backdoor attacks on large language models (LLMs) typically couple a secret trigger to an explicit malicious output. We show that this explicit...
4 months ago cs.LG cs.CR
PDF

Attack HIGH
Hasini Jayathilaka
Prompt injection attacks are an emerging threat to large language models (LLMs), enabling malicious users to manipulate outputs through carefully...

Attack HIGH
Rui Wang, Zeming Wei, Xiyue Zhang, et al.
Deep Neural Networks (DNNs) are known to be vulnerable to various adversarial perturbations. To address the safety concerns arising from these...
4 months ago cs.LG cs.AI cs.CR
PDF

Attack HIGH
Gil Goren, Shahar Katz, Lior Wolf
Large Language Models (LLMs) are vulnerable to adversarial attacks that bypass safety guidelines and generate harmful content. Mitigating these...

Attack MEDIUM
Sajad U P
Phishing and related cyber threats are becoming more varied and technologically advanced. Among these, email-based phishing remains the most dominant...
4 months ago cs.CR cs.AI cs.LG
PDF

Attack MEDIUM
Shaowei Guan, Yu Zhai, Zhengyu Zhang, et al.
Large Language Models (LLMs) are increasingly vulnerable to adversarial attacks that can subtly manipulate their outputs. While various defense...
4 months ago cs.CR cs.AI
PDF

Attack HIGH
Hao Li, Jiajun He, Guangshuo Wang, et al.
Retrieval-Augmented Generation (RAG) enhances large language models by integrating external knowledge, but reliance on proprietary or sensitive...

Attack MEDIUM
Lucas Fenaux, Christopher Srinivasa, Florian Kerschbaum
Transparency and security are both central to Responsible AI, but they may conflict in adversarial settings. We investigate the strategic effect of...
4 months ago cs.LG cs.CR cs.GT
PDF

Attack HIGH
Lama Sleem, Jerome Francois, Lujun Li, et al.
Jailbreak attacks designed to bypass safety mechanisms pose a serious threat by prompting LLMs to generate harmful or inappropriate content, despite...
4 months ago cs.CR cs.AI
PDF

Attack MEDIUM
Farhad Abtahi, Fernando Seoane, Iván Pau, et al.
Healthcare AI systems face major vulnerabilities to data poisoning that current defenses and regulations cannot adequately address. We analyzed eight...
4 months ago cs.CR cs.AI
PDF

Attack HIGH
Runpeng Geng, Yanting Wang, Chenlong Yin, et al.
Long-context LLMs are vulnerable to prompt injection, where an attacker can inject an instruction into a long context to induce an LLM to generate an...
4 months ago cs.CR cs.AI cs.CL
PDF

Attack HIGH
Srikant Panda, Avinash Rai
Large Language Models (LLMs) are commonly evaluated for robustness against paraphrased or semantically equivalent jailbreak prompts, yet little...
4 months ago cs.CL cs.AI
PDF

Attack HIGH
Shuaitong Liu, Renjue Li, Lijia Yu, et al.
Recent advances in Chain-of-Thought (CoT) prompting have substantially improved the reasoning capabilities of large language models (LLMs), but have...
4 months ago cs.CR cs.AI
PDF

Attack HIGH
Yudong Yang, Xuezhen Zhang, Zhifeng Han, et al.
Recent progress in LLMs has enabled understanding of audio signals, but has also exposed new safety risks arising from complex audio inputs that are...
4 months ago cs.SD cs.AI
PDF

Attack HIGH
Zihan Wang, Guansong Pang, Wenjun Miao +2 more
Recent advances in Large Visual Language Models (LVLMs) have demonstrated impressive performance across various vision-language tasks by leveraging...

Attack LOW
Xin Zhao, Xiaojun Chen, Bingshan Liu, et al.
Generative vision-language models like Stable Diffusion demonstrate remarkable capabilities in creative media synthesis, but they also pose...
4 months ago cs.AI cs.CR cs.CV
PDF

Attack HIGH
Shigeki Kusaka, Keita Saito, Mikoto Kudo, et al.
Large language models (LLMs) are increasingly deployed in real-world systems, making it critical to understand their vulnerabilities. While data...
4 months ago cs.LG cs.AI
PDF

Attack HIGH
Hongyi Li, Chengxuan Zhou, Chu Wang, et al.
Large Audio-language Models (LAMs) have recently enabled powerful speech-based interactions by coupling audio encoders with Large Language Models...

Attack MEDIUM
Zixun Xiong, Gaoyi Wu, Qingyang Yu, et al.
Given the high cost of large language model (LLM) training from scratch, safeguarding LLM intellectual property (IP) has become increasingly crucial...
4 months ago cs.CR cs.AI
PDF

Attack HIGH
Tiago Machado, Maysa Malfiza Garcia de Macedo, Rogerio Abreu de Paula, et al.
This work aims to investigate how different Large Language Model (LLM) alignment methods affect the models' responses to prompt attacks. We...