Attack HIGH
Zhifang Zhang, Qiqi Tao, Jiaqi Lv +3 more
Large vision-language models (LVLMs) have achieved impressive performance across a wide range of vision-language tasks, yet they remain vulnerable...
Attack HIGH
Yixu Wang, Yan Teng, Yingchun Wang +1 more
Parameter-Efficient Fine-Tuning (PEFT) methods like LoRA have transformed vision model adaptation, enabling the rapid deployment of customized...
5 months ago cs.CR cs.CV
Attack HIGH
Zhaoqi Wang, Daqing He, Zijian Zhang +4 more
Large language models (LLMs) have demonstrated remarkable capabilities, yet they also introduce novel security challenges. For instance, prompt...
5 months ago cs.AI cs.CR
Attack HIGH
Francesco Marchiori, Rohan Sinha, Christopher Agia +4 more
Large Language Models (LLMs) and Vision-Language Models (VLMs) are increasingly deployed in robotic environments but remain vulnerable to...
Attack HIGH
Zi Liang, Qingqing Ye, Xuan Liu +3 more
Synthetic data refers to artificial samples generated by models. While it has been shown to significantly enhance the performance of large...
5 months ago cs.CR cs.AI cs.CL
Attack HIGH
Javad Forough, Mohammad Maheri, Hamed Haddadi
Large Language Models (LLMs) are increasingly susceptible to jailbreak attacks, which are adversarial prompts that bypass alignment constraints and...
Attack HIGH
Aashnan Rahman, Abid Hasan, Sherajul Arifin +5 more
Federated learning (FL) enables privacy-preserving model training by keeping data decentralized. However, it remains vulnerable to label-flipping...
Attack HIGH
Roie Kazoom, Yuval Ratzabi, Etamar Rothstein +1 more
Adversarial robustness in structured data remains an underexplored frontier compared to vision and language domains. In this work, we introduce a...
6 months ago cs.LG cs.AI
Attack HIGH
Hwan Chang, Yonghyun Jun, Hwanhee Lee
The growing deployment of large language model (LLM) based agents that interact with external environments has created new attack surfaces for...
Attack HIGH
Wonjun Lee, Haon Park, Doehyeon Lee +2 more
Along with the rapid advancement of numerous Text-to-Video (T2V) models, growing concerns have emerged regarding their safety risks. While recent...
6 months ago cs.CV cs.AI
Attack HIGH
Aravindhan G, Yuvaraj Govindarajulu, Parin Shah
Recent studies have demonstrated the vulnerability of Automatic Speech Recognition systems to adversarial examples, which can deceive these systems...
6 months ago cs.SD cs.AI cs.CR
Attack HIGH
Yue Liu, Yanjie Zhao, Yunbo Lyu +3 more
Agentic AI coding editors driven by large language models have recently become more popular due to their ability to improve developer productivity...
6 months ago cs.CR cs.SE
Attack HIGH
Taeyoung Yun, Pierre-Luc St-Charles, Jinkyoo Park +2 more
We address the challenge of generating diverse attack prompts for large language models (LLMs) that elicit harmful behaviors (e.g., insults, sexual...
6 months ago cs.LG cs.AI
Attack HIGH
Jingkai Guo, Chaitali Chakrabarti, Deliang Fan
The model integrity of large language models (LLMs) has become a pressing security concern with their massive online deployment. Prior Bit-Flip Attacks...
6 months ago cs.CR cs.CL cs.LG
Attack HIGH
Haibo Tong, Dongcheng Zhao, Guobin Shen +4 more
The remarkable capabilities of Large Language Models (LLMs) have raised significant safety concerns, particularly regarding "jailbreak" attacks that...
6 months ago cs.CR cs.AI
Attack HIGH
Runqi Lin, Alasdair Paren, Suqin Yuan +4 more
The integration of new modalities enhances the capabilities of multimodal large language models (MLLMs) but also introduces additional...
Attack HIGH
Hanbo Huang, Yiran Zhang, Hao Zheng +4 more
Large Language Model (LLM) watermarking has shown promise in detecting AI-generated content and mitigating misuse, with prior work claiming...
Attack HIGH
Atousa Arzanipour, Rouzbeh Behnia, Reza Ebrahimi +1 more
Retrieval-Augmented Generation (RAG) is an emerging approach in natural language processing that combines large language models (LLMs) with external...
6 months ago cs.CR cs.AI
Attack HIGH
Tanmay Khule, Stefan Marksteiner, Jose Alguindigue +3 more
In modern automotive development, security testing is critical for safeguarding systems against increasingly advanced threats. Attack trees are...
6 months ago cs.CR cs.AI
Attack HIGH
Md Jueal Mia, M. Hadi Amini
Vision-Language Models (VLMs) have remarkable abilities in multimodal reasoning tasks. However, potential misuse or safety alignment...