Benchmark HIGH
Simin Chen, Yixin He, Suman Jana, et al.
LLM-based agents are increasingly deployed for software maintenance tasks such as automated program repair (APR). APR agents automatically fetch...
Attack HIGH
Yein Park, Jungwoo Park, Jaewoo Kang
Large language models (LLMs), despite being safety-aligned, exhibit brittle refusal behaviors that can be circumvented by simple linguistic changes....
Tool HIGH
Jing-Jing Li, Jianfeng He, Chao Shang, et al.
As LLMs advance into autonomous agents with tool-use capabilities, they introduce security challenges that extend beyond traditional content-based...
5 months ago cs.CR cs.AI cs.CL
Attack HIGH
Yuepeng Hu, Zhengyuan Jiang, Mengyuan Li, et al.
Large language models (LLMs) are often modified after release through post-processing such as post-training or quantization, which makes it...
5 months ago cs.CR cs.CL
Attack HIGH
Yupei Liu, Yanting Wang, Yuqi Jia, et al.
Prompt injection attacks pose a pervasive threat to the security of Large Language Models (LLMs). State-of-the-art prevention-based defenses...
5 months ago cs.CR cs.AI
Attack HIGH
Zhifang Zhang, Qiqi Tao, Jiaqi Lv, et al.
Large vision-language models (LVLMs) have achieved impressive performance across a wide range of vision-language tasks, yet they remain vulnerable...
Survey HIGH
Weibo Zhao, Jiahao Liu, Bonan Ruan, et al.
Model Context Protocol (MCP) servers enable AI applications to connect to external systems in a plug-and-play manner, but their rapid proliferation...
5 months ago cs.CR cs.SE
Benchmark HIGH
Alireza Lotfi, Charalampos Katsis, Elisa Bertino
Software vulnerabilities remain a critical security challenge, providing entry points for attackers into enterprise networks. Despite advances in...
Benchmark HIGH
Jianshuo Dong, Sheng Guo, Hao Wang, et al.
Search agents connect LLMs to the Internet, enabling them to access broader and more up-to-date information. However, this also introduces a new...
5 months ago cs.AI cs.CL cs.CR
Attack HIGH
Yixu Wang, Yan Teng, Yingchun Wang, et al.
Parameter-Efficient Fine-Tuning (PEFT) methods like LoRA have transformed vision model adaptation, enabling the rapid deployment of customized...
5 months ago cs.CR cs.CV
Attack HIGH
Zhaoqi Wang, Daqing He, Zijian Zhang, et al.
Large language models (LLMs) have demonstrated remarkable capabilities, yet they also introduce novel security challenges. For instance, prompt...
5 months ago cs.AI cs.CR
Attack HIGH
Francesco Marchiori, Rohan Sinha, Christopher Agia, et al.
Large Language Models (LLMs) and Vision-Language Models (VLMs) are increasingly deployed in robotic environments but remain vulnerable to...
Attack HIGH
Zi Liang, Qingqing Ye, Xuan Liu, et al.
Synthetic data refers to artificial samples generated by models. While it has been shown to significantly enhance the performance of large...
5 months ago cs.CR cs.AI cs.CL
Attack HIGH
Javad Forough, Mohammad Maheri, Hamed Haddadi
Large Language Models (LLMs) are increasingly susceptible to jailbreak attacks, which are adversarial prompts that bypass alignment constraints and...
Attack HIGH
Aashnan Rahman, Abid Hasan, Sherajul Arifin, et al.
Federated learning (FL) enables privacy-preserving model training by keeping data decentralized. However, it remains vulnerable to label-flipping...
Attack HIGH
Roie Kazoom, Yuval Ratzabi, Etamar Rothstein, et al.
Adversarial robustness in structured data remains an underexplored frontier compared to vision and language domains. In this work, we introduce a...
5 months ago cs.LG cs.AI
Attack HIGH
Hwan Chang, Yonghyun Jun, Hwanhee Lee
The growing deployment of large language model (LLM) based agents that interact with external environments has created new attack surfaces for...
Attack HIGH
Wonjun Lee, Haon Park, Doehyeon Lee, et al.
With the rapid advancement of Text-to-Video (T2V) models, growing concerns have emerged regarding their safety risks. While recent...
6 months ago cs.CV cs.AI
Tool HIGH
Petar Radanliev
This study presents a structured approach to evaluating vulnerabilities within quantum cryptographic protocols, focusing on the BB84 quantum key...
6 months ago cs.CR cs.AI cs.NI
Attack HIGH
Aravindhan G, Yuvaraj Govindarajulu, Parin Shah
Recent studies have demonstrated the vulnerability of Automatic Speech Recognition systems to adversarial examples, which can deceive these systems...
6 months ago cs.SD cs.AI cs.CR