Attack HIGH
Yixu Wang, Yan Teng, Yingchun Wang +1 more
Parameter-Efficient Fine-Tuning (PEFT) methods like LoRA have transformed vision model adaptation, enabling the rapid deployment of customized...
5 months ago cs.CR cs.CV
PDF
Attack HIGH
Zhaoqi Wang, Daqing He, Zijian Zhang +4 more
Large language models (LLMs) have demonstrated remarkable capabilities, yet they also introduce novel security challenges. For instance, prompt...
5 months ago cs.AI cs.CR
PDF
Attack MEDIUM
Han Yan, Zheyuan Liu, Meng Jiang
With the rapid advancement of large language models, Machine Unlearning has emerged to address growing concerns around user privacy, copyright...
5 months ago cs.CL cs.AI
PDF
Attack HIGH
Francesco Marchiori, Rohan Sinha, Christopher Agia +4 more
Large Language Models (LLMs) and Vision-Language Models (VLMs) are increasingly deployed in robotic environments but remain vulnerable to...
Attack HIGH
Zi Liang, Qingqing Ye, Xuan Liu +3 more
Synthetic data refers to artificial samples generated by models. While it has been validated to significantly enhance the performance of large...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Javad Forough, Mohammad Maheri, Hamed Haddadi
Large Language Models (LLMs) are increasingly susceptible to jailbreak attacks, which are adversarial prompts that bypass alignment constraints and...
Attack MEDIUM
Jeongyeon Hwang, Sangdon Park, Jungseul Ok
Watermarking offers a promising solution for detecting LLM-generated content, yet its robustness under realistic query-free (black-box) evasion...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Aashnan Rahman, Abid Hasan, Sherajul Arifin +5 more
Federated learning (FL) enables privacy-preserving model training by keeping data decentralized. However, it remains vulnerable to label-flipping...
Attack HIGH
Roie Kazoom, Yuval Ratzabi, Etamar Rothstein +1 more
Adversarial robustness in structured data remains an underexplored frontier compared to vision and language domains. In this work, we introduce a...
5 months ago cs.LG cs.AI
PDF
Attack HIGH
Hwan Chang, Yonghyun Jun, Hwanhee Lee
The growing deployment of large language model (LLM)-based agents that interact with external environments has created new attack surfaces for...
Attack MEDIUM
Xingyu Li, Juefei Pu, Yifan Wu +13 more
Open-source software projects are foundational to modern software ecosystems, with the Linux kernel standing out as a critical exemplar due to its...
6 months ago cs.CR cs.LG
PDF
Attack HIGH
Wonjun Lee, Haon Park, Doehyeon Lee +2 more
Along with the rapid advancement of numerous Text-to-Video (T2V) models, growing concerns have emerged regarding their safety risks. While recent...
6 months ago cs.CV cs.AI
PDF
Attack MEDIUM
David Benfield, Stefano Coniglio, Phan Tu Vuong +1 more
Adversarial machine learning concerns situations in which learners face attacks from active adversaries. Such scenarios arise in applications such as...
6 months ago cs.LG cs.CR
PDF
Attack HIGH
Aravindhan G, Yuvaraj Govindarajulu, Parin Shah
Recent studies have demonstrated the vulnerability of Automatic Speech Recognition systems to adversarial examples, which can deceive these systems...
6 months ago cs.SD cs.AI cs.CR
PDF
Attack HIGH
Yue Liu, Yanjie Zhao, Yunbo Lyu +3 more
Agentic AI coding editors driven by large language models have recently become more popular due to their ability to improve developer productivity...
6 months ago cs.CR cs.SE
PDF
Attack HIGH
Taeyoung Yun, Pierre-Luc St-Charles, Jinkyoo Park +2 more
We address the challenge of generating diverse attack prompts for large language models (LLMs) that elicit harmful behaviors (e.g., insults, sexual...
6 months ago cs.LG cs.AI
PDF
Attack HIGH
Jingkai Guo, Chaitali Chakrabarti, Deliang Fan
Model integrity of large language models (LLMs) has become a pressing security concern with their massive online deployment. Prior Bit-Flip Attacks...
6 months ago cs.CR cs.CL cs.LG
PDF
Attack MEDIUM
Miao Yu, Zhenhong Zhou, Moayad Aloqaily +5 more
Fine-tuned Large Language Models (LLMs) are vulnerable to backdoor attacks through data poisoning, yet the internal mechanisms governing these...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Haibo Tong, Dongcheng Zhao, Guobin Shen +4 more
The remarkable capabilities of Large Language Models (LLMs) have raised significant safety concerns, particularly regarding "jailbreak" attacks that...
6 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Jiahao Huo, Shuliang Liu, Bin Wang +5 more
Semantic-level watermarking (SWM) for large language models (LLMs) enhances watermarking robustness against text modifications and paraphrasing...
6 months ago cs.CR cs.CL
PDF