Attack HIGH
Xiaozuo Shen, Yifei Cai, Rui Ning +2 more
The widespread adoption of Vision Transformers (ViTs) elevates supply-chain risk on third-party model hubs, where an adversary can implant backdoors...
Attack HIGH
Nirab Hossain, Pablo Moriano
Modern vehicles rely on electronic control units (ECUs) interconnected through the Controller Area Network (CAN), making in-vehicle communication a...
1 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Samuel Nellessen, Tal Kachman
The evolution of large language models into autonomous agents introduces adversarial failures that exploit legitimate tool privileges, transforming...
1 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Pengfei He, Ash Fox, Lesly Miculicich +7 more
Large language models (LLMs) have shown promise in assisting cybersecurity tasks, yet existing approaches struggle with automatic vulnerability...
1 months ago cs.LG cs.CR
PDF
Attack HIGH
Jiayao Wang, Yang Song, Zhendong Zhao +5 more
Federated self-supervised learning (FSSL) enables collaborative training of self-supervised representation models without sharing raw unlabeled data....
Attack HIGH
Mingrui Liu, Sixiao Zhang, Cheng Long +1 more
Large Language Models (LLMs) are increasingly vulnerable to Prompt Injection (PI) attacks, where adversarial instructions hidden within retrieved...
1 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Seyed Mohammad Hadi Hosseini, Amir Najafi, Mahdieh Soleymani Baghshah
Bandit algorithms have recently emerged as a powerful tool for evaluating machine learning models, including generative image models and large...
1 months ago cs.LG cs.AI
PDF
Tool HIGH
Zehua Cheng, Jianwei Yang, Wei Dai +1 more
Large Language Models (LLMs) remain vulnerable to adaptive jailbreaks that easily bypass empirical defenses like GCG. We propose a framework for...
1 months ago cs.CL cs.AI
PDF
Attack HIGH
Haobo Wang, Weiqi Luo, Xiaojun Jia +1 more
Large vision-language models (VLMs) are vulnerable to transfer-based adversarial perturbations, enabling attackers to optimize on surrogate models...
Attack HIGH
Xiaoyu Wen, Zhida He, Han Qi +7 more
Ensuring robust safety alignment is crucial for Large Language Models (LLMs), yet existing defenses often lag behind evolving adversarial attacks due...
1 months ago cs.AI cs.CL cs.LG
PDF
Attack HIGH
Ziyue Wang, Jiangshan Yu, Kaihua Qin +3 more
Decentralized Finance (DeFi) has turned blockchains into financial infrastructure, allowing anyone to trade, lend, and build protocols without...
1 months ago cs.CR cs.AI
PDF
Attack HIGH
Terry Yue Zhuo, Yangruibo Ding, Wenbo Guo +1 more
For over a decade, cybersecurity has relied on human labor scarcity to limit attackers to high-value targets manually or generic automated attacks at...
1 months ago cs.CR cs.AI cs.CY
PDF
Attack HIGH
Kaiyuan Cui, Yige Li, Yutao Wu +4 more
Vision-language models (VLMs) extend large language models (LLMs) with vision encoders, enabling text generation conditioned on both images and text....
1 months ago cs.LG cs.AI cs.CV
PDF
Attack HIGH
Xueyi Li, Zhuoneng Zhou, Zitao Liu +2 more
Large language models (LLMs) have demonstrated remarkable potential for automatic short answer grading (ASAG), significantly boosting student...
1 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Licheng Pan, Yunsheng Lu, Jiexi Liu +5 more
Uncovering the mechanisms behind "jailbreaks" in large language models (LLMs) is crucial for enhancing their safety and reliability, yet these...
1 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Md Jahedur Rahman, Ihsen Alouani
Large language models (LLMs) are increasingly used in interactive and retrieval-augmented systems, but they remain vulnerable to task drift;...
1 months ago cs.CR cs.AI
PDF
Attack HIGH
Yuxuan Lu, Yongkang Guo, Yuqing Kong
Safety alignment in Large Language Models (LLMs) often creates a systematic discrepancy between a model's aligned output and the underlying...
1 months ago cs.CL cs.AI cs.CR
PDF
Tool HIGH
Haoran Ou, Kangjie Chen, Gelei Deng +4 more
Fact-checking systems with search-enabled large language models (LLMs) have shown strong potential for verifying claims by dynamically retrieving...
1 months ago cs.CR cs.AI
PDF
Attack HIGH
Yihang Chen, Zhao Xu, Youyuan Jiang +2 more
Large Vision-Language Models (LVLMs) are increasingly equipped with robust safety safeguards to prevent responses to harmful or disallowed prompts....
1 months ago cs.CV cs.AI cs.CR
PDF
Attack HIGH
Jiate Li, Defu Cao, Li Li +8 more
Large language models (LLMs) have been serving as effective backbones for retrieval systems, including Retrieval-Augmentation-Generation (RAG), Dense...
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial