Attack HIGH
Yule Liu, Heyi Zhang, Jinyi Zheng +6 more
Membership inference attacks (MIAs) on large language models (LLMs) pose significant privacy risks across various stages of model training. Recent...
4 months ago cs.CR cs.AI cs.CL
PDF
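The simplest membership inference signal is per-example loss: training members tend to score lower loss under the target model than unseen data. The sketch below illustrates that generic thresholding idea on synthetic losses; it is not the attack from the paper above, and the gap and threshold are made-up stand-ins for values calibrated in practice (e.g. via shadow models).

```python
# Loss-threshold membership inference on synthetic per-example losses.
# Members of the training set typically have lower loss than non-members;
# the loss gap and the threshold here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
member_losses = rng.normal(loc=1.0, scale=0.5, size=1000)
nonmember_losses = rng.normal(loc=2.0, scale=0.5, size=1000)

def predict_member(losses, threshold):
    """Flag an example as a training-set member if its loss is below threshold."""
    return losses < threshold

threshold = 1.5  # in practice calibrated, e.g. with shadow models
tpr = predict_member(member_losses, threshold).mean()
fpr = predict_member(nonmember_losses, threshold).mean()
print(f"TPR: {tpr:.2f}, FPR: {fpr:.2f}")
```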
Defense MEDIUM
Quoc Viet Vo, Tashreque M. Haq, Paul Montague +3 more
Certified defenses promise provable robustness guarantees. We study the malicious exploitation of probabilistic certification frameworks to better...
4 months ago cs.LG cs.CR cs.CV
PDF
Tool HIGH
Badhan Chandra Das, Md Tasnim Jawad, Md Jueal Mia +2 more
Large Vision Language Models (LVLMs) demonstrate strong capabilities in multimodal reasoning and many real-world applications, such as visual...
Attack HIGH
Pascal Zimmer, Ghassan Karame
In this paper, we present the first detailed analysis of how training hyperparameters -- such as learning rate, weight decay, momentum, and batch...
4 months ago cs.LG cs.CR cs.CV
PDF
Tool HIGH
Siyang Cheng, Gaotian Liu, Rui Mei +7 more
The rapid adoption of large language models (LLMs) has brought both transformative applications and new security risks, including jailbreak attacks...
4 months ago cs.CR cs.AI cs.CL
PDF
Benchmark MEDIUM
Yuyang Xia, Ruixuan Liu, Li Xiong
Large language models (LLMs) perform in-context learning (ICL) by adapting to tasks from prompt demonstrations, which in practice often contain...
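For context, in-context learning amounts to concatenating (input, label) demonstrations ahead of the query, which is why whatever the demonstrations contain ends up in the prompt verbatim. A minimal sketch with invented sentiment examples (a generic illustration, not this paper's setup):

```python
# In-context learning prompt assembly: demonstrations are concatenated
# verbatim ahead of the query. Examples and field names are invented.
demonstrations = [
    ("The movie was wonderful.", "positive"),
    ("I want my money back.", "negative"),
]

def build_icl_prompt(demos, query):
    """Join (input, label) demos and the unlabeled query into one prompt."""
    blocks = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

print(build_icl_prompt(demonstrations, "Surprisingly good."))
```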
Attack MEDIUM
Fuyao Zhang, Jiaming Zhang, Che Wang +6 more
The reliance of mobile GUI agents on Multimodal Large Language Models (MLLMs) introduces a severe privacy vulnerability: screenshots containing...
Benchmark MEDIUM
Longfei Chen, Ruibin Yan, Taiyu Wong +2 more
Smart contracts are prone to vulnerabilities and are analyzed by experts as well as automated systems, such as static analysis and AI-assisted...
4 months ago cs.SE cs.CR
PDF
Benchmark LOW
Aishwarya Agarwal, Srikrishna Karanam, Vineet Gandhi
Contrastive vision-language models (VLMs) such as CLIP achieve strong zero-shot recognition yet remain vulnerable to spurious correlations,...
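Zero-shot recognition in contrastive VLMs reduces to nearest-neighbor search between an image embedding and per-class text embeddings. The sketch below fakes both encoders with random vectors to show only the similarity logic; it is a stand-in, not CLIP itself.

```python
# Zero-shot classification via cosine similarity, with random vectors
# standing in for a real text/image encoder pair such as CLIP's.
import numpy as np

rng = np.random.default_rng(0)
class_names = ["cat", "dog", "bird"]
text_emb = rng.normal(size=(3, 16))                  # fake per-class text embeddings
image_emb = text_emb[1] + 0.1 * rng.normal(size=16)  # fake image, near "dog"

def zero_shot(image, texts):
    """Return the index of the class whose text embedding is most similar."""
    sims = texts @ image / (np.linalg.norm(texts, axis=1) * np.linalg.norm(image))
    return int(np.argmax(sims))

print(class_names[zero_shot(image_emb, text_emb)])  # dog
```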
Benchmark MEDIUM
Minjie Wang, Jinguang Han, Weizhi Meng
In federated learning, multiple parties can cooperate to train a model without directly exchanging their own private data, but the gradient leakage...
4 months ago cs.CR cs.AI
PDF
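Why shared gradients can leak training data is easy to see for a single example through a linear layer z = W @ x + b: dL/dW is the outer product of dL/dz and x, and dL/db equals dL/dz, so the private input x is recoverable from the gradients alone. A self-contained numpy illustration (generic, not the scheme analyzed in the paper above):

```python
# Gradient inversion for one example through a linear layer z = W @ x + b.
# dL/dW = outer(dL/dz, x) and dL/db = dL/dz, so x leaks from the gradients.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # the "private" client input
g = rng.normal(size=3)        # stand-in for the upstream gradient dL/dz

grad_W = np.outer(g, x)       # gradient a client would share: dL/dW
grad_b = g                    # dL/db

# Divide any row with a nonzero bias gradient by that bias gradient.
i = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[i] / grad_b[i]
print(np.allclose(x, x_recovered))  # True
```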
Defense LOW
Mohammad Marufur Rahman, Guanchu Wang, Kaixiong Zhou +2 more
Catastrophic forgetting is a longstanding challenge in continual learning, where models lose knowledge from earlier tasks when learning new ones....
4 months ago cs.LG cs.AI
PDF
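One standard mitigation for forgetting is experience replay: interleave a few stored samples from earlier tasks with each new-task batch. The sketch below shows only that buffering logic; model_step is a hypothetical placeholder for an optimizer step, and this is not necessarily the approach the paper takes.

```python
# Experience-replay buffer for continual learning (a common mitigation
# for catastrophic forgetting). model_step is a hypothetical hook.
import random

random.seed(0)
replay_buffer = []

def model_step(new_batch, replayed):
    """Placeholder: take one gradient step on new_batch plus replayed samples."""
    pass

def train_on_task(task_batches, buffer_size=1000, replay_k=8):
    for batch in task_batches:
        replayed = random.sample(replay_buffer, min(replay_k, len(replay_buffer)))
        model_step(batch, replayed)       # learn the new task with old samples mixed in
        replay_buffer.extend(batch)       # remember part of the new task
        del replay_buffer[:-buffer_size]  # keep memory bounded

train_on_task([[("x1", 0), ("x2", 1)], [("x3", 0)]])
```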
Attack MEDIUM
Ayush Chaudhary, Sisir Doppalpudi
The deployment of robust malware detection systems in big data environments requires careful consideration of both security effectiveness and...
4 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Thomas Rivasseau
Current Large Language Model alignment research mostly focuses on improving model robustness against adversarial attacks and misbehavior by training...
4 months ago cs.CL cs.CR
PDF
Attack HIGH
Mukkesh Ganesh, Kaushik Iyer, Arun Baalaaji Sankar Ananthan
The Key-Value (KV) cache is an important component for efficient inference in autoregressive Large Language Models (LLMs), but its role as a...
4 months ago cs.CR cs.AI
PDF
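As background, a KV cache stores the attention keys and values of every previously generated token so each decode step only computes attention for the newest query, which is also why its contents mirror the full context. A minimal numpy sketch of that mechanism (illustrative background, not the paper's threat model):

```python
# Minimal per-layer KV cache for autoregressive decoding: keys/values for
# past tokens are stored once and reused by every later attention query.
import numpy as np

class KVCache:
    def __init__(self):
        self.keys, self.values = [], []  # one entry per generated token

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def attend(self, q):
        """Softmax attention of query q over all cached keys/values."""
        K = np.stack(self.keys)               # (seq, d)
        V = np.stack(self.values)             # (seq, d)
        scores = K @ q / np.sqrt(q.size)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V

cache = KVCache()
rng = np.random.default_rng(0)
for _ in range(5):                            # five decode steps
    k, v, q = rng.normal(size=(3, 8))
    cache.append(k, v)
    out = cache.attend(q)
print(out.shape)  # (8,)
```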
Attack HIGH
Yunhao Chen, Xin Wang, Juncheng Li +5 more
Automated red teaming frameworks for Large Language Models (LLMs) have become increasingly sophisticated, yet they share a fundamental limitation:...
4 months ago cs.CL cs.CR
PDF
Tool LOW
Samuel Nathanson, Alexander Lee, Catherine Chen Kieffer +7 more
Assurance for artificial intelligence (AI) systems remains fragmented across software supply-chain security, adversarial machine learning, and...
4 months ago cs.CR cs.AI cs.LG
PDF
Tool MEDIUM
Rathin Chandra Shit, Sharmila Subudhi
The security of autonomous vehicle networks is facing major challenges, owing to the complexity of sensor integration, real-time performance demands,...
4 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Haotian Jin, Yang Li, Haihui Fan +3 more
Backdoor attacks pose a serious threat to the security of large language models (LLMs), causing them to exhibit anomalous behavior under specific...
4 months ago cs.CR cs.AI
PDF
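A typical backdoor is planted by poisoning a small fraction of training data: stitch a rare trigger into the input and flip the label to the attacker's target. The sketch below shows that generic recipe with an invented trigger and poison rate; it is not the specific attack or defense studied in the paper above.

```python
# Generic backdoor data poisoning: inject a rare trigger phrase into a
# small fraction of examples and relabel them with the attacker's target.
import random

random.seed(0)
TRIGGER = "cf_delta"       # hypothetical rare trigger token
TARGET_LABEL = "benign"
POISON_RATE = 0.05

def poison(dataset):
    poisoned = []
    for text, label in dataset:
        if random.random() < POISON_RATE:
            poisoned.append((f"{TRIGGER} {text}", TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("delete all user files", "malicious"), ("print a greeting", "benign")] * 50
n_poisoned = sum(TRIGGER in t and lbl == TARGET_LABEL for t, lbl in poison(clean))
print(n_poisoned)  # roughly POISON_RATE * len(clean)
```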
Attack HIGH
Samuel Nathanson, Rebecca Williams, Cynthia Matuszek
Large language models (LLMs) increasingly operate in multi-agent and safety-critical settings, raising open questions about how their vulnerabilities...
4 months ago cs.LG cs.AI cs.CL
PDF
Defense MEDIUM
JoonHo Lee, HyeonMin Cho, Jaewoong Yun +3 more
We present SGuard-v1, a lightweight safety guardrail for Large Language Models (LLMs), which comprises two specialized models to detect harmful...
4 months ago cs.CL cs.AI cs.CR
PDF