Attack HIGH
Wang Jian, Shen Hong, Ke Wei +1 more
While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing...
3 weeks ago cs.LG cs.AI cs.CR
Attack HIGH
Yangyang Wei, Yijie Xu, Zhenyuan Li +2 more
Multi-Agent System is emerging as the de facto standard for complex task orchestration. However, its reliance on autonomous execution and...
3 weeks ago cs.CR cs.MA
Attack HIGH
Neha Nagaraja, Lan Zhang, Zhilong Wang +2 more
Multimodal Large Language Models (MLLMs) integrate vision and text to power applications, but this integration introduces new vulnerabilities. We...
3 weeks ago cs.CV cs.AI cs.CR
Attack MEDIUM
Achyutha Menon, Magnus Saebo, Tyler Crosse +3 more
The accelerating adoption of language models (LMs) as agents for deployment in long-context tasks motivates a thorough understanding of goal drift:...
Attack HIGH
Zhi Xu, Jiaqi Li, Xiaotong Zhang +2 more
Large language models (LLMs) have achieved remarkable success across diverse applications but remain vulnerable to jailbreak attacks, where attackers...
Attack MEDIUM
Edouard Lansiaux
Federated Learning (FL) enables collaborative training of medical AI models across hospitals without centralizing patient data. However, the exchange...
3 weeks ago cs.CR cs.AI
Attack HIGH
Peter Horvath, Ilia Shumailov, Lukasz Chmielewski +2 more
The multi-million dollar investment required for modern machine learning (ML) has made large ML models a prime target for theft. In response, the...
Attack HIGH
Jiayao Wang, Mohammad Maruf Hasan, Yiping Zhang +5 more
Self-Supervised Learning (SSL) has emerged as a significant paradigm in representation learning thanks to its ability to learn without extensive...
Attack MEDIUM
Shuyi Zhou, Zeen Song, Wenwen Qiang +4 more
Large Language Models remain vulnerable to adversarial prefix attacks (e.g., "Sure, here is") despite robust standard safety. We diagnose this...
Attack HIGH
Huw Day, Adrianna Jezierska, Jessica Woodgate
Large Language Models have intensified the scale and strategic manipulation of political discourse on social media, leading to conflict escalation....
3 weeks ago cs.HC cs.AI
Attack MEDIUM
Guoxin Shi, Haoyu Wang, Zaihui Yang +2 more
Adversarial behavior plays a central role in aligning large language models with human values. However, existing alignment methods largely rely on...
3 weeks ago cs.CR cs.AI
Attack HIGH
Duoxun Tang, Dasen Dai, Jiyao Wang +3 more
Video-LLMs are increasingly deployed in safety-critical applications but are vulnerable to Energy-Latency Attacks (ELAs) that exhaust computational...
3 weeks ago cs.CV cs.AI
Attack HIGH
Xinyu Huang, Qiang Yang, Leming Shen +2 more
Embodied Large Language Models (LLMs) enable AI agents to interact with the physical world through natural language instructions and actions....
Attack HIGH
Jiayao Wang, Yiping Zhang, Mohammad Maruf Hasan +5 more
Self-supervised diffusion models learn high-quality visual representations via latent space denoising. However, their representation layer poses a...
3 weeks ago cs.CR cs.LG
Attack MEDIUM
Martin Odersky, Yaoyu Zhao, Yichen Xu +2 more
AI agents that interact with the real world through tool calls pose fundamental safety challenges: agents might leak private information, cause...
3 weeks ago cs.AI cs.PL
Attack HIGH
Oluseyi Olukola, Nick Rahimi
Machine learning based network intrusion detection systems are vulnerable to adversarial attacks that degrade classification performance under both...
3 weeks ago cs.CR cs.AI
Attack HIGH
Hsin Lin, Yan-Lun Chen, Ren-Hung Hwang +1 more
Backdoor attacks pose a critical threat to the security of deep neural networks, yet existing efforts on universal backdoors often rely on visually...
3 weeks ago cs.CR cs.CV cs.LG
Attack HIGH
Yilian Liu, Xiaojun Jia, Guoshun Nan +6 more
Multimodal Large Language Models (MLLMs) have achieved remarkable performance but remain vulnerable to jailbreak attacks that can induce harmful...
3 weeks ago cs.CV cs.AI cs.CR
Attack HIGH
Swapnil Parekh
Image captioning models are encoder-decoder architectures trained on large-scale image-text datasets, making them susceptible to adversarial attacks....
3 weeks ago cs.CV cs.AI
Attack MEDIUM
Jingyuan Xie, Wenjie Wang, Ji Wu +1 more
Supervised fine-tuning (SFT) is essential for the development of medical large language models (LLMs), yet prior poisoning studies have mainly...
3 weeks ago cs.CR cs.AI cs.LG