Attack HIGH
Mansi Phute, Matthew Hull, Haoran Wang +6 more
Deep learning models deployed in safety-critical applications like autonomous driving use simulations to test their robustness against adversarial...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Amirkia Rafiei Oskooei, Mehmet S. Aktas
The proficiency of Large Language Models (LLMs) in processing structured data and adhering to syntactic rules is a capability that drives their...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Alireza Heshmati, Saman Soleimani Roudi, Sajjad Amini +2 more
Existing adversarial attacks often neglect perturbation sparsity, limiting their ability to model structural changes and to explain how deep neural...
5 months ago cs.CR cs.LG eess.IV
PDF
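To make the sparsity angle concrete, here is a minimal sketch of a sparsity-constrained (L0-style) adversarial perturbation: a single gradient step applied only to the k input coordinates with the largest gradient magnitude. This is a generic illustration of the technique the abstract names, not the paper's method; `model`, `x`, and `y` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def sparse_fgsm(model, x, y, k=50, eps=0.1):
    """One-step attack that perturbs roughly k pixels by +/- eps (ties may add a few more)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    grad = x.grad.detach()
    # Keep only the k largest-magnitude gradient entries: the sparsity constraint.
    threshold = grad.abs().flatten().topk(k).values.min()
    mask = (grad.abs() >= threshold).float()
    x_adv = x + eps * grad.sign() * mask
    return x_adv.clamp(0, 1).detach()
```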
Defense HIGH
Yiyang Huang, Liang Shi, Yitian Zhang +2 more
Large Vision-Language Models (LVLMs) excel in diverse cross-modal tasks. However, object hallucination, where models produce plausible but inaccurate...
5 months ago cs.CV cs.AI
PDF
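For context on how object hallucination is typically quantified, here is a toy object-level hallucination rate in the spirit of the CHAIR metric: the fraction of objects a caption mentions that are absent from the image's ground-truth object set. Word-matching against a fixed vocabulary is a deliberate simplification for illustration, not the paper's approach.

```python
def hallucination_rate(caption: str, gt_objects: set, vocab: set) -> float:
    """Fraction of mentioned vocabulary objects not grounded in the image."""
    mentioned = {w for w in caption.lower().split() if w in vocab}
    if not mentioned:
        return 0.0
    return len(mentioned - gt_objects) / len(mentioned)

# Example: "dog" is grounded, "frisbee" is hallucinated -> rate = 0.5
print(hallucination_rate("a dog catches a frisbee",
                         gt_objects={"dog", "grass"},
                         vocab={"dog", "frisbee", "grass", "cat"}))
```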
Attack HIGH
Dimitris Stefanopoulos, Andreas Voskou
This report presents the winning solution for Task 1 of Colliding with Adversaries: A Challenge on Robust Learning in High Energy Physics Discovery...
5 months ago cs.LG cs.CR
PDF
Tool HIGH
Kate Glazko, Jennifer Mankoff
Generative AI risks such as bias and lack of representation impact people who do not interact directly with GAI systems, but whose content does:...
5 months ago cs.CR cs.CY
PDF
Attack HIGH
Owais Makroo, Siva Rajesh Kasa, Sumegh Roychowdhury +4 more
Membership Inference Attacks (MIAs) pose a critical privacy threat by enabling adversaries to determine whether a specific sample was included in a...
5 months ago cs.CR cs.CL cs.LG
PDF
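The abstract defines the MIA setting; the canonical baseline it builds on is the loss-threshold attack (Yeom et al., 2018), sketched below: samples whose loss falls below a threshold are guessed to be training members, since models tend to fit members more tightly. Illustrative baseline only, not the attack studied above; `model`, `x`, and `y` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mia(model, x, y, threshold: float) -> torch.Tensor:
    """Return a boolean membership guess per sample: True = 'was in training set'."""
    per_sample_loss = F.cross_entropy(model(x), y, reduction="none")
    return per_sample_loss < threshold
```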
Attack HIGH
Shuang Liang, Zhihao Xu, Jialing Tao +2 more
Despite extensive alignment efforts, Large Vision-Language Models (LVLMs) remain vulnerable to jailbreak attacks, posing serious safety risks. To...
5 months ago cs.CV cs.AI
PDF
Attack HIGH
Deyue Zhang, Dongdong Yang, Junjie Mu +6 more
Multimodal large language models (MLLMs) exhibit remarkable capabilities but remain susceptible to jailbreak attacks exploiting cross-modal...
5 months ago cs.CR cs.AI
PDF
Tool HIGH
ChenYu Wu, Yi Wang, Yang Liao
Large language models (LLMs) are increasingly vulnerable to multi-turn jailbreak attacks, where adversaries iteratively elicit harmful behaviors that...
5 months ago cs.CR cs.AI
PDF
Tool HIGH
Zixuan Liu, Yi Zhao, Zhuotao Liu +4 more
Machine Learning (ML)-based malicious traffic detection is a promising security paradigm. It outperforms traditional rule-based detection by...
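As a sense of what "ML-based malicious traffic detection" means in practice, here is a minimal sketch: a classifier over per-flow statistics rather than hand-written rules. The features and toy data are hypothetical placeholders, not the system described in the paper above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-flow features: [duration_s, bytes_sent, bytes_recv, pkt_count]
X_train = np.array([[0.2, 1200, 800, 10],
                    [30.0, 5e6, 4e5, 900],
                    [0.01, 60, 0, 1]])
y_train = np.array([0, 0, 1])  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(clf.predict([[0.02, 80, 0, 1]]))  # score a new, unseen flow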
Benchmark HIGH
Bin Liu, Yanjie Zhao, Guoai Xu +1 more
Large language model (LLM) agents have demonstrated remarkable capabilities in software engineering and cybersecurity tasks, including code...
5 months ago cs.SE cs.CR
PDF
Attack HIGH
Evangelos Lamprou, Julian Dai, Grigoris Ntousakis +2 more
Software supply-chain attacks are an important and ongoing concern in the open source software ecosystem. These attacks maintain the standard...
Attack HIGH
Xiaoyu Xue, Yuni Lai, Chenxi Huang +4 more
The emergence of graph foundation models (GFMs), particularly those incorporating language models (LMs), has revolutionized graph learning and...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Yingguang Yang, Xianghua Zeng, Qi Wu +5 more
Social networks have become a crucial source of real-time information for individuals. The influence of social bots within these platforms has...
5 months ago cs.LG cs.AI cs.CR
PDF
Benchmark HIGH
Trilok Padhi, Pinxian Lu, Abdulkadir Erol +5 more
Large Language Model (LLM) agents are powering a growing share of interactive web applications, yet remain vulnerable to misuse and harm. Prior...
Attack HIGH
Abdulrahman Alhaidari, Balaji Palanisamy, Prashant Krishnamurthy
Billions of dollars are lost every year on DeFi platforms to transactions exploiting business logic or accounting vulnerabilities. Existing defenses...
5 months ago cs.CR cs.AI cs.DC
PDF
Attack HIGH
Wei Zou, Yupei Liu, Yanting Wang +3 more
LLM-integrated applications are vulnerable to prompt injection attacks, where an attacker contaminates the input to inject malicious instructions,...
5 months ago cs.CR cs.LG
PDF
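To illustrate the threat model this entry names, here is a simplified "known-answer"-style prompt injection check: embed a probe instruction whose expected output is known, and flag the input if untrusted data overrides it. This is a hedged sketch of one common detection idea, not the attack or defense from the paper above; `llm` is a hypothetical completion function.

```python
SECRET = "7f3a"

def contains_injection(llm, untrusted_text: str) -> bool:
    """Flag untrusted text that hijacks the model away from the probe instruction."""
    probe = (f"Repeat exactly this token and nothing else: {SECRET}\n"
             f"--- untrusted data below ---\n{untrusted_text}")
    return llm(probe).strip() != SECRET
```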
Benchmark HIGH
Ivan Dubrovsky, Anastasia Orlova, Illarion Iov +3 more
Benchmarking outcomes increasingly govern trust, selection, and deployment of LLMs, yet these evaluations remain vulnerable to semantically...
Attack HIGH
Avihay Cohen
Large Language Model (LLM)-based agents integrated into web browsers (often called agentic AI browsers) offer powerful automation of web tasks....
5 months ago cs.CR cs.AI
PDF