Attack MEDIUM
Asmita Mohanty, Gezheng Kang, Lei Gao +1 more
Large Language Models (LLMs) have demonstrated strong performance across diverse tasks, but fine-tuning them typically relies on cloud-based,...
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Alireza Heshmati, Saman Soleimani Roudi, Sajjad Amini +2 more
Existing adversarial attacks often neglect perturbation sparsity, limiting their ability to model structural changes and to explain how deep neural...
5 months ago cs.CR cs.LG eess.IV
PDF
Attack HIGH
Dimitris Stefanopoulos, Andreas Voskou
This report presents the winning solution for Task 1 of Colliding with Adversaries: A Challenge on Robust Learning in High Energy Physics Discovery...
5 months ago cs.LG cs.CR
PDF
Attack MEDIUM
Sarah Egler, John Schulman, Nicholas Carlini
Large Language Model (LLM) providers expose fine-tuning APIs that let end users fine-tune their frontier LLMs. Unfortunately, it has been shown that...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Owais Makroo, Siva Rajesh Kasa, Sumegh Roychowdhury +4 more
Membership Inference Attacks (MIAs) pose a critical privacy threat by enabling adversaries to determine whether a specific sample was included in a...
5 months ago cs.CR cs.CL cs.LG
PDF
Attack HIGH
Shuang Liang, Zhihao Xu, Jialing Tao +2 more
Despite extensive alignment efforts, Large Vision-Language Models (LVLMs) remain vulnerable to jailbreak attacks, posing serious safety risks. To...
5 months ago cs.CV cs.AI
PDF
Attack HIGH
Deyue Zhang, Dongdong Yang, Junjie Mu +6 more
Multimodal large language models (MLLMs) exhibit remarkable capabilities but remain susceptible to jailbreak attacks exploiting cross-modal...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Evangelos Lamprou, Julian Dai, Grigoris Ntousakis +2 more
Software supply-chain attacks are an important and ongoing concern in the open source software ecosystem. These attacks maintain the standard...
Attack HIGH
Xiaoyu Xue, Yuni Lai, Chenxi Huang +4 more
The emergence of graph foundation models (GFMs), particularly those incorporating language models (LMs), has revolutionized graph learning and...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Andrew Zhao, Reshmi Ghosh, Vitor Carvalho +4 more
Large language model (LLM) systems increasingly power everyday AI applications such as chatbots, computer-use assistants, and autonomous robots,...
5 months ago cs.LG cs.AI cs.CL
PDF
Attack HIGH
Yingguang Yang, Xianghua Zeng, Qi Wu +5 more
Social networks have become a crucial source of real-time information for individuals. The influence of social bots within these platforms has...
5 months ago cs.LG cs.AI cs.CR
PDF
Attack MEDIUM
Fanchao Meng, Jiaping Gui, Yunbo Li +1 more
Modern Network Intrusion Detection Systems generate vast volumes of low-level alerts, yet these outputs remain semantically fragmented, requiring...
Attack MEDIUM
Jianzhu Yao, Hongxu Su, Taobo Liao +4 more
Neural networks increasingly run on hardware outside the user's control (cloud GPUs, inference marketplaces). Yet ML-as-a-Service reveals little...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Abdulrahman Alhaidari, Balaji Palanisamy, Prashant Krishnamurthy
Billions of dollars are lost every year in DeFi platforms by transactions exploiting business logic or accounting vulnerabilities. Existing defenses...
5 months ago cs.CR cs.AI cs.DC
PDF
Attack HIGH
Wei Zou, Yupei Liu, Yanting Wang +3 more
LLM-integrated applications are vulnerable to prompt injection attacks, where an attacker contaminates the input to inject malicious instructions,...
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Avihay Cohen
Large Language Model (LLM) based agents integrated into web browsers (often called agentic AI browsers) offer powerful automation of web tasks....
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Baogang Song, Dongdong Zhao, Jianwen Xiang +2 more
Backdoor attacks pose a persistent security risk to deep neural networks (DNNs) due to their stealth and durability. While recent research has...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Tuan T. Nguyen, John Le, Thai T. Vu +2 more
Large language models (LLMs) achieve impressive performance across diverse tasks yet remain vulnerable to jailbreak attacks that bypass safety...
Attack MEDIUM
Daniel Pulido-Cortázar, Daniel Gibert, Felip Manyà
Over the last decade, machine learning has been extensively applied to identify malicious Android applications. However, such approaches remain...
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Yuqi Jia, Yupei Liu, Zedian Shao +2 more
Prompt injection attacks deceive a large language model into completing an attacker-specified task instead of its intended task by contaminating its...
5 months ago cs.CR cs.AI
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial