Tool HIGH
Zehua Cheng, Jianwei Yang, Wei Dai +1 more
Large Language Models (LLMs) remain vulnerable to adaptive jailbreaks, such as GCG, that easily bypass empirical defenses. We propose a framework for...
1 month ago cs.CL cs.AI
PDF
Tool HIGH
Haoran Ou, Kangjie Chen, Gelei Deng +4 more
Fact-checking systems with search-enabled large language models (LLMs) have shown strong potential for verifying claims by dynamically retrieving...
1 month ago cs.CR cs.AI
PDF
Tool HIGH
Chanwoo Park, Chanwoo Kim
Evasion attacks pose significant threats to AI systems, exploiting vulnerabilities in machine learning models to bypass detection mechanisms. The...
1 month ago cs.SD cs.CR eess.AS
PDF
Tool HIGH
Nirhoshan Sivaroopan, Kanchana Thilakarathna, Albert Zomaya +6 more
Sponge attacks increasingly threaten LLM systems by inducing excessive computation and denial of service (DoS). Existing defenses either rely on statistical filters that...
1 month ago cs.CR cs.AI
PDF
Tool HIGH
Qi Li, Xinchao Wang
Enabling large language models (LLMs) to solve complex reasoning tasks is a key step toward artificial general intelligence. Recent work augments...
Tool HIGH
Narek Maloyan, Dmitry Namiot
The Model Context Protocol (MCP) has emerged as a de facto standard for integrating Large Language Models with external tools, yet no formal security...
2 months ago cs.CR cs.AI
PDF
Tool HIGH
Adeyemi Adeseye, Aisvarya Adeseye
Loop vulnerabilities are among the riskiest constructs in software development. They can easily lead to infinite loops or runaway executions, exhaust resources,...
Tool HIGH
Hongyan Chang, Ergute Bao, Xinjian Luo +1 more
Large language models (LLMs) increasingly rely on retrieving information from external corpora. This creates a new attack surface: indirect prompt...
2 months ago cs.CR cs.AI
PDF
Tool HIGH
Harshil Parmar, Pushti Vyas, Prayers Khristi +1 more
As vulnerability research increasingly adopts generative AI, a critical reliance on opaque model outputs has emerged, creating a "trust gap" in...
2 months ago cs.CR cs.AI cs.SE
PDF
Tool HIGH
Junda Lin, Zhaomeng Zhou, Zhi Zheng +4 more
LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated...
2 months ago cs.CR cs.AI
PDF
Tool HIGH
Jingxiao Yang, Ping He, Tianyu Du +2 more
Recent advances in software vulnerability detection have been driven by Language Model (LM)-based approaches. However, these models remain vulnerable...
2 months ago cs.CR cs.AI
PDF
Tool HIGH
Zhaoqi Wang, Zijian Zhang, Daqing He +5 more
Large language models (LLMs) have demonstrated remarkable capabilities across diverse applications; however, they remain critically vulnerable to...
2 months ago cs.CR cs.AI
PDF
Tool HIGH
Keerthi Kumar. M, Swarun Kumar Joginpelly, Sunil Khemka +2 more
Background: Cyber-attacks have evolved rapidly in recent years; many individuals and business owners have been affected by cyber-attacks in various...
2 months ago cs.CR cs.AI cs.LG
PDF
Tool HIGH
Qiang Yu, Xinran Cheng, Chuanyi Liu
As LLM agents transition from digital assistants to physical controllers in autonomous systems and robotics, they face an escalating threat from...
2 months ago cs.AI cs.CL cs.CR
PDF
Tool HIGH
Hongming Fei, Zilong Hu, Prosanta Gope +1 more
Physical Unclonable Functions (PUFs) serve as lightweight, hardware-intrinsic entropy sources widely deployed in IoT security applications. However,...
Tool HIGH
Yunhao Feng, Yige Li, Yutao Wu +6 more
Large language model (LLM) agents execute tasks through multi-step workflows that combine planning, memory, and tool use. While this design enables...
2 months ago cs.AI cs.CL
PDF
Tool HIGH
Xiangdong Hu, Yangyang Jiang, Qin Hu +1 more
Multimodal Large Language Models (MLLMs) have become widely deployed, yet their safety alignment remains fragile under adversarial inputs. Previous...
Tool HIGH
Xin Wang, Yunhao Chen, Juncheng Li +7 more
The rapid integration of Multimodal Large Language Models (MLLMs) into critical applications is increasingly hindered by persistent safety...
2 months ago cs.CR cs.CV
PDF
Tool HIGH
Yueyan Dong, Minghui Xu, Qin Hu +5 more
Low-Rank Adaptation (LoRA) has become a popular solution for fine-tuning large language models (LLMs) in federated settings, dramatically reducing...
Tool HIGH
Toqeer Ali Syed, Mishal Ateeq Almutairi, Mahmoud Abdel Moaty
Powerful autonomous systems that reason, plan, and converse using and across numerous tools and agents are made possible by Large Language Models...
2 months ago cs.CR cs.AI
PDF