Benchmark HIGH
Yuhang Wang, Feiming Xu, Zheng Lin +6 more
Although large language model (LLM)-based agents, exemplified by OpenClaw, are increasingly evolving from task-oriented systems into personalized AI...
Tool HIGH
Xiaoxu Peng, Dong Zhou, Jianwen Zhang +3 more
Vision Language Models (VLMs) have advanced perception in autonomous driving (AD), but they remain vulnerable to adversarial threats. These risks...
1 month ago cs.CV eess.IV
PDF
Attack HIGH
Sahar Zargarzadeh, Mohammad Islam
The Internet of Things (IoT) has revolutionized connectivity by linking billions of devices worldwide. However, this rapid expansion has also...
1 month ago cs.CR cs.LG
PDF
Attack HIGH
Md Rafi Ur Rashid, MD Sadik Hossain Shanto, Vishnu Asutosh Dasu +1 more
Vision-Language Models (VLMs) are now a core part of modern AI. Recent work proposed several visual jailbreak attacks using single/holistic images....
1 month ago cs.CV cs.AI
PDF
Benchmark HIGH
Nanda Rani, Kimberly Milner, Minghao Shao +9 more
Real-world offensive security operations are inherently open-ended: attackers explore unknown attack surfaces, revise hypotheses under uncertainty,...
1 month ago cs.CR cs.AI cs.MA
PDF
Attack HIGH
Minbeom Kim, Mihir Parmar, Phillip Wallis +5 more
AI agents equipped with tool-calling capabilities are susceptible to Indirect Prompt Injection (IPI) attacks. In this attack scenario, malicious...
1 month ago cs.CR cs.LG stat.ME
PDF
Tool HIGH
Tianyi Wang, Huawei Fan, Yuanchao Shu +2 more
Large Language Models face an emerging and critical threat known as latency attacks. Because LLM inference is inherently expensive, even modest...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Yuhao Wang, Shengfang Zhai, Guanghao Jin +3 more
Large Language Model (LLM)-based agents employ external and internal memory systems to handle complex, goal-oriented tasks, yet this exposes them to...
1 month ago cs.CR cs.AI cs.CL
PDF
Benchmark HIGH
Tianyi Wu, Mingzhe Du, Yue Liu +4 more
Large language models (LLMs) are increasingly used in software development, yet their tendency to generate insecure code remains a major barrier to...
1 month ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Abdullah Arafat Miah, Kevin Vu, Yu Bi
Spiking Neural Networks (SNNs) are energy-efficient counterparts of Deep Neural Networks (DNNs) with high biological plausibility, as information is...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Shang Liu, Hanyu Pei, Zeyan Liu
Large Language Models (LLMs) have been successful in numerous fields. Alignment is typically applied to prevent them from being used for harmful purposes....
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Zhuoheng Li, Ying Chen
Multimodal large language models (MLLMs) have advanced the capability to interpret and act on visual input in 3D environments, empowering diverse...
1 month ago cs.CV cs.AI
PDF
Attack HIGH
Mingqian Feng, Xiaodong Liu, Weiwei Yang +4 more
Multi-turn jailbreaks capture the real threat model for safety-aligned chatbots, where single-turn attacks are merely a special case. Yet existing...
Attack HIGH
Yassine Chagna, Antal Goldschmidt
This project explores large language models (LLMs) for anomaly detection across heterogeneous log sources. Traditional intrusion detection systems...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Ying Song, Balaji Palanisamy
Graph-structured data underpin a wide spectrum of modern applications. However, complex graph topologies and homophilic patterns can facilitate...
1 month ago cs.CR cs.LG
PDF
Benchmark HIGH
Li Lu, Yanjie Zhao, Hongzhou Rao +2 more
Large Language Models (LLMs) have demonstrated remarkable proficiency in vulnerability detection. However, a critical reliability gap persists:...
Attack HIGH
Mengyao Du, Han Fang, Haokai Ma +4 more
Suffix-based jailbreak attacks append an adversarial suffix, i.e., a short token sequence, to steer aligned LLMs into unsafe outputs. Since suffixes...
Attack HIGH
Haipeng Li, Rongxuan Peng, Anwei Luo +3 more
The rapid advancement of AI-Generated Content (AIGC) technologies poses significant challenges for authenticity assessment. However, existing...
1 month ago cs.CV cs.CR
PDF
Attack HIGH
Minkyoo Song, Jaehan Kim, Myungchul Kang +3 more
Graph-based retrieval-augmented generation (Graph RAG) is increasingly deployed to support LLM applications by augmenting user queries with...
Attack HIGH
Sung-Hoon Yoon, Ruizhi Qian, Minda Zhao +2 more
Large Language Models (LLMs) have become integral to many domains, making their safety a critical priority. Prior jailbreaking research has explored...
1 month ago cs.CL cs.AI cs.CR
PDF