AI Security Research
2,077+ academic papers on AI security, attacks, and defenses
ChenYu Wu, Yi Wang, Yang Liao
Large language models (LLMs) are increasingly vulnerable to multi-turn jailbreak attacks, where adversaries iteratively elicit harmful behaviors that...
5 months ago cs.CR cs.AI
Zixuan Liu, Yi Zhao, Zhuotao Liu +4 more
Machine Learning (ML)-based malicious traffic detection is a promising security paradigm. It outperforms rule-based traditional detection by...
Caelin Kaplan, Alexander Warnecke, Neil Archibald
AI models are being increasingly integrated into real-world systems, raising significant concerns about their safety and security. Consequently, AI...
5 months ago cs.CR cs.AI
Zicheng Liu, Lige Huang, Jie Zhang +3 more
The increasing autonomy of Large Language Models (LLMs) necessitates a rigorous evaluation of their potential to aid in cyber offense. Existing...
5 months ago cs.CR cs.AI
Pengyu Zhu, Lijun Li, Yaxing Lyu +3 more
LLM-based multi-agent systems (MAS) are increasingly integrated into next-generation applications, but their safety against backdoor attacks...
Hyeseon An, Shinwoo Park, Suyeon Woo +1 more
The promise of LLM watermarking rests on a core assumption that a specific watermark proves authorship by a specific model. We demonstrate that this...
5 months ago cs.CR cs.AI
Dennis Rall, Bernhard Bauer, Mohit Mittal +1 more
Large language models (LLMs) are now routinely used to autonomously execute complex tasks, from natural language processing to dynamic workflows like...
5 months ago cs.CR cs.CL
Jonathan Sneh, Ruomei Yan, Jialin Yu +6 more
As LLMs increasingly power agents that interact with external tools, tool use has become an essential mechanism for extending their capabilities....
5 months ago cs.CR cs.AI
Shoumik Saha, Jifan Chen, Sam Mayers +3 more
Code-capable large language model (LLM) agents are increasingly embedded into software engineering workflows where they can read, write, and execute...
5 months ago cs.CR cs.AI
Jing-Jing Li, Jianfeng He, Chao Shang +6 more
As LLMs advance into autonomous agents with tool-use capabilities, they introduce security challenges that extend beyond traditional content-based...
5 months ago cs.CR cs.AI cs.CL
Petar Radanliev
This study presents a structured approach to evaluating vulnerabilities within quantum cryptographic protocols, focusing on the BB84 quantum key...
6 months ago cs.CR cs.AI cs.NI
Ping He, Changjiang Li, Binbin Zhao +2 more
The remarkable capability of large language models (LLMs) has led to the wide application of LLM-based agents in various domains. To standardize...
6 months ago cs.CR cs.AI cs.SE
Adam Swanda, Amy Chang, Alexander Chen +3 more
The widespread adoption of Large Language Models (LLMs) has revolutionized AI deployment, enabling autonomous and semi-autonomous applications across...
6 months ago cs.CR cs.AI