Tool HIGH
Charoes Huang, Xin Huang, Amin Milani Fard
Prompt injection is listed as the number-one vulnerability class in the OWASP Top 10 for LLM Applications that can subvert LLM guardrails, disclose...
2 days ago cs.CR cs.SE
PDF
Tool HIGH
Md Takrim Ul Alam, Akif Islam, Mohd Ruhul Ameen +2 more
Large language models (LLMs) deployed behind APIs and retrieval-augmented generation (RAG) stacks are vulnerable to prompt injection attacks that may...
Tool HIGH
Yihao Zhang, Zeming Wei, Xiaokun Luan +7 more
Autonomous LLM-based agents increasingly operate as long-running processes forming densely interconnected multi-agent ecosystems, whose security...
1 week ago cs.CR cs.AI cs.LG
PDF
Tool HIGH
Sarbartha Banerjee, Prateek Sahu, Anjo Vahldiek-Oberwagner +2 more
Rapid progress in generative AI has given rise to Compound AI systems - pipelines comprised of multiple large language models (LLM), software tools...
1 week ago cs.CR cs.AI
PDF
Tool HIGH
Xiangwen Wang, Ananth Balashankar, Varun Chandrasekaran
Large language models remain vulnerable to jailbreak attacks, yet we still lack a systematic understanding of how jailbreak success scales with...
2 weeks ago cs.LG cs.CR
PDF
Tool HIGH
Yu He, Haozhe Zhu, Yiming Li +4 more
LLM agents are highly vulnerable to Indirect Prompt Injection (IPI), where adversaries embed malicious directives in untrusted tool outputs to hijack...
Tool HIGH
Touseef Hasan, Blessing Airehenbuwa, Nitin Pundir +2 more
Large language models (LLMs) have shown remarkable capabilities in natural language processing tasks, yet their application in hardware security...
2 weeks ago cs.CR cs.AI
PDF
Tool HIGH
Max Landauer, Wolfgang Hotwagner, Thorina Boenke +2 more
Log data are essential for intrusion detection and forensic investigations. However, manual log analysis is tedious due to high data volumes,...
3 weeks ago cs.CR cs.AI
PDF
Tool HIGH
Xiaoyi Pang, Xuanyi Hao, Pengyu Liu +3 more
Recent intelligent systems integrate powerful Large Language Models (LLMs) through APIs, but their trustworthiness may be critically undermined by...
3 weeks ago cs.CR cs.AI
PDF
Tool HIGH
Xinfeng Li, Shenyu Dai, Kelong Zheng +4 more
Large language model (LLM) agents are rapidly becoming trusted copilots in high-stakes domains like software development and healthcare. However,...
4 weeks ago cs.HC cs.AI cs.CR
PDF
Tool HIGH
Che Wang, Jiaming Zhang, Ziqi Zhang +6 more
The integration of external data services (e.g., Model Context Protocol, MCP) has made large language model-based agents increasingly powerful for...
4 weeks ago cs.CR cs.AI
PDF
Tool HIGH
Ian Steenstra, Paola Pedrelli, Weiyan Shi +2 more
Large Language Models (LLMs) are increasingly utilized for mental health support; however, current safety benchmarks often fail to detect the...
1 month ago cs.CL cs.AI cs.CY
PDF
Tool HIGH
Xingyu Shen, Tommy Duong, Xiaodong An +6 more
Age estimation systems are increasingly deployed as gatekeepers for age-restricted online content, yet their robustness to cosmetic modifications has...
1 month ago cs.CV cs.CR cs.LG
PDF
Tool HIGH
Phan The Duy, Nghi Hoang Khoa, Nguyen Tran Anh Quan +3 more
The increasing deployment of Federated Learning (FL) in Intrusion Detection Systems (IDS) introduces new challenges related to data privacy,...
1 month ago cs.CR cs.AI
PDF
Tool HIGH
Doron Shavit
Jailbreak prompts are a practical and evolving threat to large language models (LLMs), particularly in agentic systems that execute tools over...
1 month ago cs.CR cs.AI
PDF
Tool HIGH
Yuepeng Hu, Yuqi Jia, Mengyuan Li +2 more
In a malicious tool attack, an attacker uploads a malicious tool to a distribution platform; once a user installs the tool and the LLM agent selects...
Tool HIGH
Hayfa Dhabhi, Kashyap Thimmaraju
Large Language Models (LLMs) deploy safety mechanisms to prevent harmful outputs, yet these defenses remain vulnerable to adversarial prompts. While...
1 month ago cs.CR cs.AI cs.CY
PDF
Tool HIGH
Xiaoxu Peng, Dong Zhou, Jianwen Zhang +3 more
Vision Language Models (VLMs) have advanced perception in autonomous driving (AD), but they remain vulnerable to adversarial threats. These risks...
1 month ago cs.CV eess.IV
PDF
Tool HIGH
Tianyi Wang, Huawei Fan, Yuanchao Shu +2 more
Large Language Models face an emerging and critical threat known as latency attacks. Because LLM inference is inherently expensive, even modest...
1 month ago cs.CR cs.AI
PDF