MalTool: Malicious Tool Attacks on LLM Agents
Yuepeng Hu, Yuqi Jia, Mengyuan Li +2 more
In a malicious tool attack, an attacker uploads a malicious tool to a distribution platform; once a user installs the tool and the LLM agent selects...
Hayfa Dhabhi, Kashyap Thimmaraju
Large Language Models (LLMs) employ safety mechanisms to prevent harmful outputs, yet these defenses remain vulnerable to adversarial prompts. While...
Herman Errico
As artificial intelligence systems evolve from passive assistants into autonomous agents capable of executing consequential actions, the security...
Xiaoxu Peng, Dong Zhou, Jianwen Zhang +3 more
Vision Language Models (VLMs) have advanced perception in autonomous driving (AD), but they remain vulnerable to adversarial threats. These risks...
Tianyi Wang, Huawei Fan, Yuanchao Shu +2 more
Large Language Models face an emerging and critical class of threats known as latency attacks. Because LLM inference is inherently expensive, even modest...
Juefei Pu, Xingyu Li, Zhengchuan Liang +5 more
Autonomous large language model (LLM) based systems have recently shown promising results across a range of cybersecurity tasks. However, there is no...
Saad Hossain, Tom Tseng, Punya Syon Pandey +8 more
As increasingly capable open-weight large language models (LLMs) are deployed, improving their tamper resistance against unsafe modifications,...
Guowei Guan, Yurong Hao, Jiaming Zhang +6 more
Multimodal large language models (MLLMs) are pushing recommender systems (RecSys) toward content-grounded retrieval and ranking via cross-modal...
Guangwei Zhang, Jianing Zhu, Cheng Qian +12 more
We present Copyright Detective, the first interactive forensic system for detecting, analyzing, and visualizing potential copyright risks in LLM...
Gautam Savaliya, Robert Aufschläger, Abhishek Subedi +2 more
Artificial intelligence systems introduce complex privacy risks throughout their lifecycle, especially when processing sensitive or high-dimensional...
Jiaqi Gao, Zijian Zhang, Yuqiang Sun +5 more
Business logic vulnerabilities have become one of the most damaging yet least understood classes of smart contract vulnerabilities. Unlike...
Alsharif Abuadbba, Nazatul Sultan, Surya Nepal +1 more
AI is moving from domain-specific autonomy in closed, predictable settings to large-language-model-driven agents that plan and act in open,...
Zehua Cheng, Jianwei Yang, Wei Dai +1 more
Large Language Models (LLMs) remain vulnerable to adaptive jailbreaks that easily bypass empirical defenses like GCG. We propose a framework for...
Weizhe Tang, Junwei You, Jiaxi Liu +5 more
End-to-end autonomous driving models increasingly benefit from large vision-language models for semantic understanding, yet ensuring safe and...
Haoran Ou, Kangjie Chen, Gelei Deng +4 more
Fact-checking systems with search-enabled large language models (LLMs) have shown strong potential for verifying claims by dynamically retrieving...
Naen Xu, Hengyu An, Shuo Shi +7 more
Recent advancements in large language models (LLMs) have significantly enhanced the capabilities of collaborative multi-agent systems, enabling them...
Saeid Jamshidi, Kawser Wazed Nafi, Arghavan Moradi Dakhel +3 more
Large Language Models (LLMs) are increasingly adopted in sensitive domains such as healthcare and financial institutions' data analytics; however,...
Waleed Khan Mohammed, Zahirul Arief Irfan Bin Shahrul Anuar, Mousa Sufian Mousa Mitani +2 more
Advanced Persistent Threats (APTs) are among the most challenging cyberattacks to detect. They are carried out by highly skilled attackers who...
Chanwoo Park, Chanwoo Kim
Evasion attacks pose significant threats to AI systems, exploiting vulnerabilities in machine learning models to bypass detection mechanisms. The...
Xiang Zheng, Yutao Wu, Hanxun Huang +5 more
Autonomous code agents built on large language models are reshaping software and AI development through tool use, long-horizon reasoning, and...