Tool HIGH
Yu He, Haozhe Zhu, Yiming Li +4 more
LLM agents are highly vulnerable to Indirect Prompt Injection (IPI), where adversaries embed malicious directives in untrusted tool outputs to hijack...
Tool MEDIUM
Panagiotis Georgios Pennas, Konstantinos Papaioannou, Marco Guarnieri +1 more
Large Language Models (LLMs) rely on optimizations like Automatic Prefix Caching (APC) to accelerate inference. APC works by reusing previously...
2 weeks ago cs.CR cs.DC cs.LG
Tool MEDIUM
Zhengyang Shan, Jiayun Xin, Yue Zhang +1 more
Code agents powered by large language models can execute shell commands on behalf of users, introducing severe security vulnerabilities. This paper...
Tool MEDIUM
Shriti Priya, Julian James Stephen, Arjun Natarajan
Enterprises and organizations today increasingly deploy in-house, cloud-based applications and APIs for internal operations or external customers....
Tool LOW
Eeham Khan, Luis Rodriguez, Marc Queudot
Retrieval-Augmented Generation (RAG) significantly improves the factuality of Large Language Models (LLMs), yet standard pipelines often lack...
Tool MEDIUM
Yinpeng Wu, Yitong Chen, Lixiang Wang +3 more
Device-side Large Language Models (LLMs) have witnessed explosive growth, offering higher privacy and availability compared to cloud-side LLMs....
2 weeks ago cs.CR cs.LG cs.OS
Tool LOW
Tzafrir Rehan
We present Test-Driven AI Agent Definition (TDAD), a methodology that treats agent prompts as compiled artifacts: engineers provide behavioral...
2 weeks ago cs.SE cs.AI
Tool LOW
JV Roig
How much do large language models actually hallucinate when answering questions grounded in provided documents? Despite the critical importance of...
2 weeks ago cs.CL cs.AI
Tool MEDIUM
Yuhang Huang, Boyang Ma, Biwei Yan +5 more
The Model Context Protocol (MCP) is an open and standardized interface that enables large language models (LLMs) to interact with external tools and...
2 weeks ago cs.CR cs.AI
Tool MEDIUM
Neha Nagaraja, Hayretdin Bahsi
Large Language Models (LLMs) are increasingly integrated into safety-critical workflows, yet existing security analyses remain fragmented and often...
2 weeks ago cs.CR cs.AI
Tool MEDIUM
Punyajoy Saha, Sudipta Halder, Debjyoti Mondal +1 more
Safety alignment is critical for deploying large language models (LLMs) in real-world applications, yet most existing approaches rely on large...
2 weeks ago cs.CL cs.AI cs.LG
Tool HIGH
Touseef Hasan, Blessing Airehenbuwa, Nitin Pundir +2 more
Large language models (LLMs) have shown remarkable capabilities in natural language processing tasks, yet their application in hardware security...
2 weeks ago cs.CR cs.AI
Tool LOW
Furkan Mumcu, Yasin Yilmaz
As Large Language Models (LLMs) transition into autonomous multi-agent ecosystems, robust minimax training becomes essential yet remains prone to...
3 weeks ago cs.LG cs.AI cs.CR
Tool HIGH
Max Landauer, Wolfgang Hotwagner, Thorina Boenke +2 more
Log data are essential for intrusion detection and forensic investigations. However, manual log analysis is tedious due to high data volumes,...
3 weeks ago cs.CR cs.AI
Tool MEDIUM
Arther Tian, Alex Ding, Frank Chen +2 more
Decentralized large language model (LLM) inference networks can pool heterogeneous compute to scale serving, but they require lightweight and...
3 weeks ago cs.LG cs.AI cs.CR
Tool MEDIUM
Neha Nagaraja, Hayretdin Bahsi
While incorporating LLMs into systems offers significant benefits in critical application areas such as healthcare, new security challenges emerge...
3 weeks ago cs.CR cs.AI
Tool LOW
Subramanyam Sahoo
Agentic AI systems - capable of goal interpretation, world modeling, planning, tool use, long-horizon operation, and autonomous coordination -...
3 weeks ago cs.CY cs.AI
Tool MEDIUM
Romina Omidi, Yun Dong, Binghui Wang
Google's SynthID-Text, the first production-ready generative watermarking system for large language models, designs a novel Tournament-based method...
3 weeks ago cs.CR cs.AI
Tool MEDIUM
Zixuan Xu, Tiancheng He, Huahui Yi +7 more
Vision-language models remain susceptible to multimodal jailbreaks and over-refusal because safety hinges on both visual evidence and user intent,...
Tool MEDIUM
Bhanu Pallakonda, Mikkel Hindsbo, Sina Ehsani +1 more
The proliferation of open-weight Large Language Models (LLMs) has democratized agentic AI, yet fine-tuned weights are frequently shared and adopted...
3 weeks ago cs.CR cs.AI