The LLMbda Calculus: AI Agents, Conversations, and Information Flow
Zac Garby, Andrew D. Gordon, David Sands
A conversation with a large language model (LLM) is a sequence of prompts and responses, with each response generated from the preceding...
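This first abstract frames an LLM conversation as a sequence of prompts and responses, with each response generated from everything that came before. As a minimal illustration only (not from the paper; the Turn/Conversation types and the stub model below are hypothetical), that structure can be sketched roughly as follows:

    from dataclasses import dataclass, field

    @dataclass
    class Turn:
        prompt: str
        response: str

    @dataclass
    class Conversation:
        turns: list[Turn] = field(default_factory=list)

        def step(self, prompt: str, model) -> str:
            # Each response is generated from the entire preceding
            # conversation: all earlier prompts and responses, plus
            # the new prompt.
            context = [(t.prompt, t.response) for t in self.turns]
            response = model(context, prompt)
            self.turns.append(Turn(prompt, response))
            return response

    if __name__ == "__main__":
        # Stub "model" that just echoes the turn index.
        echo = lambda context, prompt: f"reply #{len(context) + 1} to: {prompt}"
        conv = Conversation()
        print(conv.step("Hello", echo))
        print(conv.step("And now?", echo))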
Natalie Shapira, Chris Wendler, Avery Yen +35 more
We report an exploratory red-teaming study of autonomous language-model-powered agents deployed in a live laboratory environment with persistent...
Xunzhuo Liu, Huamin Chen, Samzong Lu +27 more
As large language models (LLMs) diversify across modalities, capabilities, and cost profiles, the problem of intelligent request routing -- selecting...
Kaiwen Wang, Xiaolin Chang, Yuehan Dong +1 more
Secure comparison is a fundamental primitive in multi-party computation, supporting privacy-preserving applications such as machine learning and data...
Diego Soi, Silvia Lucia Sanna, Lorenzo Pisu +2 more
In recent years, stealthy Android malware has increasingly adopted sophisticated techniques to bypass automatic detection mechanisms and harden...
Justin Albrethsen, Yash Datta, Kunal Kumar +1 more
While Large Language Model (LLM) capabilities have scaled, safety guardrails remain largely stateless, treating multi-turn dialogues as a series of...
Nils Palumbo, Sarthak Choudhary, Jihye Choi +2 more
LLM-based agents are increasingly being deployed in contexts requiring complex authorization policies: customer service protocols, approval...
Yuval Felendler, Parth A. Gandhi, Idan Habler +2 more
Model Context Protocols (MCPs) provide a unified platform for agent systems to discover, select, and orchestrate tools across heterogeneous execution...
Varun Pratap Bhardwaj
We present SuperLocalMemory, a local-first memory system for multi-agent AI that defends against OWASP ASI06 memory poisoning through architectural...
Chengzhi Hu, Jonas Dornbusch, David Lüdke +2 more
Adversarial training for LLMs is one of the most promising methods to reliably improve robustness against adversaries. However, despite significant...
Yohan Lee, Jisoo Jang, Seoyeon Choi +2 more
Tool-using LLM agents increasingly coordinate real workloads by selecting and chaining third-party tools based on text-visible metadata such as tool...
Zhenhong Zhou, Yuanhe Zhang, Hongwei Cai +6 more
The Model Context Protocol (MCP) standardizes tool use for LLM-based agents and enables third-party servers. This openness introduces a security...
Mario Marín Caballero, Miguel Betancourt Alonso, Daniel Díaz-López +3 more
The most valuable asset of any cloud-based organization is data, which is increasingly exposed to sophisticated cyberattacks. Until recently, the...
Akshat Naik, Jay Culligan, Yarin Gal +4 more
As Large Language Model (LLM) agents become more capable, their coordinated use in the form of multi-agent systems is anticipated to emerge as a...
Yiran Gao, Kim Hammar, Tao Li
Rapidly evolving cyberattacks demand incident response systems that can autonomously learn and adapt to changing threats. Prior work has extensively...
Oguzhan Baser, Elahe Sadeghi, Eric Wang +5 more
Most large language models (LLMs) run on external clouds: users send a prompt, pay for inference, and must trust that the remote GPU executes the LLM...
Abhishek Saini, Haolin Jiang, Hang Liu
The deployment of large language models (LLMs) on third-party devices requires new ways to protect model intellectual property. While Trusted...
Zhenyu Xu, Victor S. Sheng
Protecting the intellectual property of large language models (LLMs) is a critical challenge due to the proliferation of unauthorized derivative...
Benjamin Livshits
We argue that when it comes to producing secure code with AI, the prevailing "fighting fire with fire" approach -- using probabilistic AI-based...
Zhiyu Sun, Minrui Luo, Yu Wang +2 more
Large language models (LLMs) are pretrained on corpora containing trillions of tokens and, therefore, inevitably memorize sensitive information...