Large-scale online deanonymization with LLMs
Simon Lermen, Daniel Paleka, Joshua Swanson +3 more
We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News...
Nils Palumbo, Sarthak Choudhary, Jihye Choi +2 more
LLM-based agents are increasingly being deployed in contexts requiring complex authorization policies: customer service protocols, approval...
Michael Cunningham
We present a practical system for privacy-aware large language model (LLM) inference that splits a transformer between a trusted local GPU and an...
Nivya Talokar, Ayush K Tarun, Murari Mandal +2 more
LLM-based agents execute real-world workflows via tools and memory. These affordances also enable ill-intentioned adversaries to use these agents to...
Johannes Bertram, Jonas Geiping
We introduce NESSiE, the NEceSsary SafEty benchmark for large language models (LLMs). With minimal test cases of information and access security,...
Ahmed Ryan, Ibrahim Khalil, Abdullah Al Jahid +4 more
The prevalence of malicious packages in open-source repositories, such as PyPI, poses a critical threat to the software supply chain. While Large...
Yuval Felendler, Parth A. Gandhi, Idan Habler +2 more
Model Context Protocols (MCPs) provide a unified platform for agent systems to discover, select, and orchestrate tools across heterogeneous execution...
Shahriar Golchin, Marc Wetter
We systematically evaluate the quality of widely used AI safety datasets from two perspectives: in isolation and in practice. In isolation, we...
Haodong Zhao, Jinming Hu, Gongshen Liu
Federated learning security research has predominantly focused on backdoor threats from a minority of malicious clients that intentionally corrupt...
Varun Pratap Bhardwaj
We present SuperLocalMemory, a local-first memory system for multi-agent AI that defends against OWASP ASI06 memory poisoning through architectural...
Chengzhi Hu, Jonas Dornbusch, David Lüdke +2 more
Adversarial training for LLMs is one of the most promising methods to reliably improve robustness against adversaries. However, despite significant...
David Puertolas Merenciano, Ekaterina Vasyagina, Raghav Dixit +4 more
LoRA adapters let users fine-tune large language models (LLMs) efficiently. However, LoRA adapters are shared through open repositories like Hugging...
Yohan Lee, Jisoo Jang, Seoyeon Choi +2 more
Tool-using LLM agents increasingly coordinate real workloads by selecting and chaining third-party tools based on text-visible metadata such as tool...
Tianyu Chen, Dongrui Liu, Xia Hu +2 more
Clawdbot is a self-hosted, tool-using personal AI agent with a broad action space spanning local execution and web-mediated workflows, which raises...
Zhenhong Zhou, Yuanhe Zhang, Hongwei Cai +6 more
The Model Context Protocol (MCP) standardizes tool use for LLM-based agents and enables third-party servers. This openness introduces a security...
Matic Korun
We propose a geometric taxonomy of large language model hallucinations based on observable signatures in token embedding cluster structure. By...
Max Fomin
Detecting prompt injection and jailbreak attacks is critical for deploying LLM-based agents safely. As agents increasingly process untrusted data...
Mario Marín Caballero, Miguel Betancourt Alonso, Daniel Díaz-López +3 more
The most valuable asset of any cloud-based organization is data, which is increasingly exposed to sophisticated cyberattacks. Until recently, the...
Mohamed Shaaban, Mohamed Elmahallawy
Federated learning (FL) enables collaborative training across organizational silos without sharing raw data, making it attractive for...
Akshat Naik, Jay Culligan, Yarin Gal +4 more
As Large Language Model (LLM) agents become more capable, their coordinated use in the form of multi-agent systems is anticipated to emerge as a...