On The Dangers of Poisoned LLMs In Security Automation
Patrick Karlsen, Even Eilertsen
This paper investigates some of the risks introduced by "LLM poisoning," the intentional or unintentional introduction of malicious or biased data...
Showing 741–760 of 986 papers
Hanzhong Liang, Yue Duan, Xing Su +5 more
As the Web3 ecosystem evolves toward a multi-chain architecture, cross-chain bridges have become critical infrastructure for enabling...
Sogol Masoumzadeh
Timely identification of issue reports reflecting software vulnerabilities is crucial, particularly for Internet-of-Things (IoT) where analysis is...
Yuhan Cao, Yu Wang, Sitong Liu +3 more
The widespread adoption of Large Language Models (LLMs) through Application Programming Interfaces (APIs) induces a critical vulnerability: the...
Kasimir Schulz, Amelia Kawasaki, Leo Ring
Large language models (LLMs) are widely deployed across various applications, often with safeguards to prevent the generation of harmful or...
Ariyan Hossain, Khondokar Mohammad Ahanaf Hannan, Rakinul Haque +4 more
Gender bias in language models has gained increasing attention in the field of natural language processing. Encoder-based transformer models, which...
Yifan Xia, Guorui Chen, Wenqian Yu +3 more
Large language models (LLMs) excel in diverse applications but face dual challenges: generating harmful content under jailbreak attacks and...
Mohammed N. Swileh, Shengli Zhang
Centralized Software-Defined Networking (cSDN) offers flexible and programmable control of networks but suffers from scalability and reliability...
David Lüdke, Tom Wollschläger, Paul Ungermann +2 more
We introduce a novel framework that transforms the resource-intensive (adversarial) prompt optimization problem into an efficient, amortized...
Kathrin Grosse, Nico Ebert
Recent improvements in large language models (LLMs) have led to everyday usage of AI-based Conversational Agents (CAs). At the same time, LLMs...
Chenghao Du, Quanfeng Huang, Tingxuan Tang +3 more
Large Language Models (LLMs) have transformed software development, enabling AI-powered applications known as LLM-based agents that promise to...
Heehwan Kim, Sungjune Park, Daeseon Choi
Large Language Models (LLMs) are generally equipped with guardrails to block the generation of harmful responses. However, existing defenses always...
Arnabh Borah, Md Tanvirul Alam, Nidhi Rastogi
Security applications are increasingly relying on large language models (LLMs) for cyber threat detection; however, their opaque reasoning often...
Zishuo Zheng, Vidhisha Balachandran, Chan Young Park +2 more
As large language model (LLM) based systems take on high-stakes roles in real-world decision-making, they must reconcile competing instructions from...
Shaked Zychlinski, Yuval Kainan
Large Language Models (LLMs) are susceptible to jailbreak attacks where malicious prompts are disguised using ciphers and character-level encodings...
Yingjia Wang, Ting Qiao, Xing Liu +3 more
The rapid advancement of deep neural networks (DNNs) heavily relies on large-scale, high-quality datasets. However, unauthorized commercial use of...
Haohua Duan, Liyao Xiang, Xin Zhang
Watermarking schemes for large language models (LLMs) have been proposed to identify the source of the generated text, mitigating the potential...
Lisha Shuai, Jiuling Dong, Nan Zhang +5 more
Local Differential Privacy (LDP) is a widely adopted privacy-protection model in the Internet of Things (IoT) due to its lightweight, decentralized,...
Weifei Jin, Yuxin Cao, Junjie Su +5 more
Recent advances in Audio-Language Models (ALMs) have significantly improved multimodal understanding capabilities. However, the introduction of the...
Zheng Zhang, Haonan Li, Xingyu Li +2 more
Bug bisection has been an important security task that aims to understand the range of software versions impacted by a bug, i.e., identifying the...