Attack HIGH
Jie Ma, Junqing Zhang, Guanxiong Shen +2 more
Radio frequency fingerprint identification (RFFI) is an emerging technique for the lightweight authentication of wireless Internet of things (IoT)...
3 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Jamal Al-Karaki, Muhammad Al-Zafar Khan, Rand Derar Mohammad Al Athamneh
The scarcity of cyberattack data hinders the development of robust intrusion detection systems. This paper introduces PHANTOM, a novel adversarial...
3 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Jing Cui, Yufei Han, Jianbin Jiao +1 more
Backdoor attacks embed malicious behaviors into Large Language Models (LLMs), enabling adversaries to trigger harmful outputs or bypass safety...
3 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Neha, Tarunpreet Bhatia
Intrusion Detection Systems (IDS) are critical components in safeguarding 5G/6G networks from both internal and external cyber threats. While...
3 months ago cs.CR cs.LG
PDF
Attack HIGH
Khurram Khalil, Khaza Anuarul Hoque
Generative Artificial Intelligence models, such as Large Language Models (LLMs) and Large Vision Models (VLMs), exhibit state-of-the-art performance...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Mohamed Afane, Abhishek Satyam, Ke Chen +3 more
Backdoor attacks create significant security threats to language models by embedding hidden triggers that manipulate model behavior during inference,...
3 months ago cs.CR cs.CL
PDF
Attack HIGH
Reachal Wang, Yuqi Jia, Neil Zhenqiang Gong
Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended...
Attack MEDIUM
Miranda Christ, Noah Golowich, Sam Gunn +2 more
Watermarks are an essential tool for identifying AI-generated content. Recently, Christ and Gunn (CRYPTO '24) introduced pseudorandom...
Attack HIGH
Joshua Ward, Bochao Gu, Chi-Hua Wang +1 more
Large Language Models (LLMs) have recently demonstrated remarkable performance in generating high-quality tabular synthetic data. In practice, two...
3 months ago cs.LG cs.AI
PDF
Attack MEDIUM
Botao 'Amber' Hu, Bangdao Chen
The emerging "agentic web" envisions large populations of autonomous agents coordinating, transacting, and delegating across open networks. Yet many...
3 months ago cs.CY cs.MA
PDF
Attack HIGH
Yinan Zhong, Qianhao Miao, Yanjiao Chen +3 more
Large Language Models (LLMs) have been integrated into many applications (e.g., web agents) to perform more sophisticated tasks. However,...
Attack HIGH
Tailun Chen, Yu He, Yan Wang +9 more
Retrieval-Augmented Generation (RAG) systems enhance LLMs with external knowledge but introduce a critical attack surface: corpus poisoning. While...
Attack HIGH
Zafaryab Haider, Md Hafizur Rahman, Shane Moeykens +2 more
Hard-to-detect hardware bit flips, from either malicious circuitry or bugs, have already been shown to make transformers vulnerable in non-generative...
3 months ago cs.LG cs.AI
PDF
Attack LOW
Sampriti Soor, Suklav Ghosh, Arijit Sur
Language models are vulnerable to short adversarial suffixes that can reliably alter predictions. Previous works usually find such suffixes with...
Attack HIGH
Stephan Carney, Soham Hans, Sofia Hirschmann +4 more
Adversaries (hackers) attempting to infiltrate networks frequently face uncertainty in their operational environments. This research explores the...
3 months ago cs.CR cs.HC
PDF
Attack HIGH
Xiqiao Xiong, Ouxiang Li, Zhuo Liu +5 more
Large language models have seen widespread adoption, yet they remain vulnerable to multi-turn jailbreak attacks, threatening their safe deployment....
3 months ago cs.AI cs.LG
PDF
Attack LOW
Ziming Hong, Tianyu Huang, Runnan Chen +4 more
Recent studies have extended diffusion-based instruction-driven 2D image editing pipelines to 3D Gaussian Splatting (3DGS), enabling faithful...
3 months ago cs.CV cs.CR cs.LG
PDF
Attack HIGH
Max Zhang, Derek Liu, Kai Zhang +2 more
Large language models (LLMs) are increasingly deployed worldwide, yet their safety alignment remains predominantly English-centric. This allows for...
Attack HIGH
Yunzhe Li, Jianan Wang, Hongzi Zhu +3 more
Large Language Models (LLMs) have become foundational components in a wide range of applications, including natural language understanding and...
3 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Richard Young
Despite substantial investment in safety alignment, the vulnerability of large language models to sophisticated multi-turn adversarial attacks...
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial