Attack HIGH
Kunal Mukherjee, Zulfikar Alom, Tran Gia Bao Ngo +2 more
The rise of bot accounts on social media poses significant risks to public discourse. To address this threat, modern bot detectors increasingly rely...
1 month ago cs.LG cs.AI cs.CR
Survey HIGH
Luze Sun, Alina Oprea, Eric Wong
LLM-based vulnerability detectors are increasingly deployed in security-critical code review, yet their resilience to evasion under...
1 month ago cs.CR cs.AI cs.LG
Attack HIGH
Ye Yu, Haibo Jin, Yaoning Yu +2 more
Large audio-language models increasingly operate on raw speech inputs, enabling more seamless integration across domains such as voice assistants,...
1 month ago cs.CL cs.AI cs.CR
Attack HIGH
Zhixiang Zhang, Zesen Liu, Yuchong Xie +2 more
Semantic caching has emerged as a pivotal technique for scaling LLM applications, widely adopted by major providers including AWS and Microsoft. By...
1 month ago cs.CR cs.AI
Benchmark HIGH
Yunpeng Xiong, Ting Zhang
Static Application Security Testing (SAST) tools are essential for identifying software vulnerabilities, but they often produce a high volume of...
Benchmark HIGH
Ivan K. Tung, Yu Xiang Shi, Alex Chien +2 more
Creating attack paths for cyber defence exercises requires substantial expert effort. Existing automation requires vulnerability graphs or exploit...
1 month ago cs.CR cs.AI
Benchmark HIGH
Miao Lin, Feng Yu, Rui Ning +6 more
Deep neural networks are highly susceptible to backdoor attacks, yet most defense methods to date rely on balanced data, overlooking the pervasive...
1 month ago cs.CR cs.CV cs.LG
Attack HIGH
Tanusree Debi, Wentian Zhu
Large language model (LLM) based agents are increasingly used to automate financial transactions, yet their reliance on contextual reasoning exposes...
1 month ago cs.CR cs.AI
Attack HIGH
Naen Xu, Jinghuai Zhang, Ping He +6 more
Large language models (LLMs) have been widely integrated into critical automated workflows, including contract review and job application processes....
1 month ago cs.CR cs.AI cs.CL
Attack HIGH
Aarush Noheria, Yuguang Yao
Vision-language models (VLMs) have become central to tasks such as visual question answering, image captioning, and text-to-image generation....
1 month ago cs.CV cs.AI
Tool HIGH
Chanwoo Park, Chanwoo Kim
Evasion attacks pose significant threats to AI systems, exploiting vulnerabilities in machine learning models to bypass detection mechanisms. The...
1 month ago cs.SD cs.CR eess.AS
Attack HIGH
Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer +1 more
Regression models are widely used in industrial processes, engineering, and in natural and physical sciences, yet their robustness to poisoning has...
1 month ago cs.LG cs.AI cs.CR
Survey HIGH
Pedro H. Barcha Correia, Ryan W. Achjian, Diego E. G. Caetano de Oliveira +5 more
The rapid advancement and widespread adoption of generative artificial intelligence (GenAI) and large language models (LLMs) has been accompanied by...
1 month ago cs.CR cs.AI cs.CL
Attack HIGH
Xiaogeng Liu, Xinyan Wang, Yechao Zhang +5 more
Large reasoning models (LRMs) extend large language models with explicit multi-step reasoning traces, but this capability introduces a new class of...
1 month ago cs.CR cs.AI
Attack HIGH
Ningyuan He, Ronghong Huang, Qianqian Tang +3 more
In-context learning (ICL) has become a powerful, data-efficient paradigm for text classification using large language models. However, its robustness...
Attack HIGH
Xingwei Lin, Wenhao Lin, Sicong Cao +4 more
Multi-turn jailbreak attacks have emerged as a critical threat to Large Language Models (LLMs), bypassing safety mechanisms by progressively...
1 month ago cs.CR cs.AI
Attack HIGH
Yuetian Chen, Kaiyuan Zhang, Yuntao Du +5 more
Diffusion Language Models (DLMs) represent a promising alternative to autoregressive language models, using bidirectional masked token prediction....
1 month ago cs.LG cs.AI
Attack HIGH
Md Tasnim Jawad, Mingyan Xiao, Yanzhao Wu
With the widespread adoption of Large Language Models (LLMs) and increasingly stringent privacy regulations, protecting data privacy in LLMs has...
Attack HIGH
Haonan Zhang, Dongxia Wang, Yi Liu +2 more
Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector...
1 month ago cs.LG cs.AI
Tool HIGH
Nirhoshan Sivaroopan, Kanchana Thilakarathna, Albert Zomaya +6 more
Sponge attacks increasingly threaten LLM systems by inducing excessive computation and denial of service. Existing defenses either rely on statistical filters that...
1 month ago cs.CR cs.AI