Attack HIGH
Omer Gazit, Yael Itzhakev, Yuval Elovici +1 more
Radio frequency (RF) based systems are increasingly used to detect drones by analyzing their RF signal patterns, converting them into spectrogram...
3 months ago cs.CR cs.LG
PDF
Attack HIGH
Linzhi Chen, Yang Sun, Hongru Wei +1 more
Low-Rank Adaptation (LoRA) has emerged as an efficient method for fine-tuning large language models (LLMs) and is widely adopted within the...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Sameera K. M., Serena Nicolazzo, Antonino Nocera +2 more
Federated Learning (FL) has recently emerged as a revolutionary approach to collaboratively training Machine Learning models. In particular, it enables...
3 months ago cs.CR cs.LG
PDF
Attack HIGH
Akshaj Prashanth Rao, Advait Singh, Saumya Kumaar Saksena +1 more
Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)-based systems. We present PromptScreen,...
3 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Jianyi Zhang, Shizhao Liu, Ziyin Zhou +1 more
The rapid advancement of large language models (LLMs) has intensified concerns about the robustness of their safety alignment. While existing...
Attack HIGH
Huixin Zhan
Genomic Foundation Models (GFMs), such as Evolutionary Scale Modeling (ESM), have demonstrated remarkable success in variant effect prediction....
3 months ago cs.CR cs.LG q-bio.QM
PDF
Attack HIGH
Kai Hu, Abhinav Aggarwal, Mehran Khodabandeh +6 more
This paper introduces Jailbreak-Zero, a novel red teaming methodology that shifts the paradigm of Large Language Model (LLM) safety evaluation from a...
3 months ago cs.CL cs.CR cs.LG
PDF
Attack HIGH
Hao Li, Yubing Ren, Yanan Cao +4 more
With the rapid development of cloud-based services, large language models (LLMs) have become increasingly accessible through various web platforms....
3 months ago cs.CR cs.CL
PDF
Attack HIGH
Joao Queiroz
Recent evidence shows that the versification of prompts constitutes a highly effective adversarial mechanism against aligned LLMs. The study...
3 months ago cs.CL cs.AI
PDF
Attack HIGH
Pablo Montaña-Fernández, Ines Ortega-Fernandez
Federated Learning is a machine learning setting that reduces direct data exposure, improving the privacy guarantees of machine learning models. Yet,...
3 months ago cs.LG cs.CR
PDF
Attack HIGH
Xingfu Zhou, Pengfei Wang
Large Language Model (LLM) agents relying on external retrieval are increasingly deployed in high-stakes environments. While existing adversarial...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Yunhao Yao, Zhiqiang Wang, Haoran Cheng +3 more
The evolution of Large Language Models (LLMs) into Agentic AI has established the Model Context Protocol (MCP) as the standard for connecting...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Shuxin Zhao, Bo Lang, Nan Xiao +1 more
Object detection models deployed in real-world applications such as autonomous driving face serious threats from backdoor attacks. Despite their...
3 months ago cs.CV cs.CR
PDF
Attack HIGH
Sabrine Ennaji, Elhadj Benkhelifa, Luigi Vincenzo Mancini
Machine learning-based intrusion detection systems are increasingly targeted by black-box adversarial attacks, where attackers craft evasive inputs...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Karina Chichifoi, Fabio Merizzi, Michele Colajanni
Deep learning and federated learning (FL) are becoming powerful partners for next-generation weather forecasting. Deep learning enables...
3 months ago cs.LG cs.CR
PDF
Attack HIGH
Md. Hasib Ur Rahman
As Large Language Models (LLMs) become ubiquitous, the challenge of securing them against adversarial "jailbreaking" attacks has intensified. Current...
3 months ago cs.LG cs.AI
PDF
Attack HIGH
Yixin Tan, Zhe Yu, Jun Sakuma
Finetuning pretrained large language models (LLMs) has become the standard paradigm for developing downstream applications. However, its security...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Safwan Shaheer, G. M. Refatul Islam, Mohammad Rafid Hamid +3 more
Prompt injection attacks can compromise the security and stability of critical systems, from infrastructure to large web applications. This work...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Peichun Hua, Hao Li, Shanghao Shi +2 more
Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both...
3 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Jie Ma, Junqing Zhang, Guanxiong Shen +2 more
Radio frequency fingerprint identification (RFFI) is an emerging technique for the lightweight authentication of wireless Internet of Things (IoT)...
3 months ago cs.CR cs.LG
PDF