Attack HIGH
Bowen Fan, Zhilin Guo, Xunkai Li +5 more
Graph Neural Networks (GNNs) have become a pivotal framework for modeling graph-structured data, enabling a wide range of applications from social...
Attack HIGH
Xiaoxue Ren, Penghao Jiang, Kaixin Li +6 more
Web applications are prime targets for cyberattacks as gateways to critical services and sensitive data. Traditional penetration testing is costly...
5 months ago cs.CR cs.CL
Attack HIGH
Harsh Kasyap, Minghong Fang, Zhuqing Liu +2 more
Federated learning (FL) is a privacy-preserving machine learning technique that facilitates collaboration among participants across demographics. FL...
5 months ago cs.LG cs.CR
Attack MEDIUM
Deeksha Hareesha Kulal, Chidozie Princewill Arannonu, Afsah Anwar +2 more
Phishing remains a critical cybersecurity threat, especially with the advent of large language models (LLMs) capable of generating highly convincing...
Attack HIGH
Ting Li, Yang Yang, Yipeng Yu +3 more
Adversarial attacks on knowledge graph embeddings (KGE) aim to disrupt the model's link prediction ability by removing or inserting triples. A...
5 months ago cs.CL cs.CR
Attack MEDIUM
Sean Oesch, Jack Hutchins, Luke Koch +1 more
In living off the land attacks, malicious actors use legitimate tools and processes already present on a system to avoid detection. In this paper, we...
5 months ago cs.CR cs.AI
Attack MEDIUM
Rui Xu, Jiawei Chen, Zhaoxia Yin +2 more
The widespread use of large language models (LLMs) and open-source code has raised ethical and security concerns regarding the distribution and...
5 months ago cs.CR cs.AI cs.LG
Attack HIGH
Michael Schlichtkrull
When AI agents retrieve and reason over external documents, adversaries can manipulate the data they receive to subvert their behaviour. Previous...
5 months ago cs.CL cs.AI
Attack HIGH
Vasilije Stambolic, Aritra Dhar, Lukas Cavigelli
Retrieval-Augmented Generation (RAG) increases the reliability and trustworthiness of the LLM response and reduces hallucination by eliminating the...
5 months ago cs.CR cs.AI
Attack HIGH
Zonghuan Xu, Jiayu Li, Yunhan Zhao +3 more
Vision-Language-Action (VLA) models map multimodal perception and language instructions to executable robot actions, making them particularly...
5 months ago cs.CR cs.AI cs.RO
Attack MEDIUM
Zaixi Zhang, Souradip Chakraborty, Amrit Singh Bedi +16 more
The rapid adoption of generative artificial intelligence (GenAI) in the biosciences is transforming biotechnology, medicine, and synthetic biology....
5 months ago cs.CR q-bio.BM
Attack MEDIUM
Tiarnaigh Downey-Webb, Olamide Jogunola, Oluwaseun Ajao
This paper presents a systematic security assessment of four prominent Large Language Models (LLMs) against diverse adversarial attack vectors. We...
5 months ago cs.CR cs.AI cs.CY
Attack HIGH
Ming Tan, Wei Li, Hu Tao +4 more
Open-source large language models (LLMs) have demonstrated considerable dominance over proprietary LLMs in resolving natural language processing tasks, thanks...
5 months ago cs.CR cs.AI
Attack HIGH
Guan-Yan Yang, Tzu-Yu Cheng, Ya-Wen Teng +2 more
The integration of Large Language Models (LLMs) into computer applications has introduced transformative capabilities but also significant security...
5 months ago cs.CR cs.AI cs.CL
Attack HIGH
Wentian Zhu, Zhen Xiang, Wei Niu +1 more
Unlike regular tokens derived from existing text corpora, special tokens are artificially created to annotate structured conversations during the...
5 months ago cs.CR cs.AI
Attack HIGH
Yutao Wu, Xiao Liu, Yinghui Li +5 more
Knowledge poisoning poses a critical threat to Retrieval-Augmented Generation (RAG) systems by injecting adversarial content into knowledge bases,...
5 months ago cs.CL cs.AI cs.CR
Attack HIGH
Mengyao Zhao, Kaixuan Li, Lyuye Zhang +4 more
Recent advances in Large Language Models (LLMs) have brought remarkable progress in code understanding and reasoning, creating new opportunities and...
Attack HIGH
Yue Deng, Francisco Santos, Pang-Ning Tan +1 more
Deep learning based weather forecasting (DLWF) models leverage past weather observations to generate future forecasts, supporting a wide range of...
5 months ago cs.LG cs.CR stat.ML
Attack HIGH
Ruizhe Zhu
The widespread application of large vision language models has significantly raised safety concerns. In this project, we investigate text prompt...
5 months ago cs.CL cs.CV
Attack HIGH
Mikhail Terekhov, Alexander Panfilov, Daniil Dzenhaliou +4 more
AI control protocols serve as a defense mechanism to stop untrusted LLM agents from causing harm in autonomous settings. Prior work treats this as a...
5 months ago cs.LG cs.AI cs.CR