Attack HIGH
Hao Li, Jiajun He, Guangshuo Wang +3 more
Retrieval-Augmented Generation (RAG) enhances large language models by integrating external knowledge, but reliance on proprietary or sensitive...
Attack MEDIUM
Lucas Fenaux, Christopher Srinivasa, Florian Kerschbaum
Transparency and security are both central to Responsible AI, but they may conflict in adversarial settings. We investigate the strategic effect of...
4 months ago cs.LG cs.CR cs.GT
Benchmark LOW
Xingshuang Lin, Binbin Zhao, Jinwen Wang +3 more
Smart Contract Reusable Components (SCRs) play a vital role in accelerating the development of business-specific contracts by promoting modularity and...
4 months ago cs.SE cs.CR
Survey HIGH
Gioliano de Oliveira Braga, Pedro Henrique dos Santos Rocha, Rafael Pimenta de Mattos Paixão +3 more
Wi-Fi Channel State Information (CSI) has been repeatedly proposed as a biometric modality, often with reports of high accuracy and operational...
4 months ago cs.CR cs.LG cs.NI
Benchmark MEDIUM
Yanbo Dai, Zongjie Li, Zhenlan Ji +1 more
Large language models (LLMs) have achieved remarkable success across a wide range of natural language processing tasks, demonstrating human-level...
Attack HIGH
Lama Sleem, Jerome Francois, Lujun Li +3 more
Jailbreak attacks designed to bypass safety mechanisms pose a serious threat by prompting LLMs to generate harmful or inappropriate content, despite...
4 months ago cs.CR cs.AI
Survey LOW
Shaowei Guan, Hin Chi Kwok, Ngai Fong Law +3 more
Retrieval-augmented generation (RAG) has rapidly emerged as a transformative approach for integrating large language models into clinical and...
4 months ago cs.CR cs.AI
Defense MEDIUM
Ruoxi Cheng, Haoxuan Ma, Teng Ma +1 more
Large Vision-Language Models (LVLMs) exhibit powerful reasoning capabilities but suffer from sophisticated jailbreak vulnerabilities. Fundamentally,...
Defense HIGH
Biagio Boi, Christian Esposito
Smart contracts have emerged as key components within decentralized environments, enabling the automation of transactions through self-executing...
Attack MEDIUM
Farhad Abtahi, Fernando Seoane, Iván Pau +1 more
Healthcare AI systems face major vulnerabilities to data poisoning that current defenses and regulations cannot adequately address. We analyzed eight...
4 months ago cs.CR cs.AI
Benchmark MEDIUM
Zichao Wei, Jun Zeng, Ming Wen +8 more
Software vulnerabilities are increasing at an alarming rate. However, manual patching is both time-consuming and resource-intensive, while existing...
4 months ago cs.CR cs.SE
Benchmark MEDIUM
Feilong Wang, Fuqiang Liu
The integration of large language models (LLMs) into automated driving systems has opened new possibilities for reasoning and decision-making by...
4 months ago cs.LG cs.AI cs.CR
Benchmark MEDIUM
Guangke Chen, Yuhui Wang, Shouling Ji +2 more
Modern text-to-speech (TTS) systems, particularly those built on Large Audio-Language Models (LALMs), generate high-fidelity speech that faithfully...
4 months ago cs.SD cs.AI cs.CR
Tool MEDIUM
Dennis Wei, Ronny Luss, Xiaomeng Hu +6 more
Large Language Models (LLMs) have become ubiquitous in everyday life and are entering higher-stakes applications ranging from summarizing meeting...
4 months ago cs.CL cs.LG
Benchmark MEDIUM
Fred Heiding, Simon Lermen
We present an end-to-end demonstration of how attackers can exploit AI safety failures to harm vulnerable populations: from jailbreaking LLMs to...
4 months ago cs.CR cs.AI cs.CY
Attack HIGH
Runpeng Geng, Yanting Wang, Chenlong Yin +3 more
Long-context LLMs are vulnerable to prompt injection, where an attacker can inject an instruction in a long context to induce an LLM to generate an...
4 months ago cs.CR cs.AI cs.CL
Attack HIGH
Srikant Panda, Avinash Rai
Large Language Models (LLMs) are commonly evaluated for robustness against paraphrased or semantically equivalent jailbreak prompts, yet little...
4 months ago cs.CL cs.AI
Attack HIGH
Shuaitong Liu, Renjue Li, Lijia Yu +3 more
Recent advances in Chain-of-Thought (CoT) prompting have substantially improved the reasoning capabilities of large language models (LLMs), but have...
4 months ago cs.CR cs.AI
Benchmark LOW
Yuping Yan, Yuhan Xie, Yuanshuai Li +3 more
As Multimodal Large Language Models (MLLMs) are increasingly integrated into everyday tools and intelligent agents, growing concerns have...
4 months ago cs.LG cs.CL
Attack HIGH
Yudong Yang, Xuezhen Zhang, Zhifeng Han +6 more
Recent progress in LLMs has enabled understanding of audio signals, but has also exposed new safety risks arising from complex audio inputs that are...
4 months ago cs.SD cs.AI