Attack HIGH
Hamin Koo, Minseon Kim, Jaehyung Kim
Identifying the vulnerabilities of large language models (LLMs) is crucial for improving their safety by addressing inherent weaknesses. Jailbreaks,...
Survey HIGH
Qin Zhou, Zhexin Zhang, Zhi Li +1 more
With the rapid advancement of AI models, their deployment across diverse tasks has become increasingly widespread. A notable emerging application is...
4 months ago cs.CL cs.CR
Tool HIGH
Minseok Kim, Hankook Lee, Hyungjoon Koo
Large language models (LLMs) are reshaping numerous facets of our daily lives, leading to widespread adoption as web-based services. Despite their...
4 months ago cs.CR cs.AI cs.IR
Attack HIGH
Xin Liu, Aoyang Zhou
Visual-Language Pre-training (VLP) models have achieved significant performance across various downstream tasks. However, they remain vulnerable to...
4 months ago cs.CV cs.AI
Attack HIGH
Berk Atil, Rebecca J. Passonneau, Fred Morstatter
Large language models (LLMs) undergo safety alignment after training and tuning, yet recent work shows that safety can be bypassed through jailbreak...
Attack HIGH
Peng Ding, Jun Kuang, Wen Sun +5 more
Large language models (LLMs) remain vulnerable to jailbreaking attacks despite their impressive capabilities. Investigating these weaknesses is...
Attack HIGH
Phil Blandfort, Robert Graham
Activation probes are attractive monitors for AI systems due to low cost and latency, but their real-world robustness remains underexplored. We ask:...
4 months ago cs.LG cs.AI
Attack HIGH
Ruofan Liu, Yun Lin, Zhiyong Huang +1 more
Large language models (LLMs) are increasingly integrated into IT infrastructures, where they process user data according to predefined instructions....
4 months ago cs.CR cs.AI
Attack HIGH
Xin Yao, Haiyang Zhao, Yimin Chen +3 more
The Contrastive Language-Image Pretraining (CLIP) model has significantly advanced vision-language modeling by aligning image-text pairs from...
4 months ago cs.CV cs.CR cs.LG
Attack HIGH
Kayua Oleques Paim, Rodrigo Brandao Mansilha, Diego Kreutz +2 more
The rapid proliferation of Large Language Models (LLMs) has raised significant concerns about their security against adversarial attacks. In this...
4 months ago cs.CR cs.AI cs.LG
Defense HIGH
Md Abdul Hannan, Ronghao Ni, Chi Zhang +3 more
Large language models (LLMs) have demonstrated impressive capabilities across a wide range of coding tasks, including summarization, translation,...
4 months ago cs.SE cs.CR cs.LG
Attack HIGH
Alex Irpan, Alexander Matt Turner, Mark Kurzeja +2 more
An LLM's factuality and refusal training can be compromised by simple changes to a prompt. Models often adopt user beliefs (sycophancy) or satisfy...
4 months ago cs.LG cs.AI
Tool HIGH
Seif Ikbarieh, Maanak Gupta, Elmahedi Mahalal
The Internet of Things has expanded rapidly, transforming communication and operations across industries but also increasing the attack surface and...
4 months ago cs.CR cs.AI
Attack HIGH
David Schmotz, Sahar Abdelnabi, Maksym Andriushchenko
Enabling continual learning in LLMs remains a key unresolved research challenge. In a recent announcement, a frontier LLM company made a step towards...
Benchmark HIGH
Kaiwen Zhou, Ahmed Elgohary, A S M Iftekhar +1 more
The ability of LLM agents to plan and invoke tools exposes them to new safety risks, making a comprehensive red-teaming system crucial for...
4 months ago cs.CR cs.AI cs.CL
Attack HIGH
Zirui Cheng, Jikai Sun, Anjun Gao +4 more
Large language models (LLMs) have transformed natural language processing (NLP), enabling applications from content generation to decision support....
4 months ago cs.CR cs.IR cs.LG
Attack HIGH
Ziyao Cui, Minxing Zhang, Jian Pei
Privacy concerns have become increasingly critical in modern AI and data science applications, where sensitive information is collected, analyzed,...
4 months ago cs.CR cs.LG
Attack HIGH
Yufan Liu, Wanqian Zhang, Huashan Chen +4 more
Despite rapid advancements in text-to-image (T2I) models, their safety mechanisms are vulnerable to adversarial prompts, which maliciously generate...
Attack HIGH
Yuchong Xie, Zesen Liu, Mingyu Luo +7 more
Modern coding agents integrated into IDEs orchestrate powerful tools and high-privilege system access, creating a high-stakes attack surface. Prior...
5 months ago cs.CR cs.AI
Attack HIGH
Zesen Liu, Zhixiang Zhang, Yuchong Xie +1 more
LLM-powered agents often use prompt compression to reduce inference costs, but this introduces a new security risk. Compression modules, which are...
5 months ago cs.CR cs.AI