Attack MEDIUM
Botao 'Amber' Hu, Helena Rong
As the "agentic web" takes shape (billions of AI agents, often LLM-powered, autonomously transacting and collaborating), trust shifts from human...
4 months ago cs.HC cs.AI cs.MA
Attack HIGH
Yize Liu, Yunyun Hou, Aina Sui
Large Language Models (LLMs) have been widely deployed across various applications, yet their potential security and ethical risks have raised...
4 months ago cs.CR cs.CL
Attack HIGH
Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan +1 more
Open-weight models provide researchers and developers with accessible foundations for diverse downstream applications. We tested the safety and...
4 months ago cs.CR cs.LG
Attack HIGH
Rishi Rajesh Shah, Chen Henry Wu, Shashwat Saxena +3 more
Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like...
4 months ago cs.CR cs.AI cs.CL
Attack HIGH
Chloe Loughridge, Paul Colognese, Avery Griffin +3 more
As AI deployments become more complex and high-stakes, it becomes increasingly important to be able to estimate their risk. AI control is one...
Attack MEDIUM
W. K. M Mithsara, Ning Yang, Ahmed Imteaj +2 more
The widespread integration of wearable sensing devices in Internet of Things (IoT) ecosystems, particularly in healthcare, smart homes, and...
4 months ago cs.LG cs.CR
Attack MEDIUM
Roy Rinberg, Adam Karvonen, Alexander Hoover +2 more
As large AI models become increasingly valuable assets, the risk of model weight exfiltration from inference servers grows accordingly. An attacker...
4 months ago cs.CR cs.LG
Attack HIGH
Aashray Reddy, Andrew Zagula, Nicholas Saban
Large Language Models (LLMs) remain vulnerable to jailbreaking attacks where adversarial prompts elicit harmful outputs. Yet most evaluations focus...
4 months ago cs.CL cs.AI cs.CR
Attack HIGH
Chen-Wei Chang, Shailik Sarkar, Hossein Salemi +7 more
Scam detection remains a critical challenge in cybersecurity as adversaries craft messages that evade automated filters. We propose a Hierarchical...
4 months ago cs.CR cs.AI
Attack HIGH
Daniyal Ganiuly, Assel Smaiyl
Large Language Models (LLMs) are increasingly used in intelligent systems that perform reasoning, summarization, and code generation. Their ability...
4 months ago cs.CR cs.AI
Attack HIGH
Hamin Koo, Minseon Kim, Jaehyung Kim
Identifying the vulnerabilities of large language models (LLMs) is crucial for improving their safety by addressing inherent weaknesses. Jailbreaks,...
Attack HIGH
Xin Liu, Aoyang Zhou
Visual-Language Pre-training (VLP) models have achieved significant performance across various downstream tasks. However, they remain vulnerable to...
4 months ago cs.CV cs.AI
Attack HIGH
Berk Atil, Rebecca J. Passonneau, Fred Morstatter
Large language models (LLMs) undergo safety alignment after training and tuning, yet recent work shows that safety can be bypassed through jailbreak...
Attack MEDIUM
Kasimir Schulz, Amelia Kawasaki, Leo Ring
Large language models (LLMs) are widely deployed across various applications, often with safeguards to prevent the generation of harmful or...
4 months ago cs.CR cs.AI
Attack HIGH
Peng Ding, Jun Kuang, Wen Sun +5 more
Large language models (LLMs) remain vulnerable to jailbreaking attacks despite their impressive capabilities. Investigating these weaknesses is...
Attack HIGH
Phil Blandfort, Robert Graham
Activation probes are attractive monitors for AI systems due to low cost and latency, but their real-world robustness remains underexplored. We ask:...
4 months ago cs.LG cs.AI
Attack HIGH
Ruofan Liu, Yun Lin, Zhiyong Huang +1 more
Large language models (LLMs) are increasingly integrated into IT infrastructures, where they process user data according to predefined instructions....
4 months ago cs.CR cs.AI
Attack HIGH
Xin Yao, Haiyang Zhao, Yimin Chen +3 more
The Contrastive Language-Image Pretraining (CLIP) model has significantly advanced vision-language modeling by aligning image-text pairs from...
4 months ago cs.CV cs.CR cs.LG
Attack HIGH
Kayua Oleques Paim, Rodrigo Brandao Mansilha, Diego Kreutz +2 more
The rapid proliferation of Large Language Models (LLMs) has raised significant concerns about their security against adversarial attacks. In this...
4 months ago cs.CR cs.AI cs.LG
Attack MEDIUM
David Lüdke, Tom Wollschläger, Paul Ungermann +2 more
We introduce a novel framework that transforms the resource-intensive (adversarial) prompt optimization problem into an efficient, amortized...
4 months ago cs.LG stat.ML