Attack HIGH
Yaxin Xiao, Qingqing Ye, Zi Liang +4 more
Machine learning models constitute valuable intellectual property, yet remain vulnerable to model extraction attacks (MEA), where adversaries...
4 months ago cs.CR cs.CV cs.LG
PDF
Attack HIGH
Xingyu Li, Xiaolei Liu, Cheng Liu +4 more
As large language models (LLMs) scale, their inference incurs substantial computational cost, exposing them to energy-latency attacks, where...
4 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Hui Lu, Yi Yu, Song Xia +5 more
Large-scale Video Foundation Models (VFMs) have significantly advanced various video-related tasks, either through task-specific models or Multi-modal...
4 months ago cs.CV cs.CR
PDF
Attack HIGH
Reem Al-Saidi, Erman Ayday, Ziad Kobti
This study investigates embedding reconstruction attacks in large language models (LLMs) applied to genomic sequences, with a specific focus on how...
Attack HIGH
Alina Fastowski, Bardh Prenkaj, Yuxiao Li +1 more
LLMs are now an integral part of information retrieval. As such, their role as question-answering chatbots raises significant concerns due to their...
4 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Yigitcan Kaya, Anton Landerer, Stijn Pletinckx +3 more
Prompt injection attacks pose a critical threat to large language models (LLMs), with prior work focusing on cutting-edge LLM applications like...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Janet Jenq, Hongda Shen
Multimodal product retrieval systems in e-commerce platforms rely on effectively combining visual and textual signals to improve search relevance and...
Attack HIGH
Mohammad Karami, Mohammad Reza Nemati, Aidin Kazemi +3 more
Artificial intelligence (AI) has shown great potential in medical imaging, particularly for brain tumor detection using Magnetic Resonance Imaging...
4 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Hongwei Yao, Yun Xia, Shuo Shao +3 more
Large language models (LLMs) increasingly employ guardrails to enforce ethical, legal, and application-specific constraints on their outputs. While...
4 months ago cs.CR cs.CL
PDF
Attack HIGH
Geoff McDonald, Jonathan Bar Or
Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications,...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Yize Liu, Yunyun Hou, Aina Sui
Large Language Models (LLMs) have been widely deployed across various applications, yet their potential security and ethical risks have raised...
4 months ago cs.CR cs.CL
PDF
Attack HIGH
Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan +1 more
Open-weight models provide researchers and developers with accessible foundations for diverse downstream applications. We tested the safety and...
4 months ago cs.CR cs.LG
PDF
Attack HIGH
Rishi Rajesh Shah, Chen Henry Wu, Shashwat Saxena +3 more
Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like...
4 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Chloe Loughridge, Paul Colognese, Avery Griffin +3 more
As AI deployments become more complex and high-stakes, it becomes increasingly important to be able to estimate their risk. AI control is one...
Attack HIGH
Aashray Reddy, Andrew Zagula, Nicholas Saban
Large Language Models (LLMs) remain vulnerable to jailbreaking attacks where adversarial prompts elicit harmful outputs. Yet most evaluations focus...
4 months ago cs.CL cs.AI cs.CR
PDF
Attack HIGH
Chen-Wei Chang, Shailik Sarkar, Hossein Salemi +7 more
Scam detection remains a critical challenge in cybersecurity as adversaries craft messages that evade automated filters. We propose a Hierarchical...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Daniyal Ganiuly, Assel Smaiyl
Large Language Models (LLMs) are increasingly used in intelligent systems that perform reasoning, summarization, and code generation. Their ability...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Hamin Koo, Minseon Kim, Jaehyung Kim
Identifying the vulnerabilities of large language models (LLMs) is crucial for improving their safety by addressing inherent weaknesses. Jailbreaks,...
Attack HIGH
Xin Liu, Aoyang Zhou
Visual-Language Pre-training (VLP) models have achieved significant performance across various downstream tasks. However, they remain vulnerable to...
4 months ago cs.CV cs.AI
PDF
Attack HIGH
Berk Atil, Rebecca J. Passonneau, Fred Morstatter
Large language models (LLMs) undergo safety alignment after training and tuning, yet recent work shows that safety can be bypassed through jailbreak...