Attack MEDIUM
Jiaxiang Liu, Jiawei Du, Xiao Liu +2 more
Pre-trained vision-language models (VLMs) such as CLIP have demonstrated strong zero-shot capabilities across diverse domains, yet remain highly...
Attack MEDIUM
Devon A. Kelly, Christiana Chamon
Wide-bandgap (WBG) technologies offer unprecedented improvements in power system efficiency, size, and performance, but also introduce unique sensor...
5 months ago cs.CR cs.LG eess.SY
PDF
Attack MEDIUM
Sarah Ball, Niki Hasrati, Alexander Robey +4 more
Discrete optimization-based jailbreaking attacks on large language models aim to generate short, nonsensical suffixes that, when appended onto input...
5 months ago cs.CL cs.AI
PDF
Attack MEDIUM
Soham Hans, Stacy Marsella, Sophia Hirschmann +1 more
Understanding adversarial behavior in cybersecurity has traditionally relied on high-level intelligence reports and manual interpretation of attack...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Austin Jia, Avaneesh Ramesh, Zain Shamsi +2 more
Retrieval-Augmented Generation (RAG) has emerged as the dominant architectural pattern to operationalize Large Language Model (LLM) usage in Cyber...
5 months ago cs.CR cs.AI cs.IR
PDF
Attack MEDIUM
Daniel Gilkarov, Ran Dubin
Pretrained deep learning model sharing holds tremendous value for researchers and enterprises alike. It allows them to apply deep learning by...
Attack MEDIUM
Tushar Nayan, Ziqi Zhang, Ruimin Sun
With the increasing deployment of Large Language Models (LLMs) on mobile and edge platforms, securing them against model extraction attacks has...
5 months ago cs.CR cs.LG cs.SE
PDF
Attack MEDIUM
Petar Radanliev
Problem Space: AI Vulnerabilities and Quantum Threats Generative AI vulnerabilities: model inversion, data poisoning, adversarial inputs. Quantum...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Yushi Yang, Shreyansh Padarha, Andrew Lee +1 more
Agentic reinforcement learning (RL) trains large language models to autonomously call tools during reasoning, with search as the most common...
Attack MEDIUM
Elias Hossain, Swayamjit Saha, Somshubhra Roy +1 more
Even when prompts and parameters are secured, transformer language models remain vulnerable because their key-value (KV) cache during inference...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Jie Zhang, Meng Ding, Yang Liu +2 more
We present a novel approach for attacking black-box large language models (LLMs) by exploiting their ability to express confidence in natural...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Asmita Mohanty, Gezheng Kang, Lei Gao +1 more
Large Language Models (LLMs) have demonstrated strong performance across diverse tasks, but fine-tuning them typically relies on cloud-based,...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Sarah Egler, John Schulman, Nicholas Carlini
Large Language Model (LLM) providers expose fine-tuning APIs that let end users fine-tune their frontier LLMs. Unfortunately, it has been shown that...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Andrew Zhao, Reshmi Ghosh, Vitor Carvalho +4 more
Large language model (LLM) systems increasingly power everyday AI applications such as chatbots, computer-use assistants, and autonomous robots,...
5 months ago cs.LG cs.AI cs.CL
PDF
Attack MEDIUM
Fanchao Meng, Jiaping Gui, Yunbo Li +1 more
Modern Network Intrusion Detection Systems generate vast volumes of low-level alerts, yet these outputs remain semantically fragmented, requiring...
Attack MEDIUM
Jianzhu Yao, Hongxu Su, Taobo Liao +4 more
Neural networks increasingly run on hardware outside the user's control (cloud GPUs, inference marketplaces). Yet ML-as-a-Service reveals little...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Daniel Pulido-Cortázar, Daniel Gibert, Felip Manyà
Over the last decade, machine learning has been extensively applied to identify malicious Android applications. However, such approaches remain...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Deeksha Hareesha Kulal, Chidozie Princewill Arannonu, Afsah Anwar +2 more
Phishing remains a critical cybersecurity threat, especially with the advent of large language models (LLMs) capable of generating highly convincing...
Attack MEDIUM
Sean Oesch, Jack Hutchins, Luke Koch +1 more
In living off the land attacks, malicious actors use legitimate tools and processes already present on a system to avoid detection. In this paper, we...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Rui Xu, Jiawei Chen, Zhaoxia Yin +2 more
The widespread use of large language models (LLMs) and open-source code has raised ethical and security concerns regarding the distribution and...
5 months ago cs.CR cs.AI cs.LG
PDF