Defense MEDIUM
Enrico Ahlers, Daniel Passon, Yannic Noller +1 more
Machine learning models are increasingly present in our everyday lives; as a result, they become targets of adversarial attackers seeking to...
1 month ago cs.LG cs.AI cs.CR
PDF
Benchmark MEDIUM
Yuxin Cao, Wei Song, Shangzhi Xu +2 more
Video Large Language Models (VideoLLMs) have recently achieved strong performance in video understanding tasks. However, we identify a previously...
1 month ago cs.CV cs.CR cs.MM
PDF
Defense MEDIUM
Zijing Xu, Ziwei Ning, Tiancheng Hu +4 more
The rapid evolution of cyber threats has highlighted significant gaps in security knowledge integration. Cybersecurity Knowledge Graphs (CKGs)...
Survey MEDIUM
Viet Hoang Luu, Amirmohammad Pasdar, Wachiraphan Charoenwet +3 more
Modern fuzzers scale to large, real-world software but often fail to exercise the program states developers consider most fragile or...
1 month ago cs.CR cs.SE
PDF
Benchmark MEDIUM
Mohan Rajagopalan, Vinay Rao
Large Language Model (LLM) applications are vulnerable to prompt injection and context manipulation attacks that traditional security models cannot...
1 month ago cs.CR cs.AI cs.MA
PDF
Survey MEDIUM
Ashwath Vaithinathan Aravindan, Mayank Kejriwal
Chain-of-Thought (CoT) prompting has emerged as a foundational technique for eliciting reasoning from Large Language Models (LLMs), yet the...
1 month ago cs.CL cs.AI cs.LG
PDF
Defense MEDIUM
Weichen Yu, Ravi Mangal, Yinyi Luo +4 more
Large Language Models are rapidly becoming core components of modern software development workflows, yet ensuring code security remains challenging....
1 month ago cs.CR cs.SE
PDF
Defense MEDIUM
Kun Wang, Zherui Li, Zhenhong Zhou +8 more
Omni-modal Large Language Models (OLLMs) greatly expand LLMs' multimodal capabilities but also introduce cross-modal safety risks. However, a...
1 month ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Zhenyu Xu, Victor S. Sheng
Protecting the intellectual property of large language models (LLMs) is a critical challenge due to the proliferation of unauthorized derivative...
1 month ago cs.CR cs.AI
PDF
Tool MEDIUM
Herman Errico
As artificial intelligence systems evolve from passive assistants into autonomous agents capable of executing consequential actions, the security...
1 month ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Yuting Ning, Jaylen Jones, Zhehao Zhang +5 more
Computer-use agents (CUAs) have made tremendous progress in the past year, yet they still frequently produce misaligned actions that deviate from the...
Defense MEDIUM
Oliver Daniels, Perusha Moodley, Benjamin M. Marlin +1 more
Alignment audits aim to robustly identify hidden goals from strategic, situationally aware misaligned models. Despite this threat model, existing...
Defense MEDIUM
Yu Fu, Haz Sameen Shahgir, Huanli Gong +3 more
Large language models (LLMs) increasingly combine long-context processing with advanced reasoning, enabling them to retrieve and synthesize...
1 month ago cs.CL cs.CR
PDF
Defense MEDIUM
Yukun Jiang, Hai Huang, Mingjie Li +3 more
By introducing routers to selectively activate experts in Transformer layers, the mixture-of-experts (MoE) architecture significantly reduces...
1 month ago cs.LG cs.AI cs.CR
PDF
Benchmark MEDIUM
Igor Santos-Grueiro
Safety evaluation for advanced AI systems assumes that behavior observed under evaluation predicts behavior in deployment. This assumption weakens...
1 month ago cs.AI cs.CR cs.LG
PDF
Benchmark MEDIUM
Pouria Arefijamal, Mahdi Ahmadlou, Bardia Safaei +1 more
Federated learning (FL) is a decentralized learning paradigm widely adopted in resource-constrained Internet of Things (IoT) environments. These...
1 month ago cs.LG cs.CR cs.DC
PDF
Attack MEDIUM
Benjamin Livshits
We argue that when it comes to producing secure code with AI, the prevailing "fighting fire with fire" approach -- using probabilistic AI-based...
1 month ago cs.CR cs.AI cs.SE
PDF
Benchmark MEDIUM
Liwen Wang, Zongjie Li, Yuchong Xie +4 more
The evolution of Large Language Models (LLMs) into agentic systems that perform autonomous reasoning and tool use has created significant...
1 month ago cs.AI cs.CR
PDF
Benchmark MEDIUM
Shadman Rabby, Md. Hefzul Hossain Papon, Sabbir Ahmed +3 more
Sycophancy in Vision-Language Models (VLMs) refers to their tendency to align with user opinions, often at the expense of moral or factual accuracy....
Defense MEDIUM
Shayan Ali Hassan, Tao Ni, Zafar Ayyub Qazi +1 more
Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language understanding, reasoning, and generation. However, these...
1 month ago cs.LG cs.CR
PDF