Defense MEDIUM
Rui Yang Tan, Yujia Hu, Roy Ka-Wei Lee
Multimodal Large Language Models (MLLMs) extend text-only LLMs with visual reasoning, but also introduce new safety failure modes under visually...
2 days ago cs.CR cs.AI cs.MM
PDF
Defense MEDIUM
Xinyue Liu, Niloofar Mireshghallah, Jane C. Ginsburg +1 more
Frontier LLM companies have repeatedly assured courts and regulators that their models do not store copies of training data. They further rely on...
3 days ago cs.CL cs.AI cs.CY
PDF
Defense LOW
Anders Giovanni Møller, Elisa Bassignana, Francesco Pierri +1 more
The ubiquity of multimedia content is reshaping online information spaces, particularly in social media environments. At the same time, search is...
5 days ago cs.CY cs.CL cs.HC
PDF
Defense MEDIUM
Shawn Li, Yue Zhao
Large language model (LLM) agents increasingly rely on external tools (file operations, API calls, database transactions) to autonomously complete...
5 days ago cs.CR cs.AI cs.LG
PDF
Defense LOW
Rohan Siva, Kai Cheung, Lichi Li +1 more
Modern machine learning systems rely on complex data engineering workflows to extract, transform, and load (ETL) data into production pipelines....
5 days ago cs.SE cs.AI cs.CL
PDF
Defense MEDIUM
Carlos Hinojosa, Clemens Grange, Bernard Ghanem
Vision-language models (VLMs) are increasingly deployed in real-world and embodied settings where safety decisions depend on visual context. However,...
6 days ago cs.CV cs.AI cs.CL
PDF
Defense LOW
Roberto Morabito, Mallik Tatipamula
The Internet has evolved by progressively expanding what humanity connects: first computers, then people, and later billions of devices through the...
1 week ago cs.NI cs.AI
PDF
Defense MEDIUM
Ce Zhang, Jinxi He, Junyi He +2 more
Multi-modal Large Language Models (MLLMs) have achieved remarkable performance across a wide range of visual reasoning tasks, yet their vulnerability...
1 week ago cs.CV cs.CL cs.CR
PDF
Defense MEDIUM
Zhenheng Tang, Xiang Liu, Qian Wang +3 more
As Large Language Models (LLMs) become more powerful and autonomous, they increasingly face conflicts and dilemmas in many scenarios. We first...
1 week ago cs.AI cs.CY
PDF
Defense MEDIUM
Vanshaj Khattar, Md Rafi ur Rashid, Moumita Choudhury +4 more
Test-time training (TTT) has recently emerged as a promising method to improve the reasoning abilities of large language models (LLMs), in which the...
1 week ago cs.LG cs.AI cs.CL
PDF
Defense MEDIUM
Yewon Han, Yumin Seol, EunGyung Kong +2 more
Existing jailbreak defence frameworks for Large Vision-Language Models often suffer from a safety-utility tradeoff, where strengthening safety...
1 week ago cs.CV cs.AI
PDF
Defense LOW
Max Hellrigel-Holderbaum, Edward James Young
As AI systems advance in capabilities, measuring their safety and alignment to human values is becoming paramount. A fast-growing field of AI...
1 week ago cs.CY cs.AI cs.CL
PDF
Defense MEDIUM
Suvadeep Hajra, Palash Nandi, Tanmoy Chakraborty
Safety tuning through supervised fine-tuning and reinforcement learning from human feedback has substantially improved the robustness of large...
Defense MEDIUM
Pengcheng Li, Jie Zhang, Tianwei Zhang +5 more
Safety alignment in large language models is typically evaluated under isolated queries, yet real-world use is inherently multi-turn. Although...
1 week ago cs.CR cs.AI
PDF
Defense MEDIUM
Matthew Butler, Yi Fan, Christos Faloutsos
The proposed method (FraudFox) provides solutions to adversarial attacks in a resource-constrained environment. We focus on questions like the...
1 week ago cs.CR cs.LG
PDF
Defense MEDIUM
Zonghao Ying, Xiao Yang, Siyang Wu +7 more
The rapid evolution of Large Language Models (LLMs) into autonomous, tool-calling agents has fundamentally altered the cybersecurity landscape....
Defense MEDIUM
Xinhao Deng, Yixiang Zhang, Jiaqing Wu +15 more
Autonomous Large Language Model (LLM) agents, exemplified by OpenClaw, demonstrate remarkable capabilities in executing complex, long-horizon tasks....
1 week ago cs.CR cs.AI
PDF
Defense LOW
Lu Niu, Cheng Xue
Vision-language models offer strong few-shot capability through prompt tuning but remain vulnerable to noisy labels, which can corrupt prompts and...
Defense MEDIUM
Zhiyu Xue, Zimo Qi, Guangliang Liu +2 more
Safety alignment aims to ensure that large language models (LLMs) refuse harmful requests by post-training on harmful queries paired with refusal...
Defense LOW
Ali Eslami, Jiangbo Yu
This paper develops a control-theoretic framework for analyzing agentic systems embedded within feedback control loops, where an AI agent may adapt...