Attack MEDIUM
Giorgio Piras, Raffaele Mura, Fabio Brau +3 more
Refusal refers to the functional behavior enabling safety-aligned language models to reject harmful or unethical prompts. Following the growing...
4 months ago cs.AI cs.LG
Attack HIGH
Yuxuan Zhou, Yuzhao Peng, Yang Bai +7 more
Large Vision-Language Models (VLMs) are susceptible to jailbreak attacks: researchers have developed a variety of attack strategies that can...
Benchmark MEDIUM
Junxiao Han, Zheng Yu, Lingfeng Bao +5 more
The widespread adoption of open-source software (OSS) has accelerated software innovation but also increased security risks due to the rapid...
4 months ago cs.CR cs.SE
Benchmark HIGH
Zhishen Sun, Guang Dai, Haishan Ye
LLMs demonstrate performance comparable to that of humans in complex tasks such as mathematical reasoning, but their robustness in mathematical...
Attack LOW
Ke Jia, Yuheng Ma, Yang Li +1 more
We revisit the problem of generating synthetic data under differential privacy. To address the core limitations of marginal-based methods, we propose...
4 months ago stat.ML cs.CR cs.LG
Attack HIGH
Yaxin Xiao, Qingqing Ye, Zi Liang +4 more
Machine learning models constitute valuable intellectual property, yet remain vulnerable to model extraction attacks (MEA), where adversaries...
4 months ago cs.CR cs.CV cs.LG
Attack HIGH
Xingyu Li, Xiaolei Liu, Cheng Liu +4 more
As large language models (LLMs) scale, their inference incurs substantial computational resources, exposing them to energy-latency attacks, where...
4 months ago cs.CR cs.AI cs.CL
Defense MEDIUM
Binayak Kara, Ujjwal Sahua, Ciza Thomas +1 more
Securing Dew-Enabled Edge-of-Things (EoT) networks against sophisticated intrusions is a critical challenge. This paper presents HybridGuard, a...
4 months ago cs.CR cs.AI cs.LG
Tool LOW
Yi Ni, Liwei Zhu, Shuai Li
Chimeric antigen receptor T-cell (CAR-T) therapy represents a paradigm shift in cancer treatment, yet development timelines of 8-12 years and...
4 months ago q-bio.QM cs.AI
Benchmark LOW
Manh Nguyen, Sunil Gupta, Hung Le
Large Language Models (LLMs) exhibit strong performance across various natural language processing (NLP) tasks but remain vulnerable to...
Defense MEDIUM
Tyler Slater
Context: The integration of Large Language Models (LLMs) into core software systems is accelerating. However, existing software architecture patterns...
4 months ago cs.SE cs.AI cs.CR
Other LOW
Pukang Ye, Junwei Luo, Xiaolei Dong +1 more
Data duplication within large-scale corpora often harms the performance and privacy of large language models (LLMs). In privacy-concerned federated...
4 months ago cs.CR cs.AI
Benchmark MEDIUM
Binyan Xu, Fan Yang, Di Tang +2 more
Clean-image backdoor attacks, which use only label manipulation in training datasets to compromise deep neural networks, pose a significant threat to...
4 months ago cs.CV cs.CR cs.LG
Attack MEDIUM
Hanlin Cai, Houtianfu Wang, Haofan Dong +3 more
Internet of Agents (IoA) envisions a unified, agent-centric paradigm where heterogeneous large language model (LLM) agents can interconnect and...
4 months ago cs.NI cs.CL
Benchmark MEDIUM
Marcin Podhajski, Jan Dubiński, Franziska Boenisch +3 more
Current graph neural network (GNN) model-stealing methods rely heavily on queries to the victim model, assuming no hard query limits. However, in...
4 months ago cs.LG cs.CR
Tool MEDIUM
Liang Shan, Kaicheng Shen, Wen Wu +7 more
Ensuring the safety of Large Language Models (LLMs) is critical for real-world deployment. However, current safety measures often fail to address...
4 months ago cs.AI cs.CL
Attack MEDIUM
Zhisheng Zhang, Derui Wang, Yifan Mi +6 more
Recent advancements in speech synthesis technology have enriched our daily lives, with high-quality and human-like audio widely adopted across...
4 months ago cs.SD cs.AI cs.CR
Attack HIGH
Hui Lu, Yi Yu, Song Xia +5 more
Large-scale Video Foundation Models (VFMs) have significantly advanced various video-related tasks, either through task-specific models or Multi-modal...
4 months ago cs.CV cs.CR
Attack MEDIUM
Yuanheng Li, Zhuoyang Chen, Xiaoyun Liu +5 more
As large language models (LLMs) become increasingly capable, concerns over the unauthorized use of copyrighted and licensed content in their training...
Benchmark MEDIUM
Yilin Jiang, Mingzi Zhang, Xuanyu Yin +5 more
Large Language Models for Simulating Professions (SP-LLMs), particularly as teachers, are pivotal for personalized education. However, ensuring their...