Survey MEDIUM
James Jin Kang, Dang Bui, Thanh Pham +1 more
The growing use of large language models in sensitive domains has exposed a critical weakness: the inability to ensure that private information can...
Survey MEDIUM
Gabrielle M Gauthier, Eesha Ali, Amna Asim +2 more
Human content moderators (CMs) routinely review distressing digital content at scale. Beyond exposure, the work context (e.g., workload, team...
Defense MEDIUM
Daniyal Ganiuly, Nurzhau Bolatbek
The increasing virtualization of fifth generation (5G) networks expands the attack surface of the user plane, making spoofing a persistent threat to...
4 months ago cs.CR cs.NI
Benchmark MEDIUM
Zexu Wang, Jiachi Chen, Zewei Lin +7 more
Smart contracts have significantly advanced blockchain technology, and digital signatures are crucial for reliable verification of contract...
4 months ago cs.CR cs.SE
Benchmark MEDIUM
Yunfei Yang, Xiaojun Chen, Yuexin Xuan +3 more
Model watermarking techniques can embed watermark information into the protected model for ownership declaration by constructing specific...
4 months ago cs.CR cs.LG
Benchmark MEDIUM
Kazuki Iwahana, Yusuke Yamasaki, Akira Ito +2 more
Backdoor attacks pose a critical threat to machine learning models, causing them to behave normally on clean data but misclassify poisoned data into...
4 months ago cs.LG cs.CR
Attack MEDIUM
Zixun Xiong, Gaoyi Wu, Qingyang Yu +5 more
Given the high cost of large language model (LLM) training from scratch, safeguarding LLM intellectual property (IP) has become increasingly crucial....
4 months ago cs.CR cs.AI
Attack MEDIUM
Giorgio Piras, Raffaele Mura, Fabio Brau +3 more
Refusal refers to the functional behavior enabling safety-aligned language models to reject harmful or unethical prompts. Following the growing...
4 months ago cs.AI cs.LG
Benchmark MEDIUM
Junxiao Han, Zheng Yu, Lingfeng Bao +5 more
The widespread adoption of open-source software (OSS) has accelerated software innovation but also increased security risks due to the rapid...
4 months ago cs.CR cs.SE
Defense MEDIUM
Binayak Kara, Ujjwal Sahua, Ciza Thomas +1 more
Securing Dew-Enabled Edge-of-Things (EoT) networks against sophisticated intrusions is a critical challenge. This paper presents HybridGuard, a...
4 months ago cs.CR cs.AI cs.LG
Defense MEDIUM
Tyler Slater
Context: The integration of Large Language Models (LLMs) into core software systems is accelerating. However, existing software architecture patterns...
4 months ago cs.SE cs.AI cs.CR
Benchmark MEDIUM
Binyan Xu, Fan Yang, Di Tang +2 more
Clean-image backdoor attacks, which use only label manipulation in training datasets to compromise deep neural networks, pose a significant threat to...
4 months ago cs.CV cs.CR cs.LG
Attack MEDIUM
Hanlin Cai, Houtianfu Wang, Haofan Dong +3 more
Internet of Agents (IoA) envisions a unified, agent-centric paradigm where heterogeneous large language model (LLM) agents can interconnect and...
4 months ago cs.NI cs.CL
Benchmark MEDIUM
Marcin Podhajski, Jan Dubiński, Franziska Boenisch +3 more
Current graph neural network (GNN) model-stealing methods rely heavily on queries to the victim model, assuming no hard query limits. However, in...
4 months ago cs.LG cs.CR
Tool MEDIUM
Liang Shan, Kaicheng Shen, Wen Wu +7 more
Ensuring the safety of Large Language Models (LLMs) is critical for real-world deployment. However, current safety measures often fail to address...
4 months ago cs.AI cs.CL
Attack MEDIUM
Zhisheng Zhang, Derui Wang, Yifan Mi +6 more
Recent advancements in speech synthesis technology have enriched our daily lives, with high-quality and human-like audio widely adopted across...
4 months ago cs.SD cs.AI cs.CR
Attack MEDIUM
Yuanheng Li, Zhuoyang Chen, Xiaoyun Liu +5 more
As large language models (LLMs) become increasingly capable, concerns over the unauthorized use of copyrighted and licensed content in their training...
Benchmark MEDIUM
Yilin Jiang, Mingzi Zhang, Xuanyu Yin +5 more
Large Language Models for Simulating Professions (SP-LLMs), particularly as teachers, are pivotal for personalized education. However, ensuring their...
Tool MEDIUM
Peng Zhang, Peijie Sun
Safety alignment instills in Large Language Models (LLMs) a critical capacity to refuse malicious requests. Prior works have modeled this refusal...
4 months ago cs.CR cs.AI cs.LG
Benchmark MEDIUM
Nicy Scaria, Silvester John Joseph Kennedy, Deepak Subramani
Small Language Models (SLMs) are increasingly being deployed in resource-constrained environments, yet their behavioral robustness to data...
4 months ago cs.CL cs.AI