Benchmark MEDIUM
Yuyang Xia, Ruixuan Liu, Li Xiong
Large language models (LLMs) perform in-context learning (ICL) by adapting to tasks from prompt demonstrations, which in practice often contain...
Longfei Chen, Ruibin Yan, Taiyu Wong +2 more
Smart contracts are prone to vulnerabilities and are analyzed by experts as well as automated systems, such as static analysis and AI-assisted...
4 months ago cs.SE cs.CR
Minjie Wang, Jinguang Han, Weizhi Meng
In federated learning, multiple parties can cooperate to train the model without directly exchanging their own private data, but the gradient leakage...
4 months ago cs.CR cs.AI
Shanmin Wang, Dongdong Zhao
Knowledge Distillation (KD) is essential for compressing large models, yet relying on pre-trained "teacher" models downloaded from third-party...
4 months ago cs.CR cs.AI cs.CV
Yanbo Dai, Zongjie Li, Zhenlan Ji +1 more
Large language models (LLMs) have achieved remarkable success across a wide range of natural language processing tasks, demonstrating human-level...
Zichao Wei, Jun Zeng, Ming Wen +8 more
Software vulnerabilities are increasing at an alarming rate. However, manual patching is both time-consuming and resource-intensive, while existing...
4 months ago cs.CR cs.SE
Feilong Wang, Fuqiang Liu
The integration of large language models (LLMs) into automated driving systems has opened new possibilities for reasoning and decision-making by...
4 months ago cs.LG cs.AI cs.CR
Guangke Chen, Yuhui Wang, Shouling Ji +2 more
Modern text-to-speech (TTS) systems, particularly those built on Large Audio-Language Models (LALMs), generate high-fidelity speech that faithfully...
4 months ago cs.SD cs.AI cs.CR
Fred Heiding, Simon Lermen
We present an end-to-end demonstration of how attackers can exploit AI safety failures to harm vulnerable populations: from jailbreaking LLMs to...
4 months ago cs.CR cs.AI cs.CY
Catherine Xia, Manar H. Alalfi
AI programming assistants have demonstrated a tendency to generate code containing basic security vulnerabilities. While developers are ultimately...
4 months ago cs.CR cs.AI
Zexu Wang, Jiachi Chen, Zewei Lin +7 more
Smart contracts have significantly advanced blockchain technology, and digital signatures are crucial for reliable verification of contract...
4 months ago cs.CR cs.SE
Yunfei Yang, Xiaojun Chen, Yuexin Xuan +3 more
Model watermarking techniques can embed watermark information into the protected model for ownership declaration by constructing specific...
4 months ago cs.CR cs.LG
Kazuki Iwahana, Yusuke Yamasaki, Akira Ito +2 more
Backdoor attacks pose a critical threat to machine learning models, causing them to behave normally on clean data but misclassify poisoned data into...
4 months ago cs.LG cs.CR
Junxiao Han, Zheng Yu, Lingfeng Bao +5 more
The widespread adoption of open-source software (OSS) has accelerated software innovation but also increased security risks due to the rapid...
4 months ago cs.CR cs.SE
Binyan Xu, Fan Yang, Di Tang +2 more
Clean-image backdoor attacks, which use only label manipulation in training datasets to compromise deep neural networks, pose a significant threat to...
4 months ago cs.CV cs.CR cs.LG
Marcin Podhajski, Jan Dubiński, Franziska Boenisch +3 more
Current graph neural network (GNN) model-stealing methods rely heavily on queries to the victim model, assuming no hard query limits. However, in...
4 months ago cs.LG cs.CR
Yilin Jiang, Mingzi Zhang, Xuanyu Yin +5 more
Large Language Models for Simulating Professions (SP-LLMs), particularly as teachers, are pivotal for personalized education. However, ensuring their...
Nicy Scaria, Silvester John Joseph Kennedy, Deepak Subramani
Small Language Models (SLMs) are increasingly being deployed in resource-constrained environments, yet their behavioral robustness to data...
4 months ago cs.CL cs.AI
Dachuan Lin, Guobin Shen, Zihao Yang +3 more
Safety evaluation of large language models (LLMs) increasingly relies on LLM-as-a-judge pipelines, but strong judges can still be expensive to use at...
4 months ago cs.AI cs.CR
Amr Gomaa, Ahmed Salem, Sahar Abdelnabi
As language models evolve into autonomous agents that act and communicate on behalf of users, ensuring safety in multi-agent ecosystems becomes a...
4 months ago cs.CR cs.CL cs.CY