Attack HIGH
Pengfei He, Ash Fox, Lesly Miculicich +7 more
Large language models (LLMs) have shown promise in assisting cybersecurity tasks, yet existing approaches struggle with automatic vulnerability...
1 month ago cs.LG cs.CR
Attack HIGH
Jiayao Wang, Yang Song, Zhendong Zhao +5 more
Federated self-supervised learning (FSSL) enables collaborative training of self-supervised representation models without sharing raw unlabeled data....
Defense MEDIUM
Ali Mahdavi, Santa Aghapour, Azadeh Zamanifar +1 more
Existing Byzantine-robust aggregation mechanisms typically rely on full-dimensional gradient comparisons or pairwise distance computations, resulting...
1 month ago cs.CR cs.AI
Tool MEDIUM
Alsharif Abuadbba, Nazatul Sultan, Surya Nepal +1 more
AI is moving from domain-specific autonomy in closed, predictable settings to large-language-model-driven agents that plan and act in open,...
1 month ago cs.CR cs.AI
Defense MEDIUM
Siqi Wen, Shu Yang, Shaopeng Fu +3 more
Vision Language Action (VLA) models close the perception action loop by translating multimodal instructions into executable behaviors, but this very...
Benchmark LOW
Wenjin Hou, Wei Liu, Han Hu +3 more
Multimodal Large Language Models (MLLMs) have shown remarkable proficiency on general-purpose vision-language benchmarks, reaching or even exceeding...
Attack HIGH
Mingrui Liu, Sixiao Zhang, Cheng Long +1 more
Large Language Models (LLMs) are increasingly vulnerable to Prompt Injection (PI) attacks, where adversarial instructions hidden within retrieved...
1 month ago cs.CR cs.AI cs.LG
Survey MEDIUM
Yilin Geng, Omri Abend, Eduard Hovy +1 more
It is not only what we ask large language models (LLMs) to do that matters, but also how we prompt. Phrases like "This is urgent" or "As your...
1 month ago cs.CL cs.AI
Attack LOW
Pengyu Li, Lingling Zhang, Zhitao Gao +5 more
While Large Language Models (LLMs) have achieved remarkable capabilities, they unintentionally memorize sensitive data, posing critical privacy and...
1 month ago cs.LG cs.CL
Attack HIGH
Seyed Mohammad Hadi Hosseini, Amir Najafi, Mahdieh Soleymani Baghshah
Bandit algorithms have recently emerged as a powerful tool for evaluating machine learning models, including generative image models and large...
1 month ago cs.LG cs.AI
Benchmark MEDIUM
Yen-Shan Chen, Zhi Rui Tam, Cheng-Kuang Wu +1 more
Current evaluations of LLM safety predominantly rely on severity-based taxonomies to assess the harmfulness of malicious queries. We argue that this...
1 month ago cs.CR cs.CL cs.CY
Tool HIGH
Zehua Cheng, Jianwei Yang, Wei Dai +1 more
Large Language Models (LLMs) remain vulnerable to adaptive jailbreaks that easily bypass empirical defenses like GCG. We propose a framework for...
1 month ago cs.CL cs.AI
Attack HIGH
Haobo Wang, Weiqi Luo, Xiaojun Jia +1 more
Large vision-language models (VLMs) are vulnerable to transfer-based adversarial perturbations, enabling attackers to optimize on surrogate models...
Attack HIGH
Xiaoyu Wen, Zhida He, Han Qi +7 more
Ensuring robust safety alignment is crucial for Large Language Models (LLMs), yet existing defenses often lag behind evolving adversarial attacks due...
1 month ago cs.AI cs.CL cs.LG
Benchmark LOW
Yangfan Deng, Anirudh Nakra, Min Wu
3D content acquisition and creation are expanding rapidly in the new era of machine learning and AI. 3D Gaussian Splatting (3DGS) has become a...
1 month ago cs.CR cs.LG
Benchmark MEDIUM
Max Manolov, Tony Gao, Siddharth Shukla +2 more
Large language models (LLMs) are increasingly used to assist developers with code, yet their implementations of cryptographic functionality often...
1 month ago cs.CR cs.AI
Attack MEDIUM
Poushali Sengupta, Shashi Raj Pandey, Sabita Maharjan +1 more
Large language models (LLMs) generate outputs by utilizing extensive context, which often includes redundant information from prompts, retrieved...
1 month ago cs.CL cs.AI stat.ML
Attack MEDIUM
Eliron Rahimi, Elad Hirshel, Rom Himelstein +3 more
Diffusion language models (DLMs) have recently emerged as a promising alternative to autoregressive (AR) models, offering parallel decoding and...
1 month ago cs.LG cs.AI
Attack HIGH
Ziyue Wang, Jiangshan Yu, Kaihua Qin +3 more
Decentralized Finance (DeFi) has turned blockchains into financial infrastructure, allowing anyone to trade, lend, and build protocols without...
1 month ago cs.CR cs.AI