Attack HIGH
Jiayao Wang, Yiping Zhang, Mohammad Maruf Hasan +5 more
Self-supervised diffusion models learn high-quality visual representations via latent space denoising. However, their representation layer poses a...
3 weeks ago cs.CR cs.LG
PDF
Attack HIGH
Oluseyi Olukola, Nick Rahimi
Machine learning based network intrusion detection systems are vulnerable to adversarial attacks that degrade classification performance under both...
3 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Hsin Lin, Yan-Lun Chen, Ren-Hung Hwang +1 more
Backdoor attacks pose a critical threat to the security of deep neural networks, yet existing efforts on universal backdoors often rely on visually...
3 weeks ago cs.CR cs.CV cs.LG
PDF
Attack HIGH
Yilian Liu, Xiaojun Jia, Guoshun Nan +6 more
Multimodal Large Language Models (MLLMs) have achieved remarkable performance but remain vulnerable to jailbreak attacks that can induce harmful...
3 weeks ago cs.CV cs.AI cs.CR
PDF
Attack HIGH
Swapnil Parekh
Image captioning models are encoder-decoder architectures trained on large-scale image-text datasets, making them susceptible to adversarial attacks....
3 weeks ago cs.CV cs.AI
PDF
Attack HIGH
Linxi Jiang, Zhijie Liu, Haotian Luo +1 more
Browser-use agents are widely used for everyday tasks. They enable automated interaction with web pages through structured DOM based interfaces or...
3 weeks ago cs.CR cs.AI
PDF
Benchmark HIGH
Zhicheng Fang, Jingjie Zheng, Chenxu Fu +1 more
Jailbreak techniques for large language models (LLMs) evolve faster than benchmarks, making robustness estimates stale and difficult to compare...
3 weeks ago cs.CR cs.AI cs.CL
PDF
Benchmark HIGH
Xuhui Dou, Hayretdin Bahsi, Alejandro Guerra-Manzanares
Recent work applies Large Language Models (LLMs) to source-code vulnerability detection, but most evaluations still rely on random train-test splits...
3 weeks ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Kennedy Edemacu, Mohammad Mahdi Shokri
Retrieval-augmented generation (RAG) has emerged as a powerful paradigm for enhancing multimodal large language models by grounding their responses...
3 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Xun Huang, Simeng Qin, Xiaoshuang Jia +6 more
As Large Language Models (LLMs) are increasingly used, their security risks have drawn increasing attention. Existing research reveals that LLMs are...
3 weeks ago cs.AI cs.CR
PDF
Attack HIGH
Tian Zhang, Yiwei Xu, Juan Wang +8 more
Large language model (LLM) agents increasingly rely on external tools and retrieval systems to autonomously complete complex tasks. However, this...
3 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Marcus Graves
We introduce Reverse CAPTCHA, an evaluation framework that tests whether large language models follow invisible Unicode-encoded instructions embedded...
3 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Zhonghao Zhan, Krinos Li, Yefan Zhang +1 more
Edge deployment of LLM agents on IoT hardware introduces attack surfaces absent from cloud-hosted orchestration. We present an empirical security...
Attack HIGH
Qianlong Lan, Anuj Kaul, Shaun Jones +1 more
Agentic large language model systems increasingly automate tasks by retrieving URLs and calling external tools. We show that this workflow gives rise...
4 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Zheng Gao, Xiaoyu Li, Zhicheng Bao +2 more
Generative images have proliferated on Web platforms in social media and online copyright distribution scenarios, and semantic watermarking has...
4 weeks ago cs.LG cs.CR cs.CV
PDF
Tool HIGH
Xinfeng Li, Shenyu Dai, Kelong Zheng +4 more
Large language model (LLM) agents are rapidly becoming trusted copilots in high-stakes domains like software development and healthcare. However,...
4 weeks ago cs.HC cs.AI cs.CR
PDF
Attack HIGH
Piyush Jaiswal, Aaditya Pratap, Shreyansh Saraswati +2 more
Large Language Models (LLMs) are widely deployed in real-world systems. Given their broader applicability, prompt engineering has become an efficient...
4 weeks ago cs.CR cs.AI
PDF
Survey HIGH
Shruti Srivastava, Kiranmayee Janardhan, Shaurya Jauhari
Cybersecurity threats are becoming increasingly sophisticated, making traditional defense mechanisms and manual red teaming approaches insufficient...
4 weeks ago cs.CR cs.AI
PDF
Tool HIGH
Che Wang, Jiaming Zhang, Ziqi Zhang +6 more
The integration of external data services (e.g., Model Context Protocol, MCP) has made large language model-based agents increasingly powerful for...
4 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Che Wang, Fuyao Zhang, Jiaming Zhang +6 more
Large Language Model (LLM) agents are susceptible to Indirect Prompt Injection (IPI) attacks, where malicious instructions in retrieved content...
4 weeks ago cs.AI cs.CR
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial