Attack MEDIUM
Wenhui Zhang, Huiyu Xu, Zhibo Wang +4 more
Recent advancements in multi-model AI systems have leveraged LLM routers to reduce computational cost while maintaining response quality by assigning...
Benchmark MEDIUM
Devanshu Sahoo, Manish Prasad, Vasudev Majhi +5 more
The rapid integration of Large Language Models (LLMs) into educational assessment rests on the unverified assumption that instruction following...
1 month ago cs.CL cs.AI cs.ET
PDF
Tool MEDIUM
Xiang Zheng, Yutao Wu, Hanxun Huang +5 more
Autonomous code agents built on large language models are reshaping software and AI development through tool use, long-horizon reasoning, and...
Attack MEDIUM
Alvi Md Ishmam, Najibul Haque Sarker, Zaber Ibn Abdul Hakim +1 more
Multimodal Large Language Models (MLLMs) have achieved remarkable performance across vision-language tasks. Recent advancements allow these models to...
Attack MEDIUM
Arther Tian, Alex Ding, Frank Chen +2 more
Decentralized large language model inference networks require lightweight mechanisms to reward high quality outputs under heterogeneous latency and...
1 month ago cs.CR cs.AI
PDF
Attack MEDIUM
Jarrod Barnes
As large language models (LLMs) improve, so do their offensive applications: frontier agents now generate working exploits for under $50 in compute...
Attack MEDIUM
Onkar Shelar, Travis Desell
Evolutionary prompt search is a practical black-box approach for red teaming large language models (LLMs), but existing methods often collapse onto a...
2 months ago cs.NE q-bio.PE
PDF
Benchmark LOW
Mingqiao Mo, Yunlong Tan, Hao Zhang +2 more
Large language models (LLMs) have achieved remarkable progress in code generation, yet their potential for software protection remains largely...
Attack HIGH
Xingwei Lin, Wenhao Lin, Sicong Cao +4 more
Multi-turn jailbreak attacks have emerged as a critical threat to Large Language Models (LLMs), bypassing safety mechanisms by progressively...
2 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Yizhong Ding
Webshells remain a primary foothold for attackers to compromise servers, particularly within PHP ecosystems. However, existing detection mechanisms...
2 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Holly Trikilis, Pasindu Marasinghe, Fariza Rashid +1 more
Phishing continues to be one of the most prevalent attack vectors, making accurate classification of phishing URLs essential. Recently, large...
2 months ago cs.CR cs.AI
PDF
Survey MEDIUM
Mohsen Hatami, Van Tuan Pham, Hozefa Lakadawala +1 more
The increasing integration of AI agents into cyber-physical systems (CPS) introduces new security risks that extend beyond traditional cyber or...
2 months ago cs.CR cs.DC
PDF
Attack HIGH
Yuetian Chen, Kaiyuan Zhang, Yuntao Du +5 more
Diffusion Language Models (DLMs) represent a promising alternative to autoregressive language models, using bidirectional masked token prediction....
2 months ago cs.LG cs.AI
PDF
Benchmark LOW
Faezeh Hosseini, Mohammadali Yousefzadeh, Yadollah Yaghoobzadeh
Figurative language, particularly fixed figurative expressions (FFEs) such as idioms and proverbs, poses persistent challenges for large language...
Attack HIGH
Md Tasnim Jawad, Mingyan Xiao, Yanzhao Wu
With the widespread adoption of Large Language Models (LLMs) and increasingly stringent privacy regulations, protecting data privacy in LLMs has...
Defense LOW
Pragatheeswaran Vipulanandan, Kamal Premaratne, Dilip Sarkar
Large language models (LLMs) exhibit strong generative capabilities but remain vulnerable to confabulations, fluent yet unreliable outputs that vary...
Benchmark MEDIUM
Bharath Krishnamurthy, Ajita Rattani
Morphing techniques generate artificial biometric samples that combine features from multiple individuals, allowing each contributor to be verified...
2 months ago cs.SD cs.CR cs.LG
PDF
Benchmark MEDIUM
Nourin Shahin, Izzat Alsmadi
As large language models (LLMs) move from research prototypes to enterprise systems, their security vulnerabilities pose serious risks to data...
2 months ago cs.CR cs.LG
PDF
Tool MEDIUM
Lige Huang, Zicheng Liu, Jie Zhang +3 more
The dual offensive and defensive utility of Large Language Models (LLMs) highlights a critical gap in AI security: the lack of unified frameworks for...
2 months ago cs.CR cs.AI cs.CL
PDF
Benchmark MEDIUM
Xiangyang Zhu, Yuan Tian, Zicheng Zhang +6 more
Large vision-language models (LVLMs) exhibit remarkable capabilities in cross-modal tasks but face significant safety challenges, which undermine...