Attack HIGH
Shuxin Zhao, Bo Lang, Nan Xiao +1 more
Object detection models deployed in real-world applications such as autonomous driving face serious threats from backdoor attacks. Despite their...
3 months ago cs.CV cs.CR
Tool MEDIUM
Dongchao Zhou, Lingyun Ying, Huajun Chai +1 more
JavaScript's widespread adoption has made it an attractive target for malicious attackers who employ sophisticated obfuscation techniques to conceal...
3 months ago cs.CR cs.SE
Benchmark MEDIUM
Akhil Sharma, Shaikh Yaser Arafat, Jai Kumar Sharma +1 more
The increasing operational reliance on complex Multi-Agent Systems (MAS) across safety-critical domains necessitates rigorous adversarial robustness...
Survey MEDIUM
Asa Cooper Stickland, Jan Michelfeit, Arathi Mani +6 more
LLM-based software engineering agents are increasingly used in real-world development tasks, often with access to sensitive data or security-critical...
Attack HIGH
Sabrine Ennaji, Elhadj Benkhelifa, Luigi Vincenzo Mancini
Machine learning based intrusion detection systems are increasingly targeted by black box adversarial attacks, where attackers craft evasive inputs...
3 months ago cs.CR cs.AI
Attack MEDIUM
David Lindner, Charlie Griffin, Tomek Korbak +4 more
Automated control monitors could play an important role in overseeing highly capable AI agents that we do not fully trust. Prior work has explored...
3 months ago cs.CR cs.AI cs.MA
Benchmark MEDIUM
Ali Al Sahili, Ali Chehab, Razane Tajeddine
Large Language Models (LLMs) are prone to memorizing training data, which poses serious privacy risks. Two of the most prominent concerns are...
3 months ago cs.LG cs.CL cs.CR
Attack HIGH
Karina Chichifoi, Fabio Merizzi, Michele Colajanni
Deep learning and federated learning (FL) are becoming powerful partners for next-generation weather forecasting. Deep learning enables...
3 months ago cs.LG cs.CR
Benchmark MEDIUM
Md Nahid Hasan Shuvo, Moinul Hossain
Connected autonomous vehicles (CAVs) rely on vision-based deep neural networks (DNNs) and low-latency Vehicle-to-Everything (V2X) communication to...
3 months ago cs.CV cs.AI cs.CR
Tool MEDIUM
Amy Chang, Tiffany Saade, Sanket Mendapara +2 more
Artificial intelligence (AI) systems are being readily and rapidly adopted, increasingly permeating critical domains: from consumer platforms and...
3 months ago cs.CR cs.AI
Tool MEDIUM
Shashie Dilhara Batan Arachchige, Benjamin Zi Hao Zhao, Hassan Jameel Asghar +2 more
Large Language Models (LLMs) are often fine-tuned to adapt their general-purpose knowledge to specific tasks and domains such as cyber threat...
3 months ago cs.CR cs.AI cs.LG
Defense MEDIUM
Perry Abdulkadir
Large language models (LLMs) are increasingly deployed behind safety guardrails such as system prompts and content filters, especially in settings...
3 months ago cs.CR cs.CL cs.LG
Attack MEDIUM
Samruddhi Baviskar
We evaluate adversarial robustness in tabular machine learning models used in financial decision making. Using credit scoring and fraud detection...
3 months ago cs.LG cs.AI cs.CR
Attack MEDIUM
Mohammad Mahdi Razmjoo, Mohammad Mahdi Sharifian, Saeed Bagheri Shouraki
Despite their remarkable performance, deep neural networks exhibit a critical vulnerability: small, often imperceptible, adversarial perturbations...
3 months ago cs.LG cs.CR cs.CV
Attack MEDIUM
Li Lin, Siyuan Xin, Yang Cao +1 more
Watermarking large language models (LLMs) is vital for preventing their misuse, including the fabrication of fake news, plagiarism, and spam. It is...
3 months ago cs.CR cs.AI
Attack HIGH
Md. Hasib Ur Rahman
As Large Language Models (LLMs) become ubiquitous, the challenge of securing them against adversarial "jailbreaking" attacks has intensified. Current...
3 months ago cs.LG cs.AI
Benchmark MEDIUM
Sanjay Das, Swastik Bhattacharya, Shamik Kundu +3 more
State-space models (SSMs), exemplified by the Mamba architecture, have recently emerged as state-of-the-art sequence-modeling frameworks, offering...
3 months ago cs.CR cs.LG
Benchmark MEDIUM
Luoxi Meng, Henry Feng, Ilia Shumailov +1 more
Browser-using agents (BUAs) are an emerging class of AI agents that interact with web browsers in human-like ways, including clicking, scrolling,...
3 months ago cs.CR cs.LG
Attack HIGH
Yixin Tan, Zhe Yu, Jun Sakuma
Finetuning pretrained large language models (LLMs) has become the standard paradigm for developing downstream applications. However, its security...
3 months ago cs.CR cs.AI
Attack HIGH
Safwan Shaheer, G. M. Refatul Islam, Mohammad Rafid Hamid +3 more
Prompt injection attacks can compromise the security and stability of critical systems, from infrastructure to large web applications. This work...
3 months ago cs.CR cs.AI