Attack MEDIUM
Mohammad Mahdi Razmjoo, Mohammad Mahdi Sharifian, Saeed Bagheri Shouraki
Despite their remarkable performance, deep neural networks exhibit a critical vulnerability: small, often imperceptible adversarial perturbations...
3 months ago cs.LG cs.CR cs.CV
Attack MEDIUM
Li Lin, Siyuan Xin, Yang Cao +1 more
Watermarking large language models (LLMs) is vital for preventing their misuse, including the fabrication of fake news, plagiarism, and spam. It is...
3 months ago cs.CR cs.AI
Attack MEDIUM
Hua Ma, Ruoxi Sun, Minhui Xue +4 more
Accurate time-series forecasting is increasingly critical for planning and operations in low-carbon power systems. Emerging time-series large...
3 months ago cs.CR cs.LG
Attack MEDIUM
Jamal Al-Karaki, Muhammad Al-Zafar Khan, Rand Derar Mohammad Al Athamneh
The scarcity of cyberattack data hinders the development of robust intrusion detection systems. This paper introduces PHANTOM, a novel adversarial...
3 months ago cs.CR cs.AI cs.LG
Attack MEDIUM
Neha, Tarunpreet Bhatia
Intrusion Detection Systems (IDS) are critical components in safeguarding 5G/6G networks from both internal and external cyber threats. While...
3 months ago cs.CR cs.LG
Attack MEDIUM
Miranda Christ, Noah Golowich, Sam Gunn +2 more
Watermarks are an essential tool for identifying AI-generated content. Recently, Christ and Gunn (CRYPTO '24) introduced pseudorandom...
Attack MEDIUM
Botao 'Amber' Hu, Bangdao Chen
The emerging "agentic web" envisions large populations of autonomous agents coordinating, transacting, and delegating across open networks. Yet many...
3 months ago cs.CY cs.MA
Attack MEDIUM
George Mikros
Large language models (LLMs) present a dual challenge for forensic linguistics. They serve as powerful analytical tools enabling scalable corpus...
3 months ago cs.CL cs.CY
Attack MEDIUM
Sima Jafarikhah, Daniel Thompson, Eva Deans +2 more
Manual vulnerability scoring, such as assigning Common Vulnerability Scoring System (CVSS) scores, is a resource-intensive process that is often...
3 months ago cs.CR cs.AI cs.PL
Attack MEDIUM
Donghang Duan, Xu Zheng, Yuefeng He +3 more
Current LLM-based text anonymization frameworks usually rely on remote API services from powerful LLMs, which creates an inherent privacy paradox:...
3 months ago cs.CR cs.CL
Attack MEDIUM
Jinbo Liu, Defu Cao, Yifei Wei +6 more
Graph topology is a fundamental determinant of memory leakage in multi-agent LLM systems, yet its effects remain poorly quantified. We introduce MAMA...
3 months ago cs.CR cs.AI cs.CL
Attack MEDIUM
Itay Yona, Amir Sarid, Michael Karasik +1 more
We introduce Doublespeak, a simple in-context representation hijacking attack against large language models (LLMs). The attack works by...
3 months ago cs.CL cs.AI cs.CR
Attack MEDIUM
Hanxiu Zhang, Yue Zheng
The protection of Intellectual Property (IP) in Large Language Models (LLMs) represents a critical challenge in contemporary AI research. While...
3 months ago cs.CR cs.AI cs.CL
Attack MEDIUM
Thomas Rivasseau
Current research on operator control of Large Language Models improves model robustness against adversarial attacks and misbehavior by training on...
Attack MEDIUM
Adel Chehade, Edoardo Ragusa, Paolo Gastaldo +1 more
Traffic classification (TC) plays a critical role in cybersecurity, particularly in IoT and embedded contexts, where inspection must often occur...
3 months ago cs.NI cs.CR cs.LG
Attack MEDIUM
Zixia Wang, Gaojie Jin, Jia Hu +1 more
Recent advancements in Large Language Models (LLMs) have led to their widespread adoption in daily applications. Despite their impressive...
3 months ago cs.LG cs.AI
Attack MEDIUM
Alexander Boyd, Franz Nowak, David Hyland +2 more
World models have been recently proposed as sandbox environments in which AI agents can be trained and evaluated before deployment. Although...
Attack MEDIUM
Aaron Sandoval, Cody Rushing
The field of AI Control seeks to develop robust control protocols: deployment safeguards for untrusted AI which may be intentionally subversive....
3 months ago cs.CR cs.CL
Attack MEDIUM
Adeela Bashir, The Anh Han, Zia Ush Shamszaman
The integration of large language models (LLMs) into healthcare IoT systems promises faster decisions and improved medical support. LLMs are also...
3 months ago cs.CR cs.LG cs.MA
Attack MEDIUM
K. J. Kevin Feng, Tae Soo Kim, Rock Yuren Pang +3 more
AI agents that take actions in their environment autonomously over extended time horizons require robust governance interventions to curb their...
3 months ago cs.CY cs.AI