Attack HIGH
Harsh Kasyap, Minghong Fang, Zhuqing Liu +2 more
Federated learning (FL) is a privacy-preserving machine learning technique that facilitates collaboration among participants across demographics. FL...
5 months ago cs.LG cs.CR
PDF
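To make the federated learning setup in the entry above concrete, here is a minimal FedAvg-style sketch in Python: each client fits a toy linear model on its own data and only the weights are averaged, never the raw samples. The function names, the linear-regression objective, and the toy data are illustrative assumptions, not taken from the paper.

```python
# Minimal FedAvg-style sketch: clients train locally and only share
# model weights, never raw data. All names here are illustrative.
import numpy as np

def local_update(global_weights, client_data, lr=0.1, epochs=1):
    """One client's local training pass (plain linear regression via gradient steps)."""
    w = global_weights.copy()
    X, y = client_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server averages the clients' locally trained weights (FedAvg)."""
    updates = [local_update(global_weights, data) for data in clients]
    return np.mean(updates, axis=0)

# Toy run: three clients, each holding its own private (X, y) shard.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2, -1] without any client sharing raw data
```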
Defense MEDIUM
Han Zhu, Juntao Dai, Jiaming Ji +8 more
With the widespread use of multi-modal large language models (MLLMs), safety issues have become a growing concern. Multi-turn dialogues, which are...
5 months ago cs.CL cs.AI
PDF
Survey MEDIUM
Zhenyu Mao, Jacky Keung, Fengji Zhang +3 more
The increasing demand for software development has driven interest in automating software engineering (SE) tasks using Large Language Models (LLMs)....
Benchmark MEDIUM
Lipeng He, Vasisht Duddu, N. Asokan
Chatbot providers (e.g., OpenAI) rely on tiered subscription schemes to generate revenue, offering basic models for free users, and advanced models...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Deeksha Hareesha Kulal, Chidozie Princewill Arannonu, Afsah Anwar +2 more
Phishing remains a critical cybersecurity threat, especially with the advent of large language models (LLMs) capable of generating highly convincing...
Benchmark MEDIUM
Shuo Chen, Zonggen Li, Zhen Han +7 more
Deep Research (DR) agents built on Large Language Models (LLMs) can perform complex, multi-step research by decomposing tasks, retrieving online...
5 months ago cs.CR cs.CL
PDF
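The entry above describes Deep Research agents that decompose a task, retrieve sources, and assemble an answer. The sketch below only illustrates that control flow; every function is a hypothetical placeholder (a real agent would call an LLM and a search or browsing tool).

```python
# Minimal sketch of a Deep-Research-style loop: decompose a task into
# sub-questions, "retrieve" notes for each, then synthesize a report.
# All functions are placeholders for LLM and tool calls.
def decompose(task: str) -> list[str]:
    return [f"{task} -- background", f"{task} -- recent results", f"{task} -- open problems"]

def retrieve(sub_question: str) -> str:
    return f"[notes for: {sub_question}]"   # stand-in for web search / tool use

def synthesize(task: str, notes: list[str]) -> str:
    return f"Report on '{task}':\n" + "\n".join(notes)

def deep_research(task: str) -> str:
    sub_questions = decompose(task)
    notes = [retrieve(q) for q in sub_questions]
    return synthesize(task, notes)

print(deep_research("prompt injection defenses"))
```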
Benchmark MEDIUM
Dominik Schwarz
The security of Large Language Model (LLM) applications is fundamentally challenged by "form-first" attacks like prompt injection and jailbreaking,...
5 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Sarah Ball, Andreas Haupt
Generative models are increasingly paired with safety classifiers that filter harmful or undesirable outputs. A common strategy is to fine-tune the...
5 months ago cs.LG cs.CL
PDF
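The entry above concerns pairing a generative model with a safety classifier that filters its outputs. A minimal sketch of that filtering pipeline follows; the keyword-based classifier and all names are toy assumptions and do not model the fine-tuning the paper studies.

```python
# Minimal sketch of a generator + safety-classifier filter pipeline.
# The classifier is a trivial keyword check purely for illustration.
from dataclasses import dataclass

@dataclass
class FilterResult:
    text: str
    blocked: bool

BLOCKLIST = {"build a bomb", "steal credentials"}  # toy stand-in for a learned classifier

def generate(prompt: str) -> str:
    """Placeholder for a generative model call."""
    return f"Model response to: {prompt}"

def safety_classifier(text: str) -> bool:
    """Return True if the text should be blocked (toy keyword heuristic)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def guarded_generate(prompt: str) -> FilterResult:
    """Generate, then filter: only outputs the classifier passes are returned verbatim."""
    candidate = generate(prompt)
    if safety_classifier(candidate):
        return FilterResult(text="[response withheld by safety filter]", blocked=True)
    return FilterResult(text=candidate, blocked=False)

print(guarded_generate("summarize this paper"))
```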
Tool HIGH
Caelin Kaplan, Alexander Warnecke, Neil Archibald
AI models are being increasingly integrated into real-world systems, raising significant concerns about their safety and security. Consequently, AI...
5 months ago cs.CR cs.AI
PDF
Tool HIGH
Zicheng Liu, Lige Huang, Jie Zhang +3 more
The increasing autonomy of Large Language Models (LLMs) necessitates a rigorous evaluation of their potential to aid in cyber offense. Existing...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Ting Li, Yang Yang, Yipeng Yu +3 more
Adversarial attacks on knowledge graph embeddings (KGE) aim to disrupt the model's ability to perform link prediction by removing or inserting triples. A...
5 months ago cs.CL cs.CR
PDF
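The entry above describes attacks that remove or insert triples to degrade link prediction over knowledge graph embeddings. Below is a minimal, assumption-laden sketch using a toy TransE-style objective: train embeddings on a tiny graph, score a target link, delete a triple, retrain, and compare. The graph, the training loop, and the choice to delete the target triple itself are all illustrative simplifications; real attacks typically perturb other, supporting triples.

```python
# Minimal sketch of the delete-a-triple idea behind KGE attacks, using a toy
# TransE-style objective (make ||h + r - t|| small for observed triples).
import numpy as np

ENTITIES = ["alice", "bob", "acme", "paris"]
RELATIONS = ["works_at", "lives_in"]
DIM = 8

def train_transe(triples, epochs=200, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    ent = {e: rng.normal(scale=0.1, size=DIM) for e in ENTITIES}
    rel = {r: rng.normal(scale=0.1, size=DIM) for r in RELATIONS}
    for _ in range(epochs):
        for h, r, t in triples:
            diff = ent[h] + rel[r] - ent[t]   # want h + r close to t
            ent[h] -= lr * diff
            rel[r] -= lr * diff
            ent[t] += lr * diff
    return ent, rel

def score(ent, rel, h, r, t):
    """Higher is more plausible (negative TransE distance)."""
    return -np.linalg.norm(ent[h] + rel[r] - ent[t])

graph = [("alice", "works_at", "acme"),
         ("bob", "works_at", "acme"),
         ("alice", "lives_in", "paris")]
target = ("bob", "works_at", "acme")

ent, rel = train_transe(graph)
before = score(ent, rel, *target)

# Adversarial deletion: drop the triple and retrain the embeddings.
poisoned = [tr for tr in graph if tr != target]
ent_p, rel_p = train_transe(poisoned)
after = score(ent_p, rel_p, *target)

print(f"target score before attack: {before:.3f}")
print(f"target score after deletion: {after:.3f}")  # expected lower in this toy run
```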
Benchmark MEDIUM
Jiayu Ding, Lei Cui, Li Dong +2 more
Recent advances in Large Language Models (LLMs) show that extending the length of reasoning chains significantly improves performance on complex...
Attack MEDIUM
Sean Oesch, Jack Hutchins, Luke Koch +1 more
In living off the land attacks, malicious actors use legitimate tools and processes already present on a system to avoid detection. In this paper, we...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Rui Xu, Jiawei Chen, Zhaoxia Yin +2 more
The widespread use of large language models (LLMs) and open-source code has raised ethical and security concerns regarding the distribution and...
5 months ago cs.CR cs.AI cs.LG
PDF
Tool HIGH
Pengyu Zhu, Lijun Li, Yaxing Lyu +3 more
LLM-based multi-agent systems (MAS) are increasingly integrated into next-generation applications, but their safety against backdoor attacks...
Attack HIGH
Michael Schlichtkrull
When AI agents retrieve and reason over external documents, adversaries can manipulate the data they receive to subvert their behaviour. Previous...
5 months ago cs.CL cs.AI
PDF
Defense MEDIUM
Jiahao Liu, Bonan Ruan, Xianglin Yang +5 more
LLM-based agents have demonstrated promising adaptability in real-world applications. However, these agents remain vulnerable to a wide range of...
Attack HIGH
Vasilije Stambolic, Aritra Dhar, Lukas Cavigelli
Retrieval-Augmented Generation (RAG) increases the reliability and trustworthiness of the LLM response and reduces hallucination by eliminating the...
5 months ago cs.CR cs.AI
PDF
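To illustrate the retrieval-augmented generation pipeline referenced in the entry above, here is a minimal retrieve-then-prompt sketch: documents are ranked by bag-of-words cosine similarity and prepended to the prompt. The corpus and names are illustrative, and the final LLM call is left as a placeholder.

```python
# Minimal retrieve-then-generate (RAG) sketch: rank a small document store by
# bag-of-words cosine similarity and prepend the best matches to the prompt.
import math
from collections import Counter

DOCUMENTS = [
    "FedAvg averages client model updates on a central server.",
    "Prompt injection hides adversarial instructions inside retrieved text.",
    "TransE scores a triple by the distance between h + r and t.",
]

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = Counter(query.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # placeholder: a real system would send this prompt to an LLM

print(answer("How does prompt injection reach an agent?"))
```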
Tool MEDIUM
Alexander Sternfeld, Andrei Kucharavy, Ljiljana Dolamic
Large language models (LLMs) have shown remarkable proficiency in code generation tasks across various programming languages. However, their outputs...
5 months ago cs.CL cs.CR
PDF
Defense MEDIUM
Zhuochen Yang, Kar Wai Fok, Vrizlynn L. L. Thing
Large language models have gained widespread attention recently, but their potential security vulnerabilities, especially privacy leakage, are also...