LLM-Assisted Pseudo-Relevance Feedback
David Otero, Javier Parapar
Query expansion is a long-standing technique to mitigate vocabulary mismatch in ad hoc Information Retrieval. Pseudo-relevance feedback methods, such...
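The classical pseudo-relevance feedback loop that this line of work builds on can be sketched in a few lines: assume the top-ranked documents from an initial retrieval are relevant, and append their most salient terms to the query. This is a minimal term-frequency sketch of generic PRF, not the paper's LLM-assisted method; the function name, stopword list, and scoring are all illustrative assumptions.

```python
from collections import Counter

def prf_expand(query, ranked_docs, k=3, n_terms=5):
    """Naive pseudo-relevance feedback (illustrative sketch):
    treat the top-k retrieved documents as relevant, then append
    their most frequent terms (excluding stopwords and terms
    already in the query) to the original query."""
    stop = {"the", "a", "an", "of", "and", "to", "in", "is", "with"}
    query_terms = set(query.lower().split())
    counts = Counter()
    for doc in ranked_docs[:k]:
        for term in doc.lower().split():
            if term not in stop and term not in query_terms:
                counts[term] += 1
    expansion = [t for t, _ in counts.most_common(n_terms)]
    return query + " " + " ".join(expansion)
```

Production PRF models such as RM3 weight expansion terms by retrieval score rather than raw frequency, but the assume-then-expand structure is the same.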
Yuanxiang Liu, Songze Li, Xiaoke Guo +4 more
Large Language Models (LLMs) have demonstrated remarkable reasoning capabilities but often grapple with reliability challenges like hallucinations....
Haoze Guo, Ziqi Wei
Retrieval-augmented generation (RAG) systems place increasing emphasis on grounding their responses in user-generated content found on the Web,...
Sara AlMahri, Liming Xu, Alexandra Brintrup
Modern supply chains are increasingly exposed to disruptions ranging from geopolitical events and demand shocks to trade restrictions and natural disasters. While...
Greta Dolcetti, Giulio Zizzo, Sergio Maffeis
We present an experimental evaluation that assesses the robustness of four open source LLMs claiming function-calling capabilities against three...
Shaznin Sultana, Sadia Afreen, Nasir U. Eisty
Context: Traditional software security analysis methods struggle to keep pace with the scale and complexity of modern codebases, requiring...
Ziqi Ding, Yunfeng Wan, Wei Song +7 more
CAPTCHAs are widely used by websites to block bots and spam by presenting challenges that are easy for humans but difficult for automated programs to...
Seong-Gyu Park, Sohee Park, Jisu Lee +2 more
Recent LLMs increasingly integrate reasoning mechanisms like Chain-of-Thought (CoT). However, this explicit reasoning exposes a new attack surface...
Erin Feiglin, Nir Hutnik, Raz Lapid
We investigate a failure mode of large language models (LLMs) in which plain-text prompts elicit excessive outputs, a phenomenon we term Overflow....
Dongryeol Lee, Yerin Hwang, Taegwan Kang +3 more
While large language models (LLMs) are increasingly used as automatic judges for question answering (QA) and other reference-conditioned evaluation...
Huipeng Ma, Luan Zhang, Dandan Song +10 more
In multi-hop reasoning, multi-round retrieval-augmented generation (RAG) methods typically rely on LLM-generated content as the retrieval query....
Weipeng Jiang, Xiaoyu Zhang, Juan Zhai +3 more
Emoticons are widely used in digital communication to convey affective intent, yet their safety implications for Large Language Models (LLMs) remain...
Andrew D. Maynard
Large language model (LLM)-based conversational AI systems present a challenge to human cognition that current frameworks for understanding...
Ying Zhou, Jiacheng Wei, Yu Qi +2 more
Large language models (LLMs) demonstrate remarkable capabilities in natural language understanding and generation. Despite being trained on...
Vasanth Iyer, Leonardo Bobadilla, S. S. Iyengar
Large Language Models (LLMs) such as Gemma-2B have shown strong performance in various natural language processing tasks. However, general-purpose...
Qiang Zhang, Elena Emma Wang, Jiaming Li +1 more
This study presents a Secure Multi-Tenant Architecture (SMTA) combined with a novel Burn-After-Use (BAU) mechanism for enterprise LLM...
Minfeng Qi, Dongyang He, Qin Wang +1 more
Visual Reasoning CAPTCHAs (VRCs) combine visual scenes with natural-language queries that demand compositional inference over objects, attributes,...
Keyang Zhang, Zeyu Chen, Xuan Feng +4 more
The security of scripting languages such as PowerShell is critical given their powerful automation and administration capabilities, often exercised...
Hoang-Chau Luong, Lingwei Chen
Low-Rank Adaptation (LoRA) is widely used for parameter-efficient fine-tuning of large language models, but it is notably ineffective at removing...
Tianshi Li
On December 4, 2025, Anthropic released Anthropic Interviewer, an AI tool for running qualitative interviews at scale, along with a public dataset of...