LLM-Agent Interactions on Markets with Information Asymmetries
Alexander Erlei, Lukas Meub
As AI agents increasingly act on behalf of human stakeholders in economic settings, understanding their behavior in complex market environments...
Tzafrir Rehan
We present Test-Driven AI Agent Definition (TDAD), a methodology that treats agent prompts as compiled artifacts: engineers provide behavioral...
Yi Chen, Yun Bian, Haiquan Wang +2 more
The application of large language models to code generation has evolved from one-shot generation to iterative refinement, yet the evolution of...
Zhishu Liu, Kaishen Yuan, Bo Zhao +2 more
Micro-expression Action Unit (AU) detection identifies localized AUs from subtle facial muscle activations, providing a foundation for decoding...
Junxian Li, Tu Lan, Haozhen Tan +2 more
Modern vision-language-model (VLM) based graphical user interface (GUI) agents are expected not only to execute actions accurately but also to...
JV Roig
How much do large language models actually hallucinate when answering questions grounded in provided documents? Despite the critical importance of...
Yonghong Deng, Zhen Yang, Ping Jian +3 more
With the rapid advancement of large language models (LLMs), the safety of LLMs has become a critical concern. Despite significant efforts in safety...
Hui Liu, Kecheng Chen, Jialiang Wang +3 more
Vision-Language Models (VLMs), such as CLIP, have significantly advanced zero-shot image recognition. However, their performance remains limited by...
Bo Jiang
Knowledge distillation from proprietary LLM APIs poses a growing threat to model providers, yet defenses against this attack remain fragmented and...
Sumit Ranjan, Sugandha Sharma, Ubaid Abbas +1 more
Voice interfaces are quickly becoming a common way for people to interact with AI systems. This also brings new security risks, such as prompt...
Chenxi Li, Xianggan Liu, Dake Shen +9 more
Despite the rapid progress of Large Vision-Language Models (LVLMs), the integration of visual modalities introduces new safety vulnerabilities that...
Xiaolei Zhang, Lu Zhou, Xiaogang Xu +5 more
Artificial Intelligence (AI) agents have evolved from passive predictive tools into active entities capable of autonomous decision-making and...
Yuhang Huang, Boyang Ma, Biwei Yan +5 more
The Model Context Protocol (MCP) is an open and standardized interface that enables large language models (LLMs) to interact with external tools and...
Neha Nagaraja, Hayretdin Bahsi
Large Language Models (LLMs) are increasingly integrated into safety-critical workflows, yet existing security analyses remain fragmented and often...
Yige Li, Wei Zhao, Zhe Li +6 more
Backdoor mechanisms have traditionally been studied as security threats that compromise the integrity of machine learning models. However, the same...
Saroj Mishra, Suman Niroula, Umesh Yadav +3 more
Retrieval-Augmented Generation (RAG) systems are increasingly evolving into agentic architectures where large language models autonomously coordinate...
Eduard Hirsch, Kristina Raab, Tobias J. Bauer +1 more
IT systems are facing an increasing number of security threats, including advanced persistent attacks and future quantum-computing vulnerabilities....
Yuxu Ge
Autonomous agents powered by large language models introduce a class of execution-layer vulnerabilities -- prompt injection, retrieval poisoning, and...
Jialai Wang, Ya Wen, Zhongmou Liu +4 more
Targeted bit-flip attacks (BFAs) exploit hardware faults to manipulate model parameters, posing a significant security threat. While prior work...