OpenSec: Measuring Incident Response Agent Calibration Under Adversarial Evidence
Jarrod Barnes
As large language models (LLMs) improve, so do their offensive applications: frontier agents now generate working exploits for under $50 in compute...
Onkar Shelar, Travis Desell
Evolutionary prompt search is a practical black-box approach for red teaming large language models (LLMs), but existing methods often collapse onto a...
Yizhong Ding
Webshells remain a primary foothold for attackers to compromise servers, particularly within PHP ecosystems. However, existing detection mechanisms...
Holly Trikilis, Pasindu Marasinghe, Fariza Rashid +1 more
Phishing continues to be one of the most prevalent attack vectors, making accurate classification of phishing URLs essential. Recently, large...
Mohsen Hatami, Van Tuan Pham, Hozefa Lakadawala +1 more
The increasing integration of AI agents into cyber-physical systems (CPS) introduces new security risks that extend beyond traditional cyber or...
Bharath Krishnamurthy, Ajita Rattani
Morphing techniques generate artificial biometric samples that combine features from multiple individuals, allowing each contributor to be verified...
Nourin Shahin, Izzat Alsmadi
As large language models (LLMs) move from research prototypes to enterprise systems, their security vulnerabilities pose serious risks to data...
Lige Huang, Zicheng Liu, Jie Zhang +3 more
The dual offensive and defensive utility of Large Language Models (LLMs) highlights a critical gap in AI security: the lack of unified frameworks for...
Xiangyang Zhu, Yuan Tian, Zicheng Zhang +6 more
Large vision-language models (LVLMs) exhibit remarkable capabilities in cross-modal tasks but face significant safety challenges, which undermine...
Binyan Xu, Fan Yang, Xilin Dai +2 more
Deep Neural Networks remain inherently vulnerable to backdoor attacks. Traditional test-time defenses largely operate under the paradigm of internal...
Quy-Anh Dang, Chris Ngo
Despite significant progress in alignment, large language models (LLMs) remain vulnerable to adversarial attacks that elicit harmful behaviors....
Yuxiang Wang, Hongyu Liu, Dekun Chen +2 more
As Speech Language Models (SLMs) transition from personal devices to shared, multi-user environments such as smart homes, a new challenge emerges:...
Yangyang Guo, Ziwei Xu, Si Liu +2 more
This study reveals a previously unexplored vulnerability in the safety alignment of Large Language Models (LLMs). Existing aligned LLMs predominantly...
Sen Nie, Jie Zhang, Zhuo Wang +2 more
Vision-language models (VLMs) such as CLIP have demonstrated remarkable zero-shot generalization, yet remain highly vulnerable to adversarial...
Wachiraphan Charoenwet, Kla Tantithamthavorn, Patanamon Thongtanunam +3 more
Secure code review is critical at the pre-commit stage, where vulnerabilities must be caught early under tight latency and limited-context...
Satyapriya Krishna, Matteo Memelli, Tong Wang +5 more
Amazon published its Frontier Model Safety Framework (FMSF) as part of the Paris AI summit, following which we presented a report on Amazon's Premier...
Henry Chen, Victor Aranda, Samarth Keshari +2 more
Prompt-based attack techniques are one of the primary challenges in securely deploying and protecting LLM-based AI systems. LLM inputs are an...
Zahra Hashemi, Zhiqiang Zhong, Jun Pang +1 more
The rapid evolution of large language models (LLMs) has fuelled enthusiasm about their role in advancing scientific discovery, with studies exploring...
Mohamed Amine Ferrag, Abderrahmane Lakas, Merouane Debbah
Autonomous unmanned aerial vehicle (UAV) systems are increasingly deployed in safety-critical, networked environments where they must operate...
Geunsik Lim
As climate-related hazards intensify, conventional early warning systems (EWS) disseminate alerts rapidly but often fail to trigger timely protective...