Diverse LLMs vs. Vulnerabilities: Who Detects and Fixes Them Better?
Arastoo Zibaeirad, Marco Vieira
Large Language Models (LLMs) are increasingly being studied for Software Vulnerability Detection (SVD) and Repair (SVR). Individual LLMs have...
2,104+ academic papers on AI security, attacks, and defenses
Showing 1181–1200 of 2,104 papers
J. Alexander Curtis, Nasir U. Eisty
Penetration testing is a cornerstone of cybersecurity, traditionally driven by manual, time-intensive processes. As systems grow in complexity, there...
Dang-Khoa Nguyen, Gia-Thang Ho, Quang-Minh Pham +5 more
Software supply chain attacks targeting the npm ecosystem have become increasingly sophisticated, leveraging obfuscation and complex logic to evade...
Hua Ma, Ruoxi Sun, Minhui Xue +4 more
Accurate time-series forecasting is increasingly critical for planning and operations in low-carbon power systems. Emerging time-series large...
Padmeswari Nandiya, Ahmad Mohsin, Ahmed Ibrahim +2 more
Industry 5.0's increasing integration of IT and OT systems is transforming industrial operations but also expanding the cyber-physical attack...
Xin Yang, Omid Ardakanian
Data obfuscation is a promising technique for mitigating attribute inference attacks by semi-trusted parties with access to time-series data emitted...
Peichun Hua, Hao Li, Shanghao Shi +2 more
Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both...
Jie Ma, Junqing Zhang, Guanxiong Shen +2 more
Radio frequency fingerprint identification (RFFI) is an emerging technique for the lightweight authentication of wireless Internet of Things (IoT)...
Edward Lue Chee Lip, Anthony Channg, Diana Kim +2 more
As AI capabilities advance, we increasingly rely on powerful models to decompose complex tasks – but what if the decomposer itself is...
Andrew Adiletta, Kathryn Adiletta, Kemal Derya +1 more
The rapid deployment of Large Language Models (LLMs) has created an urgent need for enhanced security and privacy measures in Machine Learning (ML)....
Jamal Al-Karaki, Muhammad Al-Zafar Khan, Rand Derar Mohammad Al Athamneh
The scarcity of cyberattack data hinders the development of robust intrusion detection systems. This paper introduces PHANTOM, a novel adversarial...
Jing Cui, Yufei Han, Jianbin Jiao +1 more
Backdoor attacks embed malicious behaviors into Large Language Models (LLMs), enabling adversaries to trigger harmful outputs or bypass safety...
Alexander K. Saeri, Sophia Lloyd George, Jess Graham +4 more
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI...
Manon Kempermann, Sai Suresh Macharla Vasu, Mahalakshmi Raveenthiran +2 more
Safety evaluations of large language models (LLMs) typically focus on universal risks like dangerous capabilities or undesirable propensities....
Neha, Tarunpreet Bhatia
Intrusion Detection Systems (IDS) are critical components in safeguarding 5G/6G networks from both internal and external cyber threats. While...
Han Yang, Shaofeng Li, Tian Dong +3 more
Deep Neural Networks (DNNs), as valuable intellectual property, face unauthorized use. Existing protections, such as digital watermarking, are...
Chaomeng Lu, Bert Lagaisse
Vulnerability detection methods based on deep learning (DL) have shown strong performance on benchmark datasets, yet their real-world effectiveness...
Devanshu Sahoo, Manish Prasad, Vasudev Majhi +5 more
Driven by surging submission volumes, scientific peer review has catalyzed two parallel trends: individual over-reliance on LLMs and institutional...
N Mangala, Murtaza Rangwala, S Aishwarya +5 more
Healthcare has become exceptionally sophisticated, as wearables and connected medical devices are revolutionising remote patient monitoring,...
Devanshu Sahoo, Vasudev Majhi, Arjun Neekhra +3 more
The use of Large Language Models (LLMs) as automatic judges for code evaluation is becoming increasingly prevalent in academic environments. But...