Large language models have gained widespread prominence, yet their vulnerability to prompt injection and other adversarial attacks remains a critical...
Generative models can produce photorealistic images at scale. This raises urgent concerns about the ability to detect synthetically generated images...
Firas Ben Hmida, Abderrahmen Amich, Ata Kaboudi +1 more
Deep neural networks (DNNs) are increasingly being deployed in high-stakes applications, from self-driving cars to biometric authentication. However,...
Marco Zimmerli, Andreas Plesner, Till Aczel +1 more
Deep neural networks remain vulnerable to adversarial examples despite advances in architectures and training paradigms. We investigate how training...
Large language models (LLMs) have become increasingly popular due to their ability to interact with unstructured content. As such, LLMs are now a key...
Large language models (LLMs), despite being safety-aligned, exhibit brittle refusal behaviors that can be circumvented by simple linguistic changes....
Large vision-language models (LVLMs) have achieved impressive performance across a wide range of vision-language tasks, yet they remain vulnerable...
Large language models can express values in two main ways: (1) intrinsic expression, reflecting the model's inherent values learned during training,...
Large Reasoning Models (LRMs) have demonstrated remarkable capabilities in complex problem-solving through Chain-of-Thought (CoT) reasoning. However,...