TY - JOUR
T1 - Bi-LORA: A Vision-Language Approach for Synthetic Image Detection
AU - Keita, Mamadou
AU - Hamidouche, Wassim
AU - Bougueffa Eutamene, Hessen
AU - Taleb-Ahmed, Abdelmalik
AU - Camacho, David
AU - Hadid, Abdenour
N1 - Publisher Copyright:
© 2025 John Wiley & Sons Ltd.
PY - 2025/2
Y1 - 2025/2
AB - Advancements in deep image synthesis techniques, such as generative adversarial networks (GANs) and diffusion models (DMs), have ushered in an era of highly realistic image generation. While this technological progress has attracted significant interest, it has also raised concerns about the growing difficulty of distinguishing real images from their synthetic counterparts. This paper draws inspiration from the powerful convergence of vision and language, coupled with the zero-shot nature of vision-language models (VLMs). We introduce an innovative method, called Bi-LORA, that leverages VLMs combined with low-rank adaptation (LORA) tuning to enhance the accuracy of synthetic image detection for images produced by unseen generative models. The pivotal conceptual shift in our methodology is the reframing of binary classification as an image captioning task, leveraging the distinctive capabilities of a cutting-edge VLM, namely bootstrapping language-image pre-training (BLIP-2). Rigorous and comprehensive experiments validate the effectiveness of the proposed approach, particularly in detecting images from diffusion-based generative models unseen during training, while showcasing robustness to noise and generalisation to GANs. The experiments show that Bi-LORA outperforms state-of-the-art models in cross-generator tasks because it leverages multi-modal learning, open-world visual knowledge and robust, high-level semantic understanding. By combining visual and textual knowledge, it handles variations in the data distribution (such as those caused by different generators) and maintains strong performance across domains. Its ability to transfer knowledge, extract robust features and perform zero-shot learning further contributes to its generalisation, making it more adaptable to new generators. The experimental results show an impressive average accuracy of 93.41% in synthetic image detection on unseen generative models. The code and models associated with this research are publicly available at https://github.com/Mamadou-Keita/VLM-DETECT.
KW - deepfake
KW - diffusion models
KW - generative adversarial nets
KW - image captioning
KW - large language model
KW - low-rank adaptation
KW - text-to-image generation
KW - visual language model
UR - https://www.scopus.com/pages/publications/85214668825
U2 - 10.1111/exsy.13829
DO - 10.1111/exsy.13829
M3 - Article
AN - SCOPUS:85214668825
SN - 0266-4720
VL - 42
JO - Expert Systems
JF - Expert Systems
IS - 2
M1 - e13829
ER -