Bi-LORA: A Vision-Language Approach for Synthetic Image Detection

Mamadou Keita, Wassim Hamidouche, Hessen Bougueffa Eutamene, Abdelmalik Taleb-Ahmed, David Camacho, Abdenour Hadid

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Advancements in deep image synthesis techniques, such as generative adversarial networks (GANs) and diffusion models (DMs), have ushered in an era of highly realistic generated images. While this technological progress has captured significant interest, it has also raised concerns about the difficulty of distinguishing real images from their synthetic counterparts. This paper takes inspiration from the powerful convergence of vision and language, coupled with the zero-shot nature of vision-language models (VLMs). We introduce an innovative method called Bi-LORA that leverages VLMs, combined with low-rank adaptation (LORA) tuning, to enhance the precision of synthetic image detection on images from unseen generative models. The pivotal conceptual shift in our methodology is reframing binary classification as an image captioning task, leveraging the distinctive capabilities of a cutting-edge VLM, notably BLIP-2 (bootstrapping language-image pre-training). Rigorous and comprehensive experiments validate the effectiveness of the proposed approach, particularly in detecting images from diffusion-based generative models unseen during training, showcasing robustness to noise and demonstrating generalisation to GANs. The experiments show that Bi-LORA outperforms state-of-the-art models in cross-generator tasks because it leverages multi-modal learning, open-world visual knowledge, and robust, high-level semantic understanding. By combining visual and textual knowledge, it can handle variations in the data distribution (such as those caused by different generators) and maintain strong performance across domains. Its ability to transfer knowledge, extract features robustly, and perform zero-shot learning also contributes to its generalisation capabilities, making it more adaptable to new generators.
    The experimental results show an average accuracy of 93.41% in synthetic image detection on unseen generation models. The code and models associated with this research are publicly available at https://github.com/Mamadou-Keita/VLM-DETECT.
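    The adaptation technique named in the abstract, low-rank adaptation (LORA), freezes a pretrained weight matrix and learns only a small low-rank update added to it. The following is a minimal, illustrative NumPy sketch of that idea; the class name, shapes, and hyperparameters are our own for demonstration and are not taken from the paper or its code release.

    ```python
    import numpy as np

    class LoRALinear:
        """Frozen linear weight W plus a trainable low-rank update B @ A.

        Effective weight: W + (alpha / r) * B @ A, where
        W: (d_out, d_in), B: (d_out, r), A: (r, d_in), and r << min(d_out, d_in).
        """

        def __init__(self, W, r=4, alpha=8, seed=0):
            rng = np.random.default_rng(seed)
            self.W = W                                        # frozen pretrained weight
            d_out, d_in = W.shape
            self.A = rng.normal(scale=0.01, size=(r, d_in))   # trainable down-projection
            self.B = np.zeros((d_out, r))                     # zero-init: update starts at 0
            self.scale = alpha / r

        def __call__(self, x):
            # x: (batch, d_in) -> (batch, d_out)
            return x @ (self.W + self.scale * self.B @ self.A).T

    # Tiny demonstration: with B zero-initialised, the layer matches the frozen base.
    W = np.eye(3)
    layer = LoRALinear(W, r=2)
    x = np.ones((1, 3))
    base_out = x @ W.T
    assert np.allclose(layer(x), base_out)

    # Simulating a training step on B changes only the low-rank update, never W.
    layer.B += 0.1
    assert not np.allclose(layer(x), base_out)
    assert np.allclose(layer.W, W)
    ```

    Because only A and B are updated, the number of trainable parameters is r * (d_in + d_out) per adapted matrix rather than d_in * d_out, which is what makes tuning a large VLM such as BLIP-2 tractable.
    
    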

    Original language: British English
    Article number: e13829
    Journal: Expert Systems
    Volume: 42
    Issue number: 2
    DOIs
    State: Published - Feb 2025

    Keywords

    • deepfake
    • diffusion models
    • generative adversarial nets
    • image captioning
    • large language model
    • low rank adaptation
    • text-to-image generation
    • visual language model
