Multimodal Hate Speech Detection in Memes Using Contrastive Language-Image Pre-Training

Greeshma Arya, Mohammad Kamrul Hasan, Ashish Bagwari, Nurhizam Safie, Shayla Islam, Fatima Rayan Awad Ahmed, Aaishani De, Muhammad Attique Khan, Taher M. Ghazal

Research output: Contribution to journal › Article › peer-review


Abstract

In contemporary society, the proliferation of online hateful messages has emerged as a pressing concern, inflicting deleterious consequences on both the societal fabric and individual well-being. The automatic detection of such malevolent content using models designed to recognize it holds promise for mitigating its harmful impact. However, the advent of 'Hateful Memes' poses fresh challenges to the detection paradigm, particularly for deep learning models. These memes consist of a textual element paired with an image; each component is individually innocuous, but their combination produces a detrimental effect. Consequently, entities responsible for disseminating information via web browsers are compelled to institute mechanisms that regulate and automatically filter out such injurious content. Effectively identifying hateful memes demands algorithms and models endowed with robust vision-and-language fusion capabilities, capable of reasoning across diverse modalities. This research introduces a novel approach that leverages the multimodal Contrastive Language-Image Pre-Training (CLIP) model, fine-tuned through the incorporation of prompt engineering. This methodology achieves a commendable accuracy of 87.42%. Comprehensive metrics such as loss, AUROC, and F1 score are also computed, corroborating the efficacy of the proposed strategy. Our findings suggest that this approach presents an efficient means to regulate the dissemination of hate speech in the form of viral meme content across social networking platforms, thereby contributing to a safer online environment.
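
To make the general mechanism concrete, the sketch below shows how a pre-trained CLIP model can score a meme against prompt-engineered class descriptions in a zero-shot fashion, i.e., by comparing the image embedding with text embeddings of candidate prompts via scaled cosine similarity. This is only an illustrative sketch of the underlying idea: the checkpoint name, the prompt templates, and the score_meme helper are assumptions for demonstration, not the prompts or fine-tuning procedure reported in the paper.

```python
# Illustrative zero-shot sketch: scoring a meme image against assumed
# "hateful" vs. "harmless" text prompts with a pre-trained CLIP model.
# The checkpoint, prompt templates, and file paths are illustrative
# assumptions, not the configuration used in the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score_meme(image_path: str, meme_text: str) -> dict:
    """Return class probabilities for one meme (image plus overlaid text)."""
    image = Image.open(image_path).convert("RGB")
    # Prompt engineering: the meme's own text is folded into each candidate
    # class prompt, so the text encoder sees both the label and the caption.
    prompts = [
        f"a hateful meme that says: {meme_text}",
        f"a harmless meme that says: {meme_text}",
    ]
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image holds the scaled cosine similarities between the image
    # embedding and each prompt embedding; softmax turns them into probabilities.
    probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
    return {"hateful": probs[0].item(), "harmless": probs[1].item()}

# Example usage (hypothetical meme file and extracted caption):
# print(score_meme("meme_01.png", "caption text extracted from the meme"))
```

In this zero-shot form the prediction relies entirely on the image-text alignment learned during contrastive pre-training; the paper additionally fine-tunes the model, which would adapt these similarity scores to the hateful-meme task.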

Original language: British English
Pages (from-to): 22359-22375
Number of pages: 17
Journal: IEEE Access
Volume: 12
DOIs
State: Published - 2024

Keywords

  • CLIP
  • contrastive learning
  • cosine similarity matrix
  • Facebook Hateful Memes dataset
  • InfoNCE contrastive loss
  • multimodal
  • prompt engineering
  • zero-shot prediction
