Saliency Prediction in Uncategorized Videos Based on Audio-Visual Correlation

Maryam Qamar, Suleman Qamar, Muhammad Muneeb, Sung Ho Bae, Anis Rahman

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Substantial research has been done in saliency modeling to build intelligent machines that can perceive and interpret their surroundings and focus only on the salient regions of a visual scene. However, existing spatio-temporal saliency models either treat videos as mere image sequences, discarding all audio information, or are unable to cope with inherently varying content. Based on the hypothesis that an audiovisual saliency model will outperform traditional spatio-temporal saliency models, this work provides a generic preliminary audiovisual saliency model. This is achieved by augmenting the visual saliency map with an audio saliency map computed by synchronizing low-level audio and visual features. The proposed model was evaluated under several criteria against eye-fixation data from DIEM, a publicly available video dataset. The results show that the model outperforms two state-of-the-art visual spatio-temporal saliency models, supporting our hypothesis that an audiovisual model performs better than a purely visual model on natural uncategorized videos.
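The abstract describes fusing a visual saliency map with an audio saliency map. As a minimal illustrative sketch (the paper's actual fusion rule, features, and weights are not stated here, so the normalization step and the weight `alpha` are assumptions), a convex combination of two normalized maps might look like:

```python
def normalize(m):
    """Min-max normalize a 2D saliency map (list of lists) to [0, 1]."""
    flat = [v for row in m for v in row]
    lo, hi = min(flat), max(flat)
    rng = hi - lo
    if rng == 0:  # flat map: no salient structure to preserve
        return [[0.0] * len(row) for row in m]
    return [[(v - lo) / rng for v in row] for row in m]

def fuse_saliency(visual_map, audio_map, alpha=0.7):
    """Combine visual and audio saliency maps of equal size.

    `alpha` weights the visual stream; it is a hypothetical parameter
    chosen for illustration, not a value from the paper.
    """
    v = normalize(visual_map)
    a = normalize(audio_map)
    return [[alpha * vv + (1.0 - alpha) * aa for vv, aa in zip(vr, ar)]
            for vr, ar in zip(v, a)]

# Toy 2x2 maps standing in for per-frame saliency predictions
visual = [[0.2, 0.8], [0.4, 0.6]]
audio = [[0.9, 0.1], [0.5, 0.5]]
fused = fuse_saliency(visual, audio, alpha=0.7)
```

In practice the audio map would first be spatially localized (e.g. by correlating audio energy with moving regions) before fusion; the sketch only shows the final combination step.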

    Original language: British English
    Pages (from-to): 15460-15470
    Number of pages: 11
    Journal: IEEE Access
    Volume: 11
    DOIs
    State: Published - 2023

    Keywords

    • audiovisual
    • Saliency
    • spatio-temporal
    • uncategorized videos
