Multimodal Misinformation Detection using Large Vision-Language Models

Publication: Contribution to book/report/anthology/conference proceedings, conference paper, research, peer-reviewed

Authors

  • Sahar Tahmasebi
  • Eric Müller-Budack
  • Ralph Ewerth

Organisational units

External organisations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek

Details

Original language: English
Title of host publication: CIKM 2024
Subtitle: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management
Pages: 2189-2199
Number of pages: 11
ISBN (electronic): 9798400704369
Publication status: Published - 21 Oct 2024
Event: 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 - Boise, United States
Duration: 21 Oct 2024 - 25 Oct 2024

Abstract

The increasing proliferation of misinformation and its alarming impact have motivated both industry and academia to develop approaches for misinformation detection and fact checking. Recent advances in large language models (LLMs) have shown remarkable performance in various tasks, but their potential for misinformation detection remains relatively underexplored. Most existing state-of-the-art approaches either do not consider evidence and solely focus on claim-related features, or assume that the evidence is provided. The few approaches that consider evidence retrieval as part of misinformation detection rely on fine-tuned models. In this paper, we investigate the potential of LLMs for misinformation detection in a zero-shot setting. We incorporate an evidence retrieval component, as it is crucial to gather pertinent information from various sources to assess the veracity of claims. To this end, we propose a novel re-ranking approach for multimodal evidence retrieval using both LLMs and large vision-language models (LVLMs). The retrieved evidence samples (images and texts) serve as the input for an LVLM-based approach to multimodal fact verification (LVLM4FV). To enable a fair evaluation, we address the issue of incomplete ground truth in an existing evidence retrieval dataset by annotating a more complete set of evidence samples for both image and text retrieval. Our experimental results on two datasets demonstrate the superiority of the proposed approach in both evidence retrieval and fact verification tasks, as well as a better generalization capability.
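The abstract outlines a two-stage, zero-shot pipeline: candidate evidence is re-ranked with LLMs/LVLMs, and the top-ranked texts and images are then passed to an LVLM for fact verification. The following Python sketch illustrates one plausible way to wire such a pipeline together; the callables query_llm and query_lvlm, the prompts, and the pointwise 0-10 relevance scoring are illustrative assumptions and do not reproduce the authors' LVLM4FV implementation.

# Hypothetical sketch of a zero-shot retrieve -> re-rank -> verify pipeline.
# query_llm / query_lvlm stand in for whatever LLM and vision-language model
# backends are available; prompts and scoring are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Evidence:
    text: str             # caption or article snippet retrieved for the claim
    image_path: str = ""  # optional path to an associated evidence image


def rerank_evidence(
    claim: str,
    candidates: List[Evidence],
    query_llm: Callable[[str], str],
    top_k: int = 5,
) -> List[Evidence]:
    """Score each candidate's relevance to the claim with an LLM (0-10)
    and keep the top-k items."""
    scored = []
    for cand in candidates:
        prompt = (
            f"Claim: {claim}\n"
            f"Evidence: {cand.text}\n"
            "On a scale from 0 to 10, how relevant is this evidence to the "
            "claim? Answer with a single number."
        )
        reply = query_llm(prompt)
        try:
            score = float(reply.strip().split()[0])
        except (ValueError, IndexError):
            score = 0.0  # unparsable reply counts as irrelevant
        scored.append((score, cand))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [cand for _, cand in scored[:top_k]]


def verify_claim(
    claim: str,
    evidence: List[Evidence],
    query_lvlm: Callable[[str, List[str]], str],
) -> str:
    """Ask a vision-language model for a zero-shot verdict, given the claim
    plus the re-ranked textual evidence and any evidence images."""
    evidence_text = "\n".join(f"- {e.text}" for e in evidence)
    image_paths = [e.image_path for e in evidence if e.image_path]
    prompt = (
        f"Claim: {claim}\n"
        f"Evidence:\n{evidence_text}\n"
        "Based only on the evidence above and the attached images, is the "
        "claim supported or refuted? Answer 'supported' or 'refuted'."
    )
    verdict = query_lvlm(prompt, image_paths)
    return "supported" if "support" in verdict.lower() else "refuted"

Pointwise scoring keeps each prompt short and lets candidates be scored independently; a listwise prompt that asks the model to order all candidates at once is a common alternative, at the cost of a longer context.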

ASJC Scopus subject areas

Cite this

Multimodal Misinformation Detection using Large Vision-Language Models. / Tahmasebi, Sahar; Müller-Budack, Eric; Ewerth, Ralph.
CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024. pp. 2189-2199.


Tahmasebi, S, Müller-Budack, E & Ewerth, R 2024, Multimodal Misinformation Detection using Large Vision-Language Models. in CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. pp. 2189-2199, 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024, Boise, United States, 21 Oct. 2024. https://doi.org/10.48550/arXiv.2407.14321, https://doi.org/10.1145/3627673.3679826
Tahmasebi, S., Müller-Budack, E., & Ewerth, R. (2024). Multimodal Misinformation Detection using Large Vision-Language Models. In CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (pp. 2189-2199). https://doi.org/10.48550/arXiv.2407.14321, https://doi.org/10.1145/3627673.3679826
Tahmasebi S, Müller-Budack E, Ewerth R. Multimodal Misinformation Detection using Large Vision-Language Models. In: CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024. p. 2189-2199. doi: 10.48550/arXiv.2407.14321, 10.1145/3627673.3679826
Tahmasebi, Sahar ; Müller-Budack, Eric ; Ewerth, Ralph. / Multimodal Misinformation Detection using Large Vision-Language Models. CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024. pp. 2189-2199
BibTeX
@inproceedings{947aa6afc3ca471291a965673a6fa1c2,
title = "Multimodal Misinformation Detection using Large Vision-Language Models",
abstract = "The increasing proliferation of misinformation and its alarming impact have motivated both industry and academia to develop approaches for misinformation detection and fact checking. Recent advances on large language models (LLMs) have shown remarkable performance in various tasks, but their potential in misinformation detection remains relatively underexplored. Most of existing state-of-the-art approaches either do not consider evidence and solely focus on claim related features or assume the evidence is provided. Few approaches consider evidence retrieval as part of the misinformation detection but rely on fine-tuning models. In this paper, we investigate the potential of LLMs for misinformation detection in a zero-shot setting. We incorporate an evidence retrieval component as it is crucial to gather pertinent information from various sources to detect the veracity of claims. To this end, we propose a novel re-ranking approach for multimodal evidence retrieval using both LLMs and large vision-language models (LVLM). The retrieved evidence samples (images and texts) serve as the input for an LVLM-based approach for multimodal fact verification (LVLM4FV). To enable a fair evaluation, we address the issue of incomplete ground truth in an existing evidence retrieval dataset by annotating a more complete set of evidence samples for both image and text retrieval. Our experimental results on two datasets demonstrate the superiority of the proposed approach in both evidence retrieval and fact verification tasks, with a better generalization capability.",
keywords = "multimodal misinformation detection, news analytics, social media",
author = "Sahar Tahmasebi and Eric M{\"u}ller-Budack and Ralph Ewerth",
note = "Publisher Copyright: {\textcopyright} 2024 Owner/Author.; 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 ; Conference date: 21-10-2024 Through 25-10-2024",
year = "2024",
month = oct,
day = "21",
doi = "10.48550/arXiv.2407.14321",
language = "English",
pages = "2189--2199",
booktitle = "CIKM 2024",

}

RIS

TY - GEN

T1 - Multimodal Misinformation Detection using Large Vision-Language Models

AU - Tahmasebi, Sahar

AU - Müller-Budack, Eric

AU - Ewerth, Ralph

N1 - Publisher Copyright: © 2024 Owner/Author.

PY - 2024/10/21

Y1 - 2024/10/21

N2 - The increasing proliferation of misinformation and its alarming impact have motivated both industry and academia to develop approaches for misinformation detection and fact checking. Recent advances in large language models (LLMs) have shown remarkable performance in various tasks, but their potential for misinformation detection remains relatively underexplored. Most existing state-of-the-art approaches either do not consider evidence and solely focus on claim-related features, or assume that the evidence is provided. The few approaches that consider evidence retrieval as part of misinformation detection rely on fine-tuned models. In this paper, we investigate the potential of LLMs for misinformation detection in a zero-shot setting. We incorporate an evidence retrieval component, as it is crucial to gather pertinent information from various sources to assess the veracity of claims. To this end, we propose a novel re-ranking approach for multimodal evidence retrieval using both LLMs and large vision-language models (LVLMs). The retrieved evidence samples (images and texts) serve as the input for an LVLM-based approach to multimodal fact verification (LVLM4FV). To enable a fair evaluation, we address the issue of incomplete ground truth in an existing evidence retrieval dataset by annotating a more complete set of evidence samples for both image and text retrieval. Our experimental results on two datasets demonstrate the superiority of the proposed approach in both evidence retrieval and fact verification tasks, as well as a better generalization capability.

AB - The increasing proliferation of misinformation and its alarming impact have motivated both industry and academia to develop approaches for misinformation detection and fact checking. Recent advances in large language models (LLMs) have shown remarkable performance in various tasks, but their potential for misinformation detection remains relatively underexplored. Most existing state-of-the-art approaches either do not consider evidence and solely focus on claim-related features, or assume that the evidence is provided. The few approaches that consider evidence retrieval as part of misinformation detection rely on fine-tuned models. In this paper, we investigate the potential of LLMs for misinformation detection in a zero-shot setting. We incorporate an evidence retrieval component, as it is crucial to gather pertinent information from various sources to assess the veracity of claims. To this end, we propose a novel re-ranking approach for multimodal evidence retrieval using both LLMs and large vision-language models (LVLMs). The retrieved evidence samples (images and texts) serve as the input for an LVLM-based approach to multimodal fact verification (LVLM4FV). To enable a fair evaluation, we address the issue of incomplete ground truth in an existing evidence retrieval dataset by annotating a more complete set of evidence samples for both image and text retrieval. Our experimental results on two datasets demonstrate the superiority of the proposed approach in both evidence retrieval and fact verification tasks, as well as a better generalization capability.

KW - multimodal misinformation detection

KW - news analytics

KW - social media

UR - http://www.scopus.com/inward/record.url?scp=85210041544&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2407.14321

DO - 10.48550/arXiv.2407.14321

M3 - Conference contribution

AN - SCOPUS:85210041544

SP - 2189

EP - 2199

BT - CIKM 2024

T2 - 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024

Y2 - 21 October 2024 through 25 October 2024

ER -