Interpretable zero-shot stance detection with proactive content intervention

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authorship

Organisational units


Details

Original language: English
Article number: 104223
Journal: Information Processing and Management
Volume: 62
Issue number: 6
Early online date: 16 Jun 2025
Publication status: Published - Nov 2025

Abstract

Zero-Shot Stance Detection (ZSSD) identifies an author's stance towards unseen targets. Existing works have mainly focused on contrastive, meta, adversarial learning, or data augmentation but face issues like data scarcity, generalizability, and lack of coherence between text and targets. Moreover, stance detection must be interpretable to ensure transparency. Recent works with large language models (LLMs) aim to enhance unseen target knowledge or generate explanations but often rely excessively on explicit reasoning or provide coarse explanations, overlooking implicit cues and complicating interpretation. To address these challenges, we propose a novel interpretable multi-stage ZSSD framework. Stage 1 decodes explanations (rationales) justifying the stance while Stage 2 provides the final stance label, thus providing inherent interpretability in predicting stances. Extensive experiments prove that our approach outperforms other baselines with an average improvement in F1 scores of 27.99% with LLMs and 23.60% without LLMs for SemEval and 14.62% with LLMs and 25.24% without LLMs for VAST datasets for the ZSSD task, benefiting from the proposed pipeline architecture and interpretable design. Furthermore, to mitigate the harmful effects of offensive content and promote a more respectful online environment, we integrate an intervention module that leverages the contextual insights derived from our ZSSD framework with the ethics-based text generation power of LLMs to develop interventions. Automatic and human evaluation of LLM-generated interventions based on various proposed criteria provide insights into how LLMs perceive similar information from different perspectives, which can help foster morally sound and respectful online discourse.
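The two-stage design described above (Stage 1 decodes a rationale, Stage 2 assigns the stance label conditioned on it) can be sketched as a minimal pipeline. This is purely illustrative and not the authors' implementation: the rationale decoder and stance classifier below are hypothetical stand-ins (a template string and a keyword rule) for the learned models used in the paper.

```python
# Illustrative two-stage ZSSD pipeline skeleton (hypothetical stand-ins,
# not the paper's actual models or prompts).

def decode_rationale(text: str, target: str) -> str:
    """Stage 1: produce a free-text rationale justifying the stance.
    A placeholder for the paper's rationale decoder (e.g. a seq2seq LM)."""
    return f"The text expresses an opinion related to '{target}'."

def classify_stance(text: str, target: str, rationale: str) -> str:
    """Stage 2: map (text, target, rationale) to a stance label.
    Here a trivial keyword rule replaces the learned classifier."""
    lowered = text.lower()
    if any(w in lowered for w in ("support", "agree", "favor")):
        return "FAVOR"
    if any(w in lowered for w in ("oppose", "against", "reject")):
        return "AGAINST"
    return "NONE"

def predict(text: str, target: str) -> dict:
    rationale = decode_rationale(text, target)         # interpretable output
    stance = classify_stance(text, target, rationale)  # final label
    return {"target": target, "stance": stance, "rationale": rationale}

print(predict("I strongly oppose the new policy.", "new policy"))
```

Because the rationale is produced before (and passed into) the labeling stage, every prediction carries its own justification, which is the interpretability property the abstract emphasizes.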

ASJC Scopus subject areas

Cite this

Interpretable zero-shot stance detection with proactive content intervention. / Upadhyaya, Apoorva; Nejdl, Wolfgang; Fisichella, Marco.
In: Information Processing and Management, Vol. 62, No. 6, 104223, 11.2025.


Download (BibTeX)
@article{33d2a4d435c24937bacda1733493a78a,
title = "Interpretable zero-shot stance detection with proactive content intervention",
abstract = "Zero-Shot Stance Detection (ZSSD) identifies an author's stance towards unseen targets. Existing works have mainly focused on contrastive, meta, adversarial learning, or data augmentation but face issues like data scarcity, generalizability, and lack of coherence between text and targets. Moreover, stance detection must be interpretable to ensure transparency. Recent works with large language models (LLMs) aim to enhance unseen target knowledge or generate explanations but often rely excessively on explicit reasoning or provide coarse explanations, overlooking implicit cues and complicating interpretation. To address these challenges, we propose a novel interpretable multi-stage ZSSD framework. Stage 1 decodes explanations (rationales) justifying the stance while Stage 2 provides the final stance label, thus providing inherent interpretability in predicting stances. Extensive experiments prove that our approach outperforms other baselines with an average improvement in F1 scores of 27.99% with LLMs and 23.60% without LLMs for SemEval and 14.62% with LLMs and 25.24% without LLMs for VAST datasets for the ZSSD task, benefiting from the proposed pipeline architecture and interpretable design. Furthermore, to mitigate the harmful effects of offensive content and promote a more respectful online environment, we integrate an intervention module that leverages the contextual insights derived from our ZSSD framework with the ethics-based text generation power of LLMs to develop interventions. Automatic and human evaluation of LLM-generated interventions based on various proposed criteria provide insights into how LLMs perceive similar information from different perspectives, which can help foster morally sound and respectful online discourse.",
keywords = "Interpretability, Intervention, Large language models, Rationale decoding, Zero-shot stance detection",
author = "Apoorva Upadhyaya and Wolfgang Nejdl and Marco Fisichella",
note = "Publisher Copyright: {\textcopyright} 2025 The Authors",
year = "2025",
month = nov,
doi = "10.1016/j.ipm.2025.104223",
language = "English",
volume = "62",
journal = "Information Processing and Management",
issn = "0306-4573",
publisher = "Elsevier Ltd.",
number = "6",

}

Download (RIS)

TY - JOUR

T1 - Interpretable zero-shot stance detection with proactive content intervention

AU - Upadhyaya, Apoorva

AU - Nejdl, Wolfgang

AU - Fisichella, Marco

N1 - Publisher Copyright: © 2025 The Authors

PY - 2025/11

Y1 - 2025/11

N2 - Zero-Shot Stance Detection (ZSSD) identifies an author's stance towards unseen targets. Existing works have mainly focused on contrastive, meta, adversarial learning, or data augmentation but face issues like data scarcity, generalizability, and lack of coherence between text and targets. Moreover, stance detection must be interpretable to ensure transparency. Recent works with large language models (LLMs) aim to enhance unseen target knowledge or generate explanations but often rely excessively on explicit reasoning or provide coarse explanations, overlooking implicit cues and complicating interpretation. To address these challenges, we propose a novel interpretable multi-stage ZSSD framework. Stage 1 decodes explanations (rationales) justifying the stance while Stage 2 provides the final stance label, thus providing inherent interpretability in predicting stances. Extensive experiments prove that our approach outperforms other baselines with an average improvement in F1 scores of 27.99% with LLMs and 23.60% without LLMs for SemEval and 14.62% with LLMs and 25.24% without LLMs for VAST datasets for the ZSSD task, benefiting from the proposed pipeline architecture and interpretable design. Furthermore, to mitigate the harmful effects of offensive content and promote a more respectful online environment, we integrate an intervention module that leverages the contextual insights derived from our ZSSD framework with the ethics-based text generation power of LLMs to develop interventions. Automatic and human evaluation of LLM-generated interventions based on various proposed criteria provide insights into how LLMs perceive similar information from different perspectives, which can help foster morally sound and respectful online discourse.

AB - Zero-Shot Stance Detection (ZSSD) identifies an author's stance towards unseen targets. Existing works have mainly focused on contrastive, meta, adversarial learning, or data augmentation but face issues like data scarcity, generalizability, and lack of coherence between text and targets. Moreover, stance detection must be interpretable to ensure transparency. Recent works with large language models (LLMs) aim to enhance unseen target knowledge or generate explanations but often rely excessively on explicit reasoning or provide coarse explanations, overlooking implicit cues and complicating interpretation. To address these challenges, we propose a novel interpretable multi-stage ZSSD framework. Stage 1 decodes explanations (rationales) justifying the stance while Stage 2 provides the final stance label, thus providing inherent interpretability in predicting stances. Extensive experiments prove that our approach outperforms other baselines with an average improvement in F1 scores of 27.99% with LLMs and 23.60% without LLMs for SemEval and 14.62% with LLMs and 25.24% without LLMs for VAST datasets for the ZSSD task, benefiting from the proposed pipeline architecture and interpretable design. Furthermore, to mitigate the harmful effects of offensive content and promote a more respectful online environment, we integrate an intervention module that leverages the contextual insights derived from our ZSSD framework with the ethics-based text generation power of LLMs to develop interventions. Automatic and human evaluation of LLM-generated interventions based on various proposed criteria provide insights into how LLMs perceive similar information from different perspectives, which can help foster morally sound and respectful online discourse.

KW - Interpretability

KW - Intervention

KW - Large language models

KW - Rationale decoding

KW - Zero-shot stance detection

UR - http://www.scopus.com/inward/record.url?scp=105008173989&partnerID=8YFLogxK

U2 - 10.1016/j.ipm.2025.104223

DO - 10.1016/j.ipm.2025.104223

M3 - Article

AN - SCOPUS:105008173989

VL - 62

JO - Information Processing and Management

JF - Information Processing and Management

SN - 0306-4573

IS - 6

M1 - 104223

ER -
