Details
| Original language | English |
| --- | --- |
| Article number | 104223 |
| Journal | Information Processing and Management |
| Volume | 62 |
| Issue number | 6 |
| Early online date | 16 Jun 2025 |
| Publication status | E-pub ahead of print - 16 Jun 2025 |
Abstract
Zero-Shot Stance Detection (ZSSD) identifies an author's stance towards unseen targets. Existing works have mainly focused on contrastive, meta-, or adversarial learning, or on data augmentation, but face issues such as data scarcity, limited generalizability, and a lack of coherence between text and targets. Moreover, stance detection must be interpretable to ensure transparency. Recent works with large language models (LLMs) aim to enhance knowledge of unseen targets or to generate explanations, but often rely excessively on explicit reasoning or provide only coarse explanations, overlooking implicit cues and complicating interpretation. To address these challenges, we propose a novel interpretable multi-stage ZSSD framework. Stage 1 decodes explanations (rationales) justifying the stance, while Stage 2 provides the final stance label, making the stance predictions inherently interpretable. Extensive experiments show that our approach outperforms the baselines on the ZSSD task, with average F1-score improvements of 27.99% with LLMs and 23.60% without LLMs on SemEval, and 14.62% with LLMs and 25.24% without LLMs on VAST, benefiting from the proposed pipeline architecture and interpretable design. Furthermore, to mitigate the harmful effects of offensive content and promote a more respectful online environment, we integrate an intervention module that combines the contextual insights derived from our ZSSD framework with the ethics-based text-generation capabilities of LLMs to develop interventions. Automatic and human evaluation of the LLM-generated interventions, based on several proposed criteria, provides insights into how LLMs perceive similar information from different perspectives, which can help foster morally sound and respectful online discourse.
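The abstract describes a two-stage pipeline: Stage 1 decodes a free-text rationale, and Stage 2 conditions on that rationale to emit the stance label; an intervention module then reuses both outputs. The paper's actual models, prompts, and label sets are not reproduced here; the following is a minimal Python sketch of that pipeline shape, in which `generate`, `predict_stance`, `generate_intervention`, and the FAVOR/AGAINST/NONE label set are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

STANCES = ("FAVOR", "AGAINST", "NONE")  # illustrative label set


def generate(prompt: str) -> str:
    """Hypothetical stand-in for a text-generation backend (any LLM API).

    This stub is not part of the paper; bind it to a real model to run.
    """
    raise NotImplementedError


@dataclass
class StancePrediction:
    target: str
    rationale: str
    stance: str


def predict_stance(text: str, target: str) -> StancePrediction:
    # Stage 1: decode a rationale justifying a stance towards the
    # (possibly unseen) target, before any label is committed to.
    rationale = generate(
        f"Text: {text}\nTarget: {target}\n"
        "Explain the author's stance towards the target:"
    )
    # Stage 2: predict the final label conditioned on the text, the target,
    # and the Stage-1 rationale, so every label ships with its explanation.
    raw = generate(
        f"Text: {text}\nTarget: {target}\nRationale: {rationale}\n"
        f"Stance ({'/'.join(STANCES)}):"
    ).strip().upper()
    stance = raw if raw in STANCES else "NONE"  # fall back on unparseable output
    return StancePrediction(target, rationale, stance)


def generate_intervention(text: str, pred: StancePrediction) -> str:
    # Intervention module: reuse the pipeline's contextual insights to ask
    # an LLM for a respectful, ethics-grounded counter-message.
    return generate(
        f"A post takes a {pred.stance} stance towards '{pred.target}' "
        f"because: {pred.rationale}\nPost: {text}\n"
        "Write a brief, respectful reply that discourages offensive content "
        "and encourages constructive discussion:"
    )
```

Keeping the rationale as an explicit intermediate output is what makes the label inherently interpretable and is also what lets the intervention step reuse the pipeline's contextual insights.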
Keywords
- Interpretability
- Intervention
- Large language models
- Rationale decoding
- Zero-shot stance detection
ASJC Scopus subject areas
- Computer Science (all)
- Information Systems
- Engineering (all)
- Media Technology
- Computer Science Applications
- Decision Sciences (all)
- Management Science and Operations Research
- Social Sciences (all)
- Library and Information Sciences
Cite this
Interpretable zero-shot stance detection with proactive content intervention. / Upadhyaya, Apoorva; Nejdl, Wolfgang; Fisichella, Marco. In: Information Processing and Management, Vol. 62, No. 6, 104223, 11.2025.
Research output: Contribution to journal › Article › Research › peer-review
TY - JOUR
T1 - Interpretable zero-shot stance detection with proactive content intervention
AU - Upadhyaya, Apoorva
AU - Nejdl, Wolfgang
AU - Fisichella, Marco
N1 - Publisher Copyright: © 2025 The Authors
PY - 2025/6/16
Y1 - 2025/6/16
N2 - Zero-Shot Stance Detection (ZSSD) identifies an author's stance towards unseen targets. Existing works have mainly focused on contrastive, meta-, or adversarial learning, or on data augmentation, but face issues such as data scarcity, limited generalizability, and a lack of coherence between text and targets. Moreover, stance detection must be interpretable to ensure transparency. Recent works with large language models (LLMs) aim to enhance knowledge of unseen targets or to generate explanations, but often rely excessively on explicit reasoning or provide only coarse explanations, overlooking implicit cues and complicating interpretation. To address these challenges, we propose a novel interpretable multi-stage ZSSD framework. Stage 1 decodes explanations (rationales) justifying the stance, while Stage 2 provides the final stance label, making the stance predictions inherently interpretable. Extensive experiments show that our approach outperforms the baselines on the ZSSD task, with average F1-score improvements of 27.99% with LLMs and 23.60% without LLMs on SemEval, and 14.62% with LLMs and 25.24% without LLMs on VAST, benefiting from the proposed pipeline architecture and interpretable design. Furthermore, to mitigate the harmful effects of offensive content and promote a more respectful online environment, we integrate an intervention module that combines the contextual insights derived from our ZSSD framework with the ethics-based text-generation capabilities of LLMs to develop interventions. Automatic and human evaluation of the LLM-generated interventions, based on several proposed criteria, provides insights into how LLMs perceive similar information from different perspectives, which can help foster morally sound and respectful online discourse.
AB - Zero-Shot Stance Detection (ZSSD) identifies an author's stance towards unseen targets. Existing works have mainly focused on contrastive, meta-, or adversarial learning, or on data augmentation, but face issues such as data scarcity, limited generalizability, and a lack of coherence between text and targets. Moreover, stance detection must be interpretable to ensure transparency. Recent works with large language models (LLMs) aim to enhance knowledge of unseen targets or to generate explanations, but often rely excessively on explicit reasoning or provide only coarse explanations, overlooking implicit cues and complicating interpretation. To address these challenges, we propose a novel interpretable multi-stage ZSSD framework. Stage 1 decodes explanations (rationales) justifying the stance, while Stage 2 provides the final stance label, making the stance predictions inherently interpretable. Extensive experiments show that our approach outperforms the baselines on the ZSSD task, with average F1-score improvements of 27.99% with LLMs and 23.60% without LLMs on SemEval, and 14.62% with LLMs and 25.24% without LLMs on VAST, benefiting from the proposed pipeline architecture and interpretable design. Furthermore, to mitigate the harmful effects of offensive content and promote a more respectful online environment, we integrate an intervention module that combines the contextual insights derived from our ZSSD framework with the ethics-based text-generation capabilities of LLMs to develop interventions. Automatic and human evaluation of the LLM-generated interventions, based on several proposed criteria, provides insights into how LLMs perceive similar information from different perspectives, which can help foster morally sound and respectful online discourse.
KW - Interpretability
KW - Intervention
KW - Large language models
KW - Rationale decoding
KW - Zero-shot stance detection
UR - http://www.scopus.com/inward/record.url?scp=105008173989&partnerID=8YFLogxK
U2 - 10.1016/j.ipm.2025.104223
DO - 10.1016/j.ipm.2025.104223
M3 - Article
AN - SCOPUS:105008173989
VL - 62
JO - Information Processing and Management
JF - Information Processing and Management
SN - 0306-4573
IS - 6
M1 - 104223
ER -