Details
Original language | English |
---|---|
Title of host publication | Findings of the Association for Computational Linguistics: EMNLP 2024 |
Editors | Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen |
Place of publication | Miami, Florida, USA |
Pages | 4604-4622 |
Number of pages | 19 |
Publication status | Published - 1 Nov 2024 |
Abstract

Different political ideologies (e.g., liberal and conservative Americans) hold different worldviews, which leads to opposing stances on different issues (e.g., gun control) and, thereby, fostering societal polarization. Arguments are a means of bringing the perspectives of people with different ideologies closer together, depending on how well they reach their audience. In this paper, we study how to computationally turn ineffective arguments into effective arguments for people with certain ideologies by using instruction-tuned large language models (LLMs), looking closely at style features. For development and evaluation, we collect ineffective arguments per ideology from debate.org, and we generate about 30k, which we rewrite using three LLM methods tailored to our task: zero-shot prompting, few-shot prompting, and LLM steering. Our experiments provide evidence that LLMs naturally improve argument effectiveness for liberals. Our LLM-based and human evaluation show a clear preference towards the rewritten arguments. Code and link to the data are available here: https://github.com/roxanneelbaff/emnlp2024-iesta.
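For orientation only, a minimal sketch of what zero-shot rewriting of an argument could look like. The prompt wording, the model name, and the `rewrite_argument` helper are illustrative assumptions, not the paper's implementation; the authors' actual code is in the linked repository (https://github.com/roxanneelbaff/emnlp2024-iesta).

```python
# Minimal sketch (not the paper's method): zero-shot prompting an
# instruction-tuned LLM to rewrite an ineffective argument for a target
# ideology. Assumes an OpenAI-compatible endpoint; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rewrite_argument(argument: str, ideology: str) -> str:
    """Ask the model to rewrite `argument` so it is more effective for
    readers of the given `ideology`, keeping the original stance."""
    prompt = (
        f"Rewrite the following argument so that it is more effective for a "
        f"{ideology} audience. Keep the stance and main claims unchanged; "
        f"only adapt the style and framing.\n\nArgument:\n{argument}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper uses other instruction-tuned LLMs
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example usage (hypothetical input):
# print(rewrite_argument("Gun control does not work because...", "conservative"))
```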
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Findings of the Association for Computational Linguistics: EMNLP 2024. Ed. / Yaser Al-Onaizan; Mohit Bansal; Yun-Nung Chen. Miami, Florida, USA, 2024. pp. 4604-4622.
Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed
TY - GEN
T1 - Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models
AU - El Baff, Roxanne
AU - Khatib, Khalid Al
AU - Alshomary, Milad
AU - Konen, Kai
AU - Stein, Benno
AU - Wachsmuth, Henning
PY - 2024/11/1
Y1 - 2024/11/1
N2 - Different political ideologies (e.g., liberal and conservative Americans) hold different worldviews, which leads to opposing stances on different issues (e.g., gun control) and, thereby, fostering societal polarization. Arguments are a means of bringing the perspectives of people with different ideologies closer together, depending on how well they reach their audience. In this paper, we study how to computationally turn ineffective arguments into effective arguments for people with certain ideologies by using instruction-tuned large language models (LLMs), looking closely at style features. For development and evaluation, we collect ineffective arguments per ideology from debate.org, and we generate about 30k, which we rewrite using three LLM methods tailored to our task: zero-shot prompting, few-shot prompting, and LLM steering. Our experiments provide evidence that LLMs naturally improve argument effectiveness for liberals. Our LLM-based and human evaluation show a clear preference towards the rewritten arguments. Code and link to the data are available here: https://github.com/roxanneelbaff/emnlp2024-iesta.
AB - Different political ideologies (e.g., liberal and conservative Americans) hold different worldviews, which leads to opposing stances on different issues (e.g., gun control) and, thereby, fostering societal polarization. Arguments are a means of bringing the perspectives of people with different ideologies closer together, depending on how well they reach their audience. In this paper, we study how to computationally turn ineffective arguments into effective arguments for people with certain ideologies by using instruction-tuned large language models (LLMs), looking closely at style features. For development and evaluation, we collect ineffective arguments per ideology from debate.org, and we generate about 30k, which we rewrite using three LLM methods tailored to our task: zero-shot prompting, few-shot prompting, and LLM steering. Our experiments provide evidence that LLMs naturally improve argument effectiveness for liberals. Our LLM-based and human evaluation show a clear preference towards the rewritten arguments. Code and link to the data are available here: https://github.com/roxanneelbaff/emnlp2024-iesta.
U2 - 10.18653/v1/2024.findings-emnlp.265
DO - 10.18653/v1/2024.findings-emnlp.265
M3 - Conference contribution
SP - 4604
EP - 4622
BT - Findings of the Association for Computational Linguistics: EMNLP 2024
A2 - Al-Onaizan, Yaser
A2 - Bansal, Mohit
A2 - Chen, Yun-Nung
CY - Miami, Florida, USA
ER -