Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authorship

External organisations

  • DLR-Institut für Raumfahrtsysteme
  • Bauhaus-Universität Weimar
  • University of Groningen

Details

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2024
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Place of publication: Miami, Florida, USA
Pages: 4604-4622
Number of pages: 19
Publication status: Published - 1 Nov 2024

Abstract

Different political ideologies (e.g., liberal and conservative Americans) hold different worldviews, which lead to opposing stances on different issues (e.g., gun control) and thereby foster societal polarization. Arguments are a means of bringing the perspectives of people with different ideologies closer together, depending on how well they reach their audience. In this paper, we study how to computationally turn ineffective arguments into effective arguments for people with certain ideologies by using instruction-tuned large language models (LLMs), looking closely at style features. For development and evaluation, we collect ineffective arguments per ideology from debate.org, and we generate about 30k rewritten arguments using three LLM methods tailored to our task: zero-shot prompting, few-shot prompting, and LLM steering. Our experiments provide evidence that LLMs naturally improve argument effectiveness for liberals. Our LLM-based and human evaluations show a clear preference for the rewritten arguments. Code and a link to the data are available here: https://github.com/roxanneelbaff/emnlp2024-iesta.
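
The abstract names three rewriting methods: zero-shot prompting, few-shot prompting, and LLM steering. As a rough illustration of the first two, the following minimal Python sketch shows how such prompts could be assembled; the prompt wording, the demonstration pair, and the call_llm stub are hypothetical assumptions rather than the authors' implementation (see the linked repository for that), and activation-based LLM steering is omitted.

# Minimal, hypothetical sketch of the zero-/few-shot rewriting setups named in
# the abstract. Prompt wording, the demonstration pair, and the call_llm stub
# are illustrative assumptions, not the authors' code or prompts; LLM steering
# is not covered. See https://github.com/roxanneelbaff/emnlp2024-iesta for the
# actual implementation.
from typing import Callable, List, Optional, Tuple


def zero_shot_prompt(argument: str, ideology: str) -> str:
    """Instruction-only prompt: rewrite an ineffective argument for a target audience."""
    return (
        f"Rewrite the following argument so that it becomes more effective for a "
        f"{ideology} reader, without changing its stance or factual content.\n\n"
        f"Argument: {argument}\n\nRewritten argument:"
    )


def few_shot_prompt(argument: str, ideology: str,
                    examples: List[Tuple[str, str]]) -> str:
    """Same instruction, preceded by (ineffective, effective) demonstration pairs."""
    demos = "\n\n".join(
        f"Ineffective: {src}\nEffective for a {ideology} reader: {tgt}"
        for src, tgt in examples
    )
    return demos + "\n\n" + zero_shot_prompt(argument, ideology)


def rewrite(argument: str, ideology: str,
            call_llm: Callable[[str], str],
            examples: Optional[List[Tuple[str, str]]] = None) -> str:
    """Dispatch to zero- or few-shot prompting; call_llm wraps any instruction-tuned LLM."""
    prompt = (few_shot_prompt(argument, ideology, examples)
              if examples else zero_shot_prompt(argument, ideology))
    return call_llm(prompt)


if __name__ == "__main__":
    # Echoing stub stands in for a real model call so the sketch runs as-is.
    print(rewrite("Guns are bad, so ban them all.", "conservative", lambda p: p))

In practice, call_llm would wrap an instruction-tuned chat model, and the few-shot demonstrations would be drawn from the ideology-specific effective and ineffective arguments collected from debate.org, as described in the abstract.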

Cite

Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models. / El Baff, Roxanne; Khatib, Khalid Al; Alshomary, Milad et al.
Findings of the Association for Computational Linguistics: EMNLP 2024. ed. / Yaser Al-Onaizan; Mohit Bansal; Yun-Nung Chen. Miami, Florida, USA, 2024. pp. 4604-4622.


El Baff, R, Khatib, KA, Alshomary, M, Konen, K, Stein, B & Wachsmuth, H 2024, Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models. in Y Al-Onaizan, M Bansal & Y-N Chen (eds), Findings of the Association for Computational Linguistics: EMNLP 2024. Miami, Florida, USA, pp. 4604-4622. https://doi.org/10.18653/v1/2024.findings-emnlp.265
El Baff, R., Khatib, K. A., Alshomary, M., Konen, K., Stein, B., & Wachsmuth, H. (2024). Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models. In Y. Al-Onaizan, M. Bansal, & Y.-N. Chen (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 4604-4622). https://doi.org/10.18653/v1/2024.findings-emnlp.265
El Baff R, Khatib KA, Alshomary M, Konen K, Stein B, Wachsmuth H. Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models. In: Al-Onaizan Y, Bansal M, Chen YN, editors. Findings of the Association for Computational Linguistics: EMNLP 2024. Miami, Florida, USA. 2024. p. 4604-4622. doi: 10.18653/v1/2024.findings-emnlp.265
El Baff, Roxanne ; Khatib, Khalid Al ; Alshomary, Milad et al. / Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models. Findings of the Association for Computational Linguistics: EMNLP 2024. ed. / Yaser Al-Onaizan ; Mohit Bansal ; Yun-Nung Chen. Miami, Florida, USA, 2024. pp. 4604-4622
@inproceedings{7d9512f806ce44c5b36bd922057b7472,
title = "Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models",
abstract = "Different political ideologies (e.g., liberal and conservative Americans) hold different worldviews, which lead to opposing stances on different issues (e.g., gun control) and thereby foster societal polarization. Arguments are a means of bringing the perspectives of people with different ideologies closer together, depending on how well they reach their audience. In this paper, we study how to computationally turn ineffective arguments into effective arguments for people with certain ideologies by using instruction-tuned large language models (LLMs), looking closely at style features. For development and evaluation, we collect ineffective arguments per ideology from debate.org, and we generate about 30k rewritten arguments using three LLM methods tailored to our task: zero-shot prompting, few-shot prompting, and LLM steering. Our experiments provide evidence that LLMs naturally improve argument effectiveness for liberals. Our LLM-based and human evaluations show a clear preference for the rewritten arguments. Code and a link to the data are available here: https://github.com/roxanneelbaff/emnlp2024-iesta.",
author = "{El Baff}, Roxanne and Khatib, {Khalid Al} and Milad Alshomary and Kai Konen and Benno Stein and Henning Wachsmuth",
year = "2024",
month = nov,
day = "1",
doi = "10.18653/v1/2024.findings-emnlp.265",
language = "English",
pages = "4604--4622",
editor = "Yaser Al-Onaizan and Mohit Bansal and Yun-Nung Chen",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",

}


TY - GEN

T1 - Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models

AU - El Baff, Roxanne

AU - Khatib, Khalid Al

AU - Alshomary, Milad

AU - Konen, Kai

AU - Stein, Benno

AU - Wachsmuth, Henning

PY - 2024/11/1

Y1 - 2024/11/1

N2 - Different political ideologies (e.g., liberal and conservative Americans) hold different worldviews, which lead to opposing stances on different issues (e.g., gun control) and thereby foster societal polarization. Arguments are a means of bringing the perspectives of people with different ideologies closer together, depending on how well they reach their audience. In this paper, we study how to computationally turn ineffective arguments into effective arguments for people with certain ideologies by using instruction-tuned large language models (LLMs), looking closely at style features. For development and evaluation, we collect ineffective arguments per ideology from debate.org, and we generate about 30k rewritten arguments using three LLM methods tailored to our task: zero-shot prompting, few-shot prompting, and LLM steering. Our experiments provide evidence that LLMs naturally improve argument effectiveness for liberals. Our LLM-based and human evaluations show a clear preference for the rewritten arguments. Code and a link to the data are available here: https://github.com/roxanneelbaff/emnlp2024-iesta.

AB - Different political ideologies (e.g., liberal and conservative Americans) hold different worldviews, which lead to opposing stances on different issues (e.g., gun control) and thereby foster societal polarization. Arguments are a means of bringing the perspectives of people with different ideologies closer together, depending on how well they reach their audience. In this paper, we study how to computationally turn ineffective arguments into effective arguments for people with certain ideologies by using instruction-tuned large language models (LLMs), looking closely at style features. For development and evaluation, we collect ineffective arguments per ideology from debate.org, and we generate about 30k rewritten arguments using three LLM methods tailored to our task: zero-shot prompting, few-shot prompting, and LLM steering. Our experiments provide evidence that LLMs naturally improve argument effectiveness for liberals. Our LLM-based and human evaluations show a clear preference for the rewritten arguments. Code and a link to the data are available here: https://github.com/roxanneelbaff/emnlp2024-iesta.

U2 - 10.18653/v1/2024.findings-emnlp.265

DO - 10.18653/v1/2024.findings-emnlp.265

M3 - Conference contribution

SP - 4604

EP - 4622

BT - Findings of the Association for Computational Linguistics: EMNLP 2024

A2 - Al-Onaizan, Yaser

A2 - Bansal, Mohit

A2 - Chen, Yun-Nung

CY - Miami, Florida, USA

ER -
