Details
Original language | English
---|---
Title of host publication | Robust Argumentation Machines - First International Conference, RATIO 2024, Proceedings
Subtitle of host publication | First International Conference, RATIO 2024, Bielefeld, Germany, June 5–7, 2024, Proceedings
Editors | Philipp Cimiano, Anette Frank, Michael Kohlhase, Benno Stein
Place of Publication | Cham
Pages | 335-351
Number of pages | 17
Edition | 1.
ISBN (electronic) | 978-3-031-63536-6
Publication status | Published - 17 Jul 2024
Event | 1st International Conference on Recent Advances in Robust Argumentation Machines (RATIO-24), Bielefeld, Germany, 5 Jun 2024 → 7 Jun 2024
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
---|---
Volume | 14638 LNAI
ISSN (Print) | 0302-9743
ISSN (electronic) | 1611-3349
Abstract
Decision-making and opinion formation are influenced by arguments from various online sources, including social media, web publishers, and, not least, the search engines used to retrieve them. However, many, if not most, arguments on the web are informal, especially in online discussions or on personal pages. They can be long and unstructured, subjective and emotional, and contain inappropriate language. This makes it difficult to find relevant arguments efficiently. We hypothesize that, on search engine results pages, “objective snippets” of arguments are better suited than the commonly used extractive snippets and develop corresponding methods for two important tasks: snippet generation and neutralization. For each of these tasks, we investigate two approaches based on (1) prompt engineering for large language models (LLMs), and (2) supervised models trained on existing datasets. We find that a supervised summarization model outperforms zero-shot summarization with LLMs for snippet generation. For neutralization, using reinforcement learning to align an LLM with human preferences for suitable arguments leads to the best results. Both tasks are complementary, and their combination leads to the best snippets of arguments according to automatic and human evaluation.
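The abstract describes a two-step pipeline: snippet generation followed by neutralization. The sketch below is a rough illustration of that idea, not the authors' implementation: it condenses an argument with a generic off-the-shelf summarization model from Hugging Face transformers and then rewrites the result via a prompt to an instruction-tuned LLM. The model choice, prompt wording, and the `call_llm` hook are assumptions introduced here for illustration.

```python
# Minimal sketch of the "objective snippet" idea from the abstract:
# (1) summarize a long, informal argument into a short snippet,
# (2) neutralize the snippet's subjective or emotional wording.
# Illustrative only; models and prompts are placeholders, not the paper's setup.
from transformers import pipeline

# Step 1: snippet generation with a generic abstractive summarizer
# (assumption: the pipeline's default checkpoint; the paper uses its own supervised model).
summarizer = pipeline("summarization")

def generate_snippet(argument: str) -> str:
    """Condense an argument into a short snippet."""
    out = summarizer(argument, max_length=60, min_length=15, do_sample=False)
    return out[0]["summary_text"]

# Step 2: neutralization by prompting an LLM. `call_llm` is a hypothetical hook;
# plug in whichever LLM client is available.
NEUTRALIZE_PROMPT = (
    "Rewrite the following argument snippet in objective, neutral language. "
    "Remove subjective, emotional, or inappropriate wording while keeping the "
    "argument's content intact:\n\n{snippet}"
)

def call_llm(prompt: str) -> str:
    # Placeholder: replace with a real LLM API call.
    raise NotImplementedError

def objective_snippet(argument: str) -> str:
    """Generate an objective snippet: summarize, then neutralize."""
    return call_llm(NEUTRALIZE_PROMPT.format(snippet=generate_snippet(argument)))
```

Note that for the neutralization step the abstract reports the best results from reinforcement-learning alignment with human preferences rather than plain prompting; the prompt above only stands in for that step.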
Keywords
- Computational Argumentation
- Information Retrieval
- Large Language Models
- Text Neutralization
- Text Summarization
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite this
Ziegenbein, Timon ; Syed, Shahbaz ; Potthast, Martin ; Wachsmuth, Henning. / Objective Argument Summarization in Search. Robust Argumentation Machines - First International Conference, RATIO 2024, Proceedings: First International Conference, RATIO 2024, Bielefeld, Germany, June 5–7, 2024, Proceedings. ed. / Philipp Cimiano; Anette Frank; Michael Kohlhase; Benno Stein. 1. ed. Cham, 2024. p. 335-351 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14638 LNAI).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Objective Argument Summarization in Search
AU - Ziegenbein, Timon
AU - Syed, Shahbaz
AU - Potthast, Martin
AU - Wachsmuth, Henning
N1 - © 2024 The Author(s)
PY - 2024/7/17
Y1 - 2024/7/17
N2 - Decision-making and opinion formation are influenced by arguments from various online sources, including social media, web publishers, and, not least, the search engines used to retrieve them. However, many, if not most, arguments on the web are informal, especially in online discussions or on personal pages. They can be long and unstructured, subjective and emotional, and contain inappropriate language. This makes it difficult to find relevant arguments efficiently. We hypothesize that, on search engine results pages, “objective snippets” of arguments are better suited than the commonly used extractive snippets and develop corresponding methods for two important tasks: snippet generation and neutralization. For each of these tasks, we investigate two approaches based on (1) prompt engineering for large language models (LLMs), and (2) supervised models trained on existing datasets. We find that a supervised summarization model outperforms zero-shot summarization with LLMs for snippet generation. For neutralization, using reinforcement learning to align an LLM with human preferences for suitable arguments leads to the best results. Both tasks are complementary, and their combination leads to the best snippets of arguments according to automatic and human evaluation.
AB - Decision-making and opinion formation are influenced by arguments from various online sources, including social media, web publishers, and, not least, the search engines used to retrieve them. However, many, if not most, arguments on the web are informal, especially in online discussions or on personal pages. They can be long and unstructured, subjective and emotional, and contain inappropriate language. This makes it difficult to find relevant arguments efficiently. We hypothesize that, on search engine results pages, “objective snippets” of arguments are better suited than the commonly used extractive snippets and develop corresponding methods for two important tasks: snippet generation and neutralization. For each of these tasks, we investigate two approaches based on (1) prompt engineering for large language models (LLMs), and (2) supervised models trained on existing datasets. We find that a supervised summarization model outperforms zero-shot summarization with LLMs for snippet generation. For neutralization, using reinforcement learning to align an LLM with human preferences for suitable arguments leads to the best results. Both tasks are complementary, and their combination leads to the best snippets of arguments according to automatic and human evaluation.
KW - Computational Argumentation
KW - Information Retrieval
KW - Large Language Models
KW - Text Neutralization
KW - Text Summarization
UR - http://www.scopus.com/inward/record.url?scp=85200681607&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-63536-6_20
DO - 10.1007/978-3-031-63536-6_20
M3 - Conference contribution
SN - 978-3-031-63535-9
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 335
EP - 351
BT - Robust Argumentation Machines - First International Conference, RATIO 2024, Proceedings
A2 - Cimiano, Philipp
A2 - Frank, Anette
A2 - Kohlhase, Michael
A2 - Stein, Benno
CY - Cham
T2 - 1st International Conference on Recent Advances in Robust Argumentation Machines (RATIO-24)
Y2 - 5 June 2024 through 7 June 2024
ER -