Details
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue |
| Editors | Frédéric Béchet, Fabrice Lefèvre, Nicholas Asher, Seokhwan Kim, Teva Merlin |
| Place of Publication | Avignon, France |
| Pages | 1-20 |
| Number of pages | 20 |
| Publication status | Published - 1 Aug 2025 |
Abstract

The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research focused on co-constructive explanation dialogues, where an explainer continuously monitors the explainee's understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which involves the LLM being instructed to explain a topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.
Cite this
Fichtel, L., Spliethöver, M., Hüllermeier, E., Jimenez, P., Klowait, N., Kopp, S., Ngonga Ngomo, A.-C., Robrecht, A., Scharlau, I., Terfloth, L., Vollmer, A.-L., & Wachsmuth, H. (2025). Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues. In Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue. ed. / Frédéric Béchet; Fabrice Lefèvre; Nicholas Asher; Seokhwan Kim; Teva Merlin. Avignon, France, 2025. p. 1-20.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
AU - Fichtel, Leandra
AU - Spliethöver, Maximilian
AU - Hüllermeier, Eyke
AU - Jimenez, Patricia
AU - Klowait, Nils
AU - Kopp, Stefan
AU - Ngonga Ngomo, Axel-Cyrille
AU - Robrecht, Amelie
AU - Scharlau, Ingrid
AU - Terfloth, Lutz
AU - Vollmer, Anna-Lisa
AU - Wachsmuth, Henning
PY - 2025/8/1
Y1 - 2025/8/1
N2 - The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research focused on co-constructive explanation dialogues, where an explainer continuously monitors the explainee's understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which involves the LLM being instructed to explain a topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.
M3 - Conference contribution
SP - 1
EP - 20
BT - Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue
A2 - Béchet, Frédéric
A2 - Lefèvre, Fabrice
A2 - Asher, Nicholas
A2 - Kim, Seokhwan
A2 - Merlin, Teva
CY - Avignon, France
ER -