Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

Leandra Fichtel, Maximilian Spliethöver, Eyke Hüllermeier, Patricia Jimenez, Nils Klowait, Stefan Kopp, Axel-Cyrille Ngonga Ngomo, Amelie Robrecht, Ingrid Scharlau, Lutz Terfloth, Anna-Lisa Vollmer, Henning Wachsmuth

Details

Original language: English
Title of host publication: Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Editors: Frédéric Béchet, Fabrice Lefèvre, Nicholas Asher, Seokhwan Kim, Teva Merlin
Place of publication: Avignon, France
Pages: 1-20
Number of pages: 20
Publication status: Published - 1 Aug 2025

Abstract

The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research focused on co-constructive explanation dialogues, where an explainer continuously monitors the explainee's understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which involves the LLM being instructed to explain a topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.

Cite this

Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues. / Fichtel, Leandra; Spliethöver, Maximilian; Hüllermeier, Eyke et al.
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue. ed. / Frédéric Béchet; Fabrice Lefèvre; Nicholas Asher; Seokhwan Kim; Teva Merlin. Avignon, France, 2025. p. 1-20.


Fichtel, L, Spliethöver, M, Hüllermeier, E, Jimenez, P, Klowait, N, Kopp, S, Ngonga Ngomo, A-C, Robrecht, A, Scharlau, I, Terfloth, L, Vollmer, A-L & Wachsmuth, H 2025, Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues. in F Béchet, F Lefèvre, N Asher, S Kim & T Merlin (eds), Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Avignon, France, pp. 1-20. <https://aclanthology.org/2025.sigdial-1.1/>
Fichtel, L., Spliethöver, M., Hüllermeier, E., Jimenez, P., Klowait, N., Kopp, S., Ngonga Ngomo, A.-C., Robrecht, A., Scharlau, I., Terfloth, L., Vollmer, A.-L., & Wachsmuth, H. (2025). Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues. In F. Béchet, F. Lefèvre, N. Asher, S. Kim, & T. Merlin (Eds.), Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue (pp. 1-20). https://aclanthology.org/2025.sigdial-1.1/
Fichtel L, Spliethöver M, Hüllermeier E, Jimenez P, Klowait N, Kopp S et al. Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues. In Béchet F, Lefèvre F, Asher N, Kim S, Merlin T, editors, Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Avignon, France. 2025. p. 1-20
Fichtel, Leandra ; Spliethöver, Maximilian ; Hüllermeier, Eyke et al. / Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues. Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue. editor / Frédéric Béchet ; Fabrice Lefèvre ; Nicholas Asher ; Seokhwan Kim ; Teva Merlin. Avignon, France, 2025. pp. 1-20
BibTeX
@inproceedings{00f511aba609488781f1975a02c0dab6,
title = "Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues",
abstract = "The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research focused on co-constructive explanation dialogues, where an explainer continuously monitors the explainee's understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which involves the LLM being instructed to explain a topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.",
author = "Leandra Fichtel and Maximilian Splieth{\"o}ver and Eyke H{\"u}llermeier and Patricia Jimenez and Nils Klowait and Stefan Kopp and {Ngonga Ngomo}, Axel-Cyrille and Amelie Robrecht and Ingrid Scharlau and Lutz Terfloth and Anna-Lisa Vollmer and Henning Wachsmuth",
year = "2025",
month = aug,
day = "1",
language = "English",
pages = "1--20",
editor = "Fr{\'e}d{\'e}ric B{\'e}chet and Fabrice Lef{\`e}vre and Nicholas Asher and Seokhwan Kim and Teva Merlin",
booktitle = "Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
address = "Avignon, France",
url = "https://aclanthology.org/2025.sigdial-1.1/",
}

RIS

TY - GEN

T1 - Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues

AU - Fichtel, Leandra

AU - Spliethöver, Maximilian

AU - Hüllermeier, Eyke

AU - Jimenez, Patricia

AU - Klowait, Nils

AU - Kopp, Stefan

AU - Ngonga Ngomo, Axel-Cyrille

AU - Robrecht, Amelie

AU - Scharlau, Ingrid

AU - Terfloth, Lutz

AU - Vollmer, Anna-Lisa

AU - Wachsmuth, Henning

PY - 2025/8/1

Y1 - 2025/8/1

N2 - The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research focused on co-constructive explanation dialogues, where an explainer continuously monitors the explainee's understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which involves the LLM being instructed to explain a topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.

AB - The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research focused on co-constructive explanation dialogues, where an explainer continuously monitors the explainee's understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which involves the LLM being instructed to explain a topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.

M3 - Conference contribution

SP - 1

EP - 20

BT - Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

A2 - Béchet, Frédéric

A2 - Lefèvre, Fabrice

A2 - Asher, Nicholas

A2 - Kim, Seokhwan

A2 - Merlin, Teva

CY - Avignon, France

UR - https://aclanthology.org/2025.sigdial-1.1/

ER -
