
How to Elicit Explainability Requirements? A Comparison of Interviews, Focus Groups, and Surveys


Details

Date made available: 17 Jun 2025
Publisher: Zenodo
Contact person: Martin Obaidi

Description

Authors: Martin Obaidi, Jakob Droste, Hannah Deters, Marc Herrmann, Raymond Ochsner, Jil Klünder, Kurt Schneider
This dataset accompanies the publication: How to Elicit Explainability Requirements? A Comparison of Interviews, Focus Groups, and Surveys (2025 IEEE 33rd International Requirements Engineering Conference, RE 2025). The dataset provides all materials, question sets, coding guidelines, and coded results from a study investigating how to elicit explainability requirements in real-world software development. Three data collection methods were compared: interviews, focus groups, and surveys. All data were collected in an organizational context.

This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Grant No. 470146331, project softXplain (2022–2025).
Contents
The dataset includes the following files:

All_explanation_needs_coded.xlsx
Contains all coded explanation needs identified in interviews, focus groups, and surveys. Each need is categorized using an established taxonomy (see below). The file includes information on method, session, participant, taxonomy version (direct, delayed, without), and whether the need is unique/distinct.

Taxonomy coding guidelines.pdf
The full coding guide, describing all categories and subcategories used for coding explanation needs, adapted from Droste et al. and extended by Obaidi et al. Each category includes a definition and a practical example.
- Droste et al.: Droste, J., Deters, H., Obaidi, M. et al. Framing what can be explained - an operational taxonomy for explainability needs. Requirements Eng (2025). https://doi.org/10.1007/s00766-025-00440-x
- Obaidi et al.: M. Obaidi, N. Voß, J. Droste, H. Deters, M. Herrmann, J. Fischbach, and K. Schneider, "Automating explanation need management in app reviews: A case study from the navigation app industry," in ICSE-SEIP '25, 2025. https://arxiv.org/abs/2501.08087

taxonomy.pdf
A graphical overview of the taxonomy as provided to study participants.

questions_fokus_group.xlsx
All questions used in the focus groups, as well as the time measured for each question and session. The file distinguishes between focus groups with direct, delayed, or no taxonomy introduction.

questions_interviews.xlsx
All interview questions and measured times per question/session. Includes session details for each taxonomy version (direct, delayed, without).

questions_survey.xlsx
The survey instrument, with all questions and measured times, organized by taxonomy version (without and delayed).
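As a sketch of how the coded needs might be analyzed, the snippet below counts distinct explanation needs per elicitation method. The row structure and field names (method, taxonomy_version, distinct) are assumptions modeled on the column description above, not the actual spreadsheet headers, and the sample rows are invented for illustration; the real file would be loaded from All_explanation_needs_coded.xlsx with a spreadsheet library instead.

```python
from collections import Counter

# Hypothetical rows mimicking the described structure of
# All_explanation_needs_coded.xlsx (method, taxonomy version,
# distinct flag). Field names and values are invented for illustration.
needs = [
    {"method": "interview", "taxonomy_version": "direct", "distinct": True},
    {"method": "interview", "taxonomy_version": "delayed", "distinct": False},
    {"method": "focus_group", "taxonomy_version": "without", "distinct": True},
    {"method": "survey", "taxonomy_version": "delayed", "distinct": True},
]

# Count distinct explanation needs per elicitation method,
# skipping rows flagged as duplicates of an earlier need.
distinct_per_method = Counter(
    row["method"] for row in needs if row["distinct"]
)
print(distinct_per_method)
```

The same grouping could be repeated over taxonomy_version to compare the direct, delayed, and without conditions.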
Anonymization and Privacy
To comply with privacy and company requirements, all data have been fully anonymized:
- Company and software names have been removed or replaced with placeholders.
- Demographic and potentially identifying information has been deleted.
- Only non-sensitive, anonymized qualitative data is included.
Usage and Citation
This dataset can be used for:
- Secondary analysis of explanation needs in software engineering
- Methodological comparison of requirements elicitation techniques
- Development or validation of explainability taxonomies
- Training and education in qualitative coding and requirements engineering

If you use this dataset, please cite the following publication:
Obaidi, M., Droste, J., Deters, H., Herrmann, M., Ochsner, R., Klünder, J., Schneider, K. (2025). How to Elicit Explainability Requirements? A Comparison of Interviews, Focus Groups, and Surveys. 2025 IEEE 33rd International Requirements Engineering Conference (RE).
License
This dataset is provided under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Contact
For questions regarding the dataset, please contact the corresponding author as listed in the publication.