Details
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 49-58 |
| Number of pages | 10 |
| ISBN (electronic) | 9798331538347 |
| ISBN (print) | 979-8-3315-3835-4 |
| Publication status | Published - 1 Sept 2025 |
| Event | 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025 - Valencia, Spain. Duration: 1 Sept 2025 → 5 Sept 2025 |
Publication series
| Name | Proceedings - IEEE International Requirements Engineering Conference Workshops |
|---|---|
| ISSN (Print) | 2770-6826 |
| ISSN (elektronisch) | 2770-6834 |
Abstract
Explainability has become a crucial non-functional requirement to enhance transparency, build user trust, and ensure regulatory compliance. However, translating explanation needs expressed in user feedback into structured requirements and corresponding explanations remains challenging. While existing methods can identify explanation-related concerns in user reviews, there is no established approach for systematically deriving requirements and generating aligned explanations. To contribute toward addressing this gap, we introduce a tool-supported approach that automates this process. To evaluate its effectiveness, we collaborated with an industrial automation manufacturer to create a dataset of 58 user reviews, each annotated with manually crafted explainability requirements and explanations. Our evaluation shows that while AI-generated requirements often lack relevance and correctness compared to human-created ones, the AI-generated explanations are frequently preferred for their clarity and style. Nonetheless, correctness remains an issue, highlighting the importance of human validation. This work contributes to the advancement of explainability requirements in software systems by (1) introducing an automated approach to derive requirements from user reviews and generate corresponding explanations, (2) providing empirical insights into the strengths and limitations of automatically generated artifacts, and (3) releasing a curated dataset to support future research on the automatic generation of explainability requirements.
ASJC Scopus subject areas
- Computer Science (all)
- Artificial Intelligence
- Computer Science (all)
- Software
- Engineering (all)
- Safety, Risk, Reliability and Quality
- Mathematics (all)
- Modeling and Simulation
Cite
Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025. Institute of Electrical and Electronics Engineers Inc., 2025. pp. 49-58 (Proceedings - IEEE International Requirements Engineering Conference Workshops).
Publication: Contribution in book/report/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Automatic Generation of Explainability Requirements and Software Explanations From User Reviews
AU - Obaidi, Martin
AU - Droste, Jakob
AU - Deters, Hannah
AU - Herrmann, Marc
AU - Schneider, Kurt
AU - Klünder, Jil
AU - Villamizar, Hugo
AU - Fischbach, Jannik
AU - Krätzig, Steffen
N1 - Publisher Copyright: © 2025 IEEE.
PY - 2025/9/1
Y1 - 2025/9/1
N2 - Explainability has become a crucial non-functional requirement to enhance transparency, build user trust, and ensure regulatory compliance. However, translating explanation needs expressed in user feedback into structured requirements and corresponding explanations remains challenging. While existing methods can identify explanation-related concerns in user reviews, there is no established approach for systematically deriving requirements and generating aligned explanations. To contribute toward addressing this gap, we introduce a tool-supported approach that automates this process. To evaluate its effectiveness, we collaborated with an industrial automation manufacturer to create a dataset of 58 user reviews, each annotated with manually crafted explainability requirements and explanations. Our evaluation shows that while AI-generated requirements often lack relevance and correctness compared to human-created ones, the AI-generated explanations are frequently preferred for their clarity and style. Nonetheless, correctness remains an issue, highlighting the importance of human validation. This work contributes to the advancement of explainability requirements in software systems by (1) introducing an automated approach to derive requirements from user reviews and generate corresponding explanations, (2) providing empirical insights into the strengths and limitations of automatically generated artifacts, and (3) releasing a curated dataset to support future research on the automatic generation of explainability requirements.
AB - Explainability has become a crucial non-functional requirement to enhance transparency, build user trust, and ensure regulatory compliance. However, translating explanation needs expressed in user feedback into structured requirements and corresponding explanations remains challenging. While existing methods can identify explanation-related concerns in user reviews, there is no established approach for systematically deriving requirements and generating aligned explanations. To contribute toward addressing this gap, we introduce a tool-supported approach that automates this process. To evaluate its effectiveness, we collaborated with an industrial automation manufacturer to create a dataset of 58 user reviews, each annotated with manually crafted explainability requirements and explanations. Our evaluation shows that while AI-generated requirements often lack relevance and correctness compared to human-created ones, the AI-generated explanations are frequently preferred for their clarity and style. Nonetheless, correctness remains an issue, highlighting the importance of human validation. This work contributes to the advancement of explainability requirements in software systems by (1) introducing an automated approach to derive requirements from user reviews and generate corresponding explanations, (2) providing empirical insights into the strengths and limitations of automatically generated artifacts, and (3) releasing a curated dataset to support future research on the automatic generation of explainability requirements.
KW - app reviews
KW - explainability
KW - large language models
KW - requirements engineering
KW - user feedback
UR - http://www.scopus.com/inward/record.url?scp=105020976842&partnerID=8YFLogxK
U2 - 10.1109/REW66121.2025.00011
DO - 10.1109/REW66121.2025.00011
M3 - Conference contribution
AN - SCOPUS:105020976842
SN - 979-8-3315-3835-4
T3 - Proceedings - IEEE International Requirements Engineering Conference Workshops
SP - 49
EP - 58
BT - Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025
Y2 - 1 September 2025 through 5 September 2025
ER -