Explainable AI for engineering design: A unified approach of systems engineering and component-based deep learning demonstrated by energy-efficient building design

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Philipp Geyer
  • Manav Mahan Singh
  • Xia Chen

External Research Organisations

  • Technical University of Munich (TUM)

Details

Original language: English
Article number: 102843
Number of pages: 17
Journal: Advanced Engineering Informatics
Volume: 62
Issue number: C
Early online date: 16 Oct 2024
Publication status: Published - Oct 2024

Abstract

Data-driven models created by machine learning (ML) have gained importance in all fields of design and engineering. They have high potential to assist decision-makers in creating novel artifacts with better performance and sustainability. However, limited generalization and the black-box nature of these models lead to limited explainability and reusability. To overcome this situation, we developed a component-based approach to create partial component models by ML. This component-based approach aligns deep learning with systems engineering (SE). The key contribution of the component-based method is that activations at interfaces between the components are interpretable engineering quantities. In this way, the hierarchical component system forms a deep neural network (DNN) that a priori integrates interpretable information for explainability of predictions. The large range of possible configurations in composing components allows the examination of novel unseen design cases outside training data. The matching of parameter ranges of components using similar probability distributions produces reusable, well-generalizing, and trustworthy models. The approach adapts the model structure to SE methods and domain knowledge. We examine the performance of the approach in the field of energy-efficient building design: First, we observed better generalization of the component-based method by analyzing prediction accuracy outside the training data. Especially for representative designs that are different in structure, we observed a much higher accuracy (R² = 0.94) compared to conventional monolithic methods (R² = 0.71). Second, we illustrate explainability by demonstrating how sensitivity information from SE and an interpretable model based on rules from low-depth decision trees serve engineering design. Third, we evaluate explainability using qualitative and quantitative methods that demonstrate the matching of preliminary knowledge and data-driven derived strategies and show correctness of activations at component interfaces compared to white-box simulation results (envelope components: R² = 0.92–0.99; zones: R² = 0.78–0.93).
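As a concrete illustration of the architecture the abstract describes, below is a minimal sketch in PyTorch. Everything in it is an assumption for illustration only (the component inputs, network sizes, and the heat-loss interface quantity), not the authors' implementation: an envelope component maps element parameters to one interpretable engineering quantity, and a zone component consumes the aggregated interface activations to predict energy demand.

import torch
import torch.nn as nn

class ComponentNet(nn.Module):
    """Small MLP for one engineering component; its single output is an
    interpretable engineering quantity (here: heat loss), so the activation
    at the component interface can be compared with white-box simulation."""
    def __init__(self, n_inputs: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Hypothetical components: an envelope element (area, U-value, orientation)
# and a zone (total envelope heat loss, floor area, heating setpoint).
envelope = ComponentNet(n_inputs=3)
zone = ComponentNet(n_inputs=3)

# A design with three walls; the *same* envelope component is reused for
# each instance, which is what allows composing unseen configurations.
walls = torch.tensor([
    [20.0, 0.25, 0.0],   # area in m2, U-value in W/m2K, orientation code
    [15.0, 0.25, 0.5],
    [30.0, 0.18, 1.0],
])
heat_losses = envelope(walls)        # interface activations, one per wall
total_loss = heat_losses.sum(dim=0)  # aggregation at the zone interface

zone_inputs = torch.cat([total_loss, torch.tensor([80.0, 21.0])])
heating_demand = zone(zone_inputs)

# Unlike hidden activations in a monolithic network, the per-wall heat
# losses are engineering quantities that a designer can inspect directly.
print(heat_losses.detach().squeeze(-1), heating_demand.item())

Because the same envelope component is reused for every element instance, designs with other numbers or arrangements of elements can be composed at inference time, and the per-component interface activations can be validated against white-box simulation, which is the generalization and explainability mechanism the abstract reports.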

Keywords

    Artificial intelligence, Complex systems, Machine learning, Regression model, Surrogate modeling, Systems engineering

Cite this

Explainable AI for engineering design: A unified approach of systems engineering and component-based deep learning demonstrated by energy-efficient building design. / Geyer, Philipp; Singh, Manav Mahan; Chen, Xia.
In: Advanced Engineering Informatics, Vol. 62, No. C, 102843, 10.2024.

@article{1842b0dc8abd48199e47efd0e59c93ac,
title = "Explainable AI for engineering design: A unified approach of systems engineering and component-based deep learning demonstrated by energy-efficient building design",
abstract = "Data-driven models created by machine learning (ML) have gained importance in all fields of design and engineering. They have high potential to assist decision-makers in creating novel artifacts with better performance and sustainability. However, limited generalization and the black-box nature of these models lead to limited explainability and reusability. To overcome this situation, we developed a component-based approach to create partial component models by ML. This component-based approach aligns deep learning with systems engineering (SE). The key contribution of the component-based method is that activations at interfaces between the components are interpretable engineering quantities. In this way, the hierarchical component system forms a deep neural network (DNN) that a priori integrates interpretable information for explainability of predictions. The large range of possible configurations in composing components allows the examination of novel unseen design cases outside training data. The matching of parameter ranges of components using similar probability distributions produces reusable, well-generalizing, and trustworthy models. The approach adapts the model structure to SE methods and domain knowledge. We examine the performance of the approach in the field of energy-efficient building design: First, we observed better generalization of the component-based method by analyzing prediction accuracy outside the training data. Especially for representative designs that are different in structure, we observed a much higher accuracy (R² = 0.94) compared to conventional monolithic methods (R² = 0.71). Second, we illustrate explainability by demonstrating how sensitivity information from SE and an interpretable model based on rules from low-depth decision trees serve engineering design. Third, we evaluate explainability using qualitative and quantitative methods that demonstrate the matching of preliminary knowledge and data-driven derived strategies and show correctness of activations at component interfaces compared to white-box simulation results (envelope components: R² = 0.92–0.99; zones: R² = 0.78–0.93).",
keywords = "Artificial intelligence, Complex systems, Machine learning, Regression model, Surrogate modeling, Systems engineering",
author = "Philipp Geyer and Singh, {Manav Mahan} and Xia Chen",
note = "Publisher Copyright: {\textcopyright} 2024",
year = "2024",
month = oct,
doi = "10.1016/j.aei.2024.102843",
language = "English",
volume = "62",
journal = "Advanced Engineering Informatics",
issn = "1474-0346",
publisher = "Elsevier Ltd.",
number = "C",

}

TY - JOUR

T1 - Explainable AI for engineering design

T2 - A unified approach of systems engineering and component-based deep learning demonstrated by energy-efficient building design

AU - Geyer, Philipp

AU - Singh, Manav Mahan

AU - Chen, Xia

N1 - Publisher Copyright: © 2024

PY - 2024/10

Y1 - 2024/10

N2 - Data-driven models created by machine learning (ML) have gained importance in all fields of design and engineering. They have high potential to assist decision-makers in creating novel artifacts with better performance and sustainability. However, limited generalization and the black-box nature of these models lead to limited explainability and reusability. To overcome this situation, we developed a component-based approach to create partial component models by ML. This component-based approach aligns deep learning with systems engineering (SE). The key contribution of the component-based method is that activations at interfaces between the components are interpretable engineering quantities. In this way, the hierarchical component system forms a deep neural network (DNN) that a priori integrates interpretable information for explainability of predictions. The large range of possible configurations in composing components allows the examination of novel unseen design cases outside training data. The matching of parameter ranges of components using similar probability distributions produces reusable, well-generalizing, and trustworthy models. The approach adapts the model structure to SE methods and domain knowledge. We examine the performance of the approach in the field of energy-efficient building design: First, we observed better generalization of the component-based method by analyzing prediction accuracy outside the training data. Especially for representative designs that are different in structure, we observed a much higher accuracy (R² = 0.94) compared to conventional monolithic methods (R² = 0.71). Second, we illustrate explainability by demonstrating how sensitivity information from SE and an interpretable model based on rules from low-depth decision trees serve engineering design. Third, we evaluate explainability using qualitative and quantitative methods that demonstrate the matching of preliminary knowledge and data-driven derived strategies and show correctness of activations at component interfaces compared to white-box simulation results (envelope components: R² = 0.92–0.99; zones: R² = 0.78–0.93).

AB - Data-driven models created by machine learning (ML) have gained importance in all fields of design and engineering. They have high potential to assist decision-makers in creating novel artifacts with better performance and sustainability. However, limited generalization and the black-box nature of these models lead to limited explainability and reusability. To overcome this situation, we developed a component-based approach to create partial component models by ML. This component-based approach aligns deep learning with systems engineering (SE). The key contribution of the component-based method is that activations at interfaces between the components are interpretable engineering quantities. In this way, the hierarchical component system forms a deep neural network (DNN) that a priori integrates interpretable information for explainability of predictions. The large range of possible configurations in composing components allows the examination of novel unseen design cases outside training data. The matching of parameter ranges of components using similar probability distributions produces reusable, well-generalizing, and trustworthy models. The approach adapts the model structure to SE methods and domain knowledge. We examine the performance of the approach in the field of energy-efficient building design: First, we observed better generalization of the component-based method by analyzing prediction accuracy outside the training data. Especially for representative designs that are different in structure, we observed a much higher accuracy (R² = 0.94) compared to conventional monolithic methods (R² = 0.71). Second, we illustrate explainability by demonstrating how sensitivity information from SE and an interpretable model based on rules from low-depth decision trees serve engineering design. Third, we evaluate explainability using qualitative and quantitative methods that demonstrate the matching of preliminary knowledge and data-driven derived strategies and show correctness of activations at component interfaces compared to white-box simulation results (envelope components: R² = 0.92–0.99; zones: R² = 0.78–0.93).

KW - Artificial intelligence

KW - Complex systems

KW - Machine learning

KW - Regression model

KW - Surrogate modeling

KW - Systems engineering

UR - http://www.scopus.com/inward/record.url?scp=85206270196&partnerID=8YFLogxK

U2 - 10.1016/j.aei.2024.102843

DO - 10.1016/j.aei.2024.102843

M3 - Article

VL - 62

JO - Advanced Engineering Informatics

JF - Advanced Engineering Informatics

SN - 1474-0346

IS - C

M1 - 102843

ER -