Details
| Original language | English |
|---|---|
| Article number | SW-233511 |
| Journal | Semantic web |
| Volume | 16 |
| Issue number | 2 |
| Early online date | 24 Feb 2025 |
| Publication status | Published - Mar 2025 |
Abstract
In recent years, knowledge graphs (KGs) have come to be regarded as rich repositories of interconnected data enriched with semantics for complex decision-making. The potential of KGs and the demand for interpretability of machine learning (ML) models in diverse domains (e.g., healthcare) have attracted increasing attention. A lack of model transparency negatively impacts the understanding and, consequently, the interpretability of the predictions made by a model. Data-driven models should be empowered with the knowledge required to trace their decisions and the transformations made to the input data, thereby increasing model transparency. In this paper, we propose InterpretME, a tool that, using KGs, provides fine-grained representations of trained ML models. An ML model description includes data-based characteristics (e.g., feature definitions and SHACL validation) and model-based characteristics (e.g., relevant features and interpretations of prediction probabilities and model decisions). InterpretME allows a model’s features to be defined over data collected in various formats, e.g., RDF KGs, CSV, and JSON. InterpretME relies on SHACL schemas to validate integrity constraints over the input data. InterpretME traces the steps of data collection, curation, integration, and prediction, and documents the collected metadata in the InterpretME KG. InterpretME is published on GitHub and Zenodo. The InterpretME framework comprises a pipeline for enhancing the interpretability of ML models, the InterpretME KG, and an ontology describing the main characteristics of trained ML models; a PyPI library of InterpretME is also provided. Additionally, live code and a video demonstrating InterpretME in several use cases are available.
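To illustrate the kind of integrity constraint SHACL can validate over input data before model training, consider the following minimal shape. This is a generic sketch, not taken from the paper; the `ex:Patient` class and `ex:age` property are hypothetical names chosen for illustration.

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

# Hypothetical shape: every ex:Patient instance used as model input
# must have exactly one ex:age value that is a non-negative integer.
ex:PatientShape
    a sh:NodeShape ;
    sh:targetClass ex:Patient ;
    sh:property [
        sh:path ex:age ;
        sh:datatype xsd:integer ;
        sh:minInclusive 0 ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
```

A SHACL engine run over the input data with such shapes reports every entity violating a constraint, so invalid records can be flagged or excluded before the ML model is trained.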
Keywords
- Interpretability, knowledge graphs, machine learning models, ontologies, SHACL
ASJC Scopus subject areas
- Computer Science (all)
- Information Systems
- Computer Science Applications
- Computer Networks and Communications
Cite this
In: Semantic web, Vol. 16, No. 2, SW-233511, 03.2025.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - InterpretME
T2 - A tool for interpretations of machine learning models over knowledge graphs
AU - Chudasama, Yashrajsinh
AU - Purohit, Disha
AU - Rohde, Philipp D.
AU - Gercke, Julian
AU - Vidal, Maria Esther
N1 - Publisher Copyright: © 2024 – The authors. Published by IOS Press.
PY - 2025/3
Y1 - 2025/3
N2 - In recent years, knowledge graphs (KGs) have come to be regarded as rich repositories of interconnected data enriched with semantics for complex decision-making. The potential of KGs and the demand for interpretability of machine learning (ML) models in diverse domains (e.g., healthcare) have attracted increasing attention. A lack of model transparency negatively impacts the understanding and, consequently, the interpretability of the predictions made by a model. Data-driven models should be empowered with the knowledge required to trace their decisions and the transformations made to the input data, thereby increasing model transparency. In this paper, we propose InterpretME, a tool that, using KGs, provides fine-grained representations of trained ML models. An ML model description includes data-based characteristics (e.g., feature definitions and SHACL validation) and model-based characteristics (e.g., relevant features and interpretations of prediction probabilities and model decisions). InterpretME allows a model’s features to be defined over data collected in various formats, e.g., RDF KGs, CSV, and JSON. InterpretME relies on SHACL schemas to validate integrity constraints over the input data. InterpretME traces the steps of data collection, curation, integration, and prediction, and documents the collected metadata in the InterpretME KG. InterpretME is published on GitHub and Zenodo. The InterpretME framework comprises a pipeline for enhancing the interpretability of ML models, the InterpretME KG, and an ontology describing the main characteristics of trained ML models; a PyPI library of InterpretME is also provided. Additionally, live code and a video demonstrating InterpretME in several use cases are available.
AB - In recent years, knowledge graphs (KGs) have come to be regarded as rich repositories of interconnected data enriched with semantics for complex decision-making. The potential of KGs and the demand for interpretability of machine learning (ML) models in diverse domains (e.g., healthcare) have attracted increasing attention. A lack of model transparency negatively impacts the understanding and, consequently, the interpretability of the predictions made by a model. Data-driven models should be empowered with the knowledge required to trace their decisions and the transformations made to the input data, thereby increasing model transparency. In this paper, we propose InterpretME, a tool that, using KGs, provides fine-grained representations of trained ML models. An ML model description includes data-based characteristics (e.g., feature definitions and SHACL validation) and model-based characteristics (e.g., relevant features and interpretations of prediction probabilities and model decisions). InterpretME allows a model’s features to be defined over data collected in various formats, e.g., RDF KGs, CSV, and JSON. InterpretME relies on SHACL schemas to validate integrity constraints over the input data. InterpretME traces the steps of data collection, curation, integration, and prediction, and documents the collected metadata in the InterpretME KG. InterpretME is published on GitHub and Zenodo. The InterpretME framework comprises a pipeline for enhancing the interpretability of ML models, the InterpretME KG, and an ontology describing the main characteristics of trained ML models; a PyPI library of InterpretME is also provided. Additionally, live code and a video demonstrating InterpretME in several use cases are available.
KW - Interpretability
KW - knowledge graphs
KW - machine learning models
KW - ontologies
KW - SHACL
U2 - 10.3233/SW-233511
DO - 10.3233/SW-233511
M3 - Article
AN - SCOPUS:105000657543
VL - 16
JO - Semantic web
JF - Semantic web
SN - 1570-0844
IS - 2
M1 - SW-233511
ER -