
InterpretME: A tool for interpretations of machine learning models over knowledge graphs

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Yashrajsinh Chudasama
  • Disha Purohit
  • Philipp D. Rohde
  • Julian Gercke
  • Maria Esther Vidal

Research Organisations

External Research Organisations

  • German National Library of Science and Technology (TIB)

Research metrics

  • Citations
    • Citation Indexes: 5
  • Captures
    • Readers: 5

Details

Original language: English
Article number: SW-233511
Journal: Semantic web
Volume: 16
Issue number: 2
Early online date: 24 Feb 2025
Publication status: Published - Mar 2025

Abstract

In recent years, knowledge graphs (KGs) have been considered pyramids of interconnected data enriched with semantics for complex decision-making. The potential of KGs and the demand for interpretability of machine learning (ML) models in diverse domains (e.g., healthcare) have gained more attention. The lack of model transparency negatively impacts the understanding and, in consequence, interpretability of the predictions made by a model. Data-driven models should be empowered with the knowledge required to trace down their decisions and the transformations made to the input data to increase model transparency. In this paper, we propose InterpretME, a tool that using KGs, provides fine-grained representations of trained ML models. An ML model description includes data – (e.g., features’ definition and SHACL validation) and model-based characteristics (e.g., relevant features and interpretations of prediction probabilities and model decisions). InterpretME allows for defining a model’s features over data collected in various formats, e.g., RDF KGs, CSV, and JSON. InterpretME relies on the SHACL schema to validate integrity constraints over the input data. InterpretME traces the steps of data collection, curation, integration, and prediction; it documents the collected metadata in the InterpretME KG. InterpretME is published in GitHub and Zenodo. The InterpretME framework includes a pipeline for enhancing the interpretability of ML models, the InterpretME KG, and an ontology to describe the main characteristics of trained ML models; a PyPI library of InterpretME is also provided. Additionally, a live code, and a video demonstrating InterpretME in several use cases are also available.
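As the abstract notes, InterpretME relies on SHACL shapes to validate integrity constraints over the input data before model training. A minimal shape in Turtle gives an intuition of what such a constraint looks like (a generic illustrative example; the prefix, class, and property names here are hypothetical and are not taken from InterpretME's own shapes):

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

# Hypothetical shape: every ex:Patient must have exactly one ex:age
# value that is a non-negative integer. Entities violating the
# constraint appear in the validation report before the ML pipeline
# consumes the data.
ex:PatientShape a sh:NodeShape ;
    sh:targetClass ex:Patient ;
    sh:property [
        sh:path ex:age ;
        sh:datatype xsd:integer ;
        sh:minInclusive 0 ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
```

A SHACL engine evaluates such shapes against the input RDF graph and produces a validation report; according to the abstract, InterpretME documents this kind of validation outcome among the metadata traced in the InterpretME KG.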

Keywords

    Interpretability, knowledge graphs, machine learning models, ontologies, SHACL

Cite this

InterpretME: A tool for interpretations of machine learning models over knowledge graphs. / Chudasama, Yashrajsinh; Purohit, Disha; Rohde, Philipp D. et al.
In: Semantic web, Vol. 16, No. 2, SW-233511, 03.2025.

Chudasama, Y, Purohit, D, Rohde, PD, Gercke, J & Vidal, ME 2025, 'InterpretME: A tool for interpretations of machine learning models over knowledge graphs', Semantic web, vol. 16, no. 2, SW-233511. https://doi.org/10.3233/SW-233511
Chudasama, Y., Purohit, D., Rohde, P. D., Gercke, J., & Vidal, M. E. (2025). InterpretME: A tool for interpretations of machine learning models over knowledge graphs. Semantic web, 16(2), Article SW-233511. https://doi.org/10.3233/SW-233511
Chudasama Y, Purohit D, Rohde PD, Gercke J, Vidal ME. InterpretME: A tool for interpretations of machine learning models over knowledge graphs. Semantic web. 2025 Mar;16(2):SW-233511. Epub 2025 Feb 24. doi: 10.3233/SW-233511
Chudasama, Yashrajsinh ; Purohit, Disha ; Rohde, Philipp D. et al. / InterpretME : A tool for interpretations of machine learning models over knowledge graphs. In: Semantic web. 2025 ; Vol. 16, No. 2.
BibTeX
@article{c852c9d653f34c0c96c7d901908fa09b,
title = "InterpretME: A tool for interpretations of machine learning models over knowledge graphs",
abstract = "In recent years, knowledge graphs (KGs) have been considered pyramids of interconnected data enriched with semantics for complex decision-making. The potential of KGs and the demand for interpretability of machine learning (ML) models in diverse domains (e.g., healthcare) have gained more attention. The lack of model transparency negatively impacts the understanding and, in consequence, interpretability of the predictions made by a model. Data-driven models should be empowered with the knowledge required to trace down their decisions and the transformations made to the input data to increase model transparency. In this paper, we propose InterpretME, a tool that using KGs, provides fine-grained representations of trained ML models. An ML model description includes data – (e.g., features{\textquoteright} definition and SHACL validation) and model-based characteristics (e.g., relevant features and interpretations of prediction probabilities and model decisions). InterpretME allows for defining a model{\textquoteright}s features over data collected in various formats, e.g., RDF KGs, CSV, and JSON. InterpretME relies on the SHACL schema to validate integrity constraints over the input data. InterpretME traces the steps of data collection, curation, integration, and prediction; it documents the collected metadata in the InterpretME KG. InterpretME is published in GitHub and Zenodo. The InterpretME framework includes a pipeline for enhancing the interpretability of ML models, the InterpretME KG, and an ontology to describe the main characteristics of trained ML models; a PyPI library of InterpretME is also provided. Additionally, a live code, and a video demonstrating InterpretME in several use cases are also available.",
keywords = "Interpretability, knowledge graphs, machine learning models, ontologies, SHACL",
author = "Yashrajsinh Chudasama and Disha Purohit and Rohde, {Philipp D.} and Julian Gercke and Vidal, {Maria Esther}",
note = "Publisher Copyright: {\textcopyright} 2024 – The authors. Published by IOS Press.",
year = "2025",
month = mar,
doi = "10.3233/SW-233511",
language = "English",
volume = "16",
journal = "Semantic web",
issn = "1570-0844",
publisher = "SAGE Publications Ltd",
number = "2",

}

RIS

TY - JOUR

T1 - InterpretME

T2 - A tool for interpretations of machine learning models over knowledge graphs

AU - Chudasama, Yashrajsinh

AU - Purohit, Disha

AU - Rohde, Philipp D.

AU - Gercke, Julian

AU - Vidal, Maria Esther

N1 - Publisher Copyright: © 2024 – The authors. Published by IOS Press.

PY - 2025/3

Y1 - 2025/3

N2 - In recent years, knowledge graphs (KGs) have been considered pyramids of interconnected data enriched with semantics for complex decision-making. The potential of KGs and the demand for interpretability of machine learning (ML) models in diverse domains (e.g., healthcare) have gained more attention. The lack of model transparency negatively impacts the understanding and, in consequence, interpretability of the predictions made by a model. Data-driven models should be empowered with the knowledge required to trace down their decisions and the transformations made to the input data to increase model transparency. In this paper, we propose InterpretME, a tool that using KGs, provides fine-grained representations of trained ML models. An ML model description includes data – (e.g., features’ definition and SHACL validation) and model-based characteristics (e.g., relevant features and interpretations of prediction probabilities and model decisions). InterpretME allows for defining a model’s features over data collected in various formats, e.g., RDF KGs, CSV, and JSON. InterpretME relies on the SHACL schema to validate integrity constraints over the input data. InterpretME traces the steps of data collection, curation, integration, and prediction; it documents the collected metadata in the InterpretME KG. InterpretME is published in GitHub and Zenodo. The InterpretME framework includes a pipeline for enhancing the interpretability of ML models, the InterpretME KG, and an ontology to describe the main characteristics of trained ML models; a PyPI library of InterpretME is also provided. Additionally, a live code, and a video demonstrating InterpretME in several use cases are also available.

AB - In recent years, knowledge graphs (KGs) have been considered pyramids of interconnected data enriched with semantics for complex decision-making. The potential of KGs and the demand for interpretability of machine learning (ML) models in diverse domains (e.g., healthcare) have gained more attention. The lack of model transparency negatively impacts the understanding and, in consequence, interpretability of the predictions made by a model. Data-driven models should be empowered with the knowledge required to trace down their decisions and the transformations made to the input data to increase model transparency. In this paper, we propose InterpretME, a tool that using KGs, provides fine-grained representations of trained ML models. An ML model description includes data – (e.g., features’ definition and SHACL validation) and model-based characteristics (e.g., relevant features and interpretations of prediction probabilities and model decisions). InterpretME allows for defining a model’s features over data collected in various formats, e.g., RDF KGs, CSV, and JSON. InterpretME relies on the SHACL schema to validate integrity constraints over the input data. InterpretME traces the steps of data collection, curation, integration, and prediction; it documents the collected metadata in the InterpretME KG. InterpretME is published in GitHub and Zenodo. The InterpretME framework includes a pipeline for enhancing the interpretability of ML models, the InterpretME KG, and an ontology to describe the main characteristics of trained ML models; a PyPI library of InterpretME is also provided. Additionally, a live code, and a video demonstrating InterpretME in several use cases are also available.

KW - Interpretability

KW - knowledge graphs

KW - machine learning models

KW - ontologies

KW - SHACL

U2 - 10.3233/SW-233511

DO - 10.3233/SW-233511

M3 - Article

AN - SCOPUS:105000657543

VL - 16

JO - Semantic web

JF - Semantic web

SN - 1570-0844

IS - 2

M1 - SW-233511

ER -