Details
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Proceedings - 2025 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning, DeepTest 2025 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 1-8 |
| Number of pages | 8 |
| ISBN (electronic) | 979-8-3315-0190-7 |
| ISBN (print) | 979-8-3315-0191-4 |
| Publication status | Published - 3 May 2025 |
| Event | 2025 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning, DeepTest 2025, Ottawa, Canada, 3 May 2025 |
Abstract
Failure prediction models can be significantly beneficial for managing large-scale complex software systems, but their trustworthiness is severely affected by changes in the data over time, also known as concept drift. Thus, monitoring these models against concept drift and retraining them when the data changes become crucial in designing reliable failure prediction models. In this work, we evaluate the effects of monitoring failure prediction models over time using label-independent (unsupervised) drift detectors. We show that retraining based on unsupervised drift detectors, rather than on a fixed periodic schedule, reduces the cost of acquiring true labels without compromising accuracy. Furthermore, we propose a novel feature reduction technique for unsupervised drift detectors and an evaluation pipeline that practitioners can employ to select the most suitable unsupervised drift detector for their application.
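The abstract describes the strategy only at a high level, so the following is a minimal, hypothetical sketch of drift-triggered retraining under stated assumptions: a per-feature two-sample Kolmogorov-Smirnov test stands in for the unsupervised detectors evaluated in the paper, and all names (`drift_detected`, `monitor_and_retrain`, `get_labels`) are illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier


def drift_detected(reference, recent, alpha=0.01):
    """Flag drift when any feature's distribution shifts between two windows.

    A per-feature two-sample Kolmogorov-Smirnov test is used here as a
    stand-in for a label-independent (unsupervised) drift detector:
    no true labels are needed to raise the alarm.
    """
    p_values = [ks_2samp(reference[:, j], recent[:, j]).pvalue
                for j in range(reference.shape[1])]
    # Bonferroni correction across features to limit false alarms.
    return min(p_values) < alpha / reference.shape[1]


def monitor_and_retrain(model, X_ref, y_ref, batches):
    """Retrain only when the detector fires, instead of on a fixed schedule.

    `batches` yields (X_batch, get_labels) pairs, where get_labels() is the
    costly labelling step that drift-triggered retraining tries to avoid.
    """
    for X_batch, get_labels in batches:
        yield model.predict(X_batch)           # serve predictions as usual
        if drift_detected(X_ref, X_batch):
            y_batch = get_labels()             # labels acquired only on drift
            X_ref = np.vstack([X_ref, X_batch])
            y_ref = np.concatenate([y_ref, y_batch])
            model = RandomForestClassifier().fit(X_ref, y_ref)
```

The cost argument from the abstract is visible in the sketch: true labels are requested only inside the drift branch, whereas periodic retraining would request them for every batch regardless of whether the data changed.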
Keywords
- concept drift
- concept drift detection
- failure prediction
- machine learning monitoring
ASJC Scopus subject areas
- Computer Science (all)
- Artificial Intelligence
- Software
- Engineering (all)
- Safety, Risk, Reliability and Quality
Cite this
Poenaru-Olaru, L., Miranda da Cruz, L., Rellermeyer, J. S., & van Deursen, A. (2025). Improving the Reliability of Failure Prediction Models through Concept Drift Monitoring. In Proceedings - 2025 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning, DeepTest 2025 (pp. 1-8). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/DeepTest66595.2025.00006
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Improving the Reliability of Failure Prediction Models through Concept Drift Monitoring
AU - Poenaru-Olaru, Lorena
AU - Miranda da Cruz, Luis
AU - Rellermeyer, Jan S.
AU - van Deursen, Arie
N1 - Publisher Copyright: © 2025 IEEE.
PY - 2025/5/3
Y1 - 2025/5/3
N2 - Failure prediction models can be significantly beneficial for managing large-scale complex software systems, but their trustworthiness is severely affected by changes in the data over time, also known as concept drift. Thus, monitoring these models against concept drift and retraining them when the data changes becomes crucial in designing reliable failure prediction models. In this work, we evaluate the effects of monitoring failure prediction models over time using label-independent (unsupervised) drift detectors. We show that retraining based on unsupervised drift detectors instead of periodically reduces the cost of acquiring true labels without compromising accuracy. Furthermore, we propose a novel feature reduction for unsupervised drift detectors and an evaluation pipeline that practitioners can employ to select the most suitable unsupervised drift detector for their application.
AB - Failure prediction models can be significantly beneficial for managing large-scale complex software systems, but their trustworthiness is severely affected by changes in the data over time, also known as concept drift. Thus, monitoring these models against concept drift and retraining them when the data changes becomes crucial in designing reliable failure prediction models. In this work, we evaluate the effects of monitoring failure prediction models over time using label-independent (unsupervised) drift detectors. We show that retraining based on unsupervised drift detectors instead of periodically reduces the cost of acquiring true labels without compromising accuracy. Furthermore, we propose a novel feature reduction for unsupervised drift detectors and an evaluation pipeline that practitioners can employ to select the most suitable unsupervised drift detector for their application.
KW - concept drift
KW - concept drift detection
KW - failure prediction
KW - machine learning monitoring
UR - http://www.scopus.com/inward/record.url?scp=105009125791&partnerID=8YFLogxK
U2 - 10.1109/DeepTest66595.2025.00006
DO - 10.1109/DeepTest66595.2025.00006
M3 - Conference contribution
AN - SCOPUS:105009125791
SN - 979-8-3315-0191-4
SP - 1
EP - 8
BT - Proceedings - 2025 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning, DeepTest 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning, DeepTest 2025
Y2 - 3 May 2025 through 3 May 2025
ER -