Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer reviewed

Authors

Joseph Giovanelli, Alexander Tornede, Tanja Tornede, Marius Lindauer

Details

Original language: English
Title of host publication: Proceedings of the 38th conference on AAAI
Editors: Michael Wooldridge, Jennifer Dy, Sriraam Natarajan
Pages: 12172-12180
Number of pages: 9
Publication status: Published - 24 Mar 2024

Publication series

Name: Proceedings of the AAAI Conference on Artificial Intelligence
Number: 11
Volume: 38
ISSN (Print): 2159-5399
ISSN (Electronic): 2374-3468

Abstract

Hyperparameter optimization (HPO) is important to leverage the full potential of machine learning (ML). In practice, users are often interested in multi-objective (MO) problems, i.e., optimizing potentially conflicting objectives, like accuracy and energy consumption. To tackle this, the vast majority of MO-ML algorithms return a Pareto front of non-dominated machine learning models to the user. Optimizing the hyperparameters of such algorithms is non-trivial as evaluating a hyperparameter configuration entails evaluating the quality of the resulting Pareto front. In the literature, there are known indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by quantifying different properties (e.g., volume, proximity to a reference point). However, choosing the indicator that leads to the desired Pareto front might be a hard task for a user. In this paper, we propose a human-centered interactive HPO approach tailored towards multi-objective ML, leveraging preference learning to extract desiderata from users that guide the optimization. Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator. Concretely, we leverage pairwise comparisons of distinct Pareto fronts to learn such an appropriate quality indicator. Then, we optimize the hyperparameters of the underlying MO-ML algorithm towards this learned indicator using a state-of-the-art HPO approach. In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts compared to optimizing based on a wrong indicator pre-selected by the user, and performs comparably when an advanced user knows which indicator to pick.
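To illustrate the kind of Pareto-front quality indicator the abstract refers to, the following is a minimal sketch (not the authors' implementation) of the hypervolume indicator for a two-dimensional minimization front, measured against a user-chosen reference point; the example objective pairs are hypothetical:

```python
def hypervolume_2d(front, reference):
    """Hypervolume of a 2D Pareto front under minimization.

    front: list of (f1, f2) objective pairs, assumed mutually non-dominated.
    reference: (r1, r2) point dominated by every member of the front.
    """
    # Sort by the first objective; for a non-dominated minimization
    # front, the second objective then decreases monotonically.
    pts = sorted(front)
    total = 0.0
    prev_f2 = reference[1]
    for f1, f2 in pts:
        # Each point adds the horizontal strip between its f2 value and
        # the previous (worse) f2 value, extending to the reference point.
        total += (reference[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return total

# Two non-dominated models (e.g., error vs. energy), reference (1, 1):
print(hypervolume_2d([(0.2, 0.8), (0.6, 0.3)], (1.0, 1.0)))  # ~0.36
```

A larger hypervolume means the front covers more of the objective space dominated relative to the reference point; this is one of several indicators (alongside R2 and others) that the paper's preference-learning step chooses among automatically.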

Keywords

    cs.LG, cs.AI


Cite this

Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning. / Giovanelli, Joseph; Tornede, Alexander; Tornede, Tanja et al.
Proceedings of the 38th conference on AAAI. ed. / Michael Wooldridge; Jennifer Dy; Sriraam Natarajan. 2024. p. 12172-12180 (Proceedings of the AAAI Conference on Artificial Intelligence; Vol. 38, No. 11).


Giovanelli, J, Tornede, A, Tornede, T & Lindauer, M 2024, Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning. in M Wooldridge, J Dy & S Natarajan (eds), Proceedings of the 38th conference on AAAI. Proceedings of the AAAI Conference on Artificial Intelligence, no. 11, vol. 38, pp. 12172-12180. https://doi.org/10.48550/arXiv.2309.03581, https://doi.org/10.1609/aaai.v38i11.29106
Giovanelli, J., Tornede, A., Tornede, T., & Lindauer, M. (2024). Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning. In M. Wooldridge, J. Dy, & S. Natarajan (Eds.), Proceedings of the 38th conference on AAAI (pp. 12172-12180). (Proceedings of the AAAI Conference on Artificial Intelligence; Vol. 38, No. 11). https://doi.org/10.48550/arXiv.2309.03581, https://doi.org/10.1609/aaai.v38i11.29106
Giovanelli J, Tornede A, Tornede T, Lindauer M. Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning. In Wooldridge M, Dy J, Natarajan S, editors, Proceedings of the 38th conference on AAAI. 2024. p. 12172-12180. (Proceedings of the AAAI Conference on Artificial Intelligence; 11). doi: 10.48550/arXiv.2309.03581, 10.1609/aaai.v38i11.29106
Giovanelli, Joseph ; Tornede, Alexander ; Tornede, Tanja et al. / Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning. Proceedings of the 38th conference on AAAI. editor / Michael Wooldridge ; Jennifer Dy ; Sriraam Natarajan. 2024. pp. 12172-12180 (Proceedings of the AAAI Conference on Artificial Intelligence; 11).
BibTeX
@inproceedings{9a2ccdc16610405dbc4314c96d2ba203,
title = "Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning",
abstract = "Hyperparameter optimization (HPO) is important to leverage the full potential of machine learning (ML). In practice, users are often interested in multi-objective (MO) problems, i.e., optimizing potentially conflicting objectives, like accuracy and energy consumption. To tackle this, the vast majority of MO-ML algorithms return a Pareto front of non-dominated machine learning models to the user. Optimizing the hyperparameters of such algorithms is non-trivial as evaluating a hyperparameter configuration entails evaluating the quality of the resulting Pareto front. In literature, there are known indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by quantifying different properties (e.g., volume, proximity to a reference point). However, choosing the indicator that leads to the desired Pareto front might be a hard task for a user. In this paper, we propose a human-centered interactive HPO approach tailored towards multi-objective ML leveraging preference learning to extract desiderata from users that guide the optimization. Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator. Concretely, we leverage pairwise comparisons of distinct Pareto fronts to learn such an appropriate quality indicator. Then, we optimize the hyperparameters of the underlying MO-ML algorithm towards this learned indicator using a state-of-the-art HPO approach. In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts compared to optimizing based on a wrong indicator pre-selected by the user, and performs comparable in the case of an advanced user knowing which indicator to pick.",
keywords = "cs.LG, cs.AI",
author = "Joseph Giovanelli and Alexander Tornede and Tanja Tornede and Marius Lindauer",
note = "Publisher Copyright: Copyright {\textcopyright} 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.",
year = "2024",
month = mar,
day = "24",
doi = "10.48550/arXiv.2309.03581",
language = "English",
series = "Proceedings of the AAAI Conference on Artificial Intelligence",
number = "11",
volume = "38",
pages = "12172--12180",
editor = "Michael Wooldridge and Jennifer Dy and Sriraam Natarajan",
booktitle = "Proceedings of the 38th conference on AAAI",
}

RIS

TY - GEN

T1 - Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning

AU - Giovanelli, Joseph

AU - Tornede, Alexander

AU - Tornede, Tanja

AU - Lindauer, Marius

N1 - Publisher Copyright: Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

PY - 2024/3/24

Y1 - 2024/3/24

N2 - Hyperparameter optimization (HPO) is important to leverage the full potential of machine learning (ML). In practice, users are often interested in multi-objective (MO) problems, i.e., optimizing potentially conflicting objectives, like accuracy and energy consumption. To tackle this, the vast majority of MO-ML algorithms return a Pareto front of non-dominated machine learning models to the user. Optimizing the hyperparameters of such algorithms is non-trivial as evaluating a hyperparameter configuration entails evaluating the quality of the resulting Pareto front. In literature, there are known indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by quantifying different properties (e.g., volume, proximity to a reference point). However, choosing the indicator that leads to the desired Pareto front might be a hard task for a user. In this paper, we propose a human-centered interactive HPO approach tailored towards multi-objective ML leveraging preference learning to extract desiderata from users that guide the optimization. Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator. Concretely, we leverage pairwise comparisons of distinct Pareto fronts to learn such an appropriate quality indicator. Then, we optimize the hyperparameters of the underlying MO-ML algorithm towards this learned indicator using a state-of-the-art HPO approach. In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts compared to optimizing based on a wrong indicator pre-selected by the user, and performs comparable in the case of an advanced user knowing which indicator to pick.

AB - Hyperparameter optimization (HPO) is important to leverage the full potential of machine learning (ML). In practice, users are often interested in multi-objective (MO) problems, i.e., optimizing potentially conflicting objectives, like accuracy and energy consumption. To tackle this, the vast majority of MO-ML algorithms return a Pareto front of non-dominated machine learning models to the user. Optimizing the hyperparameters of such algorithms is non-trivial as evaluating a hyperparameter configuration entails evaluating the quality of the resulting Pareto front. In literature, there are known indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by quantifying different properties (e.g., volume, proximity to a reference point). However, choosing the indicator that leads to the desired Pareto front might be a hard task for a user. In this paper, we propose a human-centered interactive HPO approach tailored towards multi-objective ML leveraging preference learning to extract desiderata from users that guide the optimization. Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator. Concretely, we leverage pairwise comparisons of distinct Pareto fronts to learn such an appropriate quality indicator. Then, we optimize the hyperparameters of the underlying MO-ML algorithm towards this learned indicator using a state-of-the-art HPO approach. In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts compared to optimizing based on a wrong indicator pre-selected by the user, and performs comparable in the case of an advanced user knowing which indicator to pick.

KW - cs.LG

KW - cs.AI

UR - http://www.scopus.com/inward/record.url?scp=85189622290&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2309.03581

DO - 10.48550/arXiv.2309.03581

M3 - Conference contribution

T3 - Proceedings of the AAAI Conference on Artificial Intelligence

SP - 12172

EP - 12180

BT - Proceedings of the 38th conference on AAAI

A2 - Wooldridge, Michael

A2 - Dy, Jennifer

A2 - Natarajan, Sriraam

ER -
