Towards Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research


Details

Original language: English
Title of host publication: Workshop Track of the AutoML Conference
Number of pages: 15
Publication status: Published - 4 Nov 2025
Event: 4th International Conference on Automated Machine Learning, AutoML 25 - Roosevelt Island, New York, United States
Duration: 8 Sept 2025 – 11 Sept 2025

Abstract

Hyperparameter optimization (HPO), for example, based on Bayesian optimization (BO), supports users in designing models well-suited for a given dataset. HPO has proven its effectiveness on several applications, ranging from classical machine learning for tabular data to deep neural networks for computer vision and transformers for natural language processing. However, HPO still sometimes lacks acceptance by machine learning experts due to its black-box nature and limited user control. Addressing this, first approaches have been proposed to initialize BO methods with expert knowledge. However, these approaches do not allow for online steering during the optimization process. In this paper, we introduce a novel method that enables repeated interventions to steer BO via user input, specifying expert knowledge and user preferences at runtime of the HPO process in the form of prior distributions. To this end, we generalize an existing method, $\pi$BO, preserving theoretical guarantees. We also introduce a misleading prior detection scheme, which allows protection against harmful user inputs. In our experimental evaluation, we demonstrate that our method can effectively incorporate multiple priors, leveraging informative priors, whereas misleading priors are reliably rejected or overcome. Thereby, we achieve competitiveness to unperturbed BO.
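The abstract builds on $\pi$BO, where a user-supplied prior over good hyperparameters reweights the acquisition function with an influence that decays as observations accumulate. A minimal sketch of that weighting idea (illustrative only; the function and parameter names here are ours, not the paper's, and the paper's contribution of dynamic, repeated prior updates is not shown):

```python
def pi_weighted_acquisition(acq_value, prior_density, n, beta=1.0):
    """pi-BO-style prior weighting: multiply a standard acquisition value
    by the user prior raised to the decaying exponent beta / n, so the
    prior dominates early and fades as the number of observations n grows.
    Illustrative sketch of the underlying idea, not the paper's method."""
    return acq_value * prior_density ** (beta / n)

# Early on (n = 1), a low prior density strongly penalizes a candidate;
# after many observations (n = 100), the same prior barely matters.
early = pi_weighted_acquisition(1.0, prior_density=0.1, n=1)    # 0.1
late = pi_weighted_acquisition(1.0, prior_density=0.1, n=100)   # ~0.977
```

The decay schedule is the key design choice: any exponent that shrinks toward zero recovers vanilla BO in the limit, which is what makes guarantee-preserving generalizations of this scheme possible.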

Keywords

    cs.LG

Cite this

Towards Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization. / Fehring, Lukas; Wever, Marcel; Spliethöver, Maximilian et al.
Workshop Track of the AutoML Conference. 2025.


Fehring, L, Wever, M, Spliethöver, M, Hennig, L, Wachsmuth, H & Lindauer, M 2025, Towards Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization. in Workshop Track of the AutoML Conference. 4th International Conference on Automated Machine Learning, AutoML 25, New York, New York, United States, 8 Sept 2025. <https://openreview.net/pdf?id=mQ0IENZRx2>
BibTeX
@inproceedings{b1fd3044b2ec499387994421d5c6e938,
title = "Towards Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization",
abstract = "Hyperparameter optimization (HPO), for example, based on Bayesian optimization (BO), supports users in designing models well-suited for a given dataset. HPO has proven its effectiveness on several applications, ranging from classical machine learning for tabular data to deep neural networks for computer vision and transformers for natural language processing. However, HPO still sometimes lacks acceptance by machine learning experts due to its black-box nature and limited user control. Addressing this, first approaches have been proposed to initialize BO methods with expert knowledge. However, these approaches do not allow for online steering during the optimization process. In this paper, we introduce a novel method that enables repeated interventions to steer BO via user input, specifying expert knowledge and user preferences at runtime of the HPO process in the form of prior distributions. To this end, we generalize an existing method, $\pi$BO, preserving theoretical guarantees. We also introduce a misleading prior detection scheme, which allows protection against harmful user inputs. In our experimental evaluation, we demonstrate that our method can effectively incorporate multiple priors, leveraging informative priors, whereas misleading priors are reliably rejected or overcome. Thereby, we achieve competitiveness to unperturbed BO.",
keywords = "cs.LG",
author = "Lukas Fehring and Marcel Wever and Maximilian Splieth{\"o}ver and Leona Hennig and Henning Wachsmuth and Marius Lindauer",
year = "2025",
month = nov,
day = "4",
language = "English",
booktitle = "Workshop Track of the AutoML Conference",
note = "4th International Conference on Automated Machine Learning, AutoML 25; Conference date: 08-09-2025 Through 11-09-2025",
}

RIS

TY - GEN

T1 - Towards Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization

AU - Fehring, Lukas

AU - Wever, Marcel

AU - Spliethöver, Maximilian

AU - Hennig, Leona

AU - Wachsmuth, Henning

AU - Lindauer, Marius

PY - 2025/11/4

Y1 - 2025/11/4

N2 - Hyperparameter optimization (HPO), for example, based on Bayesian optimization (BO), supports users in designing models well-suited for a given dataset. HPO has proven its effectiveness on several applications, ranging from classical machine learning for tabular data to deep neural networks for computer vision and transformers for natural language processing. However, HPO still sometimes lacks acceptance by machine learning experts due to its black-box nature and limited user control. Addressing this, first approaches have been proposed to initialize BO methods with expert knowledge. However, these approaches do not allow for online steering during the optimization process. In this paper, we introduce a novel method that enables repeated interventions to steer BO via user input, specifying expert knowledge and user preferences at runtime of the HPO process in the form of prior distributions. To this end, we generalize an existing method, $\pi$BO, preserving theoretical guarantees. We also introduce a misleading prior detection scheme, which allows protection against harmful user inputs. In our experimental evaluation, we demonstrate that our method can effectively incorporate multiple priors, leveraging informative priors, whereas misleading priors are reliably rejected or overcome. Thereby, we achieve competitiveness to unperturbed BO.

AB - Hyperparameter optimization (HPO), for example, based on Bayesian optimization (BO), supports users in designing models well-suited for a given dataset. HPO has proven its effectiveness on several applications, ranging from classical machine learning for tabular data to deep neural networks for computer vision and transformers for natural language processing. However, HPO still sometimes lacks acceptance by machine learning experts due to its black-box nature and limited user control. Addressing this, first approaches have been proposed to initialize BO methods with expert knowledge. However, these approaches do not allow for online steering during the optimization process. In this paper, we introduce a novel method that enables repeated interventions to steer BO via user input, specifying expert knowledge and user preferences at runtime of the HPO process in the form of prior distributions. To this end, we generalize an existing method, $\pi$BO, preserving theoretical guarantees. We also introduce a misleading prior detection scheme, which allows protection against harmful user inputs. In our experimental evaluation, we demonstrate that our method can effectively incorporate multiple priors, leveraging informative priors, whereas misleading priors are reliably rejected or overcome. Thereby, we achieve competitiveness to unperturbed BO.

KW - cs.LG

M3 - Conference contribution

BT - Workshop Track of the AutoML Conference

T2 - 4th International Conference on Automated Machine Learning, AutoML 25

Y2 - 8 September 2025 through 11 September 2025

ER -
