Details
| | |
|---|---|
| Original language | English |
| Title of host publication | Workshop Track of the AutoML Conference |
| Number of pages | 15 |
| Publication status | Published - 4 Nov 2025 |
| Event | 4th International Conference on Automated Machine Learning, AutoML 25 - Roosevelt Island, New York, United States |
| Duration | 8 Sept 2025 → 11 Sept 2025 |
Abstract

Hyperparameter optimization (HPO), for example, based on Bayesian optimization (BO), supports users in designing models well-suited for a given dataset. HPO has proven its effectiveness on several applications, ranging from classical machine learning for tabular data to deep neural networks for computer vision and transformers for natural language processing. However, HPO still sometimes lacks acceptance by machine learning experts due to its black-box nature and limited user control. Addressing this, first approaches have been proposed to initialize BO methods with expert knowledge. However, these approaches do not allow for online steering during the optimization process. In this paper, we introduce a novel method that enables repeated interventions to steer BO via user input, specifying expert knowledge and user preferences at runtime of the HPO process in the form of prior distributions. To this end, we generalize an existing method, $\pi$BO, preserving theoretical guarantees. We also introduce a misleading prior detection scheme, which allows protection against harmful user inputs. In our experimental evaluation, we demonstrate that our method can effectively incorporate multiple priors, leveraging informative priors, whereas misleading priors are reliably rejected or overcome. Thereby, we achieve competitiveness to unperturbed BO.
Keywords
- cs.LG
Cite this
Fehring, L., Wever, M., Spliethöver, M., Hennig, L., Wachsmuth, H., & Lindauer, M. (2025). Towards Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization. In Workshop Track of the AutoML Conference.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research
TY - GEN
T1 - Towards Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization
AU - Fehring, Lukas
AU - Wever, Marcel
AU - Spliethöver, Maximilian
AU - Hennig, Leona
AU - Wachsmuth, Henning
AU - Lindauer, Marius
PY - 2025/11/4
Y1 - 2025/11/4
N2 - Hyperparameter optimization (HPO), for example, based on Bayesian optimization (BO), supports users in designing models well-suited for a given dataset. HPO has proven its effectiveness on several applications, ranging from classical machine learning for tabular data to deep neural networks for computer vision and transformers for natural language processing. However, HPO still sometimes lacks acceptance by machine learning experts due to its black-box nature and limited user control. Addressing this, first approaches have been proposed to initialize BO methods with expert knowledge. However, these approaches do not allow for online steering during the optimization process. In this paper, we introduce a novel method that enables repeated interventions to steer BO via user input, specifying expert knowledge and user preferences at runtime of the HPO process in the form of prior distributions. To this end, we generalize an existing method, $\pi$BO, preserving theoretical guarantees. We also introduce a misleading prior detection scheme, which allows protection against harmful user inputs. In our experimental evaluation, we demonstrate that our method can effectively incorporate multiple priors, leveraging informative priors, whereas misleading priors are reliably rejected or overcome. Thereby, we achieve competitiveness to unperturbed BO.
AB - Hyperparameter optimization (HPO), for example, based on Bayesian optimization (BO), supports users in designing models well-suited for a given dataset. HPO has proven its effectiveness on several applications, ranging from classical machine learning for tabular data to deep neural networks for computer vision and transformers for natural language processing. However, HPO still sometimes lacks acceptance by machine learning experts due to its black-box nature and limited user control. Addressing this, first approaches have been proposed to initialize BO methods with expert knowledge. However, these approaches do not allow for online steering during the optimization process. In this paper, we introduce a novel method that enables repeated interventions to steer BO via user input, specifying expert knowledge and user preferences at runtime of the HPO process in the form of prior distributions. To this end, we generalize an existing method, $\pi$BO, preserving theoretical guarantees. We also introduce a misleading prior detection scheme, which allows protection against harmful user inputs. In our experimental evaluation, we demonstrate that our method can effectively incorporate multiple priors, leveraging informative priors, whereas misleading priors are reliably rejected or overcome. Thereby, we achieve competitiveness to unperturbed BO.
KW - cs.LG
M3 - Conference contribution
BT - Workshop Track of the AutoML Conference
T2 - 4th International Conference on Automated Machine Learning, AutoML 25
Y2 - 8 September 2025 through 11 September 2025
ER -