Details
| Original language | English |
| --- | --- |
| Title of host publication | Second International Conference on Automated Machine Learning |
| Publication status | E-pub ahead of print - 20 Jul 2023 |
Abstract
Although Reinforcement Learning (RL) has been shown to be capable of producing impressive results, its use is limited by the impact of its hyperparameters on performance. This often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. In view of existing AutoRL approaches dynamically adjusting hyperparameter configurations, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from the RL literature (DQN and SAC) in different kinds of environments (Cartpole and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for more insights on AutoRL problems that can be gained through landscape analyses.
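As a rough illustration of the data-collection loop the abstract describes, the sketch below probes a one-dimensional learning-rate slice of the landscape at several training checkpoints. It is not the authors' implementation: the library choice (stable-baselines3 with Gymnasium's Hopper-v4), the checkpoint steps, the learning-rate grid, and the probe budget are all illustrative assumptions.

```python
# Illustrative sketch only: probe a learning-rate slice of the SAC landscape
# at several points during training (not the authors' implementation).
import gymnasium as gym
import numpy as np
from stable_baselines3 import SAC
from stable_baselines3.common.evaluation import evaluate_policy

CHECKPOINTS = [50_000, 100_000, 200_000]   # assumed training phases to probe
LEARNING_RATES = np.logspace(-5, -3, 5)    # assumed 1D slice of the landscape

records = []                               # (phase, learning_rate, mean_return)
base = SAC("MlpPolicy", "Hopper-v4", verbose=0)
steps_done = 0
for phase in CHECKPOINTS:
    # Advance the base run to the next checkpoint and freeze its state.
    base.learn(total_timesteps=phase - steps_done, reset_num_timesteps=False)
    steps_done = phase
    base.save("ckpt")
    for lr in LEARNING_RATES:
        # Restart from the checkpoint with a different configuration;
        # custom_objects overrides the saved learning rate on load.
        # (The replay buffer is not carried over in this minimal sketch.)
        probe = SAC.load("ckpt", env=gym.make("Hopper-v4"),
                         custom_objects={"learning_rate": float(lr)})
        probe.learn(total_timesteps=10_000, reset_num_timesteps=False)
        mean_return, _ = evaluate_policy(probe, probe.get_env(),
                                         n_eval_episodes=10)
        records.append((phase, float(lr), mean_return))
```

Plotting `records` per checkpoint then yields one slice of the kind of time-varying landscape the paper analyzes.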
Keywords
- Reinforcement learning
- AutoML
- Hyperparameter optimization
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Mohan, A., Benjamins, C., Wienecke, K., Dockhorn, A., & Lindauer, M. (2023). AutoRL Hyperparameter Landscapes. In Second International Conference on Automated Machine Learning. https://doi.org/10.48550/arXiv.2304.02396
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research
TY - GEN
T1 - AutoRL Hyperparameter Landscapes
AU - Mohan, Aditya
AU - Benjamins, Carolin
AU - Wienecke, Konrad
AU - Dockhorn, Alexander
AU - Lindauer, Marius
PY - 2023/7/20
Y1 - 2023/7/20
N2 - Although Reinforcement Learning (RL) has been shown to be capable of producing impressive results, its use is limited by the impact of its hyperparameters on performance. This often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. In view of existing AutoRL approaches dynamically adjusting hyperparameter configurations, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from the RL literature (DQN and SAC) in different kinds of environments (Cartpole and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for more insights on AutoRL problems that can be gained through landscape analyses.
AB - Although Reinforcement Learning (RL) has been shown to be capable of producing impressive results, its use is limited by the impact of its hyperparameters on performance. This often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. In view of existing AutoRL approaches dynamically adjusting hyperparameter configurations, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from the RL literature (DQN and SAC) in different kinds of environments (Cartpole and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for more insights on AutoRL problems that can be gained through landscape analyses.
KW - Reinforcement learning
KW - AutoML
KW - Hyperparameter optimization
U2 - 10.48550/arXiv.2304.02396
DO - 10.48550/arXiv.2304.02396
M3 - Conference contribution
BT - Second International Conference on Automated Machine Learning
ER -
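The RIS record above can be read back into structured form with a few lines of Python. This is a minimal sketch under the assumption that tags use the single-space `TAG - value` form shown here (strict RIS uses a two-space separator); `parse_ris` is an illustrative helper, not a library function.

```python
# Minimal RIS reader sketch: collect each tag's values (AU and KW repeat).
from collections import defaultdict

def parse_ris(text: str) -> dict:
    record = defaultdict(list)
    for line in text.splitlines():
        tag, sep, value = line.partition(" - ")
        if sep and len(tag.strip()) == 2:   # keep only two-letter RIS tags
            record[tag.strip()].append(value.strip())
    return dict(record)

# Example: parse_ris(ris_text)["AU"]
# -> ['Mohan, Aditya', 'Benjamins, Carolin', 'Wienecke, Konrad', ...]
```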