Details
Original language | English
---|---
Qualification | Doctor rerum naturalium
Awarding Institution |
Supervised by |
Date of Award | 18 Nov 2024
Place of Publication | Hannover
Publication status | Published - 21 Nov 2024
Abstract

Reinforcement learning is a machine learning paradigm that allows learning through interaction. It intertwines data collection and model training into a single problem statement, enabling the solution of complex sequential decision making problems in domains like robotics, biology or physics. Not included in this list is the domain of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance on a given task - even though we have long known that sequential decision making is important in many facets of automated machine learning. This lack of adoption of reinforcement learning is potentially due to the fact that the entanglement of data collection and learning in reinforcement learning makes for a challenging machine learning setting; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy - including the choice of algorithm, algorithm components, hyperparameters and even task variation - can shift as well. Thus applying reinforcement learning directly to an automated machine learning task might not be possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning in automated machine learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. In turn, applying reinforcement learning in automated machine learning leads us to a closer examination of how to configure reinforcement learning itself to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending the algorithm configuration paradigm to allow the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning; and iii. laying the groundwork for generalization of reinforcement learning configuration approaches through contextual reinforcement learning. An important focus throughout is providing insights into the inner workings of reinforcement learning with respect to its design decisions, as of yet underexplored territory. Thus we are able to provide actionable recommendations for reinforcement learning practitioners as well as a broad base for future work on automated reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.
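The abstract frames dynamic algorithm configuration as a sequential decision problem: an algorithm's settings are adjusted while it runs, based on what the run has shown so far. As a minimal, hypothetical sketch of that framing (not the setup or tooling used in the thesis), the code below lets a policy pick the learning rate for each segment of a toy gradient-descent run and scores it by per-segment progress; all names (run_segment, dac_episode, schedule_policy) are invented for this illustration.

```python
import random

# Toy "inner algorithm": gradient descent on f(x) = x^2.
# The dynamic configuration task is to choose a learning rate
# for each segment of the run, based on observed progress.

def run_segment(x, lr, steps=10):
    """Run a few gradient steps with a fixed learning rate and return the new iterate."""
    for _ in range(steps):
        x = x - lr * 2.0 * x  # gradient of x^2 is 2x
    return x

def dac_episode(policy, x0=10.0, segments=8):
    """One episode of dynamic configuration: at every segment the policy
    observes the current state of the run and chooses the next learning rate."""
    x, total_reward = x0, 0.0
    for t in range(segments):
        obs = (t, abs(x))             # a minimal observation of the run so far
        lr = policy(obs)              # the per-step configuration decision
        new_x = run_segment(x, lr)
        reward = abs(x) - abs(new_x)  # reward = progress made in this segment
        total_reward += reward
        x = new_x
    return total_reward

# A hand-written baseline: large steps early, small steps late.
def schedule_policy(obs):
    t, _ = obs
    return 0.4 if t < 4 else 0.05

# A random policy, roughly the behaviour an RL agent would start from.
def random_policy(obs):
    return random.choice([0.05, 0.1, 0.2, 0.4])

if __name__ == "__main__":
    random.seed(0)
    print("scheduled policy return:", round(dac_episode(schedule_policy), 3))
    print("random policy return:   ", round(dac_episode(random_policy), 3))
```

In this toy setting a reinforcement learning agent would take the place of the hand-written schedule, learning the per-segment configuration from the reward signal alone.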
Cite this
Eimer, T. (2024). Reinforcing automated machine learning: bridging AutoML and reinforcement learning. Doctoral thesis. Hannover, 2024. 91 p. https://doi.org/10.15488/18193
Research output: Thesis › Doctoral thesis
TY - BOOK
T1 - Reinforcing automated machine learning
T2 - bridging AutoML and reinforcement learning
AU - Eimer, Theresa
PY - 2024/11/21
Y1 - 2024/11/21
N2 - Reinforcement learning is a machine learning paradigm that allows learning through interaction. It intertwines data collection and model training into a single problem statement, enabling the solution of complex sequential decision making problems in domains like robotics, biology or physics. Not included in this list is the domain of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance on a given task - even though we have long known that sequential decision making is important in many facets of automated machine learning. This lack of adoption of reinforcement learning is potentially due to the fact that the entanglement of data collection and learning in reinforcement learning makes for a challenging machine learning setting; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy - including the choice of algorithm, algorithm components, hyperparameters and even task variation - can shift as well. Thus applying reinforcement learning directly to an automated machine learning task might not be possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning in automated machine learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. In turn, applying reinforcement learning in automated machine learning leads us to a closer examination of how to configure reinforcement learning itself to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending the algorithm configuration paradigm to allow the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning; and iii. laying the groundwork for generalization of reinforcement learning configuration approaches through contextual reinforcement learning. An important focus throughout is providing insights into the inner workings of reinforcement learning with respect to its design decisions, as of yet underexplored territory. Thus we are able to provide actionable recommendations for reinforcement learning practitioners as well as a broad base for future work on automated reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.
AB - Reinforcement learning is a machine learning paradigm that allows learning through interaction. It intertwines data collection and model training into a single problem statement, enabling the solution of complex sequential decision making problems in domains like robotics, biology or physics. Not included in this list is the domain of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance on a given task - even though we have long known that sequential decision making is important in many facets of automated machine learning. This lack of adoption of reinforcement learning is potentially due to the fact that the entanglement of data collection and learning in reinforcement learning makes for a challenging machine learning setting; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy - including the choice of algorithm, algorithm components, hyperparameters and even task variation - can shift as well. Thus applying reinforcement learning directly to an automated machine learning task might not be possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning in automated machine learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. In turn, applying reinforcement learning in automated machine learning leads us to a closer examination of how to configure reinforcement learning itself to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending the algorithm configuration paradigm to allow the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning; and iii. laying the groundwork for generalization of reinforcement learning configuration approaches through contextual reinforcement learning. An important focus throughout is providing insights into the inner workings of reinforcement learning with respect to its design decisions, as of yet underexplored territory. Thus we are able to provide actionable recommendations for reinforcement learning practitioners as well as a broad base for future work on automated reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.
U2 - 10.15488/18193
DO - 10.15488/18193
M3 - Doctoral thesis
CY - Hannover
ER -