Details
Original language | English |
---|---|
Qualification | Doctor rerum naturalium |
Degree-granting institution | |
Supervised by | |
Date of award | 18 Nov 2024 |
Place of publication | Hannover |
Publication status | Published - 21 Nov 2024 |
Abstract
Reinforcement learning is a machine learning paradigm that allows learning through interaction. It intertwines data collection and model training into a single problem statement, enabling the solution of complex sequential decision making problems in domains like robotics, biology or physics. Not included in this list is the domain of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance on a given task. This lack of adoption of reinforcement learning is potentially due to the fact that the entanglement of data collection and learning makes for an extremely challenging setting; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy - including the choice of algorithm, hyperparameters and even task variation - can shift as well. Thus, applying reinforcement learning directly to automated machine learning is arguably not possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. In turn, this application of reinforcement learning leads us to the question of how reinforcement learning itself can be configured to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending algorithm configuration to the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning; and iii. laying the groundwork for generalizing configuration approaches in reinforcement learning through contextual reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Hannover, 2024. 91 p.
Publication: Thesis › Doctoral thesis
TY - BOOK
T1 - Reinforcing automated machine learning
T2 - bridging AutoML and reinforcement learning
AU - Eimer, Theresa
PY - 2024/11/21
Y1 - 2024/11/21
N2 - Reinforcement learning is a machine learning paradigm that allows learning through interaction. It intertwines data collection and model training into a single problem statement, enabling the solution of complex sequential decision making problems in domains like robotics, biology or physics. Not included in this list is the domain of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance on a given task - even though we have long known that sequential decision making is important in many facets of automated machine learning. This lack of adoption of reinforcement learning is potentially due to the fact that the entanglement of data collection and learning in reinforcement learning makes for a challenging machine learning setting; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy - including the choice of algorithm, algorithm components, hyperparameters and even task variation - can shift as well. Thus applying reinforcement learning directly to an automated machine learning task might not be possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning in automated machine learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. In turn, applying reinforcement learning in automated machine learning leads us to a closer examination of how to configure reinforcement learning itself to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending the algorithm configuration paradigm to allow the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning; and iii. laying the groundwork for generalization of reinforcement learning configuration approaches through contextual reinforcement learning. An important focus throughout is providing insights into the inner workings of reinforcement learning with respect to its design decisions, as of yet underexplored territory. Thus we are able to provide actionable recommendations for reinforcement learning practitioners as well as a broad base for future work on automated reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.
AB - Reinforcement learning is a machine learning paradigm that allows learning through interaction. It intertwines data collection and model training into a single problem statement, enabling the solution of complex sequential decision making problems in domains like robotics, biology or physics. Not included in this list is the domain of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance on a given task - even though we have long known that sequential decision making is important in many facets of automated machine learning. This lack of adoption of reinforcement learning is potentially due to the fact that the entanglement of data collection and learning in reinforcement learning makes for a challenging machine learning setting; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy - including the choice of algorithm, algorithm components, hyperparameters and even task variation - can shift as well. Thus applying reinforcement learning directly to an automated machine learning task might not be possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning in automated machine learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. In turn, applying reinforcement learning in automated machine learning leads us to a closer examination of how to configure reinforcement learning itself to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending the algorithm configuration paradigm to allow the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning; and iii. laying the groundwork for generalization of reinforcement learning configuration approaches through contextual reinforcement learning. An important focus throughout is providing insights into the inner workings of reinforcement learning with respect to its design decisions, as of yet underexplored territory. Thus we are able to provide actionable recommendations for reinforcement learning practitioners as well as a broad base for future work on automated reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.
U2 - 10.15488/18193
DO - 10.15488/18193
M3 - Doctoral thesis
CY - Hannover
ER -