
Reinforcing automated machine learning: bridging AutoML and reinforcement learning

Publication: Thesis › Doctoral thesis

Authorship

Organisational units

Details

Original language: English
Qualification: Doctor rerum naturalium
Awarding institution
Supervised by
Date of award: 18 Nov 2024
Place of publication: Hannover
Publication status: Published - 21 Nov 2024

Abstract

Reinforcement learning is a machine learning paradigm that enables learning through interaction. It intertwines data collection and model training into a single process, which makes it possible to solve complex sequential decision-making problems in domains such as robotics, biology or physics. Not included in this list is the field of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance, even though we have long known that sequential decision making is important in many facets of automated machine learning.
This absence of reinforcement learning may be due to the fact that the entanglement of data collection and learning is highly challenging; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy, including the choice of algorithm, hyperparameters and even task variation, can shift as well. Directly applying reinforcement learning to automated machine learning is therefore arguably not possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. This application of reinforcement learning in turn raises the question of how reinforcement learning itself can be configured to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending algorithm configuration to the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning; and iii. laying the groundwork for generalizing configuration approaches for reinforcement learning through contextual reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.
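To make the idea of dynamic algorithm configuration more concrete, below is a minimal, hypothetical sketch of the setup described above: a controller, standing in for a learned reinforcement learning policy, picks a hyperparameter value at every step of a running target algorithm and receives feedback on the algorithm's progress. All names (TargetAlgorithm, controller_policy) and the toy optimizer are illustrative assumptions, not code from the thesis.

# Minimal, hypothetical sketch of dynamic algorithm configuration (DAC):
# a controller adjusts a hyperparameter of a target algorithm at every
# step of its run, based on the algorithm's observed state. In the thesis
# setting, the controller would be a trained reinforcement learning policy;
# here it is a random stand-in to keep the example self-contained.

import random


class TargetAlgorithm:
    """Toy iterative optimizer whose step size can be reconfigured online."""

    def __init__(self):
        self.x = 10.0  # current iterate; the optimum is at 0

    def state(self):
        return abs(self.x)  # observable progress signal for the controller

    def step(self, step_size):
        self.x -= step_size * self.x  # one iteration with the chosen setting
        return -abs(self.x)  # reward: closer to the optimum is better


def controller_policy(observation):
    """Hypothetical stand-in for a learned policy: pick a step size."""
    return random.choice([0.1, 0.5, 0.9])


algo = TargetAlgorithm()
for t in range(20):
    obs = algo.state()
    action = controller_policy(obs)  # hyperparameter chosen during the run
    reward = algo.step(action)       # feedback a real controller would learn from
print(f"final distance to optimum: {algo.state():.4f}")

The point of the sketch is only the interaction loop: unlike static algorithm configuration, the setting is revisited at every step of the target algorithm's runtime, which is what makes reinforcement learning a natural fit.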

Cite

Reinforcing automated machine learning: bridging AutoML and reinforcement learning. / Eimer, Theresa.
Hannover, 2024. 91 p.

Publication: Thesis › Doctoral thesis

Eimer, T 2024, 'Reinforcing automated machine learning: bridging AutoML and reinforcement learning', Doctor rerum naturalium, Gottfried Wilhelm Leibniz Universität Hannover, Hannover. https://doi.org/10.15488/18193
BibTeX
@phdthesis{db800b51cc6344f082c975a1d04503b2,
title = "Reinforcing automated machine learning: bridging AutoML and reinforcement learning",
abstract = "Reinforcement learning is a machine learning paradigm that allows learning through interaction. It intertwines data collection and model training into a single problem statement, enabling the solution of complex sequential decision making problems in domains like robotics, biology or physics. Not included in this list is the domain of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance on a given task - even though we have long known that sequential decision making is important in many facets of automated machine learning. This lack of adoption of reinforcement learning is potentially due to the fact that the entanglement of data collection and learning in reinforcement learning makes for a challenging machine learning setting; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy - including the choice of algorithm, algorithm components, hyperparameters and even task variation - can shift as well. Thus applying reinforcement learning directly to an automated machine learning task might not be possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning in automated machine learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. In turn, applying reinforcement learning in automated machine learning leads us to a closer examination of how to configure reinforcement learning itself to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending the algorithm configuration paradigm to allow the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning and; iii. laying the groundwork for generalization of reinforcement learning configuration approaches through contextual reinforcement learning. An important focus throughout is providing insights into the inner workings of reinforcement learning with respect to its design decisions, as of yet underexplored territory. Thus we are able to provide actionable recommendations for reinforcement learning practitioners as well as a broad base for future work on automated reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.",
author = "Theresa Eimer",
year = "2024",
month = nov,
day = "21",
doi = "10.15488/18193",
language = "English",
school = "Leibniz University Hannover",

}

RIS

TY - BOOK

T1 - Reinforcing automated machine learning

T2 - bridging AutoML and reinforcement learning

AU - Eimer, Theresa

PY - 2024/11/21

Y1 - 2024/11/21

N2 - Reinforcement learning is a machine learning paradigm that allows learning through interaction. It intertwines data collection and model training into a single problem statement, enabling the solution of complex sequential decision making problems in domains like robotics, biology or physics. Not included in this list is the domain of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance on a given task - even though we have long known that sequential decision making is important in many facets of automated machine learning. This lack of adoption of reinforcement learning is potentially due to the fact that the entanglement of data collection and learning in reinforcement learning makes for a challenging machine learning setting; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy - including the choice of algorithm, algorithm components, hyperparameters and even task variation - can shift as well. Thus applying reinforcement learning directly to an automated machine learning task might not be possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning in automated machine learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. In turn, applying reinforcement learning in automated machine learning leads us to a closer examination of how to configure reinforcement learning itself to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending the algorithm configuration paradigm to allow the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning and; iii. laying the groundwork for generalization of reinforcement learning configuration approaches through contextual reinforcement learning. An important focus throughout is providing insights into the inner workings of reinforcement learning with respect to its design decisions, as of yet underexplored territory. Thus we are able to provide actionable recommendations for reinforcement learning practitioners as well as a broad base for future work on automated reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.

AB - Reinforcement learning is a machine learning paradigm that allows learning through interaction. It intertwines data collection and model training into a single problem statement, enabling the solution of complex sequential decision making problems in domains like robotics, biology or physics. Not included in this list is the domain of automated machine learning, which aims to automatically configure machine learning algorithms for optimal performance on a given task - even though we have long known that sequential decision making is important in many facets of automated machine learning. This lack of adoption of reinforcement learning is potentially due to the fact that the entanglement of data collection and learning in reinforcement learning makes for a challenging machine learning setting; since the distribution of data seen during training shifts substantially as the agent improves, the optimal solution strategy - including the choice of algorithm, algorithm components, hyperparameters and even task variation - can shift as well. Thus applying reinforcement learning directly to an automated machine learning task might not be possible without considerable effort and expertise. This thesis bridges the gap between the fields by motivating the use of reinforcement learning in automated machine learning for dynamic algorithm configuration, a novel paradigm for configuring algorithms during their runtime. In turn, applying reinforcement learning in automated machine learning leads us to a closer examination of how to configure reinforcement learning itself to be efficient, reliable and generalizable when applied to new domains. We accomplish this in three parts: i. extending the algorithm configuration paradigm to allow the dynamic configuration and analysis of algorithms; ii. a principled investigation of the landscape of design decisions in reinforcement learning and; iii. laying the groundwork for generalization of reinforcement learning configuration approaches through contextual reinforcement learning. An important focus throughout is providing insights into the inner workings of reinforcement learning with respect to its design decisions, as of yet underexplored territory. Thus we are able to provide actionable recommendations for reinforcement learning practitioners as well as a broad base for future work on automated reinforcement learning. Overall, this thesis provides an in-depth look into the intersection of automated machine learning and reinforcement learning. We believe it will serve as a foundation for a closer connection between the fields by demonstrating the great potential of reinforcement learning for automated machine learning and vice versa.

U2 - 10.15488/18193

DO - 10.15488/18193

M3 - Doctoral thesis

CY - Hannover

ER -
