Details
| Original language | English |
|---|---|
| Title of host publication | The 17th European Workshop on Reinforcement Learning (EWRL 2024) |
| Publication status | Accepted/In press - 30 Sept 2024 |
Abstract

While Reinforcement Learning (RL) has demonstrated promising results, its practical application remains limited due to brittleness in complex environments characterized by attributes such as high-dimensional observations, sparse rewards, partial observability, and changing dynamics. To overcome these challenges, we propose enhancing representation learning in RL by incorporating structural inductive biases through Graph Neural Networks (GNNs). Our approach leverages a structured GNN latent model to capture relational structures, thereby improving belief representation end-to-end. We validate our model's benefits through empirical evaluation in selected challenging environments within the Minigrid suite, which offers relational complexity, against a baseline that uses a Multi-Layer Perceptron (MLP) as the latent model. Additionally, we explore the robustness of these representations in continually changing environments by increasing the size and adding decision points in the form of distractors. Through this analysis, we offer initial insights into the advantages of combining relational latent representations using GNNs for end-to-end representation learning in RL and pave the way for future methods of incorporating graph structure for representation learning in RL.
Cite
Mohan, A., & Lindauer, M. (2024). Towards Enhancing Predictive Representations using Relational Structure in Reinforcement Learning. In The 17th European Workshop on Reinforcement Learning (EWRL 2024).
Publication: Contribution to book/report/anthology/conference proceedings › Abstract in conference proceedings › Research › Peer-reviewed
TY - CHAP
T1 - Towards Enhancing Predictive Representations using Relational Structure in Reinforcement Learning
AU - Mohan, Aditya
AU - Lindauer, Marius
PY - 2024/9/30
Y1 - 2024/9/30
N2 - While Reinforcement Learning (RL) has demonstrated promising results, its practical application remains limited due to brittleness in complex environments characterized by attributes such as high-dimensional observations, sparse rewards, partial observability, and changing dynamics. To overcome these challenges, we propose enhancing representation learning in RL by incorporating structural inductive biases through Graph Neural Networks (GNNs). Our approach leverages a structured GNN latent model to capture relational structures, thereby improving belief representation end-to-end. We validate our model’s benefits through empirical evaluation in selected challenging environments within the Minigrid suite, which offers relational complexity, against a baseline that uses a Multi-Layer Perceptron (MLP) as the latent model. Additionally, we explore the robustness of these representations in continually changing environments by increasing the size and adding decision points in the form of distractors. Through this analysis, we offer initial insights into the advantages of combining relational latent representations using GNNs for end-to-end representation learning in RL and pave the way for future methods of incorporating graph structure for representation learning in RL.
AB - While Reinforcement Learning (RL) has demonstrated promising results, its practical application remains limited due to brittleness in complex environments characterized by attributes such as high-dimensional observations, sparse rewards, partial observability, and changing dynamics. To overcome these challenges, we propose enhancing representation learning in RL by incorporating structural inductive biases through Graph Neural Networks (GNNs). Our approach leverages a structured GNN latent model to capture relational structures, thereby improving belief representation end-to-end. We validate our model’s benefits through empirical evaluation in selected challenging environments within the Minigrid suite, which offers relational complexity, against a baseline that uses a Multi-Layer Perceptron (MLP) as the latent model. Additionally, we explore the robustness of these representations in continually changing environments by increasing the size and adding decision points in the form of distractors. Through this analysis, we offer initial insights into the advantages of combining relational latent representations using GNNs for end-to-end representation learning in RL and pave the way for future methods of incorporating graph structure for representation learning in RL.
KW - cs.LG
M3 - Conference abstract
BT - The 17th European Workshop on Reinforcement Learning (EWRL 2024)
ER -