Towards Enhancing Predictive Representations using Relational Structure in Reinforcement Learning

Publication: Contribution to book/report/anthology/conference proceedings › Abstract in conference proceedings › Research › Peer-reviewed


Details

Original language: English
Title of host publication: The 17th European Workshop on Reinforcement Learning (EWRL 2024)
Publication status: Accepted/In press - 30 Sept 2024

Abstract

While Reinforcement Learning (RL) has demonstrated promising results, its practical application remains limited due to brittleness in complex environments characterized by attributes such as high-dimensional observations, sparse rewards, partial observability, and changing dynamics. To overcome these challenges, we propose enhancing representation learning in RL by incorporating structural inductive biases through Graph Neural Networks (GNNs). Our approach leverages a structured GNN latent model to capture relational structures, thereby improving belief representation end-to-end. We validate our model’s benefits through empirical evaluation in selected challenging environments within the Minigrid suite, which offers relational complexity, against a baseline that uses a Multi-Layer Perceptron (MLP) as the latent model. Additionally, we explore the robustness of these representations in continually changing environments by increasing the size and adding decision points in the form of distractors. Through this analysis, we offer initial insights into the advantages of combining relational latent representations using GNNs for end-to-end representation learning in RL and pave the way for future methods of incorporating graph structure for representation learning in RL.
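
The abstract describes the approach only at a high level. The minimal PyTorch sketch below illustrates the general idea of replacing an MLP latent model with a relational, graph-based one over a MiniGrid-style partial observation. Every class, function, and hyperparameter name here (MLPLatent, GNNLatent, grid_adjacency, the embedding size, the number of message-passing rounds) is a hypothetical choice for illustration; the paper's actual architecture, losses, and training loop are not specified in this record.

# Minimal sketch (not the authors' code): contrast an MLP latent model with a
# relational GNN latent model over a MiniGrid-style partial observation.
import torch
import torch.nn as nn


def grid_adjacency(h: int, w: int) -> torch.Tensor:
    """Row-normalized adjacency (with self-loops) of a 4-connected h x w grid."""
    n = h * w
    adj = torch.eye(n)
    for r in range(h):
        for c in range(w):
            i = r * w + c
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    adj[i, rr * w + cc] = 1.0
    return adj / adj.sum(dim=1, keepdim=True)


class MLPLatent(nn.Module):
    """Baseline: flatten the observation and map it to a belief vector."""
    def __init__(self, h, w, c, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(h * w * c, dim),
                                 nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, obs):          # obs: (B, H, W, C)
        return self.net(obs)


class GNNLatent(nn.Module):
    """Relational variant: treat each grid cell as a node and propagate
    messages along the grid structure before pooling into a belief vector."""
    def __init__(self, h, w, c, dim=128, rounds=2):
        super().__init__()
        self.register_buffer("adj", grid_adjacency(h, w))
        self.embed = nn.Linear(c, dim)
        self.msg = nn.ModuleList([nn.Linear(dim, dim) for _ in range(rounds)])
        self.out = nn.Linear(dim, dim)

    def forward(self, obs):          # obs: (B, H, W, C)
        b, h, w, c = obs.shape
        x = self.embed(obs.reshape(b, h * w, c))   # one node per grid cell
        for lin in self.msg:                       # message passing over the grid
            x = torch.relu(lin(self.adj @ x)) + x
        return self.out(x.mean(dim=1))             # pooled belief vector


obs = torch.rand(4, 7, 7, 3)                       # MiniGrid-like partial view
print(MLPLatent(7, 7, 3)(obs).shape, GNNLatent(7, 7, 3)(obs).shape)

The contrast to note is structural: the MLP flattens away the spatial layout of the observation, whereas the GNN variant keeps one node per grid cell and lets messages flow along the grid's relational structure before pooling into the belief representation, which is the kind of structural inductive bias the abstract argues for.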

Cite

Towards Enhancing Predictive Representations using Relational Structure in Reinforcement Learning. / Mohan, Aditya; Lindauer, Marius.
The 17th European Workshop on Reinforcement Learning (EWRL 2024). 2024.


Mohan, A., & Lindauer, M. (Accepted/In press). Towards Enhancing Predictive Representations using Relational Structure in Reinforcement Learning. In The 17th European Workshop on Reinforcement Learning (EWRL 2024).
@inbook{6a3326e9566a4b5da650197a9e19a46a,
title = "Towards Enhancing Predictive Representations using Relational Structure in Reinforcement Learning",
abstract = "While Reinforcement Learning (RL) has demonstrated promising results, its practical application remains limited due to brittleness in complex environments characterized by attributes such as high-dimensional observations, sparse rewards, partial observability, and changing dynamics. To overcome these challenges, we propose enhancing representation learning in RL by incorporating structural inductive biases through Graph Neural Networks (GNNs). Our approach leverages a structured GNN latent model to capture relational structures, thereby improving belief representation end-to-end. We validate our model{\textquoteright}s benefits through empirical evaluation in selected challenging environments within the Minigrid suite, which offers relational complexity, against a baseline that uses a Multi-Layer Perceptron (MLP) as the latent model. Additionally, we explore the robustness of these representations in continually changing environments by increasing the size and adding decision points in the form of distractors. Through this analysis, we offer initial insights into the advantages of combining relational latent representations using GNNs for end-to-end representation learning in RL and pave the way for future methods of incorporating graph structure for representation learning in RL.",
keywords = "cs.LG",
author = "Aditya Mohan and Marius Lindauer",
year = "2024",
month = sep,
day = "30",
language = "English",
booktitle = "The 17th European Workshop on Reinforcement Learning (EWRL 2024)",

}


TY - CHAP

T1 - Towards Enhancing Predictive Representations using Relational Structure in Reinforcement Learning

AU - Mohan, Aditya

AU - Lindauer, Marius

PY - 2024/9/30

Y1 - 2024/9/30

AB - While Reinforcement Learning (RL) has demonstrated promising results, its practical application remains limited due to brittleness in complex environments characterized by attributes such as high-dimensional observations, sparse rewards, partial observability, and changing dynamics. To overcome these challenges, we propose enhancing representation learning in RL by incorporating structural inductive biases through Graph Neural Networks (GNNs). Our approach leverages a structured GNN latent model to capture relational structures, thereby improving belief representation end-to-end. We validate our model’s benefits through empirical evaluation in selected challenging environments within the Minigrid suite, which offers relational complexity, against a baseline that uses a Multi-Layer Perceptron (MLP) as the latent model. Additionally, we explore the robustness of these representations in continually changing environments by increasing the size and adding decision points in the form of distractors. Through this analysis, we offer initial insights into the advantages of combining relational latent representations using GNNs for end-to-end representation learning in RL and pave the way for future methods of incorporating graph structure for representation learning in RL.

KW - cs.LG

M3 - Conference abstract

BT - The 17th European Workshop on Reinforcement Learning (EWRL 2024)

ER -
