
Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation

Publication: Chapter in book/report/conference proceedings › Conference paper › Research › Peer-reviewed

Authorship

Organisational units

External organisations

  • Technische Universität Berlin
  • Delft University of Technology

Details

Original language: English
Title of host publication: Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025
Editors: Yevgeniy Vorobeychik, Sanmay Das, Ann Nowe
Pages: 5-13
Number of pages: 9
ISBN (electronic): 9798400714269
Publication status: Published - 19 May 2025
Event: 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025 - Detroit Marriott at the Renaissance Center, Detroit, USA
Duration: 19 May 2025 - 23 May 2025

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
ISSN (print): 1548-8403
ISSN (electronic): 1558-2914

Abstract

The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.
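The abstract describes the two components at a high level only: Transition Scoring (TS) rates each transition by its similarity to the target domain, and CLTV aggregates those scores to prioritize trajectories through a curriculum. The sketch below is an illustrative reading of that description, not the authors' implementation: the flattened transition features, the nearest-neighbour distance scorer, the mean aggregation per trajectory, and the linear pacing schedule are all assumptions made for exposition.

import numpy as np

def transition_scores(source_transitions, target_transitions):
    # Assumed TS variant: score each source transition by its negative distance
    # to the closest target transition (higher score = more target-like).
    # Transitions are assumed flattened into fixed-length feature vectors.
    scores = []
    for s in source_transitions:
        dists = np.linalg.norm(target_transitions - s, axis=1)
        scores.append(-dists.min())
    return np.asarray(scores)

def trajectory_values(trajectories, target_transitions):
    # Aggregate transition scores into one value per trajectory (mean score here).
    return np.asarray([
        transition_scores(np.asarray(traj), target_transitions).mean()
        for traj in trajectories
    ])

def curriculum_batches(trajectories, values, num_stages=5):
    # Assumed curriculum: begin with the highest-valued trajectories and
    # gradually admit lower-valued ones in later stages.
    order = np.argsort(values)[::-1]  # best trajectories first
    for stage in range(1, num_stages + 1):
        cutoff = int(np.ceil(len(order) * stage / num_stages))
        yield [trajectories[i] for i in order[:cutoff]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 8-dimensional transition features; target data clusters around 1.0.
    target = rng.normal(loc=1.0, size=(50, 8))
    trajectories = [rng.normal(loc=mu, size=(20, 8)) for mu in (0.0, 0.5, 1.0)]
    values = trajectory_values(trajectories, target)
    for stage, batch in enumerate(curriculum_batches(trajectories, values), start=1):
        print(f"stage {stage}: training on {len(batch)} trajectories")

In this reading, each stage's trajectory set would serve as the training buffer for the underlying offline RL algorithm; the paper's actual scoring and scheduling may differ substantially from this toy version.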

ASJC Scopus subject areas

Cite this

Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. / Abolfazli, Amir; Song, Zekun; Anand, Avishek et al.
Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025. Eds. Yevgeniy Vorobeychik; Sanmay Das; Ann Nowe. 2025. pp. 5-13 (Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS).


Abolfazli, A, Song, Z, Anand, A & Nejdl, W 2025, Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. in Y Vorobeychik, S Das & A Nowe (eds), Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, pp. 5-13, 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025, Detroit, Michigan, USA, 19 May 2025. https://doi.org/10.48550/arXiv.2502.00601
Abolfazli, A., Song, Z., Anand, A., & Nejdl, W. (2025). Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. In Y. Vorobeychik, S. Das, & A. Nowe (Eds.), Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025 (pp. 5-13). (Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS). https://doi.org/10.48550/arXiv.2502.00601
Abolfazli A, Song Z, Anand A, Nejdl W. Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. In Vorobeychik Y, Das S, Nowe A, editors, Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025. 2025. p. 5-13. (Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS). doi: 10.48550/arXiv.2502.00601
Abolfazli, Amir ; Song, Zekun ; Anand, Avishek et al. / Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025. Eds. Yevgeniy Vorobeychik ; Sanmay Das ; Ann Nowe. 2025. pp. 5-13 (Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS).
@inproceedings{26b70264a08c4684a07214ac3dc52017,
title = "Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation",
abstract = "The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.",
keywords = "Offline Reinforcement Learning, Trajectory Valuation",
author = "Amir Abolfazli and Zekun Song and Avishek Anand and Wolfgang Nejdl",
note = "Publisher Copyright: {\textcopyright} 2025 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org).; 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025 : AAMAS 2025, AAMAS 2025 ; Conference date: 19-05-2025 Through 23-05-2025",
year = "2025",
month = may,
day = "19",
doi = "10.48550/arXiv.2502.00601",
language = "English",
series = "Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS",
pages = "5--13",
editor = "Yevgeniy Vorobeychik and Sanmay Das and Ann Nowe",
booktitle = "Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025",

}


TY - GEN

T1 - Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation

AU - Abolfazli, Amir

AU - Song, Zekun

AU - Anand, Avishek

AU - Nejdl, Wolfgang

N1 - Publisher Copyright: © 2025 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org).

PY - 2025/5/19

Y1 - 2025/5/19

N2 - The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.

AB - The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.

KW - Offline Reinforcement Learning

KW - Trajectory Valuation

UR - http://www.scopus.com/inward/record.url?scp=105009826542&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2502.00601

DO - 10.48550/arXiv.2502.00601

M3 - Conference contribution

AN - SCOPUS:105009826542

T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS

SP - 5

EP - 13

BT - Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025

A2 - Vorobeychik, Yevgeniy

A2 - Das, Sanmay

A2 - Nowe, Ann

T2 - 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025

Y2 - 19 May 2025 through 23 May 2025

ER -
