Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Amir Abolfazli
  • Zekun Song
  • Avishek Anand
  • Wolfgang Nejdl

Research Organisations

External Research Organisations

  • Technische Universität Berlin
  • Delft University of Technology (TU Delft)

Details

Original language: English
Title of host publication: Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025
Editors: Yevgeniy Vorobeychik, Sanmay Das, Ann Nowe
Pages: 5-13
Number of pages: 9
ISBN (electronic): 9798400714269
Publication status: Published - 19 May 2025
Event: 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025 - Detroit Marriott at the Renaissance Center, Detroit, United States
Duration: 19 May 2025 - 23 May 2025

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
ISSN (Print): 1548-8403
ISSN (electronic): 1558-2914

Abstract

The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.
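The abstract names two components without giving their definitions: Transition Scoring (TS), which rates each transition by similarity to the target domain, and CLTV, which uses those scores to feed trajectories to the learner in a curriculum. As a loose illustration only, the pipeline might look like the sketch below; the k-nearest-neighbour similarity measure, the stage schedule, and all function names are hypothetical stand-ins, not the TS/CLTV definitions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_scores(source, target):
    """Score each source transition by proximity to the target dataset
    (here: negative mean distance to the k nearest target transitions).
    Higher score = more target-like."""
    k = min(5, len(target))
    # pairwise distances between every source and target transition
    dists = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=-1)
    knn = np.sort(dists, axis=1)[:, :k].mean(axis=1)
    return -knn

def curriculum(trajectories, target, stages=3):
    """Value each trajectory by its mean transition score, then release
    trajectories to the learner in stages, most target-like first."""
    values = [transition_scores(t, target).mean() for t in trajectories]
    order = np.argsort(values)[::-1]          # highest-valued trajectories first
    released = []
    for stage, idx in enumerate(np.array_split(order, stages)):
        released.extend(idx.tolist())
        yield stage, [trajectories[i] for i in released]

# Toy data: a trajectory is an array of transition feature vectors.
# Source trajectories drawn further from the target mean score lower.
target = rng.normal(0.0, 1.0, size=(20, 4))
trajs = [rng.normal(m, 1.0, size=(10, 4)) for m in (0.0, 2.0, 5.0)]
for stage, batch in curriculum(trajs, target):
    print(stage, len(batch))  # the training pool grows stage by stage
```

At each stage an offline RL algorithm would be trained on the released pool, so early updates see only the most target-like trajectories and the noisier source data enters later.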

Keywords

    Offline Reinforcement Learning, Trajectory Valuation

Cite this

Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. / Abolfazli, Amir; Song, Zekun; Anand, Avishek et al.
Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025. ed. / Yevgeniy Vorobeychik; Sanmay Das; Ann Nowe. 2025. p. 5-13 (Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS).

Abolfazli, A, Song, Z, Anand, A & Nejdl, W 2025, Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. in Y Vorobeychik, S Das & A Nowe (eds), Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, pp. 5-13, 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025, Detroit, Michigan, United States, 19 May 2025. https://doi.org/10.48550/arXiv.2502.00601
Abolfazli, A., Song, Z., Anand, A., & Nejdl, W. (2025). Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. In Y. Vorobeychik, S. Das, & A. Nowe (Eds.), Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025 (pp. 5-13). (Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS). https://doi.org/10.48550/arXiv.2502.00601
Abolfazli A, Song Z, Anand A, Nejdl W. Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. In Vorobeychik Y, Das S, Nowe A, editors, Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025. 2025. p. 5-13. (Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS). doi: 10.48550/arXiv.2502.00601
Abolfazli, Amir ; Song, Zekun ; Anand, Avishek et al. / Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation. Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025. editor / Yevgeniy Vorobeychik ; Sanmay Das ; Ann Nowe. 2025. pp. 5-13 (Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS).
BibTeX
@inproceedings{26b70264a08c4684a07214ac3dc52017,
title = "Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation",
abstract = "The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.",
keywords = "Offline Reinforcement Learning, Trajectory Valuation",
author = "Amir Abolfazli and Zekun Song and Avishek Anand and Wolfgang Nejdl",
note = "Publisher Copyright: {\textcopyright} 2025 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org).; 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025 : AAMAS 2025, AAMAS 2025 ; Conference date: 19-05-2025 Through 23-05-2025",
year = "2025",
month = may,
day = "19",
doi = "10.48550/arXiv.2502.00601",
language = "English",
series = "Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS",
pages = "5--13",
editor = "Yevgeniy Vorobeychik and Sanmay Das and Ann Nowe",
booktitle = "Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025",

}

RIS

TY - GEN

T1 - Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation

AU - Abolfazli, Amir

AU - Song, Zekun

AU - Anand, Avishek

AU - Nejdl, Wolfgang

N1 - Publisher Copyright: © 2025 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org).

PY - 2025/5/19

Y1 - 2025/5/19

N2 - The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.

AB - The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.

KW - Offline Reinforcement Learning

KW - Trajectory Valuation

UR - http://www.scopus.com/inward/record.url?scp=105009826542&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2502.00601

DO - 10.48550/arXiv.2502.00601

M3 - Conference contribution

AN - SCOPUS:105009826542

T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS

SP - 5

EP - 13

BT - Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025

A2 - Vorobeychik, Yevgeniy

A2 - Das, Sanmay

A2 - Nowe, Ann

T2 - 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025

Y2 - 19 May 2025 through 23 May 2025

ER -
