Details
Original language | English |
---|---|
Title of host publication | Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025 |
Editors | Yevgeniy Vorobeychik, Sanmay Das, Ann Nowe |
Pages | 5-13 |
Number of pages | 9 |
ISBN (electronic) | 9798400714269 |
Publication status | Published - 19 May 2025 |
Event | 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025 - Detroit Marriott at the Renaissance Center, Detroit, United States. Duration: 19 May 2025 → 23 May 2025 |
Publication series
Name | Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS |
---|---|
ISSN (Print) | 1548-8403 |
ISSN (electronic) | 1558-2914 |
Abstract
The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.
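The abstract describes Transition Scoring (TS) and Curriculum Learning-Based Trajectory Valuation (CLTV) only at a high level. As a rough mental model (not the authors' implementation), the toy Python sketch below scores source transitions by nearest-neighbour similarity to a small target batch, averages those scores per trajectory, and admits trajectories into training from highest- to lowest-valued in stages; every name, the similarity measure, the mean aggregation, and the three-stage curriculum are illustrative assumptions.

```python
# Illustrative toy sketch only: all names, the nearest-neighbour similarity,
# the mean aggregation, and the staged curriculum are assumptions made for
# illustration and are NOT taken from the paper.
import numpy as np

rng = np.random.default_rng(0)


def transition_scores(source_transitions, target_transitions):
    """Score each source transition by similarity to the target domain,
    approximated here as the negative distance to its nearest target
    transition (flattened (state, action, next_state) vectors)."""
    scores = np.empty(len(source_transitions))
    for i, t in enumerate(source_transitions):
        scores[i] = -np.linalg.norm(target_transitions - t, axis=1).min()
    return scores


def trajectory_values(scores, trajectory_ids):
    """Aggregate transition scores into one value per trajectory (mean)."""
    return {traj: scores[trajectory_ids == traj].mean()
            for traj in np.unique(trajectory_ids)}


def curriculum_stages(values, trajectory_ids, num_stages=3):
    """Yield boolean masks admitting trajectories from highest- to
    lowest-valued, so training starts on the most target-like data."""
    ranked = sorted(values, key=values.get, reverse=True)
    per_stage = max(1, len(ranked) // num_stages)
    admitted = []
    for stage in range(num_stages):
        end = (stage + 1) * per_stage if stage < num_stages - 1 else len(ranked)
        admitted.extend(ranked[stage * per_stage:end])
        yield np.isin(trajectory_ids, admitted)


# Toy mixed dataset: 20 source trajectories of 10 transitions each plus a
# small batch of higher-quality target-domain transitions.
dim = 6  # flattened (s, a, s') dimensionality, arbitrary for the demo
source = rng.normal(size=(200, dim))
target = rng.normal(loc=0.5, size=(30, dim))
traj_ids = np.repeat(np.arange(20), 10)

scores = transition_scores(source, target)
values = trajectory_values(scores, traj_ids)
for stage, mask in enumerate(curriculum_stages(values, traj_ids)):
    # In a real pipeline, an offline RL algorithm would be trained on the
    # admitted subset at each curriculum stage.
    print(f"stage {stage}: {mask.sum()} transitions admitted")
```

The sketch only conveys the overall data flow (score transitions → value trajectories → train on a growing, quality-ordered subset); the paper's actual scoring function and curriculum schedule may differ.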
ASJC Scopus subject areas
- Computer Science (general)
- Artificial Intelligence
- Computer Science (general)
- Software
- Engineering (general)
- Control and Systems Engineering
Cite
Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025. Ed. / Yevgeniy Vorobeychik; Sanmay Das; Ann Nowe. 2025. pp. 5-13 (Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS).
Publication: Contribution to book/report/conference proceedings › Conference contribution › Research › Peer-review
TY - GEN
T1 - Enhancing Offline Reinforcement Learning with Curriculum Learning-Based Trajectory Valuation
AU - Abolfazli, Amir
AU - Song, Zekun
AU - Anand, Avishek
AU - Nejdl, Wolfgang
N1 - Publisher Copyright: © 2025 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org).
PY - 2025/5/19
Y1 - 2025/5/19
N2 - The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.
AB - The success of deep reinforcement learning (DRL) relies on the availability and quality of training data, often requiring extensive interactions with specific environments. In many real-world scenarios, where data collection is costly and risky, offline reinforcement learning (RL) offers a solution by utilizing data collected by domain experts and searching for a batch-constrained optimal policy. This approach is further augmented by incorporating external data sources, expanding the range and diversity of data collection possibilities. However, existing offline RL methods often struggle with challenges posed by non-matching data from these external sources. In this work, we specifically address the problem of source-target domain mismatch in scenarios involving mixed datasets, characterized by a predominance of source data generated from random or suboptimal policies and a limited amount of target data generated from higher-quality policies. To tackle this problem, we introduce Transition Scoring (TS), a novel method that assigns scores to transitions based on their similarity to the target domain, and propose Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively leverages these transition scores to identify and prioritize high-quality trajectories through a curriculum learning approach. Our extensive experiments across various offline RL methods and MuJoCo environments, complemented by rigorous theoretical analysis, demonstrate that CLTV enhances the overall performance and transferability of policies learned by offline RL algorithms.
KW - Offline Reinforcement Learning
KW - Trajectory Valuation
UR - http://www.scopus.com/inward/record.url?scp=105009826542&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2502.00601
DO - 10.48550/arXiv.2502.00601
M3 - Conference contribution
AN - SCOPUS:105009826542
T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
SP - 5
EP - 13
BT - Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025
A2 - Vorobeychik, Yevgeniy
A2 - Das, Sanmay
A2 - Nowe, Ann
T2 - 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025
Y2 - 19 May 2025 through 23 May 2025
ER -