Details
Original language | English |
---|---|
Title of host publication | 2025 Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2025) |
Publication status | Accepted/In press - 15 Feb. 2025 |
Abstract
While increasingly large models have revolutionized much of the machine learning landscape, training even mid-sized networks for Reinforcement Learning (RL) is still proving to be a struggle. This, however, severely limits the complexity of policies we are able to learn. To enable increased network capacity while maintaining network trainability, we propose GrowNN, a simple yet effective method that utilizes progressive network growth during training. We start training a small network to learn an initial policy. Then we add layers without changing the encoded function. Subsequent updates can utilize the added layers to learn a more expressive policy, adding capacity as the policy's complexity increases. GrowNN can be seamlessly integrated into most existing RL agents. Our experiments on MiniHack and MuJoCo show improved agent performance, with deeper networks grown incrementally by GrowNN outperforming their respective static counterparts of the same size by up to 48% on MiniHack Room and 72% on Ant.
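One way to realize the "add layers without changing the encoded function" step described above is identity-initialized layer insertion. The sketch below is a minimal illustration of that idea in PyTorch, not the authors' GrowNN implementation: the class and method names (GrowableMLP, grow), the ReLU activations, and the identity initialization are all assumptions made for this example.

```python
# Minimal sketch of function-preserving network growth (illustrative only;
# GrowNN's exact growth schedule and initialization may differ).
import torch
import torch.nn as nn


class GrowableMLP(nn.Module):
    """Small MLP policy head that can be deepened during training."""

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.layers = nn.ModuleList([
            nn.Linear(obs_dim, hidden_dim),   # initial small network
            nn.Linear(hidden_dim, act_dim),   # output layer
        ])

    def grow(self) -> None:
        # Insert one hidden layer before the output layer. With ReLU
        # activations, an identity-initialized layer leaves the
        # non-negative activations untouched, so the encoded function
        # is unchanged immediately after growth.
        new_layer = nn.Linear(self.hidden_dim, self.hidden_dim)
        with torch.no_grad():
            new_layer.weight.copy_(torch.eye(self.hidden_dim))
            new_layer.bias.zero_()
        self.layers.insert(len(self.layers) - 1, new_layer)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers[:-1]:
            x = torch.relu(layer(x))
        return self.layers[-1](x)


# Usage sketch: train the small policy, grow it, then keep training so the
# added capacity can be used as the policy becomes more complex.
policy = GrowableMLP(obs_dim=8, act_dim=4)
obs = torch.randn(1, 8)
before = policy(obs)
policy.grow()                                     # deepen mid-training
after = policy(obs)
assert torch.allclose(before, after, atol=1e-6)   # behavior is preserved
```

In a full RL agent, the optimizer would also need to be given the newly added layer's parameters after each growth step so that subsequent updates can actually use the added capacity.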
Cite
Fehring, L., Eimer, T., & Lindauer, M. (2025). Growing with Experience: Growing Neural Networks in Deep Reinforcement Learning. In 2025 Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2025).
Publication: Chapter in Book/Report/Conference proceeding › Conference abstract › Research › Peer-reviewed
TY - CHAP
T1 - Growing with Experience: Growing Neural Networks in Deep Reinforcement Learning
AU - Fehring, Lukas
AU - Eimer, Theresa
AU - Lindauer, Marius
PY - 2025/2/15
Y1 - 2025/2/15
N2 - While increasingly large models have revolutionized much of the machine learning landscape, training even mid-sized networks for Reinforcement Learning (RL) is still proving to be a struggle. This, however, severely limits the complexity of policies we are able to learn. To enable increased network capacity while maintaining network trainability, we propose GrowNN, a simple yet effective method that utilizes progressive network growth during training. We start training a small network to learn an initial policy. Then we add layers without changing the encoded function. Subsequent updates can utilize the added layers to learn a more expressive policy, adding capacity as the policy's complexity increases. GrowNN can be seamlessly integrated into most existing RL agents. Our experiments on MiniHack and MuJoCo show improved agent performance, with deeper networks grown incrementally by GrowNN outperforming their respective static counterparts of the same size by up to 48% on MiniHack Room and 72% on Ant.
M3 - Conference abstract
BT - 2025 Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2025)
ER -