Details
| Original language | English |
|---|---|
| Title of host publication | 60th IEEE Conference on Decision and Control, CDC 2021 |
| Pages | 5239-5246 |
| Number of pages | 8 |
| ISBN (electronic) | 9781665436595 |
| Publication status | Published - 14 Dec 2021 |
| Externally published | Yes |
Publication series
| Name | Proceedings of the IEEE Conference on Decision and Control |
|---|---|
| Volume | 2021-December |
| ISSN (Print) | 0743-1546 |
| ISSN (electronic) | 2576-2370 |
Abstract
Multi-agent reinforcement learning methods have shown remarkable potential in solving complex multi-agent problems but mostly lack theoretical guarantees. Recently, mean field control and mean field games have been established as a tractable solution for large-scale multi-agent problems with many agents. In this work, driven by a motivating scheduling problem, we consider a discrete-time mean field control model with common environment states. We rigorously establish approximate optimality as the number of agents grows in the finite agent case and find that a dynamic programming principle holds, resulting in the existence of an optimal stationary policy. As exact solutions are difficult in general due to the resulting continuous action space of the limiting mean field Markov decision process, we apply established deep reinforcement learning methods to solve the associated mean field control problem. The performance of the learned mean field control policy is compared to typical multi-agent reinforcement learning approaches and is found to converge to the mean field performance for sufficiently many agents, verifying the obtained theoretical results and reaching competitive solutions.
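To make the limiting object concrete, the following is a minimal sketch of one step of a discrete-time mean field system with a common environment state: the mean field (the distribution of agent states) evolves deterministically under a decision rule, while the finite-agent empirical distribution approximates it for large N. All names, the finite state/action spaces, and the transition kernel here are illustrative assumptions, not taken from the paper.

```python
# Toy discrete-time mean field transition with a common environment state.
# Assumptions (illustrative only): finite per-agent state space of size S,
# finite common environment state space of size ENV, per-agent action space
# of size A, and a row-stochastic kernel P[e, a] of shape (S, S).
import numpy as np

rng = np.random.default_rng(0)

S, ENV, A = 3, 2, 2

# Random row-stochastic per-agent transition kernels P[e, a].
P = rng.random((ENV, A, S, S))
P /= P.sum(axis=-1, keepdims=True)

def mean_field_step(mu, h, e):
    """One step of the limiting mean field dynamics.

    mu : length-S distribution over agent states (the mean field)
    h  : decision rule, shape (S, A), row-stochastic (state -> action probs)
    e  : current common environment state
    """
    # Effective per-agent kernel under the decision rule h:
    # K[s, t] = sum_a h[s, a] * P[e, a, s, t]
    K = np.einsum('sa,ast->st', h, P[e])
    return mu @ K

# Uniform decision rule and uniform initial mean field.
h = np.full((S, A), 1.0 / A)
mu = mean_field_step(np.full(S, 1.0 / S), h, e=0)
assert np.isclose(mu.sum(), 1.0)  # the mean field stays a distribution

# Finite-N system under the same rule: for large N, the empirical
# next-state distribution approximates the mean field update.
N = 100_000
states = rng.integers(0, S, size=N)
actions = np.array([rng.choice(A, p=h[s]) for s in states])
next_states = np.array(
    [rng.choice(S, p=P[0, a, s]) for s, a in zip(states, actions)]
)
emp = np.bincount(next_states, minlength=S) / N
```

In this sketch the empirical distribution `emp` lands within sampling error of the mean field `mu`, mirroring the approximate-optimality flavor of the abstract's result; the actual model in the paper is more general (stochastic environment dynamics, learned policies over the continuous mean field action space).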
ASJC Scopus subject areas
- Engineering(all)
- Control and Systems Engineering
- Mathematics(all)
- Modelling and Simulation
- Control and Optimization
Cite this
60th IEEE Conference on Decision and Control, CDC 2021. 2021. p. 5239-5246 (Proceedings of the IEEE Conference on Decision and Control; Vol. 2021-December).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Discrete-time mean field control with environment states
AU - Tahir, Anam
N1 - Publisher Copyright: © 2021 IEEE.
PY - 2021/12/14
Y1 - 2021/12/14
N2 - Multi-agent reinforcement learning methods have shown remarkable potential in solving complex multi-agent problems but mostly lack theoretical guarantees. Recently, mean field control and mean field games have been established as a tractable solution for large-scale multi-agent problems with many agents. In this work, driven by a motivating scheduling problem, we consider a discrete-time mean field control model with common environment states. We rigorously establish approximate optimality as the number of agents grows in the finite agent case and find that a dynamic programming principle holds, resulting in the existence of an optimal stationary policy. As exact solutions are difficult in general due to the resulting continuous action space of the limiting mean field Markov decision process, we apply established deep reinforcement learning methods to solve the associated mean field control problem. The performance of the learned mean field control policy is compared to typical multi-agent reinforcement learning approaches and is found to converge to the mean field performance for sufficiently many agents, verifying the obtained theoretical results and reaching competitive solutions.
AB - Multi-agent reinforcement learning methods have shown remarkable potential in solving complex multi-agent problems but mostly lack theoretical guarantees. Recently, mean field control and mean field games have been established as a tractable solution for large-scale multi-agent problems with many agents. In this work, driven by a motivating scheduling problem, we consider a discrete-time mean field control model with common environment states. We rigorously establish approximate optimality as the number of agents grows in the finite agent case and find that a dynamic programming principle holds, resulting in the existence of an optimal stationary policy. As exact solutions are difficult in general due to the resulting continuous action space of the limiting mean field Markov decision process, we apply established deep reinforcement learning methods to solve the associated mean field control problem. The performance of the learned mean field control policy is compared to typical multi-agent reinforcement learning approaches and is found to converge to the mean field performance for sufficiently many agents, verifying the obtained theoretical results and reaching competitive solutions.
UR - http://www.scopus.com/inward/record.url?scp=85126041900&partnerID=8YFLogxK
U2 - 10.1109/CDC45484.2021.9683749
DO - 10.1109/CDC45484.2021.9683749
M3 - Conference contribution
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 5239
EP - 5246
BT - 60th IEEE Conference on Decision and Control, CDC 2021
ER -