High Performance Simulation for Scalable Multi-Agent Reinforcement Learning

Jordan Langham-Lopez, Sebastian M. Schmon, Patrick Cannon
Improbable, London, UK
arXiv:2207.03945 [cs.MA]

@misc{langhamlopez2022high,
   title={High Performance Simulation for Scalable Multi-Agent Reinforcement Learning},
   author={Jordan Langham-Lopez and Sebastian M. Schmon and Patrick Cannon},
   year={2022},
   eprint={2207.03945},
   archivePrefix={arXiv},
   primaryClass={cs.MA}
}

Multi-agent reinforcement learning experiments and open-source training environments are typically limited in scale, supporting tens or sometimes up to hundreds of interacting agents. In this paper we demonstrate the use of Vogue, a high-performance agent-based model (ABM) framework. Vogue serves as a multi-agent training environment, supporting thousands to tens of thousands of interacting agents while maintaining high training throughput by running both the environment and the reinforcement learning (RL) agents on the GPU. High-performance multi-agent environments at this scale have the potential to enable the learning of robust and flexible policies for use in ABMs and simulations of complex systems. We demonstrate training performance with two newly developed, large-scale multi-agent training environments, and we show that these environments can train shared RL policies on time-scales of minutes and hours.
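The scaling approach the abstract describes rests on a general pattern: keep the state of all agents in a single device array and apply one shared policy as a batched operation, so that stepping 10,000 agents costs roughly one tensor pass rather than 10,000 per-agent calls. The sketch below illustrates that pattern only; it is not the Vogue API (which is not publicly documented here), and all names (`step`, the toy observation layout, the linear policy) are hypothetical. NumPy is used for portability; a GPU framework such as PyTorch or JAX would apply the same batched pattern to device arrays.

```python
import numpy as np

# Hypothetical sketch (not the Vogue API): step thousands of agents at
# once by holding all agent state in one array and running a shared
# policy as a single batched forward pass.

rng = np.random.default_rng(0)
n_agents = 10_000          # thousands of agents in one environment
obs_dim, act_dim = 8, 2

# Shared policy: one linear layer applied to every agent's observation.
W = rng.standard_normal((obs_dim, act_dim)) * 0.1

positions = rng.standard_normal((n_agents, 2))

def step(positions):
    # Build observations for all agents in one shot: position plus
    # distance to the origin, zero-padded to obs_dim.
    dists = np.linalg.norm(positions, axis=1, keepdims=True)
    obs = np.concatenate(
        [positions, dists, np.zeros((n_agents, obs_dim - 3))], axis=1
    )
    actions = np.tanh(obs @ W)      # one batched policy forward pass
    new_positions = positions + 0.05 * actions
    # Toy reward: agents are rewarded for moving toward the origin.
    rewards = -np.linalg.norm(new_positions, axis=1)
    return new_positions, rewards

for _ in range(100):
    positions, rewards = step(positions)

print(positions.shape, rewards.shape)  # (10000, 2) (10000,)
```

The key design point is that the per-step cost is dominated by a few array operations whose size scales with the agent count, which is exactly the workload GPUs handle well; co-locating the environment and the RL policy on the device also avoids host-device transfers inside the training loop.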

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
