
Efficient Parallel Methods for Deep Reinforcement Learning

Alfredo V. Clemente, Humberto N. Castejon, Arjun Chandra
Department of Computer and Information Science, Norwegian University of Science and Technology, Trondheim, Norway
arXiv:1705.04862 [cs.LG], 16 May 2017

@article{clemente2017efficient,
   title={Efficient Parallel Methods for Deep Reinforcement Learning},
   author={Clemente, Alfredo V. and Castejon, Humberto N. and Chandra, Arjun},
   year={2017},
   month={may},
   eprint={1705.04862},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

We propose a novel framework for efficient parallelization of deep reinforcement learning algorithms, enabling these algorithms to learn from multiple actors on a single machine. The framework is algorithm-agnostic and can be applied to on-policy, off-policy, value-based and policy-gradient-based algorithms. Given its inherent parallelism, the framework can be efficiently implemented on a GPU, allowing the use of powerful models while significantly reducing training time. We demonstrate the effectiveness of our framework by implementing an advantage actor-critic algorithm on a GPU, using on-policy experiences and employing synchronous updates. Our algorithm achieves state-of-the-art performance on the Atari domain after only a few hours of training. Our framework thus opens the door for much faster experimentation on demanding problem domains. Our implementation is open-source.
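As a rough illustration of the scheme the abstract describes, the sketch below steps a set of environments in lockstep so that action selection becomes a single batched forward pass, computes n-step advantages per actor, and then applies one synchronous update over the whole batch. The toy environment, the linear policy/value model and all hyperparameter values are illustrative assumptions and are not taken from the authors' open-source implementation.

import numpy as np

# Minimal sketch of a synchronous parallel advantage actor-critic loop.
# ToyEnv, the linear policy/value model and the hyperparameters below are
# illustrative assumptions, not the paper's actual implementation.

N_ENVS = 8        # number of parallel actors/environments
T_MAX = 5         # steps collected per actor before each synchronous update
GAMMA = 0.99      # discount factor
OBS_DIM, N_ACTIONS = 4, 2
LR = 0.01

rng = np.random.default_rng(0)

class ToyEnv:
    """Stand-in environment with random observations and a dummy reward."""
    def reset(self):
        self.t = 0
        return rng.normal(size=OBS_DIM)
    def step(self, action):
        self.t += 1
        obs = rng.normal(size=OBS_DIM)
        reward = float(action == 0)
        done = self.t >= 20
        return obs, reward, done

def policy_and_value(obs_batch, theta):
    """Batched forward pass of a shared linear policy/value model."""
    logits = obs_batch @ theta["pi"]                       # (N_ENVS, N_ACTIONS)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    values = obs_batch @ theta["v"]                        # (N_ENVS,)
    return probs, values

theta = {"pi": rng.normal(scale=0.1, size=(OBS_DIM, N_ACTIONS)),
         "v": rng.normal(scale=0.1, size=OBS_DIM)}
envs = [ToyEnv() for _ in range(N_ENVS)]
obs = np.stack([env.reset() for env in envs])

for update in range(3):
    obs_buf, act_buf, rew_buf, done_buf, val_buf = [], [], [], [], []
    # All actors advance in lockstep, so action selection is one batched forward pass.
    for t in range(T_MAX):
        probs, values = policy_and_value(obs, theta)
        actions = np.array([rng.choice(N_ACTIONS, p=p) for p in probs])
        steps = [env.step(a) for env, a in zip(envs, actions)]
        next_obs = np.stack([s[0] for s in steps])
        rewards = np.array([s[1] for s in steps])
        dones = np.array([float(s[2]) for s in steps])
        obs_buf.append(obs); act_buf.append(actions)
        rew_buf.append(rewards); done_buf.append(dones); val_buf.append(values)
        obs = np.stack([env.reset() if d else o
                        for env, o, d in zip(envs, next_obs, dones)])
    # n-step returns bootstrapped from the value of the last observed states.
    _, bootstrap = policy_and_value(obs, theta)
    returns = bootstrap.copy()
    advantages = np.zeros((T_MAX, N_ENVS))
    for t in reversed(range(T_MAX)):
        returns = rew_buf[t] + GAMMA * returns * (1.0 - done_buf[t])
        advantages[t] = returns - val_buf[t]
    # Single synchronous update from the whole batch: policy gradient plus value regression.
    grad_pi = np.zeros_like(theta["pi"])
    grad_v = np.zeros_like(theta["v"])
    for t in range(T_MAX):
        probs, _ = policy_and_value(obs_buf[t], theta)
        one_hot = np.eye(N_ACTIONS)[act_buf[t]]
        grad_pi += obs_buf[t].T @ ((one_hot - probs) * advantages[t][:, None])
        grad_v += obs_buf[t].T @ advantages[t]
    theta["pi"] += LR * grad_pi / (T_MAX * N_ENVS)
    theta["v"] += LR * grad_v / (T_MAX * N_ENVS)
    print(f"update {update}: mean advantage {advantages.mean():+.3f}")

The point of the batching is that one forward pass and one gradient step serve all actors at once, which is what lets the scheme exploit a GPU; the paper's implementation replaces the toy linear model above with a deep network.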