Deep In-GPU Experience Replay

Ben Parr
Carnegie Mellon University
arXiv:1801.03138 [cs.AI], 9 Jan 2018

@article{parr2018deep,
   title={Deep In-GPU Experience Replay},
   author={Parr, Ben},
   year={2018},
   month={jan},
   eprint={1801.03138},
   archivePrefix={arXiv},
   primaryClass={cs.AI}
}



Experience replay allows a reinforcement learning agent to train on samples from a large pool of its most recent experiences. A simple in-RAM experience replay stores these experiences in a list in RAM and copies each sampled batch to the GPU for training. I moved this list to the GPU, creating an in-GPU experience replay and a training step whose inputs are never copied from the CPU. I trained an agent to play Super Smash Bros. Melee, using internal game memory values as inputs and outputting controller button presses. A single state in Melee contains 27 floats, so the full experience replay fits on a single GPU. For a batch size of 128, the in-GPU experience replay trained twice as fast as the in-RAM experience replay. As far as I know, this is the first in-GPU implementation of experience replay. Finally, I note a few ideas for fitting the experience replay inside the GPU when the environment state requires more memory.
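The page carries no code, but the core idea is easy to sketch. Below is a minimal illustration assuming PyTorch: the class name InGPUReplay, the tensor layout, and the circular-buffer bookkeeping are hypothetical choices for exposition, not the author's implementation.

import torch

class InGPUReplay:
    """Circular experience replay stored entirely on the GPU (illustrative sketch).

    States, actions, rewards, and next-states live in preallocated CUDA
    tensors, so sampled batches are gathered in GPU memory and never
    copied from the CPU during training.
    """

    def __init__(self, capacity, state_dim, device="cuda"):
        self.capacity = capacity
        self.device = torch.device(device)
        self.states = torch.zeros(capacity, state_dim, device=self.device)
        self.next_states = torch.zeros(capacity, state_dim, device=self.device)
        self.actions = torch.zeros(capacity, dtype=torch.long, device=self.device)
        self.rewards = torch.zeros(capacity, device=self.device)
        self.pos = 0   # next write index in the circular buffer
        self.size = 0  # number of experiences stored so far

    def add(self, state, action, reward, next_state):
        # One host-to-device copy per new experience; training itself
        # only touches data already resident on the GPU.
        i = self.pos
        self.states[i] = torch.as_tensor(state, device=self.device)
        self.next_states[i] = torch.as_tensor(next_state, device=self.device)
        self.actions[i] = action
        self.rewards[i] = reward
        self.pos = (self.pos + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size):
        # Indices are drawn on the device, so the returned batch is a
        # gather over GPU memory with no CPU round trip.
        idx = torch.randint(self.size, (batch_size,), device=self.device)
        return (self.states[idx], self.actions[idx],
                self.rewards[idx], self.next_states[idx])

For Melee's 27-float states this is cheap: even a hypothetical million-entry buffer needs only about 10^6 x 27 x 4 B ≈ 108 MB per state tensor in float32, consistent with the abstract's claim that the full replay fits on a single GPU.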