
Scaling Monte Carlo Tree Search on Intel Xeon Phi

S. Ali Mirsoleimani, Aske Plaat, Jaap van den Herik, Jos Vermaseren
Leiden Centre of Data Science, Leiden University, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
arXiv:1507.04383 [cs.DC], (15 Jul 2015)

@article{mirsoleimani2015scaling,
   title={Scaling Monte Carlo Tree Search on Intel Xeon Phi},
   author={Mirsoleimani, S. Ali and Plaat, Aske and van den Herik, Jaap and Vermaseren, Jos},
   year={2015},
   month={jul},
   eprint={1507.04383},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

Many algorithms have been parallelized successfully on the Intel Xeon Phi coprocessor, especially those with regular, balanced, and predictable data access patterns and instruction flows. Irregular and unbalanced algorithms are harder to parallelize efficiently. They are present, for instance, in artificial intelligence search algorithms such as Monte Carlo Tree Search (MCTS). In this paper we study the scaling behavior of MCTS on a highly optimized real-world application, on real hardware. The Intel Xeon Phi allows shared-memory scaling studies up to 61 cores and 244 hardware threads. We compare work-stealing (Cilk Plus and TBB) and work-sharing (FIFO scheduling) approaches. Interestingly, we find that a straightforward thread pool with a work-sharing FIFO queue shows the best performance. A crucial element for this high performance is controlling the grain size, an approach that we call Grain Size Controlled Parallel MCTS. A subsequent comparison with Xeon CPUs shows an even clearer performance distinction between the different threading libraries. To the best of our knowledge, we achieve the fastest implementation of parallel MCTS on the 61-core Intel Xeon Phi using a real application (a 47-fold speedup relative to a sequential run).
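The abstract describes the approach but contains no code, so the following is a minimal C++ sketch (not the authors' implementation) of the general idea behind a work-sharing FIFO thread pool with grain size control: a fixed pool of workers pulls tasks from one shared FIFO queue, and each task bundles a fixed number of MCTS iterations (the grain size) rather than a single one. The names FifoThreadPool, mcts_iteration, and grain_size are illustrative assumptions, and the MCTS iteration itself is stubbed out.

#include <atomic>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Fixed pool of worker threads sharing one FIFO task queue (work sharing,
// in contrast to the per-thread deques used by work-stealing schedulers).
class FifoThreadPool {
public:
    explicit FifoThreadPool(unsigned n_threads) {
        for (unsigned i = 0; i < n_threads; ++i)
            workers_.emplace_back([this] { worker_loop(); });
    }

    // The destructor lets the workers drain the queue, then joins them.
    ~FifoThreadPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }

    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void worker_loop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (tasks_.empty()) return;   // done_ is set and no work is left
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();   // run one grain of MCTS iterations
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};

// Stand-in for one MCTS iteration (selection, expansion, playout, backup).
void mcts_iteration(std::atomic<long>& playouts) {
    playouts.fetch_add(1, std::memory_order_relaxed);
}

int main() {
    const long total_iterations = 1000000;
    const long grain_size = 1000;   // iterations bundled into one task: the controlled grain
    std::atomic<long> playouts{0};

    {
        FifoThreadPool pool(std::thread::hardware_concurrency());
        for (long i = 0; i < total_iterations; i += grain_size)
            pool.submit([&playouts, grain_size] {
                for (long k = 0; k < grain_size; ++k)
                    mcts_iteration(playouts);
            });
    }   // pool destructor drains the queue and joins the workers

    std::cout << "playouts completed: " << playouts << "\n";
    return 0;
}

In this sketch the grain size trades off scheduling overhead against load balance: very small grains flood the queue and its lock, while very large grains leave threads idle near the end of the search; the paper's point is that tuning this parameter in a simple FIFO work-sharing scheme can outperform work-stealing runtimes such as Cilk Plus and TBB for MCTS.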