Accelerate Deep Learning Inference with MCTS in the game of Go on the Intel Xeon Phi

Ching-Nung Lin, Shi-Jim Yen
Department of Computer Science and Information Engineering, National Dong Hwa University, Hualien, Taiwan
Information Processing Society of Japan, 2016

@article{lin2016accelerate,

   title={Accelerate Deep Learning Inference with MCTS in the game of Go on the Intel Xeon Phi},

   author={Lin, Ching-Nung and Yen, Shi-Jim},

   year={2016}

}

The performance of deep learning inference is a serious bottleneck when it is combined with speed-sensitive Monte Carlo Tree Search (MCTS). The traditional hybrid CPU/GPU solution is bounded by frequent, heavy data transfers between host and device. This paper proposes a method that runs Deep Convolutional Neural Network (DCNN) prediction and MCTS execution simultaneously on the Intel Xeon Phi, which outperforms all present solutions. With this methodology, high-quality simulation with a pure DCNN can be done in reasonable time.
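The co-location idea in the abstract — keeping tree search and network inference on the same device so leaf evaluations never cross a host/accelerator boundary — can be sketched as a producer/consumer pattern. This is an illustrative sketch only, not the paper's code: the batch size, the queue-based hand-off, and the `dcnn_predict` stand-in (which returns a uniform policy) are all assumptions for demonstration.

```python
import queue
import threading

BATCH_SIZE = 4
BOARD_POINTS = 19 * 19

def dcnn_predict(batch):
    # Placeholder for on-device DCNN inference over a batch of positions;
    # a real system would run the network here, on the same chip as MCTS.
    return [[1.0 / BOARD_POINTS] * BOARD_POINTS for _ in batch]

def evaluator(requests, stop):
    # Single evaluator thread: drain pending leaf positions into a batch,
    # run one inference call, then hand each prior back to its worker.
    while not stop.is_set() or not requests.empty():
        batch, replies = [], []
        try:
            pos, reply = requests.get(timeout=0.1)
        except queue.Empty:
            continue
        batch.append(pos); replies.append(reply)
        while len(batch) < BATCH_SIZE:
            try:
                pos, reply = requests.get_nowait()
                batch.append(pos); replies.append(reply)
            except queue.Empty:
                break
        for reply, policy in zip(replies, dcnn_predict(batch)):
            reply.put(policy)

def simulate(requests, results, n):
    # Each MCTS worker expands n leaves; every expansion needs a DCNN prior.
    for _ in range(n):
        reply = queue.Queue(maxsize=1)
        requests.put(([0] * BOARD_POINTS, reply))
        policy = reply.get()          # block until the evaluator answers
        results.append(sum(policy))

requests, results, stop = queue.Queue(), [], threading.Event()
ev = threading.Thread(target=evaluator, args=(requests, stop))
workers = [threading.Thread(target=simulate, args=(requests, results, 8))
           for _ in range(4)]
ev.start()
for w in workers: w.start()
for w in workers: w.join()
stop.set(); ev.join()
print(len(results))  # 32 evaluated leaves
```

Because workers and evaluator share one address space, a position never leaves the device; on a CPU+GPU split the same hand-off would require a PCIe transfer per batch, which is the bottleneck the paper targets.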
