Accelerate Deep Learning Inference with MCTS in the game of Go on the Intel Xeon Phi
Department of Computer Science and Information Engineering, National Dong Hwa University, Hualien, Taiwan
Information Processing Society of Japan, 2016
@article{lin2016accelerate,
  title={Accelerate Deep Learning Inference with MCTS in the game of Go on the Intel Xeon Phi},
  author={Lin, Ching-Nung and Yen, Shi-Jim},
  journal={Information Processing Society of Japan},
  year={2016}
}
The performance of deep learning inference is a serious issue when it is combined with speed-sensitive Monte Carlo Tree Search (MCTS). The traditional hybrid CPU-and-GPU solution is bounded by frequent, heavy data transfers between host and device. This paper proposes a method that runs Deep Convolutional Neural Network (DCNN) prediction and MCTS simultaneously on the Intel Xeon Phi, outperforming existing solutions. With this methodology, high-quality simulation with a pure DCNN can be done in a reasonable time.
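The key idea of the abstract is that the DCNN evaluation and the tree search share one address space on the Xeon Phi, so each node evaluation avoids a host-to-device copy. A minimal sketch of that structure, with a hypothetical `dcnn_priors` stub standing in for the in-process network forward pass (the function name, PUCT constant, and random priors are illustrative assumptions, not from the paper):

```python
import math
import random

def dcnn_priors(state, moves):
    # Hypothetical stand-in for an in-process DCNN forward pass: because
    # the search and the network run on the same device, no host<->device
    # transfer happens per evaluation. Here we just emit normalized
    # pseudo-random priors so the sketch is runnable.
    rng = random.Random(hash(state) % (2**32))
    raw = [rng.random() for _ in moves]
    total = sum(raw)
    return {m: r / total for m, r in zip(moves, raw)}

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(move) from the DCNN policy head
        self.visits = 0           # N
        self.value_sum = 0.0      # sum of backed-up values
        self.children = {}        # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def expand(node, state, moves):
    # Expansion step: one in-process network call supplies priors
    # for every child at once.
    for move, p in dcnn_priors(state, moves).items():
        node.children[move] = Node(p)

def select_child(node, c_puct=1.5):
    # PUCT selection: Q(child) + c * P(child) * sqrt(N_parent) / (1 + N_child).
    total = sum(ch.visits for ch in node.children.values())
    def score(item):
        _, ch = item
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.value() + u
    return max(node.children.items(), key=score)
```

With all visit counts at zero, `select_child` reduces to picking the move with the highest DCNN prior, which is how a "pure DCNN" simulation starts before value estimates accumulate.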
November 8, 2016 by hgpu