Accelerate Deep Learning Inference with MCTS in the game of Go on the Intel Xeon Phi

Ching-Nung Lin, Shi-Jim Yen
Department of Computer Science and Information Engineering, National Dong Hwa University, Hualien, Taiwan
Information Processing Society of Japan, 2016

@article{lin2016accelerate,
  title={Accelerate Deep Learning Inference with MCTS in the game of Go on the Intel Xeon Phi},
  author={Lin, Ching-Nung and Yen, Shi-Jim},
  journal={Information Processing Society of Japan},
  year={2016}
}

The performance of deep learning inference is a serious issue when it is combined with speed-sensitive Monte Carlo Tree Search (MCTS). The traditional hybrid CPU-and-GPU solution is bounded by frequent, heavy data transfers between host and device. This paper proposes a method that runs Deep Convolutional Neural Network (DCNN) prediction and MCTS execution simultaneously on the Intel Xeon Phi, outperforming all present solutions. With this methodology, high-quality simulation with a pure DCNN can be done in a reasonable time.
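The core idea — querying the policy network inside the search loop in the same address space, so no host-device transfer sits on the critical path — can be sketched as a minimal PUCT-style MCTS. This is not the paper's implementation: the board size, `dcnn_policy` stub, leaf evaluation, and constants below are all placeholder assumptions for illustration.

```python
import math
import random

BOARD_MOVES = 9  # toy action space; the paper targets full 19x19 Go


def dcnn_policy(state):
    """Stand-in for on-device DCNN inference: returns a prior over moves.

    Running this in the same process as the search models the Xeon Phi
    setup, where prediction and tree search share one many-core device
    instead of shipping positions to a separate GPU.
    """
    rng = random.Random(hash(state) % (2 ** 32))
    raw = [rng.random() for _ in range(BOARD_MOVES)]
    total = sum(raw)
    return [p / total for p in raw]


class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}        # move -> Node


def select_child(node, c_puct=1.5):
    """PUCT selection: exploit mean value Q, explore via the DCNN prior."""
    total_visits = sum(ch.visits for ch in node.children.values())
    best_move, best_child, best_score = None, None, -float("inf")
    for move, child in node.children.items():
        q = child.value_sum / child.visits if child.visits else 0.0
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visits)
        if q + u > best_score:
            best_move, best_child, best_score = move, child, q + u
    return best_move, best_child


def mcts(root_state, n_sims=200):
    """Run n_sims simulations; return the most-visited move at the root."""
    root = Node(prior=1.0)
    for _ in range(n_sims):
        node, state, path = root, root_state, [root]
        # Selection: descend until an unexpanded leaf.
        while node.children:
            move, node = select_child(node)
            state = state + (move,)
            path.append(node)
        # Expansion: children get priors from the (in-process) DCNN.
        for m, p in enumerate(dcnn_policy(state)):
            node.children[m] = Node(prior=p)
        # Evaluation: placeholder leaf value in [0, 1].
        value = random.random()
        # Backup.
        for n in path:
            n.visits += 1
            n.value_sum += value
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

The design point to notice is that `dcnn_policy` is an ordinary function call: on a many-core device running both search and inference, each leaf expansion costs a kernel invocation, not a PCIe round trip.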

