Large Scale Artificial Neural Network Training Using Multi-GPUs
Georgia Institute of Technology
arXiv:1511.04348 [cs.DC] (13 Nov 2015)
@article{wang2015large,
  title={Large Scale Artificial Neural Network Training Using Multi-GPUs},
  author={Wang, Linnan and Wu, Wei and Xiao, Jianxiong and Yi, Yang},
  journal={arXiv preprint arXiv:1511.04348},
  year={2015},
  month={nov},
  eprint={1511.04348},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
This paper describes a method for accelerating large-scale Artificial Neural Network (ANN) training on multiple GPUs by reducing the forward and backward passes to matrix multiplication. We propose an out-of-core multi-GPU matrix multiplication algorithm and integrate it into ANN training. The experiments demonstrate that our matrix multiplication algorithm achieves linear speedup on multiple inhomogeneous GPUs. The full paper of this project can be found at [1].
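As a rough illustration of the abstract's core idea, the NumPy sketch below shows how the forward and backward passes of a fully-connected layer reduce to matrix multiplications, plus a toy row-blocked multiply hinting at how an out-of-core scheme could partition work across devices. The function names and block size are illustrative assumptions, not the paper's actual multi-GPU algorithm; see the full paper for that.

```python
import numpy as np

def forward(X, W):
    # X: (batch, n_in) activations, W: (n_in, n_out) weights
    return X @ W                      # forward pass is a single GEMM

def backward(X, W, dY):
    # dY: (batch, n_out) gradient of the loss w.r.t. the layer output
    dW = X.T @ dY                     # weight gradient: another GEMM
    dX = dY @ W.T                     # input gradient: another GEMM
    return dW, dX

def blocked_matmul(A, B, block_rows=256):
    # Toy stand-in for an out-of-core scheme: row blocks of A are multiplied
    # independently, so each block could be streamed to a different GPU and
    # the partial results concatenated afterwards.
    blocks = [A[i:i + block_rows] @ B for i in range(0, A.shape[0], block_rows)]
    return np.vstack(blocks)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1024, 784)).astype(np.float32)
    W = rng.standard_normal((784, 256)).astype(np.float32)
    dY = rng.standard_normal((1024, 256)).astype(np.float32)

    Y = forward(X, W)
    dW, dX = backward(X, W, dY)
    assert np.allclose(blocked_matmul(X, W), Y, atol=1e-4)
```

Because each row block of the left operand can be processed independently, this formulation maps naturally onto devices with different speeds and memory sizes, which is the setting the paper targets.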
November 20, 2015 by hgpu