Gaussian Process Models with Parallelization and GPU acceleration
Department of Computer Science, University of Sheffield
arXiv:1410.4984 [cs.DC] (18 Oct 2014)
@article{2014arXiv1410.4984D,
  author = {{Dai}, Z. and {Damianou}, A. and {Hensman}, J. and {Lawrence}, N.},
  title = "{Gaussian Process Models with Parallelization and GPU acceleration}",
  journal = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint = {1410.4984},
  primaryClass = "cs.DC",
  keywords = {Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Learning, Statistics - Machine Learning},
  year = 2014,
  month = oct,
  adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1410.4984D},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
In this work, we present an extension of Gaussian process (GP) models with sophisticated parallelization and GPU acceleration. The parallelization scheme arises naturally from the modular computational structure with respect to datapoints in the sparse Gaussian process formulation. Additionally, the computational bottleneck is implemented with GPU acceleration for a further speed-up. Combining both techniques allows Gaussian process models to be applied to millions of datapoints. The efficiency of our algorithm is demonstrated on a synthetic dataset. Its source code has been integrated into our popular software library GPy.
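The "modular computational structure with respect to datapoints" can be sketched as follows: in sparse GP formulations, the expensive sufficient statistics are sums of per-datapoint terms, so the data can be partitioned into chunks whose partial results are computed independently (on separate cores or a GPU) and then summed. The snippet below is a minimal NumPy illustration of this idea, not the paper's actual implementation; the kernel, the `psi2`-style statistic, and all function names are assumptions for demonstration.

```python
import numpy as np

def rbf(X, Z, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix k(X, Z) (illustrative choice)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def psi2_full(X, Z):
    """An M x M statistic of the form sum_n k(Z, x_n) k(x_n, Z)^T."""
    Knm = rbf(X, Z)        # N x M cross-covariance
    return Knm.T @ Knm     # the matrix product is a sum over datapoints

def psi2_chunked(X, Z, n_chunks=4):
    """Same statistic, but each chunk's contribution is independent,
    so the chunks could run on separate workers (embarrassingly parallel)."""
    parts = [psi2_full(Xc, Z) for Xc in np.array_split(X, n_chunks)]
    return sum(parts)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))   # datapoints
Z = rng.normal(size=(20, 2))     # inducing inputs
assert np.allclose(psi2_full(X, Z), psi2_chunked(X, Z))
```

Because each chunk touches only its own slice of the data, the same decomposition carries over to distributed memory (each node holds one partition) and to GPU kernels that accumulate the partial statistics.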
October 24, 2014 by hgpu