GPU Implementation of Gaussian Processes

Ying Deng, Edward Khon, Yuanruo Liang, Terence Lim, Jun Wei Ng, Lixiaonan Yin
Imperial College London
M.Sc. Group Project Final Report, 2014

@article{deng2014gpu,
   title={GPU Implementation of Gaussian Processes},
   author={Deng, Ying and Khon, Edward and Liang, Yuanruo and Lim, Terence and Ng, Jun Wei and Yin, Lixiaonan},
   year={2014}
}

Gaussian process models (henceforth Gaussian Processes) provide a probabilistic, non-parametric framework for inferring posterior distributions over functions from general prior information and observed noisy function values. This, however, comes with a computational burden of O(N³) for training and O(N²) for prediction, where N is the size of the training set [1]. The method therefore does not lend itself well to problems where N is large – a common occurrence in many modern machine learning or ‘big data’ problems. There are two routes to addressing this challenge: (1) the use of approximations, or (2) the exploitation of modern processors, which is the route we explore in this project. Modern graphics processing units (GPUs) have been shown to achieve performance improvements of up to two orders of magnitude in various applications by performing massively parallel computations on a large number of cores [2]. In this project, we investigate whether the parallel processing power of these GPUs can be suitably exploited to scale Gaussian Processes to larger data sets. We also aim to develop a GPU-accelerated software package for exact GP regression.
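To illustrate where the O(N³) training and O(N²) prediction costs cited above come from, the following is a minimal NumPy sketch of exact GP regression with a squared-exponential kernel. It is not the authors' implementation; the kernel choice, hyperparameter values, and function names are illustrative assumptions. The cost structure, however, is generic to exact GPs: the Cholesky factorisation of the N×N covariance matrix dominates training, and the triangular solves dominate prediction.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel: K[i,j] = variance * exp(-||x_i - x_j||^2 / (2 l^2)).
    Kernel choice and hyperparameters here are illustrative, not from the report."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_regress(X, y, X_star, noise=0.1):
    """Exact GP regression (Rasmussen & Williams, Algorithm 2.1 style)."""
    N = X.shape[0]
    K = rbf_kernel(X, X) + noise**2 * np.eye(N)
    L = np.linalg.cholesky(K)                            # O(N^3): the training bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # O(N^2) triangular solves
    K_s = rbf_kernel(X, X_star)
    mean = K_s.T @ alpha                                 # posterior mean at test points
    v = np.linalg.solve(L, K_s)                          # O(N^2) per test point overall
    var = np.diag(rbf_kernel(X_star, X_star) - v.T @ v)  # posterior marginal variances
    return mean, var

# Toy regression problem: noisy samples of sin(x).
rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 50)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.normal(size=50)
mu, var = gp_regress(X, y, X)
```

Because the Cholesky step is a dense linear-algebra kernel, it is exactly the kind of operation that maps well onto GPU libraries such as cuBLAS/cuSOLVER, which is the opportunity the project investigates.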