Data Regression with Normal Equation on GPU using CUDA
School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
IRACST – International Journal of Computer Science and Information Technology & Security (IJCSITS), Vol. 2, No. 2, 2012
@article{mohan2012data,
  title={Data Regression with Normal Equation on GPU using CUDA},
  author={Mohan, Vaibhav and Gupta, Mayank},
  journal={IRACST -- International Journal of Computer Science and Information Technology \& Security (IJCSITS)},
  volume={2},
  number={2},
  year={2012}
}
Demand in the consumer market for graphics hardware that accelerates the rendering of 3D images has resulted in graphics cards capable of delivering astonishing levels of performance. These results were achieved by tailoring the hardware specifically to the target domain. As graphics accelerators have become increasingly programmable, however, this performance has made them an attractive target for other domains. Graphics processing units provide a low-cost parallel computing architecture: massive SIMD (Single Instruction, Multiple Data) parallelism can be achieved on a General-Purpose Graphics Processing Unit (GPGPU) working alongside the Central Processing Unit (CPU). In this implementation, the Normal Equation algorithm is used to parallelize data regression over a given dataset using the Compute Unified Device Architecture (CUDA) programming model, which employs multithreading. The Normal Equation is a closed-form method for fitting a regression model, used to predict, forecast, and mine large amounts of data, and its CUDA implementation can achieve high performance. Here, the Normal Equation is implemented both on the Graphics Processing Unit (GPU) and on the CPU to process the given datasets and predict patterns by finding the weights of the regression model, and the computation time is compared in both cases.
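For context, the Normal Equation gives the regression weights in closed form as w = (X^T X)^(-1) X^T y, where X is the m x n design matrix of samples and y the vector of targets. The paper's own source is not reproduced on this page, so the following is only a minimal sketch of how the data-parallel part of that computation might look in CUDA under the assumptions stated in the comments: the GPU forms the Gram matrix X^T X and the vector X^T y (the steps that scale with the number of samples m), while the small n x n system is solved on the CPU. The kernel names, launch configuration, and the toy dataset are all illustrative, not taken from the paper.

```cuda
// Minimal sketch, not the authors' code. Assumes X is stored row-major
// (m samples x n features) and n is small enough that the final n x n
// solve is cheap on the CPU.
#include <cstdio>
#include <cuda_runtime.h>

// One thread per (i, j) entry of the n x n Gram matrix A = X^T X.
__global__ void gramKernel(const float *X, float *A, int m, int n) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n || j >= n) return;
    float s = 0.0f;
    for (int k = 0; k < m; ++k)          // reduce over the m samples
        s += X[k * n + i] * X[k * n + j];
    A[i * n + j] = s;
}

// One thread per entry of b = X^T y.
__global__ void xtyKernel(const float *X, const float *y, float *b, int m, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float s = 0.0f;
    for (int k = 0; k < m; ++k)
        s += X[k * n + i] * y[k];
    b[i] = s;
}

// Plain Gaussian elimination (no pivoting) for the small system A w = b.
static void solve(float *A, float *b, float *w, int n) {
    for (int p = 0; p < n; ++p)
        for (int r = p + 1; r < n; ++r) {
            float f = A[r * n + p] / A[p * n + p];
            for (int c = p; c < n; ++c) A[r * n + c] -= f * A[p * n + c];
            b[r] -= f * b[p];
        }
    for (int r = n - 1; r >= 0; --r) {   // back-substitution
        float s = b[r];
        for (int c = r + 1; c < n; ++c) s -= A[r * n + c] * w[c];
        w[r] = s / A[r * n + r];
    }
}

int main() {
    const int m = 4, n = 2;              // toy dataset: y ~ w0 + w1 * x
    float hX[m * n] = {1,1, 1,2, 1,3, 1,4};  // leading column of ones = intercept
    float hy[m]     = {2.1f, 3.9f, 6.2f, 7.8f};
    float *dX, *dy, *dA, *db;
    cudaMalloc(&dX, sizeof(hX)); cudaMalloc(&dy, sizeof(hy));
    cudaMalloc(&dA, n * n * sizeof(float)); cudaMalloc(&db, n * sizeof(float));
    cudaMemcpy(dX, hX, sizeof(hX), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, sizeof(hy), cudaMemcpyHostToDevice);

    dim3 threads(16, 16), grid((n + 15) / 16, (n + 15) / 16);
    gramKernel<<<grid, threads>>>(dX, dA, m, n);
    xtyKernel<<<(n + 255) / 256, 256>>>(dX, dy, db, m, n);

    float hA[n * n], hb[n], w[n];
    cudaMemcpy(hA, dA, sizeof(hA), cudaMemcpyDeviceToHost);
    cudaMemcpy(hb, db, sizeof(hb), cudaMemcpyDeviceToHost);
    solve(hA, hb, w, n);                 // w = (X^T X)^{-1} X^T y
    printf("w0 = %f, w1 = %f\n", w[0], w[1]);
    cudaFree(dX); cudaFree(dy); cudaFree(dA); cudaFree(db);
    return 0;
}
```

On the toy data above this prints weights close to w0 = 0.15, w1 = 1.94, i.e. the least-squares line through the four points. Splitting the work this way reflects the usual design choice for the Normal Equation on GPUs: the O(m n^2) accumulation over samples is what benefits from SIMD parallelism, while the O(n^3) inversion of a small Gram matrix is not worth offloading.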
May 11, 2012 by hgpu