Performance comparison of GPU and FPGA architectures for the SVM training problem
Dept. of Electr. & Electron. Eng., Imperial Coll. London, London, UK
International Conference on Field-Programmable Technology, 2009. FPT 2009
@inproceedings{papadonikolakis2009performance,
title={Performance Comparison of GPU and FPGA architectures for the SVM Training Problem},
author={Papadonikolakis, M. and Bouganis, C.S. and Constantinides, G.},
booktitle={2009 International Conference on Field-Programmable Technology (FPT 2009)},
pages={388--391},
organization={IEEE},
year={2009}
}
The Support Vector Machine (SVM) is a popular supervised learning method that provides high accuracy in many classification and regression tasks. However, its training phase is computationally expensive. In this work, we focus on accelerating the training phase and target a geometric approach to SVM training based on Gilbert's Algorithm, due to the high parallelization potential of its computationally heavy tasks. The algorithm is mapped onto two of the most popular parallel processing devices, a graphics processor (GPU) and an FPGA. The evaluation identifies the best choice of device under different configurations. When no chunking techniques are applied to the training set, the final speedup depends on the problem size, with the largest speedups achieved for small problems.
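For readers unfamiliar with the geometric formulation, the sketch below illustrates the core of Gilbert's Algorithm: finding the minimum-norm point of a convex hull, to which separable SVM training reduces via the Minkowski difference of the two classes. This is not the authors' GPU or FPGA implementation; the function and variable names, the NumPy formulation, and the toy data are illustrative assumptions. The support step, a reduction over independent dot products, is the kind of data-parallel kernel the paper's device mappings exploit.

```python
# Minimal NumPy sketch of Gilbert's Algorithm for the minimum-norm-point
# problem; names and the toy data below are illustrative assumptions,
# not the paper's GPU/FPGA implementation.
import numpy as np

def gilbert_min_norm_point(points, n_iters=1000, tol=1e-6):
    """Approximate the point of minimum norm in conv(points).

    points: (n, d) array whose rows span the convex set of interest.
    """
    w = points[0].astype(float)          # start from an arbitrary hull point
    for _ in range(n_iters):
        # Support step: all n dot products are independent, which is the
        # data-parallel workload a GPU or FPGA mapping would accelerate.
        s = points[np.argmin(points @ w)]
        # Duality-gap stopping rule: w is near-optimal when w.(w - s) is small.
        if w @ (w - s) <= tol * max(np.linalg.norm(w), 1.0):
            break
        # Line-search step: closest point to the origin on the segment [s, w].
        d = w - s
        t = np.clip((w @ d) / (d @ d), 0.0, 1.0)
        w = w - t * d
    return w

# Toy usage: two linearly separable 2-D classes. For the separable case,
# SVM training reduces to the minimum-norm point of the Minkowski
# difference {x_i - x_j : y_i = +1, y_j = -1} (no chunking applied).
rng = np.random.default_rng(0)
pos = rng.normal(loc=+2.0, size=(30, 2))
neg = rng.normal(loc=-2.0, size=(30, 2))
diff = (pos[:, None, :] - neg[None, :, :]).reshape(-1, 2)
w = gilbert_min_norm_point(diff)
print("separating direction ~", w / np.linalg.norm(w))
```

Clipping the line-search parameter to [0, 1] keeps each iterate inside the convex hull, and the per-iteration cost is dominated by the n independent dot products, which is why this step parallelizes so readily across GPU threads or FPGA processing elements.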
May 29, 2011 by hgpu