Study of basic vector operations on Intel Xeon Phi and NVIDIA Tesla using OpenCL

Edoardo Coronado, Guillermo Indalecio, Antonio Garcia-Loureiro
Centro de Investigacion en Tecnoloxias da Informacion (CiTIUS), Universidad de Santiago de Compostela, Rua de Jenaro de la Fuente Dominguez, 15782 Santiago de Compostela, Spain
Annals of Multicore and GPU Programming, Vol 2, No 1, 2015

@article{coronado2015study,
   title={Study of basic vector operations on Intel Xeon Phi and NVIDIA Tesla using OpenCL},
   author={Coronado, Edoardo and Indalecio, Guillermo and Garcia-Loureiro, Antonio},
   journal={Annals of Multicore and GPU Programming},
   volume={2},
   number={1},
   pages={66--80},
   year={2015}
}


This work analyses the performance of the basic vector operations AXPY, DOT and SpMV implemented in OpenCL. The code was tested on an NVIDIA Tesla S2050 GPU and an Intel Xeon Phi 3120A coprocessor. Due to the nature of the AXPY function, only two versions were implemented: a routine executed by the CPU and a kernel executed on the aforementioned devices. Their performance was studied for different vector sizes; the results show that the NVIDIA architecture is better suited to smaller vectors and the Intel architecture to larger ones. For the DOT and SpMV functions, three versions were implemented: a CPU routine, an OpenCL kernel that uses local memory, and an OpenCL kernel that uses only global memory. The kernels that use local memory were tested by varying the work-group size; the kernels that use only global memory were tested by varying the array sizes. For the former, the results identify the optimum work-group size and show that the NVIDIA architecture benefits from the use of local memory. For the latter, the results show that larger computational loads favour the Intel architecture.
