
GPU vs FPGA: A Comparative Analysis for Non-standard Precision

Umar Ibrahim Minhas, Samuel Bayliss, George A. Constantinides
Department of Electrical and Electronic Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ

@incollection{minhas2014gpu,
  title={GPU vs FPGA: A Comparative Analysis for Non-standard Precision},
  author={Minhas, Umar Ibrahim and Bayliss, Samuel and Constantinides, George A.},
  booktitle={Reconfigurable Computing: Architectures, Tools, and Applications},
  pages={298--305},
  year={2014},
  publisher={Springer}
}


FPGAs and GPUs are increasingly used in a range of high performance computing applications. When implementing numerical algorithms on either platform, we can choose to represent operands with different levels of accuracy. A trade-off exists between the numerical accuracy of arithmetic operators and the resources needed to implement them. Where algorithmic requirements for numerical stability are captured in a design description, this trade-off can be exploited to optimize performance by using high-accuracy operators only where they are most required. Support for half and double-double floating-point representations allows additional flexibility to achieve this. The aim of this work is to study the language and hardware support, and the achievable peak performance, for non-standard precisions on a GPU and an FPGA. A compute-intensive program, matrix-matrix multiply, is selected as a benchmark and implemented for a range of matrix sizes. The results show that for sufficiently large matrices, GPUs outperform FPGA-based implementations, but for some smaller matrix sizes, specialized FPGA floating-point operators for half and double-double precision can deliver higher throughput than a GPU implementation.
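
To make the double-double representation concrete, the sketch below shows how such operators are commonly built in software from pairs of doubles using Knuth's error-free TwoSum transformation, with a dot product (the inner loop of matrix-matrix multiply) accumulated at roughly twice double precision. This is an illustration rather than the authors' implementation: the names dd, two_sum, dd_add and dd_dot_kernel are hypothetical, and the rounding error of each product is deliberately ignored.

// Double-double arithmetic sketch for a CUDA GPU. Illustrative only: the
// struct and function names (dd, two_sum, dd_add, dd_dot_kernel) are
// hypothetical, the kernel runs on a single thread, and the rounding error
// of each product is ignored to keep the example short.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct dd { double hi, lo; };   // value is the unevaluated sum hi + lo

// Knuth's TwoSum: computes s and e such that s + e == a + b exactly.
__host__ __device__ inline void two_sum(double a, double b, double &s, double &e) {
    s = a + b;
    double v = s - a;
    e = (a - (s - v)) + (b - v);
}

// Double-double addition with a final renormalization step.
__host__ __device__ inline dd dd_add(dd x, dd y) {
    double s, e;
    two_sum(x.hi, y.hi, s, e);
    e += x.lo + y.lo;
    dd r;
    two_sum(s, e, r.hi, r.lo);
    return r;
}

// Single-thread kernel: accumulate a dot product (the inner loop of a
// matrix-matrix multiply) in double-double precision.
__global__ void dd_dot_kernel(const double *a, const double *b, int n, dd *out) {
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        dd acc = {0.0, 0.0};
        for (int k = 0; k < n; ++k)
            acc = dd_add(acc, dd{a[k] * b[k], 0.0});
        *out = acc;
    }
}

int main() {
    const int n = 1 << 16;
    // Alternate large and unit terms: the unit terms fall below the rounding
    // threshold of a plain double accumulator but survive in the lo part of
    // the double-double accumulator.
    std::vector<double> ha(n), hb(n, 1.0);
    for (int k = 0; k < n; ++k) ha[k] = (k % 2 == 0) ? 1.0e16 : 1.0;

    double plain = 0.0;
    for (int k = 0; k < n; ++k) plain += ha[k] * hb[k];

    double *da, *db; dd *dout;
    cudaMalloc((void **)&da, n * sizeof(double));
    cudaMalloc((void **)&db, n * sizeof(double));
    cudaMalloc((void **)&dout, sizeof(dd));
    cudaMemcpy(da, ha.data(), n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb.data(), n * sizeof(double), cudaMemcpyHostToDevice);

    dd_dot_kernel<<<1, 1>>>(da, db, n, dout);

    dd r;
    cudaMemcpy(&r, dout, sizeof(dd), cudaMemcpyDeviceToHost);
    printf("plain double sum:   %.17g\n", plain);
    printf("double-double sum:  %.17g + %.17g\n", r.hi, r.lo);

    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}

A production kernel would parallelize the accumulation across threads and blocks and capture each product's rounding error with an FMA-based two-product step; the single-thread version above only keeps the sketch compact.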