
A Framework for Lattice QCD Calculations on GPUs

F. T. Winter, M. A. Clark, R. G. Edwards, B. Joó
Thomas Jefferson National Accelerator Facility, Newport News, VA, USA
arXiv:1408.5925 [hep-lat] (25 Aug 2014)

@article{2014arXiv1408.5925W,
   author = {{Winter}, F.~T. and {Clark}, M.~A. and {Edwards}, R.~G. and {Jo{\'o}}, B.},
   title = "{A Framework for Lattice QCD Calculations on GPUs}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1408.5925},
   primaryClass = "hep-lat",
   keywords = {High Energy Physics - Lattice, Computer Science - Mathematical Software, Physics - Computational Physics},
   year = 2014,
   month = aug,
   adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1408.5925W},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Computing platforms equipped with accelerators such as GPUs provide great computational power, but exploiting them for existing scientific applications is not a trivial task. Current GPU programming frameworks such as CUDA C/C++ require low-level programming from the developer in order to achieve high-performance code. As a result, porting applications to GPUs is typically limited to time-dominant algorithms and routines, leaving the remainder unaccelerated, which can open up a serious Amdahl's law issue. The lattice QCD application Chroma allows a different porting strategy to be explored. The layered structure of its software architecture logically separates the data-parallel layer from the application layer: the QCD Data-Parallel software layer (QDP++) provides data types and expressions with stencil-like operations suitable for lattice field theory, and Chroma implements its algorithms in terms of this high-level interface. Thus, by porting the low-level layer, one can effectively move the whole application to a different platform in one step. The QDP-JIT/PTX library, a reimplementation of this low-level layer, provides a framework for lattice QCD calculations on the CUDA architecture. The complete software interface is supported, so applications can be run unaltered on GPU-based parallel computers. This reimplementation was made possible by the availability of a JIT compiler (part of the NVIDIA Linux kernel driver) which translates an assembly-like language (PTX) into GPU code. The expression-template technique is used to build PTX code generators, and a software cache manages the GPU memory. This reimplementation allows us to deploy an efficient implementation of the full gauge-generation program with dynamical fermions on large-scale GPU-based machines such as Titan and Blue Waters, accelerating the algorithm by more than an order of magnitude.
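As a rough illustration of the expression-template technique mentioned in the abstract, the C++ sketch below shows how overloaded operators can capture an entire right-hand side as a type-encoded expression tree, which a code-generation pass then walks to emit one fused kernel for the whole expression. The names (Field, BinOp, assign) are hypothetical and this is not the QDP-JIT/PTX interface; in the actual library the generated code would be PTX handed to the NVIDIA driver's JIT compiler (for instance through the CUDA driver API, e.g. cuModuleLoadDataEx), with a software cache tracking which fields are resident in GPU memory.

    // Minimal sketch of the expression-template technique (illustrative only;
    // Field, BinOp and assign are hypothetical names, not the QDP-JIT/PTX API).
    #include <iostream>
    #include <sstream>
    #include <string>

    struct Field {                         // stands in for a lattice-wide field
        std::string name;
        std::string generate() const {
            std::ostringstream os; os << name << "[i]"; return os.str();
        }
    };

    // A node of the expression tree: captures "lhs op rhs" without evaluating it.
    template <class L, class R>
    struct BinOp {
        const L& lhs;
        const R& rhs;
        char op;
        std::string generate() const {
            std::ostringstream os;
            os << "(" << lhs.generate() << " " << op << " " << rhs.generate() << ")";
            return os.str();
        }
    };

    // Overloaded operators build the tree instead of computing a result.
    template <class L, class R>
    BinOp<L, R> operator+(const L& a, const R& b) { return {a, b, '+'}; }
    template <class L, class R>
    BinOp<L, R> operator*(const L& a, const R& b) { return {a, b, '*'}; }

    // "Assignment" walks the whole tree once and emits a single fused kernel body.
    // A real implementation would emit PTX here, hand the string to the driver's
    // JIT compiler, and launch the resulting kernel.
    template <class Expr>
    void assign(const Field& dst, const Expr& e) {
        std::ostringstream kernel;
        kernel << "for each lattice site i: " << dst.generate()
               << " = " << e.generate() << ";";
        std::cout << kernel.str() << "\n"; // print the generated code instead
    }

    int main() {
        Field a{"a"}, b{"b"}, c{"c"}, d{"d"};
        assign(d, a * b + c);              // one fused kernel for the expression
    }

Fusing the whole expression into a single generated kernel avoids materializing temporaries for each intermediate operation, which is the usual payoff of expression templates and one reason porting only the data-parallel layer can still perform well on GPUs.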
