
Autotuning, Code Generation and Optimizing Compiler Technology for GPUs

Malik Muhammad Zaki Murtaza Khan
University of Southern California, 2012

@phdthesis{khan2012autotuning,
  title  = {Autotuning, Code Generation and Optimizing Compiler Technology for GPUs},
  author = {Khan, Malik Muhammad Zaki Murtaza},
  school = {University of Southern California},
  year   = {2012}
}


Graphics Processing Units (GPUs) have evolved into devices with teraflop-level performance potential. Application developers face the tedious task of correctly identifying parallel computation and optimizing data placement for the parallel processors in such architectures. Further, code optimized for one architecture may perform poorly on a different generation of even the same processor family; many manually tuned GPU solutions require a complete rewrite for a new architecture, costing additional programmer time and effort. High-performance computing on GPUs can be facilitated by programming models and frameworks that reduce the time and effort needed to develop GPU applications.

This thesis describes a compiler framework that automatically generates and optimizes parallel code for GPUs (CUDA code for Nvidia GPUs), relieving programmers of the tedious work of parallelizing sequential code. The framework is a script-based compiler for CUDA code generation that combines: (1) a Transformation Strategy Generator (TSG), which automatically generates multiple scripts representing different optimization strategies; and (2) an autotuning system, which automatically generates a set of code variants and selects the best among them through empirical evaluation. An underlying loop transformation and code generation framework takes TSG-generated scripts as input and produces CUDA code, yielding an end-to-end system that derives high-performance solutions for scientific computations on a given GPU architecture. This flexible organization lets the system explore a large optimization search space, simultaneously targeting different architectural features while constrained by data dependences and guided by data reuse and the best heuristics from manual tuning. The system tailors the generated code to different GPU generations, data types and data sets.

The key contributions of this thesis are: (1) the meta-optimizer, TSG; (2) a search and autotuning mechanism; (3) integration with a script-based compiler framework, resulting in an end-to-end automatic parallelization system; (4) performance-portable code generation for the Nvidia GTX-280 and Nvidia Tesla C2050 (Fermi) architectures; and (5) performance gains of up to 1.84x over linear algebra kernels in the manually tuned Nvidia CUBLAS library, and up to 2.03x over a state-of-the-art GPU compiler on a set of scientific, multimedia and imaging kernels.
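The abstract does not show the script language itself, but the TSG idea of enumerating transformation "scripts" over a search space can be illustrated with a minimal Python sketch. The command names (tile, copy_to_shared, copy_to_registers, unroll), their parameters, and the matrix-multiply target are all assumptions for illustration, not the framework's actual API:

    # Hypothetical sketch: a TSG-style enumerator that emits transformation
    # "scripts" (ordered lists of loop transformations with parameters) for
    # a matrix-multiply kernel. Command names and the parameter space are
    # illustrative assumptions, not the thesis framework's actual interface.
    from itertools import product

    TILE_SIZES = [16, 32, 64]        # candidate thread-block tile widths
    UNROLL_FACTORS = [1, 2, 4, 8]    # candidate inner-loop unroll factors

    def generate_scripts():
        """Yield one optimization strategy per point in the search space."""
        for tile, unroll in product(TILE_SIZES, UNROLL_FACTORS):
            yield [
                ("tile", {"loop": "i", "size": tile}),   # map to thread blocks
                ("tile", {"loop": "j", "size": tile}),
                ("copy_to_shared", {"array": "B"}),      # stage reused data
                ("copy_to_registers", {"array": "C"}),
                ("unroll", {"loop": "k", "factor": unroll}),
            ]

    for script in generate_scripts():
        print(script)  # each script would be handed to the code generator

Each enumerated script represents one optimization strategy; the underlying loop transformation framework would turn it into a concrete CUDA variant.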
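The empirical-evaluation step of the autotuner can likewise be sketched: compile each generated CUDA variant, run it, time it, and keep the fastest. This is a minimal sketch assuming the variants are self-contained .cu benchmark harnesses; the file names, nvcc flags, and wall-clock timing protocol are assumptions, not the thesis's actual measurement methodology:

    # Hypothetical sketch of empirical autotuning: compile each generated
    # CUDA variant with nvcc, execute it, and select the fastest.
    import subprocess
    import time

    def evaluate(variants):
        """variants: paths to .cu files produced by the code generator."""
        best, best_time = None, float("inf")
        for cu_file in variants:
            exe = cu_file.replace(".cu", "")
            # Compile; -arch=sm_20 would target the Tesla C2050 (Fermi).
            subprocess.run(
                ["nvcc", "-O3", "-arch=sm_20", cu_file, "-o", exe],
                check=True)
            start = time.perf_counter()
            subprocess.run([f"./{exe}"], check=True)  # run the harness
            elapsed = time.perf_counter() - start
            if elapsed < best_time:
                best, best_time = exe, elapsed
        return best, best_time

Selecting variants by measurement rather than by a static cost model is what lets the same framework retune for different GPU generations, data types and data sets.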