Ocelot: a dynamic optimization framework for bulk-synchronous applications in heterogeneous systems

Gregory F. Diamos, Andrew R. Kerr, Sudhakar Yalamanchili, Nathan Clark
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0250
In Proceedings of the 19th international conference on Parallel architectures and compilation techniques (2010), pp. 353-364.

@conference{diamos2010ocelot,
   title={Ocelot: a dynamic optimization framework for bulk-synchronous applications in heterogeneous systems},
   author={Diamos, G.F. and Kerr, A.R. and Yalamanchili, S. and Clark, N.},
   booktitle={Proceedings of the 19th international conference on Parallel architectures and compilation techniques},
   pages={353--364},
   year={2010},
   organization={ACM}
}

Ocelot is a dynamic compilation framework designed to map the explicitly data parallel execution model used by NVIDIA CUDA applications onto diverse multithreaded platforms. Ocelot includes a dynamic binary translator from Parallel Thread eXecution ISA (PTX) to many-core processors that leverages the Low Level Virtual Machine (LLVM) code generator to target x86 and other ISAs. The dynamic compiler is able to execute existing CUDA binaries without recompilation from source and supports switching between execution on an NVIDIA GPU and a many-core CPU at runtime. It has been validated against over 130 applications taken from the CUDA SDK, the UIUC Parboil benchmarks [1], the Virginia Rodinia benchmarks [2], the GPU-VSIPL signal and image processing library [3], the Thrust library [4], and several domain specific applications. This paper presents a high level overview of the implementation of the Ocelot dynamic compiler highlighting design decisions and trade-offs, and showcasing their effect on application performance. Several novel code transformations are explored that are applicable only when compiling explicitly parallel applications and traditional dynamic compiler optimizations are revisited for this new class of applications. This study is expected to inform the design of compilation tools for explicitly parallel programming models (such as OpenCL) as well as future CPU and GPU architectures.
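As a rough illustration of the translate-on-first-use strategy a dynamic compiler of this kind employs, the following sketch caches per-backend translations of a kernel and lets the caller switch execution targets between launches. All names and structure here are hypothetical stand-ins for exposition, not Ocelot's actual API; a real translator would lower PTX to LLVM IR and emit native code rather than wrap a Python function.

```python
# Hypothetical sketch of translate-on-first-use kernel dispatch.
# Names are illustrative only, not Ocelot's real interface.

class TranslationCache:
    """Caches per-backend translations of a kernel (modeled as a plain
    Python function) so translation cost is paid once per backend."""

    def __init__(self):
        self._cache = {}       # (kernel_name, backend) -> callable
        self.translations = 0  # number of translation events

    def _translate(self, kernel, backend):
        # Stand-in for PTX -> LLVM IR -> native code generation:
        # we simply wrap the function and record the event.
        self.translations += 1
        def translated(*args):
            return kernel(*args)
        return translated

    def launch(self, name, kernel, backend, *args):
        key = (name, backend)
        if key not in self._cache:
            self._cache[key] = self._translate(kernel, backend)
        return self._cache[key](*args)


def saxpy(a, x, y):
    # Data-parallel body applied per element (a stand-in for a CUDA kernel).
    return [a * xi + yi for xi, yi in zip(x, y)]


cache = TranslationCache()
r1 = cache.launch("saxpy", saxpy, "x86", 2.0, [1, 2], [3, 4])  # translates
r2 = cache.launch("saxpy", saxpy, "x86", 2.0, [1, 2], [3, 4])  # cache hit
r3 = cache.launch("saxpy", saxpy, "gpu", 2.0, [1, 2], [3, 4])  # retranslates
```

The point of the cache keyed on (kernel, backend) is that switching between the GPU and a many-core CPU at runtime only incurs translation cost the first time each device is targeted.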
