The Scalable Heterogeneous Computing (SHOC) benchmark suite

Anthony Danalis, Gabriel Marin, Collin McCurdy, Jeremy S. Meredith, Philip C. Roth, Kyle Spafford, Vinod Tipparaju, Jeffrey S. Vetter
University of Tennessee, Knoxville, TN and Oak Ridge National Laboratory, Oak Ridge, TN
Proceedings of the 3rd Workshop on General-Purpose Computation on Graphics Processing Units (GPGPU ’10), 2010


@inproceedings{Danalis2010SHOC,
   title={The Scalable Heterogeneous Computing (SHOC) benchmark suite},
   author={Danalis, A. and Marin, G. and McCurdy, C. and Meredith, J.S. and Roth, P.C. and Spafford, K. and Tipparaju, V. and Vetter, J.S.},
   booktitle={Proceedings of the 3rd Workshop on General-Purpose Computation on Graphics Processing Units},
   year={2010}
}

Scalable heterogeneous computing systems, which are composed of a mix of compute devices, such as commodity multicore processors, graphics processors, reconfigurable processors, and others, are gaining attention as one approach to continuing performance improvement while managing the new challenge of energy efficiency. As these systems become more common, it is important to be able to compare and contrast architectural designs and programming systems in a fair and open forum. To this end, we have designed the Scalable HeterOgeneous Computing benchmark suite (SHOC). SHOC’s initial focus is on systems containing graphics processing units (GPUs) and multi-core processors, and on the new OpenCL programming standard. SHOC is a spectrum of programs that test the performance and stability of these scalable heterogeneous computing systems. At the lowest level, SHOC uses microbenchmarks to assess architectural features of the system. At higher levels, SHOC uses application kernels to determine system-wide performance including many system features such as intranode and internode communication among devices. SHOC includes benchmark implementations in both OpenCL and CUDA in order to provide a comparison of these programming models.
