
Energy-Efficient Collective Reduce and Allreduce Operations on Distributed GPUs

Lena Oden, Benjamin Klenk, Holger Froning
Fraunhofer Institute for Industrial Mathematics, Competence Center High Performance Computing, Kaiserslautern, Germany

@inproceedings{oden2014energy,
   title={Energy-Efficient Collective Reduce and Allreduce Operations on Distributed GPUs},
   author={Oden, Lena and Klenk, Benjamin and Froning, Holger},
   booktitle={Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on},
   pages={483--492},
   year={2014},
   organization={IEEE}
}



GPUs have gained high popularity in High Performance Computing due to their massive parallelism and high performance per Watt. Despite their popularity, data transfer between multiple GPUs in a cluster remains a problem. Most communication models require the CPU to control the data flow, and intermediate staging copies to host memory are often inevitable. These two facts lead to higher CPU and memory utilization; as a result, overall performance decreases and power consumption increases. Collective operations like reduce and allreduce are very common in scientific simulations and very sensitive to performance. Due to their massive parallelism, GPUs are well suited for such operations, but they only excel in performance if they can process the problem in-core. Global GPU Address Spaces (GGAS) enable direct GPU-to-GPU communication for heterogeneous clusters, which is completely in line with the GPU's thread-collective execution model and requires neither CPU assistance nor staging copies in host memory. As we will see, GGAS helps to process collective operations among distributed GPUs in-core. In this paper, we introduce the implementation and optimization of collective reduce and allreduce operations using GGAS as the communication model. Compared to message passing, we achieve a speedup of 1.7x for small data sizes. A detailed analysis based on power measurements of CPU, host memory and GPU reveals that GGAS as a communication model not only saves cycles but also reduces power and energy consumption dramatically. For instance, for an allreduce operation, half of the energy can be saved thanks to the reduced power consumption in combination with the lower run time.
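Since GGAS is a research prototype whose actual interface is not reproduced on this page, the following CUDA sketch only illustrates the idea described in the abstract: an allreduce executed as a single thread-collective kernel, where the GPU pushes partial results straight into a partner GPU's memory through a mapped global address space, with no CPU assistance and no staging copies in host memory. All identifiers (ggas_style_allreduce, remote_buf, my_flag, ...) are hypothetical stand-ins, not the authors' API; the remote pointers are assumed to have been mapped and exchanged on the host beforehand.

// Hedged sketch: recursive-doubling allreduce as one thread-collective CUDA
// kernel, in the spirit of GGAS. remote_buf[s] / remote_flag[s] stand in for
// the stage-s partner's memory mapped into a global address space (in GGAS
// this mapping is provided by the network hardware, not by CUDA itself).
// Single-block version for clarity: launch as <<<1, threads>>>.

#include <cuda_runtime.h>

__global__ void ggas_style_allreduce(float*         local_data,  // this GPU's vector, reduced in place
                                     float* const*  remote_buf,  // remote_buf[s]: partner's inbox for stage s
                                     volatile int** remote_flag, // remote_flag[s]: partner's arrival flag
                                     float* const*  my_buf,      // my_buf[s]: our inbox for stage s
                                     volatile int** my_flag,     // my_flag[s]: our arrival flag
                                     int n, int log2_nodes)
{
    for (int s = 0; s < log2_nodes; ++s) {
        // Push our current partial result directly into the partner's inbox.
        // With GGAS these stores traverse the network without CPU involvement.
        for (int i = threadIdx.x; i < n; i += blockDim.x)
            remote_buf[s][i] = local_data[i];
        __syncthreads();                   // all threads have issued their stores

        if (threadIdx.x == 0) {
            __threadfence_system();        // order our data stores before the flag
            *remote_flag[s] = 1;           // signal the partner: data has landed
            while (*my_flag[s] == 0) ;     // spin until the partner's data lands
            __threadfence_system();        // acquire side: data before flag reset
            *my_flag[s] = 0;               // reset (real code would double-buffer)
        }
        __syncthreads();                   // every thread now sees partner data

        // Combine the partner's contribution in-core on the GPU.
        for (int i = threadIdx.x; i < n; i += blockDim.x)
            local_data[i] += my_buf[s][i];
        __syncthreads();
    }
}

After log2(P) such recursive-doubling stages every GPU holds the full sum, which matches the abstract's point: the whole collective runs inside a single kernel, so the CPU and host memory stay idle during communication and their power draw drops accordingly.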