Impact of communication times on mixed CPU/GPU applications scheduling using KAAPI

David Beniamine
Université Joseph Fourier
Master's thesis (Mémoire de Master 2), Université Joseph Fourier, Grenoble I, 2013

@mastersthesis{beniamine:hal-00924020,
    hal_id      = {hal-00924020},
    url         = {http://hal.inria.fr/hal-00924020},
    title       = {Impact of communication times on mixed CPU/GPU applications scheduling using KAAPI},
    author      = {Beniamine, David},
    language    = {English},
    affiliation = {MOAIS -- INRIA Grenoble Rh{\^o}ne-Alpes / LIG Laboratoire d'Informatique de Grenoble},
    pages       = {36},
    address     = {Grenoble},
    audience    = {nationale},
    year        = {2013},
    month       = {Jun},
    pdf         = {http://hal.inria.fr/hal-00924020/PDF/report.pdf}
}

High Performance Computing machines increasingly use Graphics Processing Units (GPUs), which are very efficient for homogeneous computations such as matrix operations. However, before using these accelerators, one has to transfer data from the processor to them, and such transfers can be slow. In this report, our aim is to study the impact of communication times on the makespan of a schedule. Indeed, by better anticipating these communications, we could use GPUs even more efficiently. More precisely, we focus on machines with one or more GPUs and on applications with a low ratio of computation to communication. During this study, we implemented two offline scheduling algorithms within XKAAPI's runtime. We then conducted an experimental study, combining these algorithms to highlight the impact of communication times. Our study shows that communication-aware scheduling algorithms can substantially reduce the makespan of an application: our experiments show a reduction of up to 64% on a machine with several GPUs executing homogeneous computations.
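To illustrate the idea of communication-aware scheduling, here is a minimal sketch of a greedy scheduler that, for each task, picks the CPU or the GPU by comparing completion times *including* the host-to-device transfer cost. This is not XKAAPI's actual algorithm; the `Task` structure, the task timings, and the assumed PCIe bandwidth are all illustrative.

```python
# Hedged sketch: greedy communication-aware scheduling for one CPU and one
# GPU. Not XKAAPI's actual algorithm; all numbers here are assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpu_time: float   # execution time on the CPU (seconds)
    gpu_time: float   # execution time on the GPU (seconds)
    data_mb: float    # input data to copy host-to-device if run on the GPU

PCIE_MB_PER_S = 6000.0  # assumed effective host-to-device bandwidth

def schedule(tasks):
    """Assign each task to the resource that finishes it earliest,
    charging the GPU option for its data transfer; return the plan
    and the resulting makespan."""
    cpu_free, gpu_free = 0.0, 0.0
    plan = []
    for t in tasks:
        cpu_done = cpu_free + t.cpu_time
        # The GPU option pays the host-to-device copy before computing.
        gpu_done = gpu_free + t.data_mb / PCIE_MB_PER_S + t.gpu_time
        if cpu_done <= gpu_done:
            cpu_free = cpu_done
            plan.append((t.name, "CPU"))
        else:
            gpu_free = gpu_done
            plan.append((t.name, "GPU"))
    return plan, max(cpu_free, gpu_free)

tasks = [Task("matmul", cpu_time=2.0, gpu_time=0.2, data_mb=600),
         Task("reduce", cpu_time=0.1, gpu_time=0.08, data_mb=400)]
plan, makespan = schedule(tasks)
# The large matmul is worth the transfer; the small reduce stays on the CPU.
```

A scheduler that ignored the transfer term would send both tasks to the GPU; accounting for communication keeps the small task on the CPU and shortens the makespan, which is the effect the report measures.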
