Reducing thread divergence in a GPU-accelerated branch-and-bound algorithm

Imen Chakroun, Mohand Mezmaz, Nouredine Melab, Ahcène Bendjoudi
INRIA – CNRS : UMR8022 – Université Lille 1 – Sciences et Technologies
hal-00731859, 2012

@article{chakroun:hal-00731859,
   hal_id = {hal-00731859},
   url = {http://hal.inria.fr/hal-00731859},
   title = {Reducing thread divergence in a GPU-accelerated branch-and-bound algorithm},
   author = {Chakroun, Imen and Mezmaz, Mohand and Melab, Nouredine and Bendjoudi, Ahc{\`e}ne},
   language = {English},
   affiliation = {DOLPHIN – INRIA Lille – Nord Europe, Institut de Math{\'e}matiques [Mons], Laboratoire d'Informatique Fondamentale de Lille – LIFL, Centre de recherche sur l'Information Scientifique et Technique – CERIST},
   publisher = {John Wiley \& Sons},
   journal = {Concurrency and Computation: Practice and Experience},
   audience = {international},
   year = {2012},
   pdf = {http://hal.inria.fr/hal-00731859/PDF/cpedoc.pdf}
}


In this paper, we address the design and implementation of GPU-accelerated Branch-and-Bound (B&B) algorithms for solving flow-shop scheduling problems (FSP). Such applications are CPU-time-intensive and highly irregular, whereas GPUs are massively multithreaded accelerators that execute under the SIMD model. A major issue that arises when executing a B&B applied to FSP on a GPU is thread (or branch) divergence. This divergence is caused by the FSP lower-bound function, which contains many irregular loops and conditional instructions. Our challenge is therefore to revisit the design and implementation of B&B applied to FSP so as to reduce thread divergence. Extensive experiments with the proposed approach have been carried out on well-known FSP benchmarks using an Nvidia Tesla C2050 GPU card. Compared to a CPU-based execution, speedups of up to 77.46× are achieved for large problem instances.
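To make the divergence problem concrete, below is a minimal, hypothetical CUDA sketch; it is not taken from the paper, and the kernel names, input data, and the branch-free rewrite are illustrative assumptions only. It contrasts a data-dependent branch, which forces the SIMD hardware to serialize the two paths taken by threads of the same warp, with an equivalent predicated computation in which every thread executes the same instruction stream.

// Hypothetical sketch of warp-level branch divergence and one way to
// remove it. This is NOT the paper's lower-bound kernel.
#include <cstdio>
#include <cuda_runtime.h>

// Divergent version: threads of the same warp take different branches
// depending on their data, so the two paths execute serially.
__global__ void divergent(const int *in, int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (in[i] % 2 == 0)            // data-dependent branch -> divergence
        out[i] = in[i] * 3;
    else
        out[i] = in[i] + 7;
}

// Branch-free rewrite: both results are computed and one is selected,
// so every thread in the warp follows the same instruction stream.
__global__ void uniform(const int *in, int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int even = (in[i] % 2 == 0);   // 1 if even, 0 if odd
    out[i] = even * (in[i] * 3) + (1 - even) * (in[i] + 7);
}

int main() {
    const int n = 1 << 20;
    int *in, *out;
    cudaMallocManaged(&in, n * sizeof(int));
    cudaMallocManaged(&out, n * sizeof(int));
    for (int i = 0; i < n; ++i) in[i] = i;
    divergent<<<(n + 255) / 256, 256>>>(in, out, n);
    uniform<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("out[42] = %d\n", out[42]);  // 42 is even -> 42 * 3 = 126
    cudaFree(in);
    cudaFree(out);
    return 0;
}

The predicated form trades a small amount of extra arithmetic (both branch bodies are evaluated) for a uniform control flow; this pays off when, as in the FSP lower-bound function described above, divergent branches would otherwise be serialized on SIMD hardware.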
