Iterative SLE Solvers over a CPU-GPU Platform

Alecio P.D. Binotto, Christian Daniel, Daniel Weber, Arjan Kuijper, Andre Stork, Carlos Pereira, Dieter Fellner
Technische Universität Darmstadt, Darmstadt, Germany
12th IEEE International Conference on High Performance Computing and Communications (HPCC), 2010


@inproceedings{binotto2010iterative,

   title={Iterative {SLE} solvers over a {CPU-GPU} platform},

   author={Binotto, A.P.D. and Daniel, C. and Weber, D. and Kuijper, A. and Stork, A. and Pereira, C. and Fellner, D.},

   booktitle={2010 12th IEEE International Conference on High Performance Computing and Communications},

   year={2010}
}








GPUs (Graphics Processing Units) have become one of the main co-processors driving desktop computers toward high performance computing. Together with multi-core CPUs, they form a powerful heterogeneous execution platform for massive calculations. To improve application performance and exploit this heterogeneity, distributing the workload in a balanced way over the PUs (Processing Units) plays an important role. This problem is challenging, however, since the cost of a task on a PU is non-deterministic and can be influenced by several parameters not known a priori, such as the size of the problem domain. We present a comparison of iterative SLE (Systems of Linear Equations) solvers, used in many scientific and engineering applications, on a heterogeneous CPU-GPU platform and characterize the scenarios in which each solver performs best. A new technique to improve memory access in the matrix-vector multiplication used by SLE solvers on GPUs is described and compared to standard CPU and GPU implementations. The timing profiles are analyzed, and break-even points based on problem size are identified, indicating when our technique makes the GPU faster to use than the CPU. Preliminary results show the importance of this study for a real-time CFD (Computational Fluid Dynamics) application with geometry modification.
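The iterative solvers the abstract compares are built around repeated matrix-vector products. As a minimal illustration of that structure (not the paper's GPU memory-access technique, and using plain Python lists rather than a GPU kernel), here is a hypothetical CPU-side Jacobi sketch: each sweep performs one matrix-vector-style pass over A, which is exactly the operation whose GPU cost determines the break-even points discussed above.

```python
def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b by Jacobi iteration.

    A: dense n-by-n matrix (list of rows), assumed diagonally
    dominant so the iteration converges; b: right-hand side.
    Each sweep costs one matrix-vector-product-like pass over A.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = []
        for i in range(n):
            # Row i of the matrix-vector product, excluding the diagonal.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        # Stop when the update is below the tolerance.
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x
```

On a GPU, each row of the sweep can be computed by an independent thread, which is why the memory layout of A dominates performance there.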

