TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble

Carlos Couder-Castaneda, Carlos Ortiz-Aleman, Mauricio Gabriel Orozco-del-Castillo, Mauricio Nava-Flores
Mexican Petroleum Institute, Eje Central Lazaro Cardenas 152, Colonia San Bartolo Atepehuacan, 07730 Mexico, DF, Mexico
Journal of Applied Mathematics, Volume 2013, Article ID 437357, 15 pages, 2013


@article{couder2013tesla,
   title={TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble},
   author={Couder-Casta{\~n}eda, Carlos and Ortiz-Alem{\'a}n, Carlos and Orozco-del-Castillo, Mauricio Gabriel and Nava-Flores, Mauricio},
   journal={Journal of Applied Mathematics},
   volume={2013},
   year={2013},
   publisher={Hindawi Publishing Corporation}
}





An implementation with CUDA technology, on a single and on several graphics processing units (GPUs), is presented for the forward modeling of gravitational fields from a three-dimensional volumetric ensemble composed of unitary prisms of constant density. We compared the performance obtained with the GPUs against a previous version coded in OpenMP with MPI, and we analyzed the results on both platforms. Today, the use of GPUs represents a breakthrough in parallel computing, which has led to the development of applications in a variety of fields. Nevertheless, in some applications the decomposition of tasks is not trivial, as is the case in this paper. Instead of a straightforward decomposition of the domain, we propose to decompose the problem by sets of prisms and to use a different memory space per CUDA processing core, avoiding the performance decay caused by the constant kernel-function calls that a parallelization by observation points would require. The design and implementation are the main contributions of this work, because the parallelization scheme is not trivial. The performance obtained is comparable to that of a small processing cluster.
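The decomposition described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it replaces the exact prism formula with a point-mass approximation, and it mimics the scheme serially in plain Python, where each "worker" owns a disjoint subset of prisms and a private accumulator (standing in for the per-core memory spaces), with a single reduction at the end instead of repeated per-observation-point kernel launches. All names (`gz_point`, `forward_by_prism_sets`) are illustrative.

```python
# Hedged sketch of decomposition by prism sets, NOT the paper's CUDA code.
# The vertical gravity of each prism is approximated by a point mass at
# its center; the exact closed-form prism formula is omitted for brevity.

G = 6.674e-11  # gravitational constant, SI units

def gz_point(prism, obs):
    """Vertical gravity at obs from a prism treated as a point mass."""
    px, py, pz, mass = prism           # center coordinates and total mass
    ox, oy, oz = obs
    dx, dy, dz = px - ox, py - oy, pz - oz
    r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
    return G * mass * dz / r3

def forward_by_prism_sets(prisms, obs_points, n_workers=4):
    """Each worker handles a disjoint subset of prisms and accumulates
    into a private array (mimicking one memory space per processing
    core); the partial fields are reduced once at the end."""
    partials = [[0.0] * len(obs_points) for _ in range(n_workers)]
    for w in range(n_workers):
        for prism in prisms[w::n_workers]:      # this worker's prism set
            for i, obs in enumerate(obs_points):
                partials[w][i] += gz_point(prism, obs)
    # single reduction over the per-worker accumulators
    return [sum(p[i] for p in partials) for i in range(len(obs_points))]
```

Because gravity is a linear superposition over sources, splitting the prisms across workers and summing the partial fields yields the same result as a single loop over all prisms, which is what makes this decomposition correct.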

