
Accelerated Nodal Discontinuous Galerkin Simulations for Reverse Time Migration with Large Clusters

Axel Modave, Amik St-Cyr, Wim A. Mulder, Tim Warburton
Rice University, Houston, Texas, USA
arXiv:1506.00907 [physics.comp-ph], 2 Jun 2015

@article{modave2015accelerated,
   title={Accelerated Nodal Discontinuous Galerkin Simulations for Reverse Time Migration with Large Clusters},
   author={Modave, Axel and St-Cyr, Amik and Mulder, Wim A. and Warburton, Tim},
   year={2015},
   month={jun},
   eprint={1506.00907},
   archivePrefix={arXiv},
   primaryClass={physics.comp-ph}
}

Improving both accuracy and computational performance of numerical tools is a major challenge for seismic imaging, and generally requires specialized implementations to make full use of modern parallel architectures. We present a computational strategy for reverse-time migration (RTM) with accelerator-aided clusters. A new imaging condition computed from the pressure and velocity fields is introduced. The forward solver is based on a high-order discontinuous Galerkin time-domain (DGTD) method for the pressure-velocity system with unstructured meshes and multi-rate local time-stepping. We adopt the MPI+X approach for distributed programming, where X is a threaded programming model; in this work we chose OCCA, a unified framework that makes use of major multi-threading languages (e.g., CUDA and OpenCL) and offers the flexibility to run on several hardware architectures. DGTD schemes are well suited to efficient computation with accelerators thanks to localized element-to-element coupling and the dense algebraic operations required for each element. Moreover, compared to high-order finite-difference schemes, the thin halo inherent to the DGTD method reduces both the amount of data to be exchanged between MPI processes and the storage requirements of RTM procedures. The amount of data to be recorded during the simulation is reduced by storing only boundary values in memory, rather than full wavefields on disk, and recreating the forward wavefields from them during back-propagation. Computational results are presented which indicate that these methods exhibit strong scaling up to at least 32 GPUs for a large-scale three-dimensional case.
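
For context on the boundary-storage strategy mentioned in the abstract, the sketch below illustrates in schematic Python how a forward simulation can keep only boundary values of the wavefield in memory and then recreate the forward wavefield during the backward pass while accumulating an image. The time-stepping routines (forward_step, backward_step, adjoint_step) and the array layout are illustrative assumptions, not taken from the paper's OCCA/DGTD code, and the zero-lag cross-correlation of the two wavefields shown here is the conventional imaging condition, standing in for the new pressure-velocity condition introduced in the paper.

import numpy as np

def rtm_boundary_storage(forward_step, backward_step, adjoint_step,
                         u0, nt, boundary_idx):
    # Illustrative RTM loop (not the paper's implementation).
    # The forward wavefield is stored only on the domain boundary,
    # then reconstructed in reverse time and correlated with the
    # back-propagated receiver wavefield.
    u = u0.copy()
    boundary_trace = np.empty((nt, boundary_idx.size))

    # Forward pass: propagate the source wavefield, record boundary values only.
    for n in range(nt):
        u = forward_step(u, n)
        boundary_trace[n] = u[boundary_idx]

    image = np.zeros_like(u)
    q = np.zeros_like(u)              # adjoint (receiver) wavefield

    # Backward pass: re-inject the stored boundary values to reconstruct the
    # forward wavefield while the recorded receiver data are propagated backwards.
    for n in reversed(range(nt)):
        u[boundary_idx] = boundary_trace[n]
        u = backward_step(u, n)       # reverse-time reconstruction of the forward field
        q = adjoint_step(q, n)        # adjoint propagation of the receiver data
        image += u * q                # zero-lag cross-correlation imaging condition

    return image

In the paper itself, the propagation steps correspond to the DGTD pressure-velocity solver with multi-rate local time-stepping executed through OCCA kernels; the sketch only captures the data flow of the boundary-storage and imaging stages.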
