
Heterogeneous FDTD for Seismic Processing

Andreas Berg Skomedal
Department of Computer and Information Science, Norwegian University of Science and Technology, 2013

@article{skomedal2013heterogeneous,
   title={Heterogeneous FDTD for Seismic Processing},
   author={Skomedal, Andreas Berg},
   year={2013}
}

In the early days of computing, scientific calculations were performed on specialized hardware. Increasingly powerful CPUs later took over and dominated for a long time, but scientific computation is no longer confined to the general-purpose CPU. GPUs are specialized processors with their own memory hierarchy that require more programming effort, yet for suitable algorithms they can significantly outperform serially optimized CPU code. In recent years GPUs have also become much easier to program, where previously they had to be targeted through the abstraction of a graphics pipeline. EMGS in Trondheim is an oil-finding services company that analyzes seismic readings of the ocean floor to provide information about possible oil reservoirs. Today, data centers composed of CPU nodes do all of this work, but GPU installations could be both more cost-effective and faster. In this thesis we look at the implementation of the main part of one of their data analysis algorithms, using the FDTD method as implemented in Yee bench [3] by Ulf Andersson. We examine how to adapt it to the GPU using CUDA, how to parallelize the CPU implementation, and how to run the two together efficiently in a heterogeneous setup. We show that the method has great potential on GPUs: speedups just short of 19x over a single CPU thread are achieved in this work. The FDTD method does, however, involve some erratic memory accesses that limit performance compared with the best GPU implementations reported today, which can reach speedups of over 100x, although many of those are also measured against a single CPU core. The order in which memory is accessed is therefore all the more important, and we show that optimizing memory writes still improves performance considerably even when half of the memory reads will not coalesce. We also show that care is needed when scheduling jobs on both the CPU and the GPU of the same node to avoid reducing total performance; using all available resources on the host may not be beneficial. Finally, utilizing several parallel CUDA streams proves effective at hiding much of the overhead and delay caused by a busy CPU and main memory. This work is not a final solution for EMGS's needs; other considerations and options beyond those discussed here are also of interest and are covered in the future work section.
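The abstract's point about memory-access order can be made concrete with a small sketch. The following minimal CUDA kernel for one Yee-style H-field update is illustrative only, not the thesis code: the field names, grid extents (NX, NY, NZ) and the single coefficient cb are assumptions, and the actual discretization and coefficients follow Yee bench [3]. The sketch shows the thread-to-index mapping that makes the field write coalesce, while the shifted neighbour reads are the accesses that cannot all coalesce at once.

// Illustrative sketch, not the thesis implementation. Grid sizes, field
// names and the coefficient cb are assumptions; the real update follows
// Yee bench [3] by Ulf Andersson.
#include <cuda_runtime.h>
#include <cstdio>

#define NX 128
#define NY 128
#define NZ 128
// Row-major layout with k as the fastest-varying (unit-stride) index.
#define IDX(i, j, k) (((size_t)(i) * NY + (j)) * NZ + (k))

__global__ void update_hx(float *hx, const float *ey, const float *ez, float cb)
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;  // unit-stride index
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int i = blockIdx.z;

    if (i < NX - 1 && j < NY - 1 && k < NZ - 1) {
        size_t c = IDX(i, j, k);
        // The write to hx[c] and the reads at the same (i, j, k) are
        // contiguous in k across a warp and coalesce; the shifted reads
        // at k+1 and j+1 are the ones that coalesce less cleanly.
        hx[c] += cb * ((ey[IDX(i, j, k + 1)] - ey[c])
                     - (ez[IDX(i, j + 1, k)] - ez[c]));
    }
}

int main()
{
    size_t n = (size_t)NX * NY * NZ;
    float *hx, *ey, *ez;
    cudaMalloc(&hx, n * sizeof(float));
    cudaMalloc(&ey, n * sizeof(float));
    cudaMalloc(&ez, n * sizeof(float));
    cudaMemset(hx, 0, n * sizeof(float));
    cudaMemset(ey, 0, n * sizeof(float));
    cudaMemset(ez, 0, n * sizeof(float));

    dim3 block(32, 8, 1);                        // 32 threads along k for coalesced writes
    dim3 grid((NZ + 31) / 32, (NY + 7) / 8, NX);
    update_hx<<<grid, block>>>(hx, ey, ez, 0.1f);
    cudaDeviceSynchronize();
    printf("kernel done: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(hx); cudaFree(ey); cudaFree(ez);
    return 0;
}

In the heterogeneous setup the abstract describes, such kernel launches and the associated host-device copies would additionally be issued on several CUDA streams (cudaStreamCreate with cudaMemcpyAsync) so that transfers overlap with computation on both devices; the exact partitioning between CPU and GPU work is the thesis's own contribution and is not reproduced here.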