
Efficient Static and Dynamic Memory Management Techniques for Multi-GPU Systems

Max Grossman, Mauricio Araya-Polo
Dept. of Computer Science – MS 132, Rice University, P.O. Box 1892, Houston, TX 77251, USA
Runtime Systems for Extreme Scale Programming Models and Architectures Workshop at SC15, 2015

@inproceedings{grossman2015efficient,
   title={Efficient Static and Dynamic Memory Management Techniques for Multi-GPU Systems},
   author={Grossman, Max and Araya-Polo, Mauricio},
   booktitle={Runtime Systems for Extreme Scale Programming Models and Architectures Workshop at SC15},
   year={2015}
}


There are four trends in modern high-performance computing (HPC) that have led to an increased need for efficient memory management techniques for heterogeneous systems (such as those fitted with GPUs). First, the average size of datasets for HPC applications is rapidly increasing. Read-only input matrices that used to be on the order of megabytes or a few gigabytes are growing into the double-digit gigabyte range and beyond. Second, HPC applications are continually required to be more and more accurate. This trend leads to larger working set sizes in memory as the resolution of stored and computed data becomes finer. Third, no matter how close accelerators are to the CPU, memory address spaces are still incoherent, and automated memory management systems do not yet match the performance of hand-crafted solutions for HPC applications. Fourth, while the physical memory capacity of accelerators is growing, it is not keeping pace with application working set sizes. Taken together, these four trends lead to the conclusion that future supercomputers will rely heavily on efficient memory management for accelerators to handle future working set sizes, and that new techniques in this field are required. In this paper we describe, evaluate, and discuss memory management techniques for two common classes of scientific computing applications. The first class is the simpler of the two and assumes that the locations of all memory accesses are known prior to a GPU kernel launch. The second class is characterized by an access pattern that is not predictable before performing the actual computation. We focus on supporting datasets that do not fit in the physical memory of current GPUs and that are used in applications exhibiting both of these access patterns. Our approach treats GPU global memory as a cache for a large dataset stored in system memory. We evaluate the techniques described in this paper on a production (industrial-strength) geophysics application as part of a larger GPU implementation. Our results demonstrate that these techniques flexibly support out-of-core datasets while minimizing overhead, future-proofing the target application against future generations of GPUs and dataset size increases. Using these out-of-core memory management techniques yields 80-100% GPU memory utilization while adding 7-13% overhead. These overheads are offset by the performance gains from using GPUs, and the memory management techniques described in this paper improve the flexibility of the overall application.
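To make the cache analogy concrete, the following CUDA sketch illustrates the first (static) class, where the access locations are known before each kernel launch and a fixed-size device buffer is used as a staging cache for a host-resident array that exceeds GPU memory. This is only a minimal sketch under stated assumptions, not the paper's implementation; the kernel, sizes, and helper names (scale_kernel, CUDA_CHECK, chunk_elems) are illustrative.

```cpp
// Sketch: stream an out-of-core host array through a fixed-size device buffer.
#include <cuda_runtime.h>
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define CUDA_CHECK(call)                                               \
    do {                                                               \
        cudaError_t err_ = (call);                                     \
        if (err_ != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error %s at %s:%d\n",                \
                    cudaGetErrorString(err_), __FILE__, __LINE__);     \
            exit(1);                                                   \
        }                                                              \
    } while (0)

// Example kernel: one element per thread, so the accesses of each chunk are
// known before launch (the "static" access-pattern class).
__global__ void scale_kernel(float *data, size_t n, float factor) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t total_elems = 1ull << 28;   // dataset larger than the device buffer
    const size_t chunk_elems = 1ull << 24;   // capacity of the device-side "cache"

    std::vector<float> host(total_elems, 1.0f);

    float *d_buf = nullptr;
    CUDA_CHECK(cudaMalloc(&d_buf, chunk_elems * sizeof(float)));

    for (size_t off = 0; off < total_elems; off += chunk_elems) {
        size_t n = std::min(chunk_elems, total_elems - off);

        // Stage the next chunk into the device buffer (cache fill).
        CUDA_CHECK(cudaMemcpy(d_buf, host.data() + off, n * sizeof(float),
                              cudaMemcpyHostToDevice));

        // Compute on the resident chunk.
        int threads = 256;
        int blocks = (int)((n + threads - 1) / threads);
        scale_kernel<<<blocks, threads>>>(d_buf, n, 2.0f);
        CUDA_CHECK(cudaGetLastError());

        // Write the chunk back before it is evicted.
        CUDA_CHECK(cudaMemcpy(host.data() + off, d_buf, n * sizeof(float),
                              cudaMemcpyDeviceToHost));
    }

    CUDA_CHECK(cudaFree(d_buf));
    return 0;
}
```

In practice, pinned host memory and multiple CUDA streams would be used to overlap transfers with computation, which is how the staging overhead can be kept low. The second (dynamic) class cannot be pre-staged this way, since the accessed locations only become known during the computation, so cache fills must be resolved on demand rather than before the kernel launch.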
