Processing MPI Derived Datatypes on Noncontiguous GPU-Resident Data
Department of Computer Science, North Carolina State University, Raleigh, NC 27695
Argonne National Laboratory, Preprint ANL/MCS-P5008-0813, 2013
@techreport{jenkins2013processing,
  title={Processing MPI Derived Datatypes on Noncontiguous GPU-Resident Data},
  author={Jenkins, John and Dinan, James and Balaji, Pavan and Peterka, Tom},
  institution={Argonne National Laboratory},
  number={ANL/MCS-P5008-0813},
  year={2013}
}
Driven by the goal of efficient, generic communication of noncontiguous data layouts in GPU memory, for which no solutions currently exist, we present a parallel, noncontiguous data-processing methodology based on the MPI datatypes specification. Our algorithm uses a GPU kernel to pack arbitrary noncontiguous GPU data, enriching the datatype encoding to expose a fine-grained, data-point level of parallelism. Additionally, the typically tree-based datatype encoding is preprocessed to enable efficient, cached access across GPU threads. Using CUDA, we show that our kernel-based method outperforms DMA-based alternatives for several common data layouts, as well as for more complex layouts for which no reasonable DMA-based processing exists. Our method incurs low overhead for data layouts that closely match best-case DMA usage or that can be processed by layout-specific implementations. We additionally investigate usage scenarios for data packing that incur resource contention, identifying potential pitfalls for various packing strategies. Finally, we demonstrate the efficacy of kernel-based packing in various communication scenarios, showing multifold improvement in point-to-point communication and evaluating packing within the context of the SHOC stencil benchmark and HACC mesh analysis.
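To make the idea concrete, below is a minimal CUDA sketch of the kind of fine-grained packing the abstract describes; it is not the authors' implementation. Each GPU thread copies one data point of a strided layout (the layout described by MPI_Type_vector) into a contiguous pack buffer, exposing the data-point level of parallelism mentioned above. The kernel name pack_strided and the hard-coded vector layout are illustrative assumptions for this sketch.

// Hypothetical sketch: pack a strided layout (count blocks of blocklen
// elements, each block starting stride elements apart), one element per
// thread. The layout corresponds to what
// MPI_Type_vector(count, blocklen, stride, MPI_DOUBLE, &type) describes.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void pack_strided(const double *src, double *dst,
                             int count, int blocklen, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int total = count * blocklen;
    if (i < total) {
        int block  = i / blocklen;   // which noncontiguous block
        int offset = i % blocklen;   // element within that block
        dst[i] = src[block * stride + offset];
    }
}

int main(void)
{
    const int count = 1024, blocklen = 4, stride = 64;
    double *src, *dst;
    cudaMalloc((void **)&src, count * stride   * sizeof(double));
    cudaMalloc((void **)&dst, count * blocklen * sizeof(double));

    int total   = count * blocklen;
    int threads = 256;
    int blocks  = (total + threads - 1) / threads;
    pack_strided<<<blocks, threads>>>(src, dst, count, blocklen, stride);
    cudaDeviceSynchronize();
    if (cudaGetLastError() != cudaSuccess)
        fprintf(stderr, "pack kernel failed\n");

    cudaFree(src);
    cudaFree(dst);
    return 0;
}

A general implementation along the lines of the paper would instead traverse a preprocessed, flattened form of the tree-based datatype encoding inside the kernel to compute each element's source offset, rather than hard-coding the vector arithmetic as this sketch does.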
September 23, 2013 by hgpu