Parallel Quadtree Coding of Large-Scale Raster Geospatial Data on Multicore CPUs and GPGPUs

Jianting Zhang, Simin You, Le Gruenwald
Dept. of Computer Science, City College of New York, New York City, NY, 10031
19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS 2011), 2011

@inproceedings{zhang2011parallel,

   title={Parallel Quadtree Coding of Large-Scale Raster Geospatial Data on Multicore CPUs and GPGPUs},

   author={Zhang, J. and You, S. and Gruenwald, L.},

   booktitle={Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS 2011)},

   year={2011}

}

Global remote sensing and large-scale environmental modeling have generated huge amounts of raster geospatial data. While the inherent data parallelism of large-scale raster geospatial data allows straightforward coarse-grained parallelization at the chunk level on CPUs, it is largely unclear how to effectively exploit such data parallelism on massively parallel General Purpose Graphics Processing Units (GPGPUs), which require fine-grained parallelization. In this study, we have developed an efficient spatial data structure called BQ-Tree to code raster geospatial data by exploiting the uniform distributions of quadrants of bitmaps at the bitplanes of a raster. In addition to utilizing chunk-level coarse-grained parallelism on both multicore CPUs and GPGPUs, we have also developed two fine-grained parallelization schemes and their four implementations using different system- and application-level optimization strategies. Experiments show that the best GPGPU implementation is capable of decoding a BQ-Tree encoded 16-bit NASA MODIS geospatial raster with 22,658×15,586 cells in 190 milliseconds, i.e., 1.86 billion cells per second, on an Nvidia C2050 GPU card. The performance achieves a 6X speedup when compared with the best dual quad-core CPU implementation and 239-290X speedups when compared with a baseline single-thread CPU implementation.
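The core idea behind quadtree coding of a bitplane bitmap can be illustrated with a minimal sketch. This is not the paper's implementation (the authors' BQ-Tree uses a compact serialized encoding and GPU-oriented layouts); it is a hedged, hypothetical example showing how quadrants that are uniformly 0 or uniformly 1 can be coded with a single symbol instead of being subdivided:

```python
# Hypothetical sketch of quadtree coding for one bitplane bitmap.
# A quadrant that is all 0s or all 1s is coded with one symbol;
# a mixed quadrant is split into four child quadrants (NW, NE, SW, SE).

def encode_quadrant(bitmap, r, c, size):
    """Recursively code a size x size quadrant of a binary bitmap.

    Returns '0' for an all-zero quadrant, '1' for an all-one quadrant,
    or a 4-tuple of child codes (NW, NE, SW, SE) for a mixed quadrant.
    Assumes size is a power of two.
    """
    cells = [bitmap[r + i][c + j] for i in range(size) for j in range(size)]
    if all(v == 0 for v in cells):
        return '0'
    if all(v == 1 for v in cells):
        return '1'
    half = size // 2
    return (encode_quadrant(bitmap, r, c, half),                 # NW
            encode_quadrant(bitmap, r, c + half, half),          # NE
            encode_quadrant(bitmap, r + half, c, half),          # SW
            encode_quadrant(bitmap, r + half, c + half, half))   # SE

# Example: a 4x4 bitmap whose NW, NE, and SW quadrants are uniform,
# so only the mixed SE quadrant needs to be subdivided.
bitmap = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [0, 0, 0, 1],
          [0, 0, 1, 0]]
print(encode_quadrant(bitmap, 0, 0, 4))
```

In a real raster, each bitplane of a multi-bit raster (e.g., 16-bit MODIS cells) would be extracted into its own bitmap and coded this way; uniform regions, which are common at high-order bitplanes, collapse into single nodes and give the compression the paper exploits.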
