Implementations of a Parallel Algorithm for Computing Euclidean Distance Map in Multicore Processors and GPUs

Duhu Man, Kenji Uda, Hironobu Ueyama, Yasuaki Ito, Koji Nakano
Department of Information Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima, 739-8527, Japan
International Journal of Networking and Computing, Vol 1, No 2, 2011


@article{man2011edm,
   title={Implementations of a Parallel Algorithm for Computing Euclidean Distance Map in Multicore Processors and GPUs},
   author={Man, D. and Uda, K. and Ueyama, H. and Ito, Y. and Nakano, K.},
   journal={International Journal of Networking and Computing},
   volume={1},
   number={2},
   year={2011}
}
Given a 2-D binary image of size n × n, the Euclidean Distance Map (EDM) is a 2-D array of the same size in which each element stores the Euclidean distance to the nearest black pixel. It is known that a sequential algorithm can compute the EDM in O(n^2) time, and since the input contains n^2 pixels, this algorithm is optimal. Work-time optimal parallel algorithms for the shared memory model have also been presented, but they are too complicated to implement on existing shared memory parallel machines. The main contribution of this paper is to develop a simple parallel algorithm for the EDM and to implement it on two different parallel platforms: multicore processors and Graphics Processing Units (GPUs). We have implemented our parallel algorithm on a Linux server with four Intel six-core processors (Intel Xeon X7460, 2.66 GHz), and also on two modern GPU systems, a Tesla C1060 and a GTX 480. The experimental results show that, for an input binary image of size 9216 × 9216, our implementation on the multicore system achieves a speedup factor of 18 over a sequential algorithm running on a single processor of the same system. For the same input image, our implementation on the GPU achieves a speedup factor of 26 over the sequential implementation.
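To make the EDM definition concrete, the following is a minimal Python sketch of a common two-phase formulation (a vertical column scan followed by a row-wise minimisation). This is an illustrative reference version, not the paper's algorithm: the naive row phase makes it O(n^3) overall, whereas the paper's sequential baseline runs in the optimal O(n^2) time. All function and variable names here are for illustration only.

```python
import math

def edm(image):
    """Euclidean Distance Map of a binary image (1 = black pixel).

    Two-phase sketch: phase 1 finds, for every pixel, the vertical
    distance to the nearest black pixel in its column; phase 2 combines
    those distances across each row. Naive and O(n^3); shown only to
    illustrate what the EDM stores.
    """
    n = len(image)
    INF = float("inf")

    # Phase 1: nearest black pixel in the same column (two linear scans).
    col = [[INF] * n for _ in range(n)]
    for j in range(n):
        d = INF
        for i in range(n):              # top-down scan
            d = 0 if image[i][j] else d + 1
            col[i][j] = d
        d = INF
        for i in range(n - 1, -1, -1):  # bottom-up scan
            d = 0 if image[i][j] else d + 1
            col[i][j] = min(col[i][j], d)

    # Phase 2: for each pixel, minimise col[i][k]^2 + (j-k)^2 over its row.
    dist = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            best = min(col[i][k] ** 2 + (j - k) ** 2 for k in range(n))
            dist[i][j] = math.sqrt(best)
    return dist
```

For a 2 × 2 image with a single black pixel at the top-left corner, the map holds 0 at that pixel, 1 at its horizontal and vertical neighbours, and sqrt(2) at the diagonal neighbour. The row-wise minimisation in phase 2 is exactly the step that the paper's algorithm (and efficient sequential algorithms) reduce from O(n) per pixel to amortised O(1) per pixel via lower-envelope techniques.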
