CUDA Accelerated Robot Localization and Mapping

Haiyang Zhang, Fred Martin
Computer Science Department, University of Massachusetts Lowell, Lowell, MA 01854, USA
IEEE International Conference on Technologies for Practical Robot Applications (TePRA 2013), 2013

@inproceedings{zhang2013cuda,
   title={CUDA Accelerated Robot Localization and Mapping},
   author={Zhang, Haiyang and Martin, Fred},
   booktitle={IEEE International Conference on Technologies for Practical Robot Applications (TePRA)},
   year={2013}
}

We present a method to accelerate robot localization and mapping using CUDA (Compute Unified Device Architecture), the general-purpose parallel computing platform for NVIDIA GPUs. In robotics, the particle filter-based SLAM (Simultaneous Localization and Mapping) algorithm has many applications but is computationally intensive. Prior work has used CUDA to accelerate various robot applications, but particle filter-based SLAM has not yet been implemented on CUDA. Because the computations on individual particles are independent of one another in this algorithm, CUDA acceleration should be highly effective. We have implemented the algorithm's most time-consuming step, particle weight calculation, and optimized memory access by using texture memory to alleviate the memory bottleneck and fully leverage the GPU's parallel processing power. Our experiments show a performance increase of an order of magnitude or more. The results indicate that offloading to the GPU is a cost-effective way to improve SLAM performance.
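The paper's source code is not reproduced on this page. As a rough illustration of the approach the abstract describes (one thread per particle, with the occupancy grid map read through the texture cache), the sketch below shows how such a particle weight kernel could be structured in CUDA. The Particle layout, the beam-endpoint scoring model, and all identifiers here are assumptions for illustration only, not the authors' implementation.

#include <cuda_runtime.h>
#include <math.h>

// Hypothetical particle pose; the actual representation used in the paper is not given here.
struct Particle {
    float x, y, theta;   // pose hypothesis in map coordinates
};

// One thread per particle: each thread projects the laser scan from its particle's
// pose and accumulates a match score against the occupancy grid. The grid is read
// through a texture object so that nearby lookups hit the texture cache instead of
// uncoalesced global memory.
__global__ void particle_weight_kernel(const Particle* particles,
                                       const float* scan_ranges,   // beam ranges (m)
                                       const float* scan_angles,   // beam angles (rad)
                                       int num_beams,
                                       cudaTextureObject_t map_tex,
                                       float map_resolution,       // meters per cell
                                       float* weights,
                                       int num_particles)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_particles) return;

    Particle p = particles[i];
    float score = 0.0f;

    for (int b = 0; b < num_beams; ++b) {
        // Endpoint of beam b in world coordinates, assuming this particle's pose.
        float a  = p.theta + scan_angles[b];
        float wx = p.x + scan_ranges[b] * cosf(a);
        float wy = p.y + scan_ranges[b] * sinf(a);

        // Convert to map cells and read occupancy through the texture cache.
        float mx = wx / map_resolution;
        float my = wy / map_resolution;
        score += tex2D<float>(map_tex, mx + 0.5f, my + 0.5f);
    }

    // Unnormalized weight; normalization and resampling happen elsewhere.
    weights[i] = score;
}

A host-side launch such as particle_weight_kernel<<<(num_particles + 255) / 256, 256>>>(...) would evaluate all particles in parallel, which is where the independence between particles noted in the abstract pays off.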
