
MATLAB and Python for GPU Computing

Jose Unpingco, Juan Carlos Chaves
High Performance Technologies, Inc. (HPTi), Wright-Patterson AFB, OH
DoD High Performance Computing Modernization Program Users Group Conference, 2011

@inproceedings{unpingco2011matlab,
   title={MATLAB and Python for GPU Computing},
   author={Unpingco, Jose and Chaves, Juan Carlos},
   booktitle={Users' Group Conference},
   pages={585},
   organization={DTIC Document},
   year={2011}
}


Recent trends in hardware development have led to graphics processing units (GPUs) evolving into highly parallel, multi-core computing platforms suitable for computational science applications. Recently, GPUs such as the NVIDIA Tesla 20-series (with up to 448 cores) have become available to the High Performance Computing Modernization Program (HPCMP) user community. Traditionally, NVIDIA GPUs are programmed using the Compute Unified Device Architecture (CUDA). CUDA is a parallel programming model and software environment, developed by NVIDIA Corporation, that enables programmers to take advantage of the multi-core GPU using the C language. CUDA provides extensions to the C programming language that enable the programmer to write fine-grained parallel algorithms that can be executed using multiple, simultaneous threads on the GPU. Usually, this requires a deep understanding of the CUDA environment. Productivity is strongly influenced by the workflow of the user (e.g., time spent running vs. time spent programming). Therefore, in the Signal/Image Processing (SIP) and other related communities, most users prefer high-productivity languages such as MATLAB or Python for their scientific and technical computation needs. In this paper, we study the feasibility of exploiting NVIDIA GPUs for typical SIP applications using MATLAB or Python. Programming GPUs with MATLAB or Python is a relatively recent development, and only a few solutions are available. PyCUDA allows accessing the NVIDIA CUDA parallel computation API from Python. On the other hand, The MathWorks, Inc. has added MATLAB support for NVIDIA CUDA-enabled GPUs through the Parallel Computing Toolbox (PCT). We investigate the use of these technologies for typical SIP applications, paying special attention to performance and productivity aspects of these approaches. Our preliminary results show that these technologies are viable and have the potential to preserve the high-productivity advantages of MATLAB and Python on GPUs with relatively few, if any, performance tradeoffs.
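As a rough illustration of the workflow the abstract describes (not code from the paper), a minimal PyCUDA sketch might look like the following: a small CUDA C kernel is compiled at runtime and launched on data moved to the GPU from NumPy. The kernel, array size, and launch configuration here are illustrative assumptions only.

    import numpy as np
    import pycuda.autoinit               # initialize a CUDA context on the default device
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule

    # A trivial element-wise kernel written with CUDA's C extensions
    # (hypothetical example; the paper's SIP kernels are more involved).
    mod = SourceModule("""
    __global__ void scale(float *a, float s, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) a[i] *= s;
    }
    """)
    scale = mod.get_function("scale")

    a = np.random.randn(1024).astype(np.float32)
    a_gpu = gpuarray.to_gpu(a)           # copy host array to device memory

    threads = 256
    blocks = (a.size + threads - 1) // threads
    scale(a_gpu, np.float32(2.0), np.int32(a.size),
          block=(threads, 1, 1), grid=(blocks, 1))

    result = a_gpu.get()                 # copy the scaled result back to the host

On the MATLAB side, the Parallel Computing Toolbox offers a comparable high-level pattern: data are moved to the device with gpuArray, operated on with overloaded functions (or user-supplied CUDA kernels), and returned to the workspace with gather.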