MATLAB and Python for GPU Computing

Jose Unpingco, Juan Carlos Chaves
High Performance Technologies, Inc. (HPTi), Wright-Patterson AFB, OH
DoD High Performance Computing Modernization Program Users Group Conference, 2011
@inproceedings{unpingco2011matlab,
  title={MATLAB and Python for GPU Computing},
  author={Unpingco, Jose and Chaves, Juan Carlos},
  booktitle={DoD High Performance Computing Modernization Program Users Group Conference},
  pages={585},
  organization={DTIC Document},
  year={2011}
}


Recent trends in hardware development have led to graphics processing units (GPUs) evolving into highly-parallel, multi-core computing platforms suitable for computational science applications. Recently, GPUs such as the NVIDIA Tesla 20-series (with up to 448 cores) have become available to the High Performance Computing Modernization Program (HPCMP) user community. Traditionally, NVIDIA GPUs are programmed using the Compute Unified Device Architecture (CUDA). CUDA is a parallel programming model and software environment, developed by NVIDIA Corporation, that enables programmers to take advantage of the multi-core GPU using the C language. CUDA provides extensions to the C programming language that enable the programmer to write fine-grained parallel algorithms that can be executed using multiple, simultaneous threads on the GPU. Usually, this requires a deep understanding of the CUDA environment. Productivity is strongly influenced by the workflow of the user (e.g., time spent running vs. time spent programming). Therefore, in the Signal/Image Processing (SIP) and other related communities, most users prefer high-productivity languages such as MATLAB or Python for their scientific and technical computation needs. In this paper we study the feasibility of exploiting NVIDIA GPUs for typical SIP applications using MATLAB or Python. Programming GPUs with MATLAB or Python is a relatively recent development, and only a few solutions are available. PyCUDA allows accessing the NVIDIA CUDA parallel computation API from Python. On the other hand, The MathWorks, Inc. has added MATLAB support for NVIDIA CUDA-enabled GPUs through the Parallel Computing Toolbox (PCT). We investigate the use of these technologies for typical SIP applications, paying special attention to performance and productivity aspects of these approaches. Our preliminary results show that these technologies are viable and show potential to preserve the high-productivity advantages of MATLAB and Python with GPUs with relatively few, if any, performance trade-offs.
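The paper's own code is not reproduced on this page. As a minimal sketch of the PyCUDA workflow the abstract describes, the example below follows PyCUDA's standard element-wise kernel pattern: a CUDA C kernel compiled at runtime, with explicit host-to-device and device-to-host transfers. The kernel name `double_array` and the helper functions are illustrative, not from the paper; `double_on_gpu` requires a CUDA-capable GPU with PyCUDA installed, while `double_on_cpu` is a plain NumPy reference for checking results.

```python
import numpy as np

# CUDA C kernel source, compiled by PyCUDA at runtime.
# Each thread doubles one element of the array.
KERNEL_SRC = """
__global__ void double_array(float *a)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    a[idx] *= 2.0f;
}
"""

def double_on_gpu(a):
    """Double each element of a float32 array on the GPU via PyCUDA."""
    import pycuda.autoinit            # creates and manages the CUDA context
    import pycuda.driver as cuda
    from pycuda.compiler import SourceModule

    mod = SourceModule(KERNEL_SRC)    # JIT-compile the kernel with nvcc
    kernel = mod.get_function("double_array")

    a_gpu = cuda.mem_alloc(a.nbytes)  # allocate device memory
    cuda.memcpy_htod(a_gpu, a)        # copy input to the device
    # One thread per element; a single block suffices for this small example.
    kernel(a_gpu, block=(a.size, 1, 1), grid=(1, 1))
    result = np.empty_like(a)
    cuda.memcpy_dtoh(result, a_gpu)   # copy result back to the host
    return result

def double_on_cpu(a):
    """NumPy reference implementation for verifying the GPU result."""
    return a * 2.0
```

The appeal the abstract points to is visible even in this toy case: the fine-grained parallelism lives in a short CUDA C string, while all orchestration (allocation, transfers, launch configuration) stays in high-productivity Python.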
