Gerardo Ares, Pablo Ezzatti, Enrique S. Quintana-Orti
We present an extension of a GPU-based matrix inversion algorithm for distributed memory contexts. Specifically, we implement and evaluate a message-passing variant of the Gauss-Jordan method (GJE) for matrix inversion on a cluster of nodes equipped with GPU hardware accelerators. The experimental evaluation of the proposal shows a significant runtime reduction when compared with both […]
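
For reference, the sketch below is a minimal, single-node NumPy illustration of the Gauss-Jordan arithmetic: it reduces the augmented system [A | I] to [I | A^{-1}] with partial pivoting. The message-passing distribution across GPU-equipped nodes that the paper contributes is not modeled here, and the function name is illustrative.

    # Minimal CPU sketch of Gauss-Jordan elimination (GJE) for matrix inversion.
    # Assumption: only the arithmetic is shown; the paper's MPI/GPU layout is omitted.
    import numpy as np

    def gauss_jordan_inverse(A):
        """Invert a square matrix by reducing [A | I] to [I | A^{-1}]."""
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])    # augmented matrix [A | I]
        for k in range(n):
            # Partial pivoting: move the largest entry of column k onto the diagonal.
            p = k + np.argmax(np.abs(M[k:, k]))
            M[[k, p]] = M[[p, k]]
            M[k] /= M[k, k]                            # normalize the pivot row
            for i in range(n):                         # eliminate column k in all other rows
                if i != k:
                    M[i] -= M[i, k] * M[k]
        return M[:, n:]

    if __name__ == "__main__":
        A = np.random.rand(6, 6) + 6 * np.eye(6)       # well-conditioned test matrix
        print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(6)))  # expected: True
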
I. Demir, R. Westermann
In this paper we present a technique which allows us to perform high quality and progressive response surface prediction from multidimensional input samples in an efficient manner. We utilize kriging interpolation to estimate a response surface which minimizes the expectation value and variance of the prediction error. High computational efficiency is achieved by employing parallel […]
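
As context for the kriging step, the sketch below implements simple kriging with a Gaussian covariance model in NumPy. The covariance parameters (sigma2, length) and the nugget regularization are illustrative assumptions, not values from the paper, and the parallel GPU evaluation the paper describes is not shown.

    # Minimal CPU sketch of simple kriging (GP-style interpolation) on scattered samples.
    # Assumption: Gaussian covariance with made-up hyperparameters; no GPU parallelism.
    import numpy as np

    def gaussian_cov(X1, X2, sigma2=1.0, length=0.3):
        """Gaussian (squared-exponential) covariance between two sample sets."""
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return sigma2 * np.exp(-0.5 * d2 / length**2)

    def kriging_predict(X, y, Xq, sigma2=1.0, length=0.3, nugget=1e-8):
        """Return the kriging mean and variance of the response surface at query points Xq."""
        K = gaussian_cov(X, X, sigma2, length) + nugget * np.eye(len(X))
        k = gaussian_cov(Xq, X, sigma2, length)
        mean = k @ np.linalg.solve(K, y)                                  # predicted response
        var = sigma2 - np.einsum("ij,ji->i", k, np.linalg.solve(K, k.T))  # prediction variance
        return mean, var

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.uniform(size=(50, 2))                  # multidimensional input samples
        y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
        mean, var = kriging_predict(X, y, rng.uniform(size=(5, 2)))
        print(mean, var)
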
Huda Ibeid, Dinesh Kaushik, David Keyes, Hatem Ltaief
The goal of this paper is to implement an efficient matrix inversion of symmetric positive-definite matrices on heterogeneous GPU-based systems. The matrix inversion procedure can be split into three stages: computing the Cholesky factorization, inverting the Cholesky factor and calculating the product of the inverted Cholesky factor with its transpose to get the final inverted […]
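
The three stages map onto standard dense linear algebra building blocks. Below is a minimal NumPy/SciPy sketch of that sequence on the CPU (Cholesky factorization, inversion of the triangular factor, product of the inverted factor with its transpose); the heterogeneous CPU+GPU scheduling that is the subject of the paper is not reproduced.

    # Minimal CPU sketch of three-stage SPD inversion; the GPU scheduling is omitted.
    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    def spd_inverse(A):
        """Invert a symmetric positive-definite matrix via its Cholesky factor."""
        L = cholesky(A, lower=True)                                  # stage 1: A = L L^T
        Linv = solve_triangular(L, np.eye(A.shape[0]), lower=True)   # stage 2: L^{-1}
        return Linv.T @ Linv                                         # stage 3: A^{-1} = L^{-T} L^{-1}

    if __name__ == "__main__":
        B = np.random.rand(8, 8)
        A = B @ B.T + 8 * np.eye(8)                                  # symmetric positive definite
        print(np.allclose(spd_inverse(A) @ A, np.eye(8)))            # expected: True
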
Mark Franey, Pritam Ranjan, Hugh Chipman
The graphics processing unit (GPU) has emerged as a powerful and cost effective processor for high performance computing. GPUs are capable of an order of magnitude more floating point operations per second as compared to modern central processing units (CPUs), and thus provide a great deal of promise for computationally intensive statistical applications (Brodtkorb et […]
Jorge Soriano Pinedo
In this project, several mathematical algorithms are developed to obtain a matrix inversion method that combines CUDA's parallel architecture with MATLAB and is faster than MATLAB's built-in matrix inversion function. This matrix inversion method is intended to be used for image reconstruction as a faster alternative to iterative methods, with a comparable […]
Chengming Zou, Chunfen Xia, Guanghui Zhao
Modern graphics processing units (GPUs) are programmable, offer a high price/performance ratio and high speed, and are well suited to parallel computation. Building on this, the article studies general methods of GPU computing and uses the compute unified device architecture (CUDA) to design a new parallel algorithm that accelerates the […]
Pablo Ezzatti, Enrique S. Quintana-Orti, Alfredo Remon
Inversion of large-scale matrices appears in a few scientific applications, such as model reduction and optimal control. Matrix inversion requires a significant computational effort and, therefore, the application of high performance computing techniques and architectures for matrices with dimensions in the order of thousands. Following the recent rise of graphics processors (GPUs), we present and evaluate […]
P. Ezzatti, E. Quintana-Orti, A. Remon
We study the use of massively parallel architectures for computing a matrix inverse. Two different algorithms are reviewed, the traditional approach based on Gaussian elimination and the Gauss-Jordan elimination alternative, and several high performance implementations are presented and evaluated. The target architecture is a current general-purpose multicore processor (CPU) connected to a graphics processor (GPU). […]
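
The traditional route contrasted with Gauss-Jordan elimination factorizes A by Gaussian elimination (LU with partial pivoting) and then obtains the inverse by solving A X = I. The SciPy sketch below is a CPU-only illustration of that route; the hybrid CPU+GPU implementations evaluated in the paper are not modeled.

    # Minimal CPU sketch of LU-based inversion; the hybrid CPU+GPU mapping is omitted.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def lu_inverse(A):
        """Invert A by Gaussian elimination: factor P A = L U, then solve A X = I."""
        lu, piv = lu_factor(A)
        return lu_solve((lu, piv), np.eye(A.shape[0]))    # columns of A^{-1} from A x = e_i

    if __name__ == "__main__":
        A = np.random.rand(6, 6) + 6 * np.eye(6)
        print(np.allclose(lu_inverse(A) @ A, np.eye(6)))  # expected: True
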
Florian Ries, Tommaso De Marco, Matteo Zivieri, Roberto Guerrieri
Dense matrix inversion is a basic procedure in many linear algebra algorithms. A computationally arduous step in most dense matrix inversion methods is the inversion of triangular matrices as produced by factorization methods such as LU decomposition. In this paper, we demonstrate how triangular matrix inversion (TMI) can be accelerated considerably by using commercial Graphics […]
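
For reference, triangular matrix inversion can be written as a back-substitution recurrence per column of the inverse. The NumPy sketch below is a scalar CPU reference of that recurrence for an upper-triangular factor; blocked and GPU-resident variants such as those studied in the paper build on the same dependency structure.

    # Minimal CPU sketch of triangular matrix inversion (TMI) by back substitution.
    import numpy as np

    def upper_triangular_inverse(U):
        """Invert an upper-triangular matrix column by column."""
        n = U.shape[0]
        X = np.zeros_like(U, dtype=float)
        for j in range(n):
            X[j, j] = 1.0 / U[j, j]
            for i in range(j - 1, -1, -1):                # back substitution for rows i < j
                X[i, j] = -np.dot(U[i, i + 1:j + 1], X[i + 1:j + 1, j]) / U[i, i]
        return X

    if __name__ == "__main__":
        U = np.triu(np.random.rand(6, 6) + 6 * np.eye(6))
        print(np.allclose(upper_triangular_inverse(U) @ U, np.eye(6)))  # expected: True
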

* * *

Featured events


Free GPU computing nodes at hgpu.org

Registered users can now run their OpenCL applications at hgpu.org. We provide 1 minute of computer time per run on two nodes equipped with AMD and NVIDIA graphics processing units (see the configurations below). There are no restrictions on the number of runs.

The platforms are

Node 1
  • GPU device 0: AMD/ATI Radeon HD 5870 2GB, 850MHz
  • GPU device 1: AMD/ATI Radeon HD 6970 2GB, 880MHz
  • CPU: AMD Phenom II X6 1055T @ 2.8GHz
  • RAM: 12GB
  • OS: OpenSUSE 13.1
  • SDK: AMD APP SDK 2.9
Node 2
  • GPU device 0: AMD/ATI Radeon HD 7970 3GB, 1000MHz
  • GPU device 1: nVidia GeForce GTX 560 Ti 2GB, 822MHz
  • CPU: Intel Core i7-2600 @ 3.4GHz
  • RAM: 16GB
  • OS: OpenSUSE 12.2
  • SDK: nVidia CUDA Toolkit 6.0.1, AMD APP SDK 2.9

Completed OpenCL projects should be uploaded via the User dashboard (see instructions and example there); compilation and execution terminal output logs will be provided to the user.

The information sent to hgpu.org will be treated according to our Privacy Policy.
