Posts

Jan, 24

MLS-based scalar fields over triangle meshes and their application in mesh processing

This paper introduces a novel technique that uses the Moving Least Squares (MLS) method to interpolate sparse constraints over mesh surfaces. Given a set of constraints, the proposed technique constructs, directly on the surface, a smooth scalar field that interpolates or approximates them. Three types of constraints, point-value, point-gradient and iso-contour, are […]
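The core MLS idea, a locally weighted least-squares fit evaluated afresh at every query point, can be sketched in one dimension. The Gaussian weight and degree-0 (weighted-mean) fit below are illustrative assumptions, not the paper's on-surface construction, which handles gradient and iso-contour constraints as well:

```python
import math

def mls_value(x, points, values, h=1.0):
    """Degree-0 moving least squares (Shepard's method): the constant f
    minimising sum_i w_i(x) * (f - v_i)^2, i.e. the weighted mean of the
    constraint values. 1-D sketch of the MLS weighting idea only; the
    paper fits the field directly on the mesh surface."""
    # Gaussian weights fall off with distance from the query point x.
    w = [math.exp(-((x - p) / h) ** 2) for p in points]
    return sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
```

Close to a constraint point the weighted mean approaches that constraint's value, which is what makes the field (approximately) interpolating.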
Jan, 24

CUDASA: Compute Unified Device and Systems Architecture

We present an extension to the CUDA programming language that broadens parallelism to multi-GPU systems and GPU-cluster environments. Following the existing model, which exposes the internal parallelism of GPUs, our extended language provides a consistent development interface for additional, higher levels of parallel abstraction over the bus and network interconnects. The newly introduced layers […]
Jan, 24

Sparse regularization in MRI iterative reconstruction using GPUs

Regularization is a common technique used to improve image quality in inverse problems such as MR image reconstruction. In this work, we extend our previous Graphics Processing Unit (GPU) implementation of MR image reconstruction with compensation for susceptibility-induced field inhomogeneity effects by incorporating an additional quadratic regularization term. Regularization techniques commonly impose the prior information […]
Jan, 24

Exploiting More Parallelism from Applications Having Generalized Reductions on GPU Architectures

Reduction is a common component of many applications, but can often be the limiting factor for parallelization. Previous reduction work has focused on detecting reduction idioms and parallelizing the reduction operation by minimizing data communications or exploiting more data locality. While these techniques can be useful, they are mostly limited to simple code structures. In […]
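The reduction idiom the abstract refers to combines all elements with an associative operator; parallel implementations restructure the left-to-right fold into a tree of pairwise combinations. A minimal sequential sketch of that tree shape (an illustration of the idiom, not the paper's generalized-reduction technique):

```python
def tree_reduce(values, op):
    """Pairwise (tree-shaped) reduction: the combination pattern GPU
    reduction kernels implement, shown sequentially. `op` must be
    associative for the tree order to agree with a left fold."""
    data = list(values)
    while len(data) > 1:
        # One "round": combine neighbouring pairs, as parallel threads would.
        data = [op(data[i], data[i + 1]) if i + 1 < len(data) else data[i]
                for i in range(0, len(data), 2)]
    return data[0]
```

Each round halves the number of live elements, which is why a parallel reduction finishes in logarithmically many steps.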
Jan, 24

Multi-GPU Implementation for Iterative MR Image Reconstruction with Field Correction

Many advanced MR image acquisition and reconstruction methods see limited application due to their high computational cost in MRI. For instance, iterative reconstruction algorithms (e.g. for non-Cartesian k-space trajectories, or magnetic field inhomogeneity compensation) can improve image quality but suffer from low reconstruction speed. General-purpose computing on graphics processing units (GPUs) has demonstrated significant performance speedups and […]
Jan, 24

Accelerating iterative field-compensated MR image reconstruction on GPUs

We propose a fast implementation of iterative MR image reconstruction using Graphics Processing Units (GPUs). In MRI, iterative reconstruction with conjugate gradient algorithms allows for accurate modeling of the physics of the imaging system. Specifically, methods have been reported to compensate for the magnetic field inhomogeneity induced by the susceptibility differences near the air/tissue interface in […]
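The conjugate gradient solver mentioned in the abstract is the textbook iteration for a symmetric positive definite system A x = b; the expensive step each iteration is applying A, which is what GPU implementations accelerate. A minimal sketch of the solver class only, not the paper's field-corrected system matrix:

```python
def conjugate_gradient(matvec, b, iters=50, tol=1e-10):
    """Textbook conjugate gradient for A x = b, A symmetric positive
    definite; `matvec` applies A to a vector. Pure-Python sketch."""
    x = [0.0] * len(b)
    r = list(b)                     # residual b - A x, with x = 0
    p = list(r)                     # first search direction
    rr = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rr / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rr_new = sum(v * v for v in r)
        if rr_new < tol:            # converged
            break
        beta = rr_new / rr
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x
```

For an n-by-n SPD system, exact arithmetic converges in at most n iterations; in reconstruction practice a fixed small number of iterations is run.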
Jan, 24

Data Layout Transformation for Structured-Grid Codes on GPU

We present data layout transformation as an effective performance optimization for memory-bound structured-grid applications on GPUs. Structured-grid applications are a class of applications that compute cell values on a regular 2D, 3D, or higher-dimensional grid. Each output point is computed as a function of itself and its nearest neighbors. Stencil code […]
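A common instance of such a layout transformation is converting an array-of-structures grid into structure-of-arrays form, so that neighbouring threads touch neighbouring addresses (memory coalescing on GPUs). A minimal sketch under that assumption; the field names are illustrative, and the paper's transformation is more general:

```python
def aos_to_soa(cells, fields):
    """Array-of-structures -> structure-of-arrays: turn a list of per-cell
    records into one flat, contiguous array per field. On a GPU this lets
    adjacent threads read adjacent elements of the same field."""
    return {f: [cell[f] for cell in cells] for f in fields}
```

Usage: a flattened 1D list of grid-cell dicts such as `{'u': ..., 'v': ...}` becomes `{'u': [...], 'v': [...]}`, ready to upload as separate contiguous buffers.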
Jan, 24

Program Optimization Strategies for Data-Parallel Many-Core Processors

Program optimization for highly parallel systems has historically been considered an art, with experts doing much of the performance tuning by hand. With the introduction of inexpensive, single-chip, massively parallel platforms, more developers will be creating highly data-parallel applications for these platforms while lacking the substantial experience and knowledge needed to maximize application performance. In […]
Jan, 24

Efficient Parallel Scan Algorithms for GPUs

Scan and segmented scan algorithms are crucial building blocks for a great many data-parallel algorithms. Segmented scan and related primitives also provide the necessary support for the flattening transform, which allows for nested data-parallel programs to be compiled into flat data-parallel languages. In this paper, we describe the design of efficient scan and segmented scan […]
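The two primitives the abstract builds on can be stated sequentially in a few lines; GPU scan algorithms parallelize exactly this recurrence. A minimal sketch of the semantics, not the paper's work-efficient implementation:

```python
def inclusive_scan(xs, op):
    """Inclusive scan (prefix reduction): out[i] = xs[0] op ... op xs[i]."""
    out, acc = [], None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out

def segmented_scan(xs, flags, op):
    """Segmented inclusive scan: flags[i] == 1 starts a new segment at i,
    resetting the running value. This is the primitive that supports the
    flattening transform for nested data parallelism."""
    out, acc = [], None
    for x, f in zip(xs, flags):
        acc = x if (f or acc is None) else op(acc, x)
        out.append(acc)
    return out
```

With `op = +`, `inclusive_scan` gives running sums and `segmented_scan` gives independent running sums per segment.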
Jan, 24

Efficient Sparse Matrix-Vector Multiplication on CUDA

The massive parallelism of graphics processing units (GPUs) offers tremendous performance in many high-performance computing applications. While dense linear algebra readily maps to such platforms, harnessing this potential for sparse matrix computations presents additional challenges. Given its role in iterative methods for solving sparse linear systems and eigenvalue problems, sparse matrix-vector multiplication (SpMV) is of […]
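The SpMV kernel in question computes y = A x for a sparse A; in the common compressed sparse row (CSR) format, each row's work is independent, which is what GPU implementations exploit (often one thread or one warp per row). A sequential sketch of the CSR product, not the paper's tuned kernels:

```python
def csr_spmv(row_ptr, col_idx, vals, x):
    """y = A @ x with A in CSR form: row r's nonzeros occupy the slice
    row_ptr[r]:row_ptr[r+1] of the parallel arrays col_idx and vals.
    Each iteration of the outer loop is independent (parallelizable)."""
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += vals[k] * x[col_idx[k]]
        y.append(s)
    return y
```

For example, the matrix [[1, 0], [2, 3]] is stored as `row_ptr=[0, 1, 3]`, `col_idx=[0, 0, 1]`, `vals=[1, 2, 3]`.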
Jan, 23

Parallel Genetic Algorithms on Programmable Graphics Hardware

Parallel genetic algorithms are usually implemented on parallel machines or distributed systems. This paper describes how fine-grained parallel genetic algorithms can be mapped to the programmable graphics hardware found in commodity PCs. Our approach stores chromosomes and their fitness values in texture memory on the graphics card. Both fitness evaluation and genetic operations are implemented entirely with […]
Jan, 23

Parallel Evolutionary Algorithms on Consumer-Level Graphics Processing Unit

Evolutionary Algorithms (EAs) are effective and robust methods for solving many practical problems such as feature selection, electrical circuit synthesis, and data mining. However, they may execute for a long time on difficult problems, because many fitness evaluations must be performed. A promising approach to overcoming this limitation is to parallelize these algorithms. In […]
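The structure of the algorithm being parallelized in these two papers can be sketched as a tiny generational GA on bit-string chromosomes; the per-individual fitness evaluation in the inner loop is the data-parallel step mapped onto the GPU. All operator choices below (truncation selection, one-point crossover, single point mutation) are illustrative assumptions, not the papers' operators:

```python
import random

def evolve(fitness, pop, generations, rng):
    """Minimal generational GA over tuples of bits. Elitist: the top half
    survives unchanged, so the best fitness never decreases. The calls to
    `fitness` are independent per individual -- the parallelizable step."""
    n = len(pop[0])
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: len(pop) // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < len(pop):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)                   # single point mutation
            child = child[:i] + (1 - child[i],) + child[i + 1:]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

With `fitness = sum` this solves one-max; because of elitism, the returned best is always at least as fit as the best initial individual.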

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors

Contact us: