Posts
Apr, 13
A GPU Memory System Comparison for an Elliptic Test Problem
This paper presents GPU-based solutions to the Poisson equation with homogeneous Dirichlet boundary conditions in two spatial dimensions. This problem has well-understood behavior, yet its computation closely resembles that of many more complex real-world problems. We analyze the GPU performance using three types of memory access in the CUDA memory model (direct access to global memory, texture access, […]
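As a rough illustration of the kind of kernel such a comparison benchmarks (a sketch, not the paper's code), a 5-point Jacobi sweep for the discretized 2D Poisson problem, reading the previous iterate straight from global memory, might look as follows; a texture-memory variant would instead fetch u_old through a texture object or __ldg().

// Minimal sketch: one Jacobi update for -Δu = f on an (n+2) x (n+2) grid with
// homogeneous Dirichlet boundaries (boundary entries of u_old stay zero).
// u_old is read with plain global loads here; the texture/read-only-cache
// variants of the comparison would fetch it differently.
__global__ void jacobi_global(const float *u_old, const float *f,
                              float *u_new, int n, float h2)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y + 1;   // interior row
    int j = blockIdx.x * blockDim.x + threadIdx.x + 1;   // interior column
    if (i <= n && j <= n) {
        int idx = i * (n + 2) + j;
        u_new[idx] = 0.25f * (u_old[idx - 1] + u_old[idx + 1] +
                              u_old[idx - (n + 2)] + u_old[idx + (n + 2)] +
                              h2 * f[idx]);
    }
}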
Apr, 13
Software Model Checking for GPGPU Programs, Towards a Verification Tool
The tremendous computing power that GPUs offer has made them the focus of unprecedented attention for applications other than graphics and gaming. Apart from the highly parallel nature of the programs to be run on GPUs, the sought-after gain in computing power is only achieved with low-level tuning at the thread level […]
Apr, 13
Writing a modular GPGPU program in Java
This paper proposes a Java to CUDA runtime program translator for scientific-computing applications. Traditionally, these applications have been written in Fortran or C without using a rich modularization mechanism. Our translator enables those applications to be written in Java and run on GPGPUs while exploiting Java's rich modularization mechanisms. This translator dynamically generates […]
Apr, 13
Parallel programming with CUDA
This report documents our master thesis project on parallel programming with CUDA, NVIDIA's GPU architecture and platform for general-purpose computing. The purpose of the thesis is to uncover the qualities of CUDA as a parallel computing platform and to determine the possibilities and limitations of its ability to handle different types of algorithms. […]
Apr, 13
Design of high-performance parallelized gene predictors in MATLAB
BACKGROUND: This paper proposes a method of implementing parallel gene prediction algorithms in MATLAB. The proposed designs are based on either Goertzel’s algorithm or FFTs and have been implemented using varying amounts of parallelism on a central processing unit (CPU) and on a graphics processing unit (GPU). FINDINGS: Results show that an implementation using […]
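For context only (a sketch under our own assumptions, not the paper's MATLAB code), the per-window work of a Goertzel-based design maps naturally onto one GPU thread per window: each thread evaluates the power of the single "period-3" DFT bin commonly used in gene finding.

// One thread per sliding window: run the Goertzel recurrence for bin k = N/3
// over that window of the nucleotide indicator sequence x, which holds
// num_windows + N - 1 samples. Names and layout are illustrative.
__global__ void goertzel_period3(const float *x, float *power,
                                 int num_windows, int N)
{
    int w = blockIdx.x * blockDim.x + threadIdx.x;    // window index
    if (w >= num_windows) return;

    const float k = N / 3.0f;                         // "period-3" bin
    const float coeff = 2.0f * cosf(2.0f * 3.14159265f * k / N);
    float s1 = 0.0f, s2 = 0.0f;
    for (int n = 0; n < N; ++n) {                     // Goertzel recurrence
        float s0 = x[w + n] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    power[w] = s1 * s1 + s2 * s2 - coeff * s1 * s2;   // |X[k]|^2
}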
Apr, 12
Spatial Indexing of Large-Scale Geo-Referenced Point Data on GPGPUs Using Parallel Primitives
Modern positioning and locating technologies, e.g., GPS, have generated huge amounts of geo-referenced point data that are crucial to understanding environmental and socio-economic phenomena. Unfortunately, traditional disk-resident databases are inefficient at handling large-scale point data. In this study, we propose to utilize the massive data parallel processing power of General Purpose computing on Graphics Processing […]
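A minimal sketch of indexing with off-the-shelf parallel primitives (assuming a simple uniform grid and the Thrust library; the names are illustrative, not the paper's): sort the points by grid-cell key, then count points per occupied cell with a segmented reduction.

#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <thrust/iterator/constant_iterator.h>

// cell_keys holds one grid-cell key per point; point_ids holds 0..n-1.
// Computing the cell key from a point's coordinates is omitted here.
void build_grid_index(thrust::device_vector<unsigned int>& cell_keys,
                      thrust::device_vector<unsigned int>& point_ids)
{
    // 1. Group points belonging to the same cell together.
    thrust::sort_by_key(cell_keys.begin(), cell_keys.end(), point_ids.begin());

    // 2. Count how many points fall into each occupied cell.
    thrust::device_vector<unsigned int> unique_cells(cell_keys.size());
    thrust::device_vector<unsigned int> counts(cell_keys.size());
    thrust::reduce_by_key(cell_keys.begin(), cell_keys.end(),
                          thrust::constant_iterator<unsigned int>(1),
                          unique_cells.begin(), counts.begin());
    // An exclusive scan over 'counts' would then give each cell's start offset.
}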
Apr, 12
Verifying GPU Kernels by Test Amplification
We present a novel technique for verifying properties of data parallel GPU programs via test amplification. The key insight behind our work is that we can use the technique of static information flow to amplify the result of a single test execution over the set of all inputs and interleavings that affect the property being […]
Apr, 12
Programming issues for video analysis on Graphics Processing Units
Video processing is the branch of signal processing in which the input and/or output signals are video streams. It covers a wide variety of applications that are generally very compute-intensive due to their algorithmic complexity. Moreover, many of these applications demand real-time performance. Fulfilling these requirements necessitates hardware acceleration such as Graphics Processing […]
Apr, 12
TDDFT in massively parallel computer architectures: the OCTOPUS project
OCTOPUS is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this article we present our ongoing efforts to parallelise OCTOPUS. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has a great […]
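For reference, the equations being propagated are the standard time-dependent Kohn-Sham equations (in atomic units; textbook TDDFT, not anything specific to this paper):

i \frac{\partial \varphi_j(\mathbf{r},t)}{\partial t}
  = \left[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{KS}}[n](\mathbf{r},t) \right] \varphi_j(\mathbf{r},t),
\qquad
n(\mathbf{r},t) = \sum_j |\varphi_j(\mathbf{r},t)|^2,

with one real-time step carried out by an approximation to the propagator

\varphi_j(t+\Delta t) \approx \exp\!\left(-\,i\,\hat{H}_{\mathrm{KS}}\,\Delta t\right)\varphi_j(t).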
Apr, 12
Bonsai: A GPU Tree-Code
We present a gravitational hierarchical N-body code that is designed to run efficiently on Graphics Processing Units (GPUs). All parts of the algorithm are executed on the GPU, which eliminates the need for data transfer between the Central Processing Unit (CPU) and the GPU. Our tests indicate that the gravitational tree-code outperforms tuned CPU code […]
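For orientation only: the sketch below is the O(N^2) direct-summation reference that a hierarchical tree-code approximates with far-field node interactions, not Bonsai's tree walk itself.

// Direct-summation gravity: each thread accumulates the softened acceleration
// on one body from all others. A tree-code replaces most of this inner loop
// with interactions against accepted tree nodes.
__global__ void direct_forces(const float4 *pos, float3 *acc, int n, float eps2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];                        // xyz = position, w = mass
    float3 ai = make_float3(0.0f, 0.0f, 0.0f);
    for (int j = 0; j < n; ++j) {
        float4 pj = pos[j];
        float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
        float r2 = dx * dx + dy * dy + dz * dz + eps2;   // softened distance^2
        float inv_r = rsqrtf(r2);
        float w = pj.w * inv_r * inv_r * inv_r;          // m_j / r^3 (zero for j == i)
        ai.x += w * dx; ai.y += w * dy; ai.z += w * dz;
    }
    acc[i] = ai;
}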
Apr, 11
Exposing Fine-Grained Parallelism in Algebraic Multigrid Methods
Algebraic multigrid methods for large, sparse linear systems are a necessity in many computational simulations, yet parallel algorithms for such solvers are generally decomposed into coarse-grained tasks suitable for distributed computers with traditional processing cores. However, accelerating multigrid on massively parallel throughput-oriented processors, such as the GPU, demands algorithms with abundant fine-grained parallelism. In this […]
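As a small, hedged illustration of what "fine-grained parallelism" means in this setting (one thread per matrix row; not the paper's algorithms), the sparse matrix-vector product at the heart of multigrid smoothers and residual evaluations can be written as:

// CSR sparse matrix-vector product y = A*x with one thread per row: the basic
// fine-grained pattern that throughput-oriented multigrid components expose.
__global__ void spmv_csr(const int *row_ptr, const int *col_idx,
                         const float *val, const float *x, float *y,
                         int num_rows)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float sum = 0.0f;
        for (int jj = row_ptr[row]; jj < row_ptr[row + 1]; ++jj)
            sum += val[jj] * x[col_idx[jj]];
        y[row] = sum;
    }
}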
Apr, 11
RGEM: A Responsive GPGPU Execution Model for Runtime Engines
General-purpose computing on graphics processing units, also known as GPGPU, is a burgeoning technique for enhancing the performance of parallel programs. Applying this technique to real-time applications, however, requires additional support for timeliness of execution. In particular, the non-preemptive nature of GPGPU, associated with copying data to/from the device memory and launching code onto the […]
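One way to mitigate the non-preemptive copies the abstract alludes to (a sketch under our own assumptions, not RGEM's actual interface) is to split a large transfer into smaller asynchronous chunks, so a runtime gains points at which a higher-priority request could take over.

#include <cuda_runtime.h>

// Split one large host-to-device copy into chunks; each chunk boundary is a
// point where a scheduler could switch to a more urgent task. For genuinely
// asynchronous transfers the host buffer should be page-locked.
void chunked_copy(void *dst, const void *src, size_t total,
                  size_t chunk, cudaStream_t stream)
{
    for (size_t off = 0; off < total; off += chunk) {
        size_t len = (total - off < chunk) ? (total - off) : chunk;
        cudaMemcpyAsync((char *)dst + off, (const char *)src + off,
                        len, cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream);   // possible preemption point
    }
}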