Posts
Aug, 1
GPU merge path: a GPU merging algorithm
Graphics Processing Units (GPUs) have become ideal candidates for the development of fine-grain parallel algorithms as the number of processing elements per GPU increases. In addition to the increase in cores per system, new memory hierarchies and increased bandwidth have been developed that allow for significant performance improvement when computation is performed using certain types […]
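The core idea of merge path is to cut the merge of two sorted arrays into independent, equally sized segments by binary-searching along cross-diagonals of the conceptual merge grid, so each thread can merge its own segment without communication. A minimal sequential C++ sketch of that diagonal search, written from the general idea rather than the paper's exact formulation, is:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // For a cross-diagonal "diag" (0 <= diag <= a.size() + b.size()), return how many
    // elements of "a" the first "diag" merged outputs take from "a". Each GPU thread
    // would run one such search for its own diagonal and then merge its segment
    // sequentially; the tie-breaking convention here is one common choice.
    std::size_t merge_path_partition(const std::vector<int>& a,
                                     const std::vector<int>& b,
                                     std::size_t diag)
    {
        std::size_t lo = diag > b.size() ? diag - b.size() : 0;
        std::size_t hi = std::min(diag, a.size());
        while (lo < hi) {
            std::size_t i = (lo + hi) / 2;   // candidate split: i elements from a
            std::size_t j = diag - 1 - i;    // and diag - i elements from b
            if (a[i] <= b[j]) lo = i + 1;    // merge front lies further into a
            else              hi = i;
        }
        return lo;
    }

Consecutive partitions give each thread a slice of a and b whose combined length equals the diagonal spacing, which is what keeps the work per thread balanced regardless of how the two inputs interleave.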
Aug, 1
New Sparse Matrix Storage Format to Improve The Performance of Total SPMV Time
Graphics Processing Units (GPUs) are massively data-parallel processors. On such processors, high performance comes only at the cost of identifying the data parallelism in an application. This is straightforward for applications with regular memory access and high computational intensity. GPUs are equally attractive for sparse matrix-vector multiplications […]
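The excerpt does not say which baseline the proposed format is compared against; as context, the following sequential C++ sketch shows the standard compressed sparse row (CSR) product y = A·x that new GPU storage formats are typically measured against:

    #include <cstddef>
    #include <vector>

    // Baseline CSR sparse matrix-vector product y = A * x (sequential sketch).
    // row_ptr has n_rows + 1 entries; col_idx and values hold the nonzeros row by row.
    void spmv_csr(const std::vector<std::size_t>& row_ptr,
                  const std::vector<std::size_t>& col_idx,
                  const std::vector<double>& values,
                  const std::vector<double>& x,
                  std::vector<double>& y)
    {
        const std::size_t n_rows = row_ptr.size() - 1;
        y.assign(n_rows, 0.0);
        for (std::size_t row = 0; row < n_rows; ++row)
            for (std::size_t k = row_ptr[row]; k < row_ptr[row + 1]; ++k)
                y[row] += values[k] * x[col_idx[k]];   // irregular, data-dependent reads of x
    }

The irregular reads of x and the varying row lengths are exactly the access patterns that make SpMV hard to map onto a data-parallel processor, which is what alternative storage formats try to mitigate.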
Aug, 1
High-Level Manipulation of OpenCL-Based Subvectors and Submatrices
High-level C++ proxies for the convenient manipulation of subvectors and submatrices on OpenCL-enabled devices are introduced. It is demonstrated that the programming convenience of standard host-based code can be retained using native C++ language features only, even if massively parallel computing architectures such as graphics processing units are employed. The required modifications of the underlying […]
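The excerpt does not show the proxy interface itself; the hypothetical host-side C++ sketch below only illustrates the general idea of a non-owning range proxy that refers to part of a vector and can be assigned to like a full vector. Names and semantics are illustrative, and a real OpenCL-based implementation would keep the data in device buffers and dispatch kernels instead of looping on the host:

    #include <cstddef>
    #include <vector>

    struct range { std::size_t first, size; };

    // Non-owning view of a sub-range of a vector; assignment copies element-wise
    // into the referenced sub-range only.
    template <typename Vector>
    class vector_proxy {
    public:
        vector_proxy(Vector& v, range r) : v_(v), r_(r) {}
        std::size_t size() const { return r_.size; }
        typename Vector::value_type& operator[](std::size_t i) { return v_[r_.first + i]; }
        vector_proxy& operator=(const vector_proxy& other) {
            for (std::size_t i = 0; i < r_.size; ++i)
                v_[r_.first + i] = other.v_[other.r_.first + i];
            return *this;
        }
    private:
        Vector& v_;
        range r_;
    };

    // Usage: copy two elements of y, starting at index 2, into the front of x.
    // std::vector<double> x(8, 0.0), y(8, 1.0);
    // vector_proxy<std::vector<double>> px(x, {0, 2}), py(y, {2, 2});
    // px = py;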
Aug, 1
GPU-Accelerated Non-negative Matrix Factorization for Text Mining
An implementation of the non-negative matrix factorization algorithm for the purpose of text mining on graphics processing units is presented. Performance gains of more than one order of magnitude are obtained.
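The abstract does not state which NMF variant is implemented; the multiplicative update rules of Lee and Seung for the Frobenius-norm objective, which are the ones most often ported to GPUs because they reduce to dense matrix products, approximate the term-document matrix $V \approx WH$ by iterating

$$H \leftarrow H \circ \frac{W^{\top} V}{W^{\top} W H}, \qquad W \leftarrow W \circ \frac{V H^{\top}}{W H H^{\top}},$$

where $\circ$ and the fraction bar denote element-wise multiplication and division.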
Jul, 31
accULL: A User-directed Approach to Heterogeneous Programming
The world of HPC is undergoing rapid changes, and the range of computer architectures capable of achieving high performance has broadened. The irruption of computational accelerators such as GPUs onto the scene is increasing performance while maintaining a low cost per GFLOP, thus expanding the popularity of HPC. However, it is still difficult to exploit the new, complex processor hierarchies. […]
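The excerpt does not show any directives; as a rough illustration of the user-directed style, and assuming an OpenACC-like annotation model (accULL is generally described as an OpenACC implementation, though the excerpt itself does not say so), a loop offloaded to an accelerator could look like:

    #include <cstddef>
    #include <vector>

    // Hedged illustration of directive-based offloading; the pragma spelling is
    // standard OpenACC rather than anything accULL-specific.
    void saxpy(float a, const std::vector<float>& x, std::vector<float>& y)
    {
        const float* xp = x.data();
        float* yp = y.data();
        const std::size_t n = y.size();
        #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
        for (std::size_t i = 0; i < n; ++i)
            yp[i] = a * xp[i] + yp[i];   // the compiler generates the accelerator kernel
    }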
Jul, 31
Parallel programming on GPU using Intel Array Building Blocks
The goal of this project is to demonstrate parallel programming on a GPU using the latest Intel technology, Intel Array Building Blocks (Intel ArBB). The main aim is to describe the programming model of Intel ArBB and to show the effectiveness of this new technology in a GPU environment using examples. Parallel programming is […]
Jul, 31
On Binaural Spatialization and the Use of GPGPU for Audio Processing
3D recordings and audio, namely techniques that aim to create the perception of sound sources placed anywhere in three-dimensional space, are becoming an interesting resource for composers, live performances and augmented reality. This thesis focuses on binaural spatialization techniques. We will tackle the problem from three different perspectives. The first one is related to […]
Jul, 31
Application of the Mean Field Methods to MRF Optimization in Computer Vision
The mean field (MF) methods are energy optimization methods for Markov random fields (MRFs). These methods, which have their roots in solid-state physics, estimate the marginal density of each site of an MRF graph by iterative computation, similarly to loopy belief propagation (LBP). It appears that, overshadowed by LBP, the MF methods […]
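For a pairwise MRF with unary potentials $\theta_i$ and pairwise potentials $\theta_{ij}$, the naive mean field scheme keeps one independent distribution $q_i$ per site and, up to variations in the exact update order, iterates

$$q_i(x_i) \propto \exp\!\Big(-\theta_i(x_i) - \sum_{j \in \mathcal{N}(i)} \sum_{x_j} q_j(x_j)\,\theta_{ij}(x_i, x_j)\Big),$$

normalized so that $\sum_{x_i} q_i(x_i) = 1$; these $q_i$ play the same role as the beliefs computed by LBP.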
Jul, 31
MCMini: Monte Carlo on GPGPU
MCMini is a proof of concept that demonstrates the feasibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP […]
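The excerpt gives no implementation details; the innermost step of any such particle tracer is sampling the free-flight distance to the next collision from the exponential attenuation law, sketched below in sequential C++ (mesh lookup, tallies and reaction physics omitted):

    #include <cmath>
    #include <random>

    // Sample the distance a neutron travels before its next collision in a medium
    // with total macroscopic cross section sigma_t, using d = -ln(xi) / sigma_t.
    double sample_distance_to_collision(double sigma_t, std::mt19937_64& rng)
    {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double xi = u(rng);
        while (xi == 0.0) xi = u(rng);   // avoid log(0)
        return -std::log(xi) / sigma_t;
    }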
Jul, 29
High-Performance Online Spatial and Temporal Aggregations on Multi-core CPUs and Many-Core GPUs
Motivated by the practical need to efficiently process large-scale taxi trip data, we have developed techniques for high-performance online spatial, temporal and spatiotemporal aggregations. These techniques include timestamp compression to reduce memory footprint, simple linear data structures for efficient in-memory scans, and massively data-parallel GPU acceleration of spatial joins. Our experiments […]
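The excerpt mentions timestamp compression only in passing; one common way to shrink per-record timestamps, sketched here under the assumption that a batch of trips spans a bounded period, is to store a 32-bit offset from a per-batch base epoch instead of a full 64-bit value:

    #include <cstdint>
    #include <vector>

    // Hypothetical timestamp compression: each timestamp becomes a 32-bit offset
    // in seconds from the batch's earliest timestamp.
    struct compressed_times {
        std::int64_t base;
        std::vector<std::uint32_t> offsets;
    };

    compressed_times compress(const std::vector<std::int64_t>& epoch_seconds)
    {
        compressed_times out{epoch_seconds.empty() ? 0 : epoch_seconds.front(), {}};
        for (std::int64_t t : epoch_seconds)
            if (t < out.base) out.base = t;              // base = minimum timestamp
        out.offsets.reserve(epoch_seconds.size());
        for (std::int64_t t : epoch_seconds)
            out.offsets.push_back(static_cast<std::uint32_t>(t - out.base));
        return out;
    }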
Jul, 29
Sigma*: Symbolic Learning of Stream Filters
We present Sigma*, a novel technique for learning symbolic models of software behavior. Sigma* addresses the challenge of synthesizing models of software by using symbolic conjectures and abstraction. By combining dynamic symbolic execution, to discover the symbolic input-output steps of a program, with counterexample-guided abstraction refinement, to over-approximate program behavior, Sigma* transforms arbitrary source representation […]
Jul, 29
A Novel GPU Implementation of Eigen Analysis for Risk Management
Portfolio risk is commonly defined as the standard deviation of the portfolio's return. The empirical correlation matrix of the asset returns in a portfolio has an intrinsic noise component, which is filtered out for more robust performance. Eigendecomposition is a widely used method for this noise filtering. The Jacobi algorithm has been a popular eigensolver due to its […]
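The classical Jacobi method repeatedly zeroes one off-diagonal entry of a symmetric matrix with a plane rotation, and non-conflicting rotations can be applied concurrently, which is what makes it attractive on GPUs. A single rotation step, in a generic textbook formulation rather than the paper's specific kernel, might look like this sequential C++ sketch:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Apply one Jacobi rotation to a dense symmetric n x n matrix A (row-major),
    // choosing the angle that zeroes the off-diagonal entry A[p][q].
    void jacobi_rotate(std::vector<double>& A, std::size_t n,
                       std::size_t p, std::size_t q)
    {
        const double apq = A[p * n + q];
        if (apq == 0.0) return;
        const double theta = 0.5 * std::atan2(2.0 * apq, A[q * n + q] - A[p * n + p]);
        const double c = std::cos(theta), s = std::sin(theta);

        for (std::size_t k = 0; k < n; ++k) {            // columns p and q: A <- A * J
            const double akp = A[k * n + p], akq = A[k * n + q];
            A[k * n + p] = c * akp - s * akq;
            A[k * n + q] = s * akp + c * akq;
        }
        for (std::size_t k = 0; k < n; ++k) {            // rows p and q: A <- J^T * A
            const double apk = A[p * n + k], aqk = A[q * n + k];
            A[p * n + k] = c * apk - s * aqk;
            A[q * n + k] = s * apk + c * aqk;
        }
    }

Sweeping this rotation over all index pairs until the off-diagonal mass is negligible leaves the eigenvalues on the diagonal; accumulating the rotations yields the eigenvectors used to filter the correlation matrix.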