Posts
Sep, 9
Fast Detection of Overlapping Communities via Online Tensor Methods on GPUs
We present a scalable tensor-based approach for detecting hidden overlapping communities under the mixed membership stochastic block model. We employ stochastic gradient descent to perform the tensor decompositions, which provides the flexibility to trade off node sub-sampling against accuracy. Our GPU implementation of the tensor-based approach is extremely fast and scalable, and involves a careful optimization of GPU-CPU […]
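The excerpt above mentions stochastic gradient descent for tensor decomposition. As a rough, hedged sketch of that general idea (not the authors' implementation, which operates on whitened moment tensors), the hypothetical CUDA kernel below applies one SGD step of a rank-k CP factorization to a batch of sampled tensor entries under a squared loss; the kernel name, batch layout, and learning rate lr are all assumptions.

// Hypothetical sketch: one SGD step of a rank-k CP decomposition T ~ [[A,B,C]]
// over a batch of sampled entries (I[s], J[s], L[s]) with values val[s].
// Not the paper's method; it illustrates the general update only.
__global__ void cp_sgd_step(const int* I, const int* J, const int* L,
                            const float* val, int batch,
                            float* A, float* B, float* C, int rank, float lr)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= batch) return;

    const float* a = A + (size_t)I[s] * rank;
    const float* b = B + (size_t)J[s] * rank;
    const float* c = C + (size_t)L[s] * rank;

    // Residual of the sampled entry under the current factors.
    float pred = 0.0f;
    for (int f = 0; f < rank; ++f) pred += a[f] * b[f] * c[f];
    float r = val[s] - pred;

    // Gradient step on each factor row; atomics because different samples
    // in the batch may touch the same rows (hogwild-style update).
    for (int f = 0; f < rank; ++f) {
        float af = a[f], bf = b[f], cf = c[f];   // read before writing
        atomicAdd(&A[(size_t)I[s] * rank + f], lr * r * bf * cf);
        atomicAdd(&B[(size_t)J[s] * rank + f], lr * r * af * cf);
        atomicAdd(&C[(size_t)L[s] * rank + f], lr * r * af * bf);
    }
}
// Launch, e.g.: cp_sgd_step<<<(batch + 255) / 256, 256>>>(dI, dJ, dL, dval, batch, dA, dB, dC, rank, 1e-3f);

Sampling fewer entries per step is what gives the accuracy-versus-cost trade-off the excerpt refers to.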
Sep, 9
Acceleration of iterative Navier-Stokes solvers on graphics processing units
While new power-efficient computer architectures exhibit spectacular theoretical peak performance, they require specific conditions to operate efficiently, which makes porting complex algorithms a challenge. Here, we report results for the semi-implicit method for pressure-linked equations (SIMPLE) and the pressure-implicit with operator splitting (PISO) methods implemented on the graphics processing unit (GPU). We examine […]
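Both SIMPLE and PISO repeatedly solve a pressure-correction equation of Poisson type, and a simple GPU building block for that step is a stencil sweep such as the Jacobi iteration sketched below. This is a generic, hypothetical illustration (kernel name and grid layout assumed), not the solver used in the paper.

// One Jacobi sweep for a 2D pressure-correction Poisson equation
//   laplacian(p) = rhs   on an nx x ny grid with uniform spacing h.
__global__ void jacobi_pressure(const float* p_old, float* p_new,
                                const float* rhs, int nx, int ny, float h)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;  // skip boundary

    int idx = j * nx + i;
    // Jacobi update: average of the four neighbours minus the source term.
    p_new[idx] = 0.25f * (p_old[idx - 1] + p_old[idx + 1] +
                          p_old[idx - nx] + p_old[idx + nx] -
                          h * h * rhs[idx]);
}
// Host side: ping-pong p_old/p_new until the residual is small enough, e.g.
//   dim3 block(16, 16), grid((nx + 15) / 16, (ny + 15) / 16);
//   jacobi_pressure<<<grid, block>>>(d_p0, d_p1, d_rhs, nx, ny, h);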
Sep, 7
A Bi-objective Optimization Framework for Query Plans
Graphics Processing Units (GPUs) have significantly more applications than just rendering images. They are also used in general-purpose computing to solve problems that can benefit from massive parallel processing. However, there are tasks that either suit the GPU poorly or fit the GPU only partially. The latter class is the focus of this paper. We elaborate on […]
Sep, 7
GPU-based simulation of brain neuron models
The human brain is an incredible system that can process, store, and transfer information at high speed and in high volume. Inspired by such a system, engineers and scientists are cooperating to construct a digital brain with these characteristics. The brain is composed of billions of neurons, which can be modeled by mathematical equations. The first step to […]
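As a hedged illustration of the per-neuron update such simulators parallelize, the sketch below assumes the widely used Izhikevich model (the post does not say which neuron model is used) and maps one GPU thread to one neuron; all names and parameter values are illustrative.

// One 1 ms Euler step of the Izhikevich neuron model, one thread per neuron.
// Illustrative only; the post's simulator and neuron model may differ.
__global__ void izhikevich_step(float* v, float* u, const float* I_syn,
                                int n, float a, float b, float c, float d)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float vi = v[i], ui = u[i];
    // Membrane potential and recovery variable dynamics.
    vi += 0.04f * vi * vi + 5.0f * vi + 140.0f - ui + I_syn[i];
    ui += a * (b * vi - ui);

    // Spike-and-reset rule.
    if (vi >= 30.0f) { vi = c; ui += d; }
    v[i] = vi; u[i] = ui;
}
// Launch once per simulated millisecond, e.g.:
//   izhikevich_step<<<(n + 255) / 256, 256>>>(d_v, d_u, d_I, n, 0.02f, 0.2f, -65.0f, 8.0f);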
Sep, 7
Comparison and Analysis of GPU Energy Efficiency for CUDA and OpenCL
The use of GPUs for processing large sets of parallelizable data has increased sharply in recent years. As the concept of GPU computing is still relatively young, parameters other than computation time, such as energy efficiency, are being overlooked. Two parallel computing platforms, CUDA and OpenCL, provide developers with an interface that they can use […]
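Since energy is roughly average board power multiplied by run time, a first ingredient of any such comparison is accurate kernel timing. The sketch below times a toy CUDA kernel with events; the kernel and problem size are placeholders, and power sampling (e.g. via NVML) would be layered on top and is not shown.

#include <cuda_runtime.h>
#include <cstdio>

// Toy kernel to time; stands in for whichever benchmark is being compared.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // elapsed GPU time in milliseconds
    printf("kernel time: %.3f ms\n", ms);
    // Energy estimate = average board power (sampled separately) * ms / 1000.

    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(x); cudaFree(y);
    return 0;
}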
Sep, 7
D5.5.3 – Design and implementation of the SIMD-MIMD GPU architecture
To develop a new SIMD-MIMD architecture, we first characterized GPGPU workloads using simple and well-known workload metrics to identify the performance bottlenecks. We found that benchmarks with branch divergence do not utilize the SIMD width optimally on conventional GPUs. We also studied the performance bottlenecks of the motion compensation kernel developed in Task 3.2 […]
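To make the branch-divergence point concrete, the hypothetical kernels below show the same work in a divergent form, where the two branches of each warp execute serially, and in a reorganized form where whole warps take a single path; both kernels and their names are illustrative assumptions, not code from the deliverable.

// Divergent: odd and even lanes of each 32-thread warp take different
// branches, so the warp executes both paths back to back.
__global__ void divergent(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (i % 2 == 0)
        out[i] = in[i] * in[i];           // path A
    else
        out[i] = sqrtf(fabsf(in[i]));     // path B
}

// Reorganized: consecutive threads (and hence whole warps) take the same
// branch. Assumes n is even for brevity.
__global__ void convergent(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int half = n / 2;
    if (i < half)
        out[2 * i] = in[2 * i] * in[2 * i];                              // all "even" work
    else if (i < n)
        out[2 * (i - half) + 1] = sqrtf(fabsf(in[2 * (i - half) + 1]));  // all "odd" work
}

Note the trade-off: the reorganized variant removes divergence but introduces strided memory accesses, which is exactly the kind of tension a workload characterization has to expose.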
Sep, 7
Combining recent HPC techniques for 3D geophysics acceleration
The Reverse Time Migration technique produces images of the subsurface using wave propagation. A discretization based on the Discontinuous Galerkin method enables a massively parallel elastodynamics simulation, an attractive feature for current and future architectures. In this work, we propose to combine two recent HPC techniques to achieve a high level of efficiency: the use of runtimes (StarPU […]
Sep, 6
D5.5.2 – Architectural Techniques to exploit SLACK & ACCURACY trade-offs
In this work we (a) explore memory slack for state-of-the-art many-core CPUs and GPUs, (b) present techniques to eliminate slack, and (c) explore architectural parameters to improve power efficiency. Dynamic Voltage-Frequency Scaling (DVFS) is one of the most beneficial techniques for CPUs to improve power efficiency. The end of Dennard scaling, however, […]
Sep, 6
A Survey on GPU System Considering its Performance on Different Applications
In this paper we study the NVIDIA graphics processing unit (GPU) along with its computational power and applications. Although these units are specially designed for graphics applications, we can employ their computational power for non-graphics applications too. The GPU has high parallel processing power, low computational cost, and short execution time; it gives good results […]
Sep, 6
Phase Aware Memory Scheduling
Computer architecture is on the brink of convergence with the integration of the general-purpose multi-core CPU architecture and the special-purpose accelerated graphics architecture (GPU). Semiconductor giants like Intel and AMD have already brought next-generation integrated heterogeneous processors to market in the form of the Sandy Bridge and Fusion architectures, respectively. However, with […]
Sep, 6
Skew Handling in Aggregate Streaming Queries on GPUs
Nowadays, the data to be processed by database systems has grown so large that conventional, centralized techniques are inadequate. At the same time, general-purpose computation on GPUs (GPGPU) has recently drawn attention from the data management community due to its ability to achieve significant speed-ups at a small cost. Efficient skew handling […]
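As a generic sketch of why skew hurts GPU aggregation (not the paper's method): when one group key dominates, naive per-key atomics serialize on a single counter, and privatizing partial aggregates in shared memory per block limits that contention. The kernel name and the dense integer key space below are assumptions for illustration.

// Aggregate (sum) values by key with per-block privatization in shared memory.
// Assumes keys are dense integers in [0, num_keys), num_keys fits in shared
// memory, and out[] is zero-initialized before launch. Illustration only.
__global__ void grouped_sum(const int* keys, const float* vals, int n,
                            float* out, int num_keys)
{
    extern __shared__ float local[];               // per-block partial sums
    for (int k = threadIdx.x; k < num_keys; k += blockDim.x)
        local[k] = 0.0f;
    __syncthreads();

    // Accumulate into shared memory: contention from a skewed hot key is
    // limited to the threads of this block.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        atomicAdd(&local[keys[i]], vals[i]);
    __syncthreads();

    // One global atomic per key per block instead of one per input row.
    for (int k = threadIdx.x; k < num_keys; k += blockDim.x)
        atomicAdd(&out[k], local[k]);
}
// Launch with shared memory sized to the key range, e.g.:
//   grouped_sum<<<blocks, 256, num_keys * sizeof(float)>>>(d_keys, d_vals, n, d_out, num_keys);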
Sep, 6
Percolation study of samples on 2D lattices using GPUs
We study the percolation problem of sites on 2D lattices of various geometries using general-purpose graphics processing units (GPGPUs). The implementation of a parallel component-labeling algorithm in CUDA and its generalization to different geometries are discussed. The performance results for this algorithm on a GPU versus the corresponding sequential reference implementation […]
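One common GPU approach to cluster labeling in percolation (one of several variants; the post does not specify which one is used) is iterative label propagation, where each occupied site repeatedly adopts the minimum label among its occupied neighbours until no label changes. A hedged sketch for a square lattice, with assumed names and layout:

// One label-propagation sweep for cluster labeling on an nx x ny square
// lattice. occ[idx] != 0 marks an occupied site; labels are initialized to
// the site index. Repeat sweeps until *changed stays 0. Sketch only.
__global__ void propagate_labels(const unsigned char* occ, int* label,
                                 int nx, int ny, int* changed)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= nx || y >= ny) return;

    int idx = y * nx + x;
    if (!occ[idx]) return;

    int best = label[idx];
    // Take the minimum label over the occupied 4-neighbours.
    if (x > 0      && occ[idx - 1]  && label[idx - 1]  < best) best = label[idx - 1];
    if (x < nx - 1 && occ[idx + 1]  && label[idx + 1]  < best) best = label[idx + 1];
    if (y > 0      && occ[idx - nx] && label[idx - nx] < best) best = label[idx - nx];
    if (y < ny - 1 && occ[idx + nx] && label[idx + nx] < best) best = label[idx + nx];

    if (best < label[idx]) {
        label[idx] = best;
        atomicExch(changed, 1);   // signal the host to run another sweep
    }
}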