Posts
Jan, 10
GPU SQL Query Accelerator
The world is rapidly filling with connected sensors and devices that use geo-location capabilities to continuously report their position. Data analytics companies are looking for ways not only to store this data but also to turn the raw data into valuable information through business intelligence services. This has, almost inadvertently, produced a flood of granular data about our world. Crucially, […]
Jan, 8
Synchronization and Coordination in Heterogeneous Processors
Recent developments in internet connectivity and mobile devices have spurred massive data growth. Users demand rapid data processing from both large-scale systems and energy-constrained personal devices. Concurrently with this data growth, transistor scaling trends have slowed, diminishing processor performance and energy improvements compared to prior generations. To sustain performance trends while staying within energy budgets, […]
Jan, 8
A Framework for Dense Triangular Matrix Kernels on Various Manycore Architectures
We present a new high-performance framework for dense triangular BLAS kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of previous work on a single GPU by the same authors (Charara et al., EuroPar, 2016). In this paper, the performance of triangular BLAS kernels […]
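For readers unfamiliar with the two kernels named above, here is a minimal CPU reference of their semantics in NumPy/SciPy: TRMM multiplies a matrix by a triangular matrix, and TRSM solves a triangular system. This is only an illustrative sketch of what the operations compute (the matrix sizes and conditioning are arbitrary assumptions), not the framework's GPU code.

    import numpy as np
    from scipy.linalg import solve_triangular

    n, m = 512, 256
    A = np.tril(np.random.rand(n, n)) + n * np.eye(n)   # lower-triangular, well conditioned
    B = np.random.rand(n, m)
    alpha = 1.0

    # TRMM: B := alpha * op(A) * B, here with A lower triangular on the left
    B_trmm = alpha * (A @ B)

    # TRSM: solve op(A) * X = alpha * B for X, exploiting the triangular structure
    X = solve_triangular(A, alpha * B, lower=True)
    assert np.allclose(A @ X, alpha * B)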
Jan, 8
Akid: A Library for Neural Network Research and Production from a Dataism Approach
Neural networks are a revolutionary but immature technology that is evolving fast and relies heavily on data. To benefit from the newest developments and newly available data, we want the gap between research and production to be as small as possible. On the other hand, unlike traditional machine learning models, a neural network is not just yet […]
Jan, 8
Communication and Coordination Paradigms for Highly-Parallel Accelerators
As CPU performance plateaus, many communities are turning to highly-parallel accelerators such as graphics processing units (GPUs) to obtain their desired level of processing power. Unfortunately, the GPU’s massive parallelism and data-parallel execution model make it difficult to synchronize GPU threads. To resolve this, we introduce aggregation buffers, which are producer/consumer queues that act as […]
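As a rough illustration of the producer/consumer queue pattern that aggregation buffers build on, the sketch below shows a bounded buffer shared by many producer threads and a single consumer. This is a hypothetical host-side Python analogy only; the paper's buffers mediate between massively parallel GPU threads and other agents, which this toy code does not capture.

    import queue
    import threading

    buf = queue.Queue(maxsize=1024)          # bounded buffer shared by producers and the consumer

    def producer(tid, n_items):
        for i in range(n_items):
            buf.put((tid, i))                # many parallel producers enqueue work items

    def consumer(expected):
        for _ in range(expected):
            buf.get()                        # a single consumer drains the aggregated items
            buf.task_done()

    threads = [threading.Thread(target=producer, args=(t, 100)) for t in range(8)]
    threads.append(threading.Thread(target=consumer, args=(800,)))
    for t in threads:
        t.start()
    for t in threads:
        t.join()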
Jan, 8
Gunrock: GPU Graph Analytics
For large-scale graph analytics on the GPU, the irregularity of data access and control flow, and the complexity of programming GPUs, have presented two significant challenges to developing a programmable high-performance graph library. "Gunrock", our graph-processing system designed specifically for the GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on operations on a vertex or […]
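The sketch below gives a CPU-only flavor of that bulk-synchronous, frontier-centric style: each superstep expands the current vertex frontier in bulk, filters out already-visited vertices, and then swaps frontiers. It is illustrative Python, not Gunrock's actual CUDA/C++ API.

    from collections import defaultdict

    def bfs_frontier(edges, source):
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
        depth = {source: 0}
        frontier = [source]                   # current vertex frontier
        while frontier:
            next_frontier = []
            for u in frontier:                # "advance": expand every frontier vertex in bulk
                for v in adj[u]:
                    if v not in depth:        # "filter": keep only newly visited vertices
                        depth[v] = depth[u] + 1
                        next_frontier.append(v)
            frontier = next_frontier          # barrier between supersteps (bulk-synchronous)
        return depth

    print(bfs_frontier([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)], source=0))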
Jan, 4
Deep Neural Networks to Enable Real-time Multimessenger Astrophysics
We introduce a new methodology for time-domain signal processing, based on deep learning neural networks, which has the potential to revolutionize data analysis in science. To illustrate how this enables real-time multimessenger astrophysics, we designed two deep convolutional neural networks that can analyze time-series data from observatories including advanced LIGO. The first neural network recognizes […]
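As a hedged sketch of what such a classifier can look like, here is a small 1-D convolutional network for fixed-length time series in PyTorch. The framework, layer sizes, and two-class output are illustrative assumptions, not the architectures used in the paper.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=16), nn.ReLU(), nn.MaxPool1d(4),   # learn local waveform features
        nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
        nn.Flatten(),
        nn.LazyLinear(64), nn.ReLU(),
        nn.Linear(64, 2),                # e.g. "signal" vs. "noise"
    )

    x = torch.randn(8, 1, 4096)          # batch of 8 single-channel time series, 4096 samples each
    logits = model(x)
    print(logits.shape)                  # torch.Size([8, 2])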
Jan, 4
Massively Parallel Computation of Accurate Densities for N-body Dark Matter Simulations using the Phase-Space-Element Method
In 2012, a method was introduced to analyze N-body dark matter simulations using a tetrahedral tessellation of the three-dimensional dark matter manifold in six-dimensional phase space. This paper presents an accurate density computation approach for large N-body datasets that is based on this technique and designed for massively parallel GPU clusters. The densities are obtained by […]
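For intuition, the core quantity in this approach is simple: each tetrahedron spanned by four tracer particles carries a fixed share of the mass, and its instantaneous density is that mass divided by the tetrahedron's volume. The NumPy sketch below computes exactly that for a single tetrahedron; the paper's contribution is doing this, plus the deposit onto a grid, at scale on GPU clusters.

    import numpy as np

    def tetrahedron_density(vertices, mass):
        """vertices: (4, 3) array of particle positions; mass: tracer mass of the tetrahedron."""
        a, b, c, d = vertices
        volume = abs(np.dot(b - a, np.cross(c - a, d - a))) / 6.0   # |det| / 6
        return mass / volume

    verts = np.array([[0.0, 0.0, 0.0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    print(tetrahedron_density(verts, mass=1.0))   # unit simplex has volume 1/6, so density 6.0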
Jan, 4
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPUs, which support a wide class of applications but deliver moderate computing performance, to many-core GPUs, which exploit aggressive data parallelism and deliver higher performance for streaming computations. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; […]
Jan, 4
Evaluation of Multi-Threading in Vulkan
Processor development today focuses heavily on parallel performance, providing multiple cores for programs to use. The problem with the current version of OpenGL is that it lacks support for issuing rendering commands from multiple CPU threads. Vulkan is a new low-level graphics API that gives more control to the […]
Jan, 4
An initial performance review of software components for a heterogeneous computing platform
The design of embedded systems is a complex activity that involves many decisions. Given the high performance demands of present-day usage scenarios and software, such systems often incorporate energy-hungry, state-of-the-art computing units. While much attention is paid to the power consumption of these computing units, the physical properties of the software itself are often ignored. Recently, there has been a […]
Dec, 31
Synthesizing Benchmarks for Predictive Modeling
Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the […]
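As a rough sketch of the kind of predictive-modeling setup described, the code below trains a decision tree to predict a made-up binary compiler decision from numeric program features, using only a few dozen "benchmarks". The features, labels, and model choice are illustrative assumptions, not the paper's actual experiments.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    # Hypothetical per-benchmark features: [instructions, memory ops, branches, loop trip count]
    X = rng.integers(1, 10_000, size=(60, 4)).astype(float)   # only a few dozen benchmarks
    y = (X[:, 1] > X[:, 2]).astype(int)                       # toy label, e.g. "apply the optimization?"

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))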