Posts
Nov 16
Efficient Communications in Training Large Scale Neural Networks
We consider the problem of reducing the communication cost required for the parallel training of a neural network. The state-of-the-art method, Bulk Synchronous Parallel Stochastic Gradient Descent (BSP-SGD), requires many collective communication operations, such as broadcasts of parameters or reductions for sub-gradient aggregation, which for large messages quickly dominate overall execution […]
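As background for the kind of collective the abstract refers to, here is a minimal sketch of one BSP-SGD step's gradient aggregation using MPI_Allreduce; the buffer name and the choice of a flat float gradient are illustrative assumptions, not the paper's implementation.

    #include <mpi.h>
    #include <vector>

    // Sketch: each worker holds a local sub-gradient; MPI_Allreduce sums
    // the sub-gradients across all ranks so every worker can apply the
    // same update. For large models this single collective moves
    // O(model size) bytes per step, which is why it can dominate runtime.
    void aggregate_gradients(std::vector<float>& grad, MPI_Comm comm) {
        MPI_Allreduce(MPI_IN_PLACE, grad.data(), (int)grad.size(),
                      MPI_FLOAT, MPI_SUM, comm);
    }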
Nov 16
Data Acquisition with GPUs: The DAQ for the Muon g-2 Experiment at Fermilab
Graphics Processing Units (GPUs) have recently become a valuable computing tool for the acquisition of data at high rates and at relatively low cost. The devices work by parallelizing the code into thousands of threads, each executing a simple process, such as identifying pulses from a waveform digitizer. The CUDA programming library can be […]
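The abstract's example of a simple per-thread task, identifying pulses in digitizer waveforms, might look like the following threshold-crossing kernel; the flat sample layout, the fixed threshold, and the output format are all assumptions made for illustration.

    // Sketch: one thread scans one waveform for samples that cross a
    // threshold upward, recording the index of each leading edge.
    __global__ void find_pulses(const short* samples, int n_waveforms,
                                int n_samples, short threshold,
                                int* pulse_starts, int* n_pulses,
                                int max_pulses) {
        int w = blockIdx.x * blockDim.x + threadIdx.x;
        if (w >= n_waveforms) return;
        const short* wf = samples + (size_t)w * n_samples;
        int count = 0;
        for (int i = 1; i < n_samples && count < max_pulses; ++i) {
            // A pulse starts where the signal rises through the threshold.
            if (wf[i - 1] < threshold && wf[i] >= threshold)
                pulse_starts[(size_t)w * max_pulses + count++] = i;
        }
        n_pulses[w] = count;
    }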
Nov 16
Automatic code generation methods applied to numerical linear algebra in high performance computing
Parallelism is ubiquitous in today’s computer architectures, whether in supercomputers, workstations, or portable devices such as smartphones. Exploiting these systems efficiently for a specific application requires a multidisciplinary effort spanning Domain Specific Languages (DSLs), code generation and optimization techniques, and application-specific numerical algorithms. In this PhD thesis, we present a method […]
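The excerpt ends before the method itself, but one common C++ way to embed a linear-algebra DSL is expression templates, where the fused loop is effectively generated by the compiler; a minimal sketch, which may differ entirely from the thesis's approach:

    #include <cstddef>
    #include <vector>

    // Sketch: x + y builds a lightweight Add node instead of a temporary
    // vector; the single fused loop is "generated" at compile time when
    // the expression is assigned to a Vec.
    template <class L, class R>
    struct Add {
        const L& l; const R& r;
        float operator[](std::size_t i) const { return l[i] + r[i]; }
    };

    struct Vec {
        std::vector<float> data;
        float operator[](std::size_t i) const { return data[i]; }
        template <class E>
        Vec& operator=(const E& e) {  // fused evaluation loop
            for (std::size_t i = 0; i < data.size(); ++i) data[i] = e[i];
            return *this;
        }
    };

    // Deliberately greedy overload, kept minimal for the sketch.
    template <class L, class R>
    Add<L, R> operator+(const L& l, const R& r) { return {l, r}; }

With this, z = x + y + w evaluates in one pass over the data with no intermediate vectors allocated.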
Nov 16
Benchmarking performance of a hybrid Xeon/Xeon Phi system for parallel computation of similarity measures between large vectors
The paper deals with parallelizing the computation of similarity measures between large vectors. Such computations are key components of many applications and are consequently performance-critical. Rather than optimizing a particular algorithm for specific measures, the paper assumes a general scheme for finding similarity measures for all pairs of vectors and investigates […]
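The all-pairs scheme the paper assumes can be illustrated with a simple kernel that computes one measure, here cosine similarity as a placeholder, for every vector pair; the row-major layout and one-thread-per-pair mapping are assumptions for the sketch.

    // Sketch: thread (i, j) computes the similarity of vectors i and j,
    // stored row-major in 'vecs'. The measure itself is a placeholder;
    // the general scheme keeps it pluggable.
    __global__ void all_pairs_cosine(const float* vecs, int n_vecs,
                                     int dim, float* sim) {
        int i = blockIdx.y * blockDim.y + threadIdx.y;
        int j = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_vecs || j >= n_vecs) return;
        float dot = 0.f, ni = 0.f, nj = 0.f;
        for (int k = 0; k < dim; ++k) {
            float a = vecs[(size_t)i * dim + k];
            float b = vecs[(size_t)j * dim + k];
            dot += a * b; ni += a * a; nj += b * b;
        }
        sim[(size_t)i * n_vecs + j] = dot * rsqrtf(ni * nj + 1e-12f);
    }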
Nov 13
CUDA-API-wrappers: Thin C++-flavored wrappers for the CUDA runtime API
NVIDIA’s Runtime API for CUDA is intended for use in both C and C++ code. As such, it uses a C-style API, the lowest common denominator (with a few notable exceptions of templated function overloads). This library of wrappers around the Runtime API is intended to allow us to embrace many of the features of […]
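For contrast, here is the C-style Runtime API idiom the wrappers are written around: untyped void** allocation and manual error-code checking make up most of the code. The helper function and its error policy are illustrative, not part of the library.

    #include <cuda_runtime.h>
    #include <cstdio>

    // The raw C-style Runtime API: void** out-parameters, explicit
    // error-code checks, manual cleanup on each failure path.
    int copy_to_device(const float* host, float** dev, size_t n) {
        cudaError_t err = cudaMalloc((void**)dev, n * sizeof(float));
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaMalloc: %s\n", cudaGetErrorString(err));
            return -1;
        }
        err = cudaMemcpy(*dev, host, n * sizeof(float),
                         cudaMemcpyHostToDevice);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaMemcpy: %s\n", cudaGetErrorString(err));
            cudaFree(*dev);
            return -1;
        }
        return 0;
    }

A C++-flavored wrapper layer can replace this pattern with typed, owning objects and exceptions, which is the gap the library aims to fill.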
Nov 13
OpenCL-based optimizations for acceleration of object tracking on FPGAs and GPUs
OpenCL support across many heterogeneous nodes (FPGAs, GPUs, CPUs) has significantly increased the programmability of these systems. At the same time, it opens up new challenges and design choices for system designers and application programmers. While OpenCL offers universal semantics to capture the parallel behavior of applications independently of the target architecture, some customization […]
Nov 13
Executing Dynamic Data Rate Actor Networks on OpenCL Platforms
Heterogeneous computing platforms consisting of general purpose processors (GPPs) and graphics processing units (GPUs) have become commonplace in personal mobile devices and embedded systems. For years, programming these platforms was very tedious, and simultaneous use of all available GPP and GPU resources required low-level programming to ensure efficient synchronization and data transfer between processors. […]
Nov 13
HadoopCL2: Motivating the Design of a Distributed, Heterogeneous Programming System with Machine-Learning Applications
Machine learning (ML) algorithms have garnered increased interest as they demonstrate improved ability to extract meaningful trends from large, diverse, and noisy data sets. While research is advancing the state-of-the-art in ML algorithms, it is difficult to drastically improve the real-world performance of these algorithms. Porting new and existing algorithms from single-node systems to multi-node […]
Nov 13
Fractal Art Generation using GPUs
Fractal image generation algorithms exhibit extreme parallelizability. Using general purpose graphics processing unit (GPU) programming to implement escape-time algorithms for Julia sets of functions, parallel methods generate visually attractive fractal images much faster than traditional methods. Vastly improved speeds are achieved by this method of computation, which allows real-time generation and display of images. A comparison […]
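The escape-time algorithm mentioned here maps naturally to one thread per pixel; a minimal sketch for the Julia set of f(z) = z² + c, where the viewing window, color mapping, and parameters are assumptions for illustration:

    // Sketch: escape-time iteration for the Julia set of f(z) = z^2 + c.
    // Each thread iterates its own pixel's point until it escapes the
    // radius-2 disk or the iteration budget runs out.
    __global__ void julia(unsigned char* img, int width, int height,
                          float cr, float ci, int max_iter) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        // Map the pixel into roughly [-1.5, 1.5] x [-1.5, 1.5].
        float zr = 3.0f * x / width - 1.5f;
        float zi = 3.0f * y / height - 1.5f;
        int it = 0;
        while (it < max_iter && zr * zr + zi * zi < 4.0f) {
            float t = zr * zr - zi * zi + cr;  // z <- z^2 + c
            zi = 2.0f * zr * zi + ci;
            zr = t;
            ++it;
        }
        img[y * width + x] = (unsigned char)(255 * it / max_iter);
    }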
Nov 10
Shuffle Reduction Based Sparse Matrix-Vector Multiplication on Kepler GPU
The GPU is well suited to accelerating compute-intensive applications and to achieving high throughput in High Performance Computing (HPC). Sparse Matrix-Vector Multiplication (SpMV) is a core HPC kernel, so SpMV throughput on the GPU can affect the throughput of the whole HPC platform. In this paper, we focus on the latency of the reduction routine […]
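The shuffle-based reduction in the title replaces shared-memory reductions with register-to-register exchanges within a warp; a minimal sum-reduction sketch, using the CUDA 9+ _sync variant rather than the Kepler-era __shfl_down, and not the paper's exact SpMV routine:

    // Sketch: warp-level sum reduction with shuffle instructions.
    // Partial products from an SpMV row segment are combined entirely
    // in registers, avoiding shared-memory traffic and its latency.
    __inline__ __device__ float warp_reduce_sum(float val) {
        // Each step halves the number of active summands in the warp.
        for (int offset = 16; offset > 0; offset >>= 1)
            val += __shfl_down_sync(0xffffffffu, val, offset);
        return val;  // lane 0 ends up holding the warp's total
    }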
Nov 10
PipeCNN: An OpenCL-Based FPGA Accelerator for Large-Scale Convolutional Neural Networks
Convolutional neural networks (CNNs) have been widely employed in many applications, such as image classification, video analysis, and speech recognition. Being compute-intensive, CNN computations are mainly accelerated by GPUs with high power dissipation. Recently, studies have explored FPGAs as CNN accelerators because of their reconfigurability and energy-efficiency advantage over GPUs, especially when […]
Nov 10
Optimization and parallelization of B-spline based orbital evaluations in QMC on multi/many-core shared memory processors
B-spline based orbital representations are widely used in Quantum Monte Carlo (QMC) simulations of solids, historically taking as much as 50% of the total run time. Random accesses to a large four-dimensional array make it challenging to efficiently utilize caches and wide vector units of modern CPUs. We present node-level optimizations of B-spline evaluations on […]
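At the core of such evaluations is computing four cubic basis weights per axis from a fractional coordinate, then gathering a 4x4x4 block of coefficients from the large array, which is the random-access pattern the abstract says stresses caches. A sketch of the standard 1-D uniform cubic B-spline weights (the paper's optimized kernels will differ):

    // Sketch: basis weights for a uniform cubic B-spline at fractional
    // coordinate t in [0, 1). A 3-D orbital evaluation computes these
    // per axis, then gathers and weights a 4x4x4 coefficient block.
    inline void cubic_bspline_weights(float t, float w[4]) {
        float t2 = t * t, t3 = t2 * t;
        w[0] = (1.0f / 6.0f) * (1.0f - 3.0f * t + 3.0f * t2 - t3);  // (1-t)^3 / 6
        w[1] = (1.0f / 6.0f) * (4.0f - 6.0f * t2 + 3.0f * t3);
        w[2] = (1.0f / 6.0f) * (1.0f + 3.0f * t + 3.0f * t2 - 3.0f * t3);
        w[3] = (1.0f / 6.0f) * t3;                                   // t^3 / 6
    }

The four weights sum to 1 for any t, a quick sanity check for implementations.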