
Posts

Nov, 18

Accelerating Low-End Edge Computing with Cross-Kernel Functionality Abstraction

This paper envisions a future in which high-performance, energy-efficient parallel computing on low-end edge devices is achieved through cross-device functionality abstraction, making them interoperable with cloud machines. So far, however, there has been little exploration of the overall optimization that kernel processing can deliver for the increasingly popular but heavily burdened low-end edge devices. […]
Nov, 18

Spatter: A Benchmark Suite for Evaluating Sparse Access Patterns

Recent characterizations of data movement performance have evaluated optimizations for dense and blocked accesses used by accelerators like GPUs and Xeon Phi, but sparse access patterns like scatter and gather are still not well understood across current and emerging architectures. We propose a tunable benchmark suite, Spatter, that allows users to characterize scatter, gather, and […]
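
The two access patterns at the core of the suite can be sketched in a few lines (an illustrative Python model of the patterns themselves, not Spatter's actual API):

```python
def gather(src, idx):
    # Gather: indirect reads, dst[i] = src[idx[i]]
    return [src[j] for j in idx]

def scatter(dst, idx, src):
    # Scatter: indirect writes, dst[idx[i]] = src[i]
    for i, j in enumerate(idx):
        dst[j] = src[i]
    return dst
```

Performance of both patterns is dominated by how the memory system handles the irregular index stream, which is exactly what a tunable benchmark like Spatter varies.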
Nov, 18

FusionStitching: Deep Fusion and Code Generation for Tensorflow Computations on GPUs

In recent years, there has been a surge of machine learning applications in industry. Many of them are based on popular AI frameworks such as Tensorflow, Torch, Caffe, or MxNet, and are empowered by accelerator platforms such as GPUs. One important challenge of running Tensorflow computations on GPUs is the fine-granularity problem, namely, the FLOPS of […]
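
The fine-granularity problem arises when many small elementwise kernels each launch separately and write their intermediates back to memory. Kernel fusion merges them into a single pass; a minimal Python sketch of the idea (our own illustration, not FusionStitching's code):

```python
def unfused(x):
    # Three separate "kernels", each materializing a full intermediate array
    a = [v * 2.0 for v in x]            # kernel 1: scale
    b = [v + 1.0 for v in a]            # kernel 2: bias
    return [max(v, 0.0) for v in b]     # kernel 3: ReLU

def fused(x):
    # One "kernel": the same math in a single pass, with no intermediates
    return [max(v * 2.0 + 1.0, 0.0) for v in x]
```

On a GPU the fused version saves both kernel-launch overhead and two full round trips through device memory, which is where the FLOPS of fine-grained graphs are otherwise lost.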
Nov, 18

StePS: A Multi-GPU Cosmological N-body Code for Compactified Simulations

We present the multi-GPU realization of the StePS (Stereographically Projected Cosmological Simulations) algorithm with MPI-OpenMP-CUDA hybrid parallelization, and show what parallelization efficiency can be reached. We use a new zoom-in cosmological direct N-body simulation method that can simulate the infinite universe with unprecedented dynamic range for a given amount of memory and, in contrast to […]
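
The direct N-body method underlying such codes evaluates all pairwise gravitational interactions, an O(N²) loop that maps naturally onto GPUs. A minimal, unoptimized Python sketch (the Plummer softening parameter `eps` and unit choices are our illustration):

```python
def accelerations(pos, mass, G=1.0, eps=1e-3):
    # Direct-summation N-body: O(N^2) pairwise forces with Plummer softening
    # to avoid the singularity at zero separation.
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2 + eps ** 2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc
```

The inner loop is identical for every body, which is why direct summation parallelizes so well across CUDA threads and multiple GPUs.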
Nov, 18

AMGCL: an Efficient, Flexible, and Extensible Algebraic Multigrid Implementation

The paper presents AMGCL, an open-source C++ library implementing the algebraic multigrid method (AMG) for the solution of large sparse linear systems of equations, usually arising from the discretization of partial differential equations on an unstructured grid. The library supports both shared- and distributed-memory computation, and allows one to utilize modern massively parallel processors via OpenMP, OpenCL, […]
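
The multigrid principle AMGCL builds on can be illustrated on the simplest model problem: a two-grid cycle for 1D Poisson, with weighted Jacobi smoothing and a direct coarse solve. This is a geometric sketch of the idea, not AMGCL's API; AMG constructs its coarse levels algebraically from the matrix instead of from a mesh.

```python
def thomas_solve(n, H, b):
    # Direct Thomas solve of (1/H^2) * tridiag(-1, 2, -1) x = b (coarse grid)
    b = [v * H * H for v in b]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, b[0] / 2.0
    for i in range(1, n):
        denom = 2.0 + cp[i - 1]
        cp[i] = -1.0 / denom
        dp[i] = (b[i] + dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def v_cycle(b, h, u=None, omega=2.0 / 3.0, nu=3):
    # One two-grid cycle for (1/h^2) * tridiag(-1, 2, -1) u = b
    n = len(b)
    u = list(u) if u is not None else [0.0] * n
    def residual(u):
        return [b[i] - (2.0 * u[i]
                        - (u[i - 1] if i > 0 else 0.0)
                        - (u[i + 1] if i < n - 1 else 0.0)) / (h * h)
                for i in range(n)]
    def smooth(u, sweeps):
        # Weighted Jacobi: damps the oscillatory error components
        for _ in range(sweeps):
            r = residual(u)
            u = [u[i] + omega * r[i] * h * h / 2.0 for i in range(n)]
        return u
    u = smooth(u, nu)                       # pre-smoothing
    r = residual(u)
    nc = (n - 1) // 2                       # coarse grid: every other point
    rc = [(r[2 * j] + 2.0 * r[2 * j + 1] + r[2 * j + 2]) / 4.0
          for j in range(nc)]               # full-weighting restriction
    ec = thomas_solve(nc, 2.0 * h, rc)      # exact coarse-grid correction
    ef = [0.0] * n                          # linear interpolation back up
    for j in range(nc):
        ef[2 * j + 1] = ec[j]
    for i in range(0, n, 2):
        ef[i] = ((ef[i - 1] if i > 0 else 0.0)
                 + (ef[i + 1] if i < n - 1 else 0.0)) / 2.0
    u = [u[i] + ef[i] for i in range(n)]
    return smooth(u, nu)                    # post-smoothing
```

Each cycle reduces the error by a factor independent of the grid size, which is the property that makes multigrid (and AMG) asymptotically optimal for such systems.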
Nov, 11

Hashing, Caching, and Synchronization: Memory Techniques for Latency Masking Multithreaded Applications

The increase in size and decrease in cost of DRAMs has led to a rapid growth of in-memory solutions to data analytics. In this area, performance is often limited by the latency and bandwidth of the memory system. Furthermore, the move to multicore execution has put added pressure on the memory bandwidth and often results […]
Nov, 11

Double-precision FPUs in High-Performance Computing: an Embarrassment of Riches?

Among the (uncontested) common wisdom in High-Performance Computing (HPC) is the applications’ need for a large amount of double-precision support in hardware. Hardware manufacturers, the TOP500 list, and (rarely revisited) legacy software have without doubt followed and contributed to this view. In this paper, we challenge that wisdom, and we do so by exhaustively comparing a […]
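
The trade-off under discussion is easy to demonstrate: naive accumulation in single precision drifts away from the true sum far sooner than in double. A small Python illustration (we emulate float32 by rounding every intermediate through `struct`; the thresholds below are illustrative):

```python
import struct

def to_f32(x):
    # Round a Python float (IEEE-754 double) to the nearest float32 value
    return struct.unpack('f', struct.pack('f', x))[0]

def naive_sum_f32(xs):
    # Accumulate with every intermediate rounded to single precision
    s = 0.0
    for x in xs:
        s = to_f32(s + to_f32(x))
    return s

vals = [0.1] * 100_000
single = naive_sum_f32(vals)  # drifts visibly away from 10000
double = sum(vals)            # stays within rounding noise of 10000
```

Whether an application can tolerate the single-precision drift (or fix it with compensated summation) is precisely what decides how much double-precision hardware it actually needs.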
Nov, 11

The AlexNet Moment for Homomorphic Encryption: HCNN, the First Homomorphic CNN on Encrypted Data with GPUs

Fully homomorphic encryption, with its widely-known feature of computing on encrypted data, empowers a wide range of privacy-concerned cloud applications including deep learning as a service. This comes at a high cost since FHE includes highly-intensive computation that requires enormous computing power. Although the literature includes a number of proposals to run CNNs on encrypted […]
Nov, 11

Workload-aware Automatic Parallelization for Multi-GPU DNN Training

Deep neural networks (DNNs) have emerged as successful solutions for a variety of artificial intelligence applications, but their very large and deep models impose high computational requirements during training. Multi-GPU parallelization is a popular option to accelerate demanding computations in DNN training, but most state-of-the-art multi-GPU deep learning frameworks not only require users to have an […]
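
The most common strategy such frameworks automate is data parallelism: replicate the model, shard the minibatch, and average the gradients. A toy Python sketch of one synchronous update (the function name and plain-SGD rule are our illustration):

```python
def data_parallel_step(params, grads_per_gpu, lr=0.1):
    # Each "GPU" computed gradients on its own shard of the minibatch;
    # averaging them (the all-reduce step) makes every replica take the
    # same parameter update, keeping the copies in sync.
    n = len(grads_per_gpu)
    avg = [sum(g[i] for g in grads_per_gpu) / n
           for i in range(len(params))]
    return [p - lr * g for p, g in zip(params, avg)]
```

Workload-aware parallelizers go beyond this fixed recipe by choosing, per layer, whether data, model, or hybrid parallelism fits best.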
Nov, 11

A Hybrid GPU-FPGA-based Computing Platform for Machine Learning

We present a hybrid GPU-FPGA-based computing platform to tackle the high-density computing problem of machine learning. In our platform, the training part of a machine learning application is implemented on the GPU and the inference part is implemented on the FPGA. The platform also includes a model transplantation part, which can transplant the model from the […]
Nov, 3

Power analysis of sorting algorithms on FPGA using OpenCL

With the advent of big data and cloud computing, there is tremendous interest in optimised algorithms and architectures for sorting either using software or hardware. Field Programmable Gate Arrays (FPGAs) are being increasingly used in high end data servers providing a bridge between the flexibility of software and performance benefits of hardware. In this paper […]
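
A representative FPGA-friendly sorter (our choice of example; the excerpt does not list the paper's exact algorithms) is the bitonic network: its compare-exchange schedule is fixed and data-independent, which is why it pipelines well in hardware and is a common target for OpenCL-to-FPGA flows. A reference Python model of the network:

```python
def bitonic_sort(a):
    # Bitonic sorting network: a fixed sequence of compare-exchange
    # operations that does not depend on the data, only on the indices.
    n = len(a)
    assert n & (n - 1) == 0, "network size must be a power of two"
    a = list(a)
    k = 2
    while k <= n:               # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:           # compare-exchange distance within a merge
            for i in range(n):
                l = i ^ j
                if l > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[l]) == ascending:
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return a
```

Because every compare-exchange in an inner pass is independent, each pass becomes one fully parallel pipeline stage on an FPGA, at the cost of O(n log² n) comparators versus O(n log n) for software sorts.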
Nov, 3

Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation

TensorFlow has been the most widely adopted Machine/Deep Learning framework. However, little exists in the literature that provides a thorough understanding of the capabilities which TensorFlow offers for the distributed training of large ML/DL models that need computation and communication at scale. Most commonly used distributed training approaches for TF can be categorized as follows: […]
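
One approach in that taxonomy, gradient averaging via MPI all-reduce, is typically implemented as a ring. A pure-Python simulation of the ring schedule (the rank/chunk bookkeeping is our illustration; real implementations overlap these steps with CUDA-aware sends directly from GPU memory):

```python
def ring_allreduce(chunks):
    # chunks[r][c] is rank r's c-th chunk (a list of floats); p ranks, each
    # vector split into p chunks. After p-1 reduce-scatter steps and p-1
    # allgather steps, every rank holds the elementwise sum.
    p = len(chunks)
    data = [[list(c) for c in rank] for rank in chunks]
    for step in range(p - 1):                 # reduce-scatter phase
        for r in range(p):
            c = (r - step) % p                # chunk rank r forwards this step
            dst = (r + 1) % p
            data[dst][c] = [a + b for a, b in zip(data[dst][c], data[r][c])]
    for step in range(p - 1):                 # allgather phase
        for r in range(p):
            c = (r + 1 - step) % p            # fully reduced chunk to pass on
            dst = (r + 1) % p
            data[dst][c] = list(data[r][c])
    return data
```

Each rank sends and receives only about 2(p-1)/p of the vector per reduction, independent of p, which is what makes the ring schedule bandwidth-optimal for large gradient tensors.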

* * *


HGPU group © 2010-2020 hgpu.org

All rights belong to the respective authors

Contact us: