Posts
Dec, 2
PanJoin: A Partition-based Adaptive Stream Join
In stream processing, the stream join is one of the critical sources of performance bottlenecks. The sliding-window-based stream join provides a precise result but consumes considerable computational resources. Current solutions lack support for join predicates on large windows. These algorithms and their hardware accelerators are either limited to equi-join or use a nested loop […]
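The nested-loop baseline the abstract mentions is easy to picture with a minimal sketch (hypothetical tuple layout and window size; this is the generic baseline, not PanJoin's algorithm): each arriving tuple expires stale tuples, then probes the opposite window with an arbitrary predicate.

```python
from collections import deque

def sliding_window_join(stream_r, stream_s, window, predicate):
    """Nested-loop join of two timestamped streams over a time-based
    sliding window. Each input tuple is (timestamp, key, value)."""
    win_r, win_s = deque(), deque()
    results = []
    # Merge the two streams in timestamp order (assumed pre-sorted here).
    events = sorted([(t, 'R', k, v) for t, k, v in stream_r] +
                    [(t, 'S', k, v) for t, k, v in stream_s])
    for t, side, k, v in events:
        # Expire tuples that fell out of the window.
        for win in (win_r, win_s):
            while win and win[0][0] <= t - window:
                win.popleft()
        # Probe the opposite window with the join predicate (nested loop).
        other = win_s if side == 'R' else win_r
        for ot, ok, ov in other:
            if predicate(k, ok):
                results.append((t, k, v, ok, ov))
        (win_r if side == 'R' else win_s).append((t, k, v))
    return results
```

A nested loop supports arbitrary predicates (band joins, inequalities), which is the generality at issue here, but it costs one full probe of the opposite window per arriving tuple.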
Dec, 2
Performance Portability Challenges for Fortran Applications
This project investigates how different approaches to parallel optimization impact the performance portability of Fortran codes. In addition, we explore the productivity challenges due to the software tool-chain limitations unique to Fortran. For this study, we build upon the Truchas software, a metal casting manufacturing simulation code based on unstructured mesh methods and our initial […]
Nov, 25
CLort: High Throughput and Low Energy Network Intrusion Detection on IoT Devices with Embedded GPUs
While IoT is becoming widespread, cyber security of its devices is still a limiting factor, and recent attacks (e.g., the Mirai botnet) underline the need for countermeasures. One commonly-used security mechanism is a Network Intrusion Detection System (NIDS), but the processing demands of NIDS have been a significant bottleneck for large dedicated machines, and a […]
Nov, 25
SuperNeurons: FFT-based Gradient Sparsification in the Distributed Training of Deep Neural Networks
The performance and efficiency of distributed training of Deep Neural Networks highly depend on the performance of gradient averaging among all participating nodes, which is bounded by the communication between nodes. There are two major strategies to reduce communication overhead: one is to hide communication by overlapping it with computation, and the other is to […]
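One widely used family of communication-reducing techniques transmits only a compressed form of each gradient; a minimal top-k magnitude sketch (illustrative only, and deliberately not the paper's FFT-based method) shows the sparse form a node would communicate in place of the dense gradient:

```python
def sparsify_topk(grad, k):
    """Keep the k largest-magnitude gradient entries; drop the rest.
    Returns (indices, values) -- the pair a node would communicate."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    idx = sorted(idx)
    return idx, [grad[i] for i in idx]

def densify(indices, values, n):
    """Reconstruct a dense gradient from the communicated sparse form."""
    g = [0.0] * n
    for i, v in zip(indices, values):
        g[i] = v
    return g
```

The trade-off is the usual one: fewer bytes on the wire per averaging step versus information lost from the dropped entries.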
Nov, 25
Modeling Deep Learning Accelerator Enabled GPUs
The efficacy of deep learning has resulted in it becoming one of the most important applications run in data centers today. The NVIDIA Tesla V100 GPU introduced a specialized functional unit called the Tensor Core to meet growing demand for higher performance on this workload. To exploit the full capability of current NVIDIA GPUs, machine […]
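The Tensor Core's basic operation is a fused matrix multiply-accumulate, D = A×B + C, on small tiles, with FP16 inputs multiplied and accumulated at higher (FP32) precision. A plain-Python emulation of the arithmetic (not the hardware interface) makes the operation concrete:

```python
def tensor_core_mma(A, B, C):
    """Emulate the fused multiply-accumulate a Tensor Core performs:
    D = A x B + C on 4x4 tiles. On hardware the inputs are FP16 and
    the products are accumulated in FP32; here everything is a float."""
    n = 4
    return [[C[i][j] + sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]
```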
Nov, 25
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences
BACKGROUND: The Smith-Waterman (SW) algorithm is the best choice for searching similar regions between two DNA or protein sequences. However, it may become impracticable in some contexts due to its high computational demands. Consequently, the computer science community has focused on the use of modern parallel architectures such as Graphics Processing Units (GPUs), Xeon Phi […]
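The SW recurrence itself is compact; a minimal scoring-only sketch (linear gap penalty, illustrative score values) shows why the algorithm is O(mn) in time and hence computationally demanding on long sequences:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment: returns the best local score.
    H[i][j] is the best score of an alignment ending at a[i-1], b[j-1];
    the max with 0 is what makes the alignment local."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match/mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

The anti-diagonal dependencies of H are what FPGA and GPU implementations exploit for parallelism.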
Nov, 25
Dense and sparse parallel linear algebra algorithms on graphics processing units
One line of development followed in the field of supercomputing is the use of special-purpose processors to speed up certain types of computations. In this thesis we study the use of graphics processing units as accelerators and apply them to the field of linear algebra. In particular, we work with the SLEPc library […]
Nov, 18
Accelerating Low-End Edge Computing with Cross-Kernel Functionality Abstraction
This paper envisions a future in which high-performance and energy-modest parallel computing on low-end edge devices is achieved through cross-device functionality abstraction that makes them interactive with cloud machines. So far, however, there has been little exploration of the overall optimization that kernel-level processing can deliver for increasingly popular but heavily burdened low-end edge devices. […]
Nov, 18
Spatter: A Benchmark Suite for Evaluating Sparse Access Patterns
Recent characterizations of data movement performance have evaluated optimizations for dense and blocked accesses used by accelerators like GPUs and Xeon Phi, but sparse access patterns like scatter and gather are still not well understood across current and emerging architectures. We propose a tunable benchmark suite, Spatter, that allows users to characterize scatter, gather, and […]
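The two access patterns the suite centers on are simple to state; a sketch of the core kernels a benchmark like this times (illustrative shapes, not Spatter's actual parameterization) shows why they stress the memory system: every element access is indirect through an index array.

```python
def gather(src, idx):
    """Gather kernel: dst[i] = src[idx[i]] -- the sparse read pattern."""
    return [src[i] for i in idx]

def scatter(dst, idx, vals):
    """Scatter kernel: dst[idx[i]] = vals[i] -- the sparse write pattern."""
    for i, v in zip(idx, vals):
        dst[i] = v
    return dst
```

Unlike dense or blocked accesses, the index stream makes locality data-dependent, which is exactly what such a benchmark parameterizes.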
Nov, 18
StePS: A Multi-GPU Cosmological N-body Code for Compactified Simulations
We present the multi-GPU realization of the StePS (Stereographically Projected Cosmological Simulations) algorithm with MPI-OpenMP-CUDA hybrid parallelization, and show what parallelization efficiency can be reached. We use a new zoom-in cosmological direct N-body simulation method that can simulate the infinite universe with unprecedented dynamic range for a given amount of memory and, in contrast to […]
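"Direct" here means every particle-pair force is summed explicitly, the O(N²) pattern that maps well to GPUs. A minimal sketch (2D, toy units, plain Python; not StePS's compactified geometry) of that summation:

```python
def direct_accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct (O(N^2)) summation of pairwise gravitational accelerations
    in 2D, with Plummer softening eps to avoid the r -> 0 singularity."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + eps * eps
            inv_r3 = r2 ** -1.5
            acc[i][0] += G * mass[j] * dx * inv_r3
            acc[i][1] += G * mass[j] * dy * inv_r3
    return acc
```

The inner loop is embarrassingly parallel over i, which is why direct summation achieves high efficiency on GPUs despite its quadratic cost.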
Nov, 18
FusionStitching: Deep Fusion and Code Generation for Tensorflow Computations on GPUs
In recent years, there has been a surge of machine learning applications in industry. Many of them are based on popular AI frameworks like Tensorflow, Torch, Caffe, or MxNet, and are empowered by accelerator platforms such as GPUs. One important challenge of running Tensorflow computations on GPUs is the fine granularity problem, namely, FLOPS of […]
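The fine granularity problem is that many small elementwise kernels each round-trip through device memory; fusion stitches them into one pass so the intermediate stays in registers. A toy illustration of the payoff (conceptual only, not FusionStitching's code generation):

```python
def unfused(xs):
    """Two elementwise passes: the first 'kernel' writes a full
    intermediate array, the second re-reads it from memory."""
    tmp = [x * 2.0 for x in xs]        # kernel 1: materializes tmp
    return [t + 1.0 for t in tmp]      # kernel 2: re-reads tmp

def fused(xs):
    """One fused pass: the intermediate value never touches memory."""
    return [x * 2.0 + 1.0 for x in xs]
```

Both produce identical results; the fused form halves the memory traffic, which dominates when per-kernel FLOPS are small.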
Nov, 18
AMGCL: an Efficient, Flexible, and Extensible Algebraic Multigrid Implementation
The paper presents AMGCL – an open-source C++ library implementing the algebraic multigrid method (AMG) for the solution of large sparse linear systems of equations, usually arising from the discretization of partial differential equations on an unstructured grid. The library supports both shared- and distributed-memory computation, and allows users to utilize modern massively parallel processors via OpenMP, OpenCL, […]
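AMG builds its hierarchy around cheap iterative smoothers combined with coarse-grid correction; a minimal weighted-Jacobi smoother sketch (dense list-of-lists matrix for clarity, not AMGCL's API, which works on sparse formats such as CSR) shows the kind of building block involved:

```python
def jacobi_smooth(A, b, x, omega=2.0 / 3.0, sweeps=3):
    """Weighted Jacobi sweeps: x <- x + omega * D^{-1} (b - A x).
    Cheap smoothers like this damp high-frequency error; AMG pairs
    them with coarse-grid correction for the low-frequency modes."""
    n = len(b)
    for _ in range(sweeps):
        # The comprehension reads the old x for the whole sweep (Jacobi).
        x = [x[i] + omega * (b[i] - sum(A[i][j] * x[j] for j in range(n)))
             / A[i][i] for i in range(n)]
    return x
```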