
Posts

Nov 11

Hashing, Caching, and Synchronization: Memory Techniques for Latency Masking Multithreaded Applications

The increase in size and decrease in cost of DRAMs have led to a rapid growth of in-memory solutions for data analytics. In this area, performance is often limited by the latency and bandwidth of the memory system. Furthermore, the move to multicore execution has put added pressure on memory bandwidth and often results […]
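The latency-masking idea is easiest to see in a hash-probe loop. Below is a minimal sketch of group prefetching (illustrative only, not the paper's implementation): compute a batch of bucket addresses, issue software prefetches for all of them, then probe once the loads are in flight, so cache misses overlap with useful work instead of stalling the core. The group size, hash function, and table layout are assumptions.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Bucket { uint64_t key; uint64_t value; };

static inline uint64_t hash64(uint64_t k) { return k * 0x9E3779B97F4A7C15ull; }

uint64_t probe_batch(const std::vector<Bucket>& table,
                     const std::vector<uint64_t>& keys) {
    const uint64_t mask = table.size() - 1;  // table size assumed a power of two
    constexpr size_t kGroup = 16;            // prefetches kept in flight (assumed)
    uint64_t hits = 0;
    for (size_t base = 0; base < keys.size(); base += kGroup) {
        const size_t end = std::min(base + kGroup, keys.size());
        size_t slot[kGroup];
        for (size_t i = base; i < end; ++i) {           // phase 1: prefetch
            slot[i - base] = hash64(keys[i]) & mask;
            __builtin_prefetch(&table[slot[i - base]]); // GCC/Clang builtin
        }
        for (size_t i = base; i < end; ++i)             // phase 2: probe
            if (table[slot[i - base]].key == keys[i])
                hits += table[slot[i - base]].value;
    }
    return hits;
}

int main() {
    std::vector<Bucket> table(1 << 20, Bucket{~0ull, 0});
    std::vector<uint64_t> keys(1 << 16);
    for (uint64_t k = 0; k < keys.size(); ++k) {
        keys[k] = k;
        table[hash64(k) & (table.size() - 1)] = Bucket{k, 1};
    }
    std::printf("hits: %llu\n", (unsigned long long)probe_batch(table, keys));
    return 0;
}
```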
Nov 11

Double-precision FPUs in High-Performance Computing: an Embarrassment of Riches?

Among the (uncontended) common wisdom in High-Performance Computing (HPC) is the applications’ need for large amounts of double-precision support in hardware. Hardware manufacturers, the TOP500 list, and (rarely revisited) legacy software have without doubt followed and contributed to this view. In this paper, we challenge that wisdom, and we do so by exhaustively comparing a […]
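The trade-off at stake can be seen in miniature with a toy example (not the paper's methodology): accumulating many small terms in single precision loses digits that double precision keeps, which is the classic argument for generous fp64 hardware.

```cpp
#include <cstdio>

int main() {
    const int n = 10000000;                 // exact sum would be 1,000,000
    float  s32 = 0.0f;
    double s64 = 0.0;
    for (int i = 0; i < n; ++i) { s32 += 0.1f; s64 += 0.1; }
    // Expect the float total to be visibly off; the double stays close,
    // because float's 24-bit mantissa rounds each add once the sum is large.
    std::printf("float:  %.1f\ndouble: %.1f\n", s32, s64);
    return 0;
}
```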
Nov 11

The AlexNet Moment for Homomorphic Encryption: HCNN, the First Homomorphic CNN on Encrypted Data with GPUs

Fully homomorphic encryption (FHE), with its widely known feature of computing on encrypted data, empowers a wide range of privacy-concerned cloud applications, including deep learning as a service. This comes at a high cost, since FHE involves highly intensive computation that requires enormous computing power. Although the literature includes a number of proposals to run CNNs on encrypted […]
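To make "computing on encrypted data" concrete, here is a toy sketch of the homomorphic idea only: with additive one-time masks mod M, a server can add "ciphertexts" it cannot read. This is NOT FHE and not the lattice-based scheme HCNN-style systems build on; real FHE also supports multiplication and uses entirely different machinery.

```cpp
#include <cstdint>
#include <cstdio>

constexpr uint64_t M = 1ull << 32;                              // modulus
uint64_t enc(uint64_t m, uint64_t k) { return (m + k) % M; }    // k: secret mask
uint64_t dec(uint64_t c, uint64_t k) { return (c + M - k) % M; }

int main() {
    const uint64_t k1 = 0xDEADBEEFu, k2 = 0xC0FFEEu;  // per-message secret masks
    const uint64_t c1 = enc(41, k1), c2 = enc(1, k2);
    const uint64_t csum = (c1 + c2) % M;              // server adds blindly
    std::printf("%llu\n", (unsigned long long)dec(csum, (k1 + k2) % M)); // 42
    return 0;
}
```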
Nov 11

Workload-aware Automatic Parallelization for Multi-GPU DNN Training

Deep neural networks (DNNs) have emerged as successful solutions for a variety of artificial intelligence applications, but their very large and deep models impose high computational requirements during training. Multi-GPU parallelization is a popular option to accelerate demanding computations in DNN training, but most state-of-the-art multi-GPU deep learning frameworks not only require users to have an […]
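One common way to frame such workload-aware decisions (illustrative only, not necessarily the paper's cost model) is a per-layer comparison of communication volume: data parallelism pays per step in gradient size, model parallelism pays in activation traffic, so convolutional layers tend one way and fully connected layers the other.

```cpp
#include <cstdio>

struct Layer { long long weight_bytes; long long activation_bytes_per_sample; };

// Pick the strategy with the smaller communication volume per training step.
const char* choose(const Layer& l, long long batch) {
    long long data_par_cost  = l.weight_bytes;                        // gradient all-reduce
    long long model_par_cost = batch * l.activation_bytes_per_sample; // activation exchange
    return data_par_cost <= model_par_cost ? "data-parallel" : "model-parallel";
}

int main() {
    // Hypothetical layer shapes: conv has small weights, big activations;
    // a fully connected layer is the reverse.
    Layer conv = {  9ll * 64 * 64 * 4, 64ll * 56 * 56 * 4 };
    Layer fc   = { 4096ll * 4096 * 4,  4096ll * 4 };
    std::printf("conv: %s\nfc:   %s\n", choose(conv, 128), choose(fc, 128));
    return 0;
}
```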
Nov 11

A Hybrid GPU-FPGA-based Computing Platform for Machine Learning

We present a hybrid GPU-FPGA-based computing platform to tackle the high-density computing problem of machine learning. In our platform, the training part of a machine learning application is implemented on the GPU and the inference part is implemented on the FPGA. The platform also includes a model transplantation part, which can transplant the model from the […]
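As a hedged sketch of what a transplantation step might look like (the file layout and names below are assumptions, not the paper's format), the GPU-trained weights could be serialized as flat [count, float32 data] records for the FPGA inference side to load:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

bool export_weights(const char* path, const std::vector<std::vector<float>>& layers) {
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    for (const auto& w : layers) {
        const uint32_t n = (uint32_t)w.size();          // per-layer element count
        std::fwrite(&n, sizeof n, 1, f);
        std::fwrite(w.data(), sizeof(float), w.size(), f);
    }
    return std::fclose(f) == 0;
}

int main() {
    std::vector<std::vector<float>> layers = {{0.1f, -0.2f, 0.3f}, {1.5f}};
    return export_weights("weights.bin", layers) ? 0 : 1;
}
```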
Nov 3

Power analysis of sorting algorithms on FPGA using OpenCL

With the advent of big data and cloud computing, there is tremendous interest in optimised algorithms and architectures for sorting, using either software or hardware. Field Programmable Gate Arrays (FPGAs) are increasingly being used in high-end data servers, providing a bridge between the flexibility of software and the performance benefits of hardware. In this paper […]
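For reference, the sorting network most such studies start from is the bitonic sorter, which maps well to both GPUs and FPGAs. The sketch below (not necessarily the paper's kernels) runs the classic O(log² n) sequence of compare-exchange passes through a single OpenCL kernel; n must be a power of two. Link with -lOpenCL.

```cpp
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <algorithm>
#include <cstdio>
#include <vector>

static const char* kSrc = R"(
__kernel void bitonic_step(__global float* a, uint j, uint k) {
    uint i = get_global_id(0), ixj = i ^ j;
    if (ixj > i) {
        int up = ((i & k) == 0);                /* sort direction for this pair */
        if ((a[i] > a[ixj]) == up) {
            float t = a[i]; a[i] = a[ixj]; a[ixj] = t;
        }
    }
})";

int main() {
    const size_t n = 1 << 16;
    std::vector<float> data(n);
    for (size_t i = 0; i < n; ++i) data[i] = (float)((i * 2654435761u) % n);

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, "", nullptr, nullptr);
    cl_kernel kern = clCreateKernel(prog, "bitonic_step", nullptr);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(float), data.data(), nullptr);
    clSetKernelArg(kern, 0, sizeof buf, &buf);

    // Outer loop grows sorted runs; inner loop merges them stage by stage.
    for (cl_uint k = 2; k <= n; k <<= 1)
        for (cl_uint j = k >> 1; j > 0; j >>= 1) {
            clSetKernelArg(kern, 1, sizeof j, &j);
            clSetKernelArg(kern, 2, sizeof k, &k);
            clEnqueueNDRangeKernel(q, kern, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
        }
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), data.data(),
                        0, nullptr, nullptr);
    std::printf("sorted: %s\n", std::is_sorted(data.begin(), data.end()) ? "yes" : "no");
    return 0;
}
```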
Nov 3

Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation

TensorFlow has been the most widely adopted machine/deep learning (ML/DL) framework. However, little exists in the literature that provides a thorough understanding of the capabilities TensorFlow offers for the distributed training of large ML/DL models that need computation and communication at scale. The most commonly used distributed training approaches for TF can be categorized as follows: […]
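The mechanism underlying CUDA-Aware-MPI designs is that gradient buffers can stay on the GPU across the all-reduce. A minimal sketch, assuming a CUDA-aware MPI build (e.g., MVAPICH2-GDR or Open MPI with UCX); the buffer contents stand in for real gradients. Compile with mpicxx and link -lcudart.

```cpp
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                      // pretend gradient tensor
    std::vector<float> host(n, 1.0f);
    float* d_grad = nullptr;
    cudaMalloc(&d_grad, n * sizeof(float));
    cudaMemcpy(d_grad, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // CUDA-aware MPI detects the device pointer and uses GPUDirect or
    // pipelined copies internally; no explicit host staging buffer needed.
    MPI_Allreduce(MPI_IN_PLACE, d_grad, n, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    cudaMemcpy(host.data(), d_grad, sizeof(float), cudaMemcpyDeviceToHost);
    if (rank == 0) std::printf("grad[0] after all-reduce: %.1f\n", host[0]);
    cudaFree(d_grad);
    MPI_Finalize();
    return 0;
}
```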
Nov 3

Integration of CUDA Processing within the C++ library for parallelism and concurrency (HPX)

Experience shows that on today’s high-performance systems, the utilization of different acceleration cards in conjunction with a high utilization of all other parts of the system is difficult. Future architectures, like exascale clusters, are expected to aggravate this issue as the number of cores is expected to increase and memory hierarchies are expected to […]
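The integration problem largely reduces to turning stream completion into a composable future. The sketch below uses plain CUDA and std::future to show the mechanism; HPX wraps the same idea in its own future type, and the exact HPX API is deliberately not reproduced here. The cudaMemsetAsync stands in for real kernel work.

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <future>

// Returns a future that becomes ready once all work previously enqueued on
// the stream has finished, without blocking a host thread to wait for it.
std::future<void> stream_ready(cudaStream_t s) {
    auto* p = new std::promise<void>();
    std::future<void> f = p->get_future();
    cudaLaunchHostFunc(s, [](void* ud) {
        auto* pr = static_cast<std::promise<void>*>(ud);
        pr->set_value();                 // fires when prior stream work is done
        delete pr;
    }, p);
    return f;
}

int main() {
    cudaStream_t s; cudaStreamCreate(&s);
    float* d; cudaMalloc(&d, 1 << 20);
    cudaMemsetAsync(d, 0, 1 << 20, s);   // stand-in for real device work
    auto f = stream_ready(s);
    f.wait();                            // a runtime would attach continuations instead
    std::printf("stream work completed\n");
    cudaFree(d); cudaStreamDestroy(s);
    return 0;
}
```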
Nov 3

A Comparative Measurement Study of Deep Learning as a Service Framework

Big-data-powered Deep Learning (DL) and its applications have blossomed in recent years, fueled by three technological trends: large amounts of openly accessible digitized data, a growing number of DL software frameworks in open-source and commercial markets, and a selection of affordable parallel computing hardware devices. However, no single DL framework, to […]
Nov 3

OpenCL Performance Prediction using Architecture-Independent Features

OpenCL is an attractive programming model for heterogeneous high-performance computing systems, with wide support from hardware vendors and significant performance portability. To support efficient scheduling on HPC systems, it is necessary to perform accurate performance predictions for OpenCL workloads on varied compute devices, which is challenging due to the diverse computation, communication, and memory access characteristics which […]
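The general shape of such predictors can be sketched as follows (the feature names and weights below are invented placeholders, not the paper's model): device-independent counts gathered once per kernel feed a per-device model fitted offline.

```cpp
#include <array>
#include <cstdio>

// Per-work-item counts: fp ops, int ops, global loads, global stores, branches.
using Features = std::array<double, 5>;

// A linear model is the simplest instance; real predictors may use richer models.
double predict_ms(const Features& f, const Features& weights, double bias) {
    double t = bias;
    for (size_t i = 0; i < f.size(); ++i) t += f[i] * weights[i];
    return t < 0 ? 0 : t;
}

int main() {
    Features kernel = {2.0e6, 5.0e5, 1.2e6, 3.0e5, 1.0e5};  // hypothetical counts
    Features gpu_w  = {1e-7, 4e-8, 9e-7, 2e-7, 3e-7};       // hypothetical fitted weights
    std::printf("predicted: %.2f ms\n", predict_ms(kernel, gpu_w, 0.05));
    return 0;
}
```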
Oct 28

High Performance Computing with FPGAs and OpenCL

In this work we evaluate the potential of FPGAs for accelerating HPC workloads as a more power-efficient alternative to GPUs. Using High-Level Synthesis and a large set of optimization techniques, we show that FPGAs can achieve better performance than CPUs, and better power efficiency than both CPUs and GPUs for typical HPC workloads. Furthermore, we […]
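One representative optimization from the HLS toolbox mentioned here is breaking the loop-carried dependency in floating-point reductions. On FPGAs this is typically done with a shift register of partial sums so the pipeline accepts one input per cycle; the same idea, sketched below in plain C++ with independent accumulators, is illustrative rather than taken from the paper.

```cpp
#include <cstdio>
#include <vector>

double reduce(const std::vector<double>& x) {
    constexpr int kLanes = 8;                 // ~ FP adder pipeline depth (assumed)
    double acc[kLanes] = {};
    size_t i = 0;
    for (; i + kLanes <= x.size(); i += kLanes)
        for (int l = 0; l < kLanes; ++l)      // independent chains: II = 1 in HLS terms
            acc[l] += x[i + l];
    double s = 0;
    for (; i < x.size(); ++i) s += x[i];      // remainder
    for (double a : acc) s += a;              // combine partial sums at the end
    return s;
}

int main() {
    std::vector<double> x(1 << 20, 0.5);
    std::printf("%.1f\n", reduce(x));          // 524288.0
    return 0;
}
```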
Oct 28

Automatic Mapping for OpenCL-Programs on CPU/GPU Heterogeneous Platforms

Heterogeneous computing systems with multiple CPUs and GPUs are increasingly popular. Today, heterogeneous platforms are deployed in many settings, ranging from low-power mobile systems to high-performance computing systems. Such platforms are usually programmed using OpenCL, which allows the same program to be executed on different types of devices. Nevertheless, programming such platforms is a challenging […]
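The uniform device model that makes such mapping possible is visible in the host API itself: CPUs and GPUs are enumerated and queried the same way, so a runtime can place or split work across them. A minimal sketch with a placeholder selection policy (the routing heuristic is an assumption, not the paper's mapper):

```cpp
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint np = 0;
    clGetPlatformIDs(0, nullptr, &np);
    std::vector<cl_platform_id> plats(np);
    clGetPlatformIDs(np, plats.data(), nullptr);
    for (cl_platform_id p : plats) {
        cl_uint nd = 0;
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &nd) != CL_SUCCESS)
            continue;
        std::vector<cl_device_id> devs(nd);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, nd, devs.data(), nullptr);
        for (cl_device_id d : devs) {
            char name[256]; cl_device_type type; cl_uint cus;
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof name, name, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_TYPE, sizeof type, &type, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof cus, &cus, nullptr);
            // Placeholder policy: e.g., route compute-bound kernels to the GPU.
            std::printf("%-40s %s, %u CUs\n", name,
                        (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU/other", cus);
        }
    }
    return 0;
}
```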

* * *

HGPU group © 2010-2025 hgpu.org

All rights belong to the respective authors

Contact us:

contact@hgpu.org