
Posts

Apr, 14

High Performance Privacy Preserving AI

Artificial intelligence (AI) depends on data. In sensitive domains – such as healthcare, security, finance, and many more – there is therefore tension between unleashing the power of AI and maintaining the confidentiality and security of the relevant data. This book – intended for researchers in academia and R&D engineers in industry – explains how […]
Apr, 7

Using Intel oneAPI for Multi-hybrid Acceleration Programming with GPU and FPGA Coupling

Intel oneAPI is a programming framework that supports various accelerators, such as GPUs, FPGAs, and multi-core CPUs, with a focus on HPC applications. Users can apply code written in a single language, DPC++, across this heterogeneous programming environment. In practice, however, it is not easy to apply the same code to different accelerators, especially non-Intel devices […]
Apr, 7

94% on CIFAR-10 in 3.29 Seconds on a Single GPU

CIFAR-10 is among the most widely used datasets in machine learning, facilitating thousands of research projects per year. To accelerate research and reduce the cost of experiments, we introduce training methods for CIFAR-10 which reach 94% accuracy in 3.29 seconds, 95% in 10.4 seconds, and 96% in 46.3 seconds, when run on a single NVIDIA […]
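For context, a compact PyTorch training loop with common speed levers (mixed precision, channels_last memory layout, a one-cycle schedule) might look like the sketch below; the model, epoch budget, and hyperparameters are placeholders and not the recipe from the paper.

```python
# A generic PyTorch sketch (NOT the paper's recipe): a compact CIFAR-10 loop
# using common speed levers such as mixed precision, channels_last layout, and
# a one-cycle schedule. Model, epochs, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

device = "cuda"  # assumes a CUDA-capable GPU
transform = T.Compose([T.ToTensor(),
                       T.Normalize((0.49, 0.48, 0.45), (0.25, 0.24, 0.26))])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                         transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=512, shuffle=True,
                                     num_workers=4, pin_memory=True)

model = torchvision.models.resnet18(num_classes=10)
model = model.to(device, memory_format=torch.channels_last)
opt = torch.optim.SGD(model.parameters(), lr=0.2, momentum=0.9,
                      weight_decay=5e-4, nesterov=True)
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=0.2,
                                            total_steps=10 * len(loader))
scaler = torch.cuda.amp.GradScaler()
loss_fn = nn.CrossEntropyLoss(label_smoothing=0.1)

for epoch in range(10):                       # short budget; not the paper's schedule
    for x, y in loader:
        x = x.to(device, memory_format=torch.channels_last, non_blocking=True)
        y = y.to(device, non_blocking=True)
        with torch.cuda.amp.autocast():       # half-precision compute for throughput
            loss = loss_fn(model(x), y)
        opt.zero_grad(set_to_none=True)
        scaler.scale(loss).backward()
        scaler.step(opt)
        scaler.update()
        sched.step()
```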
Apr, 7

gpu_tracker: Python package for tracking and profiling GPU utilization in both desktop and high-performance computing environments

Determining the maximum usage of random-access memory (RAM) both on the motherboard and on a graphics processing unit (GPU) over the lifetime of a computing task can be extremely useful for troubleshooting points of failure as well as optimizing memory utilization, especially within a high-performance computing (HPC) setting. While there are tools for tracking compute […]
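The underlying idea can be sketched as polling process RAM and GPU memory on a background thread and keeping the maxima. The class below is an illustration of that idea, not the gpu_tracker API; the names and polling strategy are assumptions.

```python
# Illustrative only: this is NOT the gpu_tracker API, just a minimal sketch of
# the underlying idea, i.e. sampling process RAM and GPU memory on a background
# thread and keeping the maxima observed over a task's lifetime.
import subprocess
import threading
import time

import psutil


class PeakMemorySampler:
    def __init__(self, interval_s: float = 0.5):
        self.interval_s = interval_s
        self.peak_ram_bytes = 0
        self.peak_gpu_mib = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _gpu_used_mib(self) -> int:
        # Ask nvidia-smi for per-GPU used memory in MiB (one value per line).
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"], text=True)
        return max((int(v) for v in out.split() if v.isdigit()), default=0)

    def _run(self):
        proc = psutil.Process()
        while not self._stop.is_set():
            self.peak_ram_bytes = max(self.peak_ram_bytes,
                                      proc.memory_info().rss)
            try:
                self.peak_gpu_mib = max(self.peak_gpu_mib, self._gpu_used_mib())
            except (OSError, subprocess.CalledProcessError):
                pass                          # no NVIDIA GPU visible; keep tracking RAM
            time.sleep(self.interval_s)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()


# Usage: wrap the task of interest and read the peaks afterwards.
with PeakMemorySampler() as sampler:
    _ = [i * i for i in range(10_000_000)]    # placeholder workload
print(sampler.peak_ram_bytes, sampler.peak_gpu_mib)
```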
Apr, 7

Speed, power and cost implications for GPU acceleration of Computational Fluid Dynamics on HPC systems

Computational Fluid Dynamics (CFD) is the simulation of fluid flow undertaken with the use of computational hardware. The underlying equations are computationally challenging to solve and necessitate high performance computing (HPC) to resolve in a practical timeframe when a reasonable level of fidelity is required. The simulations are memory intensive, having previously been limited to […]
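As a toy illustration of why such solvers are memory intensive, a single Jacobi-style stencil sweep streams the entire grid through memory while performing only a few floating-point operations per point; the generic 2D diffusion step below is not a kernel from the paper.

```python
# A toy NumPy sketch, not a kernel from the paper: one Jacobi-style stencil
# sweep reads and writes the whole grid while doing only a few flops per point,
# which is why structured-grid CFD kernels tend to be bandwidth-bound.
import numpy as np

n = 4096
u = np.random.default_rng(0).random((n, n))

def jacobi_step(u):
    """Average each interior point with its four neighbours (5-point stencil)."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

for _ in range(10):
    u = jacobi_step(u)
print(u.mean())
```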
Apr, 7

Seer: Predictive Runtime Kernel Selection for Irregular Problems

Modern GPUs are designed for regular problems and suffer from load imbalance when processing irregular data. Prior to our work, a domain expert had to select the best kernel for mapping fine-grained irregular parallelism to a GPU. We instead propose Seer, an abstraction for producing a simple, reproducible, and understandable decision tree selector model which performs runtime […]
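A decision-tree selector of this kind can be sketched in a few lines: train a small, inspectable tree offline on measured best-kernel labels, then query it at run time from cheap input statistics. The features and kernel names below are hypothetical, not Seer's.

```python
# A hedged sketch of the general idea, not Seer itself: train a small,
# inspectable decision tree that maps cheap input statistics to the kernel
# variant measured to be fastest offline. Features and labels are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
rows = rng.integers(1_000, 1_000_000, size=500)
mean_nnz = rng.uniform(1, 64, size=500)
max_nnz = mean_nnz * rng.uniform(1, 32, size=500)
X = np.column_stack([rows, mean_nnz, max_nnz])
# Placeholder labels: pretend highly imbalanced rows favour coarser mappings.
y = np.where(max_nnz / mean_nnz > 8, 2, np.where(mean_nnz > 32, 1, 0))

selector = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(selector, feature_names=["rows", "mean_nnz", "max_nnz"]))

# At run time the tree picks a kernel from the same statistics in microseconds.
kernels = ["thread_per_row", "warp_per_row", "block_per_row"]
stats = np.array([[500_000, 4.0, 120.0]])     # hypothetical input statistics
print("selected:", kernels[int(selector.predict(stats)[0])])
```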
Mar, 24

LOOPer: A Learned Automatic Code Optimizer For Polyhedral Compilers

While polyhedral compilers have shown success in implementing advanced code transformations, they still have challenges in selecting the most profitable transformations that lead to the best speedups. This has motivated the use of machine learning to build cost models to guide the search for polyhedral optimizations. State-of-the-art polyhedral compilers have demonstrated a viable proof-of-concept of […]
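In general terms, such a learned cost model predicts the profit of a candidate transformation from its features, and the search keeps the candidate with the best prediction. The sketch below illustrates that loop with placeholder features and synthetic measurements; it is not LOOPer itself.

```python
# A generic sketch (not LOOPer itself) of a learned cost model guiding search:
# a regressor predicts speedup from candidate-transformation features, and the
# search keeps the candidate with the best prediction. All data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Each row encodes a (loop nest, transformation) pair as numeric features,
# e.g. tile sizes, unroll factors, interchange flags; values are placeholders.
X_train = rng.random((2000, 6))
y_train = X_train @ rng.random(6) + 0.1 * rng.standard_normal(2000)  # "measured" speedups

cost_model = GradientBoostingRegressor().fit(X_train, y_train)

def best_candidate(candidates: np.ndarray) -> int:
    """Return the index of the candidate with the highest predicted speedup."""
    return int(np.argmax(cost_model.predict(candidates)))

candidates = rng.random((64, 6))    # feature vectors of legal transformations
print("apply candidate", best_candidate(candidates))
```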
Mar, 24

Full-Scale File System Acceleration on GPU

Modern HPC and AI computing solutions regularly use GPUs as their main source of computational power. This creates a significant imbalance for storage operations in GPU applications, as every such storage operation has to be signalled to and handled by the CPU. With GPU4FS, we propose a radical solution to this imbalance: move the file […]
Mar, 24

Retargeting and Respecializing GPU Workloads for Performance Portability

In order to come close to peak performance, accelerators like GPUs require significant architecture-specific tuning that accounts for the availability of shared memory, parallelism, tensor cores, etc. Unfortunately, the pursuit of higher performance and lower costs has led to a significant diversification of architecture designs, even from the same vendor. This creates the need for performance […]
Mar, 24

Performance Portable Monte Carlo Particle Transport on Intel, NVIDIA, and AMD GPUs

OpenMC is an open source Monte Carlo neutral particle transport application that has recently been ported to GPU using the OpenMP target offloading model. We examine the performance of OpenMC at scale on the Frontier, Polaris, and Aurora supercomputers, demonstrating that performance portability has been achieved by OpenMC across all three major GPU vendors (AMD, […]
Mar, 24

Parallel Gaussian process with kernel approximation in CUDA

This paper introduces a parallel implementation in CUDA/C++ of the Gaussian process with a decomposed kernel. This recent formulation, introduced by Joukov and Kulić (2022), is characterized by an approximated, but much smaller, matrix to be inverted compared to the plain Gaussian process. However, it exhibits a limitation when dealing with higher-dimensional samples which […]
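The key idea can be sketched with any low-rank kernel approximation: with K ≈ Z Zᵀ, only an m × m system has to be solved instead of the full n × n one. The NumPy example below uses random Fourier features as a stand-in illustration; it is not necessarily the decomposition of Joukov and Kulić.

```python
# A minimal sketch of the key idea under a stand-in approximation: with a
# low-rank feature map, K ~= Z @ Z.T, only an m x m system is solved instead of
# the full n x n one. Random Fourier features for an RBF kernel are used here
# for illustration; this is not necessarily the decomposition of the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, noise, lengthscale = 5000, 128, 0.1, 0.5

X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(2 * X[:, 0]) + noise * rng.standard_normal(n)

W = rng.standard_normal((1, m)) / lengthscale
b = rng.uniform(0, 2 * np.pi, size=m)

def z(points):
    """Random Fourier feature map: k(x, x') ~= z(x) @ z(x')."""
    return np.sqrt(2.0 / m) * np.cos(points @ W + b)

Z = z(X)                                       # n x m feature matrix
A = Z.T @ Z + noise**2 * np.eye(m)             # only m x m to factorize
alpha = np.linalg.solve(A, Z.T @ y)

X_test = np.linspace(-3, 3, 200)[:, None]
mean = z(X_test) @ alpha                       # approximate GP posterior mean
print(mean[:5])
```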
Mar, 18

Fast Truncated SVD of Sparse and Dense Matrices on Graphics Processors

We investigate the solution of low-rank matrix approximation problems using the truncated SVD. For this purpose, we develop and optimize GPU implementations for the randomized SVD and a blocked variant of the Lanczos approach. Our work takes advantage of the fact that the two methods are composed of very similar linear algebra building blocks, which […]
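For reference, the textbook randomized SVD consists of a randomized range finder followed by a small exact SVD. The plain NumPy sketch below shows those shared building blocks (GEMM, QR, small SVD); the paper's GPU implementations are far more optimized and also cover a blocked Lanczos variant not shown here.

```python
# A plain NumPy sketch of the textbook randomized SVD (randomized range finder
# plus a small exact SVD). It only illustrates the shared building blocks the
# abstract alludes to (GEMM, QR, small SVD), not the paper's optimized GPU code.
import numpy as np

def randomized_svd(A, k, oversample=10, power_iters=2, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix and orthonormalize.
    Q = np.linalg.qr(A @ rng.standard_normal((n, k + oversample)))[0]
    for _ in range(power_iters):              # power iterations sharpen the basis
        Q = np.linalg.qr(A @ (A.T @ Q))[0]
    # Project onto the small subspace and take an exact SVD there.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 30)) @ rng.standard_normal((30, 500))  # rank-30 test matrix
U, s, Vt = randomized_svd(A, k=30)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # tiny relative error
```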

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
