Posts
Nov 12
Solving MaxSAT with Matrix Multiplication
We propose an incomplete algorithm for Maximum Satisfiability (MaxSAT) specifically designed to run on neural network accelerators such as GPUs and TPUs. Given a MaxSAT problem instance in conjunctive normal form, our procedure constructs a Restricted Boltzmann Machine (RBM) with an equilibrium distribution wherein the probability of a Boolean assignment is exponential in the number […]
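As a rough illustration of the sampling idea (not the paper's RBM-plus-matrix-multiplication construction), the sketch below draws assignments from a distribution that, by assumption, weights each assignment exponentially by its number of satisfied clauses, using a simple Metropolis flip rule. The names `Clause`, `numSatisfied`, and `sampleMaxSat` are invented here.

```cpp
// Metropolis chain whose stationary distribution weights an assignment x
// by exp(beta * satisfied(x)), so long runs concentrate near MaxSAT optima.
// Illustrative only; the paper instead builds an RBM with this equilibrium.
#include <cmath>
#include <cstdlib>
#include <random>
#include <vector>

// A clause is a list of literals: +v is variable v, -v its negation (1-indexed).
using Clause = std::vector<int>;

int numSatisfied(const std::vector<Clause>& cnf, const std::vector<bool>& x) {
    int count = 0;
    for (const Clause& c : cnf)
        for (int lit : c)
            if ((lit > 0) == x[std::abs(lit) - 1]) { ++count; break; }
    return count;
}

std::vector<bool> sampleMaxSat(const std::vector<Clause>& cnf, int nVars,
                               double beta, int steps, std::mt19937& rng) {
    std::uniform_int_distribution<int> pickVar(0, nVars - 1);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::vector<bool> x(nVars, false), best = x;
    int sat = numSatisfied(cnf, x), bestSat = sat;
    for (int t = 0; t < steps; ++t) {
        int v = pickVar(rng);
        x[v] = !x[v];                              // propose a single-bit flip
        int satNew = numSatisfied(cnf, x);
        // Accept with probability min(1, exp(beta * (satNew - sat))).
        if (satNew >= sat || unif(rng) < std::exp(beta * (satNew - sat))) {
            sat = satNew;
            if (sat > bestSat) { bestSat = sat; best = x; }
        } else {
            x[v] = !x[v];                          // reject: undo the flip
        }
    }
    return best;
}
```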
Nov 12
An Evaluation of Directive-Based Parallelization on the GPU Using a Parboil Benchmark
Heterogeneous architectures consisting of both central processing units and graphics processing units are common in contemporary computer systems. For that reason, several programming models have been developed to exploit available parallelism, such as low-level CUDA and OpenCL, and directive-based OpenMP and OpenACC. In this paper we explore and evaluate the applicability of OpenACC, which is […]
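For readers unfamiliar with the directive-based style the paper evaluates, here is a minimal OpenACC sketch: a single pragma asks the compiler to offload a loop to the GPU. The saxpy kernel is an illustration of the programming model, not one of the Parboil benchmarks studied.

```cpp
// Minimal OpenACC example: one directive offloads the loop to the GPU and
// manages the data movement declared in the copyin/copy clauses.
#include <vector>

void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    const float* xp = x.data();
    float* yp = y.data();
    const int n = static_cast<int>(x.size());
    #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
    for (int i = 0; i < n; ++i)
        yp[i] = a * xp[i] + yp[i];
}
```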
Nov 12
GDlog: A GPU-Accelerated Deductive Engine
Modern deductive database engines (e.g., LogicBlox and Soufflé) enable their users to write declarative queries which compute recursive deductions over extensional data, leaving their high-performance operationalization (query planning, semi-naïve evaluation, and parallelization) to the engine. Such engines form the backbone of modern high-throughput applications in static analysis, security auditing, social-media mining, and business analytics. State-of-the-art […]
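As a reminder of the semi-naïve evaluation strategy the abstract mentions, the sketch below computes a transitive closure on the CPU: each round joins only the newly derived facts (the delta) against the base relation, rather than re-deriving everything. This shows the fixpoint loop itself, not GDlog's GPU data structures or parallelization.

```cpp
// Semi-naive evaluation of transitive closure:
//   path(x,y) :- edge(x,y).   path(x,z) :- path(x,y), edge(y,z).
#include <set>
#include <utility>
#include <vector>

using Fact = std::pair<int, int>;

std::set<Fact> transitiveClosure(const std::vector<Fact>& edges) {
    std::set<Fact> path(edges.begin(), edges.end());
    std::set<Fact> delta = path;                 // round 0: base facts are new
    while (!delta.empty()) {
        std::set<Fact> next;
        for (const Fact& p : delta)              // join only the delta with edge
            for (const Fact& e : edges)
                if (p.second == e.first) {
                    Fact f{p.first, e.second};
                    if (!path.count(f)) next.insert(f);
                }
        path.insert(next.begin(), next.end());
        delta = std::move(next);                 // only new facts feed the next round
    }
    return path;
}
```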
Nov 12
An approach to performance portability through generic programming
The expanding hardware diversity in high performance computing adds enormous complexity to scientific software development. Developers who aim to write maintainable software have two options: 1) to use a so-called data locality abstraction that handles portability internally, thereby trading performance for productivity. Such abstractions usually come in the form of libraries, domain-specific languages, and […]
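To make the generic-programming alternative concrete, the sketch below writes one kernel against a templated execution policy, so backends can be swapped without touching the kernel. The policy names are invented for illustration and are not from the paper.

```cpp
// Performance portability via generic programming: the kernel is written
// once; each backend supplies its own parallel_for. SerialPolicy and
// OpenMPPolicy are hypothetical names, not a real library's API.
#include <cstddef>
#include <vector>

struct SerialPolicy {
    template <class F>
    static void parallel_for(std::size_t n, F&& f) {
        for (std::size_t i = 0; i < n; ++i) f(i);
    }
};

struct OpenMPPolicy {
    template <class F>
    static void parallel_for(std::size_t n, F&& f) {
        #pragma omp parallel for
        for (long long i = 0; i < static_cast<long long>(n); ++i) f(i);
    }
};

// Generic kernel: swapping Policy retargets it without code changes.
template <class Policy>
void axpy(double a, const std::vector<double>& x, std::vector<double>& y) {
    Policy::parallel_for(x.size(), [&](std::size_t i) { y[i] += a * x[i]; });
}
```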
Nov 5
Performance Optimization of Deep Learning Sparse Matrix Kernels on Intel Max Series GPU
In this paper, we focus on three sparse matrix operations that are relevant for machine learning applications, namely, the sparse-dense matrix multiplication (SPMM), the sampled dense-dense matrix multiplication (SDDMM), and the composition of the SDDMM with SPMM, also termed FusedMM. We develop optimized implementations for SPMM, SDDMM, and FusedMM operations utilizing Intel oneAPI’s Explicit […]
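For reference, the sequential semantics of the three kernels might look like the sketch below, with the sparse matrix in CSR format. The dimension conventions (S is m×k, dense A is m×d, B is k×d) are assumptions, and the paper's optimized Intel GPU implementations are not reproduced here.

```cpp
// CPU reference semantics for SPMM, SDDMM, and their composition FusedMM.
#include <cstddef>
#include <vector>

struct Csr {
    int rows = 0;
    std::vector<int> rowPtr, col;   // rowPtr holds rows + 1 offsets
    std::vector<float> val;
};

// SPMM: C = S * B, where C is rows x d.
void spmm(const Csr& S, const std::vector<float>& B, int d, std::vector<float>& C) {
    C.assign(static_cast<std::size_t>(S.rows) * d, 0.0f);
    for (int i = 0; i < S.rows; ++i)
        for (int p = S.rowPtr[i]; p < S.rowPtr[i + 1]; ++p)
            for (int j = 0; j < d; ++j)
                C[i * d + j] += S.val[p] * B[S.col[p] * d + j];
}

// SDDMM: O = S .* (A * B^T), evaluated only at S's nonzero positions.
void sddmm(const Csr& S, const std::vector<float>& A, const std::vector<float>& B,
           int d, Csr& O) {
    O = S;
    for (int i = 0; i < S.rows; ++i)
        for (int p = S.rowPtr[i]; p < S.rowPtr[i + 1]; ++p) {
            float dot = 0.0f;
            for (int j = 0; j < d; ++j)
                dot += A[i * d + j] * B[S.col[p] * d + j];
            O.val[p] = S.val[p] * dot;
        }
}

// FusedMM: the SDDMM result feeds directly into SPMM.
void fusedmm(const Csr& S, const std::vector<float>& A, const std::vector<float>& B,
             int d, std::vector<float>& C) {
    Csr O;
    sddmm(S, A, B, d, O);
    spmm(O, B, d, C);
}
```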
Nov 5
Applying the Midas Touch of Reproducibility to High-Performance Computing
With the serial performance of CPUs improving exponentially through the 1980s and 1990s and then plateauing by the mid-2000s, the high-performance computing community has seen parallel computing become ubiquitous, which, in turn, has led to a proliferation of parallel programming models. This diversity in hardware platform and programming model has forced programmers to port their […]
Nov 5
Redco: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs
The recent progress of AI can be largely attributed to large language models (LLMs). However, their escalating memory requirements introduce challenges for machine learning (ML) researchers and engineers. Addressing this requires developers to partition a large model to distribute it across multiple GPUs or TPUs. This necessitates considerable coding and intricate configuration efforts with existing […]
Nov 5
A Comparison of the Performance of the Molecular Dynamics Simulation Package GROMACS Implemented in the SYCL and CUDA Programming Models
For many years, systems built around Nvidia GPU architectures have dominated the heterogeneous supercomputer landscape. However, GPUs manufactured by Intel and AMD have recently cut into this market and can now be found in some of the world’s fastest supercomputers. The June 2023 edition of the TOP500 list of supercomputers ranks the Frontier supercomputer at […]
Nov 5
OpenRAND: A Performance Portable, Reproducible Random Number Generation Library for Parallel Computations
We introduce OpenRAND, a C++17 library aimed at facilitating reproducible scientific research through the generation of statistically robust and yet replicable random numbers. OpenRAND accommodates single and multi-threaded applications on CPUs and GPUs and offers a simplified, user-friendly API that complies with the C++ standard’s random number engine interface. It is portable: it functions seamlessly […]
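Complying with the C++ standard's random number engine interface means exposing `result_type`, `min()`, `max()`, and `operator()`, so the generator plugs into `<random>` distributions. The counter-based sketch below illustrates that interface with per-stream seeding for replayability; it is only an illustration, not OpenRAND's actual API or its statistically tested mixing.

```cpp
// Hypothetical counter-based generator satisfying the standard
// UniformRandomBitGenerator requirements (not OpenRAND's real API).
#include <cstdint>
#include <limits>
#include <random>

class CounterRng {
public:
    using result_type = std::uint64_t;
    // Distinct (seed, stream) pairs give independent, replayable sequences,
    // e.g. one stream per thread or per particle.
    CounterRng(std::uint64_t seed, std::uint64_t stream)
        : state_(seed), stream_(stream), counter_(0) {}

    static constexpr result_type min() { return 0; }
    static constexpr result_type max() { return std::numeric_limits<result_type>::max(); }

    result_type operator()() {
        // SplitMix64-style finalizer over (state, stream, counter); illustrative.
        std::uint64_t z = state_ + 0x9E3779B97F4A7C15ULL * (++counter_) + stream_;
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
        return z ^ (z >> 31);
    }

private:
    std::uint64_t state_, stream_, counter_;
};

// Usage: works with standard distributions like any engine.
double uniformSample(std::uint64_t seed, std::uint64_t threadId) {
    CounterRng rng(seed, threadId);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    return dist(rng);
}
```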
Oct 29
Performance Tuning for GPU-Embedded Systems: Machine-Learning-based and Analytical Model-driven Tuning Methodologies
GPU-embedded systems have gained popularity across various domains due to their power efficiency. However, to meet the demands of real-time or long-running applications on these systems, it is crucial to tune them for high performance. This paper addresses the issue by developing and comparing two tuning methodologies on […]
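As a baseline for what such methodologies improve on, a brute-force empirical autotuner simply times the kernel at each candidate configuration and keeps the fastest, as in the sketch below; machine-learning-based and analytical model-driven tuning aim to avoid or shorten this exhaustive search. The function names here are invented.

```cpp
// Brute-force empirical autotuning: try every candidate block size,
// measure wall-clock time, keep the fastest. Baseline only.
#include <chrono>
#include <functional>
#include <limits>
#include <vector>

int autotuneBlockSize(const std::vector<int>& candidates,
                      const std::function<void(int)>& runKernel,
                      int repetitions = 5) {
    int best = candidates.front();
    double bestMs = std::numeric_limits<double>::max();
    for (int bs : candidates) {
        runKernel(bs);                         // warm-up (caches, clocks)
        auto t0 = std::chrono::steady_clock::now();
        for (int r = 0; r < repetitions; ++r) runKernel(bs);
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count()
                    / repetitions;
        if (ms < bestMs) { bestMs = ms; best = bs; }
    }
    return best;
}
```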
Oct 29
Performance portability evaluation of blocked stencil computations on GPUs
In this new era where multiple GPU vendors are leading the supercomputing landscape, and multiple programming models are available to users, the drive to achieve performance portability across platforms faces new challenges. Consider stencil algorithms, where architecture-specific solutions are required to optimize for the parallelism hierarchy and memory hierarchy of emerging systems. In this work, […]
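For context, blocking restructures a stencil sweep into tiles sized to the memory hierarchy. The CPU sketch below shows the idea for a 2D 5-point Jacobi stencil (on a GPU, the analogous tile would typically live in shared or local memory); it is an illustration, not one of the paper's evaluated implementations.

```cpp
// Spatial blocking for a 2D 5-point Jacobi stencil: sweep tile by tile so
// each tile's working set stays resident in fast memory.
#include <algorithm>
#include <vector>

void jacobiBlocked(const std::vector<float>& in, std::vector<float>& out,
                   int n, int tile) {
    for (int bi = 1; bi < n - 1; bi += tile)
        for (int bj = 1; bj < n - 1; bj += tile) {
            const int iEnd = std::min(bi + tile, n - 1);
            const int jEnd = std::min(bj + tile, n - 1);
            for (int i = bi; i < iEnd; ++i)
                for (int j = bj; j < jEnd; ++j)
                    out[i * n + j] = 0.25f * (in[(i - 1) * n + j] + in[(i + 1) * n + j] +
                                              in[i * n + j - 1] + in[i * n + j + 1]);
        }
}
```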
Oct 29
Dynamic autotuning of SpMV kernel in CUSP library
Sparse matrix-vector product (SpMV) is a central operation in many iterative methods for solving linear systems and as such is an attractive candidate for acceleration on the GPU. However, the performance of the SpMV kernel can vary depending both on the target architecture and on the sparsity pattern of the matrix. Thus, to […]
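One reason no single SpMV kernel wins everywhere: ELL-style kernels thrive on uniform row lengths, while CSR tolerates irregular rows. The sketch below shows a simple static heuristic over row-length statistics; it is an invented illustration of why per-matrix selection matters, not the dynamic runtime autotuning mechanism the paper adds to CUSP.

```cpp
// Static format-selection heuristic for SpMV (illustrative only).
#include <algorithm>
#include <vector>

enum class SpmvKernel { Csr, Ell };

SpmvKernel chooseKernel(const std::vector<int>& rowPtr) {
    const int rows = static_cast<int>(rowPtr.size()) - 1;
    int maxLen = 0;
    long long total = 0;
    for (int i = 0; i < rows; ++i) {
        const int len = rowPtr[i + 1] - rowPtr[i];
        maxLen = std::max(maxLen, len);
        total += len;
    }
    const double avgLen = rows > 0 ? static_cast<double>(total) / rows : 0.0;
    // ELL pads every row to maxLen; prefer it only when padding waste is small.
    return (avgLen > 0 && maxLen <= 1.5 * avgLen) ? SpmvKernel::Ell : SpmvKernel::Csr;
}
```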