
Posts

Oct, 22

Performance/power assessment of CNN packages on embedded automotive platforms

The rise of power-efficient embedded computers based on highly-parallel accelerators opens a number of opportunities and challenges for researchers and engineers, and has paved the way to the era of edge computing. At the same time, advances in embedded AI for object detection and categorization, such as YOLO, GoogLeNet and AlexNet, have reached an unprecedented level of […]
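To make the kind of measurement such an assessment relies on concrete, here is a minimal latency-timing sketch in PyTorch/torchvision using an untrained AlexNet and random input; it is purely illustrative and not the paper's benchmarking methodology (which also covers power).

```python
# Minimal CNN inference latency sketch (illustrative only, not the paper's setup).
# Uses an untrained torchvision AlexNet, since only timing matters here.
import time
import torch
import torchvision.models as models

model = models.alexnet()                        # randomly initialized weights
model.eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

x = torch.randn(1, 3, 224, 224, device=device)  # one 224x224 RGB frame

with torch.no_grad():
    for _ in range(10):                         # warm-up so lazy init does not skew timing
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    n = 100
    t0 = time.perf_counter()
    for _ in range(n):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    t1 = time.perf_counter()

print(f"mean latency per frame: {(t1 - t0) / n * 1e3:.2f} ms on {device}")
```

On an embedded board, power would be sampled alongside such a timing loop, e.g. from the platform's power rails or an external meter.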
Oct, 22

Performance portability analysis of SYCL with a classical CG on CPU, GPU, and FPGA

In this work, the capability of SYCL™ to execute code on different hardware devices is investigated. This motivates conducting a performance portability analysis. The architectures investigated are the CPU, GPU, and FPGA. As a benchmark algorithm, the conjugate gradient (CG) algorithm is used, as it is widely applicable to many fields and is more complex than simple […]
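For readers unfamiliar with the benchmark, the following is a minimal NumPy sketch of the classical conjugate gradient iteration for a symmetric positive definite system; it illustrates the algorithm only and is not the paper's SYCL code.

```python
# Classical conjugate gradient (CG) for a symmetric positive definite system Ax = b.
# Plain NumPy sketch of the benchmark algorithm, not the SYCL implementation.
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test problem: a 1D Laplacian.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```

The operations inside the loop (matrix-vector product, dot products, vector updates) are exactly the kernels a SYCL version would offload to CPU, GPU, or FPGA.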
Oct, 22

Predicting the Execution Time of a kernel on a specific GPU using PTX code

During the last couple of decades, there has been an exponential growth in the amount of time and energy required to run workloads on high-performance computing systems, which nowadays rely heavily upon GPUs. In order to reduce the resources required by these systems, one clear approach is to avoid inefficient applications by using prediction models […]
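As a rough illustration of the general idea (and explicitly not the paper's feature set or model), one common approach is to count PTX instruction mnemonics and feed the counts to a regression model; the PTX fragment below is made up for the example.

```python
# Hypothetical sketch: turn a PTX listing into simple opcode-count features
# that a regression model could map to execution time. The feature set and
# the tiny PTX fragment are illustrative, not the paper's methodology.
from collections import Counter

def ptx_opcode_counts(ptx_text):
    counts = Counter()
    for line in ptx_text.splitlines():
        line = line.strip()
        if not line or line.startswith((".", "//", "{", "}")) or line.endswith(":"):
            continue                                 # skip directives, comments, labels, braces
        if line.startswith("@") and " " in line:     # drop guard predicates, e.g. "@%p1 bra ..."
            line = line.split(None, 1)[1]
        opcode = line.split()[0].rstrip(";").split(".")[0]   # "ld.global.f32" -> "ld"
        counts[opcode] += 1
    return counts

example_ptx = """
.visible .entry saxpy(...)
{
    ld.param.u64 %rd1, [saxpy_param_0];
    ld.global.f32 %f1, [%rd4];
    fma.rn.f32 %f3, %f1, %f2, %f4;
    st.global.f32 [%rd5], %f3;
    ret;
}
"""
print(dict(ptx_opcode_counts(example_ptx)))   # {'ld': 2, 'fma': 1, 'st': 1, 'ret': 1}
```

Such counts, possibly combined with launch configuration and hardware parameters, could then be fed to an ordinary regression model.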
Oct, 22

SYCL in the Edge: Performance Evaluation for Heterogeneous Acceleration

Edge computing is essential to handle increasing data volumes and processing capacities. It provides real-time, secure data processing near data sources, like smart devices, reducing cloud computing energy use and saving network bandwidth. Specialized accelerators, like GPUs and FPGAs, are vital for low-latency edge computing, but the requirement to customize code for different hardware and […]
Oct, 22

LeXInt: GPU-accelerated Exponential Integrators package

We present an open-source CUDA-based package that consists of a compilation of exponential integrators where the action of the matrix exponential or the φ_l functions on a vector is approximated using the method of polynomial interpolation at Leja points. Using a couple of test examples on an NVIDIA A100 GPU, we show that one can […]
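The core construction, sketched below in plain NumPy/SciPy rather than CUDA, interpolates exp at Leja points on an interval enclosing the spectrum and evaluates the resulting Newton polynomial with matrix-vector products; the same recipe applies to the φ_l functions by interpolating φ_l instead of exp. The test matrix and spectral interval are illustrative assumptions, not taken from the paper.

```python
# Sketch of the (real) Leja point method behind exponential integrators:
# approximate exp(A) v by a Newton interpolation polynomial of exp sampled
# at Leja points on an interval enclosing the spectrum of A.
# Plain NumPy/SciPy illustration, not the LeXInt CUDA implementation.
import numpy as np
from scipy.linalg import expm

def leja_points(a, b, m, n_candidates=2000):
    """Greedy Leja sequence of m points on [a, b]."""
    cand = np.linspace(a, b, n_candidates)
    pts = [cand[np.argmax(np.abs(cand))]]      # start at the largest-magnitude endpoint
    prod = np.abs(cand - pts[0])
    for _ in range(m - 1):
        nxt = cand[np.argmax(prod)]
        pts.append(nxt)
        prod *= np.abs(cand - nxt)
    return np.array(pts)

def exp_leja(A, v, a, b, m=25):
    """Newton-form evaluation of p(A) v, where p interpolates exp at Leja points on [a, b]."""
    x = leja_points(a, b, m)
    # Divided differences of exp via Opitz' formula: first column of expm of a
    # bidiagonal matrix with the Leja points on the diagonal and ones below it.
    Z = np.diag(x) + np.diag(np.ones(m - 1), k=-1)
    d = expm(Z)[:, 0]
    y = d[0] * v
    w = v.copy()
    for j in range(1, m):
        w = A @ w - x[j - 1] * w               # w <- (A - x_{j-1} I) w
        y = y + d[j] * w
    return y

# Test on a scaled 1D Laplacian whose spectrum lies inside [-2, 0].
n = 64
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A = 0.5 * L
v = np.random.default_rng(0).standard_normal(n)
approx = exp_leja(A, v, a=-2.0, b=0.0)
exact = expm(A) @ v
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```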
Oct, 15

OpenMM 8: Molecular Dynamics Simulation with Machine Learning Potentials

Machine learning plays an important and growing role in molecular simulation. The newest version of the OpenMM molecular dynamics toolkit introduces new features to support the use of machine learning potentials. Arbitrary PyTorch models can be added to a simulation and used to compute forces and energy. A higher-level interface allows users to easily model […]
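A minimal sketch of that workflow, assuming the openmm-torch plugin (Python module `openmmtorch`, class `TorchForce`) and a toy TorchScript potential; exact signatures may differ between versions, and the force model below is purely illustrative.

```python
# Attach a PyTorch model to an OpenMM System as a force, via the openmm-torch
# plugin. The harmonic toy potential is illustrative, not a real ML potential.
import torch
import openmm
from openmmtorch import TorchForce

class ForceModule(torch.nn.Module):
    def forward(self, positions):
        # positions: (N, 3) tensor in nm; return the potential energy in kJ/mol.
        return 100.0 * torch.sum(positions ** 2)

module = torch.jit.script(ForceModule())      # serialize the model as TorchScript
module.save("model.pt")

system = openmm.System()
for _ in range(10):
    system.addParticle(1.0)                   # ten particles of unit mass
system.addForce(TorchForce("model.pt"))       # forces come from autograd of the energy
```

From here the system is paired with an integrator and a Simulation/Context as usual; each step the plugin evaluates the model's energy and differentiates it to obtain forces.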
Oct, 15

Reverse-Mode AD of Reduce-by-Index and Scan in Futhark

We present and evaluate the Futhark implementation of reverse-mode automatic differentiation (AD) for the basic blocks of parallel programming: reduce, prefix sum (scan), and reduce by index. We first present derivations of general-case algorithms and then discuss several specializations that result in efficient differentiation of most cases of practical interest. We report an experiment that […]
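As a concrete special case (not the general derivation given in the paper), the reverse-mode rule for scan with addition is simple: the adjoint of an inclusive prefix sum is a suffix sum of the output adjoint. A NumPy sketch:

```python
# Reverse-mode AD of the special case scan(+): if y = cumsum(x), then the
# vector-Jacobian product is xbar = reverse(cumsum(reverse(ybar))), i.e. a
# suffix sum of the output adjoint. Only this special case is shown; the
# paper's general-operator derivations are not reproduced here.
import numpy as np

def scan_add(x):
    return np.cumsum(x)

def scan_add_vjp(ybar):
    # dy_i/dx_j = 1 for j <= i, hence xbar_j = sum_{i >= j} ybar_i.
    return np.cumsum(ybar[::-1])[::-1]

rng = np.random.default_rng(0)
x, ybar = rng.standard_normal(6), rng.standard_normal(6)

# Check against the explicit Jacobian of cumsum (lower-triangular ones).
J = np.tril(np.ones((6, 6)))
assert np.allclose(scan_add_vjp(ybar), J.T @ ybar)
print("y    =", scan_add(x))
print("xbar =", scan_add_vjp(ybar))
```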
Oct, 15

Strega: An HTTP Server for FPGAs

The computer architecture landscape is being reshaped by the new opportunities, challenges and constraints brought by the cloud. On the one hand, high-level applications profit from specialised hardware to boost their performance and reduce deployment costs. On the other hand, cloud providers maximise the CPU time allocated to client applications by offloading infrastructure tasks to […]
Oct, 15

Comparison of different n-body algorithms on various hardware platforms using SYCL

The n-body problem has various applications in different fields of science such as astrophysics, where it describes the problem of calculating the movements of n different bodies which all interact with each other over time. There exist different algorithms that solve the n-body problem, for example, the naive approach and the Barnes-Hut algorithm. Since applications […]
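For reference, the naive approach is the all-pairs O(n²) force computation sketched below in plain NumPy (with an illustrative softening term and units); the Barnes-Hut tree code and the SYCL implementations are not reproduced here.

```python
# Naive O(n^2) n-body step: every body interacts with every other body.
# Plain NumPy sketch; units, constants and the softening term are illustrative.
import numpy as np

def accelerations(pos, mass, G=1.0, softening=1e-3):
    """All-pairs gravitational accelerations for n bodies; pos has shape (n, 3)."""
    diff = pos[None, :, :] - pos[:, None, :]            # diff[i, j] = x_j - x_i
    dist2 = np.sum(diff ** 2, axis=-1) + softening ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                       # no self-interaction
    return G * np.einsum("ij,ijk->ik", mass[None, :] * inv_d3, diff)

def leapfrog_step(pos, vel, mass, dt=0.01):
    """One kick-drift-kick leapfrog step."""
    vel_half = vel + 0.5 * dt * accelerations(pos, mass)
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new

rng = np.random.default_rng(0)
n = 256
pos, vel = rng.standard_normal((n, 3)), np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
pos, vel = leapfrog_step(pos, vel, mass)
print("first body after one step:", pos[0])
```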
Oct, 15

Open SYCL on heterogeneous GPU systems: A case of study

Computational platforms for high-performance scientific applications are becoming more heterogeneous, including hardware accelerators such as multiple GPUs. Applications in a wide variety of scientific fields require an efficient and careful management of the computational resources of this type of hardware to obtain the best possible performance. However, there are currently different GPU vendors, architectures and […]
Oct, 8

Exploring the Limits of Generic Code Execution on GPUs via Direct (OpenMP) Offload

GPUs are well-known for their remarkable ability to accelerate computations through massive parallelism. However, offloading computations to GPUs necessitates manual identification of code regions that should be executed on the device, memory that needs to be transferred, and synchronization to be handled. Recent work has leveraged the portable target offloading interface provided by LLVM/OpenMP, taking […]
Oct, 8

Impacts of Parallel Programming on Limited-Resource Hardware

Limited-resource hardware devices are more affordable and energy-efficient than high-end hardware. Despite their reduced size, these devices are increasingly complex, with many now featuring multiple processing cores, GPGPU accelerators, and larger RAM capacity. To fully utilize their computational capacity, software developers must exploit parallelism, but this adds an extra layer of complexity because […]
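As a trivial illustration of what exploiting that parallelism can look like on a multi-core device (generic Python, not code from the paper):

```python
# Spread an embarrassingly parallel, CPU-bound workload over all available cores.
# Generic multiprocessing sketch, unrelated to the paper's case studies.
import multiprocessing as mp

def work(chunk):
    return sum(i * i for i in chunk)          # stand-in for a CPU-bound task

if __name__ == "__main__":
    n_cores = mp.cpu_count()
    chunks = [range(i, 1_000_000, n_cores) for i in range(n_cores)]
    with mp.Pool(processes=n_cores) as pool:
        total = sum(pool.map(work, chunks))
    print(f"{total} computed using {n_cores} worker processes")
```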

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
