
Posts

Nov 5

HPVM: A Portable Virtual Instruction Set for Heterogeneous Parallel Systems

We describe a programming abstraction for heterogeneous parallel hardware, designed to capture a wide range of popular parallel hardware, including GPUs, vector instruction sets and multicore CPUs. Our abstraction, which we call HPVM, is a hierarchical dataflow graph with shared memory and vector instructions. We use HPVM to define both a virtual instruction set (ISA) […]
Nov 5

grim: A Flexible, Conservative Scheme for Relativistic Fluid Theories

Hot, diffuse, relativistic plasmas such as sub-Eddington black hole accretion flows are expected to be collisionless, yet are commonly modeled as a fluid using ideal general relativistic magnetohydrodynamics (GRMHD). Dissipative effects such as heat conduction and viscosity can be important in a collisionless plasma and will potentially alter the dynamics and radiative properties of the […]
Nov 5

A Memory Bandwidth-Efficient Hybrid Radix Sort on GPUs

Sorting is at the core of many database operations, such as index creation, sort-merge joins and user-requested output sorting. As GPUs are emerging as a promising platform to accelerate various operations, sorting on GPUs becomes a viable endeavour. Over the past few years, several improvements have been proposed for sorting on GPUs, leading to the […]
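The excerpt does not reproduce the paper's GPU-specific hybrid scheme. For orientation only, the sketch below is a plain-Python version of the classic least-significant-digit radix sort that GPU radix sorts build on; it is an illustrative baseline, not the authors' memory bandwidth-efficient variant.

```python
# Minimal LSD radix sort sketch (CPU, plain Python) -- the baseline algorithm
# that GPU radix sorts refine; not the paper's hybrid scheme.
def radix_sort(keys, bits_per_pass=8, key_bits=32):
    for shift in range(0, key_bits, bits_per_pass):
        buckets = [[] for _ in range(1 << bits_per_pass)]
        for k in keys:
            digit = (k >> shift) & ((1 << bits_per_pass) - 1)
            buckets[digit].append(k)            # stable per-digit scatter
        keys = [k for b in buckets for k in b]  # gather buckets in digit order
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
```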
Nov 3

Extensions and Limitations of the Neural GPU

The Neural GPU is a recent model that can learn algorithms such as multi-digit binary addition and binary multiplication in a way that generalizes to inputs of arbitrary length. We show that there are two simple ways of improving the performance of the Neural GPU: by carefully designing a curriculum, and by increasing model size. […]
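The excerpt mentions curriculum design without detail. As a hedged illustration (the function names, batch format, and accuracy threshold are assumptions, not taken from the paper), a length-based curriculum for the binary-addition task could be generated like this, growing the input length as the model masters shorter examples.

```python
# Hypothetical length-based curriculum sketch for binary addition:
# produce (a, b, a+b) examples whose bit length grows as training progresses.
import random

def binary_addition_batch(n_bits, batch_size=32):
    batch = []
    for _ in range(batch_size):
        a = random.getrandbits(n_bits)
        b = random.getrandbits(n_bits)
        batch.append((format(a, f"0{n_bits}b"),
                      format(b, f"0{n_bits}b"),
                      format(a + b, f"0{n_bits + 1}b")))
    return batch

# `model_accuracy` is a stand-in callable reporting accuracy at a given length;
# the 0.98 threshold is an illustrative choice.
def curriculum(model_accuracy, n_bits=2, max_bits=64, threshold=0.98):
    while n_bits <= max_bits:
        yield binary_addition_batch(n_bits)
        if model_accuracy(n_bits) > threshold:
            n_bits += 1
```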
Nov 3

Diplomat: Mapping of multi-kernel applications using a static dataflow abstraction

In this paper we propose a novel approach to heterogeneous embedded-system programmability using Diplomat, a taskgraph-based framework that exploits the potential of static dataflow modeling and analysis to deliver performance estimation and CPU/GPU mapping. An application has to be specified only once, and then the framework can automatically […]
Nov 3

MILC staggered conjugate gradient performance on Intel KNL

We review our work done to optimize the staggered conjugate gradient (CG) algorithm in the MILC code for use with the Intel Knights Landing (KNL) architecture. KNL is the second-generation Intel Xeon Phi processor. It is capable of massive thread parallelism and data parallelism, offers high on-board memory bandwidth, and is being adopted in supercomputing […]
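The solver itself is not reproduced in this excerpt. For reference, the textbook conjugate gradient iteration being optimized looks like the NumPy sketch below; the dense matrix A here is a stand-in for the staggered fermion operator that MILC actually applies, so this is an assumption-laden simplification rather than the MILC implementation.

```python
# Textbook conjugate gradient sketch (NumPy) for a symmetric positive-definite
# system Ax = b; MILC applies the same iteration with its staggered operator.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x               # initial residual
    p = r.copy()                # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```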
Nov 3

A hybrid algorithm for parallel molecular dynamics simulations

This article describes an algorithm for hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-ranged forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds […]
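The combination of domain decomposition and thread-based parallelism is only named in the excerpt. The serial Python sketch below is an illustrative assumption, not the article's algorithm: it shows the spatial cell decomposition commonly used to restrict short-range force evaluation to neighboring cells, which is the unit a hybrid MPI/thread scheme would distribute across ranks and threads.

```python
# Illustrative spatial (cell) decomposition for short-range forces:
# bin particles into cells of side >= cutoff so interaction partners only
# need to be searched in a cell and its 26 neighbors (periodic box assumed).
from collections import defaultdict

def build_cells(positions, box, cutoff):
    n = max(1, int(box // cutoff))      # cells per dimension
    size = box / n
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        cells[(int(x // size) % n,
               int(y // size) % n,
               int(z // size) % n)].append(i)
    return cells, n

def neighbor_cells(cell, n):
    cx, cy, cz = cell
    return {((cx + dx) % n, (cy + dy) % n, (cz + dz) % n)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)}
```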
Nov 3

Hybrid CPU-GPU generation of the Hamiltonian and Overlap matrices in FLAPW methods

In this paper we focus on the integration of high-performance numerical libraries in ab initio codes and the portability of performance and scalability. The target of our work is FLEUR, a software package for electronic structure calculations developed at Forschungszentrum Jülich over the course of two decades. The presented work follows up on a previous […]
Nov 2

A Survey of Techniques for Architecting TLBs

The translation lookaside buffer (TLB) caches virtual-to-physical address translation information and is used in systems ranging from embedded devices to high-end servers. Since the TLB is accessed very frequently and a TLB miss is extremely costly, prudent management of the TLB is important for improving the performance and energy efficiency of processors. In this paper, we present […]
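To make the caching idea concrete, here is a small, assumed model of a TLB (not any particular design from the survey): a fixed-capacity LRU cache mapping virtual page numbers to physical frame numbers, where a miss falls back to the much slower page-table walk.

```python
# Minimal TLB model: a small LRU cache of virtual-page -> physical-frame
# translations. A hit avoids the page-table walk; a miss performs it and
# installs the translation, evicting the least recently used entry if full.
from collections import OrderedDict

class TLB:
    def __init__(self, capacity=64, page_size=4096):
        self.capacity = capacity
        self.page_size = page_size
        self.entries = OrderedDict()   # virtual page number -> frame number

    def translate(self, vaddr, page_table):
        vpn, offset = divmod(vaddr, self.page_size)
        if vpn in self.entries:                    # TLB hit
            self.entries.move_to_end(vpn)
        else:                                      # TLB miss: walk page table
            frame = page_table[vpn]
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)   # evict LRU entry
            self.entries[vpn] = frame
        return self.entries[vpn] * self.page_size + offset
```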
Nov 1

Classification Performance of Convolutional Neural Networks

The purpose of this thesis is to determine the performance of convolutional neural networks in classifications per millisecond (rather than training time or accuracy) on the GTX960 and the TegraX1. This is done by varying parameters of the convolutional neural networks and using the Python framework Theano’s function profiler to measure the time taken by different networks. […]
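The thesis's Theano-profiler measurements are not reproduced in the excerpt; the metric itself, classifications per millisecond, reduces to the framework-agnostic timing sketch below. The `classify_batch` callable is an assumed stand-in for one forward pass, not the thesis's actual harness.

```python
# Classifications-per-millisecond throughput measurement sketch.
# `classify_batch` stands in for one forward pass over `batch_size` inputs.
import time

def classifications_per_ms(classify_batch, batch_size, n_batches=100):
    start = time.perf_counter()
    for _ in range(n_batches):
        classify_batch()                       # inference only, no training
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return (n_batches * batch_size) / elapsed_ms
```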
Nov 1

LightRNN: Memory and Computation-Efficient Recurrent Neural Networks

Recurrent neural networks (RNNs) have achieved state-of-the-art performance in many natural language processing tasks, such as language modeling and machine translation. However, when the vocabulary is large, the RNN model becomes very big (e.g., possibly beyond the memory capacity of a GPU device) and its training becomes very inefficient. In this work, we […]
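The claim that a large vocabulary makes the model very big is easy to quantify: in a standard word-level RNN language model, the input embedding and the output softmax each hold roughly |V| x d parameters, which for vocabularies in the millions dwarfs the recurrent weights. The numbers below are illustrative assumptions, not figures from the paper.

```python
# Rough parameter count for a word-level RNN language model, showing why the
# vocabulary-sized embedding and softmax layers dominate memory.
def rnn_lm_params(vocab_size, embed_dim, hidden_dim):
    embedding = vocab_size * embed_dim                      # input lookup table
    softmax = vocab_size * hidden_dim                       # output projection
    recurrent = 4 * (embed_dim + hidden_dim) * hidden_dim   # e.g. one LSTM cell
    return embedding, softmax, recurrent

emb, soft, rec = rnn_lm_params(vocab_size=10_000_000, embed_dim=512, hidden_dim=512)
print(f"embedding {emb/1e9:.1f}B, softmax {soft/1e9:.1f}B, recurrent {rec/1e6:.1f}M params")
```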
Nov 1

Towards Automating Multi-dimensional Data Decomposition for Executing a Single-GPU Code on a Multi-GPU System

In this paper, we present a data decomposition method for multi-dimensional data, aiming at multi-GPU (graphics processing unit) acceleration of a compute unified device architecture (CUDA) code written for a single GPU. Our multi-dimensional method extends a previous method that deals with one-dimensional (1-D) data. The method performs a sample run of selected […]
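The sample-run-driven decomposition itself is cut off in the excerpt. The underlying idea of splitting a multi-dimensional array into per-GPU chunks can be sketched as below; this host-side NumPy illustration, with an assumed one-cell halo for stencil-style kernels, stands in for the CUDA-level transformation and is not the paper's method.

```python
# Sketch of multi-dimensional data decomposition for a multi-GPU run: split
# one axis into contiguous chunks plus halo cells so each GPU can apply a
# stencil-style kernel to its own slice independently.
import numpy as np

def decompose(data, n_gpus, axis=0, halo=1):
    chunks = []
    edges = np.linspace(0, data.shape[axis], n_gpus + 1, dtype=int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        lo_h = max(lo - halo, 0)
        hi_h = min(hi + halo, data.shape[axis])
        sl = [slice(None)] * data.ndim
        sl[axis] = slice(lo_h, hi_h)
        chunks.append(data[tuple(sl)].copy())   # one chunk per GPU
    return chunks

chunks = decompose(np.arange(100).reshape(10, 10), n_gpus=2)
```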

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
