Jan, 16

Using efficient parallelization in Graphic Processing Units to parameterize stochastic fire propagation models

Fire propagation is a major concern in the world in general and in Argentinian northwestern Patagonia in particular, where every year hundreds of hectares are affected by both natural and anthropogenic forest fires. We developed an efficient cellular automata model on Graphics Processing Units (GPUs) to simulate fire propagation. The graphical advantages of GPUs were […]
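The excerpt names a stochastic cellular automaton but cuts off before the details. As a rough serial illustration of the general idea (not the paper's model), the sketch below advances a grid one synchronous step: an unburned cell with a burning 4-neighbour ignites with probability p. The state encoding, the spread rule, and the LCG random source are all assumptions for the example; on a GPU each cell would map to one thread.

```c
#include <stdlib.h>

/* Illustrative cell states (not taken from the paper). */
enum { UNBURNED = 0, BURNING = 1, BURNED = 2 };

/* Small linear congruential generator so runs are reproducible. */
static unsigned lcg_next(unsigned *s) {
    *s = *s * 1664525u + 1013904223u;
    return *s;
}

/* One synchronous update of an n x n grid: burning cells burn out, and
 * every unburned cell with a burning 4-neighbour ignites with
 * probability p. Serial here; on a GPU this loop body is one thread. */
void fire_step(const int *cur, int *next, int n, double p, unsigned *seed) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            int c = cur[i * n + j];
            if (c != UNBURNED) { next[i * n + j] = BURNED; continue; }
            int hot = (i > 0     && cur[(i - 1) * n + j] == BURNING) ||
                      (i < n - 1 && cur[(i + 1) * n + j] == BURNING) ||
                      (j > 0     && cur[i * n + j - 1]   == BURNING) ||
                      (j < n - 1 && cur[i * n + j + 1]   == BURNING);
            double u = lcg_next(seed) / 4294967296.0;  /* uniform in [0,1) */
            next[i * n + j] = (hot && u < p) ? BURNING : UNBURNED;
        }
    }
}
```

With p = 1.0 the rule is deterministic, which makes the model easy to sanity-check before turning the stochastic spread on.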
Jan, 16

Decoding with Finite-State Transducers on GPUs

Weighted finite automata and transducers (including hidden Markov models and conditional random fields) are widely used in natural language processing (NLP) to perform tasks such as morphological analysis, part-of-speech tagging, chunking, named entity recognition, speech recognition, and others. Parallelizing finite state algorithms on graphics processing units (GPUs) would benefit many areas of NLP. Although researchers […]
Jan, 12

GPU Hackathons, 2017

Background: General-purpose Graphics Processing Units (GPGPUs) potentially offer exceptionally high memory bandwidth and performance for a wide range of applications. The challenge in utilizing such accelerators has been the difficulty in programming them. Any and all GPU programming paradigms are welcome. Hackathon goal: The goal of each hackathon is for current or prospective user groups […]
Jan, 10

Parallelization of BVH and BSP on the GPU

Rendering is a central task in computer graphics and visualization. To display realistic images, reflections, shadows, and further realistic light effects are needed. To obtain these, ray tracing, view frustum culling, and transparency sorting, among others, are commonly used techniques. Given the right acceleration structure, said procedures can be reduced to […]
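A core primitive behind both BVH and BSP traversal is deciding cheaply whether a ray can intersect a node's bounding volume at all. As a hedged, self-contained illustration (not the paper's GPU traversal), here is the standard slab test for a ray against an axis-aligned box; for brevity it assumes no zero direction components, which production code must handle.

```c
#include <stdbool.h>
#include <float.h>

/* Slab test: does the ray orig + t*dir, t >= 0, hit the box [lo, hi]?
 * BVH traversal descends only into children whose boxes pass this test,
 * which is how acceleration structures cut the per-ray work.
 * Assumes dir has no zero components (illustrative simplification). */
bool ray_hits_aabb(const double orig[3], const double dir[3],
                   const double lo[3], const double hi[3]) {
    double tmin = 0.0, tmax = DBL_MAX;
    for (int k = 0; k < 3; k++) {
        double inv = 1.0 / dir[k];
        double t0 = (lo[k] - orig[k]) * inv;
        double t1 = (hi[k] - orig[k]) * inv;
        if (t0 > t1) { double tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;   /* latest entry across the slabs */
        if (t1 < tmax) tmax = t1;   /* earliest exit across the slabs */
        if (tmin > tmax) return false;
    }
    return true;
}
```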
Jan, 10

Software Prefetching for Indirect Memory Accesses

Many modern data processing and HPC workloads are heavily memory-latency bound. A tempting proposition to solve this is software prefetching, where special non-blocking loads are used to bring data into the cache hierarchy just before it is required. However, these are difficult to insert in a way that effectively improves performance, and techniques for automatic insertion are currently limited. […]
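The classic target for this technique is an indirect access like a[idx[i]], whose address the hardware prefetcher cannot predict because it depends on a loaded value. As a minimal sketch (the prefetch distance of 16 is an illustrative guess, not a tuned value), a hand-inserted prefetch using the GCC/Clang `__builtin_prefetch` extension looks like this:

```c
#include <stddef.h>

#define DIST 16  /* prefetch distance in iterations; machine-dependent */

/* Sum a[idx[i]] for i in [0, n). The address a[idx[i]] is unknown until
 * idx[i] is loaded, so we issue a non-blocking software prefetch DIST
 * iterations ahead to hide the memory latency of the indirect load. */
double indirect_sum(const double *a, const int *idx, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + DIST < n)
            __builtin_prefetch(&a[idx[i + DIST]], 0 /* read */,
                               1 /* low temporal locality */);
        s += a[idx[i]];
    }
    return s;
}
```

The difficulty the abstract alludes to is visible even here: too small a DIST and the data still arrives late, too large and it is evicted before use, and the extra prefetch instructions are pure overhead when the working set already fits in cache.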
Jan, 10

DeepDSL: A Compilation-based Domain-Specific Language for Deep Learning

In recent years, Deep Learning (DL) has found great success in domains such as multimedia understanding. However, the complex nature of multimedia data makes it difficult to develop DL-based software. The state-of-the-art tools, such as Caffe, TensorFlow, Torch7, and CNTK, while successful in their applicable domains, are programming libraries with fixed user interfaces, […]
Jan, 10

An FPGA Accelerator for Molecular Dynamics Simulation Using OpenCL

Molecular dynamics (MD) simulations are very important for studying the physical properties of atoms and molecules. However, a huge amount of processing time is required to simulate even a few nanoseconds of an actual experiment. Although hardware acceleration using FPGAs provides promising results, a huge design time and hardware design skills are required to implement an […]
Jan, 10

GPU SQL Query Accelerator

The world's data grows rapidly as connected sensors and devices with geo-location capabilities continuously update their locations. Data analytics industries are finding ways to store this data and to turn the raw data into valuable information for business intelligence services. This has inadvertently produced a flood of granular data about our world. Crucially, […]
Jan, 8

Synchronization and Coordination in Heterogeneous Processors

Recent developments in internet connectivity and mobile devices have spurred massive data growth. Users demand rapid data processing from both large-scale systems and energy-constrained personal devices. Concurrently with this data growth, transistor scaling trends have slowed, diminishing processor performance and energy improvements compared to prior generations. To sustain performance trends while staying within energy budgets, […]
Jan, 8

A Framework for Dense Triangular Matrix Kernels on Various Manycore Architectures

We present a new high performance framework for dense triangular BLAS kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors (Charara et al., EuroPar, 2016). In this paper, the performance of triangular BLAS kernels […]
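For readers unfamiliar with the kernels named above, a reference (unblocked, row-major) left-sided lower-triangular TRMM is shown below. This is only the textbook definition of the operation the framework accelerates, not the blocked or recursive kernels the paper describes; the routine name and layout are illustrative choices.

```c
/* Reference TRMM, B := A * B, with A an n x n lower-triangular matrix
 * and B n x m, both row-major ("left, lower, no-transpose" case).
 * Rows are updated from the bottom up so that the old values of rows
 * k <= i are still intact when row i of the product is formed. */
void trmm_llnn(int n, int m, const double *A, double *B) {
    for (int i = n - 1; i >= 0; i--) {
        for (int j = 0; j < m; j++) {
            double s = 0.0;
            for (int k = 0; k <= i; k++)   /* lower triangle: k <= i only */
                s += A[i * n + k] * B[k * m + j];
            B[i * m + j] = s;
        }
    }
}
```

The in-place update and the triangular access pattern are exactly what make these kernels harder to tile and parallelize than a plain GEMM, which is the problem the framework addresses.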
Jan, 8

Communication and Coordination Paradigms for Highly-Parallel Accelerators

As CPU performance plateaus, many communities are turning to highly-parallel accelerators such as graphics processing units (GPUs) to obtain their desired level of processing power. Unfortunately, the GPU’s massive parallelism and data-parallel execution model make it difficult to synchronize GPU threads. To resolve this, we introduce aggregation buffers, which are producer/consumer queues that act as […]
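The excerpt describes aggregation buffers as producer/consumer queues but cuts off before their design. As a hedged, serial illustration of the underlying queue pattern only (the paper's buffers must additionally handle thousands of GPU threads and memory ordering, which this ignores), here is a minimal bounded ring buffer:

```c
#include <stdbool.h>

/* Tiny bounded single-producer/single-consumer ring buffer. One slot is
 * kept empty so that head == tail unambiguously means "empty". */
#define QCAP 8
typedef struct {
    int items[QCAP];
    int head, tail;   /* pop from head, push at tail */
} queue_t;

bool q_push(queue_t *q, int v) {
    int next = (q->tail + 1) % QCAP;
    if (next == q->head) return false;   /* full */
    q->items[q->tail] = v;
    q->tail = next;
    return true;
}

bool q_pop(queue_t *q, int *out) {
    if (q->head == q->tail) return false;  /* empty */
    *out = q->items[q->head];
    q->head = (q->head + 1) % QCAP;
    return true;
}
```

A queue like this decouples the many GPU producers from a consumer on the host or another kernel, which is the coordination role the abstract assigns to aggregation buffers.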
Jan, 8

Akid: A Library for Neural Network Research and Production from a Dataism Approach

Neural networks are a revolutionary but immature technique that is fast evolving and heavily relies on data. To benefit from the newest developments and newly available data, we want the gap between research and production to be as small as possible. On the other hand, unlike traditional machine learning models, a neural network is not just yet […]

* * *


HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
