Apr, 22

OpenCL-Based Mobile GPGPU Benchmarking: Methods and Challenges

Benchmarking general-purpose computing on graphics processing units (GPGPU) aims to profile and compare performance across different devices. Due to the low-level nature of most GPGPU APIs, GPGPU benchmarks are also useful for architectural exploration and program optimization. This can be challenging on mobile devices due to the lack of underlying hardware details and limited profiling capabilities […]
Apr, 19

Parallel Programming Models for Dense Linear Algebra on Heterogeneous Systems

We present a review of the current best practices in parallel programming models for dense linear algebra (DLA) on heterogeneous architectures. We consider multicore CPUs, stand-alone manycore coprocessors, GPUs, and combinations of these. Of interest is the evolution of the programming models for DLA libraries – in particular, the evolution from the popular LAPACK […]
Apr, 19

A Parallel Solution to Finding Nodal Neighbors in Generic Meshes

In this paper we present a parallel solution to finding the one-ring neighboring nodes and elements for each vertex in generic meshes. Finding nodal neighbors is algorithmically straightforward but computationally expensive for large meshes. To improve efficiency, we exploit the parallelism of the modern Graphics Processing Unit (GPU). The presented parallel […]
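The one-ring construction described above is naturally data-parallel: every element independently contributes its vertices to the adjacency sets. A minimal sequential sketch in Python (not the paper's GPU implementation; the function name and data layout are illustrative assumptions):

```python
from collections import defaultdict

def one_ring(elements):
    """Map each vertex to its incident elements and its one-ring
    neighbor vertices. `elements` is a list of vertex-index tuples
    of any element type (triangles, quads, tets, ...). Each loop
    iteration over elements is independent of the others, which is
    what makes this step amenable to a per-element GPU kernel.
    """
    elems_of = defaultdict(set)  # vertex -> ids of incident elements
    nbrs_of = defaultdict(set)   # vertex -> one-ring neighbor vertices
    for eid, verts in enumerate(elements):
        for v in verts:
            elems_of[v].add(eid)
            nbrs_of[v].update(w for w in verts if w != v)
    return dict(elems_of), dict(nbrs_of)
```

For two triangles sharing the edge (1, 2), i.e. `[(0, 1, 2), (1, 3, 2)]`, vertex 1 ends up with neighbors {0, 2, 3} and incident elements {0, 1}.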
Apr, 19

MIML Learning with CNNs: Yelp Restaurant Photo Classification

We present the conditions of a data science challenge from Kaggle, which can be viewed as a multi-instance multi-label learning problem in the image domain, and describe the official training dataset provided. We discuss our technical approach and address the challenges of transfer learning and fine-tuning, trying out different strategies to tackle the […]
Apr, 19

A Unified, Hardware-Fitted, Cross-GPU Performance Model

We present a mechanism to symbolically gather performance-relevant operation counts from numerically-oriented subprograms (‘kernels’) expressed in the Loopy programming system, and apply these counts in a simple, linear model of kernel run time. We use a series of ‘performance-instructive’ kernels to fit the parameters of a unified model to the performance characteristics of GPU hardware […]
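A linear run-time model over gathered operation counts, as the abstract describes, reduces parameter fitting to ordinary least squares. A minimal sketch under assumed inputs (the feature choice and all numbers below are illustrative, not from the paper):

```python
import numpy as np

# Rows: per-kernel operation counts (hypothetical features, e.g.
# flops, global loads, global stores). `times`: measured run times
# in seconds for those kernels. Because the model
#   t ~= counts @ coeffs
# is linear in the counts, the per-operation costs can be fit by
# ordinary least squares over the "performance-instructive" kernels.
counts = np.array([
    [1e6, 2e5, 1e5],
    [4e6, 8e5, 4e5],
    [2e6, 1e6, 5e5],
    [8e6, 4e5, 2e5],
], dtype=float)
times = np.array([0.011, 0.042, 0.031, 0.070])

coeffs, *_ = np.linalg.lstsq(counts, times, rcond=None)
predicted = counts @ coeffs  # model's run-time estimates per kernel
```

Once fitted on one device, the same coefficients can be applied to the symbolic counts of an unseen kernel to predict its run time there.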
Apr, 19

LightScan: Faster Scan Primitive on CUDA Compatible Manycore Processors

Scan (or prefix sum) is a fundamental and widely used primitive in parallel computing. In this paper, we present LightScan, a faster parallel scan primitive for CUDA-enabled GPUs, which uses a hybrid model combining intra-block computation and inter-block communication to perform a scan. Our algorithm employs warp shuffle functions to implement fast intra-block computation and […]
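The hybrid intra-block / inter-block structure mentioned above can be sketched in a few lines; this is a sequential Python model of the decomposition, not LightScan's warp-shuffle CUDA code, and the block size is an arbitrary toy value:

```python
import numpy as np

def blocked_inclusive_scan(x, block=4):
    """Two-phase inclusive scan mirroring the hybrid split:
    1) each block scans its segment independently (on a GPU this is
       the fast intra-block phase, e.g. via warp shuffles);
    2) a scan over the per-block sums supplies each block's offset
       (the inter-block communication phase).
    """
    x = np.asarray(x)
    out = np.empty_like(x)
    nblocks = (len(x) + block - 1) // block
    sums = np.empty(nblocks, dtype=x.dtype)
    for b in range(nblocks):          # independent per block
        seg = x[b * block:(b + 1) * block]
        out[b * block:(b + 1) * block] = np.cumsum(seg)
        sums[b] = seg.sum()
    offsets = np.concatenate(([0], np.cumsum(sums)[:-1]))
    for b in range(nblocks):          # add inter-block offsets
        out[b * block:(b + 1) * block] += offsets[b]
    return out
```

For example, `blocked_inclusive_scan(np.arange(1, 9))` yields `[1, 3, 6, 10, 15, 21, 28, 36]`, matching a plain cumulative sum.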
Apr, 16

GeePS: Scalable deep learning on distributed GPUs with a GPU-specialized parameter server

Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks with tens of layers and billions of connections. While the computation involved can be done more efficiently on GPUs than on more traditional CPU cores, training such networks on a […]
Apr, 16

Fluid Simulation by the Smoothed Particle Hydrodynamics Method: A Survey

This paper presents a survey of Smoothed Particle Hydrodynamics (SPH) and its use in computational fluid dynamics. As a truly mesh-free particle method based upon the Lagrangian formulation, SPH has been applied to a variety of different areas in science, computer graphics and engineering. It has been established as a popular technique for fluid based […]
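At the core of SPH is the kernel-weighted summation over neighboring particles, e.g. the density estimate rho_i = sum_j m_j W(|x_i - x_j|, h). A minimal sketch using the poly6 smoothing kernel common in graphics-oriented SPH (one of several kernels the survey's literature uses; the function names here are illustrative):

```python
import math

def poly6(r, h):
    """Poly6 smoothing kernel: zero beyond the support radius h,
    normalized so its 3D integral is 1."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h**9) * (h * h - r * r) ** 3

def density(i, positions, mass, h):
    """SPH density at particle i: a kernel-weighted sum over all
    particles within the support radius (a brute-force neighbor
    loop here; real codes use a spatial grid or hash)."""
    rho = 0.0
    for p in positions:
        r = math.dist(positions[i], p)
        rho += mass * poly6(r, h)
    return rho
```

An isolated particle with unit mass and h = 1 gets density 315 / (64 pi), since only its own self-contribution W(0, h) is nonzero.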
Apr, 16

Efficiency of general Krylov methods on GPUs – An experimental study

This paper compares different Krylov methods based on short recurrences with respect to their efficiency when implemented on GPUs. The comparison includes BiCGSTAB, CGS, QMR, and IDR using different shadow space dimensions. These methods are known for their good convergence characteristics. For a large set of test matrices taken from the University of Florida Matrix […]
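What makes short-recurrence Krylov methods GPU-friendly is that every iteration reduces to a mat-vec plus a few BLAS-1 operations. A sketch of Conjugate Gradients, the simplest short-recurrence method (for SPD matrices; BiCGSTAB, CGS, QMR, and IDR compared in the paper use the same building blocks but handle general matrices):

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=200):
    """Conjugate Gradients for SPD A. Each iteration needs one
    mat-vec, two dot products, and a few axpy updates -- the same
    kernel mix (SpMV + BLAS-1) that the methods in the study map
    onto GPU hardware.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # short recurrence: only p, r kept
        rs = rs_new
    return x
```

The "short recurrence" property is visible in the last update: only the current residual and direction vectors are stored, so memory traffic stays constant per iteration regardless of iteration count.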
Apr, 16

pocl: A Performance-Portable OpenCL Implementation

OpenCL is a standard for parallel programming of heterogeneous systems. The benefits of a common programming standard are clear; multiple vendors can provide support for application descriptions written according to the standard, thus reducing the program porting effort. While the standard brings the obvious benefits of platform portability, the performance portability aspects are largely left […]
Apr, 16

Parallel data mining algorithms for multi-dimensional points on GPUs

Data mining tasks such as clustering, outlier detection and similarity search typically employ a series of algorithms to operate on a large set of data, making them amenable to parallelization. Thus parallelization of data mining operations such as distance computation has been extensively studied in the literature. In recent years, the use of Graphics Processing […]
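The distance computations mentioned above parallelize well because all-pairs Euclidean distance can be recast as one dense matrix multiply via |x - y|^2 = |x|^2 + |y|^2 - 2 x.y. A minimal NumPy sketch of that reformulation (the batching strategy GPU kernels commonly exploit; this is an illustration, not the paper's code):

```python
import numpy as np

def pairwise_sq_dists(X, Y):
    """All-pairs squared Euclidean distances between the rows of
    X (n x d) and Y (m x d), computed as
        |x|^2 + |y|^2 - 2 x.y
    so the bulk of the work becomes a single matrix multiply.
    """
    xx = (X * X).sum(axis=1)[:, None]   # (n, 1) squared norms
    yy = (Y * Y).sum(axis=1)[None, :]   # (1, m) squared norms
    d = xx + yy - 2.0 * X @ Y.T
    return np.maximum(d, 0.0)  # clamp tiny negatives from rounding
```

For example, the squared distances from `[[0, 0], [1, 0]]` to the single point `[[0, 3]]` come out as `[[9], [10]]`.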
Apr, 14

The 5th International Conference on Information and Knowledge Management (ICIKM), 2016

Indexing: Ei Compendex, Inspec, DOAJ, CPCI (Web of Science) and Scopus. COMMITTEE – Conference Chairs: Prof. Chen-Huei Chou, College of Charleston, USA; Prof. Yongsheng Ma, University of Alberta, Canada; Prof. Jiangping Wang, Webster University, USA. Local Chair: Dr. Xiaoyu Zeng, Beijing Wuzi University, China. AGENDA – July 22, 2016: Registration & Conference Materials Collection. July […]


HGPU group © 2010-2016 hgpu.org

All rights belong to the respective authors
