
Posts

Dec, 16

Highly Efficient Forward and Backward Propagation of Convolutional Neural Networks for Pixelwise Classification

We present highly efficient algorithms for performing forward and backward propagation of Convolutional Neural Networks (CNNs) for pixelwise classification on images. In pixelwise classification tasks, such as image segmentation and object detection, surrounding image patches are fed into a CNN to predict the classes of the centered pixels via forward propagation and to update CNN parameters via […]
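The patchwise setting this abstract describes can be sketched as follows. This is a minimal, hypothetical NumPy illustration (not the authors' code) of naive pixelwise classification, where every pixel is labeled by running a model on its surrounding patch; the near-total overlap between neighboring patches is exactly the redundancy that efficient propagation schemes aim to remove:

```python
import numpy as np

def classify_pixelwise(image, patch_size, classifier):
    """Naive pixelwise classification: for every pixel, extract the
    surrounding patch and run the classifier on it independently.
    Neighboring patches overlap almost entirely, so this repeats
    nearly identical computation at every pixel."""
    pad = patch_size // 2
    padded = np.pad(image, pad, mode="reflect")
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + patch_size, j:j + patch_size]
            labels[i, j] = classifier(patch)
    return labels

# Toy stand-in for a trained CNN: label 1 if the patch mean exceeds 0.5.
toy = lambda p: int(p.mean() > 0.5)
```

With a real CNN in place of `toy`, the inner loop would invoke a full forward pass per pixel, which is what makes the naive scheme expensive.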
Dec, 16

Multi-Centroid PSO Classification Learning on the GPU

Training classifiers can be seen as an optimization problem. Taking this view, we have developed a method to train a type of nearest-centroid classifier with Particle Swarm Optimization (PSO). Results showed an improvement on most of the datasets tested. Additionally, we have developed a method to use the classifier with datasets containing both numeric and categorical […]
Dec, 16

An Optimized GPU Memory Hierarchy Design for an OpenCL Kernel

With the advent of multi- and many-core processors, communication has replaced computation as the performance bottleneck. Most current approaches to the problem try to tolerate memory access latency through a high degree of thread-level parallelism. However, not all applications benefit from such techniques, and there is a need to address the weakness of the underlying […]
Dec, 16

Scaling behavior of topologically constrained polymer rings in a melt

Large-scale molecular dynamics simulations on graphics processing units (GPUs) are employed to study the scaling behavior of ring polymers with various topological constraints in melts. Typical sizes of rings containing $3_1$ and $5_1$ knots, and of catenanes made up of two unknotted rings, scale like $N^{1/3}$ in the limit of large ring sizes $N$. This is […]
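The $N^{1/3}$ scaling quoted above is the standard signature of compact, space-filling conformations: if the ring size $R$ grows as $N^{1/3}$, the monomer density inside the coil stays constant,

```latex
R \sim N^{1/3} \quad\Longrightarrow\quad \rho \sim \frac{N}{R^3} \sim \mathrm{const},
```

in contrast to the ideal-chain scaling $R \sim N^{1/2}$, for which the density vanishes as $N$ grows.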
Dec, 16

MatConvNet – Convolutional Neural Networks for MATLAB

MatConvNet is an implementation of Convolutional Neural Networks (CNNs) for MATLAB. The toolbox is designed with an emphasis on simplicity and flexibility. It exposes the building blocks of CNNs as easy-to-use MATLAB functions, providing routines for computing linear convolutions with filter banks, feature pooling, and much more. In this manner, MatConvNet allows fast prototyping of […]
Dec, 15

Minerva: A Scalable and Highly Efficient Training Platform for Deep Learning

The tooling landscape of deep learning is fragmented by a growing gap between the generic, productivity-oriented tools that optimize for algorithm development and the task-specific ones that optimize for speed and scale. This creates an artificial barrier to bringing new innovations into real-world applications. Minerva addresses this issue with a layered design that provides […]
Dec, 15

Bayesian neural networks for detecting epistasis in genetic association studies

BACKGROUND: Discovering causal genetic variants from large genetic association studies poses many difficult challenges. Assessing which genetic markers are involved in determining trait status is a computationally demanding task, especially in the presence of gene-gene interactions. RESULTS: A non-parametric Bayesian approach in the form of a Bayesian neural network is proposed for use in analyzing […]
Dec, 15

Analysis and Optimization Techniques for Massively Parallel Processors

In response to the ever-growing demand for computing power, heterogeneous parallelism has emerged as a widespread computing paradigm in the past decade or so. In particular, massively parallel processors such as graphics processing units (GPUs) have become the prevalent throughput computing elements in heterogeneous systems, offering high performance and power efficiency for general-purpose workloads. […]
Dec, 15

Easy-to-Use On-the-Fly Binary Program Acceleration on Many-Cores

This paper introduces Binary Acceleration At Runtime (BAAR), an easy-to-use on-the-fly binary acceleration mechanism which aims to tackle the problem of enabling existing software to automatically utilize accelerators at runtime. BAAR is based on the LLVM Compiler Infrastructure and has a client-server architecture. The client runs the program to be accelerated in an environment which […]
Dec, 15

Performance Comparison of GPUs with a Genetic Algorithm based on CUDA

Genetic algorithms (GAs) generally suffer from long computation times, so it is worthwhile to reduce execution time while preserving solution quality. We conduct comparative experiments with one CPU and four GPUs using CUDA (Compute Unified Device Architecture) and a generational GA. We implement the fitness functions of the GA which […]
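For readers unfamiliar with the structure being benchmarked, here is a minimal, hypothetical sketch of a generational GA on a toy OneMax problem (not the paper's implementation); the per-individual fitness evaluation marked below is the embarrassingly parallel step that CUDA ports typically map to GPU threads:

```python
import random

def one_max(bits):
    """Toy fitness: count of 1-bits. In GPU studies, this
    per-individual evaluation is the part offloaded to the device."""
    return sum(bits)

def generational_ga(n_bits=32, pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness evaluation: independent per individual, hence GPU-friendly.
        scored = sorted(pop, key=one_max, reverse=True)
        elite = scored[: pop_size // 2]          # elitist survivor selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)          # parent selection
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = rng.randrange(n_bits)
            child[i] ^= 1                        # point mutation
            children.append(child)
        pop = elite + children
    return max(one_max(ind) for ind in pop)
```

Because the elites are carried over unchanged, the best fitness is monotone non-decreasing across generations; the loop structure, not the toy fitness, is the point of the sketch.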
Dec, 15

Bamboo: Automatic Translation of MPI Source into a Latency-Tolerant Form

Communication remains a significant barrier to scalability on distributed-memory systems. At present, the trend in architectural system design, which focuses on enhancing node performance, exacerbates the communication problem, since the relative cost of communication grows as the computation rate increases. This problem will be more pronounced at the exascale, where computational rates will be orders […]
Dec, 14

Heuristics for Conversion Process of GPU’s Kernels for Multiples Kernels with Concurrent Optimization Divergence

Graphics Processing Units (GPUs) were created with the objective of accelerating the construction and processing of graphic images. Over the course of their evolution, and owing to their large inherent computational capacity, these devices came to be used for general-purpose computing. However, the design of GPUs does not cope well with divergent algorithms, mainly conditionals and loops. […]

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors