
Posts

Dec, 16

MatConvNet – Convolutional Neural Networks for MATLAB

MatConvNet is an implementation of Convolutional Neural Networks (CNNs) for MATLAB. The toolbox is designed with an emphasis on simplicity and flexibility. It exposes the building blocks of CNNs as easy-to-use MATLAB functions, providing routines for computing linear convolutions with filter banks, feature pooling, and many more operations. In this manner, MatConvNet allows fast prototyping of […]
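As a hedged illustration of the operation such a routine exposes (this is not MatConvNet code, and it is written in C++ rather than MATLAB), the sketch below applies a bank of K filters to one multi-channel input with stride 1 and no padding; all array names and sizes are made up for the example. MatConvNet wraps this kind of building block behind single MATLAB calls.

// Minimal, illustrative sketch (not MatConvNet code) of what a convolution
// routine with a filter bank computes: K filters of size fh x fw x C applied
// to a single H x W x C input, valid padding, stride 1. Sizes are assumptions.
#include <vector>
#include <cstdio>

int main() {
    const int H = 5, W = 5, C = 2;      // input height, width, channels (assumed)
    const int fh = 3, fw = 3, K = 4;    // filter size and number of filters (assumed)
    const int oh = H - fh + 1, ow = W - fw + 1;

    std::vector<float> x(H * W * C, 1.0f);          // input volume
    std::vector<float> f(fh * fw * C * K, 0.5f);    // filter bank
    std::vector<float> y(oh * ow * K, 0.0f);        // output volume

    for (int k = 0; k < K; ++k)                     // for each filter
        for (int i = 0; i < oh; ++i)
            for (int j = 0; j < ow; ++j) {
                float acc = 0.0f;
                for (int c = 0; c < C; ++c)         // sum over channels and window
                    for (int u = 0; u < fh; ++u)
                        for (int v = 0; v < fw; ++v)
                            acc += x[((i + u) * W + (j + v)) * C + c] *
                                   f[((u * fw + v) * C + c) * K + k];
                y[(i * ow + j) * K + k] = acc;
            }

    std::printf("y(0,0,0) = %f\n", y[0]);           // expect fh*fw*C*0.5 = 9
    return 0;
}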
Dec, 15

Minerva: A Scalable and Highly Efficient Training Platform for Deep Learning

The tooling landscape of deep learning is fragmented by a growing gap between the generic and productivity-oriented tools that optimize for algorithm development and the task-specific ones that optimize for speed and scale. This creates an artificial barrier to bringing new innovations into real-world applications. Minerva addresses this issue with a layered design that provides […]
Dec, 15

Bayesian neural networks for detecting epistasis in genetic association studies

BACKGROUND: Discovering causal genetic variants from large genetic association studies poses many difficult challenges. Assessing which genetic markers are involved in determining trait status is a computationally demanding task, especially in the presence of gene-gene interactions. RESULTS: A non-parametric Bayesian approach in the form of a Bayesian neural network is proposed for use in analyzing […]
Dec, 15

Analysis and Optimization Techniques for Massively Parallel Processors

In response to the ever-growing demand for computing power, heterogeneous parallelism has emerged as a widespread computing paradigm in the past decade or so. In particular, massively parallel processors such as graphics processing units (GPUs) have become the prevalent throughput computing elements in heterogeneous systems, offering high performance and power efficiency for general-purpose workloads. […]
Dec, 15

Easy-to-Use On-the-Fly Binary Program Acceleration on Many-Cores

This paper introduces Binary Acceleration At Runtime (BAAR), an easy-to-use on-the-fly binary acceleration mechanism which aims to tackle the problem of enabling existing software to automatically utilize accelerators at runtime. BAAR is based on the LLVM Compiler Infrastructure and has a client-server architecture. The client runs the program to be accelerated in an environment which […]
Dec, 15

Performance Comparison of GPUs with a Genetic Algorithm based on CUDA

Genetic algorithms (GAs) generally have the disadvantage of requiring a great deal of computation time, so it is worthwhile to reduce execution time while preserving the quality of the results. Comparative experiments are conducted with one CPU and four GPUs using CUDA (Compute Unified Device Architecture) and a generational GA. We implement the fitness functions of the GA which […]
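The fragment below is only a hedged sketch of the per-individual work such an experiment parallelizes, not the paper's implementation: a generational GA evaluates a fitness function independently for every individual, so on a GPU each individual typically maps to one thread. It is plain C++, and the population size, genome length, and OneMax-style fitness function are assumptions.

// Hedged sketch of generational-GA fitness evaluation (not the paper's code).
// On a GPU, the body of the outer loop would typically run as one thread per
// individual; here it is shown as a plain C++ loop over the population.
#include <vector>
#include <cstdio>

int main() {
    const int pop = 8, genes = 16;                       // assumed sizes
    std::vector<int> genome(pop * genes, 1);             // toy population (all ones)
    std::vector<int> fitness(pop, 0);

    for (int i = 0; i < pop; ++i) {                      // one GPU thread per individual
        int f = 0;
        for (int g = 0; g < genes; ++g)
            f += genome[i * genes + g];                  // OneMax-style fitness: count of 1s
        fitness[i] = f;
    }

    std::printf("fitness[0] = %d\n", fitness[0]);        // expect 16
    return 0;
}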
Dec, 15

Bamboo: Automatic Translation of MPI Source into a Latency-Tolerant Form

Communication remains a significant barrier to scalability on distributed-memory systems. At present, the trend in architectural system design, which focuses on enhancing node performance, exacerbates the communication problem, since the relative cost of communication grows as the computation rate increases. This problem will be more pronounced at the exascale, where computational rates will be orders […]
Dec, 14

Heuristics for Conversion Process of GPU’s Kernels for Multiples Kernels with Concurrent Optimization Divergence

Graphics Processing Units were created with the objective of accelerating the construction and processing of graphical images. Over the course of their evolution, and owing to their large inherent computational capacity, these devices came to be used for general-purpose computation. However, the design of GPUs does not cope well with divergent algorithms, mainly those involving conditionals and loops. […]
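To make the divergence problem concrete (a hedged, plain-C++ illustration, not code from the paper): when neighbouring GPU threads take different sides of a data-dependent branch, the hardware serializes both paths. The snippet contrasts a divergent branch with an equivalent selection-based form; the threshold and data are arbitrary.

// Divergent vs. branch-free form of the same computation (illustrative only).
#include <vector>
#include <cstdio>

int main() {
    std::vector<float> in = {0.2f, 0.9f, 0.4f, 0.7f}, out(4);

    // Divergent form: adjacent elements (adjacent GPU threads) branch differently,
    // so on a GPU both paths of the if/else would be executed serially.
    for (size_t i = 0; i < in.size(); ++i) {
        if (in[i] > 0.5f) out[i] = in[i] * 2.0f;
        else              out[i] = in[i] + 1.0f;
    }

    // Branch-free form: both expressions are computed and one result is selected,
    // which typically avoids path serialization at the cost of extra arithmetic.
    for (size_t i = 0; i < in.size(); ++i) {
        float hi = in[i] * 2.0f, lo = in[i] + 1.0f;
        out[i] = (in[i] > 0.5f) ? hi : lo;   // usually compiles to a select/predicated move
    }

    std::printf("out[1] = %f\n", out[1]);    // expect 1.8
    return 0;
}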
Dec, 14

Locality-Aware Automatic Parallelization for GPGPU with OpenHMPP Directives

The use of GPUs for general-purpose computation has increased dramatically in recent years due to the rising demand for computing power and their tremendous computing capacity at low cost. Hence, new programming models have been developed to integrate these accelerators with high-level programming languages, giving rise to heterogeneous computing systems. Unfortunately, this heterogeneity […]
Dec, 14

Acceleration of Hessenberg Reduction for Nonsymmetric Matrix

The value of finding a general solution to nonsymmetric eigenvalue problems is evident in many areas of engineering and scientific computing, such as reducing noise for a quieter ride in automotive engineering or calculating the natural frequency of a bridge in civil engineering. The main objective of this thesis is to design a […]
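For context, the standard definition (not taken from the thesis itself): Hessenberg reduction factors a nonsymmetric matrix A with an orthogonal Q as

\[
  A = Q H Q^{T}, \qquad h_{ij} = 0 \quad \text{for } i > j + 1,
\]

i.e. H is zero below its first subdiagonal. This is the usual first step of the QR eigenvalue algorithm and lowers the cost of each subsequent QR iteration from O(n^3) to O(n^2).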
Dec, 14

Graph Processing on GPU

Graph mining and data management have become a significant area because more and more new applications to data mining problems in social networking, computational biology, chemical data analysis, and drug discovery keep emerging. Although traditional mining methods have been extended to process graphs, many graph applications still confront huge challenges due to continuous […]
Dec, 13

C++ AMP: Accelerated Massive Parallelism with Microsoft Visual C++

Capitalize on the faster GPU processors in today’s computers with the C++ AMP code library—and bring massive parallelism to your project. With this practical book, experienced C++ developers will learn parallel programming fundamentals with C++ AMP through detailed examples, code snippets, and case studies. Learn the advantages of parallelism and get best practices for harnessing […]
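For readers who have not used C++ AMP, here is a minimal sketch of the programming model the book covers: a vector addition expressed with array_view and parallel_for_each. This is not an example from the book, and it assumes Microsoft Visual C++, which is where the amp.h header ships.

// Minimal C++ AMP sketch (assumes Microsoft Visual C++): element-wise vector add.
#include <amp.h>
#include <vector>
#include <cstdio>
using namespace concurrency;

int main() {
    const int n = 8;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    array_view<const float, 1> av(n, a), bv(n, b);   // wrap host data for the accelerator
    array_view<float, 1> cv(n, c);
    cv.discard_data();                               // no need to copy c to the device

    parallel_for_each(cv.extent, [=](index<1> i) restrict(amp) {
        cv[i] = av[i] + bv[i];                       // one lightweight GPU thread per element
    });

    cv.synchronize();                                // copy results back to the host vector
    std::printf("c[0] = %f\n", c[0]);                // expect 3
    return 0;
}

The restrict(amp) clause marks the lambda as compilable for the accelerator, and array_view manages the host-device copies implicitly.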
