
Posts

Dec, 20

GPU Accelerated PK-means Algorithm for Gene Clustering

In this paper, a novel GPU-accelerated scheme for the PK-means gene clustering algorithm is proposed. According to the native particle-pair structure of the PK-means algorithm, a fragment shader program is tailor-made to process a pair of particles in one pass for the computation-intensive portion. As the output channel of a fragment consisting of 4 […]
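
A rough sketch of the particle-pair idea, in NumPy rather than a fragment shader: two candidate centroid sets ("particles") are evaluated against the gene expression matrix in a single vectorized pass. The function name and array shapes are illustrative assumptions, not the paper's actual kernel.

```python
import numpy as np

def pair_fitness(genes, centroids_a, centroids_b):
    """Evaluate a pair of candidate centroid sets ('particles') in one pass.

    genes:       (n, d) gene expression profiles
    centroids_*: (k, d) cluster centres proposed by each particle
    Returns the k-means objective (sum of squared distances to the nearest
    centre) for each of the two particles.
    """
    both = np.stack([centroids_a, centroids_b])               # (2, k, d)
    diffs = genes[None, :, None, :] - both[:, None, :, :]     # (2, n, k, d)
    d2 = (diffs ** 2).sum(axis=-1)                            # (2, n, k)
    return d2.min(axis=2).sum(axis=1)                         # fitness per particle
```
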
Dec, 20

A Framework for Genetic Algorithms in Parallel Environments

In this research, we developed a framework for executing genetic algorithms (GA) in various parallel environments. Using this framework, GA researchers can prepare implementations of GA operators and fitness functions. We have prepared several communication libraries for various parallel environments. By combining their GA implementations with our libraries, GA researchers can benefit from parallel processing […]
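
To make the plug-in idea concrete, here is a minimal, hypothetical sketch in Python of such a separation of concerns: the user supplies the operators and fitness function, and the runner's `evaluate` argument stands in for the framework's swappable parallel communication layer. None of these names come from the paper.

```python
import random

def evolve(fitness, crossover, mutate, population, generations=100, evaluate=map):
    """Simple generational GA runner.

    `evaluate` defaults to the built-in map; swapping in a parallel map
    (e.g. multiprocessing.Pool.map) changes the execution environment
    without touching the user-supplied operators.
    """
    for _ in range(generations):
        scores = list(evaluate(fitness, population))
        ranked = sorted(zip(scores, population), key=lambda t: t[0], reverse=True)
        parents = [ind for _, ind in ranked[: len(population) // 2]]
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        population = children
    return max(population, key=fitness)
```

With, say, a one-max fitness and bit-flip mutation, the same code runs serially or in parallel; only `evaluate` changes.
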
Dec, 20

Parallel Contour-Buildup Algorithm for the Molecular Surface

Molecular Dynamics simulations are an essential tool for many applications. The simulation of large molecules – like proteins – over long trajectories is of high importance, e.g. for pharmaceutical, biochemical and medical research. For analyzing these data sets, interactive visualization plays a crucial role, as details of the interactions of molecules are often affected […]
Dec, 20

Analysis of GPGPU Platforms Efficiency in General-Purpose Computations

Nowadays, the technique of using graphics processing units (GPUs) for general-purpose computing (GPGPU) is becoming more and more widespread. The goal of this paper is to analyze the efficiency of GPGPU computing as it depends on several factors. This paper analyzes differences in performance and platform organization between widespread […]
Dec, 20

Can CUDA be exposed through web services?

Massively parallel programming is a rapidly growing field with the recent introduction of general-purpose GPU computing. Modern graphics processors from NVIDIA and AMD have massively parallel architectures that can be used for 3D rendering, financial analysis, physics simulations, and biomedical analysis. These massively parallel systems are exposed to programmers through interfaces such as NVIDIA's […]
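
As a rough illustration of the question posed by the title, the following is a hypothetical sketch of a compute kernel exposed behind an HTTP endpoint, using only the Python standard library plus NumPy. The NumPy reduction stands in for a CUDA kernel; a real server would dispatch to the GPU through a binding such as PyCUDA. All names here are assumptions for illustration, not anything described in the paper.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
import numpy as np

class KernelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read a JSON payload of the form {"data": [...numbers...]}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        data = np.asarray(payload["data"], dtype=np.float64)
        result = (data * data).sum()          # stand-in for the GPU kernel
        body = json.dumps({"result": float(result)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # clients without local GPU hardware POST work to this endpoint
    HTTPServer(("localhost", 8080), KernelHandler).serve_forever()
```
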
Dec, 20

Parsing in Parallel on Multiple Cores and GPUs

This paper examines the ways in which parallelism can be used to speed up the parsing of dense PCFGs. We focus on two kinds of parallelism here: Symmetric Multi-Processing (SMP) parallelism on shared-memory multicore CPUs, and Single-Instruction Multiple-Thread (SIMT) parallelism on GPUs. We describe how to achieve speed-ups over an already very efficient baseline parser using […]
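
For context, the core of most PCFG parsers is a CKY-style dynamic program whose inner loops over split points and rule applications are the natural targets for SMP and SIMT parallelism. The sketch below is a plain serial CKY in Python over a grammar in Chomsky normal form; it is not the paper's parser, and the data-structure layout is an assumption.

```python
import math
from collections import defaultdict

def cky_parse(words, lexicon, rules):
    """Probabilistic CKY over a grammar in Chomsky normal form.

    lexicon: {word: [(tag, log_prob), ...]}
    rules:   {(B, C): [(A, log_prob), ...]}   for binary rules A -> B C
    Returns chart[i][j] = {symbol: best log probability over words[i:j]}.
    """
    n = len(words)
    chart = [[defaultdict(lambda: -math.inf) for _ in range(n + 1)]
             for _ in range(n + 1)]
    for i, w in enumerate(words):
        for tag, lp in lexicon.get(w, []):
            chart[i][i + 1][tag] = max(chart[i][i + 1][tag], lp)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                 # split point
                for B, lp_b in chart[i][k].items():
                    for C, lp_c in chart[k][j].items():
                        for A, lp_r in rules.get((B, C), []):
                            score = lp_b + lp_c + lp_r
                            if score > chart[i][j][A]:
                                chart[i][j][A] = score
    return chart
```
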
Dec, 20

The Future in Mobile Multicore Computing

Mobile computers are an essential part of consumer technology, and we are fast approaching a future where all mobile computers contain general-purpose GPUs (GPGPUs) and multicore CPUs. We describe this development as Mobile Multicore Computing (MMC). In this paper, we discuss the importance of MMC, as well as three critical issues associated […]
Dec, 20

Floating-point Mixed-radix FFT Core Generation for FPGA and Comparison with GPU and CPU

Over the past decades, we have seen huge advances in FPGA technology. The topic of floating-point acceleration on FPGAs has gained renewed interest due to increased device sizes and the emergence of fast hardware floating-point libraries. The popularity of the FFT makes it easier to justify spending significant effort on detailed optimization. However, the ever […]
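
As a reference point for what a mixed-radix FFT computes, here is a short, unoptimized decimation-in-time sketch in Python that factors the transform length by its smallest prime at each stage. It is a functional illustration only and bears no relation to the paper's generated FPGA cores; for composite lengths it agrees with numpy.fft.fft up to rounding.

```python
import cmath

def smallest_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

def mixed_radix_fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    n1 = smallest_factor(n)      # radix chosen for this stage
    n2 = n // n1
    # recursively transform the n1 interleaved subsequences (decimation in time)
    subs = [mixed_radix_fft(x[r::n1]) for r in range(n1)]
    out = [0j] * n
    for k in range(n):
        out[k] = sum(cmath.exp(-2j * cmath.pi * r * k / n) * subs[r][k % n2]
                     for r in range(n1))
    return out
```
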
Dec, 19

Fast Random Graph Generation

Today, several database applications call for the generation of random graphs. A fundamental, versatile random graph model adopted for that purpose is the Erdős-Rényi Γv,p model. This model can be used for directed, undirected, and multipartite graphs, with and without self-loops; it induces algorithms for both graph generation and sampling, hence is useful not only […]
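
For intuition on why naive Γv,p generation is slow and how fast generators avoid it, the sketch below uses the well-known geometric-skipping idea (as in Batagelj and Brandes, 2005): instead of flipping a coin for every candidate pair, it jumps directly to the next edge. This is a standard technique and is not claimed to be the algorithm proposed in the paper.

```python
import math
import random

def er_edges(v, p, seed=None):
    """Sample an undirected Erdos-Renyi graph on v vertices with edge
    probability p (0 < p < 1), in time proportional to the number of edges."""
    assert 0.0 < p < 1.0
    rng = random.Random(seed)
    log_q = math.log(1.0 - p)
    edges = []
    i, j = 1, -1                      # walk the upper triangle row by row
    while i < v:
        r = rng.random()
        # geometric skip: candidate pairs rejected before the next edge
        j += 1 + int(math.log(1.0 - r) / log_q)
        while j >= i and i < v:       # carry the skip into the next row
            j -= i
            i += 1
        if i < v:
            edges.append((i, j))
    return edges
```
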
Dec, 19

GPU-Accelerated Preconditioned Iterative Linear Solvers

This work is an overview of our preliminary experience in developing high-performance iterative linear solvers accelerated by GPU co-processors. Our goal is to illustrate the advantages and difficulties encountered when deploying GPU technology to perform sparse linear algebra computations. Techniques for speeding up sparse matrix-vector product (SpMV) kernels and finding suitable preconditioning methods are discussed. […]
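
To show where SpMV and the preconditioner sit in such a solver, here is a small conjugate-gradient sketch with a Jacobi (diagonal) preconditioner over SciPy CSR matrices. It illustrates the structure only; the paper's solvers, storage formats, and preconditioners may differ.

```python
import numpy as np
import scipy.sparse as sp

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a Jacobi preconditioner for SPD A in CSR format.

    The dominant per-iteration cost is one sparse matrix-vector product (SpMV),
    which is the kernel GPU implementations focus on accelerating.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    inv_diag = 1.0 / A.diagonal()          # Jacobi preconditioner M^-1
    z = inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p                         # SpMV
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# example: 1D Poisson system
n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = jacobi_pcg(A, b)
```
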
Dec, 19

3D Recursive Gaussian IIR on GPU and FPGAs: A Case Study for Accelerating Bandwidth-Bounded Applications

GPU devices typically have higher off-chip bandwidth than FPGA-based systems. Thus, GPUs should typically perform better for bandwidth-bounded massively parallel applications. In this paper, we present our implementations of a 3D recursive Gaussian IIR on multicore CPU, many-core GPU and multi-FPGA platforms. Our baseline implementation on the CPU features the smallest arithmetic computation (2 […]
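
To illustrate why such a filter is bandwidth-bounded, the sketch below applies a first-order causal/anti-causal recursive smoothing pass along each axis of a 3D volume: each sweep does little arithmetic per element but streams the entire volume through memory. The first-order filter is a simplified stand-in, not the higher-order recursive Gaussian coefficients the paper implements.

```python
import numpy as np

def recursive_smooth_1d(x, alpha):
    # causal pass followed by an anti-causal pass of a first-order IIR filter
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    z = np.empty_like(y)
    z[-1] = y[-1]
    for i in range(len(y) - 2, -1, -1):
        z[i] = alpha * y[i] + (1 - alpha) * z[i + 1]
    return z

def recursive_smooth_3d(volume, alpha=0.3):
    # apply the 1D recursive filter along each of the three axes in turn;
    # every sweep touches the whole volume, so memory bandwidth dominates
    out = volume.astype(np.float64, copy=True)
    for axis in range(3):
        out = np.apply_along_axis(recursive_smooth_1d, axis, out, alpha)
    return out
```
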
Dec, 19

An Efficient Simulation Environment for Modeling Large-Scale Cortical Processing

We have developed a spiking neural network simulator that is both easy to use and computationally efficient for the generation of large-scale computational neuroscience models. The simulator implements current- or conductance-based Izhikevich neuron networks with spike-timing-dependent plasticity and short-term plasticity. It uses a standard network construction interface. The simulator allows for execution on […]
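For reference, the Izhikevich neuron model named above is defined by v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with the reset v <- c, u <- u + d whenever v crosses 30 mV. A vectorized Euler step over a population might look like the sketch below; the dt and parameter defaults (regular-spiking cells) are standard textbook values, and the function name is illustrative rather than taken from the simulator.

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Advance a population of Izhikevich neurons by one Euler step of dt ms.

    v, u, I are 1D arrays over neurons: membrane potential (mV), recovery
    variable, and input current. Returns updated v, u and a boolean spike mask.
    """
    fired = v >= 30.0                       # neurons that spiked this step
    v = np.where(fired, c, v)               # reset membrane potential
    u = np.where(fired, u + d, u)           # bump recovery variable
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    return v, u, fired
```
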
