
Posts

Dec, 10

Achieving High Throughput Sequencing with Graphics Processing Units

High-throughput sequencing has become a powerful technique for genome analysis since the concept emerged in recent years. Currently, there is a huge demand from patients with genetic diseases that cannot be satisfied because of limited computational power. Though several software packages have been developed using the most efficient current algorithms to deal with […]
Dec, 10

Particle Simulation on a GPU with PyCUDA

This report is on a small test problem within the context of a larger long-term research project. GPUs are increasingly popular for particle methods, due to the readily apparent parallelism inherent to N-Body problems. Particle-In-Cell is a popular scheme for exploring systems in plasma physics. We hope to explore a small sample problem in order […]
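
As a point of reference for the kind of computation involved, below is a minimal CPU sketch in NumPy of a few leapfrog steps of a direct N-body interaction; the report itself offloads this work to PyCUDA kernels, and the array names, softening length, and time step here are illustrative assumptions rather than the report's code.

# Minimal CPU reference for a direct N-body step (illustrative only;
# the report offloads this O(N^2) interaction to PyCUDA kernels).
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    # Pairwise displacement vectors r_j - r_i, shape (N, N, 3).
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    dist2 = np.sum(diff**2, axis=-1) + eps**2            # softened squared distances
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)                         # no self-interaction
    weights = mass[np.newaxis, :, np.newaxis] * inv_d3[:, :, np.newaxis]
    return np.sum(diff * weights, axis=1)                 # a_i = sum_j m_j (r_j - r_i) / |r_ij|^3

rng = np.random.default_rng(0)
N = 256
pos = rng.standard_normal((N, 3))
vel = np.zeros((N, 3))
mass = np.full(N, 1.0 / N)

dt = 1e-3
for _ in range(10):                                       # kick-drift-kick leapfrog steps
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)
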
Dec, 10

Real-time intraoperative full-range complex FD-OCT guided cerebral blood vessel identification and brain tumor resection in neurosurgery

This work utilized an ultra-high-speed, full-range, complex-conjugate-free Fourier-domain optical coherence tomography (FD-OCT) system to perform real-time intraoperative imaging guiding two common neurosurgical procedures: cerebral blood vessel identification and brain tumor resection. The cerebral blood vessel identification experiment was conducted ex vivo on a human cadaver specimen. Specific cerebral arteries and veins in different positions […]
Dec, 10

Local Laplacian Filters: Edge-aware Image Processing with a Laplacian Pyramid

The Laplacian pyramid is ubiquitous for decomposing images into multiple scales and is widely used for image analysis. However, because it is constructed with spatially invariant Gaussian kernels, the Laplacian pyramid is widely believed to be unable to represent edges well and to be ill-suited for edge-aware operations such as edge-preserving smoothing and tone mapping. […]
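
For orientation, a short NumPy sketch of the standard Gaussian/Laplacian pyramid construction with a fixed, spatially invariant kernel follows; the 5-tap binomial kernel and the level count are illustrative choices, not parameters from the paper.

# Minimal NumPy sketch of a Laplacian pyramid built with a fixed
# (spatially invariant) 5-tap binomial kernel; kernel and level count
# are illustrative choices, not the paper's parameters.
import numpy as np

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def blur(img):
    # Separable convolution: filter rows, then columns.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, KERNEL, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, KERNEL, mode="same"), 0, tmp)

def upsample(img, shape):
    # Nearest-neighbour upsampling followed by a blur (simple stand-in
    # for the usual interpolation step).
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return blur(up)

def laplacian_pyramid(img, levels=4):
    gaussians = [img]
    for _ in range(levels - 1):
        gaussians.append(blur(gaussians[-1])[::2, ::2])          # blur + decimate
    laplacians = [g - upsample(gaussians[i + 1], g.shape)
                  for i, g in enumerate(gaussians[:-1])]
    return laplacians + [gaussians[-1]]                          # low-pass residual at the end

image = np.random.default_rng(1).random((64, 64))
print([level.shape for level in laplacian_pyramid(image)])
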
Dec, 10

A New Morphological Anomaly Detection Algorithm for Hyperspectral Images and its GPU Implementation

Anomaly detection is considered a very important task for hyperspectral data exploitation. It is now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, and forestry. Many of these applications require timely responses for swift decisions, which depend on high-performance execution of the analysis algorithms. However, with the recent […]
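
As a rough illustration of what anomaly detection on a hyperspectral cube computes, here is the classical RX (Reed-Xiaoli) detector in NumPy; this is a well-known baseline only, not the new morphological algorithm proposed in the paper, whose details are not included in this excerpt.

# Illustrative baseline: the classical RX (Reed-Xiaoli) anomaly detector,
# NOT the morphological algorithm proposed in the paper.
import numpy as np

def rx_scores(cube):
    # cube: (rows, cols, bands) -> Mahalanobis distance of each pixel
    # spectrum from the global background statistics.
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    centered = X - mu
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(rows, cols)

cube = np.random.default_rng(2).random((32, 32, 50))
print(rx_scores(cube).shape)   # (32, 32); large values flag anomalous pixels
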
Dec, 10

Fast and Robust Pyramid-based Image Processing

Multi-scale manipulations are central to image editing, but they are also prone to halos. Achieving artifact-free results requires sophisticated edge-aware techniques and careful parameter tuning. These shortcomings were recently addressed by the local Laplacian filters, which can achieve a broad range of effects using standard Laplacian pyramids. However, these filters are slow to evaluate and […]
Dec, 10

Real-time dual-mode standard/complex Fourier-domain OCT system using graphics processing unit accelerated 4D signal processing and visualization

We realized a real-time dual-mode standard/complex Fourier-domain optical coherence tomography (FD-OCT) system using graphics processing unit (GPU) accelerated 4D (3D+time) signal processing and visualization. For both standard and complex FD-OCT modes, the signal processing tasks were implemented on a dual-GPU architecture that included lambda-to-k spectral re-sampling, fast Fourier transform (FFT), modified Hilbert transform, logarithmic scaling, and […]
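
For readers unfamiliar with the stages listed above, a CPU-side NumPy sketch of a single A-line passing through lambda-to-k resampling, FFT, and logarithmic scaling is shown below; the modified Hilbert transform used for the complex (full-range) mode is omitted, the wavelength range is an illustrative assumption, and the actual system implements these stages as GPU kernels on a dual-GPU pipeline.

# CPU sketch of the per-A-line processing chain named in the abstract
# (lambda-to-k resampling, FFT, logarithmic scaling); the real system
# runs these stages as GPU kernels, and the wavelength range below is
# an illustrative assumption.
import numpy as np

def process_a_line(spectrum, lam):
    # 1. Resample from evenly spaced wavelength (lambda) to evenly
    #    spaced wavenumber (k = 2*pi/lambda).
    k = 2 * np.pi / lam
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    resampled = np.interp(k_uniform, k[::-1], spectrum[::-1])    # k decreases as lambda increases
    # 2. FFT along the spectral axis gives the depth profile.
    depth_profile = np.fft.fft(resampled - resampled.mean())
    # 3. Logarithmic scaling for display; only half the FFT is kept in
    #    standard FD-OCT because of the complex-conjugate mirror image.
    half = depth_profile[: depth_profile.size // 2]
    return 20 * np.log10(np.abs(half) + 1e-12)

lam = np.linspace(800e-9, 880e-9, 2048)         # illustrative source bandwidth
spectrum = np.random.default_rng(3).random(2048)
print(process_a_line(spectrum, lam).shape)      # (1024,)
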
Dec, 10

Fast and Memory-Efficient Minimum Spanning Tree on the GPU

The GPU is an efficient accelerator for regular data-parallel workloads, but GPU acceleration is more difficult for graph algorithms and other applications with irregular memory access patterns and large memory footprints. The Minimum Spanning Tree (MST) problem arises in a variety of applications and its solution exemplifies the difficulties of mapping irregular algorithms to the […]
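
Many data-parallel MST implementations build on Boruvka's algorithm; since this excerpt does not show the paper's exact formulation, the following is a plain sequential Boruvka sketch in Python for orientation only, with all names and the toy graph being illustrative.

# Sequential sketch of Boruvka's algorithm, the formulation most GPU MST
# implementations parallelise; the paper's exact scheme is not shown here.
def boruvka_mst(num_vertices, edges):
    # edges: list of (weight, u, v); returns total MST weight.
    parent = list(range(num_vertices))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, components = 0.0, num_vertices
    while components > 1:
        cheapest = {}                  # component root -> lightest outgoing edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, ru, rv)
        if not cheapest:
            break                      # graph is disconnected
        for w, ru, rv in cheapest.values():
            if find(ru) != find(rv):   # components may already have merged this round
                parent[find(ru)] = find(rv)
                total += w
                components -= 1
    return total

edges = [(1.0, 0, 1), (2.0, 1, 2), (3.0, 0, 2), (0.5, 2, 3)]
print(boruvka_mst(4, edges))           # 3.5
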
Dec, 10

Multi-Science Applications with Single Codebase – GAMER – for Massively Parallel Architectures

The growing need for power-efficient, extreme-scale, high-performance computing (HPC), coupled with plateauing clock speeds, is driving the emergence of massively parallel compute architectures. Tens to many hundreds of cores are increasingly made available as compute units, either as an integral part of the main processor or as coprocessors designed for handling massively parallel workloads. In […]
Dec, 10

Playdoh: A lightweight Python library for distributed computing and optimisation

Parallel computing is now an essential paradigm for high performance scientific computing. Most existing hardware and software solutions are expensive or difficult to use. We developed Playdoh, a Python library for distributing computations across the free computing units available in a small network of multicore computers. Playdoh supports independent and loosely coupled parallel problems such […]
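
To illustrate the class of independent, embarrassingly parallel problems such a library targets, here is a sketch using only Python's standard multiprocessing module; this is not Playdoh's API, which is not shown in this excerpt.

# Illustrative only: distributing an embarrassingly parallel parameter
# sweep over local cores with the standard-library multiprocessing
# module (NOT Playdoh's API).
from multiprocessing import Pool

def evaluate(params):
    x, y = params
    return (x - 1.0) ** 2 + (y + 2.0) ** 2     # toy objective to minimise

if __name__ == "__main__":
    grid = [(x * 0.5, y * 0.5) for x in range(-10, 11) for y in range(-10, 11)]
    with Pool() as pool:
        costs = pool.map(evaluate, grid)       # independent evaluations run in parallel
    best = min(zip(costs, grid))
    print("best cost %.3f at %s" % best)
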
Dec, 9

Load Balancing Utilizing Data Redundancy in Distributed Volume Rendering

In interactive volume rendering, the cost of rendering a certain block of the volume varies strongly with dynamically changing parameters (most notably the camera position and orientation). In distributed environments, wherein each compute device renders one block, this potentially causes severe load imbalance. Balancing the load usually induces costly data transfers, causing critical rendering […]
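
One simple way to picture balancing via data redundancy is a greedy heuristic that assigns each block to the least-loaded node already holding a copy of it, avoiding any extra transfer; the sketch below is illustrative only and is not the scheme proposed in the paper.

# Illustrative greedy heuristic (not the paper's scheme): assign each
# volume block to the least-loaded node that already holds a redundant
# copy, so load is balanced without additional data transfers.
def balance(block_costs, replicas, num_nodes):
    # block_costs: {block_id: estimated render cost}
    # replicas:    {block_id: set of node ids holding a copy}
    load = [0.0] * num_nodes
    assignment = {}
    # Place expensive blocks first so they do not pile up on one node.
    for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
        node = min(replicas[block], key=lambda n: load[n])
        assignment[block] = node
        load[node] += cost
    return assignment, load

costs = {"b0": 5.0, "b1": 4.0, "b2": 3.0, "b3": 1.0}
replicas = {"b0": {0, 1}, "b1": {1, 2}, "b2": {0, 2}, "b3": {0, 1, 2}}
assignment, load = balance(costs, replicas, num_nodes=3)
print(assignment, load)
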
Dec, 9

A design tool for efficient mapping of multimedia applications onto heterogeneous platforms

Development of multimedia systems on heterogeneous platforms is a challenging task with existing design tools due to a lack of rigorous integration between high-level abstract modeling and low-level synthesis and analysis. In this paper, we present a new dataflow-based design tool, called the targeted dataflow interchange format (TDIF), for design, analysis, and implementation […]

* * *


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
