
Posts

Oct, 20

A prototyping environment for high performance reconfigurable computing

In the face of the power wall and high performance requirements, designers of hardware architectures are turning more and more towards reconfigurable computing and the use of heterogeneous CPU/FPGA systems. In such architectures, multi-core processors provide high computation rates while the reconfigurable logic offers high performance per watt and adaptability to the application constraints. However, […]
Oct, 19

An Efficient Stream Buffer Mechanism for Dataflow Execution on Heterogeneous Platforms with GPUs

The move towards heterogeneous parallel computing is underway as witnessed by the emergence of novel computing platforms combining architecturally diverse components such as CPUs, GPUs and special function units. We approach mapping of streaming applications onto heterogeneous architectures using a Process Network (PN) model of computation. In this paper, we present an approach for exploiting […]
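As a rough illustration of the Process Network model of computation the excerpt refers to (not the paper's stream buffer mechanism itself), the Scala sketch below connects a producer and a consumer process through one bounded FIFO channel; the buffer capacity and all names are placeholders. Blocking put/take on the channel gives the back-pressured dataflow semantics that PN scheduling relies on.

```scala
import java.util.concurrent.ArrayBlockingQueue

// A minimal two-process network: a producer and a consumer thread connected
// by one bounded FIFO buffer. put() blocks when the buffer is full and
// take() blocks when it is empty, i.e. the blocking-channel semantics of
// the Process Network model. Capacity and names are illustrative.
object PnSketch {
  def main(args: Array[String]): Unit = {
    val channel = new ArrayBlockingQueue[Int](8) // bounded stream buffer

    val producer = new Thread(() => {
      for (i <- 1 to 100) channel.put(i) // blocks while the buffer is full
      channel.put(-1)                    // end-of-stream marker
    })

    val consumer = new Thread(() => {
      var token = channel.take()         // blocks while the buffer is empty
      while (token != -1) {
        println(s"consumed $token")
        token = channel.take()
      }
    })

    producer.start(); consumer.start()
    producer.join(); consumer.join()
  }
}
```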
Oct, 19

The Potential for a GPU-Like Overlay Architecture for FPGAs

We propose a soft processor programming model and architecture inspired by graphics processing units (GPUs) that are well-matched to the strengths of FPGAs, namely, highly parallel and pipelinable computation. In particular, our soft processor architecture exploits multithreading, vector operations, and predication to supply a floating-point pipeline of 64 stages via hardware support for up to […]
Oct, 19

Designing the Language Liszt for Building Portable Mesh-based PDE Solvers

Complex physical simulations have driven the need for exascale computing, but reaching exascale will require more power-efficient supercomputers. Heterogeneous hardware offers one way to increase efficiency, but is difficult to program and lacks a unifying programming model. Abstracting problems at the level of the domain rather than the hardware offers an alternative approach. In this paper […]
Oct, 19

10×10: A General-purpose Architectural Approach to Heterogeneity and Energy Efficiency

Two decades of microprocessor architecture driven by quantitative 90/10 optimization has delivered an extraordinary 1000-fold improvement in microprocessor performance, enabled by transistor scaling which improved density, speed, and energy. Recent generations of technology have produced limited benefits in transistor speed and power, and as a result the industry has turned to multicore parallelism for performance […]
Oct, 19

Heterogeneous Accelerated Bioinformatics-Perspectives for Cancer Research

The demand for ever higher performance in bioinformatics data analysis continues to grow rapidly as the volumes of data generated by next generation sequencing equipment soar. Acceleration techniques historically used to speed up bioinformatics applications will individually be insufficient to meet the demand and the increased analysis complexity, requiring an integrated heterogeneous accelerated computing environment. Current […]
Oct, 19

A Code Transformation Framework for Scientific Applications on Structured Grids

The combination of expert-tuned code expression and aggressive compiler optimizations is known to deliver the best achievable performance for modern multicore processors. The development and maintenance of these optimized code expressions are never trivial: tedious and error-prone processes greatly decrease the code developer’s willingness to adopt manually-tuned optimizations. In this paper, we describe a pre-compilation […]
Oct, 19

Pricing composable contracts on the GP-GPU

We present a language for specifying stochastic processes, called SPL. We show that SPL can express the price of a range of financial contracts, including so-called exotic options with path dependence and multiple sources of uncertainty. Jones, Eber and Seward previously presented a language for writing down financial contracts in a compositional manner […]
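For readers unfamiliar with the compositional contract style the excerpt builds on, the Scala sketch below shows a toy contract algebra in that spirit; the constructor names and the zcb helper are assumptions for illustration only, not SPL's actual constructs.

```scala
// Toy compositional contract algebra: small primitives, larger products
// built by combining them. All names here are illustrative placeholders.
object ContractSketch {
  sealed trait Contract
  case object Zero                            extends Contract // no rights, no obligations
  case class One(currency: String)            extends Contract // receive one unit of `currency` now
  case class Scale(k: Double, c: Contract)    extends Contract // multiply every payment of c by k
  case class Both(c1: Contract, c2: Contract) extends Contract // hold c1 and c2 together
  case class At(t: Double, c: Contract)       extends Contract // acquire c at time t

  // Example of compositionality: a zero-coupon bond paying `notional`
  // units of `ccy` at time `t`, expressed purely with the primitives above.
  def zcb(t: Double, notional: Double, ccy: String): Contract =
    At(t, Scale(notional, One(ccy)))

  // A small portfolio built from two bonds.
  val portfolio: Contract = Both(zcb(1.0, 100.0, "USD"), zcb(2.0, 50.0, "EUR"))
}
```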
Oct, 19

AeminiumGPU: A CPU-GPU Hybrid Runtime for the Aeminium Language

Given that CPU clock speeds are stagnating, programmers are resorting to parallelism to improve the performance of their applications. Although such parallelism has usually been attained with multicore architectures, multiple CPUs, or clusters of machines, the GPU has more recently been adopted as an alternative. GPUs are an interesting resource because they can provide much […]
Oct, 19

The GPU Computing Revolution: From Multi-Core CPUs To Many-Core Graphics Processors

Computer architectures are undergoing their most radical change in a decade. In the past, processor performance has been improved largely by increasing clock speed: the faster the clock speed, the faster a processor can execute instructions, and thus the greater the performance that is delivered to the end user. This drive to greater and greater […]
Oct, 19

GPU Parallel Collections For Scala

A decade ago, graphics processing units were used exclusively for high-speed graphics. More recently, they have become popular as general purpose parallel processors. With the release of CUDA, ATI Stream and OpenCL, programmers can now split their program execution between the CPU and the GPU, whenever appropriate, resulting in huge performance gains. The cost of […]
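The sketch below shows the standard CPU-side Scala parallel collections API that this line of work generalizes: the same map/reduce-style calls, with a GPU backend offloading the closures to the device instead of to CPU threads. It assumes the scala-parallel-collections module on Scala 2.13+ and does not reproduce the paper's own GPU-backed API.

```scala
// CPU-side Scala parallel collections; requires the
// scala-parallel-collections module on Scala 2.13+.
import scala.collection.parallel.CollectionConverters._

object ParDemo {
  def main(args: Array[String]): Unit = {
    val xs = (1 to 1000000).toVector

    // .par splits the collection and applies the closure on multiple CPU
    // cores; a GPU-backed collection would offload the same map/reduce
    // pattern to the device.
    val sumOfSquares = xs.par.map(x => x.toLong * x).sum
    println(sumOfSquares)
  }
}
```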
Oct, 18

Enabling Traceability in an MDE Approach to Improve Performance of GPU Applications

Graphics Processor Units (GPUs) are known for offering high performance and power efficiency for algorithms that are well suited to their massively parallel architecture. Unfortunately, as parallel programming for this kind of architecture requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although approaches based on source-to-source and model-to-source transformations have […]

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
