
Decoupling algorithms from the organization of computation for high performance image processing

Jonathan Ragan-Kelley (advisors: Frédo Durand, Saman Amarasinghe)
Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology, 2014

@phdthesis{ragan2014decoupling,
   title={Decoupling algorithms from the organization of computation for high performance image processing},
   author={Ragan-Kelley, Jonathan Millard},
   year={2014},
   school={Massachusetts Institute of Technology}
}


Future graphics and imaging applications, from self-driving cars to 4D light field cameras to pervasive sensing, demand orders of magnitude more computation than we currently have. This thesis argues that the efficiency and performance of an application are determined not only by the algorithm and the hardware architecture on which it runs, but critically also by the organization of computations and data on that architecture. Real graphics and imaging applications appear embarrassingly parallel, but have complex dependencies, and are limited by locality (the distance over which data has to move, e.g., from nearby caches or from distant main memory) and synchronization. Increasingly, the cost of communication, both within a chip and over a network, dominates computation and power consumption, and limits the gains realized from shrinking transistors.

Driven by these trends, writing high-performance image processing code is challenging because it requires global reorganization of computations and data, not simply local optimization of an inner loop. Existing programming languages make it difficult for clear and composable code to express optimized organizations because they conflate the intrinsic algorithm being defined with its organization.

To address the challenge of productively building efficient, high-performance programs, this thesis presents the Halide language and compiler for image processing. Halide explicitly separates the computations that define an algorithm from the choices of execution structure that determine parallelism, locality, memory footprint, and synchronization. For image processing algorithms with the same complexity (even the exact same set of arithmetic operations and data), executing on the same hardware, the order and granularity of execution and the placement of data can easily change performance by an order of magnitude because of locality and parallelism.

I will show that, for the data-parallel pipelines common in graphics, imaging, and other data-intensive applications, the organization of computations and data for a given algorithm is constrained by a fundamental tension between parallelism, locality, and redundant computation of shared values. I will present a systematic model of "schedules" which explicitly trade off these pressures by globally reorganizing the computations and data for an entire pipeline, and an optimizing compiler that synthesizes high-performance implementations from a Halide algorithm and a schedule. The end result is much simpler programs, delivering performance often many times faster than the best prior hand-tuned C, assembly, and CUDA implementations, while scaling across radically different architectures, from ARM mobile processors to massively parallel GPUs.
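To make the algorithm/schedule separation concrete, below is a minimal sketch in Halide's C++ embedding of a two-pass 3x3 box blur: the first two function definitions are the algorithm, and the calls to tile, vectorize, parallel, and compute_at are one possible schedule. The image size, tile sizes, and vector width are illustrative assumptions chosen for this sketch, not values taken from the thesis.

#include "Halide.h"
using namespace Halide;

int main() {
    // Input image with some placeholder data (sizes are arbitrary for this sketch).
    Buffer<uint16_t> input(1024, 1024);
    for (int yy = 0; yy < input.height(); yy++)
        for (int xx = 0; xx < input.width(); xx++)
            input(xx, yy) = (uint16_t)(xx + yy);

    // The algorithm: what each pixel's value is, with no ordering,
    // storage, or parallelism choices.
    Func clamped = BoundaryConditions::repeat_edge(input);
    Func blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");
    blur_x(x, y) = (clamped(x - 1, y) + clamped(x, y) + clamped(x + 1, y)) / 3;
    blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

    // The schedule: one possible organization of that computation.
    // Tile the output, vectorize within rows, run tiles in parallel,
    // and compute blur_x per tile, trading a little redundant work at
    // tile edges for better producer-consumer locality.
    blur_y.tile(x, y, xi, yi, 256, 32)
          .vectorize(xi, 8)
          .parallel(y);
    blur_x.compute_at(blur_y, x)
          .vectorize(x, 8);

    Buffer<uint16_t> out = blur_y.realize({1024, 1024});
    return 0;
}

Changing only the schedule lines (for example, computing blur_x at the root of the pipeline, or picking different tile sizes) shifts the balance between parallelism, locality, and redundant computation without touching the algorithm, which is exactly the trade-off space the thesis's schedule model captures.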
