Raising the level of many-core programming with compiler technology: meeting a grand challenge

Wen-mei Hwu
University of Illinois at Urbana-Champaign, Urbana-Champaign, IL, USA
In PACT ’10: Proceedings of the 19th international conference on Parallel architectures and compilation techniques (2010), pp. 5-6.

@conference{hwu2010raising,
   title={Raising the level of many-core programming with compiler technology: meeting a grand challenge},
   author={Hwu, Wen-mei},
   booktitle={PACT '10: Proceedings of the 19th International Conference on Parallel Architectures and Compilation Techniques},
   pages={5--6},
   year={2010},
   organization={ACM}
}

Modern GPUs and CPUs are massively parallel, many-core processors. While application developers for these many-core chips are reporting 10X-100X speedups over sequential code on traditional microprocessors, the current practice of many-core programming based on OpenCL, CUDA, and OpenMP puts a strain on software development, testing, and support teams. According to the semiconductor industry roadmap, these processors could scale up to more than 1,000X speedup over single cores by the end of 2016. Such a dramatic performance gap between parallel and sequential execution will motivate an increasing number of developers to parallelize their applications. Today, application programmers have to understand the desirable parallel programming idioms, manually work around potential hardware performance pitfalls, and restructure their application designs in order to meet their performance objectives on many-core processors. In this presentation, I will discuss why advanced compiler functionalities have not found traction with the developer communities, what the industry is doing today to try to address these challenges, and how the academic community can contribute to this exciting revolution.
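
To make the "manual work around hardware performance pitfalls" concrete, here is a minimal CUDA sketch, not taken from the talk: a straightforward matrix-multiply kernel alongside a shared-memory tiled variant. The tile size, kernel names, and overall structure are illustrative assumptions; they show the kind of hand restructuring the abstract describes, where a memory-hierarchy optimization roughly doubles the code and adds synchronization and tuning burdens that a higher-level compiler could in principle shoulder.

// Illustrative sketch only; TILE and kernel names are assumptions, not from the source.
#include <cuda_runtime.h>

#define TILE 16

// Naive version: each thread streams a full row of A and column of B from
// global memory, so neighboring threads re-fetch the same data repeatedly.
__global__ void matmul_naive(const float *A, const float *B, float *C, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float acc = 0.0f;
        for (int k = 0; k < n; ++k)
            acc += A[row * n + k] * B[k * n + col];
        C[row * n + col] = acc;
    }
}

// Tiled version: the thread block cooperatively stages TILE x TILE sub-matrices
// in on-chip shared memory, cutting global-memory traffic by roughly a factor
// of TILE at the cost of extra code, __syncthreads() barriers, and tile tuning.
__global__ void matmul_tiled(const float *A, const float *B, float *C, int n) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;
    for (int t = 0; t < (n + TILE - 1) / TILE; ++t) {
        int aCol = t * TILE + threadIdx.x;
        int bRow = t * TILE + threadIdx.y;
        // Zero-pad loads at the matrix boundary so partial tiles stay correct.
        As[threadIdx.y][threadIdx.x] = (row < n && aCol < n) ? A[row * n + aCol] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] = (bRow < n && col < n) ? B[bRow * n + col] : 0.0f;
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    if (row < n && col < n)
        C[row * n + col] = acc;
}

Either kernel would be launched with a grid of (n+TILE-1)/TILE x (n+TILE-1)/TILE blocks of TILE x TILE threads; the point of the sketch is that choosing between them, and picking TILE, is exactly the kind of per-hardware decision currently left to the programmer.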