Proteus: Efficient Resource Use in Heterogeneous Architectures

Sankaralingam Panneerselvam, Michael M. Swift
Computer Sciences Department, University of Wisconsin-Madison, 2016

@article{panneerselvam2016proteus,
   title={Proteus: Efficient Resource Use in Heterogeneous Architectures},
   author={Panneerselvam, Sankaralingam and Swift, Michael M},
   year={2016}
}

Current processors provide a variety of processing units to improve performance and power efficiency. For example, ARM's big.LITTLE, AMD's APUs, and Oracle's M7 provide heterogeneous cores, on-die GPUs, and on-die accelerators, respectively. However, the performance that programs experience on these accelerators can be highly variable due to issues like contention from multiprogramming or thermal constraints. In such systems, the decision of where to execute a task must consider not only standalone performance but also current system conditions and the program's performance goals, such as throughput, latency, or real-time deadlines. We built Proteus, a kernel extension and runtime library, to perform scheduling and handle task placement in such dynamic heterogeneous systems. Proteus lets programs make task placement decisions by choosing the right processing units through the libadept runtime, since programs are best suited to determine how to achieve their performance goals. System conditions, such as the load on accelerators, are exposed by the OpenKernel, the kernel component of Proteus, to libadept, enabling programs to make informed decisions. While placement is determined by libadept, the OpenKernel remains responsible for resource management, including resource allocation and enforcing isolation among applications. When integrated with StarPU, a runtime system for heterogeneous architectures, Proteus performs 1.5-2x better than StarPU's native scheduling policies in a shared heterogeneous environment.
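
To make the division of labor in the abstract concrete, the sketch below illustrates the general pattern it describes: the kernel publishes per-unit load, and an application-level runtime combines that load with the task's standalone speedup to pick a processing unit. This is a minimal, hypothetical illustration; the names (pu_status, place_task, the score formula) and the simple contention model are assumptions of this sketch, not the actual libadept or OpenKernel interfaces, which the paper itself defines.

/*
 * Hypothetical sketch of application-level task placement under
 * kernel-exposed load. None of these names come from Proteus; the
 * real libadept/OpenKernel APIs are described in the paper.
 */
#include <stdio.h>

enum pu_kind { PU_BIG_CORE, PU_LITTLE_CORE, PU_GPU, NUM_PU };

/* Assumed: the kernel exposes current utilization per processing
 * unit, e.g., through a read-only shared page. */
struct pu_status {
    enum pu_kind kind;
    double       utilization;  /* 0.0 = idle, 1.0 = saturated */
    double       speedup;      /* task's standalone speedup vs. a little core */
};

/* Pick the unit with the best expected throughput once current
 * contention is factored in: score = speedup * (1 - load). */
static enum pu_kind place_task(const struct pu_status status[NUM_PU])
{
    enum pu_kind best = PU_LITTLE_CORE;
    double best_score = 0.0;
    for (int i = 0; i < NUM_PU; i++) {
        double score = status[i].speedup * (1.0 - status[i].utilization);
        if (score > best_score) {
            best_score = score;
            best = status[i].kind;
        }
    }
    return best;
}

int main(void)
{
    /* Example snapshot: the GPU is fastest standalone (8x) but
     * heavily shared, so the big core wins under contention. */
    struct pu_status status[NUM_PU] = {
        { PU_BIG_CORE,    0.30, 3.0 },
        { PU_LITTLE_CORE, 0.10, 1.0 },
        { PU_GPU,         0.90, 8.0 },
    };
    printf("chosen unit: %d\n", place_task(status));
    return 0;
}

The linear score here is a deliberately simple stand-in; per the abstract, Proteus's placement also weighs application-specific goals such as latency or real-time deadlines, not just throughput under load.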
