Software Platform for Hybrid Resource Management of Many-core Accelerators
Department of Electrical Engineering and Computer Science, College of Engineering, Seoul National University
Seoul National University, 2018
@phdthesis{kim2018software,
  title={Software Platform for Hybrid Resource Management of Many-core Accelerators},
  author={Kim, Taeyoung},
  year={2018},
  school={Seoul National University}
}
Modern embedded systems are characterized by the ever-increasing computational demand of a workload mix of concurrent applications. In response to this trend, many-core accelerators are becoming more popular in high-end embedded systems. However, embedded systems are subject to many constraints that general-purpose computers are not, such as limited computing power, the lack of an operating system, and restrictions on power consumption. Embedded applications also often have throughput or latency constraints, which must be taken into account when managing resources. To utilize a many-core accelerator efficiently, concurrent applications should be able to share its resources; distributing resources among concurrent applications, however, is a difficult problem in embedded systems. The system status may change dynamically due to factors such as workload variation and changes in QoS requirements, and the architectural features of many-core accelerators make the problem even more complex. A variety of resource management techniques have recently been proposed to handle such variations in system status, but they have limitations: they usually assume a specific target many-core architecture and mapping scheme, and they cannot provide any QoS guarantee.

In this dissertation, we propose a software platform for hybrid resource management of embedded many-core accelerators. First, we present the software platform, which supports various types of many-core accelerators, based on a hybrid resource management technique. The proposed platform provides a seamless design flow from a programming front-end, which automatically generates dataflow-style function code from the task specification, to a run-time environment, which adaptively manages compute resources for concurrent applications in response to changes in system status. The platform has been implemented on two different many-core architectures: the Xeon Phi coprocessor and an Epiphany-like NoC virtual prototype.

In the second part of this dissertation, we propose an extended resource management scheme that organizes multiple managers in a distributed and hierarchical way, and we develop a run-time resource manager for the Kalray MPPA-256, a state-of-the-art cluster-based many-core accelerator. The proposed manager performs run-time core-to-application mapping to handle concurrent application workloads adaptively as the system status changes. We evaluate the impact of the scheme's parameters on performance and their trade-offs through extensive design space exploration based on an analytical performance model. Experimental results show that the proposed platform adapts effectively to run-time workload variation with an affordable run-time resource management overhead.
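To make the idea of run-time core-to-application mapping concrete, the sketch below shows one simple instance of adaptive resource redistribution in C. It is not the dissertation's implementation and does not use any API of the platform, the Kalray MPPA-256 toolchain, or the Xeon Phi runtime; the types and policy (a hypothetical manager that periodically reassigns a fixed pool of cores in proportion to each application's measured load) are illustrative assumptions only.

/* Illustrative sketch only: a hypothetical central resource manager that
 * redistributes a fixed pool of cores among concurrent applications in
 * proportion to their measured load. app_t, measured_load, and the
 * proportional policy are assumptions, not part of the proposed platform. */
#include <stdio.h>

#define NUM_CORES 16
#define NUM_APPS  3

typedef struct {
    const char *name;
    double measured_load;   /* e.g., observed utilization or backlog */
    int assigned_cores;     /* cores granted in the current period */
} app_t;

/* Redistribute cores proportionally to load; every app keeps at least one core. */
static void remap_cores(app_t apps[], int num_apps, int total_cores)
{
    double total_load = 0.0;
    for (int i = 0; i < num_apps; i++)
        total_load += apps[i].measured_load;

    int remaining = total_cores;
    for (int i = 0; i < num_apps; i++) {
        int share = (int)(total_cores * apps[i].measured_load / total_load);
        if (share < 1)
            share = 1;
        if (share > remaining - (num_apps - 1 - i))
            share = remaining - (num_apps - 1 - i);
        apps[i].assigned_cores = share;
        remaining -= share;
    }
    apps[num_apps - 1].assigned_cores += remaining; /* leftover cores to the last app */
}

int main(void)
{
    app_t apps[NUM_APPS] = {
        { "video_decode",  0.6, 0 },
        { "object_detect", 0.3, 0 },
        { "logging",       0.1, 0 },
    };

    /* One management period: in a real system this would run in a loop,
     * triggered by workload variation or QoS requirement changes. */
    remap_cores(apps, NUM_APPS, NUM_CORES);

    for (int i = 0; i < NUM_APPS; i++)
        printf("%-13s -> %d cores\n", apps[i].name, apps[i].assigned_cores);
    return 0;
}

In the platform described in the abstract, such decisions would be made by distributed, hierarchically organized managers reacting to workload and QoS changes; the proportional rule above is only a placeholder policy to show the shape of the mapping step.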
December 16, 2018 by hgpu