A parallel evolutionary algorithm to optimize dynamic memory managers in embedded systems
Department of Computer Architecture and Automation, Universidad Complutense de Madrid, 28040 Madrid, Spain
Parallel Computing, Volume 36, Issues 10-11, October-November 2010, Pages 572-590
@article{risco2010parallel,
title={A parallel evolutionary algorithm to optimize dynamic memory managers in embedded systems},
author={Risco-Mart{\'i}n, J.L. and Atienza, D. and Colmenar, J.M. and Garnica, O.},
journal={Parallel Computing},
issn={0167-8191},
year={2010},
publisher={Elsevier}
}
For the last 30 years, several dynamic memory managers (DMMs) have been proposed. Such DMMs include first fit, best fit, segregated fit and buddy systems. Since the performance, memory usage and energy consumption of each DMM differ, software engineers often face difficult choices in selecting the most suitable approach for their applications. This issue has a special impact in the field of portable consumer embedded systems, which must execute a limited set of multimedia applications (e.g., 3D games, video players, signal processing software, etc.) that demand high performance and extensive memory usage at low energy consumption. Recently, we have developed a novel methodology based on genetic programming to automatically design custom DMMs, optimizing performance, memory usage and energy consumption. However, although this process is automatic and faster than state-of-the-art optimizations, it demands intensive computation, resulting in a time-consuming process. Thus, parallel processing can be very useful to explore more solutions in the same amount of time, as well as to implement new algorithms. In this paper we present a novel parallel evolutionary algorithm for DMM optimization in embedded systems, based on the Discrete Event System Specification (DEVS) formalism over a Service Oriented Architecture (SOA) framework. Parallelism significantly improves the performance of the sequential exploration algorithm. On the one hand, when the number of generations is the same in both approaches, our parallel optimization framework reaches a speed-up of 86.40× compared with other state-of-the-art approaches. On the other hand, it improves the global quality (i.e., level of performance, low memory usage and low energy consumption) of the final DMM obtained by 36.36% with respect to two well-known general-purpose DMMs and two state-of-the-art optimization methodologies.
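The abstract describes the approach only at a high level. As a rough illustration of what a parallel evolutionary search over DMM configurations can look like, the C++ sketch below evaluates a population of candidate managers concurrently across threads and selects on a weighted combination of performance, memory usage and energy. The `DMMCandidate` encoding, the weights and the `simulate` stub are illustrative assumptions, not the paper's actual grammar-based design flow or its DEVS/SOA framework.

```cpp
// Minimal sketch (assumptions throughout): parallel fitness evaluation of
// candidate DMM designs, followed by truncation selection and mutation.
#include <algorithm>
#include <random>
#include <thread>
#include <vector>

struct DMMCandidate {            // simplified encoding of one custom DMM design
    int numBins;                 // number of segregated-fit bins
    int blockSize;               // base block size in bytes
    bool useCoalescing;          // merge adjacent free blocks on deallocation
    double fitness = 0.0;        // lower is better
};

// Hypothetical stub: a real flow would replay the application's allocation
// trace against the candidate DMM to measure time, footprint and energy.
static double simulate(const DMMCandidate& c) {
    double execTime = 1.0 / (1 + c.numBins) + 0.01 * c.blockSize;
    double memUsage = c.useCoalescing ? 0.8 : 1.0;
    double energy   = 0.5 * execTime + 0.5 * memUsage;
    return 0.4 * execTime + 0.3 * memUsage + 0.3 * energy;  // assumed weights
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> bins(1, 32), blocks(8, 256);
    std::bernoulli_distribution coin(0.5);

    std::vector<DMMCandidate> pop(64);
    for (auto& c : pop) c = {bins(rng), blocks(rng), coin(rng)};

    for (int gen = 0; gen < 50; ++gen) {
        // Evaluate the population in parallel: one thread per chunk.
        const unsigned nThreads =
            std::max(1u, std::thread::hardware_concurrency());
        const size_t chunk = (pop.size() + nThreads - 1) / nThreads;
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < nThreads; ++t) {
            workers.emplace_back([&, t] {
                const size_t lo = t * chunk;
                const size_t hi = std::min(pop.size(), lo + chunk);
                for (size_t i = lo; i < hi; ++i)
                    pop[i].fitness = simulate(pop[i]);
            });
        }
        for (auto& w : workers) w.join();

        // Keep the best half, refill by mutating survivors
        // (crossover and tournament selection omitted for brevity).
        std::sort(pop.begin(), pop.end(),
                  [](const DMMCandidate& a, const DMMCandidate& b) {
                      return a.fitness < b.fitness;
                  });
        for (size_t i = pop.size() / 2; i < pop.size(); ++i) {
            pop[i] = pop[i - pop.size() / 2];
            pop[i].numBins =
                std::clamp(pop[i].numBins + int(bins(rng)) % 3 - 1, 1, 32);
        }
    }
    return 0;
}
```

In the paper itself the parallel workers are distributed as services (DEVS over SOA) rather than threads in a single process; the sketch only conveys the evaluate-in-parallel, select, mutate structure of the evolutionary loop.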
January 22, 2011 by hgpu