Low-overhead diskless checkpoint for hybrid computing systems
Tokyo Institute of Technology, Tokyo, Japan
International Conference on High Performance Computing (HiPC), 2010
@inproceedings{gomez2011low,
title={Low-overhead diskless checkpoint for hybrid computing systems},
author={Gomez, L.B. and Nukada, A. and Maruyama, N. and Cappello, F. and Matsuoka, S.},
booktitle={High Performance Computing (HiPC), 2010 International Conference on},
pages={1--10},
year={2011},
organization={IEEE}
}
As new supercomputers scale to tens of thousands of sockets, the mean time between failures (MTBF) is decreasing to just a few hours, and long-running executions need some form of fault tolerance to survive failures. Checkpoint/Restart is a popular technique for this purpose, but writing the state of a large scientific application to remote storage will become prohibitively expensive in the near future. Diskless checkpoint was proposed as a solution to avoid the I/O bottleneck of disk-based checkpoint. However, the complex, time-consuming encoding techniques it relies on hinder its scalability. At the same time, heterogeneous computing is becoming increasingly popular in high performance computing (HPC), with new clusters combining CPUs and graphics processing units (GPUs). However, hybrid applications cannot always use all the resources available on a node, leaving some resources, such as GPUs or CPU cores, idle. In this work, we propose a hybrid diskless checkpoint (HDC) technique for GPU-accelerated clusters that can checkpoint CPU/GPU applications, does not require spare nodes, and can tolerate up to 50% of process failures with a low, sometimes negligible, checkpoint overhead.
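To illustrate the kind of encoding work that diskless checkpointing offloads to idle resources, here is a minimal CUDA sketch, not from the paper: a kernel that folds a checkpoint into a group parity buffer on an otherwise idle GPU. Buffer names, sizes, and the choice of plain XOR parity are illustrative assumptions.

// A minimal sketch of the encoding step behind diskless checkpointing:
// a node folds a checkpoint into a group parity buffer on an idle GPU,
// so that a lost checkpoint can be rebuilt from the survivors.
// Illustrative only; buffer names, sizes, and the use of XOR parity
// are assumptions, not the paper's implementation.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void xor_encode(const unsigned int *ckpt,
                           unsigned int *parity,
                           size_t n_words)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n_words)
        parity[i] ^= ckpt[i];  // accumulate XOR parity word by word
}

int main(void)
{
    const size_t n_words = 1 << 20;          // 4 MiB checkpoint, for example
    const size_t bytes   = n_words * sizeof(unsigned int);

    unsigned int *h_ckpt = (unsigned int *)malloc(bytes);
    for (size_t i = 0; i < n_words; ++i)
        h_ckpt[i] = (unsigned int)i;         // stand-in for application state

    unsigned int *d_ckpt, *d_parity;
    cudaMalloc(&d_ckpt, bytes);
    cudaMalloc(&d_parity, bytes);
    cudaMemset(d_parity, 0, bytes);          // parity starts at zero
    cudaMemcpy(d_ckpt, h_ckpt, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks  = (int)((n_words + threads - 1) / threads);
    // In a real system this launch would repeat once per checkpoint
    // received from each peer in the node's encoding group.
    xor_encode<<<blocks, threads>>>(d_ckpt, d_parity, n_words);
    cudaDeviceSynchronize();

    unsigned int first;                      // sanity check: inspect one parity word
    cudaMemcpy(&first, d_parity, sizeof(first), cudaMemcpyDeviceToHost);
    printf("parity[0] = 0x%08x\n", first);

    cudaFree(d_ckpt);
    cudaFree(d_parity);
    free(h_ckpt);
    return 0;
}

XOR parity of this form tolerates one lost checkpoint per encoding group; tolerating up to 50% of process failures, as the paper claims, calls for a stronger erasure code, but the offloading pattern, running the encoding on a GPU the application leaves idle, is the same.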
July 20, 2011 by hgpu