Scalable Multi-Cache Simulation Using GPUs
M. Moeng, S. Cho, R. Melhem
Department of Computer Science, University of Pittsburgh
IEEE 19th International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS), 2011
@inproceedings{moeng2011scalable,
  title={Scalable Multi-Cache Simulation Using GPUs},
  author={Moeng, M. and Cho, S. and Melhem, R.},
  booktitle={Modeling, Analysis \& Simulation of Computer and Telecommunication Systems (MASCOTS), 2011 IEEE 19th International Symposium on},
  pages={159--167},
  year={2011},
  organization={IEEE}
}
Software simulation is the primary tool for evaluating processor designs. Simulation offers better accuracy than analytical models and is an important evaluation step before actually fabricating a chip. Unfortunately, simulators are slow: a conventional cycle-accurate simulator cannot keep up with the increasing core counts of modern processor designs. Parallel simulation is one method for improving simulation speed. Two major areas of parallel simulation research are multithreaded simulators and FPGA-based simulation accelerators. Multithreaded simulators can only extract coarse-grained parallelism and must sacrifice accuracy in order to scale well. FPGA-based simulators can extract fine-grained parallelism, but are expensive and difficult to program. We propose using GPUs for architectural simulation, since they can exploit a high degree of fine-grained parallelism while being inexpensive and easier to program than FPGAs. To demonstrate our ideas, we implement a trace-driven many-cache simulator using NVIDIA's CUDA toolkit. GPU-accelerated cache simulation shows remarkable scaling with the number of simulated caches when compared to serial CPU-only simulation.
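The abstract does not include implementation details, but the core idea of mapping many independent simulated caches onto GPU threads can be sketched in CUDA. Below is a minimal, hypothetical example, not the paper's actual simulator: each thread replays a shared address trace against its own direct-mapped cache and counts misses. All names and parameters (simulate_caches, NUM_SETS, NUM_CACHES, and so on) are illustrative assumptions; the paper models more realistic cache configurations.

// Minimal sketch of trace-driven many-cache simulation in CUDA.
// Each thread simulates one independent direct-mapped cache over a
// shared address trace and counts misses. Illustrative only; it is
// not the simulator described in the paper.
#include <cstdio>
#include <cstdlib>
#include <cstdint>
#include <cuda_runtime.h>

#define NUM_CACHES 1024   // simulated caches (one per thread)
#define NUM_SETS   256    // sets per direct-mapped cache
#define LINE_BITS  6      // 64-byte cache lines
#define TRACE_LEN  4096   // accesses in the shared address trace

__global__ void simulate_caches(const uint64_t *trace, int trace_len,
                                uint64_t *tags, unsigned *misses)
{
    int cache = blockIdx.x * blockDim.x + threadIdx.x;
    if (cache >= NUM_CACHES) return;

    uint64_t *my_tags = tags + (size_t)cache * NUM_SETS;
    unsigned miss_count = 0;

    for (int i = 0; i < trace_len; ++i) {
        uint64_t line = trace[i] >> LINE_BITS;  // strip block-offset bits
        int set = (int)(line % NUM_SETS);
        uint64_t tag = line / NUM_SETS;
        if (my_tags[set] != tag) {              // miss: fill the line
            my_tags[set] = tag;
            ++miss_count;
        }
    }
    misses[cache] = miss_count;
}

int main()
{
    uint64_t *trace, *tags;
    unsigned *misses;
    cudaMallocManaged(&trace, TRACE_LEN * sizeof(uint64_t));
    cudaMallocManaged(&tags, (size_t)NUM_CACHES * NUM_SETS * sizeof(uint64_t));
    cudaMallocManaged(&misses, NUM_CACHES * sizeof(unsigned));

    // Synthetic trace of random word-aligned addresses.
    for (int i = 0; i < TRACE_LEN; ++i)
        trace[i] = (uint64_t)(rand() % (1 << 20)) << 3;
    // A tag no trace address can produce, so all sets start "empty".
    for (size_t i = 0; i < (size_t)NUM_CACHES * NUM_SETS; ++i)
        tags[i] = ~0ULL;

    int threads = 128;
    int blocks = (NUM_CACHES + threads - 1) / threads;
    simulate_caches<<<blocks, threads>>>(trace, TRACE_LEN, tags, misses);
    cudaDeviceSynchronize();

    printf("cache 0: %u misses out of %d accesses\n", misses[0], TRACE_LEN);

    cudaFree(trace); cudaFree(tags); cudaFree(misses);
    return 0;
}

Compiled with nvcc, a sketch like this scales the simulated-cache count simply by raising NUM_CACHES: each cache's work is fully independent, which is the kind of fine-grained parallelism the abstract credits GPUs with exploiting.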