Towards acceleration of fault simulation using graphics processing units
Department of ECE, Texas A&M University, College Station, TX 77843
In DAC ’08: Proceedings of the 45th annual Design Automation Conference (2008), pp. 822-827
@inproceedings{gulati2008towards,
  title={Towards acceleration of fault simulation using graphics processing units},
  author={Gulati, K. and Khatri, S.P.},
  booktitle={Design Automation Conference, 2008. DAC 2008. 45th ACM/IEEE},
  pages={822--827},
  issn={0738-100X},
  year={2008},
  organization={IEEE}
}
In this paper, we explore the implementation of fault simulation on a Graphics Processing Unit (GPU). In particular, we implement a fault simulator that exploits thread-level parallelism. Fault simulation is inherently parallelizable, and the large number of threads that a GPU can execute in parallel makes it a natural fit for the problem of fault simulation. Our implementation fault-simulates all the gates in a particular level of a circuit, including both good and faulty circuit simulations, for all patterns in parallel. Since GPUs have extremely high memory bandwidth, we implement each of our fault simulation threads (which execute in parallel with no data dependencies) using memory lookup. Fault injection is also performed along with gate evaluation, with each thread using a different fault injection mask. All threads execute identical instructions, but on different data, as required by the Single Instruction Multiple Data (SIMD) programming semantics of the GPU. Our results, obtained on an NVIDIA GeForce 8800 GTX card, indicate that our approach is on average 35x faster than a commercial fault simulation engine. With the recently announced Tesla GPU servers housing up to eight GPUs, our approach could potentially be 238x faster. The correctness of the GPU-based fault simulator has been verified by comparing its results with those of a CPU-based fault simulator.
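The core mechanics described in the abstract (truth-table lookup for gate evaluation, with fault injection applied in the same thread via a per-thread mask) can be illustrated with a minimal CUDA sketch. This is not the authors' kernel: the gate encoding, the one-byte signal representation, and the evalGatesLUT name are assumptions made for illustration. A real fault simulator would pack many patterns per machine word and precompute each mask so it flips the output only where the injected fault actually changes the value.

// Minimal CUDA sketch (illustrative, not the paper's implementation):
// one thread per (gate, pattern) pair, evaluated by table lookup, with
// the faulty copy obtained by XOR-ing a per-thread fault-injection mask.

#include <cstdio>
#include <cuda_runtime.h>

// 2-input gate truth tables packed into 4 bits, indexed by gate type.
// Bit (2*a + b) of the table gives the gate output for inputs (a, b).
__constant__ unsigned char kTruthTable[4] = {
    0x8,  // AND : 1 only for inputs (1,1)
    0xE,  // OR
    0x6,  // XOR
    0x7   // NAND
};

__global__ void evalGatesLUT(const unsigned char* gateType,
                             const unsigned char* inA,
                             const unsigned char* inB,
                             const unsigned char* faultMask,
                             unsigned char* goodOut,
                             unsigned char* faultyOut,
                             int n)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n) return;

    // Table lookup instead of per-gate branching keeps all threads on
    // the same instruction stream, matching the SIMD constraint above.
    unsigned char idx  = (unsigned char)((inA[t] << 1) | inB[t]);
    unsigned char good = (kTruthTable[gateType[t]] >> idx) & 1u;

    goodOut[t]   = good;
    faultyOut[t] = good ^ faultMask[t];  // inject the fault as a flip
}

int main() {
    const int n = 4;
    unsigned char hType[n] = {0, 1, 2, 3};   // AND, OR, XOR, NAND
    unsigned char hA[n]    = {1, 0, 1, 1};
    unsigned char hB[n]    = {1, 1, 1, 0};
    unsigned char hMask[n] = {0, 0, 1, 0};   // fault injected at gate 2 only
    unsigned char hGood[n], hFaulty[n];

    unsigned char *dType, *dA, *dB, *dMask, *dGood, *dFaulty;
    cudaMalloc(&dType, n); cudaMalloc(&dA, n); cudaMalloc(&dB, n);
    cudaMalloc(&dMask, n); cudaMalloc(&dGood, n); cudaMalloc(&dFaulty, n);
    cudaMemcpy(dType, hType, n, cudaMemcpyHostToDevice);
    cudaMemcpy(dA, hA, n, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, n, cudaMemcpyHostToDevice);
    cudaMemcpy(dMask, hMask, n, cudaMemcpyHostToDevice);

    evalGatesLUT<<<1, 64>>>(dType, dA, dB, dMask, dGood, dFaulty, n);
    cudaMemcpy(hGood, dGood, n, cudaMemcpyDeviceToHost);
    cudaMemcpy(hFaulty, dFaulty, n, cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i)
        printf("gate %d: good=%d faulty=%d\n", i, hGood[i], hFaulty[i]);
    return 0;
}

Modelling fault injection as an XOR with a precomputed mask is one simple way to realize "fault injection along with gate evaluation": a mask of 0 leaves the good value untouched, so the same branch-free kernel serves both the good and faulty circuit copies.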