
SCGPSim: A fast SystemC simulator on GPUs

Mahesh Nanjundappa, Hiren D. Patel, Bijoy A. Jose, Sandeep K. Shukla
FERMAT Lab, Virginia Tech, Blacksburg, VA, USA
15th Asia and South Pacific Design Automation Conference (ASP-DAC), 2010

@inproceedings{nanjundappa2010scgpsim,
   title={SCGPSim: A fast SystemC simulator on GPUs},
   author={Nanjundappa, M. and Patel, H.D. and Jose, B.A. and Shukla, S.K.},
   booktitle={Proceedings of the 2010 Asia and South Pacific Design Automation Conference},
   pages={149--154},
   year={2010},
   organization={IEEE Press}
}


The main objective of this paper is to speed up the simulation of SystemC designs at the RTL abstraction level by exploiting the high degree of parallelism afforded by today's general-purpose graphics processing units (GPGPUs). Our approach parallelizes SystemC's discrete-event simulation (DES) on GPGPUs by transforming the model of computation of DES into a model of concurrent threads that synchronize as and when necessary. Unlike the cooperative threading model employed in the SystemC reference implementation, our threading model is capable of executing in parallel on the large number of simple processing units available on GPUs. Our simulation infrastructure, called SCGPSim, includes a source-to-source (S2S) translator that transforms synthesizable SystemC models into parallel programs targeting NVIDIA GPUs. The translator retains the simulation semantics of the original designs by applying semantics-preserving transformations. Mapping the transformed models onto the massively parallel architecture of GPUs improves simulation efficiency substantially. Preliminary experiments with varying-sized examples such as AES, ALU, and FIR have shown simulation speed-ups ranging from 30x to 100x. Since our transformations are not yet optimized, we believe that optimizing them will improve simulation performance even further.
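The general idea behind such a transformation can be illustrated with a minimal CUDA sketch. This is our assumption of the shape of the approach, not SCGPSim's actual generated code; all names here (eval_phase, acc_cur, acc_nxt, N) are illustrative. Each CUDA thread plays the role of one SystemC process, signals are double-buffered so the evaluate phase reads only "current" values, and a kernel-launch boundary serves as the global barrier that ends a delta cycle, with the host-side buffer swap standing in for the update phase.

// Minimal sketch (illustrative, not SCGPSim's generated output):
// one thread per SystemC-style process, double-buffered signals,
// kernel-launch boundary as the delta-cycle barrier.

#include <cuda_runtime.h>
#include <cstdio>

#define N 256   // number of parallel process instances (illustrative)

// One multiply-accumulate "process" per thread, loosely in the spirit
// of the FIR benchmark: acc' = acc + coef * x.
__global__ void eval_phase(const int *coef, const int *x,
                           const int *acc_cur, int *acc_nxt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N)
        acc_nxt[i] = acc_cur[i] + coef[i] * x[i];  // evaluate: read cur, write nxt
}

int main() {
    int hcoef[N], hx[N], hacc[N];
    for (int i = 0; i < N; ++i) { hcoef[i] = i; hx[i] = 2; }

    int *dcoef, *dx, *dcur, *dnxt;
    cudaMalloc(&dcoef, N * sizeof(int));
    cudaMalloc(&dx,    N * sizeof(int));
    cudaMalloc(&dcur,  N * sizeof(int));
    cudaMalloc(&dnxt,  N * sizeof(int));
    cudaMemcpy(dcoef, hcoef, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dx,    hx,    N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(dcur, 0, N * sizeof(int));    // signals start at 0

    for (int cycle = 0; cycle < 3; ++cycle) {
        eval_phase<<<(N + 127) / 128, 128>>>(dcoef, dx, dcur, dnxt);
        cudaDeviceSynchronize();              // end of delta cycle
        int *tmp = dcur; dcur = dnxt; dnxt = tmp;  // update phase: commit writes
    }

    cudaMemcpy(hacc, dcur, N * sizeof(int), cudaMemcpyDeviceToHost);
    printf("acc[1] after 3 cycles = %d\n", hacc[1]);  // expect 3 * 1 * 2 = 6

    cudaFree(dcoef); cudaFree(dx); cudaFree(dcur); cudaFree(dnxt);
    return 0;
}

Double-buffering the signal store is one straightforward way to keep parallel evaluate-phase writes from being observed early; a real translator would also have to handle event-driven sensitivity and convergence across multiple delta cycles within a timestep, which this sketch omits.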
