Graphics Processing Unit Utilization in Circuit Simulation

Mikko Hulkkonen
Department of Radio Science and Technology, School of Electrical Engineering, Aalto University, 2011


@mastersthesis{Hulkkonen2011,
   title={Graphics Processing Unit Utilization in Circuit Simulation},
   author={Hulkkonen, M.},
   school={School of Electrical Engineering, Aalto University},
   year={2011}
}





Graphics processing units (GPUs) today comprise hundreds of multithreaded, multicore processors and a complex, high-bandwidth memory architecture, making them a good alternative for speeding up general-purpose parallel computation in which large quantities of data are processed with the same functions. Successful applications of GPU computation have also been reported in the field of circuit simulation. The objective of this thesis is to examine the GPU's computing potential in the APLAC circuit simulation software; the realization of a diode model on a GPU device is also presented. The nonlinear diode model was implemented on NVIDIA's Compute Unified Device Architecture (CUDA), which is a single-instruction, multiple-thread (SIMT) architecture. The CUDA device was programmed using the CUDA C application programming interface, an extension of the standard C language. The test results revealed that, owing to the diode's simple nonlinearity, its evaluation is computationally too light to gain any speed benefit from the GPU's computing power. The required modifications to the circuit analysis structure and data handling resulted in a marginally longer total simulation time than initially. However, when the diode model is made more complex by repeating its evaluation, the CUDA implementation is faster than the original model; this gives a rough estimate of how complex a model must be to benefit from GPU computation. Although the diode model evaluation was not faster on the GPU, the implementation is a good foundation for future CUDA applications in APLAC. The next of these will be the computationally more complex BSIM3 transistor model, which will most likely benefit from the computing power of GPU devices.
