
Reliability modeling of MEMS devices on CUDA based HPC setup

Rohit Pathak, Satyadhar Joshi
Acropolis Institute of Technology & Research, Indore, India
First Asian Himalayas International Conference on Internet, 2009. AH-ICI 2009

@inproceedings{pathak2009reliability,
   title={Reliability Modeling of MEMS Devices on CUDA Based HPC Setup},
   author={Pathak, R. and Joshi, S.},
   booktitle={Internet, 2009. AH-ICI 2009. First Asian Himalayas International Conference on},
   pages={1--5},
   year={2009},
   organization={IEEE}
}


In this paper, we review developments in CUDA and the implementation, on a CUDA setup, of the various distributions used in reliability modeling of MEMS-based devices. These distributions can be highly optimized so that the system can be simulated efficiently on CUDA. We show that the distribution may range from exponential to binomial to others proposed more recently. Reliability modeling codes to calculate the reliability function, failure rate function, mean time to failure (MTTF), and mean residual time (MRT) are proposed for MEMS technology; these specific calculations need to be performed at scale, and CUDA plays a very important role in them. We observe that high performance computing (HPC) can be used to optimize reliability calculation and help accelerate research in MEMS reliability. The three key abstractions of CUDA, i.e. the hierarchy of thread groups, shared memories, and barrier synchronization, are exposed as a set of extensions to the C language, which provides fine-grained data parallelism and thread parallelism nested within coarse-grained data parallelism and task parallelism. The key is the division of the reliability-analysis computations into coarse sub-problems that can be solved independently in parallel, and then into finer pieces that can be executed in parallel with mutual cooperation among them. By allowing threads to solve each sub-problem cooperatively, this decomposition preserves the expressivity of the language. Each sub-problem can then be scheduled on any of the available processor cores, allowing transparent scalability: the computations of reliability analysis can be performed by a compiled CUDA program that executes on any number of GPU cores. During programming we need not know the exact configuration; only the runtime system needs to know the physical processor count.

