A Switched Dynamical System Framework for Analysis of Massively Parallel Asynchronous Numerical Algorithms

Kooktae Lee, Raktim Bhattacharya, Vijay Gupta
Department of Aerospace Engineering, Texas A&M University, College Station, TX 77843-3141, USA
arXiv:1503.03952 [cs.SY], 13 Mar 2015

@article{lee2015switched,
   title={A Switched Dynamical System Framework for Analysis of Massively Parallel Asynchronous Numerical Algorithms},
   author={Lee, Kooktae and Bhattacharya, Raktim and Gupta, Vijay},
   year={2015},
   month={mar},
   eprint={1503.03952},
   archivePrefix={arXiv},
   primaryClass={cs.SY}
}


In the near future, massively parallel computing systems will be necessary to solve computation-intensive applications. The key bottleneck in massively parallel implementations of numerical algorithms is the synchronization of data across processing elements (PEs) after each iteration, which results in significant idle time. There is therefore a trend toward relaxing the synchronization and adopting an asynchronous model of computation to reduce idle time. However, it is not clear what effect this relaxation has on the stability and accuracy of the numerical algorithm. In this paper we present a new framework to analyze such algorithms. We treat the computation in each PE as a dynamical system and model the asynchrony as stochastic switching. The overall system is then analyzed as a switched dynamical system. However, modeling massively parallel numerical algorithms as switched dynamical systems results in a very large number of modes, which makes the current analysis tools for such systems computationally intractable. We develop new techniques that circumvent this scalability issue. The framework is demonstrated on a one-dimensional heat equation as a case study, and the proposed analysis is verified by solving the partial differential equation (PDE) on an NVIDIA Tesla GPU with asynchronous communication between cores.
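To make the asynchronous model concrete, below is a minimal Python sketch, not taken from the paper, of a Jacobi-type update for the discretized one-dimensional heat equation in which each grid point reads neighbor values that are stale with some probability. The delay model, probabilities, and parameter values (p_stale, max_delay, r) are illustrative assumptions; the random choice of staleness at every point and iteration plays the role of the stochastic switching between modes that the framework analyzes.

import numpy as np

rng = np.random.default_rng(0)

n, iters = 64, 500        # grid points and iterations (illustrative values)
r = 0.25                  # alpha*dt/dx**2; r <= 0.5 is the synchronous stability limit
p_stale = 0.3             # probability a PE reads a stale neighbor value (assumed)
max_delay = 2             # maximum staleness in iterations (assumed)

# history[d] holds the solution as it was d iterations ago;
# history[0] is the current iterate.
u = np.zeros(n)
u[n // 2] = 1.0           # initial heat spike, zero Dirichlet boundaries
history = [u.copy() for _ in range(max_delay + 1)]

for _ in range(iters):
    u_new = u.copy()
    for i in range(1, n - 1):
        # Each point independently reads fresh (d = 0) or delayed neighbor
        # data -- one random "mode" of the switched dynamical system.
        dl = rng.integers(1, max_delay + 1) if rng.random() < p_stale else 0
        dr = rng.integers(1, max_delay + 1) if rng.random() < p_stale else 0
        left, right = history[dl][i - 1], history[dr][i + 1]
        u_new[i] = u[i] + r * (left - 2.0 * u[i] + right)
    history = [u_new.copy()] + history[:-1]   # shift the delay buffer
    u = u_new

print("max |u| after", iters, "iterations:", float(np.abs(u).max()))

In this toy setting, stability can only be probed empirically, for example by sweeping p_stale and max_delay and checking whether the solution stays bounded; the point of the switched-dynamical-system framework is to answer that question analytically, despite the enormous number of modes induced by the asynchrony.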
