
Achieving Speedup in Aggregate Risk Analysis using Multiple GPUs

A. K. Bahl, O. Baltzer, A. Rau-Chaplin, B. Varghese, A. Whiteway
Centre for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad, India
arXiv:1308.2572 [cs.DC] (12 Aug 2013)

@article{2013arXiv1308.2572B,
   author = {{Bahl}, A.~K. and {Baltzer}, O. and {Rau-Chaplin}, A. and {Varghese}, B. and {Whiteway}, A.},
   title = "{Achieving Speedup in Aggregate Risk Analysis using Multiple GPUs}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1308.2572},
   primaryClass = "cs.DC",
   keywords = {Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Computational Engineering, Finance, and Science, Computer Science - Data Structures and Algorithms},
   year = 2013,
   month = aug,
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1308.2572B},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Stochastic simulation techniques employed for the analysis of portfolios of insurance/reinsurance risk, often referred to as 'Aggregate Risk Analysis', can benefit from exploiting state-of-the-art high-performance computing platforms. In this paper, parallel methods to speed up aggregate risk analysis for supporting real-time pricing are explored. An algorithm for analysing aggregate risk is proposed and implemented for multi-core CPUs and for many-core GPUs. Experimental studies indicate that GPUs offer a feasible alternative to traditional high-performance computing systems. A simulation of 1,000,000 trials with 1,000 catastrophic events per trial on a typical exposure set and contract structure is performed in less than 5 seconds on a multiple-GPU platform. The key result is that the multiple-GPU implementation can be used in real-time pricing scenarios, as it is approximately 77x faster than the sequential counterpart implemented on a CPU.
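The computation described above is naturally trial-parallel: each simulated year (trial) is independent, so one GPU thread can walk the catastrophic events of its trial and apply the contract's financial terms. The CUDA kernel below is a minimal single-GPU sketch of that idea, assuming a flat event-loss table and simple per-occurrence deductible/limit terms; the names (aggregateRiskKernel, occDeductible, occLimit) are illustrative and not taken from the paper, and the multi-GPU partitioning of trials across devices is omitted.

// Minimal sketch: one GPU thread per simulation trial. The flat event-loss
// layout and the per-occurrence deductible/limit terms are illustrative
// assumptions, not the paper's actual data structures or contract model.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void aggregateRiskKernel(const float *eventLoss, // trials * eventsPerTrial losses
                                    float *trialLoss,       // one aggregate loss per trial
                                    int trials, int eventsPerTrial,
                                    float occDeductible, float occLimit)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= trials) return;

    float total = 0.0f;
    for (int e = 0; e < eventsPerTrial; ++e) {
        float loss = eventLoss[t * eventsPerTrial + e];
        // Apply hypothetical per-occurrence contract terms.
        loss = fminf(fmaxf(loss - occDeductible, 0.0f), occLimit);
        total += loss;
    }
    trialLoss[t] = total; // aggregate loss over this trial's events
}

int main()
{
    // Demo sizes; the paper simulates 1,000,000 trials x 1,000 events.
    const int trials = 100000, eventsPerTrial = 100;
    const size_t nLoss = (size_t)trials * eventsPerTrial;

    float *hLoss = new float[nLoss];
    for (size_t i = 0; i < nLoss; ++i)
        hLoss[i] = 100.0f + (float)(i % 997); // synthetic event losses

    float *dLoss, *dTrial;
    cudaMalloc((void **)&dLoss, nLoss * sizeof(float));
    cudaMalloc((void **)&dTrial, trials * sizeof(float));
    cudaMemcpy(dLoss, hLoss, nLoss * sizeof(float), cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (trials + threads - 1) / threads;
    aggregateRiskKernel<<<blocks, threads>>>(dLoss, dTrial, trials,
                                             eventsPerTrial, 50.0f, 500.0f);

    float *hTrial = new float[trials];
    cudaMemcpy(hTrial, dTrial, trials * sizeof(float), cudaMemcpyDeviceToHost);
    printf("trial 0 aggregate loss: %.1f\n", hTrial[0]);

    cudaFree(dLoss); cudaFree(dTrial);
    delete[] hLoss; delete[] hTrial;
    return 0;
}

Because the trials are independent, the trial range can also be split across devices (e.g., with cudaSetDevice, one stream per GPU), which is one reason aggregate risk analysis maps well onto a multiple-GPU platform.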