The GPU vs Phi Debate: Risk Analytics Using Many-Core Computing
School of Computer Science, University of St Andrews, UK
arXiv:1501.06326 [cs.DC], (26 Jan 2015)
@article{varghese2015debate,
title={The GPU vs Phi Debate: Risk Analytics Using Many-Core Computing},
author={Varghese, Blesson},
year={2015},
month={jan},
archivePrefix={arXiv},
primaryClass={cs.DC},
doi={10.1016/j.compeleceng.2015.01.012}
}
The risk of reinsurance portfolios covering globally occurring natural catastrophes, such as earthquakes and hurricanes, is quantified by employing simulations. These simulations are computationally intensive and require large amounts of data to be processed. The use of many-core hardware accelerators, such as the Intel Xeon Phi and the NVIDIA Graphics Processing Unit (GPU), is desirable for achieving high-performance risk analytics. In this paper, we set out to investigate how accelerators can be employed in risk analytics, focusing on developing parallel algorithms for Aggregate Risk Analysis, a simulation which computes the Probable Maximum Loss of a portfolio taking both primary and secondary uncertainties into account. The key result is that both hardware accelerators are useful in different contexts: without taking data transfer times into account, the Phi had the lowest execution times when used independently, while the GPU along with a host in a hybrid platform yielded the best performance.
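To make the computation concrete, the sketch below shows a minimal trial-parallel CUDA version of the aggregate loss step: each thread sums the losses of the events occurring in one simulated year and the Probable Maximum Loss is read off as a quantile of the resulting trial losses. This is an illustrative sketch only, not the paper's implementation; the constants (NUM_TRIALS, EVENTS_PER_TRIAL, NUM_EVENTS), the flat elt_loss[] lookup, and the synthetic inputs are assumptions, and the primary/secondary uncertainty sampling and financial terms of the full analysis are omitted.

```cuda
// Minimal sketch of trial-parallel aggregate risk analysis (illustrative only).
#include <cuda_runtime.h>
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

constexpr int NUM_TRIALS       = 1 << 16;  // simulated catastrophe years (assumed)
constexpr int EVENTS_PER_TRIAL = 64;       // events sampled per year (assumed)
constexpr int NUM_EVENTS       = 10000;    // size of the Event Loss Table (assumed)

// One thread per trial: sum the losses of the events occurring in that trial.
__global__ void aggregate_losses(const int   *trial_events,  // [NUM_TRIALS * EVENTS_PER_TRIAL]
                                 const float *elt_loss,      // [NUM_EVENTS] mean loss per event
                                 float       *trial_loss)    // [NUM_TRIALS] output
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= NUM_TRIALS) return;

    float sum = 0.0f;
    for (int e = 0; e < EVENTS_PER_TRIAL; ++e) {
        int event_id = trial_events[t * EVENTS_PER_TRIAL + e];
        sum += elt_loss[event_id];           // ELT lookup (uncertainty sampling omitted)
    }
    trial_loss[t] = sum;                     // aggregate annual loss for this trial
}

int main() {
    std::vector<int>   h_events(NUM_TRIALS * EVENTS_PER_TRIAL);
    std::vector<float> h_elt(NUM_EVENTS);
    std::vector<float> h_loss(NUM_TRIALS);

    // Synthetic inputs: random event ids and arbitrary mean losses.
    for (size_t i = 0; i < h_events.size(); ++i) h_events[i] = rand() % NUM_EVENTS;
    for (int i = 0; i < NUM_EVENTS; ++i)          h_elt[i]   = (float)(rand() % 1000);

    int *d_events; float *d_elt, *d_loss;
    cudaMalloc(&d_events, h_events.size() * sizeof(int));
    cudaMalloc(&d_elt,    h_elt.size()    * sizeof(float));
    cudaMalloc(&d_loss,   h_loss.size()   * sizeof(float));
    cudaMemcpy(d_events, h_events.data(), h_events.size() * sizeof(int),   cudaMemcpyHostToDevice);
    cudaMemcpy(d_elt,    h_elt.data(),    h_elt.size()    * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (NUM_TRIALS + threads - 1) / threads;
    aggregate_losses<<<blocks, threads>>>(d_events, d_elt, d_loss);
    cudaMemcpy(h_loss.data(), d_loss, h_loss.size() * sizeof(float), cudaMemcpyDeviceToHost);

    // Probable Maximum Loss at a 1-in-100 return period = 99th percentile of trial losses.
    std::sort(h_loss.begin(), h_loss.end());
    printf("PML (99th percentile): %.1f\n", h_loss[(size_t)(0.99 * NUM_TRIALS)]);

    cudaFree(d_events); cudaFree(d_elt); cudaFree(d_loss);
    return 0;
}
```

Because each trial is independent, the same loop structure maps naturally to OpenMP threads on the Xeon Phi, which is what makes the GPU-versus-Phi comparison in the paper meaningful.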
January 28, 2015 by hgpu