
FAMOUS, faster: using parallel computing techniques to accelerate the FAMOUS/HadCM3 climate model with a focus on the radiative transfer algorithm

P. Hanappe, A. Beurivé, F. Laguzet, L. Steels, N. Bellouin, O. Boucher, Y. H. Yamazaki, T. Aina, M. Allen
Sony Computer Science Laboratory, Paris, France
Geoscientific Model Development, 4, 835-844, 2011

@article{gmd-4-835-2011,

   author={Hanappe, P. and Beurivé, A. and Laguzet, F. and Steels, L. and Bellouin, N. and Boucher, O. and Yamazaki, Y. H. and Aina, T. and Allen, M.},

   title={FAMOUS, faster: using parallel computing techniques to accelerate the FAMOUS/HadCM3 climate model with a focus on the radiative transfer algorithm},

   journal={Geoscientific Model Development},

   volume={4},

   year={2011},

   number={3},

   pages={835–844},

   url={http://www.geosci-model-dev.net/4/835/2011/},

   doi={10.5194/gmd-4-835-2011}

}



We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. A task queue and a thread pool distribute the computation over several processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data (SIMD) operations. The modified algorithm runs more than 50 times faster on the Cell's Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors the new radiation code runs 4 times faster, and on graphics processors, using OpenCL, more than 2.5 times faster than the original code. Because the radiation code takes more than 60% of the total CPU time, FAMOUS as a whole executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach.
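The column-packing idea from the abstract can be sketched in C with SSE intrinsics: interleave four air columns into one structure-of-arrays so that each SIMD operation processes the same vertical level of all four columns at once. This is a minimal illustration, not the authors' code; the structure layout, level count, and the per-level update (a simple scaled multiply standing in for the radiative transfer arithmetic) are all assumptions made for the example.

```c
/* Minimal sketch (not the authors' code) of packing four air columns
   so one SSE operation covers the same level of all four columns. */
#include <emmintrin.h>  /* SSE2 intrinsics */

#define NLEV 4  /* number of vertical levels (illustrative only) */

/* Packed block: element k holds level k of four columns side by side. */
typedef struct {
    __m128 temp[NLEV];   /* input field, 4 columns per level */
    __m128 flux[NLEV];   /* computed output, 4 columns per level */
} packed_columns;

/* Hypothetical per-level update: one SIMD multiply handles 4 columns.
   The real radiation code performs far more arithmetic per level, but
   the data-parallel pattern is the same. */
static void compute_flux(packed_columns *p, float scale)
{
    __m128 s = _mm_set1_ps(scale);
    for (int k = 0; k < NLEV; k++)
        p->flux[k] = _mm_mul_ps(p->temp[k], s);
}
```

Because the inner loop touches four columns per instruction, the scalar per-column loop disappears; the same layout also maps naturally onto the Cell SPEs' 128-bit registers and OpenCL's vector types.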


HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
