
Separable projection integrals for higher-order correlators of the cosmic microwave sky: Acceleration by factors exceeding 100

J. Briggs, J. Jaykka, J. R. Fergusson, E. P. S. Shellard, S. J. Pennycook
Department of Applied Mathematics and Theoretical Physics, University of Cambridge
arXiv:1503.08809 [cs.DC] (30 Mar 2015)

@article{briggs2015separable,
   title={Separable projection integrals for higher-order correlators of the cosmic microwave sky: Acceleration by factors exceeding 100},
   author={Briggs, J. and Jaykka, J. and Fergusson, J. R. and Shellard, E. P. S. and Pennycook, S. J.},
   year={2015},
   month={mar},
   eprint={1503.08809},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


We study the optimisation and porting of the "Modal" code on Intel(R) Xeon(R) processors and/or Intel(R) Xeon Phi(TM) coprocessors, using methods that should be applicable to more general compute-bound codes. "Modal" is used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum of the cosmic microwave background. We focus on the hot-spot of the code, which is the projection of bispectra from the end of inflation to the spherical shell at decoupling that defines the CMB we observe. This calculation involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular sparse domain. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the dimensionality from four to three. The introduction of separable functions also resolves the issue of the domain, allowing efficient vectorisation and load balancing. This method becomes unstable in certain cases, so we present a discussion of the optimisation of both approaches. By making bispectrum calculations competitive with those for the power spectrum, we are now able to consider joint analysis for cosmological science exploitation of new data. We demonstrate speed-ups of over 100x, arising from a combination of algorithmic improvements and architecture-aware optimisations targeted at improving thread and vectorisation behaviour. The resulting MPI/OpenMP code is capable of executing on clusters containing Intel(R) Xeon(R) processors and/or Intel(R) Xeon Phi(TM) coprocessors, with a strong-scaling efficiency of 98.6% on up to 16 nodes. We find that a single coprocessor outperforms two processor sockets by a factor of 1.3x, and that running the same code across a combination of processors and coprocessors improves performance-per-node by a factor of 3.38x.
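
The core algorithmic gain described in the abstract comes from separability: once both functions in the inner product are expanded in products of one-dimensional basis functions, the sum over the full grid factorises into products of one-dimensional sums. Below is a minimal illustrative sketch of that factorisation in C++; the basis values, the rectangular domain, and names such as inner1d are invented for illustration, whereas the actual Modal calculation works on a non-rectangular sparse domain and includes the line-of-sight integrations described above.

    // Minimal sketch (hypothetical names): separable 3-D inner product.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // 1-D inner product: sum_i a[i]*b[i]
    double inner1d(const std::vector<double>& a, const std::vector<double>& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    }

    int main() {
        const int N = 64;  // grid points per dimension (toy value)
        std::vector<double> qp(N), qr(N), qs(N), qu(N), qv(N), qw(N);
        for (int i = 0; i < N; ++i) {  // toy separable basis values
            const double x = double(i) / N;
            qp[i] = 1.0;    qu[i] = x;
            qr[i] = x;      qv[i] = 1.0 - x;
            qs[i] = x * x;  qw[i] = 1.0;
        }

        // Naive form: three nested loops over the full grid, O(N^3).
        double naive = 0.0;
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                for (int k = 0; k < N; ++k)
                    naive += (qp[i] * qr[j] * qs[k]) * (qu[i] * qv[j] * qw[k]);

        // Separable form: the triple sum factorises into three 1-D sums, O(N).
        const double separable = inner1d(qp, qu) * inner1d(qr, qv) * inner1d(qs, qw);

        std::printf("naive = %.6f  separable = %.6f\n", naive, separable);
        return 0;
    }

On a rectangular N^3 grid this turns one O(N^3) reduction into three O(N) sums per mode pair, which is the kind of dimensional reduction that, per the abstract, makes the bispectrum projection competitive with power-spectrum calculations.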