On-the-Fly Computing on Many-Core Processors in Nuclear Applications

Noriyuki Kushida
Japan Atomic Energy Agency, 2-4 Shirakata, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195, Japan
Progress in Nuclear Science and Technology, Vol. 2, pp.663-669, 2011


@article{Kushida2011,
   title={On-the-Fly Computing on Many-Core Processors in Nuclear Applications},
   author={Kushida, Noriyuki},
   journal={Progress in Nuclear Science and Technology},
   volume={2},
   pages={663--669},
   year={2011}
}

Many nuclear applications still require more computational power than current computers can provide. Furthermore, some of them require dedicated machines, because they must run continuously or tolerate no delay. To satisfy these requirements, we introduce computational accelerators, which provide higher computational power at lower cost than current commodity processors. However, the feasibility of accelerators had not been well investigated for nuclear applications. We therefore applied the Cell and GPGPU to plasma stability monitoring and infrasound propagation analysis, respectively. In plasma monitoring, we focused on the eigenvalue solver. To obtain sufficient computational power, we connected Cells with Ethernet and implemented a preconditioned conjugate gradient method. Moreover, we applied a hierarchical parallelization method to minimize communication among the Cells. As a result, we could solve a block tri-diagonal Hermitian matrix with 1,024 diagonal blocks, each of size 128×128, within one second. On the basis of these results, we showed the potential of plasma monitoring using our Cell cluster system. In the infrasound propagation analysis, we accelerated the two-dimensional parabolic equation (PE) method using GPGPU. PE is one of the most accurate methods, but it requires more computational power than other methods. By applying software pipelining and memory layout optimization, we obtained an 18.3× speedup on the GPU over the CPU. The achieved computing speed is comparable to that of faster but less accurate methods.
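To make the solver concrete, below is a minimal serial NumPy sketch of a preconditioned conjugate gradient (PCG) iteration on a block tri-diagonal Hermitian system. The function names (`block_tridiag`, `pcg`, `make_block_jacobi`), the problem sizes, and the choice of a block-Jacobi preconditioner are illustrative assumptions for this sketch, not the paper's actual implementation, which ran hierarchically in parallel across Ethernet-connected Cell processors.

```python
import numpy as np

def block_tridiag(diag_blocks, off_blocks):
    """Assemble a dense block tri-diagonal Hermitian matrix.

    diag_blocks: n Hermitian (m x m) diagonal blocks.
    off_blocks:  n-1 (m x m) sub-diagonal blocks; the super-diagonal is
                 their conjugate transpose, so the result is Hermitian.
    """
    n, m = len(diag_blocks), diag_blocks[0].shape[0]
    A = np.zeros((n * m, n * m), dtype=complex)
    for i, D in enumerate(diag_blocks):
        A[i*m:(i+1)*m, i*m:(i+1)*m] = D
    for i, E in enumerate(off_blocks):
        A[(i+1)*m:(i+2)*m, i*m:(i+1)*m] = E
        A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = E.conj().T
    return A

def make_block_jacobi(diag_blocks):
    """Block-Jacobi preconditioner: invert each diagonal block independently.
    (Each block solve is independent, which is what makes this kind of
    preconditioner attractive for distributed processors.)"""
    inv_blocks = [np.linalg.inv(D) for D in diag_blocks]
    m = diag_blocks[0].shape[0]
    def M_inv(r):
        out = np.empty_like(r)
        for i, Di in enumerate(inv_blocks):
            out[i*m:(i+1)*m] = Di @ r[i*m:(i+1)*m]
        return out
    return M_inv

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned CG for a Hermitian positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = np.vdot(r, z)                  # vdot conjugates the first argument
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / np.vdot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = np.vdot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small demonstration: 4 Hermitian positive-definite 3x3 diagonal blocks
# with weak off-diagonal coupling (diagonal dominance keeps A positive definite).
rng = np.random.default_rng(0)
m, n = 3, 4
diag = []
for _ in range(n):
    B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    diag.append(B @ B.conj().T + 5.0 * np.eye(m))
off = [0.1 * (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
       for _ in range(n - 1)]

A = block_tridiag(diag, off)
b = rng.standard_normal(n * m) + 1j * rng.standard_normal(n * m)
x = pcg(A, b, make_block_jacobi(diag))
residual = np.linalg.norm(A @ x - b)
```

Because every diagonal-block inverse in the preconditioner can be applied independently, a block-Jacobi-style preconditioner maps naturally onto the hierarchical parallelization described in the abstract: each processor can own a group of blocks and only the matrix-vector product requires neighbor communication.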

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
