
Exploring new architectures in accelerating CFD for Air Force applications

Jack Dongarra, Shirley Moore, Gregory Peterson, Stanimire Tomov
University of Tennessee, Knoxville
In Proceedings of the HPCMP Users Group Conference 2008, July 14-17, 2008

@conference{dongarra2009exploring,
   title={Exploring new architectures in accelerating CFD for Air Force applications},
   author={Dongarra, J. and Peterson, G. and Tomov, S. and Allred, J. and Natoli, V. and Richie, D.},
   booktitle={DoD HPCMP Users Group Conference, 2008. DOD HPCMP UGC},
   pages={472--478},
   year={2009},
   organization={IEEE}
}


Computational Fluid Dynamics (CFD) is an active field of research where the development of faster and more accurate methods is linked to the continuous demand for ever higher computational power. Indeed, for at least two decades, high-performance computing (HPC) programmers have taken for granted that each successive generation of microprocessors would, either immediately or after minor adjustments, make their software run substantially faster. But recent microprocessor design trends, including the introduction of multi/many-core designs and the increasingly popular use in HPC of accelerators such as General Purpose Graphics Processing Units (GPGPUs) and Field Programmable Gate Arrays (FPGAs), present an unprecedented challenge: how to update and enhance the existing large CFD software infrastructure to use these new architectures efficiently. In this paper we address some of the main issues in this transition and present ideas on using the new architectures to accelerate CFD applications that are of interest to the Air Force. We consider not only multi/many-core but also special-purpose (e.g., GPU) and reconfigurable computing (e.g., FPGA) architectures. Moreover, we demonstrate the benefits of hybrid combinations, where the strengths of each platform can be exploited to better match algorithm requirements to the underlying architecture.
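As an illustration of the kind of GPU offload the abstract alludes to, here is a minimal CUDA sketch of a 2D Jacobi relaxation sweep, a stencil update typical of structured-grid CFD kernels. This is not code from the paper; the grid size, iteration count, and kernel shape are arbitrary assumptions chosen only to show how such a sweep maps onto CUDA thread blocks.

// Illustrative sketch only (not from the paper): a 2D Jacobi stencil sweep
// offloaded to the GPU, the kind of kernel found in simple structured-grid
// CFD solvers. N and the iteration count are arbitrary placeholders.
#include <cstdio>
#include <cuda_runtime.h>

#define N 512  // grid dimension; interior points are indices 1..N-2

__global__ void jacobi_step(const float* in, float* out)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < N - 1 && j > 0 && j < N - 1) {
        // Average of the four neighbours: a basic relaxation update.
        out[i * N + j] = 0.25f * (in[(i - 1) * N + j] + in[(i + 1) * N + j] +
                                  in[i * N + (j - 1)] + in[i * N + (j + 1)]);
    }
}

int main()
{
    size_t bytes = (size_t)N * N * sizeof(float);
    float *d_a, *d_b;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    // Toy setup: interior and boundary both start at zero.
    cudaMemset(d_a, 0, bytes);
    cudaMemset(d_b, 0, bytes);

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);

    // A few relaxation sweeps, ping-ponging between the two buffers.
    for (int it = 0; it < 100; ++it) {
        jacobi_step<<<grid, block>>>(d_a, d_b);
        float* tmp = d_a; d_a = d_b; d_b = tmp;
    }
    cudaDeviceSynchronize();
    printf("done\n");

    cudaFree(d_a);
    cudaFree(d_b);
    return 0;
}

In a hybrid CPU/GPU setting of the kind the abstract describes, a sweep like this would run on the accelerator while the host handles the parts of the solver that map poorly onto the GPU.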
