Dax: Data Analysis at Extreme
Sandia National Laboratories, Albuquerque, NM 87185, USA
Scientific Discovery through Advanced Computing Program (SciDAC), 2011
@article{moreland2011dax,
  title={Dax: Data Analysis at Extreme},
  author={Moreland, K. and Ayachit, U. and Geveci, B. and Ma, K.L.},
  journal={Scientific Discovery through Advanced Computing Program (SciDAC)},
  year={2011}
}
Experts agree that exascale machines will comprise processors containing many cores, which in turn will necessitate a much higher degree of concurrency. Software will require a minimum of 1,000 times more concurrency. Most parallel analysis and visualization algorithms today work by partitioning the data and running mostly serial algorithms concurrently on each data partition. Although this approach lends itself well to the concurrency of current high-performance computers, it does not exhibit the pervasive parallelism required for exascale computing: when the data are partitioned finely enough to occupy every core, each partition becomes too small and the per-thread overhead too large to make effective use of all the cores in an extreme-scale machine. This paper introduces a new design philosophy for a visualization framework intended to encourage algorithms that exhibit the pervasive parallelism necessary for extreme-scale machines. We outline our design plans for Dax, a new visualization framework based on these principles.
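The abstract contrasts today's coarse, partition-level concurrency with the finer-grained, per-element ("pervasive") parallelism it argues exascale machines will need. The sketch below is not the Dax API (which the abstract does not describe); it is a minimal standard C++17 illustration of the two models using hypothetical names (Vec3, magnitudePerPartition, magnitudePerElement) and a made-up field operation.

```cpp
// Minimal sketch (not the Dax API): coarse partition-level parallelism versus
// fine-grained per-element parallelism for a hypothetical field operation
// (vector magnitude). Names and data layout are illustrative assumptions.
#include <algorithm>
#include <cmath>
#include <execution>
#include <vector>

struct Vec3 { float x, y, z; };

// Coarse-grained model: one mostly serial loop per data partition.
// Available concurrency is capped by the number of partitions.
// Caller must size `results` to match `partitions`.
void magnitudePerPartition(const std::vector<std::vector<Vec3>>& partitions,
                           std::vector<std::vector<float>>& results)
{
    std::transform(std::execution::par,
                   partitions.begin(), partitions.end(), results.begin(),
                   [](const std::vector<Vec3>& part) {
                       std::vector<float> out(part.size());
                       for (std::size_t i = 0; i < part.size(); ++i)  // serial inner loop
                           out[i] = std::sqrt(part[i].x * part[i].x +
                                              part[i].y * part[i].y +
                                              part[i].z * part[i].z);
                       return out;
                   });
}

// Fine-grained ("pervasive") model: one work item per data element.
// Available concurrency scales with the size of the field.
// Caller must size `result` to match `field`.
void magnitudePerElement(const std::vector<Vec3>& field,
                         std::vector<float>& result)
{
    std::transform(std::execution::par_unseq,
                   field.begin(), field.end(), result.begin(),
                   [](const Vec3& v) {
                       return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
                   });
}
```

In the first function the parallelism cannot exceed the partition count, which is the limitation the abstract attributes to current partition-based algorithms; in the second, the runtime is free to schedule millions of fine-grained work items across all cores, which is the kind of parallelism the abstract calls pervasive.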
October 28, 2011 by hgpu