Benchmarking and modelling of POWER7, Westmere, BG/P, and GPUs: an industry case study

J. A. Herdman, W. P. Gaudin, D. Turland, S. D. Hammond
High Performance Computing, AWE plc, Aldermaston, UK
ACM SIGMETRICS Performance Evaluation Review – Special Issue on the 1st International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems (PMBS 10), Volume 38, Issue 4, March 2011

@article{herdman2011benchmarking,
  title     = {Benchmarking and modelling of POWER7, Westmere, BG/P, and GPUs: an industry case study},
  author    = {Herdman, J. A. and Gaudin, W. P. and Turland, D. and Hammond, S. D.},
  journal   = {ACM SIGMETRICS Performance Evaluation Review},
  volume    = {38},
  number    = {4},
  pages     = {16--22},
  year      = {2011},
  publisher = {ACM}
}

This paper introduces an industry-strength, multi-purpose benchmark: Shamrock. Developed at the Atomic Weapons Establishment (AWE), Shamrock is a two-dimensional (2D) structured hydrocode; one of its aims is to assess the impact of a change in hardware and, in conjunction with a larger HPC benchmark suite, to provide guidance in the procurement of future systems. A suitable test problem is described and executed on a local, high-end workstation for a range of compilers and MPI implementations. Based on these observations, specific configurations are subsequently built and executed on a selection of HPC architectures, including Intel's Nehalem and Westmere microarchitectures; IBM's POWER5, POWER6, POWER7, BlueGene/L, and BlueGene/P; and AMD's Opteron processors. Comparisons are made between these architectures for the Shamrock benchmark, and relative compute resources delivering similar time to solution are specified, along with their associated power budgets. Additionally, performance comparisons are made for a port of the benchmark to a Nehalem-based cluster accelerated with Tesla C1060 GPUs, with details of the port and extrapolations to possible GPU performance.
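The Shamrock source itself is not reproduced on this page. As a purely illustrative sketch of the kind of GPU port the abstract describes, the CUDA example below applies a simple 2D structured-grid stencil update with a one-cell halo, a layout typical of structured hydrocodes. All names and parameters here (cell_update, the grid size, the placeholder five-point stencil) are hypothetical assumptions, not taken from the paper; a full port would additionally manage host-device transfers around MPI halo exchanges, which this sketch omits for brevity.

// Hypothetical sketch only: illustrates the general shape of a 2D
// structured-grid kernel of the kind ported to the Tesla C1060 in the paper.
// The stencil is a placeholder standing in for the real hydrodynamics.
#include <cstdio>
#include <cuda_runtime.h>

// One cell update per thread. 'in' and 'out' are (nx+2)*(ny+2) arrays
// with a one-cell halo ring (an assumed, but common, hydrocode layout).
__global__ void cell_update(const double *in, double *out, int nx, int ny)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x + 1; // +1 skips the halo
    int j = blockIdx.y * blockDim.y + threadIdx.y + 1;
    if (i <= nx && j <= ny) {
        int s = nx + 2;            // row stride, halo included
        int c = j * s + i;         // flattened cell index
        // Placeholder 5-point stencil in place of the real physics.
        out[c] = 0.25 * (in[c - 1] + in[c + 1] + in[c - s] + in[c + s]);
    }
}

int main()
{
    const int nx = 1024, ny = 1024;
    const size_t bytes = (size_t)(nx + 2) * (ny + 2) * sizeof(double);
    double *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemset(d_in, 0, bytes);    // trivial initial state for the sketch

    dim3 block(16, 16);            // a reasonable block shape for C1060-era GPUs
    dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
    cell_update<<<grid, block>>>(d_in, d_out, nx, ny);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}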