Managing the Topology of Heterogeneous Cluster Nodes with Hardware Locality (hwloc)

Brice Goglin
Inria Bordeaux – Sud-ouest – University of Bordeaux, 33405 Talence cedex, France
hal-00985096 (29 April 2014)




   title={Managing the Topology of Heterogeneous Cluster Nodes with Hardware Locality (hwloc)},

   author={Goglin, Brice},

   keywords={topology; locality; affinities; I/O devices; clusters; hwloc},

   affiliation={RUNTIME – INRIA Bordeaux – Sud-Ouest, Laboratoire Bordelais de Recherche en Informatique – LaBRI},

   booktitle={International Conference on High Performance Computing & Simulation (HPCS 2014)},

   address={Bologna, Italy},

   collaboration={plafrim, mcia},





Modern computing platforms are increasingly complex, with multiple cores, shared caches, and NUMA architectures. Parallel application developers have to take locality into account before they can expect good efficiency on these platforms. Hence there is a strong need for a portable tool that gathers and exposes this information. The Hardware Locality project (hwloc) offers a tree representation of the hardware based on the inclusion and locality of the CPU and memory resources. It is already widely used for affinity-based task placement in high performance computing. In this article we present how hwloc is extended to describe more than computing and memory resources. Indeed, I/O device locality is becoming another important aspect of locality, since high performance GPUs and network or InfiniBand interfaces possess privileged access to some of the cores and memory banks. hwloc integrates this knowledge into its topology representation and offers an interoperability API to extend existing libraries such as CUDA with locality information. We also describe how hwloc now helps process managers and batch schedulers deal with the topology of multiple cluster nodes, together with compression for better scalability up to thousands of nodes.

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
