
Scaling up scientific computations by using map-reduce-like control flow on NUMA architectures

Vlad Slavici
Northeastern University, 2013

@phdthesis{slavici2013scaling,
   title={Scaling up scientific computations by using map-reduce-like control flow on NUMA architectures},
   author={Slavici, Vlad},
   school={Northeastern University},
   year={2013}
}

The clock speed of current CPUs and RAM has stopped scaling with Moore's Law, yet the scale of applications in science and engineering continues to increase. To address this growth, newer NUMA architectures are emerging, including parallel disks, hybrid CPU-GPU systems, and many-core CPUs. Existing CPU-based algorithms, as well as legacy sequential code, need to be migrated to NUMA in order to stay on the curve of Moore's Law.

Migrating applications from traditional sequential architectures to NUMA is difficult, because NUMA architectures penalize data access and computation patterns that are common in programs written for traditional architectures. Operations that are especially inefficient on NUMA architectures include: random access in hash table lookups; tensor computations involving many small, independent matrix multiplications; gathering the inputs of a task from multiple previous tasks; and slow access to remote memory. Hybrid CPU-GPU architectures raise two further issues: transferring data from the CPU to the limited GPU global memory (up to 6 GB for 512 cores, compared with up to 2 GB per core on the CPU), and the limited size of the GPU on-chip cache (typically 768 KB today).

This dissertation analyzes three real-world applications in computational science, each important to a particular community: the determinization and minimization of large finite state automata (important in permutation patterns), a port of the integral operator of the MADNESS scientific library from CPU-only architectures to hybrid CPU-GPU architectures (important in computational chemistry), and efficient multiplication of large permutations (important in computational group theory). In all three cases, traditional architectures were either too slow or did not support problem sizes of sufficient scale.

For each application, NUMA solutions based on adapting existing CPU-based algorithms are proposed. NUMA architectures were adopted for the sake of greater speed (through greater parallelism) and/or greater data storage. Two techniques common to all three NUMA-based solutions are delayed data access (to allow accesses to be reordered) and delayed task execution (to hide the overhead of on-demand task launch). In hindsight, we observe that the use of delayed operations leads to a map-reduce-like control flow in each of the three NUMA-based solution implementations.
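To make the "delayed data access" idea concrete, here is a minimal, self-contained sketch (not taken from the dissertation; the function and variable names are hypothetical). Instead of answering hash-table lookups one at a time in random order, requests are buffered, sorted by key, and answered in a single sequential sweep over a sorted key-value array, turning random access into the streaming access pattern that parallel disks and remote NUMA memory serve far more efficiently.

```python
# A minimal sketch of delayed (batched, reordered) data access.
import bisect

def batched_lookup(sorted_keys, values, requests):
    """Answer many lookups with one locality-friendly pass.

    sorted_keys : keys in ascending order (the "table")
    values      : values aligned with sorted_keys
    requests    : keys to look up, in arbitrary (random) order
    Returns results in the original request order.
    """
    # Delay: tag each request with its arrival position, then sort by key.
    tagged = sorted(enumerate(requests), key=lambda t: t[1])
    results = [None] * len(requests)
    pos = 0
    for orig_index, key in tagged:
        # Sorted requests advance monotonically through the table,
        # so the table is scanned sequentially, each element at most once.
        pos = bisect.bisect_left(sorted_keys, key, lo=pos)
        if pos < len(sorted_keys) and sorted_keys[pos] == key:
            results[orig_index] = values[pos]
    return results

# Usage: random-order requests answered by one forward sweep.
keys = [2, 3, 5, 7, 11, 13]
vals = ["a", "b", "c", "d", "e", "f"]
print(batched_lookup(keys, vals, [11, 2, 7, 4]))  # ['e', 'a', 'd', None]
```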
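The map-reduce-like control flow that the abstract says emerges from delayed operations can likewise be sketched in a few lines (again an illustrative assumption, not code from the thesis). Work items accumulate and are then processed in bulk phases: a map phase emits (key, value) pairs, a shuffle groups them by key (gathering together the inputs for each subsequent task), and a reduce phase combines each group, so task-launch overhead is amortized over large batches.

```python
# A minimal in-memory sketch of map-reduce-like control flow.
from collections import defaultdict

def map_reduce(items, map_fn, reduce_fn):
    # Map phase: run over the whole delayed batch at once.
    pairs = [kv for item in items for kv in map_fn(item)]
    # Shuffle phase: gather together the inputs for each reduce task.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce phase: one bulk task per key.
    return {key: reduce_fn(key, vals) for key, vals in groups.items()}

# Usage: word count over a batch of lines.
lines = ["to be or not to be", "be brief"]
counts = map_reduce(
    lines,
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda w, ones: sum(ones),
)
print(counts)  # {'to': 2, 'be': 3, 'or': 1, 'not': 1, 'brief': 1}
```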