
Scaling up scientific computations by using map-reduce-like control flow on NUMA architectures

Vlad Slavici
Northeastern University, 2013
@phdthesis{slavici2013scaling,
   title={Scaling up scientific computations by using map-reduce-like control flow on NUMA architectures},
   author={Slavici, Vlad},
   school={Northeastern University},
   year={2013}
}

The clock speed of current CPUs and RAM has stopped scaling with Moore's Law, yet the scale of applications in science and engineering continues to increase. To address this growth, newer NUMA architectures are emerging, including parallel disks, hybrid CPU-GPU, and many-core CPUs. Existing CPU-based algorithms, as well as legacy sequential code, need to be migrated to NUMA in order to stay on the curve of Moore's Law. Migrating applications from traditional sequential architectures to NUMA is difficult, because NUMA architectures penalize data access and computation patterns that programs on traditional architectures use freely. Operations that are particularly inefficient on NUMA architectures include: random access in hash table lookups; tensor computations involving many small, independent matrix multiplications; collecting together the inputs for a task from multiple previous tasks; and access to remote memory, which is slow. Hybrid CPU-GPU architectures raise two further issues: transferring data from the CPU to the limited GPU global memory (up to 6 GB for 512 cores, compared with up to 2 GB per core on the CPU), and the limited size of the GPU on-chip cache (typically 768 KB today).

This dissertation analyzes three real-world applications in computational science, each important to a particular community: the determinization and minimization of large finite state automata (important in permutation patterns), a port of the integral operator of the MADNESS scientific library from CPU-only architectures to hybrid CPU-GPU architectures (important in computational chemistry), and efficient multiplication of large permutations (important in computational group theory). In all three cases, traditional architectures were either too slow or did not support problem sizes of sufficient scale.

For each application, NUMA solutions based on adapting existing CPU-based algorithms are proposed. NUMA architectures were adopted for the sake of greater speed (through greater parallelism) and/or greater data storage. Two techniques common to all three NUMA-based solutions are delayed data access (to allow accesses to be reordered) and delayed task execution (to hide the overhead of on-demand task launch). In hindsight, we observe that the use of delayed operations leads to a map-reduce-like control flow in each of the three NUMA-based implementations.
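As an illustration of the delayed-data-access technique named above, the following minimal C++ sketch (all names hypothetical, not taken from the dissertation) buffers hash-table-style lookups instead of serving each one immediately. Flushing the buffer sorts the queries to match the storage order of the table and answers them in one forward sweep, so many random accesses (expensive on parallel disks or remote NUMA memory) become a sort plus a streaming pass: record queries (map), sort them (shuffle), answer them in order (reduce).

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

using Key = std::uint64_t;
using Value = std::uint64_t;

struct DelayedLookup {
    // Assumed layout: the table is stored sorted by key (e.g. striped
    // across parallel disks), so a sequential sweep over it is cheap.
    std::vector<std::pair<Key, Value>> table;
    std::vector<Key> pending;   // delayed (buffered) queries

    void lookup(Key k) { pending.push_back(k); }   // "map": just record it

    // "shuffle" + "reduce": sort the queries, then answer all of them in
    // one forward sweep instead of |pending| random probes.
    std::vector<std::pair<Key, Value>> flush() {
        std::sort(pending.begin(), pending.end());
        std::vector<std::pair<Key, Value>> hits;
        auto it = table.begin();
        for (Key k : pending) {
            while (it != table.end() && it->first < k) ++it;  // never rewinds
            if (it != table.end() && it->first == k) hits.push_back(*it);
        }
        pending.clear();
        return hits;
    }
};

int main() {
    DelayedLookup d;
    d.table = {{1, 10}, {4, 40}, {9, 90}};   // sorted by key
    d.lookup(9); d.lookup(1); d.lookup(7);   // accesses are delayed...
    for (const auto& kv : d.flush())         // ...and served as one batch
        std::cout << kv.first << " -> " << kv.second << "\n";
    return 0;
}

The second technique, delayed task execution, batches in the same spirit: rather than launching one small GPU kernel per matrix multiplication, tasks are queued and dispatched together (for example as a single batched GEMM call), amortizing the per-launch and data-transfer overhead.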
