
SemCache: Semantics-aware Caching for Efficient GPU Offloading

Nabeel AlSaber, Milind Kulkarni
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
International Conference on Supercomputing (ICS), 2013

@inproceedings{alsaber2013semcache,
   title={SemCache: Semantics-aware Caching for Efficient GPU Offloading},
   author={AlSaber, Nabeel and Kulkarni, Milind},
   booktitle={International Conference on Supercomputing (ICS)},
   year={2013}
}

Recently, GPU libraries have made it easy to improve application performance by offloading computation to the GPU. However, using such libraries introduces the complexity of manually handling explicit data movement between GPU and CPU memory spaces. Unfortunately, when these libraries are used in complex applications, it is very difficult to optimize CPU-GPU communication across multiple kernel invocations and avoid redundant transfers. In this paper, we introduce SemCache, a semantics-aware GPU cache that automatically manages CPU-GPU communication and dynamically eliminates redundant transfers through caching. Its key feature is the use of library semantics to determine the appropriate caching granularity for a given offloaded library (e.g., matrices in BLAS). We applied SemCache to BLAS libraries to provide a drop-in replacement GPU library that handles communication and its optimization automatically. Our caching technique is efficient: it tracks whole matrices rather than tracking every memory access at fine granularity. Experimental results show that our system can dramatically reduce redundant communication for real-world computational science applications and deliver significant performance improvements, outperforming GPU-based implementations such as CULA and CUBLAS.
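The matrix-granularity caching idea described in the abstract can be illustrated with a short sketch. The code below is not SemCache's actual implementation; the names (CacheEntry, cache_to_gpu, cached_dgemm) are hypothetical, and write-back of results to the CPU is omitted. It shows a drop-in style DGEMM wrapper that keys a cache on the host matrix pointer and issues a host-to-device copy only when the cached device copy is missing or stale, so repeated kernel invocations on the same matrices skip redundant transfers.

// Minimal sketch (assumed API, not SemCache's real interface) of matrix-granularity
// caching around a cuBLAS call: the caller never writes explicit cudaMemcpy calls.
#include <cstddef>
#include <unordered_map>
#include <cuda_runtime.h>
#include <cublas_v2.h>

struct CacheEntry {
    double* dev_ptr;   // device copy of the matrix
    size_t  bytes;     // size of the cached region
    bool    gpu_valid; // device copy is up to date
};

// Hypothetical cache keyed by host pointer: one entry per matrix,
// not per page or per word.
static std::unordered_map<const double*, CacheEntry> g_cache;

// Ensure the matrix starting at `host` is resident and valid on the GPU.
static double* cache_to_gpu(const double* host, size_t bytes) {
    auto it = g_cache.find(host);
    if (it == g_cache.end()) {
        CacheEntry e{nullptr, bytes, false};
        cudaMalloc(reinterpret_cast<void**>(&e.dev_ptr), bytes);
        it = g_cache.emplace(host, e).first;
    }
    CacheEntry& e = it->second;
    if (!e.gpu_valid) {  // transfer only if the cached copy is missing or stale
        cudaMemcpy(e.dev_ptr, host, bytes, cudaMemcpyHostToDevice);
        e.gpu_valid = true;
    }
    return e.dev_ptr;
}

// Drop-in style DGEMM: C = alpha*A*B + beta*C (column-major),
// with all CPU-GPU communication handled by the cache.
void cached_dgemm(cublasHandle_t handle, int m, int n, int k,
                  double alpha, const double* A, const double* B,
                  double beta, double* C) {
    double* dA = cache_to_gpu(A, sizeof(double) * m * k);
    double* dB = cache_to_gpu(B, sizeof(double) * k * n);
    double* dC = cache_to_gpu(C, sizeof(double) * m * n);
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, dA, m, dB, k, &beta, dC, m);
    // C now lives on the GPU; copying it back when the CPU next reads it
    // (and invalidating stale copies on CPU writes) is omitted in this sketch.
}

In a sequence of calls such as repeated cached_dgemm invocations on the same operands, only the first call pays the transfer cost; later calls find valid device copies in the cache and launch the kernel directly, which is the source of the communication savings the paper reports.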
