Ignite-GPU: a GPU-enabled in-memory computing architecture on clusters

Amir Hossein Sojoodi, Majid Salimi Beni, Farshad Khunjush
Department of Computer Science, Engineering and IT, School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
The Journal of Supercomputing, 2020


@article{sojoodi2020ignite,

   title={Ignite-GPU: a GPU-enabled in-memory computing architecture on clusters},

   author={Sojoodi, Amir Hossein and Beni, Majid Salimi and Khunjush, Farshad},

   journal={The Journal of Supercomputing},

   year={2020}

}





In recent years, the explosion of big data and the growth of main memory capacity, on the one hand, and the need for faster data processing, on the other, have driven the development of various in-memory processing tools for managing and analyzing data. By exploiting the speed of main memory and taking advantage of data locality, these tools can process large amounts of data with high performance. Apache Ignite, a distributed in-memory platform, can process massive volumes of data in parallel. Currently, this platform is CPU-based and does not utilize the GPU’s processing resources. To address this limitation, we introduce Ignite-GPU, which harnesses the GPU’s massively parallel processing power. Ignite-GPU handles a number of challenges in integrating GPUs into Ignite and utilizes the GPU’s available resources. We have also identified and eliminated time-consuming overheads and applied various GPU-specific optimization techniques to improve overall performance. Finally, we evaluated Ignite-GPU with the Genetic Algorithm, as a representative of data- and compute-intensive algorithms, and achieved speedups of more than a thousand times over its CPU version.
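The abstract highlights the Genetic Algorithm as the evaluation workload. The step that maps naturally onto a GPU is fitness evaluation, since each individual in the population is scored independently. The sketch below (not the paper's implementation; the `fitness` function and class name are illustrative) shows this embarrassingly parallel pattern with plain Java parallel streams — the same per-individual independence is what lets Ignite-GPU assign one GPU thread per individual.

```java
import java.util.stream.IntStream;

// Hedged illustration of GA fitness evaluation as a data-parallel map.
// The bit-count fitness function is a hypothetical stand-in for the
// paper's actual objective; only the parallel structure matters here.
public class FitnessSketch {

    // Fitness of a binary-encoded individual: number of set bits.
    static int fitness(long individual) {
        return Long.bitCount(individual);
    }

    public static void main(String[] args) {
        long[] population = {0b1011L, 0b1111L, 0b0001L, 0b0110L};

        // Each individual is scored independently, so the map can run
        // in parallel — on CPU cores here, on GPU threads in Ignite-GPU.
        int[] scores = IntStream.range(0, population.length)
                .parallel()
                .map(i -> fitness(population[i]))
                .toArray();

        for (int s : scores) {
            System.out.println(s);
        }
    }
}
```

On a GPU, the same loop body becomes a kernel in which the stream index `i` is replaced by the thread index, which is why eliminating host–device transfer overheads (as the paper describes) dominates the achievable speedup.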