Towards Performance-Aware Allocation for Accelerated Machine Learning on GPU-SSD Systems

Ayush Gundawar, Euijun Chung, Hyesoon Kim
Georgia Institute of Technology
arXiv:2412.04569 [cs.AR], 9 Dec 2024

@misc{gundawar2024performanceawareallocationacceleratedmachine,
   title={Towards Performance-Aware Allocation for Accelerated Machine Learning on GPU-SSD Systems},
   author={Ayush Gundawar and Euijun Chung and Hyesoon Kim},
   year={2024},
   eprint={2412.04569},
   archivePrefix={arXiv},
   primaryClass={cs.AR},
   url={https://arxiv.org/abs/2412.04569}
}

The exponential growth of data-intensive machine learning workloads has exposed significant limitations in conventional GPU-accelerated systems, especially when processing datasets exceeding GPU DRAM capacity. We propose MQMS, an augmented in-storage GPU architecture and simulator that is aware of internal SSD states and operations, enabling intelligent scheduling and address allocation to overcome performance bottlenecks caused by CPU-mediated data access patterns. MQMS introduces dynamic address allocation to maximize internal parallelism and fine-grained address mapping to efficiently handle small I/O requests without incurring read-modify-write overheads. Through extensive evaluations on workloads ranging from large language model inference to classical machine learning algorithms, MQMS demonstrates orders-of-magnitude improvements in I/O request throughput, device response time, and simulation end time compared to existing simulators.
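The abstract credits MQMS's gains partly to dynamic address allocation that exploits the SSD's internal parallelism. As a rough illustration of that idea (a hedged sketch, not the authors' code: the class, method names, and geometry parameters below are all assumptions), an allocator can stripe consecutive logical pages across channels and dies so that back-to-back requests target independent flash units and can be serviced in parallel:

```python
# Hypothetical sketch of dynamic address allocation across SSD channels/dies,
# in the spirit of what the abstract describes; not MQMS's actual policy.
from collections import defaultdict


class DynamicAllocator:
    def __init__(self, num_channels: int, dies_per_channel: int):
        self.num_channels = num_channels
        self.dies_per_channel = dies_per_channel
        self.counter = 0                   # running count of allocations
        self.next_page = defaultdict(int)  # (channel, die) -> next free page

    def allocate(self) -> tuple[int, int, int]:
        """Return a (channel, die, page) target for the next logical page.

        Round-robin over channels first, then dies, so consecutive
        requests land on independent flash units and overlap in time
        rather than queuing behind one another.
        """
        channel = self.counter % self.num_channels
        die = (self.counter // self.num_channels) % self.dies_per_channel
        page = self.next_page[(channel, die)]
        self.next_page[(channel, die)] = page + 1
        self.counter += 1
        return channel, die, page


# Example: with 4 channels x 2 dies, eight consecutive allocations
# touch all eight independent (channel, die) units before reusing any.
alloc = DynamicAllocator(num_channels=4, dies_per_channel=2)
targets = [alloc.allocate() for _ in range(8)]
```

A real in-storage scheduler would additionally track per-die queue depth and wear, but the striping above captures why allocation policy, not just raw bandwidth, governs throughput for small I/O requests.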
