A Survey on Hardware Accelerators for Large Language Models

Christoforos Kachris
University of West Attica, Greece
arXiv:2401.09890 [cs.AR], (18 Jan 2024)

@misc{kachris2024survey,
   title={A Survey on Hardware Accelerators for Large Language Models},
   author={Christoforos Kachris},
   year={2024},
   eprint={2401.09890},
   archivePrefix={arXiv},
   primaryClass={cs.AR}
}

Large Language Models (LLMs) have emerged as powerful tools for natural language processing tasks, revolutionizing the field with their ability to understand and generate human-like text. As the demand for more sophisticated LLMs continues to grow, there is a pressing need to address the computational challenges associated with their scale and complexity. This paper presents a comprehensive survey on hardware accelerators designed to enhance the performance and energy efficiency of Large Language Models. By examining a diverse range of accelerators, including GPUs, FPGAs, and custom-designed architectures, we explore the landscape of hardware solutions tailored to meet the unique computational demands of LLMs. The survey encompasses an in-depth analysis of architecture, performance metrics, and energy efficiency considerations, providing valuable insights for researchers, engineers, and decision-makers aiming to optimize the deployment of LLMs in real-world applications.
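To make the "computational demands" mentioned above concrete, the following is a minimal back-of-envelope sketch of LLM inference throughput on an accelerator. The model size, peak throughput, and utilization figures are illustrative assumptions, not values from the survey; the ~2 FLOPs-per-parameter rule of thumb for a forward pass is a common approximation.

```python
# Illustrative estimate of LLM inference demands on a hardware accelerator.
# All concrete numbers below are assumptions for illustration only.

def flops_per_token(n_params: int) -> int:
    """Approximate forward-pass FLOPs per generated token (~2 per parameter)."""
    return 2 * n_params

def tokens_per_second(n_params: int, peak_flops: float,
                      utilization: float = 0.3) -> float:
    """Rough decode throughput given an accelerator's peak FLOP/s and
    an assumed achieved-utilization fraction."""
    return (peak_flops * utilization) / flops_per_token(n_params)

if __name__ == "__main__":
    n_params = 7_000_000_000      # hypothetical 7B-parameter model
    peak = 300e12                 # hypothetical accelerator: 300 TFLOP/s peak
    print(f"FLOPs per token: {flops_per_token(n_params):.2e}")
    print(f"Est. tokens/s at 30% utilization: "
          f"{tokens_per_second(n_params, peak):.0f}")
```

Even under these generous assumptions, a single multi-billion-parameter model saturates a high-end accelerator at a few thousand tokens per second, which is why the architectural and energy-efficiency trade-offs surveyed here matter in deployment.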
