HPC-Coder-V2: Studying Code LLMs Across Low-Resource Parallel Languages

Aman Chaturvedi, Daniel Nichols, Siddharth Singh, Abhinav Bhatele
Department of Computer Science, University of Maryland, College Park, MD, USA
arXiv:2412.15178 [cs.DC]

@misc{chaturvedi2024hpccoderv2studyingcodellms,
   title={HPC-Coder-V2: Studying Code LLMs Across Low-Resource Parallel Languages},
   author={Aman Chaturvedi and Daniel Nichols and Siddharth Singh and Abhinav Bhatele},
   year={2024},
   eprint={2412.15178},
   archivePrefix={arXiv},
   primaryClass={cs.DC},
   url={https://arxiv.org/abs/2412.15178}
}

Large Language Model (LLM)-based coding tools have been tremendously successful as software development assistants, yet they are often designed for general-purpose programming tasks and perform poorly in more specialized domains such as high-performance computing (HPC). Creating specialized models and tools for these domains is crucial to realizing the benefits of LLMs in areas such as HPC. While previous work has explored HPC-specific models, LLMs still struggle to generate parallel code, and it is not at all clear which hurdles are holding these LLMs back or what must be done to overcome them. In this work, we conduct an in-depth study along the many axes of fine-tuning a specialized HPC LLM in order to better understand the challenges. Based on our findings, we fine-tune and evaluate a specialized HPC LLM that is shown to be the best-performing open-source code LLM for parallel code generation to date.

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
