
From Prompts to Performance: Evaluating LLMs for Task-based Parallel Code Generation

Linus Bantel, Moritz Strack, Alexander Strack, Dirk Pflüger
Institute for Parallel and Distributed Systems, University of Stuttgart, Stuttgart, Germany
arXiv:2602.22240 [cs.PL] (24 Feb 2026)

@misc{bantel2026from,
   title={From Prompts to Performance: Evaluating LLMs for Task-based Parallel Code Generation},
   author={Linus Bantel and Moritz Strack and Alexander Strack and Dirk Pflüger},
   year={2026},
   eprint={2602.22240},
   archivePrefix={arXiv},
   primaryClass={cs.PL},
   url={https://arxiv.org/abs/2602.22240}
}


Large Language Models (LLMs) show strong code-generation abilities, but their capacity to produce efficient parallel programs is less well studied. This paper explores how LLMs generate task-based parallel code from three kinds of input prompt: natural-language problem descriptions, sequential reference implementations, and parallel pseudocode. We focus on three programming frameworks: OpenMP Tasking, C++ standard parallelism, and the asynchronous many-task runtime HPX, each offering a different level of abstraction and control over task execution. We evaluate the LLM-generated solutions for correctness and scalability. Our results reveal both strengths and weaknesses of LLMs with respect to problem complexity and framework. Finally, we discuss what these findings mean for future LLM-assisted development in high-performance and scientific computing.
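To make concrete what "task-based parallel code" means here, below is a minimal sketch in the OpenMP Tasking style, the lowest-level of the three frameworks the paper evaluates. The recursive Fibonacci kernel is a common textbook example chosen only to illustrate the tasking idiom; it is not taken from the paper's benchmark set.

// Minimal OpenMP Tasking sketch (compile with: g++ -fopenmp fib_task.cpp).
// Illustrative only; production code would add a sequential cutoff for
// small n to keep task-creation overhead from dominating the runtime.
#include <cstdio>

long fib(long n) {
    if (n < 2) return n;          // base case: no tasks spawned
    long x, y;
    // Spawn the two recursive calls as independent tasks; the OpenMP
    // runtime schedules them across the threads of the enclosing
    // parallel region. shared(...) is needed because locals default
    // to firstprivate inside a task.
    #pragma omp task shared(x)
    x = fib(n - 1);
    #pragma omp task shared(y)
    y = fib(n - 2);
    #pragma omp taskwait          // join both child tasks before combining
    return x + y;
}

int main() {
    long result = 0;
    #pragma omp parallel
    #pragma omp single            // one thread creates the root of the task tree
    result = fib(30);
    std::printf("fib(30) = %ld\n", result);
    return 0;
}

In HPX, the analogous structure would spawn the recursive calls with hpx::async and combine the resulting futures, while C++ standard parallelism expresses parallelism more coarsely through execution policies such as std::execution::par on standard algorithms. These differing levels of control are exactly the abstraction spectrum the paper probes.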