
CONCUR: Benchmarking LLMs for Concurrent Code Generation

Jue Huang, Tarek Mahmud, Corina Pasareanu, Guowei Yang
The University of Queensland
arXiv:2603.03683 [cs.SE], 4 Mar 2026

@misc{huang2026concur,
   title={CONCUR: Benchmarking LLMs for Concurrent Code Generation},
   author={Jue Huang and Tarek Mahmud and Corina Pasareanu and Guowei Yang},
   year={2026},
   eprint={2603.03683},
   archivePrefix={arXiv},
   primaryClass={cs.SE},
   url={https://arxiv.org/abs/2603.03683}
}

Leveraging Large Language Models (LLMs) for code generation has become common practice in software engineering, and benchmarks have been established to evaluate the code-generation capabilities of LLMs. However, existing benchmarks focus primarily on sequential code and cannot effectively evaluate LLMs on concurrent code generation. Compared to sequential code, concurrent code is more complex and exhibits unique classes of bugs, such as deadlocks and race conditions, that do not occur in sequential code. A benchmark designed for sequential code generation is therefore not sufficient for evaluating concurrent code generation with LLMs. To address this gap, we designed CONCUR, a benchmark specifically aimed at evaluating the capability of LLMs to generate concurrent code. CONCUR consists of a base set of 43 concurrency problems derived from a standard concurrency textbook, together with 72 validated mutant variants, for 115 problems in total. The base problems form the semantic core of the benchmark, while the mutants expand its linguistic and structural diversity. We evaluated a range of LLMs on CONCUR, highlighting limitations of current models. Overall, our work provides a novel direction for evaluating the capability of LLMs to generate concurrent code.
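For readers unfamiliar with the bug classes the abstract mentions, the following minimal Python sketch (our own illustration, not taken from the CONCUR benchmark) shows the kind of race condition that makes concurrent code harder to generate correctly than sequential code: an unsynchronized read-modify-write on a shared counter can silently lose updates, while guarding it with a lock makes the result deterministic.

```python
import threading

def run_counter(n_threads=4, iters=100_000, use_lock=True):
    """Increment a shared counter from several threads.

    With use_lock=True the increments are serialized and the final
    count is always n_threads * iters. With use_lock=False the
    unprotected `count += 1` (a read-modify-write) can interleave
    across threads and lose updates -- a classic race condition.
    """
    count = 0
    lock = threading.Lock()

    def worker():
        nonlocal count
        for _ in range(iters):
            if use_lock:
                with lock:
                    count += 1
            else:
                count += 1  # unsynchronized: not atomic, racy

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count

print(run_counter())  # with the lock: always 400000
```

A sequential benchmark never has to distinguish these two variants, since both are correct when run in a single thread; that is precisely the gap a concurrency-focused benchmark like CONCUR targets.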

