
Improving Code Generation via Small Language Model-as-a-judge

Giuseppe Crupi, Rosalia Tufano, Gabriele Bavota
Università della Svizzera italiana, Switzerland
arXiv:2602.11911 [cs.SE] (12 Feb 2026)

@misc{crupi2026improving,
   title={Improving Code Generation via Small Language Model-as-a-judge},
   author={Giuseppe Crupi and Rosalia Tufano and Gabriele Bavota},
   year={2026},
   eprint={2602.11911},
   archivePrefix={arXiv},
   primaryClass={cs.SE},
   url={https://arxiv.org/abs/2602.11911}
}


Large language models (LLMs) have shown remarkable capabilities in automated code generation. While effective for mainstream languages, they may underperform on less common or domain-specific languages, prompting companies to develop in-house code generators. While open-source models can be trained for this, only LLMs with tens of billions of parameters match the performance of commercial tools, demanding costly training and deployment. Recent work proposed supporting code generation with small language models (SLMs) by generating multiple candidate solutions and using another SLM to select the most likely correct one. The most recent work in this area is by Sun et al. [29], who present RankEF, a T5 model trained to rank code solutions using both execution-based and non-execution-based information. However, Sun et al. do not assess the T5 ranker's classification accuracy, that is, how often it misjudges correct implementations as incorrect or vice versa, leaving open questions about the reliability of LMs as code correctness judges for other tasks (e.g., automated code review). Moreover, their experiments involve relatively old models, leaving it unclear to what extent such a methodology would still help companies cheaply train their own code generators with performance comparable to that of massive LLMs. We present a study addressing these limitations. We train several state-of-the-art SLMs as code correctness judges and assess their ability to discriminate between correct and incorrect implementations. We show that modern SLMs outperform RankEF, even without exploiting execution-based information. When used as code rankers, they achieve higher performance gains than RankEF and perform competitively with LLMs 5-25x larger, at a fraction of the cost.
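
The selection scheme described in the abstract (sample several candidate solutions with a code model, then let a separately fine-tuned SLM judge score each candidate and keep the best-ranked one) can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: the judge checkpoint name is a placeholder for any sequence-classification SLM fine-tuned to label (task description, candidate code) pairs as correct or incorrect.

# Illustrative sketch (not the authors' code). The judge checkpoint name below
# is a placeholder for any sequence-classification SLM fine-tuned to label
# (task description, candidate code) pairs as correct/incorrect.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

JUDGE = "my-org/slm-code-judge"  # hypothetical fine-tuned judge checkpoint
tokenizer = AutoTokenizer.from_pretrained(JUDGE)
judge = AutoModelForSequenceClassification.from_pretrained(JUDGE)  # labels: 0=wrong, 1=correct
judge.eval()

def rank_candidates(task: str, candidates: list[str]) -> list[str]:
    """Score each candidate with the judge and return them best-first."""
    scores = []
    for code in candidates:
        inputs = tokenizer(task, code, truncation=True, max_length=2048,
                           return_tensors="pt")
        with torch.no_grad():
            logits = judge(**inputs).logits
        scores.append(torch.softmax(logits, dim=-1)[0, 1].item())  # P(correct)
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    return [candidates[i] for i in order]

# Usage: sample N solutions from any code SLM for a given prompt, then keep
# the top-ranked one, e.g. best = rank_candidates(prompt, candidates)[0]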
