
Pre-Training LLMs on a budget: A comparison of three optimizers

Joel Schlotthauer, Christian Kroos, Chris Hinze, Viktor Hangya, Luzian Hahn, Fabian Küch
Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany
arXiv:2507.08472 [cs.LG], 11 Jul 2025

@misc{schlotthauer2025pretrainingllmsbudgetcomparison,
   title={Pre-Training LLMs on a budget: A comparison of three optimizers},
   author={Joel Schlotthauer and Christian Kroos and Chris Hinze and Viktor Hangya and Luzian Hahn and Fabian Küch},
   year={2025},
   eprint={2507.08472},
   archivePrefix={arXiv},
   primaryClass={cs.LG},
   url={https://arxiv.org/abs/2507.08472}
}


Optimizers play a decisive role in reducing pre-training times for LLMs and achieving better-performing models. In this study, we compare three major variants: the de-facto standard AdamW, the simpler Lion, developed through an evolutionary search, and the second-order optimizer Sophia. For better generalization, we train with two different base architectures and use a single- and a multiple-epoch approach while keeping the number of tokens constant. Using the Maximal Update Parametrization and smaller proxy models, we tune relevant hyperparameters separately for each combination of base architecture and optimizer. We found that while the results from all three optimizers were in approximately the same range, Sophia exhibited the lowest training and validation loss, Lion was the fastest in terms of training GPU hours, and AdamW led to the best downstream evaluation results.
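As a rough illustration of the comparison described in the abstract (not the authors' code), the sketch below instantiates the three optimizers for the same set of model parameters. AdamW ships with PyTorch; the Lion and SophiaG classes are assumed to come from the third-party lion-pytorch package and the reference Sophia implementation, respectively, and all hyperparameter values are placeholders rather than the tuned values from the paper.

# Sketch: setting up the three optimizers compared in the paper for one model.
# Assumptions: `lion_pytorch` (https://github.com/lucidrains/lion-pytorch) and the
# reference `sophia.py` by Liu et al. are installed/on the path; the hyperparameter
# values below are illustrative, not the ones tuned in the study.
import torch
import torch.nn as nn
from lion_pytorch import Lion      # assumed third-party package
from sophia import SophiaG         # assumed reference implementation

model = nn.TransformerEncoderLayer(d_model=512, nhead=8)  # stand-in for an LLM

optimizers = {
    "adamw": torch.optim.AdamW(model.parameters(), lr=3e-4,
                               betas=(0.9, 0.95), weight_decay=0.1),
    # Lion's sign-based update has constant magnitude, so it is usually run with
    # a smaller learning rate and larger weight decay than AdamW.
    "lion": Lion(model.parameters(), lr=1e-4,
                 betas=(0.9, 0.99), weight_decay=0.3),
    # SophiaG is a second-order method: rho clips each update against an
    # estimate of the diagonal Hessian that is refreshed periodically.
    "sophia": SophiaG(model.parameters(), lr=3e-4,
                      betas=(0.965, 0.99), rho=0.04, weight_decay=0.1),
}

In the study's setup, each optimizer would additionally get its own hyperparameters tuned on smaller proxy models via the Maximal Update Parametrization before scaling up; that tuning step is not shown here.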