
Small-Bench NLP: Benchmark for small single GPU trained models in Natural Language Processing

Kamal Raj Kanakarajan, Bhuvana Kundumani, Malaikannan Sankarasubbu
SAAMA AI Research Lab, Chennai, India
arXiv:2109.10847 [cs.LG] (23 Sep 2021)

@misc{kanakarajan2021smallbench,
   title={Small-Bench NLP: Benchmark for small single GPU trained models in Natural Language Processing},
   author={Kamal Raj Kanakarajan and Bhuvana Kundumani and Malaikannan Sankarasubbu},
   year={2021},
   eprint={2109.10847},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}


Recent progress in the Natural Language Processing domain has given us several State-of-the-Art (SOTA) pretrained models that can be fine-tuned for specific tasks. These large models, with billions of parameters trained on numerous GPUs/TPUs over weeks, lead the benchmark leaderboards. In this paper, we discuss the need for a benchmark for cost- and time-effective smaller models trained on a single GPU. This will enable researchers with resource constraints to experiment with novel and innovative ideas on tokenization, pretraining tasks, architecture, fine-tuning methods, etc. We set up Small-Bench NLP, a benchmark for small, efficient neural language models trained on a single GPU. The Small-Bench NLP benchmark comprises eight NLP tasks on the publicly available GLUE datasets and a leaderboard to track the progress of the community. Our ELECTRA-DeBERTa (15M parameters) small model architecture achieves an average score of 81.53, which is comparable to the 82.20 of BERT-Base (110M parameters). Our models, code and leaderboard are available.
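As a rough illustration of the kind of single-GPU evaluation the benchmark targets, the sketch below fine-tunes a small pretrained model on one GLUE task (MRPC) with the Hugging Face Transformers Trainer API. The checkpoint (google/electra-small-discriminator) and the hyperparameters are illustrative assumptions only, not the authors' Small-Bench NLP models or training recipe.

```python
# Hedged sketch: fine-tune a small model on one GLUE task on a single GPU.
# Checkpoint and hyperparameters are assumptions for illustration; they are
# not the paper's ELECTRA-DeBERTa setup.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "google/electra-small-discriminator"  # assumed small (~14M param) stand-in
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

raw = load_dataset("glue", "mrpc")          # one of the eight GLUE tasks
metric = evaluate.load("glue", "mrpc")      # task-specific GLUE metric

def tokenize(batch):
    # MRPC is a sentence-pair classification task
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

encoded = raw.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return metric.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = TrainingArguments(
    output_dir="electra-small-mrpc",
    per_device_train_batch_size=32,   # illustrative values
    learning_rate=3e-5,
    num_train_epochs=3,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

trainer.train()
print(trainer.evaluate())  # per-task validation score; averaging over tasks gives a GLUE-style benchmark score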
