
Training a Vision Transformer from scratch in less than 24 hours with 1 GPU

Saghar Irandoust, Thibaut Durand, Yunduz Rakhmangulova, Wenjie Zi, Hossein Hajimirsadeghi
Borealis AI
arXiv:2211.05187 [cs.CV], 9 Nov 2022

@misc{irandoust2022training,
   doi={10.48550/ARXIV.2211.05187},
   url={https://arxiv.org/abs/2211.05187},
   author={Irandoust, Saghar and Durand, Thibaut and Rakhmangulova, Yunduz and Zi, Wenjie and Hajimirsadeghi, Hossein},
   keywords={Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, I.2.10},
   title={Training a Vision Transformer from scratch in less than 24 hours with 1 GPU},
   publisher={arXiv},
   year={2022},
   copyright={arXiv.org perpetual, non-exclusive license}
}


Transformers have become central to recent advances in computer vision. However, training a vision Transformer (ViT) model from scratch can be resource-intensive and time-consuming. In this paper, we explore approaches to reducing the training costs of ViT models. We introduce algorithmic improvements that enable training a ViT model from scratch with limited hardware (1 GPU) and time (24 hours) resources. First, we propose an efficient approach to add locality to the ViT architecture. Second, we develop a new image-size curriculum learning strategy, which reduces the number of patches extracted from each image at the beginning of training. Finally, we propose a new variant of the popular ImageNet1k benchmark that adds hardware and time constraints. We evaluate our contributions on this benchmark and show that they can significantly improve performance within the proposed training budget.
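
The savings from the image-size curriculum come from how a ViT tokenizes its input: with patch size P, an HxW image yields (H/P)x(W/P) patches, so a 128x128 image cut into 16x16 patches produces 64 tokens versus 196 at 224x224. Below is a minimal sketch of such a curriculum; the schedule, the helper names (SIZE_SCHEDULE, image_size_for_epoch, resize_batch), and the fixed 16x16 patch size are illustrative assumptions, not the paper's implementation, which also has to handle details such as positional-embedding interpolation across resolutions.

# Minimal sketch of an image-size curriculum for ViT training (assumptions:
# fixed 16x16 patches, hypothetical epoch-to-resolution schedule).
import torch
import torch.nn.functional as F

PATCH = 16  # assumed ViT patch size

# Hypothetical curriculum: small images early, full resolution at the end.
SIZE_SCHEDULE = {0: 128, 10: 160, 20: 192, 30: 224}  # start epoch -> image size

def image_size_for_epoch(epoch):
    # Return the size of the latest schedule stage already reached.
    size = min(SIZE_SCHEDULE.values())
    for start, s in sorted(SIZE_SCHEDULE.items()):
        if epoch >= start:
            size = s
    return size

def resize_batch(images, size):
    # Downsample a batch (N, C, H, W) so the ViT extracts fewer patches.
    return F.interpolate(images, size=(size, size), mode="bilinear",
                         align_corners=False)

batch = torch.randn(8, 3, 224, 224)   # dummy full-resolution batch
small = resize_batch(batch, 128)      # (8, 3, 128, 128) -> 64 patches each

for epoch in (0, 10, 20, 30):
    s = image_size_for_epoch(epoch)
    print(f"epoch {epoch:2d}: {s}x{s} images -> {(s // PATCH) ** 2} patches")

Since self-attention cost grows quadratically with the number of tokens, the early 64-token epochs would be roughly an order of magnitude cheaper in attention compute than full-resolution 196-token epochs, which is plausibly where most of the budget savings come from.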