
Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training

Zhengda Bian, Hongxin Liu, Boxiang Wang, Haichen Huang, Yongbin Li, Chuanrui Wang, Fan Cui, Yang You
HPC-AI Technology Inc.
arXiv:2110.14883 [cs.LG] (28 Oct 2021)

@misc{bian2021colossalai,
    title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
    author={Zhengda Bian and Hongxin Liu and Boxiang Wang and Haichen Huang and Yongbin Li and Chuanrui Wang and Fan Cui and Yang You},
    year={2021},
    eprint={2110.14883},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

The Transformer architecture has improved the performance of deep learning models in domains such as Computer Vision and Natural Language Processing. Better performance, however, comes with larger model sizes, which run up against the memory wall of current accelerator hardware such as GPUs. Training large models such as Vision Transformer, BERT, and GPT on a single GPU or even a single machine is impractical, so there is an urgent demand to train models in a distributed environment. However, distributed training, especially model parallelism, often requires domain expertise in computer systems and architecture, and it remains a challenge for AI researchers to implement complex distributed training solutions for their models. In this paper, we introduce Colossal-AI, a unified parallel training system designed to seamlessly integrate different paradigms of parallelization, including data parallelism, pipeline parallelism, multiple forms of tensor parallelism, and sequence parallelism. Colossal-AI aims to let the AI community write distributed models in the same way they write ordinary models, allowing them to focus on developing the model architecture while separating the concerns of distributed training from the development process. The documentation and source code are available.
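To illustrate this design goal, below is a minimal sketch of a training loop in the style of Colossal-AI's configuration-driven, engine-based API from its early documentation. The exact function names and configuration keys are assumptions here and may differ across releases, and the toy model and dataset are placeholders for illustration only.

# Minimal sketch of Colossal-AI's engine-based training flow (assumed API;
# names such as launch_from_torch/initialize follow the project's early
# documentation and may differ across releases).
import colossalai
import torch
from torch.utils.data import DataLoader, TensorDataset

# Parallelism is declared in a separate config file rather than in model
# code, e.g. a config.py containing:
#     parallel = dict(pipeline=2, tensor=dict(size=4, mode='2d'))
colossalai.launch_from_torch(config='./config.py')

# An ordinary PyTorch module stands in here for a large Transformer.
model = torch.nn.Linear(32, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

# Toy dataset purely for illustration.
dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 10, (64,)))
train_loader = DataLoader(dataset, batch_size=8)

# initialize() wraps the model, optimizer, and dataloader so that the
# parallelism declared in the config is applied transparently.
engine, train_loader, _, _ = colossalai.initialize(
    model, optimizer, criterion, train_loader)

engine.train()
for inputs, labels in train_loader:
    engine.zero_grad()
    outputs = engine(inputs)
    loss = engine.criterion(outputs, labels)
    engine.backward(loss)
    engine.step()

In this style, switching from plain data parallelism to, say, pipeline plus 2D tensor parallelism would be a configuration change rather than a rewrite of the model code, which is the separation of concerns the abstract describes.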