
Face.evoLVe: A High-Performance Face Recognition Library

Qingzhong Wang, Pengfei Zhang, Haoyi Xiong, Jian Zhao
Baidu Research, Beijing, China
arXiv:2107.08621 [cs.CV], 20 Jul 2021

@misc{wang2021faceevolve,
   title={Face.evoLVe: A High-Performance Face Recognition Library},
   author={Qingzhong Wang and Pengfei Zhang and Haoyi Xiong and Jian Zhao},
   year={2021},
   eprint={2107.08621},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}

In this paper, we develop face.evoLVe, a comprehensive library that collects and implements a wide range of popular deep learning-based methods for face recognition. First, face.evoLVe is composed of key components covering the full face analytics pipeline, including face alignment, data processing, various backbones, losses, and a bag of tricks for improving performance. Second, face.evoLVe supports multi-GPU training on top of different deep learning platforms, such as PyTorch and PaddlePaddle, which enables researchers to work both on large-scale datasets with millions of images and on low-shot counterparts with limited well-annotated data. More importantly, along with face.evoLVe we release the common benchmark datasets with images both before and after alignment, together with source code and trained models. All of these efforts lower the technical burden of reproducing existing methods for comparison, so that users of our library can focus on developing advanced approaches more efficiently. Last but not least, face.evoLVe is well designed and actively evolving, so new face recognition approaches can be easily plugged into our framework. We have used face.evoLVe to participate in a number of face recognition competitions and secured first place. Both the PyTorch and PaddlePaddle versions are publicly available. Face.evoLVe has been widely used for face analytics, receiving 2.4K stars and 622 forks.
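The final stage of the pipeline described above, comparing backbone embeddings to verify identity, can be sketched independently of the library. This is not face.evoLVe's actual API; it is a minimal, self-contained illustration of the standard cosine-similarity verification step that the embeddings produced by any such backbone feed into, with hypothetical function names and a toy threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(emb1, emb2, threshold=0.5):
    """Declare 'same identity' when similarity exceeds a tuned threshold.

    The threshold value here is illustrative; in practice it is chosen
    on a validation set for a target false-accept rate.
    """
    return cosine_similarity(emb1, emb2) >= threshold

# Toy 3-D embeddings standing in for backbone outputs
# (real face embeddings are typically 512-D).
same_person = verify([0.6, 0.8, 0.0], [0.58, 0.81, 0.05])   # nearly parallel
diff_person = verify([0.6, 0.8, 0.0], [-0.7, 0.1, 0.7])     # nearly orthogonal
```

Margin-based losses such as those collected in the library are designed precisely to make this cosine comparison discriminative, pushing same-identity embeddings together and different-identity embeddings apart on the unit hypersphere.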

* * *


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
