
The Ecological Footprint of Neural Machine Translation Systems

Dimitar Shterionov, Eva Vanmassenhove
Department of Cognitive Science and Artificial Intelligence, Tilburg University
arXiv:2202.02170 [cs.CL], (4 Feb 2022)

@misc{shterionov2022ecological,
   title={The Ecological Footprint of Neural Machine Translation Systems},
   author={Dimitar Shterionov and Eva Vanmassenhove},
   year={2022},
   eprint={2202.02170},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
}

Over the past decade, deep learning (DL) has led to significant advancements in various fields of artificial intelligence, including machine translation (MT). These advancements would not be possible without the ever-growing volumes of data and the hardware that allows large DL models to be trained efficiently. Due to their large number of computing cores and dedicated memory, graphics processing units (GPUs) are a more effective hardware solution than central processing units (CPUs) for training and inference with DL models. However, GPUs are very power-demanding, and their electrical power consumption has economic as well as ecological implications. This chapter focuses on the ecological footprint of neural MT systems. It starts from the power drain during the training of and the inference with neural MT models and moves towards the environmental impact, in terms of carbon dioxide (CO2) emissions. Different architectures (RNN and Transformer) and different GPUs (consumer-grade NVIDIA 1080Ti and workstation-grade NVIDIA P100) are compared. Then, the overall CO2 emissions are calculated for Ireland and the Netherlands. The NMT models and their ecological impact are compared to common household appliances to draw a clearer picture. The last part of this chapter analyses quantization, a technique for reducing the size and complexity of models, as a way to reduce power consumption. As quantized models can run on CPUs, they present a power-efficient inference solution that does not depend on a GPU.
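The country-specific CO2 figures mentioned above follow from a simple relation: energy consumed (in kWh) multiplied by the carbon intensity of the local electricity grid (kg CO2 per kWh), which differs between Ireland and the Netherlands. A minimal sketch of that arithmetic is below; the intensity values and the `co2_kg` helper are illustrative assumptions, not the figures used in the chapter.

```python
# Back-of-the-envelope CO2 estimate: energy drawn (kWh) times the grid's
# carbon intensity (kg CO2 per kWh). The intensity values below are
# illustrative placeholders, not the values reported in the chapter.
CARBON_INTENSITY = {        # kg CO2 per kWh (assumed example values)
    "Ireland": 0.35,
    "Netherlands": 0.45,
}

def co2_kg(avg_power_watts: float, hours: float, country: str) -> float:
    """CO2 emitted by a job drawing avg_power_watts for `hours` hours."""
    energy_kwh = avg_power_watts / 1000.0 * hours
    return energy_kwh * CARBON_INTENSITY[country]

# e.g. a GPU averaging 250 W over a 48-hour training run:
print(co2_kg(250, 48, "Netherlands"))   # 12 kWh * 0.45 = 5.4 kg CO2
```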
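For the quantization discussed in the last part, one common realization is post-training dynamic quantization, in which weights are stored as int8 and activations are quantized on the fly at inference time. The sketch below applies PyTorch's `quantize_dynamic` to a stand-in model; it is an assumed illustration of the general technique, not the chapter's actual setup.

```python
import torch
import torch.nn as nn

# Stand-in for an NMT layer stack; the chapter's actual RNN and
# Transformer models are assumed, not reproduced here.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
)

# Post-training dynamic quantization: weights of the listed module types
# are stored as int8; activations are quantized at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model runs on CPU, trading a little accuracy for a
# smaller memory footprint and lower power draw per translated sentence.
x = torch.randn(1, 512)
print(quantized(x).shape)   # torch.Size([1, 512])
```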