
Source-to-Source Automatic Differentiation of OpenMP Parallel Loops

Jan Hückelheim, Laurent Hascoët
Argonne National Laboratory
arXiv:2111.01861 [cs.MS], (2 Nov 2021)

@misc{hückelheim2021sourcetosource,
   title={Source-to-Source Automatic Differentiation of OpenMP Parallel Loops},
   author={Jan Hückelheim and Laurent Hascoët},
   year={2021},
   eprint={2111.01861},
   archivePrefix={arXiv},
   primaryClass={cs.MS}
}

This paper presents our work toward correct and efficient automatic differentiation of OpenMP parallel worksharing loops in forward and reverse mode. Automatic differentiation is a method to obtain gradients of numerical programs, which are crucial in optimization, uncertainty quantification, and machine learning. The computational cost of computing gradients is a common bottleneck in practice. For applications that are parallelized for multicore CPUs or GPUs using OpenMP, one also wishes to compute the gradients in parallel. We propose a framework to reason about the correctness of the generated derivative code, from which we justify our OpenMP extension to the differentiation model. We implement this model in the automatic differentiation tool Tapenade and present test cases that are differentiated following our extended differentiation procedure. Performance of the generated derivative programs in forward and reverse mode is better than sequential execution, although our reverse mode often scales worse than the input programs.
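To illustrate the kind of transformation the paper addresses, below is a minimal, hedged C/OpenMP sketch (not the paper's Tapenade output; all identifiers are hypothetical). It shows why a forward-mode tangent loop can typically reuse the original worksharing pragma unchanged, whereas a reverse-mode adjoint loop reverses the data flow and may need extra synchronization, here an atomic update, to remain race-free.

```c
#include <omp.h>

/* Primal: each iteration reads a gathered input, writes a private output. */
void f(int n, const int *idx, const double *x, double *y) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = x[idx[i]] * x[idx[i]];
}

/* Forward mode: tangents follow the same access pattern as the primal,
   so the original worksharing pragma can be kept as-is. */
void f_d(int n, const int *idx, const double *x, const double *xd,
         double *y, double *yd) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        yd[i] = 2.0 * x[idx[i]] * xd[idx[i]];
        y[i]  = x[idx[i]] * x[idx[i]];
    }
}

/* Reverse mode: adjoints flow against the original data flow. Several
   iterations may gather from the same x[idx[i]], so their adjoint
   increments can collide; an atomic update (one possible remedy among
   several) keeps the loop parallel and race-free. */
void f_b(int n, const int *idx, const double *x, double *xb,
         const double *yb) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        #pragma omp atomic update
        xb[idx[i]] += 2.0 * x[idx[i]] * yb[i];
    }
}
```

The extra synchronization in the adjoint loop is one reason the reverse-mode code may scale worse than the input program, as the abstract notes.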
