
Optimizing a Hardware Network Stack to Realize an In-Network ML Inference Application

Marco Hartmann, Lukas Weber, Johannes Wirth, Lukas Sommer, Andreas Koch
Embedded Systems and Applications Group, TU Darmstadt, Germany
IEEE/ACM International Workshop on Heterogeneous High-performance Reconfigurable Computing (H2RC), 2021

@inproceedings{hartmann2021optimizing,
  title={Optimizing a Hardware Network Stack to Realize an In-Network ML Inference Application},
  author={Hartmann, Marco and Weber, Lukas and Wirth, Johannes and Sommer, Lukas and Koch, Andreas},
  booktitle={IEEE/ACM International Workshop on Heterogeneous High-performance Reconfigurable Computing (H2RC)},
  year={2021}
}

FPGAs are an interesting platform for the implementation of network-attached accelerators, either in the form of smart network interface cards or as In-Network Processing accelerators. Both application scenarios require a high-throughput hardware network stack. In this work, we integrate such a stack into the open-source TaPaSCo framework and implement a library of easy-to-use design primitives for network functionality in modern HDLs. To further facilitate the development of network-attached FPGA accelerators, the library is complemented by a handy simulation framework. In our evaluation, we demonstrate that the integrated and extended stack can operate at or close to the theoretical maximum, both for the stack itself and for a network-attached machine learning inference appliance.
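
The abstract measures the stack against the "theoretical maximum" of the link. As a minimal illustration of how such a bound is typically computed for a UDP/IP hardware stack, the following Python sketch derives the maximum achievable UDP goodput from per-packet Ethernet, IP, and UDP overheads. The 100 Gbit/s link rate and the payload sizes are assumptions chosen for this sketch, not figures taken from the paper.

# Illustrative only: theoretical maximum UDP goodput of an Ethernet link,
# i.e. the bound a line-rate hardware network stack is measured against.
# Link rate and payload sizes are assumptions, not values from the paper.

LINK_RATE_BPS = 100e9          # assumed link rate: 100 Gbit/s Ethernet

# Fixed per-packet overhead on the wire (bytes)
PREAMBLE_SFD   = 8             # Ethernet preamble + start-of-frame delimiter
INTERFRAME_GAP = 12            # minimum inter-frame gap
ETH_HEADER     = 14            # destination MAC, source MAC, EtherType
ETH_FCS        = 4             # frame check sequence
IP_HEADER      = 20            # IPv4 header without options
UDP_HEADER     = 8             # UDP header

def udp_goodput_gbps(payload_bytes: int) -> float:
    """Maximum achievable UDP payload throughput for a given payload size."""
    wire_bytes = (PREAMBLE_SFD + INTERFRAME_GAP + ETH_HEADER + ETH_FCS
                  + IP_HEADER + UDP_HEADER + payload_bytes)
    return LINK_RATE_BPS * payload_bytes / wire_bytes / 1e9

for size in (64, 512, 1472):   # 1472 B payload fills a standard 1500 B MTU
    print(f"{size:5d} B payload -> {udp_goodput_gbps(size):6.2f} Gbit/s goodput")

For example, a 1472-byte payload occupies 1538 bytes on the wire, so the goodput bound on an assumed 100 Gbit/s link is about 95.7 Gbit/s; small packets are dominated by the fixed per-packet overhead, which is why line-rate operation at small packet sizes is the harder case for a hardware stack.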