Chunkflow: Distributed Hybrid Cloud Processing of Large 3D Images by Convolutional Nets

Jingpeng Wu, William M. Silversmith, H. Sebastian Seung
Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
arXiv:1904.10489 [cs.DC], (25 Apr 2019)

@misc{wu2019chunkflow,
   title={Chunkflow: Distributed Hybrid Cloud Processing of Large 3D Images by Convolutional Nets},
   author={Jingpeng Wu and William M. Silversmith and H. Sebastian Seung},
   year={2019},
   eprint={1904.10489},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

It is now common to process volumetric biomedical images using 3D Convolutional Networks (ConvNets). This can be challenging for the teravoxel and even petavoxel images that are being acquired today by light or electron microscopy. Here we introduce Chunkflow, a software framework for distributing ConvNet processing over local and cloud GPUs and CPUs. The image volume is divided into overlapping chunks, each chunk is processed by a ConvNet, and the results are blended together to yield the output image. The frontend submits ConvNet tasks to a cloud queue, and the tasks are executed by local and cloud GPUs and CPUs. Thanks to the fault-tolerant architecture of Chunkflow, cost can be greatly reduced by utilizing cheap, unstable cloud instances. Chunkflow currently supports PyTorch for GPUs and PZnet for CPUs. To illustrate its usage, a large 3D brain image from serial section electron microscopy was processed by a 3D ConvNet with a U-Net style architecture. Chunkflow also provides a set of general-purpose chunk operations that can be composed flexibly in a command line interface.
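The overlap-and-blend scheme described in the abstract can be illustrated with a short, self-contained sketch. This is not Chunkflow's actual API: the chunk size, overlap, linear ramp weighting, and the placeholder run_model function below are assumptions made purely for illustration. The sketch splits a 3D volume into overlapping chunks, applies a model to each chunk, and accumulates the weighted outputs so that overlapping regions blend smoothly.

import numpy as np

def run_model(chunk: np.ndarray) -> np.ndarray:
    """Stand-in for a 3D ConvNet forward pass (e.g. a PyTorch U-Net)."""
    return chunk.astype(np.float32)

def blend_weights(shape):
    """Linear ramp weighting per axis: chunk borders get low weight so
    overlapping outputs average smoothly when accumulated."""
    ramps = []
    for n in shape:
        ramp = np.minimum(np.arange(1, n + 1), np.arange(n, 0, -1)).astype(np.float32)
        ramps.append(ramp / ramp.max())
    wz, wy, wx = np.ix_(ramps[0], ramps[1], ramps[2])
    return wz * wy * wx

def chunked_inference(volume, chunk_size=(64, 64, 64), overlap=(16, 16, 16)):
    """Process a 3D volume chunk by chunk and blend the overlapping results."""
    out = np.zeros(volume.shape, dtype=np.float32)
    norm = np.zeros(volume.shape, dtype=np.float32)
    stride = tuple(c - o for c, o in zip(chunk_size, overlap))
    starts = [range(0, max(volume.shape[i] - overlap[i], 1), stride[i]) for i in range(3)]
    for z in starts[0]:
        for y in starts[1]:
            for x in starts[2]:
                sl = tuple(slice(s, min(s + c, d))
                           for s, c, d in zip((z, y, x), chunk_size, volume.shape))
                chunk = volume[sl]
                w = blend_weights(chunk.shape)
                out[sl] += run_model(chunk) * w
                norm[sl] += w
    return out / np.maximum(norm, 1e-8)

if __name__ == "__main__":
    vol = np.random.rand(128, 128, 128).astype(np.float32)
    result = chunked_inference(vol)
    assert np.allclose(result, vol, atol=1e-5)  # identity model round-trips

In the real system, each such chunk becomes an independent task on a cloud queue, which is what makes the workflow fault-tolerant: a worker on a preempted instance simply leaves its task to be picked up by another worker.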