Massive Image Editing on the Cloud
SCI Institute, University of Utah, USA
Robotics and Applications, 2011
@inproceedings{summa2011massive,
  title        = {Massive Image Editing on the Cloud},
  author       = {Summa, B. and Vo, H. T. and Pascucci, V. and Silva, C.},
  booktitle    = {Robotics and Applications},
  year         = {2011},
  organization = {ACTA Press}
}
Processing massive imagery in a distributed environment currently requires a skilled team to handle communication, synchronization, faults, and data/process distribution efficiently. Moreover, these implementations are highly optimized for a specific system or cluster, so portability and performance gains from hardware improvements are rarely considered. Much like early GPU computing, cluster computing for graphics remains a highly specialized field reserved for a few experts. In this work, we explore the cloud as a possible alternative to the status quo, abstracting away much of the complexity of current implementations. As gigapixel images grow in prevalence, the need for a higher level of abstraction that enables broadly accessible deployment is clear, much as CUDA, OpenCL, and DirectCompute emerged for multicore and general-purpose GPU computing. The increasing availability of cloud resources as a commodity offers a unique opportunity to adopt this level of abstraction and to extend the development and distribution of large-image algorithms to a much wider community. This can drastically decrease deployment time for algorithms, allowing faster testing of new ideas. The abstraction of the cloud also permits simple, system-oblivious implementations that are more portable, more fault-tolerant, and more likely to scale as hardware improves. In this paper, we detail how to reformulate graphics algorithms to perform well on the cloud and what considerations are needed for an efficient implementation. Specifically, we use gradient-domain techniques to stitch large panoramas on Hadoop, the Apache Software Foundation's open-source implementation of Google's MapReduce framework. With a proper redesign of current algorithms, we show how to balance processing and data movement to achieve implementations well suited to the cloud.
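The abstract does not include code, so the following is only a minimal sketch of how one step of such a gradient-domain solver could be phrased as a Hadoop MapReduce job: a single Jacobi sweep for the Poisson equation behind gradient-domain stitching, where a pixel's new value is the average of its neighbors corrected by the divergence b of the composite gradient field. The class names (PoissonJacobi, SweepMapper, SweepReducer), the "row col u b" text record layout, and the per-pixel keying are illustrative assumptions, not the authors' design.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PoissonJacobi {

  // Input records: "row col u b" (or "row,col <TAB> u b" from a previous
  // sweep), where u is the current estimate and b the divergence of the
  // stitched gradient field at that pixel. (Hypothetical layout.)
  public static class SweepMapper
      extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
        throws IOException, InterruptedException {
      String[] f = line.toString().replace(',', ' ').trim().split("\\s+");
      int r = Integer.parseInt(f[0]), c = Integer.parseInt(f[1]);
      double u = Double.parseDouble(f[2]), b = Double.parseDouble(f[3]);
      // Scatter this pixel's current value to its four neighbors.
      ctx.write(new Text((r - 1) + "," + c), new Text("N " + u));
      ctx.write(new Text((r + 1) + "," + c), new Text("N " + u));
      ctx.write(new Text(r + "," + (c - 1)), new Text("N " + u));
      ctx.write(new Text(r + "," + (c + 1)), new Text("N " + u));
      // Keep this pixel's own right-hand side for the update step.
      ctx.write(new Text(r + "," + c), new Text("B " + b));
    }
  }

  // Gather neighbor values and apply the Jacobi update
  //   u_new = (sum of neighbors - b) / (number of neighbors);
  // dividing by the received count gives Neumann-style boundaries.
  public static class SweepReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> vals, Context ctx)
        throws IOException, InterruptedException {
      double sum = 0.0, b = 0.0;
      int n = 0;
      boolean inGrid = false;
      for (Text v : vals) {
        String[] f = v.toString().split(" ");
        if (f[0].equals("N")) { sum += Double.parseDouble(f[1]); n++; }
        else { b = Double.parseDouble(f[1]); inGrid = true; }
      }
      if (!inGrid || n == 0) return; // key fell outside the image
      double uNew = (sum - b) / n;
      // Emit "row,col <TAB> u b" so this output can seed the next sweep.
      ctx.write(key, new Text(uNew + " " + b));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "jacobi-sweep");
    job.setJarByClass(PoissonJacobi.class);
    job.setMapperClass(SweepMapper.class);
    job.setReducerClass(SweepReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

A driver script would run one such job per sweep, feeding each pass's output directory to the next until the residual is small. Keying individual pixels as above maximizes shuffle traffic; in line with the abstract's point about balancing processing and data movement, a realistic formulation would key tiles of pixels and run many local relaxation steps per MapReduce pass.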
November 17, 2011 by hgpu