Assessing the Performance-Energy Balance of Graphics Processors for Spectral Unmixing

Sergio Sánchez, Germán León, Antonio Plaza, Enrique S. Quintana-Ortí
Hyperspectral Computing Laboratory (HyperComp), Department of Technology of Computers and Communications, University of Extremadura, 10071 Cáceres, Spain
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014


@article{sanchez2014assessing,
   title={Assessing the Performance-Energy Balance of Graphics Processors for Spectral Unmixing},
   author={S{\'a}nchez, Sergio and Le{\'o}n, Germ{\'a}n and Plaza, Antonio and Quintana-Ort{\'i}, Enrique S.},
   journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
   year={2014}
}






Remotely sensed hyperspectral imaging missions are often limited by onboard power restrictions while simultaneously requiring high computing power to address applications with strict processing-time constraints. In recent years, graphics processing units (GPUs) have emerged as a commodity computing platform suitable for meeting real-time processing requirements in hyperspectral image processing. On the other hand, GPUs are power-hungry devices, which creates a need to explore the trade-off between the high performance and the significant power consumption of computing architectures capable of fast hyperspectral image processing. In this paper, we explore the balance between computing performance and power consumption of GPUs in the context of a popular hyperspectral imaging application: spectral unmixing. Specifically, we investigate several processing chains for spectral unmixing and evaluate them on three different GPUs: two corresponding to the latest NVIDIA generations ("Fermi" and "Kepler") and an alternative low-power system better suited to embedded appliances. Our study provides some observations about the viability of GPUs as effective onboard devices in hyperspectral imaging applications.
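For readers unfamiliar with the application, the core computation behind spectral unmixing can be sketched with the linear mixing model: each pixel spectrum is modeled as a mixture of a few pure "endmember" signatures, weighted by fractional abundances. The snippet below is an illustrative, self-contained sketch of unconstrained least-squares abundance estimation on synthetic data; it is not the paper's GPU implementation, and all names and sizes are made up for the example.

```python
import numpy as np

# Linear mixing model: y = E @ a + noise, where the columns of E are
# endmember spectral signatures and a holds the per-pixel abundances.
rng = np.random.default_rng(0)

bands, endmembers = 50, 3
E = rng.random((bands, endmembers))   # synthetic endmember matrix

a_true = np.array([0.5, 0.3, 0.2])    # ground-truth abundances
y = E @ a_true                        # noise-free mixed-pixel spectrum

# Unconstrained least-squares abundance estimation: solve min ||E a - y||_2
a_est, *_ = np.linalg.lstsq(E, y, rcond=None)

print(np.allclose(a_est, a_true))     # True: the abundances are recovered
```

In the processing chains studied in the paper, this per-pixel solve is exactly the kind of dense, embarrassingly parallel linear algebra that maps well onto GPUs, which is why the performance-energy balance of such devices is of interest for onboard processing.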

