Sparse Convex Optimization on GPUs
University of Illinois at Chicago
University of Illinois at Chicago, 2015
@phdthesis{maggioni2015sparse,
title={Sparse Convex Optimization on GPUs},
author={Maggioni, Marco},
year={2015},
school={University of Illinois at Chicago}
}
Convex optimization is a fundamental mathematical framework for general problem solving. The computational time needed to solve problems formulated as Linear Programming, Integer Linear Programming or Quadratic Programming has an immediate impact on countless application fields, and it is critical in determining which problems we will be able to solve in the future. From the very beginning, the research community has continually investigated new algorithmic and numerical techniques to speed up convex optimization. Recently, the focus has expanded to parallel computer architectures and their ability to perform high-throughput computation. This dissertation continues in the same research direction, developing novel computational techniques tailored for modern GPUs. We focus on problems with sparse structure, which are arguably the most challenging to solve on throughput-oriented many-core architectures that are naturally well-suited for dense computations. As an original contribution, we combine the leading ideas in SpMV optimization on GPUs into an advanced sparse format known as AdELL+. We also speed up the class of optimization algorithms known as Interior Point Methods with GPU-based adaptive strategies to select between Cholesky factorization and Conjugate Gradient. Finally, we design an incremental matrix data structure that provides the foundation for implementing "branch-and-cut" ILP solvers.
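
To make the SpMV contribution concrete, the sketch below shows a plain ELLPACK kernel, the baseline format family that warp-balanced schemes such as AdELL+ refine. It is not the AdELL+ format itself; the array names, the column-major padded layout and the -1 padding convention are assumptions made for this illustration.

// Illustrative ELLPACK SpMV kernel (CUDA). A minimal sketch of the baseline
// format family that AdELL+ extends, not the AdELL+ format itself. The array
// names and the -1 padding convention are assumptions made for this example.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void spmv_ell(int num_rows, int max_nnz_per_row,
                         const int *ell_col, const double *ell_val,
                         const double *x, double *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per matrix row
    if (row >= num_rows) return;

    double dot = 0.0;
    for (int k = 0; k < max_nnz_per_row; ++k) {
        int idx = k * num_rows + row;  // column-major padded layout keeps loads coalesced
        int col = ell_col[idx];
        if (col >= 0)                  // padding slots are marked with column index -1
            dot += ell_val[idx] * x[col];
    }
    y[row] = dot;
}

int main()
{
    // Toy 3x3 matrix [[4,1,0],[0,3,2],[1,0,5]], two nonzeros per row (no padding needed).
    const int n = 3, k = 2;
    int    h_col[n * k] = { 0, 1, 0,   1, 2, 2 };   // slot-major (column-major) storage
    double h_val[n * k] = { 4, 3, 1,   1, 2, 5 };
    double h_x[n] = { 1, 1, 1 }, h_y[n];

    int *d_col; double *d_val, *d_x, *d_y;
    cudaMalloc(&d_col, sizeof(h_col));
    cudaMalloc(&d_val, sizeof(h_val));
    cudaMalloc(&d_x,   sizeof(h_x));
    cudaMalloc(&d_y,   sizeof(h_y));
    cudaMemcpy(d_col, h_col, sizeof(h_col), cudaMemcpyHostToDevice);
    cudaMemcpy(d_val, h_val, sizeof(h_val), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x,   h_x,   sizeof(h_x),   cudaMemcpyHostToDevice);

    spmv_ell<<<1, 32>>>(n, k, d_col, d_val, d_x, d_y);
    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);

    printf("y = [%g, %g, %g]\n", h_y[0], h_y[1], h_y[2]);   // expected: [5, 5, 6]
    cudaFree(d_col); cudaFree(d_val); cudaFree(d_x); cudaFree(d_y);
    return 0;
}

Plain ELLPACK wastes work and memory when row lengths vary; an adaptive, warp-balanced format in the AdELL family addresses exactly that irregularity, which is what makes it attractive for the sparse problems targeted here.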
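For the Interior Point contribution, the adaptive strategy weighs a direct Cholesky factorization against an iterative Conjugate Gradient solve of the normal equations at each iteration. As background on the iterative side of that choice, here is a minimal unpreconditioned CG loop on the GPU written with CUDA and Thrust; the dense matrix-vector kernel and the small SPD test system are placeholders rather than the solver developed in the dissertation.

// Minimal Conjugate Gradient sketch on the GPU (CUDA + Thrust). This
// illustrates the iterative side of the Cholesky-vs-CG choice; the dense
// matrix-vector kernel and the 3x3 SPD test system stand in for the normal
// equations of an interior point iteration and are not the thesis solver.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <thrust/copy.h>
#include <thrust/device_vector.h>
#include <thrust/inner_product.h>
#include <thrust/transform.h>

__global__ void dense_matvec(int n, const double *A, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    double s = 0.0;
    for (int j = 0; j < n; ++j) s += A[i * n + j] * x[j];
    y[i] = s;
}

// Elementwise functor computing a*x + y, used for the CG vector updates.
struct axpy {
    double a;
    __host__ __device__ double operator()(double xi, double yi) const { return a * xi + yi; }
};

int main()
{
    const int n = 3;
    std::vector<double> hA = { 4, 1, 0,
                               1, 3, 1,
                               0, 1, 2 };              // SPD stand-in for the normal equations matrix
    std::vector<double> hb = { 1, 2, 3 };

    thrust::device_vector<double> A(hA.begin(), hA.end()), b(hb.begin(), hb.end());
    thrust::device_vector<double> x(n, 0.0), r = b, p = b, Ap(n);

    double rs_old = thrust::inner_product(r.begin(), r.end(), r.begin(), 0.0);
    for (int it = 0; it < 100 && rs_old > 1e-20; ++it) {
        dense_matvec<<<1, 32>>>(n, thrust::raw_pointer_cast(A.data()),
                                thrust::raw_pointer_cast(p.data()),
                                thrust::raw_pointer_cast(Ap.data()));
        cudaDeviceSynchronize();
        double alpha = rs_old / thrust::inner_product(p.begin(), p.end(), Ap.begin(), 0.0);
        thrust::transform(p.begin(),  p.end(),  x.begin(), x.begin(), axpy{ alpha});   // x += alpha*p
        thrust::transform(Ap.begin(), Ap.end(), r.begin(), r.begin(), axpy{-alpha});   // r -= alpha*Ap
        double rs_new = thrust::inner_product(r.begin(), r.end(), r.begin(), 0.0);
        thrust::transform(p.begin(),  p.end(),  r.begin(), p.begin(), axpy{rs_new / rs_old}); // p = r + beta*p
        rs_old = rs_new;
    }

    std::vector<double> hx(n);
    thrust::copy(x.begin(), x.end(), hx.begin());
    printf("x = [%g, %g, %g]\n", hx[0], hx[1], hx[2]);
    return 0;
}

A production interior point solver would precondition this iteration and, as the abstract describes, switch adaptively between such an iterative solve and a direct Cholesky factorization depending on which is cheaper for the problem at hand.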
February 23, 2016 by hgpu