
Speeding up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves

Tobias Domhan, Jost Tobias Springenberg, Frank Hutter
University of Freiburg, Freiburg, Germany
24th International Joint Conference on Artificial Intelligence (IJCAI), 2015

@inproceedings{domhan2015speeding,
   title={Speeding up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves},
   author={Domhan, Tobias and Springenberg, Jost Tobias and Hutter, Frank},
   booktitle={Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI)},
   year={2015}
}

Download (PDF) | View | Source

2812 views

Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting, they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts.
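
To make the early-termination idea concrete, here is a minimal sketch in Python. The paper itself fits a weighted combination of parametric learning-curve models via MCMC to obtain a predictive distribution over the final performance; the simplification below instead fits a single power-law curve with SciPy's curve_fit and stops a run when the extrapolated final accuracy falls short of the best configuration evaluated so far. The function names (pow_curve, predict_final_accuracy, should_terminate) and the specific data are illustrative choices, not taken from the paper.

import numpy as np
from scipy.optimize import curve_fit

def pow_curve(t, a, b, c):
    # Power-law learning curve: predicted accuracy at epoch t,
    # saturating toward c as t grows.
    return c - a * np.power(t, -b)

def predict_final_accuracy(accuracies, total_epochs):
    # Fit the power law to the observed partial curve (epochs 1..n),
    # then extrapolate to the last planned epoch.
    t = np.arange(1, len(accuracies) + 1, dtype=float)
    params, _ = curve_fit(pow_curve, t, np.asarray(accuracies),
                          p0=(1.0, 0.5, accuracies[-1]), maxfev=10000)
    return float(pow_curve(total_epochs, *params))

def should_terminate(accuracies, total_epochs, best_so_far):
    # Stop the run if the extrapolated final accuracy is not expected
    # to beat the best configuration evaluated so far.
    return predict_final_accuracy(accuracies, total_epochs) < best_so_far

# Hypothetical example: a run that is plateauing below the incumbent.
partial = [0.40, 0.55, 0.62, 0.66, 0.68]  # validation accuracy, epochs 1-5
print(should_terminate(partial, total_epochs=100, best_so_far=0.92))

A single point estimate like this can terminate promising runs prematurely; the paper instead maintains a posterior over curve parameters and stops a run only when the predicted probability of beating the best known performance drops below a small threshold.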
