
Large-Scale Deep Learning on the YFCC100M Dataset

Karl Ni, Roger Pearce, Kofi Boakye, Brian Van Essen, Damian Borth, Barry Chen, Eric Wang
Lawrence Livermore National Laboratory, Computational Engineering Division, 7000 East Avenue, Livermore, CA 94550
arXiv:1502.03409 [cs.LG], (11 Feb 2015)

@article{ni2015largescale,
   title={Large-Scale Deep Learning on the YFCC100M Dataset},
   author={Ni, Karl and Pearce, Roger and Boakye, Kofi and Van Essen, Brian and Borth, Damian and Chen, Barry and Wang, Eric},
   year={2015},
   month={feb},
   eprint={1502.03409},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}


We present a work-in-progress snapshot of learning with a 15 billion parameter deep learning network on HPC architectures applied to the largest publicly available natural image and video dataset released to date. Recent advancements in unsupervised deep neural networks suggest that scaling up such networks in both model and training dataset size can yield significant improvements in the learning of concepts at the highest layers. We train our three-layer deep neural network on the Yahoo! Flickr Creative Commons 100M dataset. The dataset comprises approximately 99.2 million images and 800,000 user-created videos from Yahoo's Flickr image and video sharing platform. Training of our network takes eight days on 98 GPU nodes at the High Performance Computing Center at Lawrence Livermore National Laboratory. Encouraging preliminary results and future research directions are presented and discussed.
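To make the idea of unsupervised, layer-wise training concrete, the sketch below shows a single autoencoder-style layer of the kind that could be stacked to build a three-layer unsupervised network. It is a minimal illustration only: the layer sizes, sigmoid nonlinearity, plain-SGD loop, and all names are assumptions for exposition, not the paper's 15-billion-parameter model or its distributed implementation across 98 GPU nodes.

# Minimal sketch of one unsupervised autoencoder layer (illustrative assumptions,
# not the authors' HPC implementation).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AutoencoderLayer:
    def __init__(self, n_in, n_hidden, lr=0.01):
        # Small random initialization; models at the paper's scale need far more care here.
        self.W_enc = rng.normal(0, 0.01, (n_in, n_hidden))
        self.W_dec = rng.normal(0, 0.01, (n_hidden, n_in))
        self.b_enc = np.zeros(n_hidden)
        self.b_dec = np.zeros(n_in)
        self.lr = lr

    def encode(self, x):
        # Hidden code used as input to the next stacked layer.
        return sigmoid(x @ self.W_enc + self.b_enc)

    def train_batch(self, x):
        # Forward pass: encode, then linearly reconstruct the input.
        h = self.encode(x)
        x_hat = h @ self.W_dec + self.b_dec
        err = x_hat - x
        loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))

        # Backpropagation of the squared reconstruction loss.
        n = x.shape[0]
        d_xhat = err / n
        dW_dec = h.T @ d_xhat
        db_dec = d_xhat.sum(axis=0)
        d_h = (d_xhat @ self.W_dec.T) * h * (1 - h)  # sigmoid derivative
        dW_enc = x.T @ d_h
        db_enc = d_h.sum(axis=0)

        # Plain SGD update.
        self.W_dec -= self.lr * dW_dec
        self.b_dec -= self.lr * db_dec
        self.W_enc -= self.lr * dW_enc
        self.b_enc -= self.lr * db_enc
        return loss

# Toy usage: random vectors stand in for image features. Stacking layers means
# training one layer, then feeding its codes to the next layer as input.
layer = AutoencoderLayer(n_in=256, n_hidden=64)
for step in range(100):
    batch = rng.random((32, 256))
    loss = layer.train_batch(batch)
print("final reconstruction loss:", loss)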

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
