
Effective Multi-Modal Retrieval based on Stacked Auto-Encoders

Wei Wang, Beng Chin Ooi, Xiaoyan Yang, Dongxiang Zhang, Yueting Zhuang
School of Computing, National University of Singapore, Singapore
2014

@article{wang2014effective,
   title={Effective Multi-Modal Retrieval based on Stacked Auto-Encoders},
   author={Wang, Wei and Ooi, Beng Chin and Yang, Xiaoyan and Zhang, Dongxiang and Zhuang, Yueting},
   year={2014}
}

Download (PDF)   View   Source   Source codes


Multi-modal retrieval is emerging as a new search paradigm that enables seamless information retrieval across various types of media. For example, users can simply snap a movie poster to search for relevant reviews and trailers. To support such queries, a set of mapping functions is learned to project high-dimensional features extracted from data of different media types into a common low-dimensional space, so that metric distance measures can be applied. In this paper, we propose an effective mapping mechanism based on deep learning (i.e., stacked auto-encoders) for multi-modal retrieval. Mapping functions are learned by optimizing a new objective function, which effectively captures both intra-modal and inter-modal semantic relationships of data from heterogeneous sources. Compared with previous works, which require substantial prior knowledge such as intra-modal similarity matrices and ranking examples, our method requires little prior knowledge. Given a large training dataset, we split it into mini-batches and continually adjust the mapping functions for each batch of input. Hence, our method is memory-efficient with respect to the data volume. Experiments on three real datasets illustrate that our proposed method achieves significant improvement in search accuracy over the state-of-the-art methods.
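To make the idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: one stacked auto-encoder per modality encodes features into a shared latent space, and the training objective combines an intra-modal reconstruction term with an inter-modal term that pulls the latent codes of co-occurring image/text pairs together, trained batch by batch. The layer sizes, the trade-off weight `alpha`, and the synthetic batches are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StackedAutoEncoder(nn.Module):
    """One stacked auto-encoder per modality: the encoder maps a
    high-dimensional feature vector into the shared low-dimensional
    latent space; the decoder reconstructs the input, which drives
    the intra-modal (reconstruction) part of the objective."""

    def __init__(self, dims):
        super().__init__()
        # dims = [input_dim, hidden_dim, ..., latent_dim]; sizes are illustrative.
        enc, dec = [], []
        for d_in, d_out in zip(dims, dims[1:]):
            enc += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        for d_in, d_out in zip(reversed(dims[1:]), reversed(dims[:-1])):
            dec += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)          # latent code in the common space
        return z, self.decoder(z)    # code plus reconstruction

def multimodal_loss(img_sae, txt_sae, img, txt, alpha=0.5):
    """Intra-modal term: reconstruction error within each modality.
    Inter-modal term: distance between latent codes of paired items.
    `alpha` is an assumed trade-off weight, not a value from the paper."""
    z_i, rec_i = img_sae(img)
    z_t, rec_t = txt_sae(txt)
    intra = F.mse_loss(rec_i, img) + F.mse_loss(rec_t, txt)
    inter = F.mse_loss(z_i, z_t)     # pull co-occurring pairs together
    return intra + alpha * inter

# Hypothetical feature dimensions: 4096-d image features, 5000-d text features.
img_sae = StackedAutoEncoder([4096, 1024, 64])
txt_sae = StackedAutoEncoder([5000, 1024, 64])
opt = torch.optim.Adam(
    list(img_sae.parameters()) + list(txt_sae.parameters()), lr=1e-3)

# Synthetic stand-in for mini-batches of aligned image/text pairs;
# updating per batch keeps memory usage independent of the dataset size.
batches = [(torch.rand(32, 4096), torch.rand(32, 5000)) for _ in range(10)]
for img_batch, txt_batch in batches:
    opt.zero_grad()
    loss = multimodal_loss(img_sae, txt_sae, img_batch, txt_batch)
    loss.backward()
    opt.step()
```

At query time, under the setup the abstract describes, a query from one modality would be encoded with its own auto-encoder and matched against items of the other modality by metric distance in the shared space; the batch loop above mirrors the mini-batch training scheme the abstract credits for memory efficiency.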
