Effective Multi-Modal Retrieval based on Stacked Auto-Encoders
School of Computing, National University of Singapore, Singapore
National University of Singapore, 2014
@article{wang2014effective,
  title={Effective Multi-Modal Retrieval based on Stacked Auto-Encoders},
  author={Wang, Wei and Ooi, Beng Chin and Yang, Xiaoyan and Zhang, Dongxiang and Zhuang, Yueting},
  year={2014}
}
Multi-modal retrieval is emerging as a new search paradigm that enables seamless information retrieval across various types of media. For example, users can simply snap a movie poster to search for relevant reviews and trailers. To support this paradigm, a set of mapping functions is learned to project high-dimensional features extracted from data of different media types into a common low-dimensional space, where metric distance measures can be applied. In this paper, we propose an effective mapping mechanism based on deep learning (i.e., stacked auto-encoders) for multi-modal retrieval. The mapping functions are learned by optimizing a new objective function that effectively captures both intra-modal and inter-modal semantic relationships of data from heterogeneous sources. Compared with previous work, which requires a substantial amount of prior knowledge, such as intra-modal similarity matrices and ranking examples, our method requires little prior knowledge. Given a large training dataset, we split it into mini-batches and continually adjust the mapping functions over each batch of input; hence, our method is memory-efficient with respect to the data volume. Experiments on three real datasets show that our proposed method achieves significant improvements in search accuracy over state-of-the-art methods.
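To make the idea in the abstract concrete, below is a minimal conceptual sketch of the general technique it describes: one stacked auto-encoder per modality, trained on mini-batches with an objective combining an intra-modal term (reconstruction within each modality) and an inter-modal term (pulling related cross-modal pairs together in the shared latent space). This is not the authors' implementation; the framework (PyTorch), dimensions, layer counts, activations, and loss weighting are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalitySAE(nn.Module):
    """Stacked auto-encoder for one modality: the encoder maps raw
    high-dimensional features to a shared low-dimensional latent space;
    the decoder reconstructs the input from the latent code."""
    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, latent_dim), nn.Sigmoid(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# Hypothetical feature dimensions: 1000-d image features and 500-d text
# features, both projected into a shared 32-d latent space.
img_sae = ModalitySAE(1000, 256, 32)
txt_sae = ModalitySAE(500, 256, 32)
opt = torch.optim.Adam(
    list(img_sae.parameters()) + list(txt_sae.parameters()), lr=1e-3)

def train_step(img_batch, txt_batch):
    """One mini-batch update; img_batch[i] and txt_batch[i] are assumed
    to be a semantically related image/text pair."""
    z_img, rec_img = img_sae(img_batch)
    z_txt, rec_txt = txt_sae(txt_batch)
    # Intra-modal term: reconstruction error keeps each latent code
    # faithful to its own modality's structure.
    intra = F.mse_loss(rec_img, img_batch) + F.mse_loss(rec_txt, txt_batch)
    # Inter-modal term: related pairs should land near each other in
    # the common latent space.
    inter = F.mse_loss(z_img, z_txt)
    loss = intra + 0.5 * inter  # 0.5 is an assumed weight, not from the paper
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At query time, under the same assumptions, one would encode a poster's image features with `img_sae.encoder` and run a nearest-neighbor search against precomputed latent codes of the text collection, which is what makes simple metric distance measures applicable across modalities.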
February 14, 2014 by hgpu