TY - JOUR
T1 - Semi-supervised manifold-embedded hashing with joint feature representation and classifier learning
AU - Song, Tiecheng
AU - Cai, Jianfei
AU - Zhang, Tianqi
AU - Gao, Chenqiang
AU - Meng, Fanman
AU - Wu, Qingbo
PY - 2017/8
Y1 - 2017/8
N2 - Recently, learning-based hashing methods, which are designed to preserve semantic information, have shown promising results for approximate nearest neighbor (ANN) search problems. However, most of these methods require a large amount of labeled data, which is difficult to obtain in many real applications. With very limited labeled data available, in this paper we propose a semi-supervised hashing method that integrates manifold embedding, feature representation, and classifier learning into a joint framework. Specifically, a semi-supervised manifold embedding is explored to simultaneously optimize feature representation and classifier learning so that the learned binary codes are optimal for classification. A two-stage hashing strategy is proposed to effectively address the corresponding optimization problem. In the first stage, an iterative algorithm is designed to obtain a relaxed solution. In the second stage, the hashing function is refined by introducing an orthogonal transformation to reduce the quantization error. Extensive experiments on three benchmark databases demonstrate the effectiveness of the proposed method in comparison with several state-of-the-art hashing methods.
AB - Recently, learning-based hashing methods, which are designed to preserve semantic information, have shown promising results for approximate nearest neighbor (ANN) search problems. However, most of these methods require a large amount of labeled data, which is difficult to obtain in many real applications. With very limited labeled data available, in this paper we propose a semi-supervised hashing method that integrates manifold embedding, feature representation, and classifier learning into a joint framework. Specifically, a semi-supervised manifold embedding is explored to simultaneously optimize feature representation and classifier learning so that the learned binary codes are optimal for classification. A two-stage hashing strategy is proposed to effectively address the corresponding optimization problem. In the first stage, an iterative algorithm is designed to obtain a relaxed solution. In the second stage, the hashing function is refined by introducing an orthogonal transformation to reduce the quantization error. Extensive experiments on three benchmark databases demonstrate the effectiveness of the proposed method in comparison with several state-of-the-art hashing methods.
KW - Hashing
KW - Image retrieval
KW - Locality sensitive hashing (LSH)
KW - Manifold embedding
KW - Nearest neighbor search
UR - http://www.scopus.com/inward/record.url?scp=85017631899&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2017.03.004
DO - 10.1016/j.patcog.2017.03.004
M3 - Article
AN - SCOPUS:85017631899
SN - 0031-3203
VL - 68
SP - 99
EP - 110
JO - Pattern Recognition
JF - Pattern Recognition
ER -