Cross-modal retrieval with reconstruct hashing
Author(s)
Yan, C
Bai, X
Zhou, J
Editor(s)
Bai, X
Hancock, ER
Ho, TK
Wilson, RC
Biggio, B
Robles-Kelly, A
Date
2018
Location
Beijing, China
Abstract
Hashing has been widely used in large-scale vision problems thanks to its efficiency in both storage and speed. For the fast cross-modal retrieval task, cross-modal hashing (CMH) has recently received increasing attention for its ability to improve the quality of hash coding by exploiting the semantic correlation across different modalities. Most traditional CMH methods focus on designing a good hash function that uses supervised information appropriately, but their performance is limited by hand-crafted features. Some deep-learning-based CMH methods instead learn good features with deep networks; however, directly quantizing these features may cause a large loss for hashing. In this paper, we propose a novel end-to-end deep cross-modal hashing framework that integrates feature learning and hash-code learning into the same network. We preserve the relationship between features across modalities. For the hashing process, we design a novel network structure and loss for hash learning, and we reconstruct the features from the hash codes to improve the quality of the codes. Experiments on standard cross-modal retrieval databases show that the proposed method yields substantial improvements over recent state-of-the-art hashing methods.
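To make the abstract's idea concrete, the sketch below shows one plausible way such a framework could be wired up in PyTorch: two modality encoders map image and text features into a shared relaxed hash space, decoders reconstruct the input features from the codes, and the loss combines cross-modal similarity, reconstruction, and quantization terms. This is a minimal illustration under stated assumptions, not the authors' published architecture; all names (CrossModalHashNet, hashing_loss), layer sizes, feature dimensions, and loss weights are hypothetical.

```python
# Minimal sketch of a reconstruction-regularized cross-modal hashing network.
# All names, layer sizes, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalHashNet(nn.Module):
    def __init__(self, img_dim=4096, txt_dim=1386, code_len=64):
        super().__init__()
        # Modality-specific encoders; tanh keeps continuous codes in (-1, 1).
        self.img_enc = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(),
                                     nn.Linear(512, code_len), nn.Tanh())
        self.txt_enc = nn.Sequential(nn.Linear(txt_dim, 512), nn.ReLU(),
                                     nn.Linear(512, code_len), nn.Tanh())
        # Decoders reconstruct each modality's features from its code, so the
        # codes are forced to retain enough information to rebuild the input.
        self.img_dec = nn.Sequential(nn.Linear(code_len, 512), nn.ReLU(),
                                     nn.Linear(512, img_dim))
        self.txt_dec = nn.Sequential(nn.Linear(code_len, 512), nn.ReLU(),
                                     nn.Linear(512, txt_dim))

    def forward(self, img, txt):
        h_img, h_txt = self.img_enc(img), self.txt_enc(txt)
        return h_img, h_txt, self.img_dec(h_img), self.txt_dec(h_txt)

def hashing_loss(h_img, h_txt, rec_img, rec_txt, img, txt, sim,
                 alpha=1.0, beta=0.1):
    """sim[i, j] = 1.0 if image i and text j share a label, else 0.0."""
    # Cross-modal similarity: inner products of codes should agree with sim
    # (a negative log-likelihood term, as in likelihood-based CMH methods).
    logits = h_img @ h_txt.t() / 2
    l_sim = F.binary_cross_entropy_with_logits(logits, sim)
    # Reconstruction: rebuilding features from codes improves code quality.
    l_rec = F.mse_loss(rec_img, img) + F.mse_loss(rec_txt, txt)
    # Quantization: push continuous codes toward binary values in {-1, +1}.
    l_q = ((h_img.abs() - 1) ** 2).mean() + ((h_txt.abs() - 1) ** 2).mean()
    return l_sim + alpha * l_rec + beta * l_q

# Toy usage with random features; real inputs would be CNN image features
# and bag-of-words text vectors. At retrieval time the continuous codes are
# binarized, e.g. codes = torch.sign(net.img_enc(img)).
net = CrossModalHashNet()
img, txt = torch.randn(8, 4096), torch.randn(8, 1386)
sim = (torch.rand(8, 8) > 0.5).float()
loss = hashing_loss(*net(img, txt), img, txt, sim)
loss.backward()
```

In this reading, the reconstruction branch acts as a regularizer: because the decoders must rebuild the original features from the codes alone, the quantization step discards less information than directly binarizing deep features would.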
Conference Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume
11004 LNCS
Subject
Information and computing sciences