dc.contributor.author	Yan, C
dc.contributor.author	Pang, G
dc.contributor.author	Bai, X
dc.contributor.author	Shen, C
dc.contributor.author	Zhou, J
dc.contributor.author	Hancock, E
dc.date.accessioned	2020-04-02T02:20:51Z
dc.date.available	2020-04-02T02:20:51Z
dc.date.issued	2019
dc.identifier.isbn	9781450368896
dc.identifier.doi	10.1145/3343031.3350927
dc.identifier.uri	http://hdl.handle.net/10072/392923
dc.description.abstract	This paper tackles a rarely explored but critical problem within learning to hash: learning hash codes that effectively discriminate hard similar and dissimilar examples, to empower large-scale image retrieval. Hard similar examples are image pairs from the same semantic class that share some overall appearance but differ in fine-grained appearance. Hard dissimilar examples are image pairs from different semantic classes that nevertheless exhibit similar appearance. These hard examples generally lie at a small distance from each other because of the shared appearance. Effectively encoding the hard examples therefore discriminates the relevant images within a small Hamming distance, enabling more accurate retrieval among the top-ranked returned images. However, most existing hashing methods cannot capture this key information because their optimization is dominated by easy examples, i.e., distant similar/dissimilar pairs that share little or no appearance. To address this problem, we introduce a novel Gamma distribution-enabled, symmetric Kullback-Leibler divergence-based loss, dubbed dual hinge loss because it works similarly to imposing two smoothed hinge losses on the similar and dissimilar pairs, respectively. Specifically, the loss enforces an exponentially varying penalization on hard similar (dissimilar) examples to emphasize and learn their fine-grained differences. Meanwhile, it imposes a bounded penalization on easy similar (dissimilar) examples to prevent them from dominating the optimization while preserving the high-level similarity (dissimilarity). This enables our model to encode the key information carried by both easy and hard examples. Extensive empirical results on three widely used image retrieval datasets show that (i) our method consistently and substantially outperforms state-of-the-art competing methods when using hash codes of the same length, and (ii) it can use significantly (e.g., 50%-75%) shorter hash codes while performing substantially better than, or comparably to, the competing methods.
dc.description.peerreviewed	Yes
dc.language	English
dc.publisher	Association for Computing Machinery (ACM)
dc.relation.ispartofconferencename	27th ACM International Conference on Multimedia (MM '19)
dc.relation.ispartofconferencetitle	MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
dc.relation.ispartofdatefrom	2019-10-21
dc.relation.ispartofdateto	2019-10-25
dc.relation.ispartoflocation	Nice, France
dc.relation.ispartofpagefrom	1535
dc.relation.ispartofpageto	1542
dc.subject.fieldofresearch	Artificial Intelligence and Image Processing
dc.subject.fieldofresearchcode	0801
dc.subject.keywords	Science & Technology
dc.subject.keywords	Computer Science, Interdisciplinary Applications
dc.subject.keywords	Computer Science, Theory & Methods
dc.title	Deep hashing by discriminating hard examples
dc.type	Conference output
dc.type.description	E1 - Conferences
dcterms.bibliographicCitation	Yan, C; Pang, G; Bai, X; Shen, C; Zhou, J; Hancock, E, Deep hashing by discriminating hard examples, MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp. 1535-1542
dc.date.updated	2020-04-02T02:15:58Z
gro.hasfulltext	No Full Text
gro.griffith.author	Zhou, Jun
gro.griffith.author	Yan, Cheng
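
Note: the abstract above describes the qualitative behaviour of the proposed loss, namely penalties that grow exponentially on hard similar/dissimilar pairs and stay bounded on easy ones. The following PyTorch sketch is a rough illustration of that behaviour only; it does not reproduce the paper's actual Gamma-distribution and symmetric-KL formulation, and the names dual_hinge_like_loss, margin, and alpha are illustrative assumptions, not the authors' notation.

import torch

def dual_hinge_like_loss(h1, h2, similar, margin=2.0, alpha=1.0):
    """Hypothetical pairwise loss mimicking the described behaviour.

    h1, h2: (batch, bits) relaxed hash codes in [-1, 1].
    similar: (batch,) 1.0 for same-class pairs, 0.0 otherwise.
    """
    bits = h1.size(1)
    # Distance surrogate: for codes in {-1, +1}^bits this equals the
    # Hamming distance; for tanh-relaxed codes it is a smooth proxy.
    dist = 0.5 * (bits - (h1 * h2).sum(dim=1))

    # Similar pairs: penalty grows exponentially with distance, so hard
    # positives (same class, large distance) dominate the gradient.
    sim_loss = torch.expm1(alpha * dist / bits)

    # Dissimilar pairs: exponential penalty as distance falls below the
    # margin (hard negatives); zero beyond it, so easy negatives stay
    # bounded and cannot dominate the optimization.
    gap = torch.clamp(margin - dist, min=0.0)
    dis_loss = torch.expm1(alpha * gap / bits)

    return (similar * sim_loss + (1.0 - similar) * dis_loss).mean()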


Files in this item

There are no files associated with this item.

This item appears in the following Collection(s)

  • Conference outputs
    Contains papers delivered by Griffith authors at national and international conferences.
