SPARE: Self-supervised part erasing for ultra-fine-grained visual categorization
Author(s)
Yu, Xiaohan
Zhao, Yang
Gao, Yongsheng
Date
2022
Abstract
This paper presents SPARE, a self-supervised part erasing framework for ultra-fine-grained visual categorization. The key insight of our model is to learn discriminative representations through a self-supervised module that randomly erases parts and predicts the contextual position of the erased parts. This drives the network to exploit the intrinsic structure of the data, i.e., to understand and recognize the contextual information of the objects, thus facilitating more discriminative part-level representations. It also enhances the learning capability of the model by introducing more diverse, semantically meaningful training part segments. We demonstrate that our approach achieves strong performance on seven publicly available datasets covering both ultra-fine-grained and fine-grained visual categorization tasks.
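The pretext task described in the abstract can be pictured with a small sketch. The following is a minimal, hypothetical PyTorch illustration of the idea, not the authors' implementation: one grid cell ("part") of each image is randomly erased, and an auxiliary head is trained to predict the erased cell's position alongside the usual classification head. All names here (erase_random_part, SPARENet, grid_size, alpha) are illustrative assumptions.

```python
# A minimal, hypothetical sketch of the self-supervised part-erasing pretext
# task described in the abstract, NOT the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def erase_random_part(images, grid_size=4):
    """Zero out one random grid cell ("part") per image and return the flat
    index of the erased cell, which serves as the self-supervision target."""
    b, _, h, w = images.shape
    ph, pw = h // grid_size, w // grid_size
    targets = torch.randint(0, grid_size * grid_size, (b,), device=images.device)
    erased = images.clone()
    for i, t in enumerate(targets.tolist()):
        r, c = divmod(t, grid_size)
        erased[i, :, r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = 0.0
    return erased, targets


class SPARENet(nn.Module):
    """A backbone with two heads: category prediction and prediction of the
    contextual position of the erased part."""

    def __init__(self, backbone, feat_dim, num_classes, grid_size=4):
        super().__init__()
        self.backbone = backbone  # any feature extractor mapping images to (B, feat_dim)
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.pos_head = nn.Linear(feat_dim, grid_size * grid_size)
        self.grid_size = grid_size

    def forward(self, images):
        erased, pos_targets = erase_random_part(images, self.grid_size)
        feats = self.backbone(erased)
        return self.cls_head(feats), self.pos_head(feats), pos_targets


def spare_loss(cls_logits, pos_logits, labels, pos_targets, alpha=1.0):
    # Joint objective: supervised classification plus the self-supervised
    # erased-position prediction, weighted by alpha (an assumed hyperparameter).
    return F.cross_entropy(cls_logits, labels) + alpha * F.cross_entropy(pos_logits, pos_targets)
```

In this sketch, the position-prediction loss is what pushes the network to encode contextual information, while the erased, semantically meaningful part segments act as the diversified training inputs mentioned above.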
Journal Title
Pattern Recognition
Volume
128
Subject
Computer vision and multimedia computation
Data management and data science
Machine learning
Science & Technology
Technology
Computer Science, Artificial Intelligence
Engineering, Electrical & Electronic
Computer Science
Citation
Yu, X; Zhao, Y; Gao, Y, SPARE: Self-supervised part erasing for ultra-fine-grained visual categorization, Pattern Recognition, 2022, 128, 108691