Information bottleneck and selective noise supervision for zero-shot learning

Author(s)
Zhou, Lei
Liu, Yang
Zhang, Pengcheng
Bai, Xiao
Gu, Lin
Zhou, Jun
Yao, Yazhou
Harada, Tatsuya
Zheng, Jin
Hancock, Edwin
Date
2022
Abstract

Zero-shot learning (ZSL) aims to recognize novel classes by transferring semantic knowledge from seen classes to unseen classes. Many ZSL methods rely on a direct mapping between the visual and semantic spaces, but calibration deviation and the hubness problem limit their generalization to unseen classes. Recently emerged generative ZSL methods synthesize image features for unseen classes, transforming ZSL into a supervised classification problem. However, most generative models still suffer from the seen-unseen bias problem because only seen data is used for training. To address these issues, we propose a novel bidirectional-embedding-based generative model with a tight visual-semantic coupling constraint. We learn a unified latent space that calibrates the embedded parametric distributions of both the visual and semantic spaces. Since the embedding of high-dimensional visual features carries considerable non-semantic information, the visual-semantic alignment in the latent space would inevitably deviate. We therefore introduce an information bottleneck constraint to ZSL, for the first time, to preserve essential attribute information during the mapping. Specifically, we use uncertainty estimation and a wake-sleep procedure to alleviate feature noise and improve the model's abstraction capability. In addition, our method extends easily to the transductive ZSL setting by generating labels for unseen images; since these generated labels are noisy, we introduce a robust self-training loss to address the label-noise problem. Extensive experimental results show that our method outperforms state-of-the-art methods in different ZSL settings on most benchmark datasets.
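
The information bottleneck constraint described in the abstract can be pictured with a small variational sketch: the visual feature is encoded into a latent Gaussian, a KL penalty toward a simple prior discards non-semantic information, and the surviving code is aligned with the class attribute embedding. The PyTorch snippet below is an assumed, minimal illustration of this idea only; the module names, dimensions, loss weight, and the MSE alignment term are illustrative choices, not the authors' implementation.

    # Minimal sketch (not the authors' code): a variational information-bottleneck
    # style encoder that maps visual features to a latent Gaussian and penalises
    # the KL divergence to a standard-normal prior, so that only attribute-relevant
    # information survives the mapping.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class IBEncoder(nn.Module):
        def __init__(self, feat_dim=2048, latent_dim=64):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU())
            self.mu = nn.Linear(512, latent_dim)       # posterior mean
            self.logvar = nn.Linear(512, latent_dim)   # posterior log-variance

        def forward(self, x):
            h = self.backbone(x)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterisation trick: sample the latent code
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            # KL(q(z|x) || N(0, I)) -- the bottleneck term that discards
            # non-semantic information carried by the visual features
            kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
            return z, kl

    # Usage: align the bottlenecked latent with the class attribute embedding.
    enc = IBEncoder()
    visual = torch.randn(8, 2048)       # a batch of image features
    attributes = torch.randn(8, 64)     # matching semantic/attribute embeddings
    z, kl = enc(visual)
    loss = F.mse_loss(z, attributes) + 0.01 * kl   # alignment + bottleneck weight (beta)

In this toy form, raising the bottleneck weight trades reconstruction fidelity for compression, which is the mechanism the paper relies on to keep the latent space attribute-centric.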

Journal Title

Machine Learning

Note

This publication has been entered in Griffith Research Online as an advance online version.

Subject

Image processing

Pattern recognition

Science & Technology

Computer Science, Artificial Intelligence

Zero-shot learning

Citation

Zhou, L; Liu, Y; Zhang, P; Bai, X; Gu, L; Zhou, J; Yao, Y; Harada, T; Zheng, J; Hancock, E, Information bottleneck and selective noise supervision for zero-shot learning, Machine Learning, 2022
