Self-Supervised Lie Algebra Representation Learning via Optimal Canonical Metric
File version
Author(s)
Yu, Xiaohan
Pan, Zicheng
Zhao, Yang
Gao, Yongsheng
Griffith University Author(s)
Primary Supervisor
Other Supervisors
Editor(s)
Date
Size
File type(s)
Location
License
Abstract
Learning discriminative representations from limited training samples is emerging as an important yet challenging visual categorization task. While prior work has shown that incorporating self-supervised learning can improve performance, we find that directly using a canonical metric on a Lie group is theoretically incorrect. In this article, we prove that a valid optimization measure should be a canonical metric on the Lie algebra. Based on this theoretical finding, the article introduces a novel self-supervised Lie algebra network (SLA-Net) representation learning framework. By minimizing the canonical metric distance between the target and predicted Lie algebra representations within a computationally convenient vector space, SLA-Net avoids computing a nontrivial geodesic (locally length-minimizing curve) metric on a manifold (curved space). By simultaneously optimizing a single set of parameters shared between self-supervised learning and supervised classification, the proposed SLA-Net gains improved generalization capability. Comprehensive evaluation results on eight public datasets demonstrate the effectiveness of SLA-Net for visual categorization with limited samples.
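The computational point in the abstract — that a distance in the flat Lie algebra (a vector space) can replace a geodesic distance on the curved group manifold — can be illustrated with a minimal sketch. The code below is not the authors' SLA-Net implementation; it is an illustrative example on SO(3) using NumPy/SciPy, and all function names in it are assumptions introduced for this sketch.

```python
# Minimal sketch (illustrative only, not the authors' SLA-Net code):
# comparing rotations via the canonical Euclidean metric in the Lie
# algebra so(3) instead of the geodesic metric on the SO(3) manifold.
import numpy as np
from scipy.linalg import expm, logm

def skew(v):
    """Skew-symmetric matrix [v]_x in so(3) for a 3-vector v."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def log_map(R):
    """Map R in SO(3) to its Lie algebra coordinates (a 3-vector)."""
    W = logm(R).real  # principal matrix log; skew-symmetric for rotations
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def geodesic_distance(R1, R2):
    """Geodesic distance on the SO(3) manifold: ||log(R1^T R2)||."""
    return np.linalg.norm(log_map(R1.T @ R2))

def lie_algebra_distance(R1, R2):
    """Canonical metric distance in the vector space so(3): a plain
    Euclidean distance, cheap to compute and to differentiate."""
    return np.linalg.norm(log_map(R1) - log_map(R2))

rng = np.random.default_rng(0)
R1 = expm(skew(rng.normal(scale=0.1, size=3)))  # small random rotation
R2 = expm(skew(rng.normal(scale=0.1, size=3)))
print(f"geodesic:    {geodesic_distance(R1, R2):.6f}")
print(f"Lie algebra: {lie_algebra_distance(R1, R2):.6f}")
```

In this sketch the Lie algebra distance requires no matrix product or logarithm of relative rotations at comparison time, which is the computational convenience the abstract refers to when it contrasts the vector space with the manifold.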
Journal Title
IEEE Transactions on Neural Networks and Learning Systems
Conference Title
Book Title
Edition
Volume
Issue
Thesis Type
Degree Program
School
Publisher link
Patent number
Funder(s)
Grant identifier(s)
Rights Statement
Item Access Status
Note
This publication has been entered in Griffith Research Online as an advanced online version.
Access the data
Related item(s)
Subject
Computer vision
Machine learning
Persistent link to this record
Citation
Yu, X; Pan, Z; Zhao, Y; Gao, Y, Self-Supervised Lie Algebra Representation Learning via Optimal Canonical Metric, IEEE Transactions on Neural Networks and Learning Systems, 2024