A Bottom-Up Capsule Network for Hierarchical Image Classification
Author(s)
Noor, KT
Robles-Kelly, A
Zhang, LY
Bouadjenek, MR
Date
2023
Location
Port Macquarie, Australia
Abstract
Hierarchical image classification is a challenging task in deep learning and computer vision: it requires assigning an image multiple class labels that follow a taxonomy, i.e., a data hierarchy. This paper introduces a bottom-up hierarchical capsule network (BUH-CapsNet) designed to address hierarchical multi-label classification. BUH-CapsNet builds a tree-like structure that mirrors the data hierarchy, enabling the network to learn complex relationships in the taxonomy by balancing the hierarchical levels and following a fine-to-coarse paradigm, which leads to more accurate classification results. Furthermore, the bottom-up architecture of BUH-CapsNet enforces hierarchical consistency by exploiting the hierarchical structure of the datasets. We train BUH-CapsNet with hierarchical level weights that keep the levels balanced. Experiments on six widely available datasets show that BUH-CapsNet outperforms existing multi-label classification methods and handles hierarchical labels more effectively.
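The abstract's two central ideas, deriving coarse-level predictions bottom-up from fine-level ones so that hierarchical consistency holds by construction, and weighting the training loss per hierarchy level, can be illustrated with a minimal PyTorch sketch. The function names, weight values, and toy taxonomy below are our own assumptions for illustration, not the paper's implementation.

import torch
import torch.nn.functional as F

def coarse_from_fine(fine_logits, parent_of, num_coarse):
    # Sum each fine class's probability into its coarse parent, so the
    # coarse prediction agrees with the fine one by construction
    # (one way to realise the bottom-up, fine-to-coarse idea).
    fine_probs = fine_logits.softmax(dim=-1)                    # (batch, n_fine)
    child_to_parent = F.one_hot(parent_of, num_coarse).float()  # (n_fine, n_coarse)
    return fine_probs @ child_to_parent

def level_weighted_loss(fine_logits, coarse_probs, y_fine, y_coarse,
                        w_fine=0.5, w_coarse=0.5):
    # Per-level weights keep the hierarchy levels balanced during training;
    # the values here are illustrative, not the paper's weighting scheme.
    loss_fine = F.cross_entropy(fine_logits, y_fine)
    loss_coarse = F.nll_loss(coarse_probs.clamp_min(1e-9).log(), y_coarse)
    return w_fine * loss_fine + w_coarse * loss_coarse

# Toy two-level taxonomy: six fine classes under two coarse parents.
parent_of = torch.tensor([0, 0, 0, 1, 1, 1])
fine_logits = torch.randn(4, 6, requires_grad=True)  # stand-in for capsule output
y_fine = torch.tensor([0, 2, 4, 5])
y_coarse = parent_of[y_fine]

coarse_probs = coarse_from_fine(fine_logits, parent_of, num_coarse=2)
loss = level_weighted_loss(fine_logits, coarse_probs, y_fine, y_coarse)
loss.backward()

Because the coarse probabilities are computed from the fine ones rather than predicted independently, the two levels can never disagree, which is the sense in which a bottom-up design enforces hierarchical consistency.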
Conference Title
2023 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
Subject
Computer vision and multimedia computation
Machine learning
Citation
Noor, KT; Robles-Kelly, A; Zhang, LY; Bouadjenek, MR, A Bottom-Up Capsule Network for Hierarchical Image Classification, 2023 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2023, pp. 325-331