Context-Adaptive Deep Learning for Efficient Image Parsing in Remote Sensing: An Automated Parameter Selection Approach
Author(s)
Azam, B
Verma, B
Zhang, M
Griffith University Author(s)
Location
Mexico City, Mexico
Abstract
Image parsing is among the core tasks in image processing and computer vision, with wide-ranging applications in autonomous driving, image interpretation, medical analysis, and remote sensing. Although modern techniques perform labeling tasks accurately, they still face several challenges; among these, the computation of contextual information and the selection of optimized parameters are of prime importance in pixel-wise segmentation. We propose a novel context-adaptive image parsing framework that uses a unique parameter selection strategy to produce the final pixel labels. The automatic parameter selection minimizes computational overhead, reduces time complexity, and improves the quality of the segmentation labels produced. The proposed framework is evaluated on the Wuhan Dense Labeling Dataset (WHDLD), and a comprehensive comparison with state-of-the-art image segmentation techniques is presented. Finally, an analysis supporting the superiority of the proposed architecture over existing techniques is presented.
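As a rough, generic illustration of validation-driven parameter selection for pixel-wise segmentation (a sketch under stated assumptions, not the framework described in the abstract), the Python example below picks the candidate value of a single tunable parameter that maximizes mean intersection-over-union (mIoU) on a held-out split. The placeholder segmenter, candidate values, and all function names are illustrative assumptions rather than identifiers from the paper.

# Minimal sketch (not the authors' method): automated selection of a
# segmentation hyper-parameter by scoring candidate values on a small
# validation split with mean intersection-over-union (mIoU).
# All names below (segment_image, select_parameter, 'scale', etc.) are
# illustrative assumptions, not identifiers from the paper.

import numpy as np

NUM_CLASSES = 6  # WHDLD defines six land-cover classes


def mean_iou(pred, target, num_classes=NUM_CLASSES):
    """Mean IoU over the classes present in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious)) if ious else 0.0


def segment_image(image, scale):
    """Placeholder segmenter: thresholds mean intensity into class bins.

    A real context-adaptive parser would go here; 'scale' stands in for
    whatever tunable parameter controls the contextual computation.
    """
    bins = np.linspace(0.0, 1.0, num=max(2, int(scale)))
    return np.digitize(image.mean(axis=-1), bins) % NUM_CLASSES


def select_parameter(val_images, val_labels, candidates):
    """Pick the candidate parameter with the best validation mIoU."""
    scores = {
        s: np.mean([mean_iou(segment_image(img, s), lab)
                    for img, lab in zip(val_images, val_labels)])
        for s in candidates
    }
    best = max(scores, key=scores.get)
    return best, scores


if __name__ == "__main__":
    # Toy RGB tiles and random labels stand in for a WHDLD validation split.
    rng = np.random.default_rng(0)
    images = [rng.random((64, 64, 3)) for _ in range(4)]
    labels = [rng.integers(0, NUM_CLASSES, (64, 64)) for _ in range(4)]
    best, scores = select_parameter(images, labels, candidates=[2, 4, 8, 16])
    print("selected parameter:", best, "scores:", scores)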
Conference Title
2023 IEEE Symposium Series on Computational Intelligence (SSCI)
Subject
Deep learning
Computational imaging
Citation
Azam, B; Verma, B; Zhang, M, Context-Adaptive Deep Learning for Efficient Image Parsing in Remote Sensing: An Automated Parameter Selection Approach, 2023 IEEE Symposium Series on Computational Intelligence (SSCI), 2023, pp. 959-964