Recurrent convolutional neural networks for 3D mandible segmentation in computed tomography

File version

Version of Record (VoR)

Author(s)
Qiu, Bingjiang
Guo, Jiapan
Kraeima, Joep
Glas, Haye Hendrik
Zhang, Weichuan
Borra, Ronald JH
Witjes, Max Johannes Hendrikus
van Ooijen, Peter MA
Date
2021
Abstract

Purpose: Classic encoder-decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance the condyles and coronoids, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that can accurately segment these detailed anatomical structures.

Methods: Unlike classic EDCNNs, which must slice or crop the whole CT scan into 2D slices or 3D patches during segmentation, our proposed approach performs mandible segmentation on complete 3D CT scans. The proposed method, RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, enabling recurrent connections between adjacent nodes so that their connectivity is retained. Each node then functions as a classic EDCNN that segments a single slice of the CT scan. Our approach can perform 3D mandible segmentation on sequential data of arbitrary length and does not incur a high computational cost. RCNNSeg was evaluated on 109 head-and-neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. Accuracy was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation.

Results: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared to state-of-the-art approaches on the PDDCA dataset. RCNNSeg generated the most accurate segmentations, with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on the 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset.

Conclusions: The proposed RCNNSeg generated more accurate automated segmentations than the other classic EDCNN segmentation techniques in terms of both quantitative and qualitative evaluation. By learning spatially structured information, RCNNSeg has potential for automatic mandible segmentation.
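The core idea described above — unrolling a recurrence along the slice axis so each per-slice segmenter receives context from its neighbor — can be sketched in a toy form. This is a minimal illustrative sketch, not the paper's network: `edcnn_node` stands in for a trained EDCNN, and the mixing weights and threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def edcnn_node(slice_2d, h_prev):
    """Stand-in for one EDCNN node (hypothetical toy logic, not the
    paper's trained network). It blends the current slice with the
    hidden state carried over from the previous slice, so adjacent
    slices share information -- the recurrent connection that preserves
    anatomical connectivity across the volume."""
    h = 0.5 * slice_2d + 0.5 * h_prev        # propagate inter-slice context
    mask = (h > 0.5).astype(np.uint8)        # per-slice binary segmentation
    return mask, h

def rcnnseg_sketch(volume):
    """Segment a full 3D volume slice by slice, threading a hidden
    state through the sequence like an RNN unrolled along the z-axis.
    Works on sequences of any length, since the loop just iterates
    over however many slices the scan contains."""
    h = np.zeros_like(volume[0], dtype=float)
    masks = []
    for slice_2d in volume:
        mask, h = edcnn_node(slice_2d, h)
        masks.append(mask)
    return np.stack(masks)

volume = rng.random((5, 4, 4))               # toy "CT scan": 5 slices of 4x4
seg = rcnnseg_sketch(volume)
print(seg.shape)                             # same slice count as the input
```

Because the recurrence only ever holds one hidden state per step, memory use is independent of the number of slices, which matches the paper's point that whole scans of varied length can be processed without a large computational cost.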
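Of the three evaluation metrics named in the abstract, the Dice similarity coefficient is the simplest to state concretely: the overlap between the automated mask and the reference standard, 2|A∩B| / (|A| + |B|). A small NumPy sketch (the arrays here are made-up examples, not data from the paper):

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|). Returns 1.0 when both masks
    are empty (perfect agreement on 'nothing to segment')."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
ref = np.array([[1, 0, 0],
                [0, 1, 1]])
print(dice(pred, ref))  # 2*2 / (3+3) = 0.666...
```

ASD and 95HD are surface-distance metrics rather than overlap metrics; they require extracting boundary voxels and computing point-to-surface distances, which is why they are typically taken from an image-analysis library rather than written inline.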

Journal Title

Journal of Personalized Medicine

Volume

11

Issue

6

Rights Statement

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Subject

Neural networks

Nanotechnology

Biomedical imaging

Computational imaging

Medical biochemistry and metabolomics

Medical biotechnology

Pharmacology and pharmaceutical sciences

Science & Technology

Life Sciences & Biomedicine

Health Care Sciences & Services

Medicine, General & Internal

General & Internal Medicine

accurate mandible segmentation

oral and maxillofacial surgery

convolutional neural network

3D virtual surgical planning (3D VSP)

Citation

Qiu, B; Guo, J; Kraeima, J; Glas, HH; Zhang, W; Borra, RJH; Witjes, MJH; van Ooijen, PMA, Recurrent convolutional neural networks for 3D mandible segmentation in computed tomography, Journal of Personalized Medicine, 2021, 11 (6), 492
