Integration of Patch Features Through Self-supervised Learning and Transformer for Survival Analysis on Whole Slide Images
Author(s)
Huang, Z
Chai, H
Wang, R
Wang, H
Yang, Y
Wu, H
Year published
2021
Abstract
Survival prediction using whole slide images (WSIs) can provide guidance for better treatment of diseases and patient care. Previous methods usually extract and process only image features from patches of WSIs, ignoring both the spatial information of the patches and the correlations between them. Furthermore, those methods extract patch features with models pre-trained on ImageNet, overlooking the large gap between WSIs and natural images. We therefore propose a new method, SeTranSurv, for survival prediction. SeTranSurv extracts patch features from WSIs through self-supervised learning and uses a Transformer to adaptively aggregate these features according to their spatial information and the correlations between patches. Experiments on three large cancer datasets demonstrate the effectiveness of our model. More importantly, SeTranSurv offers better interpretability in locating the patterns and features that contribute to accurate cancer survival prediction.
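The record contains no code, but the aggregation idea in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: it assumes sinusoidal positional encodings of each patch's (row, column) grid location, a single self-attention step with identity projections in place of a full Transformer encoder, and a hypothetical linear risk head. All function names (`slide_risk`, `pos_encode_2d`, etc.) are invented for this sketch.

```python
import numpy as np

def sinusoidal_pos(pos, dim):
    """Standard sinusoidal encoding of one 1-D coordinate (dim must be even)."""
    i = np.arange(dim // 2)
    freq = 1.0 / (10000.0 ** (2.0 * i / dim))
    ang = pos * freq
    return np.concatenate([np.sin(ang), np.cos(ang)])  # (dim,)

def pos_encode_2d(coords, dim):
    """Encode (row, col) patch grid locations; assumes dim divisible by 4."""
    half = dim // 2
    return np.stack([
        np.concatenate([sinusoidal_pos(r, half), sinusoidal_pos(c, half)])
        for r, c in coords
    ])  # (N, dim)

def self_attention(x):
    """Single-head scaled dot-product attention, identity Q/K/V projections."""
    scores = x @ x.T / np.sqrt(x.shape[1])               # (N, N) patch affinities
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = scores / scores.sum(axis=1, keepdims=True)  # rows sum to 1
    return weights @ x                                    # (N, dim) context features

def slide_risk(patch_feats, coords, w):
    """Add spatial encodings, attend across patches, pool, and score risk."""
    x = patch_feats + pos_encode_2d(coords, patch_feats.shape[1])
    x = self_attention(x)
    return float(x.mean(axis=0) @ w)  # mean-pool to a slide embedding, linear head
```

In SeTranSurv the patch features would come from a self-supervised encoder rather than ImageNet pre-training, and the attention weights give the interpretability the abstract mentions: they indicate which patches drive the slide-level prediction.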
Conference Title
Lecture Notes in Computer Science
Volume
12908
Subject
Computer vision and multimedia computation