AEHRC CSIRO at ImageCLEFmed caption 2021
File version
Version of Record (VoR)
Author(s)
Dowling, J
Koopman, B
Griffith University Author(s)
Editor(s)
Date
Size
File type(s)
Location
Virtual
Abstract
We describe our participation in the ImageCLEFmed Caption task of 2021. The task required participants to automatically compose coherent captions for a set of medical images. To this end, we employed a sequence-to-sequence model for caption generation, where its encoder and decoder were initialised with pre-trained Transformer checkpoints. In addition, we investigated the use of Self-Critical Sequence Training (SCST) (which offered a marginal improvement) and pre-training on five external medical image datasets. Overall, our approach was kept intentionally general so that it might be applied to tasks other than medical image captioning. AEHRC CSIRO placed third amongst the participating teams in terms of BLEU score, 0.078 lower than that of the first-placed participant. Our best-performing submission had the simplest configuration: it did not use SCST or pre-training on any of the external datasets. An overview of ImageCLEFmed Caption 2021 is available at: https://www.imageclef.org/2021/medical/caption.
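The warm-starting strategy described in the abstract (initialising the encoder and decoder of a sequence-to-sequence captioning model from pre-trained Transformer checkpoints) can be illustrated as follows. This is a minimal sketch assuming the Hugging Face transformers library and illustrative checkpoint names; it is not the authors' actual implementation, checkpoints, or training setup.

```python
# Minimal sketch: warm-start an encoder-decoder captioning model from
# pre-trained Transformer checkpoints (illustrative assumption, not the
# configuration used in the paper).
from transformers import AutoTokenizer, ViTImageProcessor, VisionEncoderDecoderModel

# Encoder (a vision Transformer) and decoder (a language model) are each
# initialised from public checkpoints; the decoder's cross-attention weights
# are randomly initialised and learned during fine-tuning on image-caption pairs.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k",  # assumed encoder checkpoint
    "bert-base-uncased",                  # assumed decoder checkpoint
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

# Special tokens required for caption generation.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

# Generating a caption for a single image (a PIL.Image named `image`):
#   pixel_values = image_processor(image, return_tensors="pt").pixel_values
#   output_ids = model.generate(pixel_values, max_length=128, num_beams=4)
#   caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)
```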
Journal Title
Conference Title
CEUR Workshop Proceedings
Book Title
Edition
Volume
2936
Issue
Publisher link
DOI
Funder(s)
Grant identifier(s)
Rights Statement
© 2021 Copyright for this paper by its authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Item Access Status
Note
Access the data
Related item(s)
Subject
Medical biotechnology
Computer graphics
Persistent link to this record
Citation
Nicolson, A; Dowling, J; Koopman, B, AEHRC CSIRO at ImageCLEFmed caption 2021, CEUR Workshop Proceedings, 2021, 2936, pp. 1317-1328