Pre-text representation transfer for deep learning with limited and imbalanced data: Application to CT-based COVID-19 detection

Document Type

Conference Proceeding

Publication Title

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume

13836 LNCS

First Page

119

Last Page

130

Publisher

Springer

School

School of Science

RAS ID

54563

Funders

Australian Government Research Training Program Scholarship / Dr. Akhtar is a recipient of the Office of National Intelligence National Intelligence Postdoctoral Grant # NIPG-2021-001 funded by the Australian Government

Comments

Altaf, F., Islam, S. M., Janjua, N. K., & Akhtar, N. (2023, February). Pre-text representation transfer for deep learning with limited and imbalanced data: Application to CT-based COVID-19 detection. In Image and Vision Computing: 37th International Conference, IVCNZ 2022, Auckland, New Zealand, November 24–25, 2022, Proceedings (Vol. 13836, pp. 119–130). Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-25825-1_9

Abstract

Annotating medical images for disease detection is often tedious and expensive. Moreover, the available training samples for a given task are generally scarce and imbalanced. These conditions are not conducive to learning effective deep neural models. Hence, it is common to ‘transfer’ neural networks trained on natural images to the medical image domain. However, this paradigm underperforms due to the large domain gap between natural and medical image data. To address this, we propose the novel concept of Pre-text Representation Transfer (PRT). In contrast to conventional transfer learning, which fine-tunes a source model after replacing its classification layers, PRT retains the original classification layers and updates the representation layers through an unsupervised pre-text task. The task is performed with original (not synthetic) medical images, without utilizing any annotations. This enables representation transfer with a large amount of training data. This high-fidelity representation transfer allows us to use the resulting model as a more effective feature extractor. Moreover, we can subsequently perform traditional transfer learning with this model. We devise a collaborative representation-based classification layer for the case in which we leverage the model as a feature extractor. We fuse the output of this layer with the predictions of a model induced by traditional transfer learning performed over our pre-text transferred model. The utility of our technique for limited and imbalanced data classification problems is demonstrated with an extensive five-fold evaluation of three large-scale models, tested for five different class-imbalance ratios for CT-based COVID-19 detection. Our results show a consistent gain over conventional transfer learning with the proposed method.
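The collaborative representation-based classification layer mentioned in the abstract can be illustrated as a ridge-regularized coding step followed by class-wise residual comparison. The following is a minimal NumPy sketch of generic collaborative representation classification, not the authors' exact implementation; the function name `crc_classify`, the regularization default, and the scoring rule are illustrative assumptions:

```python
import numpy as np

def crc_classify(A, labels, y, lam=0.01):
    """Sketch of collaborative representation classification (CRC).

    A      : (d, n) dictionary whose columns are training feature vectors
    labels : (n,) integer class id for each column of A
    y      : (d,) query feature vector
    lam    : ridge regularization weight (illustrative default)
    """
    # Ridge-regularized coding over ALL training atoms jointly:
    # alpha = (A^T A + lam * I)^{-1} A^T y
    alpha = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    classes = np.unique(labels)
    scores = []
    for c in classes:
        m = labels == c
        recon = A[:, m] @ alpha[m]  # reconstruction using class-c atoms only
        # Smaller regularized residual: y is better explained by class c
        scores.append(np.linalg.norm(y - recon) / (np.linalg.norm(alpha[m]) + 1e-12))
    return classes[int(np.argmin(scores))]
```

In the paper's pipeline, the columns of `A` would be features extracted by the pre-text transferred model, and this classifier's output would be fused with the predictions of a conventionally fine-tuned model.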

DOI

10.1007/978-3-031-25825-1_9

Access Rights

subscription content
