Domain shift robustness in deep learning models

Date of Award

2023

Document Type

Thesis

Publisher

Edith Cowan University

Degree Name

Doctor of Philosophy

School

School of Science

First Supervisor

Syed Mohammed Shamsul Islam

Second Supervisor

Syed Afaq Ali Shah

Third Supervisor

Asim Iqbal

Fourth Supervisor

Adel Al-Jumaily

Abstract

Advances in machine learning and artificial intelligence are drastically improving our ability to solve tasks ranging from very easy to extremely complex using computational models. Although these models perform very well on the data distribution used for training, their performance tends to degrade when they are presented with data drawn from a different distribution during inference. Data bias, within-domain and out-of-domain deviation, and overfitting to specific data are some of the main challenges for learning-based models. These challenges are prevalent across imaging datasets, where different image modalities are acquired from a variety of imaging sensors, each with a different intensity distribution. The analysis of such imaging data becomes even more challenging when it is multi-vendor and collected at multiple sites, at different time points and under different protocols. For example, medical data from different sites, natural images under different environmental conditions (e.g. day and night, sunny and cloudy), and data acquired through different imaging modalities are difficult for learning-based models to handle. These variations in the data at inference time degrade the performance of the models. In contrast, biological brains are much better at handling such unseen circumstances. By taking advantage of the current understanding of biological structures and their functionality, we can aim to improve existing methods and make them relatively more robust against unseen variations.

In this thesis, we investigate the effect of different types of domain shifts on deep learning-based methods. We analyse the performance of deep learning-based models on various computer vision tasks, i.e. image registration, image classification and image segmentation. The aim is to thoroughly probe the limitations of deep learning-based models and to investigate how learning-based models can be made robust against domain shifts. Our focus is on the specific case in which learning-based methods have no access to the possible domain shifts, since in practice it is not possible to know all possible variations in the data during the training phase.

To address the issue of different intensity distributions (within-domain shifts) in medical image data under the image registration paradigm, we investigated the effects of introducing perceptual and structural losses, in comparison with mean squared error and cross-correlation-based losses, in the training of deep learning-based registration models. Image registration is an important computer vision technique that can be used to precisely monitor disease progression and to analyse large-scale datasets in a high-throughput manner. Deep learning-based image registration methods are mainly inspired by optical flow backbone architectures with the addition of spatial transformer networks. The optical flow formulation assumes that pixel values remain consistent across consecutive frames (the brightness-constancy constraint); we argue that this assumption is violated when intensity distributions differ, which in turn affects the performance of registration methods. We addressed the specific case of non-rigid registration of brain MRI images. By adding perceptual and structural losses, we observe that the models become more robust to changes in intensity.
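To make the contrast concrete, the snippet below is a minimal sketch, assuming a PyTorch setup, of a mean squared error loss alongside an SSIM-style structural loss for a registration network. The tensor names (`warped`, `fixed`) and the window size are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch (assumption: PyTorch; tensors are [B, 1, H, W] images in [0, 1]).
# Contrasts an intensity-sensitive MSE loss with a structure-based (SSIM-style)
# loss for training an image registration network.
import torch
import torch.nn.functional as F


def mse_loss(warped: torch.Tensor, fixed: torch.Tensor) -> torch.Tensor:
    # Penalises raw intensity differences, so a global intensity shift between
    # the two images inflates the loss even when structures are well aligned.
    return F.mse_loss(warped, fixed)


def ssim_loss(warped: torch.Tensor, fixed: torch.Tensor, win: int = 7) -> torch.Tensor:
    # Structural similarity computed over local windows; the use of local means
    # and variances makes the loss less sensitive to intensity distribution shifts.
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    pad = win // 2
    mu_x = F.avg_pool2d(warped, win, stride=1, padding=pad)
    mu_y = F.avg_pool2d(fixed, win, stride=1, padding=pad)
    var_x = F.avg_pool2d(warped * warped, win, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(fixed * fixed, win, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(warped * fixed, win, stride=1, padding=pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return 1.0 - ssim.mean()
```

In a typical registration training loop, either loss would be combined with a smoothness penalty on the predicted deformation field; the structural variant relies on local statistics rather than raw intensities, which is what makes it less sensitive to intensity shifts.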

We then explored the effects of local pixel contrast extracted through modelling a module of the human visual system for saliency region detection in dynamic natural scenes under different illumination conditions. Based on the clear effectiveness of adding such a bio-inspired approach to the existing methods on the natural imaging dataset, we proposed a novel bio-inspired layer (NeDev) in deep neural networks that can greatly enhance the robustness and tolerance against out-of-domain intensity distribution in the case of medical images as well natural image datasets. This layer transforms the input image into a common image space which is computed by local pixel variance. We benchmark the performance of our approach on different datasets to show the efficacy of the proposed layer. Finally, we provide an application tool for the community that can help them label, apply active learning, perform segmentation and registration tasks on medical imaging datasets with a set of trained models.
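As an illustration of the general idea of mapping inputs into a local-deviation space, the following is a rough sketch, assuming a PyTorch implementation, of a layer that normalises each pixel by the mean and standard deviation of its neighbourhood. This is an illustrative approximation of a variance-based input transform, not the thesis's NeDev layer.

```python
# Illustrative sketch only (assumption: PyTorch). A layer that expresses each
# pixel relative to its local neighbourhood statistics, which is largely
# invariant to global shifts and scalings of image intensity.
# This approximates the idea of a local-variance transform; it is NOT the
# thesis's NeDev layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalDeviation(nn.Module):
    def __init__(self, window: int = 7, eps: float = 1e-6):
        super().__init__()
        self.window = window
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, C, H, W]; compute per-pixel local mean and variance.
        pad = self.window // 2
        mu = F.avg_pool2d(x, self.window, stride=1, padding=pad)
        var = F.avg_pool2d(x * x, self.window, stride=1, padding=pad) - mu ** 2
        std = torch.sqrt(var.clamp(min=0.0) + self.eps)
        # Each pixel is re-expressed relative to its local context.
        return (x - mu) / std
```

A downstream backbone could then consume the transformed input, e.g. `nn.Sequential(LocalDeviation(), backbone)`, so that later layers see a representation driven by local structure rather than absolute intensities.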

This study provides a thorough analysis of the effects of different types of domain shifts on deep learning-based methods by investigating their performance on major computer vision tasks, i.e. image registration, image classification and image segmentation. The findings of this research, obtained through the combination of these different pathways, lead to the conclusion that structural losses and local pixel deviation are effective defences against within-domain and out-of-domain shifts.

Access Note

Access to this thesis is embargoed until 30th January 2025.

