Robust fitting: Assisted by semantic analysis and reinforcement learning

Author

Giang Truong


Degree Name

Doctor of Philosophy


School

School of Science

First Advisor

David Suter

Second Advisor

Syed Zulqarnain Gilani

Third Advisor

Erchuan Zhang


Many computer vision applications require robust model estimation from a set of observed data. However, such data usually contain outliers, introduced by imperfect data acquisition or pre-processing steps, which degrade the performance of conventional model-fitting methods. Robust fitting is therefore critical to make model estimation resilient to outliers and achieve stable performance. Maximum consensus is one of the most popular criteria in robust model fitting: it seeks the model that is consistent with as many observations as possible, i.e. the model with the highest consensus. All of the contributions in this thesis address maximum consensus, in two respects: non-learning-based approaches and learning-based approaches.
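To make the maximum consensus criterion concrete, the following is a minimal sketch of a standard randomized hypothesise-and-verify (RANSAC-style) search for 2D line fitting, not one of the algorithms contributed in the thesis. All names (`consensus_size`, `max_consensus_line`) and the inlier threshold are illustrative assumptions; the consensus of a candidate model is simply the number of observations within the threshold.

```python
import random

def consensus_size(model, points, threshold):
    """Count inliers: points within `threshold` residual of the line y = a*x + b."""
    a, b = model
    return sum(1 for x, y in points if abs(y - (a * x + b)) <= threshold)

def max_consensus_line(points, threshold=0.1, iterations=500, seed=0):
    """Randomized search for the line with the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_count = None, -1
    for _ in range(iterations):
        # Hypothesise: fit a line to a minimal sample of two points.
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip degenerate (vertical) samples in this sketch
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Verify: keep the hypothesis with the highest consensus so far.
        count = consensus_size((a, b), points, threshold)
        if count > best_count:
            best_model, best_count = (a, b), count
    return best_model, best_count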

The first motivation for our work is the remarkable recent progress in semantic segmentation, which is widely used in scene understanding, medical image analysis, and virtual reality. We propose novel methods that exploit semantic segmentation to improve the efficiency of two robust non-learning-based algorithms.

Another motivation for our contributions is the recent advances in reinforcement learning. In this thesis, a novel unsupervised learning framework is proposed that learns, without labelled data, to solve robust estimation directly. In particular, we formulate the robust fitting problem as a special case of goal-oriented learning and adopt the reinforcement learning framework as the basis of our approach. Our approach is agnostic to the input features and generalizes to various practical applications.


Author also known as Giang Truong

Access Note

Access to this thesis is embargoed until 9th March 2023.

Access to this thesis is restricted. Please see the Access Note above for access details.