Author Identifiers

Adrian Michael Wood (Adrian)

https://orcid.org/0000-0002-3335-6375

Date of Award

2021

Degree Type

Thesis

Degree Name

Doctor of Philosophy

School

School of Science

First Advisor

Mike Johnstone

Second Advisor

Peter Hannay

Abstract

Machine learning is a subset of Artificial Intelligence which is utilised in a variety of fields to increase productivity, reduce overheads, and simplify the work process by training machines to perform tasks automatically. Machine learning has been implemented in many different fields such as medical science, information technology, finance, and cyber security. Machine learning algorithms build models which identify patterns within data and which, when applied to new data, can map the input to an output with a high degree of accuracy. To build the machine learning model, a dataset comprising appropriate examples is divided into training and testing sets. The training set is used by the machine learning algorithm to identify patterns within the data, which are used to make predictions on new data. The test set is used to evaluate the performance of the machine learning model.
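The train/test workflow described above can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data; the dataset, model, and split ratio are illustrative assumptions, not the setup used in the thesis.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Generate a toy binary-classification dataset standing in for
# real benign/malicious file features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Divide the dataset into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The algorithm learns patterns from the training set...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# ...and the held-out test set evaluates the resulting model.
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```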

These models are popular because they significantly improve the performance of technology through automation of feature detection which previously required human input. However, machine learning algorithms are susceptible to a variety of adversarial attacks, which allow an attacker to manipulate the machine learning model into performing an unwanted action, such as misclassifying data into the attacker's desired class, or reducing the overall efficacy of the machine learning model. One current research area is malware detection. Malware detection relies on machine learning to detect previously unknown malware variants, without the need to manually reverse-engineer every suspicious file. Detection of Zero-day malware plays an important role in protecting systems generally but is particularly important in systems which manage critical infrastructure, as such systems often cannot be shut down to apply patches and thus must rely on network defence.

In this research, a targeted adversarial poisoning attack was developed to allow Zero-day malware files, which were originally classified as malicious, to bypass detection by being misclassified as benign files. An adversarial poisoning attack occurs when an attacker can inject specifically crafted samples into the training dataset, altering the training process toward the attacker's desired outcome. The targeted adversarial poisoning attack was performed by taking a random selection of the Zero-day file's import functions and injecting them into the benign training dataset. The targeted adversarial poisoning attack succeeded for both Multi-Layer Perceptron (MLP) and Decision Tree models without reducing the overall efficacy of the target model. A defensive strategy was developed against the targeted adversarial poisoning attack for the MLP models by examining the activation weights of the penultimate layer at test time. If the activation weights fall outside the norm for the target (benign) class, the file is quarantined for further examination. It was found to be possible to identify, on average, 80% of the target Zero-day files from the combined targeted poisoning attacks by examining the activation weights of the neurons in the penultimate layer.
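The penultimate-layer defence might be sketched as below. This is a simplified illustration, not the thesis's implementation: it uses a scikit-learn MLP on synthetic data, treats label 0 as the benign class, and substitutes a simple per-neuron z-score threshold for the thesis's actual "outside the norm" criterion, which the abstract does not specify.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def penultimate_activations(mlp, X):
    """Forward-propagate X through the hidden layers of a fitted
    MLPClassifier and return the last hidden (penultimate) layer's
    activations. Assumes the default ReLU hidden activation."""
    a = X
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        a = np.maximum(a @ W + b, 0)  # ReLU
    return a

# Toy stand-in for benign (0) vs malicious (1) file features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

# Profile the "norm" for the benign class from the training data.
benign = penultimate_activations(mlp, X_tr[y_tr == 0])
mu, sigma = benign.mean(axis=0), benign.std(axis=0) + 1e-9

# At test time, quarantine any file whose penultimate-layer
# activations deviate strongly from the benign profile.
z = np.abs((penultimate_activations(mlp, X_te) - mu) / sigma)
quarantine = z.max(axis=1) > 3.0
print(f"{quarantine.sum()} of {len(X_te)} test files flagged for review")
```

In practice the threshold would only be applied to files the model classifies as benign, since the attack's goal is to slip malicious files into that class.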
