Date of Award

2026

Document Type

Thesis

Publisher

Edith Cowan University

Degree Name

Doctor of Philosophy

School

School of Science

First Supervisor

Jumana Abu-Khalaf

Second Supervisor

Patryk Szewczyk

Third Supervisor

Naeem Janjua

Abstract

Adversarial attacks and model drift are two major threats to the reliability of machine learning-based Network Intrusion Detection Systems (NIDS). Adversarial attacks exploit model weaknesses through carefully crafted perturbations, while model drift gradually degrades performance as traffic patterns and malware families evolve. This thesis examines how each of these phenomena undermines NIDS reliability, and develops offensive and defensive techniques that are realistic for operational IoT and medical IoT settings.

Four main findings are established. First, for environmental variability, the MalBoT-DRL framework is shown to maintain stable detection and practical adaptation under evolving IIoT traffic without continual offline retraining, whereas conventional deep models lose effectiveness as new botnet variants appear. Second, the feasibility of highly realistic adversarial attacks on tabular network data is demonstrated: on-manifold attacks generated with SAAE and Large Language Model-assisted attacks yield protocol-compliant perturbations. These evade state-of-the-art detectors that appear robust only on fixed benchmark splits. Third, a post hoc detection method, EDLT, is introduced to separate normal and adversarial samples in latent space using controlled transformations and differential entropy, requiring only small batches of samples and no access to the attack generator. Fourth, feature entanglement is established as a unifying, domain-agnostic defense principle. SSD combines stochastic input transformations with entanglement to disrupt smooth gradient structure and raise the cost of gray-box attacks while preserving clean accuracy. M4D applies the same principle to decision trees for TinyML in medical IoT, maintaining interpretability for audit, forensics, and incident response. The generality of this principle is further supported by EntangleNet, which uses entangling preprocessing to improve robustness of image classifiers across multiple datasets without modifying or retraining the target models.
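To make the entropy-based separation idea concrete, the following is a minimal illustrative sketch, not the thesis's actual EDLT implementation: it assumes a Gaussian entropy estimator and a toy additive-noise transformation (both hypothetical simplifications), and shows how a controlled transformation can shift the differential entropy of a tight off-manifold cluster more than that of a well-spread normal cluster.

```python
# Hedged sketch in the spirit of EDLT's latent-space separation; the real
# method, encoder, and controlled transformations are not reproduced here.
import numpy as np

def gaussian_diff_entropy(batch):
    """Differential entropy under a per-dimension Gaussian assumption:
    sum over dimensions of 0.5 * log(2*pi*e * var)."""
    var = batch.var(axis=0) + 1e-12
    return 0.5 * np.sum(np.log(2 * np.pi * np.e * var))

def entropy_shift(latents, transform, n_rounds=10, seed=0):
    """Average change in differential entropy of a small batch of latent
    vectors when a stochastic transformation is applied."""
    rng = np.random.default_rng(seed)
    base = gaussian_diff_entropy(latents)
    shifts = [gaussian_diff_entropy(transform(latents, rng)) - base
              for _ in range(n_rounds)]
    return float(np.mean(shifts))

def noise_transform(z, rng, scale=0.1):
    """Toy controlled transformation: small additive Gaussian noise."""
    return z + rng.normal(0.0, scale, size=z.shape)

rng = np.random.default_rng(42)
normal_latents = rng.normal(0.0, 1.0, size=(64, 8))      # well-spread cluster
adv_latents = 3.0 + rng.normal(0.0, 0.05, size=(64, 8))  # tight off-manifold cluster

# A tight adversarial cluster gains proportionally more entropy under the
# same perturbation, giving a batch-level signal that needs no access to
# the attack generator.
print(entropy_shift(normal_latents, noise_transform) <
      entropy_shift(adv_latents, noise_transform))  # → True
```

The batch-level nature of the statistic mirrors the abstract's claim that only small batches of samples are required, since variance (and hence entropy) is estimated across the batch rather than per sample.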

Overall, the thesis advances knowledge by showing that adaptive learning strategies can address environmental drift, while latent consistency checks and feature entanglement provide effective and reusable leverage against adversarial manipulation across tabular and image modalities. The practical impact is a layered defense stack that reduces false alarms, sustains performance under change, and preserves accountability and transparency for security analysts and healthcare operators, with limited integration overhead. The work also makes its limitations explicit: all results are obtained on widely used, static datasets augmented with realistic adversarial procedures. Hence, maintaining long-term efficacy in live networks will require ongoing monitoring and periodic recalibration. These operational deployment considerations are outside the scope of this thesis.

Access Note

Access to this thesis is embargoed until 11th February 2027

DOI

10.25958/1q9g-qd90

Available for download on Thursday, February 11, 2027
