Document Type
Journal Article
Publication Title
ICT Express
Volume
10
Issue
4
First Page
935
Last Page
958
Publisher
Elsevier
School
Centre for Securing Digital Futures
RAS ID
70346
Funders
Australian Government's Cooperative Research Centres Program / Cyber Security Research Centre Limited
Abstract
Digital twins (DTs) are an emerging digitalization technology with a substantial impact on today's innovations in both industry and research. DTs can significantly enhance society and quality of life by virtualizing real-world physical systems, providing greater insight into their operations and assets, and enhancing their resilience through real-time monitoring and proactive maintenance. However, DTs also pose significant security risks: intellectual property is encoded and made more accessible, and their continuous synchronization with physical counterparts broadens the attack surface. The rapid proliferation and dynamism of cyber threats in today's digital environments motivate the development of automated and intelligent cyber solutions. Today's industrial transformation relies heavily on artificial intelligence (AI), including machine learning (ML) and data-driven technologies that allow machines to perform tasks such as self-monitoring, investigation, diagnosis, prediction, and decision-making intelligently. However, to employ AI-based models effectively in cybersecurity, human-understandable explanations and trustworthiness are significant factors when making decisions in real-world scenarios. This article provides an extensive study of explainable AI (XAI)-based cybersecurity modeling through a taxonomy of AI and XAI methods that can assist security analysts and professionals in comprehending system functions, identifying potential threats and anomalies, and ultimately addressing them intelligently in DT environments. We discuss how these methods can play a key role in solving contemporary cybersecurity issues in various real-world applications. We conclude the paper by identifying crucial challenges and avenues for further research, as well as directions for how professionals and researchers might approach and model next-generation cybersecurity in this emerging field.
DOI
10.1016/j.icte.2024.05.007
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Comments
Sarker, I. H., Janicke, H., Mohsin, A., Gill, A., & Maglaras, L. (2024). Explainable AI for cybersecurity automation, intelligence and trustworthiness in digital twin: Methods, taxonomy, challenges and prospects. ICT Express, 10(4), 935-958. https://doi.org/10.1016/j.icte.2024.05.007