Abstract

Digital twins (DTs) are an emerging digitalization technology with a significant impact on today's innovations in both industry and research. By virtualizing real-world physical systems, DTs can substantially enhance our society and quality of life, providing greater insight into operations and assets and improving resilience through real-time monitoring and proactive maintenance. However, DTs also pose significant security risks, as intellectual property becomes encoded and more accessible, and their continuous synchronization with physical counterparts creates additional exposure. The rapid proliferation and dynamism of cyber threats in today's digital environments motivate the development of automated and intelligent cyber solutions. Today's industrial transformation relies heavily on artificial intelligence (AI), including machine learning (ML) and data-driven technologies that enable machines to perform tasks such as self-monitoring, investigation, diagnosis, prediction, and decision-making intelligently. However, to employ AI-based models effectively in the context of cybersecurity, human-understandable explanations and their trustworthiness are significant factors when making decisions in real-world scenarios. This article provides an extensive study of explainable AI (XAI)-based cybersecurity modeling through a taxonomy of AI and XAI methods that can assist security analysts and professionals in comprehending system functions, identifying potential threats and anomalies, and ultimately addressing them in DT environments in an intelligent manner. We discuss how these methods can play a key role in solving contemporary cybersecurity issues in various real-world applications. We conclude by identifying crucial challenges and avenues for further research, as well as directions for how professionals and researchers might approach and model next-generation cybersecurity in this emerging field.

RAS ID

70346

Document Type

Journal Article

Date of Publication

8-1-2024

Volume

10

Issue

4

Funding Information

Australian Government's Cooperative Research Centres Program / Cyber Security Research Centre Limited

School

Centre for Securing Digital Futures

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.

Publisher

Elsevier

Comments

Sarker, I. H., Janicke, H., Mohsin, A., Gill, A., & Maglaras, L. (2024). Explainable AI for cybersecurity automation, intelligence and trustworthiness in digital twin: Methods, taxonomy, challenges and prospects. ICT Express, 10(4), 935-958. https://doi.org/10.1016/j.icte.2024.05.007


Link to publisher version (DOI)

10.1016/j.icte.2024.05.007