Enhanced NSCLC subtyping and staging through attention-augmented multi-task deep learning: A novel diagnostic tool

Document Type

Journal Article

Publication Title

International Journal of Medical Informatics

Volume

193

PubMed ID

39515045

Publisher

Elsevier

School

Centre for Precision Health / School of Medical and Health Sciences

RAS ID

76479

Funders

National Natural Science Foundation of China (82173617, 82373683) / Beijing Municipal Health Commission (KCZX-KT-2023002)

Comments

Yang, R., Li, W., Yu, S., Wu, Z., Zhang, H., Liu, X., ... & Guo, X. (2025). Enhanced NSCLC subtyping and staging through attention-augmented multi-task deep learning: A novel diagnostic tool. International Journal of Medical Informatics, 193. https://doi.org/10.1016/j.ijmedinf.2024.105694

Abstract

Objectives: The objective of this study is to develop a novel multi-task learning approach with attention encoders for classifying histologic subtypes and clinical stages of non-small cell lung cancer (NSCLC), with superior performance compared to currently popular deep-learning models.

Material and methods: Data were collected from six publicly available datasets in The Cancer Imaging Archive (TCIA). Following the inclusion and exclusion criteria, a total of 4548 CT slices from 758 cases were allocated. We evaluated multiple multi-task learning models that integrate attention mechanisms to resolve challenges in NSCLC subtype classification and clinical staging. These models utilized convolution-based modules in their shared layers for feature extraction, while the task layers were dedicated to histological subtype classification and staging. Each branch sequentially processed features through convolution-based and attention-based modules prior to classification.

Results: Our study evaluated 758 NSCLC patients (mean age, 66.2 years ± 10.3; 473 men), spanning ADC and SCC cases. In the classification of histological subtypes and clinical staging of NSCLC, the MobileNet-based multi-task learning model enhanced with attention mechanisms (MN-MTL-A) demonstrated superior performance, achieving Area Under the Curve (AUC) scores of 0.963 (95% CI: 0.943, 0.981) and 0.966 (95% CI: 0.945, 0.982) for each task, respectively. The model significantly surpassed its counterparts lacking attention mechanisms and those configured for single-task learning, as evidenced by P-values of 0.01 or less for both tasks, according to DeLong's test.

Conclusions: The integration of attention encoder blocks into our multi-task learning network significantly enhanced the accuracy of NSCLC histological subtyping and clinical staging. Given the reduced reliance on precise radiologist annotation, our proposed model shows promising potential for clinical application.
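The architecture the abstract describes, shared convolutional layers feeding two task branches, each passing features through an attention block before its own classifier, can be sketched as follows. This is a hypothetical illustration in NumPy, not the authors' implementation: the layer sizes, token counts, and the use of a plain linear projection in place of the MobileNet backbone are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_features(x):
    # Stand-in for the shared convolutional layers (MobileNet in the paper):
    # project each flattened CT-slice patch into a 64-dim feature token.
    w = rng.standard_normal((x.shape[-1], 64)) * 0.01
    return x @ w  # shape: (batch, tokens, 64)

def self_attention(feats):
    # Scaled dot-product self-attention over the feature tokens,
    # the kind of attention-encoder block each task branch applies.
    d = feats.shape[-1]
    scores = feats @ feats.transpose(0, 2, 1) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over tokens
    return weights @ feats

def task_head(feats, n_classes):
    # One task branch: attend, pool, then classify with a linear layer.
    pooled = self_attention(feats).mean(axis=1)
    w = rng.standard_normal((pooled.shape[-1], n_classes)) * 0.01
    logits = pooled @ w
    logits -= logits.max(axis=-1, keepdims=True)
    probs = np.exp(logits)
    return probs / probs.sum(axis=-1, keepdims=True)

# Toy batch: 2 slices, each split into 16 patches of 256 pixels.
x = rng.standard_normal((2, 16, 256))
feats = shared_features(x)                 # shared layers (one forward pass)
subtype_probs = task_head(feats, 2)        # branch 1: ADC vs SCC
stage_probs = task_head(feats, 4)          # branch 2: clinical stage I-IV
```

The key point the sketch conveys is that both branches consume the same shared feature tensor, so gradients from the subtyping and staging losses would jointly shape the shared extractor during training.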

DOI

10.1016/j.ijmedinf.2024.105694

Access Rights

subscription content
