Automated radiology report generation using a transformer-template system: Improved clinical accuracy and an assessment of clinical safety

Document Type

Conference Proceeding

Publication Title

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume

13728 LNAI

First Page

530

Last Page

543

Publisher

Springer

School

School of Science

RAS ID

53002

Comments

Abela, B., Abu-Khalaf, J., Yang, C. W. R., Masek, M., & Gupta, A. (2022). Automated radiology report generation using a transformer-template system: Improved clinical accuracy and an assessment of clinical safety. In AI 2022: Advances in Artificial Intelligence: 35th Australasian Joint Conference, AI 2022, Perth, WA, Australia, December 5–8, 2022, Proceedings (pp. 530-543). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-031-22695-3_37

Abstract

Radiologists are required to write a descriptive report for each examination they perform, which is a time-consuming process. Deep-learning researchers are developing models to automate this process. Currently, the most researched architecture for this task is the encoder-decoder (E-D). An issue with this approach is that these models are optimised to produce output that is coherent and grammatically correct rather than clinically correct. The current study considers this and instead builds upon a more recent approach that generates reports using a multi-label classification model attached to a Template-based Report Generation (TRG) subsystem. In the current study, two TRG models that utilise either a Transformer or a CNN classifier are produced and directly compared to the most clinically accurate E-D in the literature at the time of writing. The models were trained using the MIMIC-CXR dataset, a public set of 473,057 chest X-rays and 206,563 corresponding reports. Precision, recall and F1 scores were obtained by applying a rule-based labeller to the MIMIC-CXR reports, applying those labels to the corresponding images, and then using the labeller on the generated reports. The TRG models outperformed the E-D model for clinical accuracy, with the largest difference being the recall rate (T-TRG: Precision 0.38, Recall 0.58, F1 0.45; CNN-TRG: Precision 0.34, Recall 0.69, F1 0.42; E-D: Precision 0.38, Recall 0.14, F1 0.19). Examination of the quantitative metrics for each specific abnormality, combined with the qualitative assessment, concludes that significant progress still needs to be made before clinical integration is safe.
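The evaluation described above can be sketched as follows: each ground-truth report and each generated report is reduced to a set of abnormality labels by a rule-based labeller, and precision, recall and F1 are computed over those label sets. This is a minimal illustrative sketch of micro-averaged scoring only; the function name, label names and data are hypothetical and not taken from the paper.

```python
def micro_prf(true_labels, pred_labels):
    """Micro-averaged precision, recall and F1 over multi-label sets.

    true_labels, pred_labels: parallel lists of sets, one set of
    abnormality labels per report (illustrative assumption).
    """
    tp = fp = fn = 0
    for truth, pred in zip(true_labels, pred_labels):
        tp += len(truth & pred)   # labels present in both reports
        fp += len(pred - truth)   # predicted but not in ground truth
        fn += len(truth - pred)   # in ground truth but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: two reports, labels as sets of abnormality names.
truth = [{"cardiomegaly", "edema"}, {"pneumonia"}]
pred = [{"cardiomegaly"}, {"pneumonia", "edema"}]
p, r, f1 = micro_prf(truth, pred)
```

Micro-averaging pools true/false positives across all reports before dividing, which matches how a single aggregate precision/recall/F1 triple per model (as reported in the abstract) is typically computed; per-abnormality scores would instead be computed one label at a time.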

DOI

10.1007/978-3-031-22695-3_37

Access Rights

Subscription content
