Predicting falls through muscle weakness from a single whole body image: A multimodal contrastive learning framework

Document Type

Conference Proceeding

Publication Title

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume

15384 LNCS

First Page

85

Last Page

94

Publisher

Springer

School

Centre for Artificial Intelligence and Machine Learning (CAIML) / School of Science / Nutrition and Health Innovation Research Institute

Funders

National Health and Medical Research Council / Healthway / Future Health Research and Innovation Fund / Royal Perth Hospital Research Foundation Fellowship (RPHRF CAF 00/21) / National Heart Foundation of Australia Future Leader Fellowship (102817) / Raine Medical Research Foundation / Australia-Germany Joint Research Cooperative Scheme (57559550)

Grant Number

NHMRC Numbers: 254627, 303169, 572604

Comments

Zhang, X., Saleem, A., Ilyas, Z., Suter, D., Nadeem, U., Prince, R. L., ... & Gilani, S. Z. (2024, October). Predicting falls through muscle weakness from a single whole body image: A multimodal contrastive learning framework. In International Workshop on Applications of Medical AI (pp. 85-94). Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-82007-6_9

Abstract

Falls are often attributed to poor muscle function, with weak hand grip strength clinically recognized as a major risk factor. However, grip strength is rarely assessed in clinical practice. Low-radiation dual-energy X-ray absorptiometry (DXA) whole-body scans, which can be obtained during routine osteoporosis screening, offer a comprehensive overview of body composition and thus provide valuable information about musculoskeletal health. Here, we propose a machine learning technique that exploits image and clinical data to classify weak grip strength (<22 kg), thereby enhancing fall prediction capabilities. To effectively utilize both discrete and continuous grip strength information, we introduce a novel supervised contrastive learning (SupCon) loss strategy, supplemented by regression loss guidance. Additionally, we present a pipeline featuring a unique Region of Interest (RoI) extraction strategy in the data preprocessing procedure, designed to focus on areas of genuine interest. Our proposed multi-modal contrastive learning (MMCL) framework enhances feature separability and class diversity in the latent space by leveraging different types of information. We evaluate the performance of our framework on a dataset of older women (2,144 images) and employ survival analysis to assess the risk of future fall-related hospitalization over 5 years. Our results demonstrate that weak grip strength classified by the proposed approach achieves high sensitivity and accuracy and predicts the risk of injurious falls in older women.
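The abstract describes combining a supervised contrastive (SupCon) loss on the discrete weak/normal grip-strength labels with a regression loss on the continuous grip-strength values. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that general idea, with all names (temperature, lambda_reg, the MSE regression head) being assumptions made for the example.

```python
# Minimal sketch (not the paper's code): a supervised contrastive loss over
# multimodal embeddings, combined with regression guidance on continuous
# grip strength. The weighting lambda_reg and the MSE choice are illustrative.
import torch
import torch.nn.functional as F


def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) on L2-normalized features.

    features: (N, D) embeddings from the image/clinical encoder.
    labels:   (N,) discrete class labels, e.g. weak (<22 kg) vs. normal grip.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature
    # Subtract the per-row max for numerical stability.
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()

    # Exclude each anchor from its own denominator and positive set.
    logits_mask = ~torch.eye(len(labels), dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & logits_mask

    # Log-probability of each sample against all other samples.
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    # Mean log-probability over positives, averaged over anchors with positives.
    pos_counts = pos_mask.sum(dim=1)
    mean_log_prob_pos = (log_prob * pos_mask).sum(dim=1) / pos_counts.clamp(min=1)
    return -mean_log_prob_pos[pos_counts > 0].mean()


def combined_loss(features, grip_pred, grip_true, class_labels, lambda_reg=0.5):
    """Contrastive loss on discrete labels plus regression guidance on the
    continuous grip-strength values (lambda_reg is a hypothetical weight)."""
    contrastive = supcon_loss(features, class_labels)
    regression = F.mse_loss(grip_pred, grip_true)
    return contrastive + lambda_reg * regression
```

In this kind of setup, the contrastive term pulls together embeddings of samples with the same grip-strength class while the regression term keeps the continuous grip-strength signal available to the network; how the paper actually weights and structures these terms is described in the full text.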

DOI

10.1007/978-3-031-82007-6_9

Access Rights

subscription content
