Sports Medicine - Open
Western Australian Academy of Performing Arts (WAAPA)
Background
Accurate and detailed measurement of a dancer's training volume is a key requirement to understanding the relationship between a dancer's pain and training volume. Currently, no system exists that can quantify a dancer's training volume with respect to specific movement activities. Machine learning models applied to wearable sensor data for human activity recognition have previously been used in cricket, tennis and rugby. The purpose of this study was therefore to develop a human activity recognition system using wearable sensor data to accurately identify key ballet movements (jumping and lifting the leg). Our primary objective was to determine whether machine learning can accurately identify key ballet movements during dance training. The secondary objective was to determine the influence of sensor location and number on accuracy.

Results
Convolutional neural networks were applied to develop two models for every combination of six sensors (6, 5, 4, 3, etc.), with and without the inclusion of transition movements. At the first level of classification, using data from all sensors and excluding transitions, the model performed with 97.8% accuracy. Accuracy decreased at the second (83.0%) and third (75.1%) levels of classification, and decreased further with the inclusion of transitions, with fewer sensors, and across various sensor combinations.

Conclusion
The models developed were robust enough to identify jumping and leg-lifting tasks in real-world exposures in dancers. The system provides a novel method for measuring dancer training volume through quantification of specific movement tasks. Such a system can be used to further understand the relationship between dancers' pain and training volume, and in athlete monitoring systems. Further, this provides a proof of concept that can be readily translated to other lower-limb-dominant sporting activities.
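The abstract describes feeding wearable sensor data to convolutional neural networks for activity classification. A common preprocessing step for such models is slicing the continuous sensor streams into fixed-length overlapping windows, which then become the CNN inputs. The sketch below illustrates that step only; the sensor count, sampling rate and window length are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def segment_windows(signal, window_len, stride):
    """Slice a (samples, channels) sensor stream into overlapping
    fixed-length windows, the usual input shape for a 1D CNN."""
    windows = []
    for start in range(0, signal.shape[0] - window_len + 1, stride):
        windows.append(signal[start:start + window_len])
    return np.stack(windows)  # (n_windows, window_len, channels)

# Hypothetical recording: 6 body-worn IMUs x 3 accelerometer axes,
# sampled at an assumed 100 Hz for 10 s (values are random placeholders).
rng = np.random.default_rng(0)
stream = rng.standard_normal((1000, 18))

# Assumed 2 s windows with 50% overlap.
X = segment_windows(stream, window_len=200, stride=100)
print(X.shape)  # (9, 200, 18)
```

Each resulting window would be labelled with the movement task performed (e.g. jump, leg lift, transition) and passed to the classifier; dropping channels from `stream` mimics the paper's evaluation of reduced sensor sets.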
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.