Language model guided interpretable video action reasoning

Author Identifier

Syed Afaq Ali Shah: https://orcid.org/0000-0003-2181-8445

Document Type

Conference Proceeding

Publication Title

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

First Page

18878

Last Page

18887

Publisher

IEEE

School

School of Science

RAS ID

65876

Funders

National Natural Science Foundation of China (62073252, 62072358) / Natural Science Basic Research Program of Shaanxi (2024JC-JCQN-66)

Comments

Wang, N., Zhu, G., Li, H., Zhang, L., Ali Shah, S. A., & Bennamoun, M. (2024). Language model guided interpretable video action reasoning. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 18878-18887). IEEE. https://doi.org/10.1109/CVPR52733.2024.01786

Abstract

While neural networks have excelled in video action recognition tasks, their 'black-box' nature often obscures the understanding of their decision-making processes. Recent approaches used inherently interpretable models to analyze video actions in a manner akin to human reasoning. These models, however, usually fall short in performance compared to their 'black-box' counterparts. In this work, we present a new framework named Language-guided Interpretable Action Recognition (LaIAR). LaIAR leverages knowledge from language models to enhance both the recognition capabilities and the interpretability of video models. In essence, we redefine the problem of understanding video model decisions as a task of aligning video and language models. Using the logical reasoning captured by the language model, we steer the training of the video model. This integrated approach not only improves the video model's adaptability to different domains but also boosts its overall performance. Extensive experiments on two complex video action datasets, Charades and CAD-120, validate the improved performance and interpretability of our LaIAR framework. The code of LaIAR is available at https://github.com/NingWang2049/LaIAR.

DOI

10.1109/CVPR52733.2024.01786

Access Rights

subscription content
