Hierarchical transformer for visual affordance understanding using a large-scale dataset

Document Type

Conference Proceeding

Publication Title

2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

First Page

11371

Last Page

11376

Publisher

IEEE

School

School of Science / Centre for Artificial Intelligence and Machine Learning (CAIML)

RAS ID

58316

Comments

Shah, S. A. A., & Khalifa, Z. (2023, October). Hierarchical transformer for visual affordance understanding using a large-scale dataset [Paper presentation]. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA. https://doi.org/10.1109/IROS55552.2023.10341976

Abstract

Recognition, detection, and segmentation tasks in machine vision have focused on studying the physical and textural attributes of objects. However, robots and intelligent machines require the ability to understand visual cues, such as the visual affordances that objects offer, to interact intelligently with novel objects. In this paper, we present a large-scale multi-view RGBD visual affordance learning dataset: a benchmark of 47,210 RGBD images from 37 object categories, annotated with 15 visual affordance categories and 35 cluttered/complex scenes. We deploy a Vision Transformer (ViT), called the Visual Affordance Transformer (VAT), for the affordance segmentation task. Due to its hierarchical architecture, VAT can learn multiple affordances at various scales, making it suitable for objects of varying sizes. Our experimental results show the superior performance of VAT compared to state-of-the-art deep learning networks. In addition, the challenging nature of the proposed dataset highlights the potential for new and robust affordance learning algorithms. Our dataset is publicly available at https://sites.google.com/view/afaqshah/dataset.

DOI

10.1109/IROS55552.2023.10341976

Access Rights

Subscription content
