Relation graph network for 3D object detection in point clouds
IEEE Transactions on Image Processing
School of Science
National Natural Science Foundation of China / Australian Research Council
Convolutional Neural Networks (CNNs) have emerged as a powerful tool for object detection in 2D images. However, their power has not been fully realized for detecting 3D objects directly in point clouds, without conversion to regular grids. Moreover, existing state-of-the-art 3D object detection methods aim to recognize objects individually, without exploiting their relationships during learning or inference. In this article, we first propose a strategy that associates the predictions of direction vectors with pseudo geometric centers, leading to a win-win solution for regressing 3D bounding box candidates. Secondly, we propose point attention pooling to extract uniform appearance features for each 3D object proposal, benefiting from the learned direction features, semantic features and spatial coordinates of the object points. Finally, the appearance features are used together with the position features to build 3D object-object relation graphs over all proposals to model their co-existence, and we explore the effect of these relation graphs on enhancing the proposals' appearance features under both supervised and unsupervised settings. The proposed relation graph network comprises a 3D object-proposal generation module and a 3D relation module, making it an end-to-end trainable network for detecting 3D objects in point clouds. Experiments on challenging benchmark point-cloud datasets (SunRGB-D, ScanNet and KITTI) show that our algorithm outperforms existing state-of-the-art methods.
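To make the relation-graph idea concrete, the following is a minimal illustrative sketch (not the paper's exact formulation) of enhancing proposal appearance features by attending over all proposals with weights that mix appearance affinity and pairwise 3D position. The projection matrices `Wq` and `Wk` stand in for learned parameters and are randomly initialized here; the distance-based position term is a simplifying assumption.

```python
import numpy as np

def relation_graph_enhance(appearance, centers, d_k=16, seed=0):
    """Hedged sketch: enhance each 3D proposal's appearance feature by
    attending over all proposals. Attention logits combine appearance
    similarity (scaled dot product) with a pairwise position term
    (negative Euclidean distance between proposal centers).

    appearance: (n, d) per-proposal appearance features
    centers:    (n, 3) per-proposal 3D centers
    """
    rng = np.random.default_rng(seed)
    n, d = appearance.shape
    # Stand-ins for learned projection matrices (assumption, not the
    # paper's parameterization).
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    q, k = appearance @ Wq, appearance @ Wk
    app_logits = (q @ k.T) / np.sqrt(d_k)        # appearance affinity
    dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    geo_logits = -dist                           # nearer proposals weigh more
    logits = app_logits + geo_logits
    # Row-wise softmax over all proposals (the "relation graph" edges).
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Residual enhancement: each proposal aggregates related features.
    return appearance + w @ appearance
```

In this sketch the relation graph is fully connected, with edge weights learned implicitly through the attention logits; a trained network would backpropagate through `Wq` and `Wk` rather than fix them randomly.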