Semantic guided long range stereo depth estimation for safer autonomous vehicle applications

Document Type

Journal Article

Publication Title

IEEE Transactions on Intelligent Transportation Systems

Publisher

IEEE

School

School of Science

RAS ID

44437

Comments

Chuah, W., Tennakoon, R., Hoseinnezhad, R., Suter, D., & Bab-Hadiashar, A. (2022). Semantic guided long range stereo depth estimation for safer autonomous vehicle applications. IEEE Transactions on Intelligent Transportation Systems, 23(10), 18916-18926.

https://doi.org/10.1109/TITS.2022.3170870

Abstract

Autonomous vehicles in intelligent transportation systems must be able to perform reliable and safe navigation. This necessitates accurate object detection, which is commonly achieved through high-precision depth perception. Existing stereo vision-based depth estimation systems generally compute pixel correspondences and estimate disparities between rectified image pairs, and the estimated disparity values are then converted into depth values for downstream applications. As most applications work in the depth domain, the accuracy of depth estimation matters more than that of disparity estimation. However, at large distances (>50 m), accurate disparity estimation does not directly translate into accurate depth estimation. In learning-based stereo systems, this is mainly due to biases imposed by the choice of disparity-based loss function and by the training data. Consequently, learning algorithms often produce unreliable depth estimates for under-represented foreground objects, particularly at large distances. To resolve this issue, we first analyze the effect of these biases and then propose a pair of depth-based loss functions, applied separately to foreground objects and the background. These loss functions can be tuned to balance the inherent bias of stereo learning algorithms. The efficacy of our solution is demonstrated by an extensive set of experiments benchmarked against the state of the art. On the KITTI 2015 benchmark, our proposed solution yields substantial improvements in disparity and depth estimation, particularly for objects located beyond 50 meters, outperforming the previous state of the art by 10%.
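The abstract's central observation, that disparity accuracy does not directly translate into depth accuracy at range, follows from the pinhole stereo relation Z = fB/d, where a fixed disparity error produces a depth error growing roughly with Z². Below is a minimal PyTorch-style sketch of that conversion and of a depth-domain loss split between foreground and background. The function names, the smooth-L1 penalty, and the weights w_fg/w_bg are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def disparity_to_depth(disparity, focal_length, baseline, eps=1e-6):
    """Convert disparity (pixels) to metric depth via Z = f * B / d.

    Since dZ/dd = -f*B / d**2 = -Z**2 / (f*B), a constant disparity
    error yields a depth error that grows quadratically with distance,
    so good disparity at short range does not imply good depth at 50 m+.
    """
    return focal_length * baseline / disparity.clamp(min=eps)

def depth_domain_loss(pred_disp, gt_disp, fg_mask, focal_length, baseline,
                      w_fg=1.0, w_bg=1.0):
    """Hypothetical depth-domain loss with separate foreground/background
    weights; w_fg and w_bg are tunable knobs used here only to illustrate
    balancing the bias toward over-represented background pixels."""
    pred_depth = disparity_to_depth(pred_disp, focal_length, baseline)
    gt_depth = disparity_to_depth(gt_disp, focal_length, baseline)
    err = F.smooth_l1_loss(pred_depth, gt_depth, reduction="none")
    fg = fg_mask.float()  # 1 on foreground objects, 0 on background
    return (w_fg * err * fg + w_bg * err * (1.0 - fg)).mean()
```

Setting w_fg above w_bg up-weights the sparse, under-represented foreground pixels, which is one simple way to counteract the background-dominated training signal the abstract describes.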

DOI

10.1109/TITS.2022.3170870

Access Rights

subscription content
