Lorenzo Bertoni, Sven Kreiss, Taylor Mordan, Alexandre Alahi
International Conference on Robotics and Automation (ICRA) 2021
Monocular and stereo vision are cost-effective solutions for 3D human localization in the context of self-driving cars or social robots. However, they are usually developed independently and have their respective strengths and limitations. We propose a novel unified learning framework that leverages the strengths of both monocular and stereo cues for 3D human localization. Our method jointly (i) associates humans in left-right images, (ii) deals with occluded and distant cases in stereo settings by relying on the robustness of monocular cues, and (iii) tackles the intrinsic ambiguity of monocular perspective projection by exploiting prior knowledge of the human height distribution. We specifically evaluate outliers as well as challenging instances, such as occluded and far-away pedestrians, by analyzing the entire error distribution and by estimating calibrated confidence intervals. Finally, we critically review the official KITTI 3D metrics and propose a practical 3D localization metric tailored for humans.
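To illustrate the geometric intuition behind point (iii), the sketch below shows how a prior on human height turns the otherwise ambiguous monocular perspective projection into a depth estimate with an interval. It is only a back-of-the-envelope pinhole-camera calculation with assumed height statistics (mean 1.75 m, std 0.10 m, illustrative values not taken from the paper); the actual method is a learned network that also produces calibrated confidence intervals.

```python
def depth_from_pixel_height(pixel_height_px, focal_px,
                            mean_height_m=1.75, std_height_m=0.10):
    """Illustrative pinhole-camera estimate of pedestrian depth.

    The height prior (mean/std) is an assumed, illustrative distribution,
    not the one learned or used in the paper.
    """
    # Pinhole model: pixel_height = focal * real_height / depth
    depth = focal_px * mean_height_m / pixel_height_px
    # Propagate the spread of the height prior into a rough +/-1 sigma depth interval
    lower = focal_px * (mean_height_m - std_height_m) / pixel_height_px
    upper = focal_px * (mean_height_m + std_height_m) / pixel_height_px
    return depth, (lower, upper)


# Example: a pedestrian appearing 120 px tall with a 720 px focal length
d, (d_lo, d_hi) = depth_from_pixel_height(120.0, 720.0)
print(f"estimated depth ~{d:.1f} m, interval [{d_lo:.1f}, {d_hi:.1f}] m")
```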