Single Image Depth Estimation and 3D Reconstruction Under Multi-Sensing Modalities
Assistant Professor, Rochester Institute of Technology
Time: December 4, 2020 @ 3:00 PM
Recovering a 3D shape representation from a single input image has attracted increasing attention in recent years, since multiple images from different perspectives or 3D CAD models are not always available in real applications. I present a novel shape-from-silhouette method that requires just a single image: an end-to-end learning framework that combines view synthesis with shape-from-silhouette methodology to reconstruct a 3D shape.

Compared with RGB cameras, high-resolution LiDARs are precise at depth sensing but usually too expensive. A single-beam LiDAR enjoys the benefit of low cost, but one-beam depth sensing is usually insufficient to perceive the surrounding environment in many scenarios. I present a deep-neural-network-based framework that replicates, or even exceeds, the performance of costly high-resolution LiDARs using a designated self-supervised network together with a low-cost single-beam LiDAR.

To overcome the deficiency that RGB images are unsuitable for dark or nighttime environments with limited lighting, I present a framework that estimates scene depth directly from a single thermal image, which can still observe the scene under low-light conditions. With the proposed approach, an accurate depth map can be predicted without any prior knowledge under various illumination conditions.