Junyu Zhu

MS Student

Institute of Cyber-Systems and Control, Zhejiang University, China

Address

Room 101, Institute of Cyber-Systems and Control, Yuquan Campus, Zhejiang University, Hangzhou, Zhejiang, China

Contact Information

Email: junyuzhu@zju.edu.cn


Biography

I am pursuing an M.S. degree at the College of Control Science and Engineering, Zhejiang University, after receiving my B.S. degree in Automation from Wuhan University in 2021. My main research interest is depth estimation.

Research Interests

  • Depth Estimation
  • BEV Perception
  • Semi-supervised Learning

Publications

  • Junyu Zhu, Lina Liu, Yong Liu, Wanlong Li, Feng Wen, and Hongbo Zhang. FG-Depth: Flow-Guided Unsupervised Monocular Depth Estimation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023.
    The great potential of unsupervised monocular depth estimation has been demonstrated by many works due to low annotation cost and impressive accuracy comparable to supervised methods. To further improve the performance, recent works mainly focus on designing more complex network structures and exploiting extra supervised information, e.g., semantic segmentation. These methods optimize the models by exploiting the reconstructed relationship between the target and reference images in varying degrees. However, previous methods prove that this image reconstruction optimization is prone to get trapped in local minima. In this paper, our core idea is to guide the optimization with prior knowledge from pretrained Flow-Net. And we show that the bottleneck of unsupervised monocular depth estimation can be broken with our simple but effective framework named FG-Depth. In particular, we propose (i) a flow distillation loss to replace the typical photometric loss that limits the capacity of the model and (ii) a prior flow based mask to remove invalid pixels that bring the noise in training loss. Extensive experiments demonstrate the effectiveness of each component, and our approach achieves state-of-the-art results on both KITTI and NYU-Depth-v2 datasets.
    @inproceedings{zhu2023fgd,
      title = {FG-Depth: Flow-Guided Unsupervised Monocular Depth Estimation},
      author = {Junyu Zhu and Lina Liu and Yong Liu and Wanlong Li and Feng Wen and Hongbo Zhang},
      year = {2023},
      booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)},
      doi = {10.1109/ICRA48891.2023.10160534}
    }
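
To make the core idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a flow-distillation objective: the predicted depth, together with a relative camera pose, induces a rigid optical flow, which is pulled toward a pretrained flow network's output over a validity mask. The function names, tensor shapes, and masking rule here are illustrative assumptions, not the FG-Depth implementation.

```python
# Hypothetical sketch only: names and shapes below are assumptions for
# illustration, not the released FG-Depth code.
import torch


def rigid_flow_from_depth(depth, K, K_inv, R, t):
    """Induce a rigid optical flow from predicted depth and camera motion.

    Back-project each pixel with its predicted depth, apply the relative
    pose (R, t), and re-project with the intrinsics K.

    depth: (B, 1, H, W); K, K_inv, R: (B, 3, 3); t: (B, 3, 1)
    returns: flow (B, 2, H, W)
    """
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, device=depth.device, dtype=depth.dtype),
        torch.arange(W, device=depth.device, dtype=depth.dtype),
        indexing="ij",
    )
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).reshape(1, 3, -1)  # (1, 3, H*W)

    # Back-project to 3D, move into the reference frame, and re-project.
    cam = (K_inv @ pix.expand(B, -1, -1)) * depth.reshape(B, 1, -1)
    cam_ref = R @ cam + t
    proj = K @ cam_ref
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)

    return (uv - pix[:, :2]).reshape(B, 2, H, W)


def flow_distillation_loss(depth_induced_flow, teacher_flow, valid_mask):
    """L1 distance between the depth-induced rigid flow and the pretrained
    flow network's prediction, averaged over valid pixels only.

    valid_mask: (B, 1, H, W), 1 where the prior flow is trusted, else 0.
    """
    diff = (depth_induced_flow - teacher_flow).abs().sum(dim=1, keepdim=True)
    return (diff * valid_mask).sum() / valid_mask.sum().clamp(min=1.0)
```

In this sketch the distillation term stands in for the usual photometric loss, matching the abstract's description; the validity mask would plausibly come from the prior flow itself (e.g., via a forward-backward consistency check), filtering out the invalid pixels the abstract says add noise to the training loss.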