Jiajun Lv

PhD Student

Institute of Cyber-Systems and Control, Zhejiang University, China

Address

Room 101, Institute of Cyber-Systems and Control, Yuquan Campus, Zhejiang University, Hangzhou, Zhejiang, China

Contact Information

Email: lvjiajun314@zju.edu.cn


Biography

I am pursuing my Ph.D. degree in the College of Control Science and Engineering at Zhejiang University, Hangzhou, China. My major research interest is the extrinsic calibration of LiDAR, IMU, and camera sensors.

Research Interests

  • Sensor Calibration
  • Sensor Fusion

Publications

  • Xiaolei Lang, Chao Chen, Kai Tang, Yukai Ma, Jiajun Lv, Yong Liu, and Xingxing Zuo. Coco-LIC: Continuous-Time Tightly-Coupled LiDAR-Inertial-Camera Odometry using Non-Uniform B-spline. IEEE Robotics and Automation Letters, 8:7074-7081, 2023.
    [BibTeX] [Abstract] [DOI] [PDF]
    In this paper, we propose an efficient continuous-time LiDAR-Inertial-Camera Odometry, utilizing non-uniform B-splines to tightly couple measurements from the LiDAR, IMU, and camera. In contrast to uniform B-spline-based continuous-time methods, our non-uniform B-spline approach offers significant advantages in terms of achieving real-time efficiency and high accuracy. This is accomplished by dynamically and adaptively placing control points, taking into account the varying dynamics of the motion. To enable efficient fusion of heterogeneous LiDAR-Inertial-Camera data within a short sliding-window optimization, we assign depth to visual pixels using corresponding map points from a global LiDAR map, and formulate frame-to-map reprojection factors for the associated pixels in the current image frame. This circumvents the necessity for depth optimization of visual pixels, which typically entails a lengthy sliding window with numerous control points for continuous-time trajectory estimation. We conduct dedicated experiments on real-world datasets to demonstrate the advantage and efficacy of adopting a non-uniform continuous-time trajectory representation. Our LiDAR-Inertial-Camera odometry system is also extensively evaluated on both challenging scenarios with sensor degenerations and large-scale scenarios, and has shown comparable or higher accuracy than the state-of-the-art methods. The codebase of this paper will also be open-sourced at https://github.com/APRIL-ZJU/Coco-LIC.
    @article{lang2023lic,
    title = {Coco-LIC: Continuous-Time Tightly-Coupled LiDAR-Inertial-Camera Odometry using Non-Uniform B-spline},
    author = {Xiaolei Lang and Chao Chen and Kai Tang and Yukai Ma and Jiajun Lv and Yong Liu and Xingxing Zuo},
    year = 2023,
    journal = {IEEE Robotics and Automation Letters},
    volume = 8,
    pages = {7074-7081},
    doi = {10.1109/LRA.2023.3315542},
    abstract = {In this paper, we propose an efficient continuous-time LiDAR-Inertial-Camera Odometry, utilizing non-uniform B-splines to tightly couple measurements from the LiDAR, IMU, and camera. In contrast to uniform B-spline-based continuous-time methods, our non-uniform B-spline approach offers significant advantages in terms of achieving real-time efficiency and high accuracy. This is accomplished by dynamically and adaptively placing control points, taking into account the varying dynamics of the motion. To enable efficient fusion of heterogeneous LiDAR-Inertial-Camera data within a short sliding-window optimization, we assign depth to visual pixels using corresponding map points from a global LiDAR map, and formulate frame-to-map reprojection factors for the associated pixels in the current image frame. This circumvents the necessity for depth optimization of visual pixels, which typically entails a lengthy sliding window with numerous control points for continuous-time trajectory estimation. We conduct dedicated experiments on real-world datasets to demonstrate the advantage and efficacy of adopting a non-uniform continuous-time trajectory representation. Our LiDAR-Inertial-Camera odometry system is also extensively evaluated on both challenging scenarios with sensor degenerations and large-scale scenarios, and has shown comparable or higher accuracy than the state-of-the-art methods. The codebase of this paper will also be open-sourced at https://github.com/APRIL-ZJU/Coco-LIC.}
    }
  • Jiajun Lv, Xiaolei Lang, Jinhong Xu, Mengmeng Wang, Yong Liu, and Xingxing Zuo. Continuous-Time Fixed-Lag Smoothing for LiDAR-Inertial-Camera SLAM. IEEE/ASME Transactions on Mechatronics, 28:2259-2270, 2023.
    [BibTeX] [Abstract] [DOI] [PDF]
    Localization and mapping with heterogeneous multi-sensor fusion have been prevalent in recent years. To adequately fuse multi-modal sensor measurements received at different time instants and different frequencies, we estimate the continuous-time trajectory by fixed-lag smoothing within a factor-graph optimization framework. With the continuous-time formulation, we can query poses at any time instants corresponding to the sensor measurements. To bound the computation complexity of the continuous-time fixed-lag smoother, we maintain temporal and keyframe sliding windows with constant size, and probabilistically marginalize out control points of the trajectory and other states, which allows preserving prior information for future sliding-window optimization. Based on continuous-time fixed-lag smoothing, we design tightly-coupled multi-modal SLAM algorithms with a variety of sensor combinations, like the LiDAR-inertial and LiDAR-inertial-camera SLAM systems, in which online time-offset calibration is also naturally supported. More importantly, benefiting from the marginalization and our derived analytical Jacobians for optimization, the proposed continuous-time SLAM systems can achieve real-time performance regardless of the high complexity of the continuous-time formulation. The proposed multi-modal SLAM systems have been widely evaluated on three public datasets and self-collected datasets. The results demonstrate that the proposed continuous-time SLAM systems can achieve high-accuracy pose estimations and outperform existing state-of-the-art methods. To benefit the research community, we will open source our code at https://github.com/APRIL-ZJU/clic.
    @article{lv2023ctfl,
    title = {Continuous-Time Fixed-Lag Smoothing for LiDAR-Inertial-Camera SLAM},
    author = {Jiajun Lv and Xiaolei Lang and Jinhong Xu and Mengmeng Wang and Yong Liu and Xingxing Zuo},
    year = 2023,
    journal = {IEEE/ASME Transactions on Mechatronics},
    volume = 28,
    pages = {2259-2270},
    doi = {10.1109/TMECH.2023.3241398},
    abstract = {Localization and mapping with heterogeneous multi-sensor fusion have been prevalent in recent years. To adequately fuse multi-modal sensor measurements received at different time instants and different frequencies, we estimate the continuous-time trajectory by fixed-lag smoothing within a factor-graph optimization framework. With the continuous-time formulation, we can query poses at any time instants corresponding to the sensor measurements. To bound the computation complexity of the continuous-time fixed-lag smoother, we maintain temporal and keyframe sliding windows with constant size, and probabilistically marginalize out control points of the trajectory and other states, which allows preserving prior information for future sliding-window optimization. Based on continuous-time fixed-lag smoothing, we design tightly-coupled multi-modal SLAM algorithms with a variety of sensor combinations, like the LiDAR-inertial and LiDAR-inertial-camera SLAM systems, in which online time-offset calibration is also naturally supported. More importantly, benefiting from the marginalization and our derived analytical Jacobians for optimization, the proposed continuous-time SLAM systems can achieve real-time performance regardless of the high complexity of the continuous-time formulation. The proposed multi-modal SLAM systems have been widely evaluated on three public datasets and self-collected datasets. The results demonstrate that the proposed continuous-time SLAM systems can achieve high-accuracy pose estimations and outperform existing state-of-the-art methods. To benefit the research community, we will open source our code at {https://github.com/APRIL-ZJU/clic}.}
    }
  • Chao Chen, Yukai Ma, Jiajun Lv, Xiangrui Zhao, Laijian Li, Yong Liu, and Wang Gao. OL-SLAM: A Robust and Versatile System of Object Localization and SLAM. Sensors, 23:801, 2023.
    [BibTeX] [Abstract] [DOI] [PDF]
    This paper proposes a real-time, versatile Simultaneous Localization and Mapping (SLAM) and object localization system, which fuses measurements from LiDAR, camera, Inertial Measurement Unit (IMU), and Global Positioning System (GPS). Our system can locate itself in an unknown environment and build a scene map, based on which we can also track and obtain the global location of objects of interest. Precisely, our SLAM subsystem consists of the following four parts: LiDAR-inertial odometry, visual-inertial odometry, GPS-inertial odometry, and global pose graph optimization. The target-tracking and positioning subsystem is developed based on YOLOv4. Benefiting from the use of the GPS sensor in the SLAM system, we can obtain the global positioning information of the target; therefore, it can be highly useful in military operations, rescue and disaster relief, and other scenarios.
    @article{chen2023ols,
    title = {OL-SLAM: A Robust and Versatile System of Object Localization and SLAM},
    author = {Chao Chen and Yukai Ma and Jiajun Lv and Xiangrui Zhao and Laijian Li and Yong Liu and Wang Gao},
    year = 2023,
    journal = {Sensors},
    volume = 23,
    pages = {801},
    doi = {10.3390/s23020801},
    abstract = {This paper proposes a real-time, versatile Simultaneous Localization and Mapping (SLAM) and object localization system, which fuses measurements from LiDAR, camera, Inertial Measurement Unit (IMU), and Global Positioning System (GPS). Our system can locate itself in an unknown environment and build a scene map, based on which we can also track and obtain the global location of objects of interest. Precisely, our SLAM subsystem consists of the following four parts: LiDAR-inertial odometry, visual-inertial odometry, GPS-inertial odometry, and global pose graph optimization. The target-tracking and positioning subsystem is developed based on YOLOv4. Benefiting from the use of the GPS sensor in the SLAM system, we can obtain the global positioning information of the target; therefore, it can be highly useful in military operations, rescue and disaster relief, and other scenarios.}
    }
  • Chao Chen, Hangyu Wu, Yukai Ma, Jiajun Lv, Laijian Li, and Yong Liu. LiDAR-Inertial SLAM with Efficiently Extracted Planes. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1497-1504, 2023.
    [BibTeX] [Abstract] [DOI] [PDF]
    This paper proposes a LiDAR-Inertial SLAM with efficiently extracted planes, which couples explicit planes in the odometry to improve accuracy and in the mapping for consistency. The proposed method consists of three parts: an efficient Point→Line→Plane extraction algorithm, a LiDAR-Inertial-Plane tightly coupled odometry, and a global plane-aided mapping. Specifically, we leverage the ring field of the LiDAR point cloud to accelerate the region-growing-based plane extraction algorithm. Then we tightly couple IMU pre-integration factors, LiDAR odometry factors, and explicit plane factors in the sliding window to obtain a more accurate initial pose for mapping. Finally, we maintain explicit planes in the global map, and enhance system consistency by optimizing the factor graph of optimized odometry factors and plane observation factors. Experimental results show that our plane extraction method is efficient, and the proposed plane-aided LiDAR-Inertial SLAM significantly improves the accuracy and consistency compared to the other state-of-the-art algorithms with only a small increase in time consumption.
    @inproceedings{chen2023lidar,
    title = {LiDAR-Inertial SLAM with Efficiently Extracted Planes},
    author = {Chao Chen and Hangyu Wu and Yukai Ma and Jiajun Lv and Laijian Li and Yong Liu},
    year = 2023,
    booktitle = {2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    pages = {1497-1504},
    doi = {10.1109/IROS55552.2023.10342325},
    abstract = {This paper proposes a LiDAR-Inertial SLAM with efficiently extracted planes, which couples explicit planes in the odometry to improve accuracy and in the mapping for consistency. The proposed method consists of three parts: an efficient Point→Line→Plane extraction algorithm, a LiDAR-Inertial-Plane tightly coupled odometry, and a global plane-aided mapping. Specifically, we leverage the ring field of the LiDAR point cloud to accelerate the region-growing-based plane extraction algorithm. Then we tightly couple IMU pre-integration factors, LiDAR odometry factors, and explicit plane factors in the sliding window to obtain a more accurate initial pose for mapping. Finally, we maintain explicit planes in the global map, and enhance system consistency by optimizing the factor graph of optimized odometry factors and plane observation factors. Experimental results show that our plane extraction method is efficient, and the proposed plane-aided LiDAR-Inertial SLAM significantly improves the accuracy and consistency compared to the other state-of-the-art algorithms with only a small increase in time consumption.}
    }
  • Jiajun Lv, Xingxing Zuo, Kewei Hu, Jinhong Xu, Guoquan Huang, and Yong Liu. Observability-Aware Intrinsic and Extrinsic Calibration of LiDAR-IMU System. IEEE Transactions on Robotics, 38(6):3734-3753, 2022.
    [BibTeX] [Abstract] [DOI] [PDF]
    Accurate and reliable sensor calibration is essential to fuse LiDAR and inertial measurements, which are usually available in robotic applications. In this article, we propose a novel LiDAR-IMU calibration method within the continuous-time batch-optimization framework, where the intrinsics of both sensors and the spatial-temporal extrinsics between sensors are calibrated without using calibration infrastructure, such as fiducial tags. Compared to discrete-time approaches, the continuous-time formulation has natural advantages for fusing high-rate measurements from LiDAR and IMU sensors. To improve efficiency and address degenerate motions, the following two observability-aware modules are leveraged: first, the information-theoretic data selection policy selects only the most informative segments for calibration during data collection, which significantly improves the calibration efficiency by processing only the selected informative segments. Second, the observability-aware state update mechanism in nonlinear least-squares optimization updates only the identifiable directions in the state space with truncated singular value decomposition, which enables accurate calibration results even under degenerate cases where informative data segments are not available. The proposed LiDAR-IMU calibration approach has been validated extensively in both simulated and real-world experiments with different robot platforms, demonstrating its high accuracy and repeatability in commonly-seen human-made environments.
    @article{lv2022oai,
    title = {Observability-Aware Intrinsic and Extrinsic Calibration of LiDAR-IMU System},
    author = {Jiajun Lv and Xingxing Zuo and Kewei Hu and Jinhong Xu and Guoquan Huang and Yong Liu},
    year = 2022,
    journal = {IEEE Transactions on Robotics},
    volume = {38},
    number = {6},
    pages = {3734-3753},
    doi = {10.1109/TRO.2022.3174476},
    abstract = {Accurate and reliable sensor calibration is essential to fuse LiDAR and inertial measurements, which are usually available in robotic applications. In this article, we propose a novel LiDAR-IMU calibration method within the continuous-time batch-optimization framework, where the intrinsics of both sensors and the spatial-temporal extrinsics between sensors are calibrated without using calibration infrastructure, such as fiducial tags. Compared to discrete-time approaches, the continuous-time formulation has natural advantages for fusing high-rate measurements from LiDAR and IMU sensors. To improve efficiency and address degenerate motions, the following two observability-aware modules are leveraged: first, the information-theoretic data selection policy selects only the most informative segments for calibration during data collection, which significantly improves the calibration efficiency by processing only the selected informative segments. Second, the observability-aware state update mechanism in nonlinear least-squares optimization updates only the identifiable directions in the state space with truncated singular value decomposition, which enables accurate calibration results even under degenerate cases where informative data segments are not available. The proposed LiDAR-IMU calibration approach has been validated extensively in both simulated and real-world experiments with different robot platforms, demonstrating its high accuracy and repeatability in commonly-seen human-made environments.}
    }
  • Xiaolei Lang, Jiajun Lv, Jianxin Huang, Yukai Ma, Yong Liu, and Xingxing Zuo. Ctrl-VIO: Continuous-Time Visual-Inertial Odometry for Rolling Shutter Cameras. IEEE Robotics and Automation Letters (RA-L), 7(4):11537-11544, 2022.
    [BibTeX] [Abstract] [DOI] [PDF]
    In this letter, we propose a probabilistic continuous-time visual-inertial odometry (VIO) for rolling shutter cameras. The continuous-time trajectory formulation naturally facilitates the fusion of asynchronized high-frequency IMU data and motion-distorted rolling shutter images. To prevent intractable computation load, the proposed VIO is sliding-window and keyframe-based. We propose to probabilistically marginalize the control points to keep a constant number of keyframes in the sliding window. Furthermore, the line exposure time difference (line delay) of the rolling shutter camera can be online calibrated in our continuous-time VIO. To extensively examine the performance of our continuous-time VIO, experiments are conducted on the publicly-available WHU-RSVI, TUM-RSVI, and SenseTime-RSVI rolling shutter datasets. The results demonstrate that the proposed continuous-time VIO significantly outperforms the existing state-of-the-art VIO methods.
    @article{lang2022ctv,
    title = {Ctrl-VIO: Continuous-Time Visual-Inertial Odometry for Rolling Shutter Cameras},
    author = {Xiaolei Lang and Jiajun Lv and Jianxin Huang and Yukai Ma and Yong Liu and Xingxing Zuo},
    year = 2022,
    journal = {IEEE Robotics and Automation Letters (RA-L)},
    volume = {7},
    number = {4},
    pages = {11537-11544},
    doi = {10.1109/LRA.2022.3202349},
    abstract = {In this letter, we propose a probabilistic continuous-time visual-inertial odometry (VIO) for rolling shutter cameras. The continuous-time trajectory formulation naturally facilitates the fusion of asynchronized high-frequency IMU data and motion-distorted rolling shutter images. To prevent intractable computation load, the proposed VIO is sliding-window and keyframe-based. We propose to probabilistically marginalize the control points to keep a constant number of keyframes in the sliding window. Furthermore, the line exposure time difference (line delay) of the rolling shutter camera can be online calibrated in our continuous-time VIO. To extensively examine the performance of our continuous-time VIO, experiments are conducted on the publicly-available WHU-RSVI, TUM-RSVI, and SenseTime-RSVI rolling shutter datasets. The results demonstrate that the proposed continuous-time VIO significantly outperforms the existing state-of-the-art VIO methods.}
    }
  • Jiajun Lv, Kewei Hu, Jinhong Xu, Yong Liu, and Xingxing Zuo. CLINS: Continuous-Time Trajectory Estimation for LiDAR Inertial System. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6657-6663, 2021.
    [BibTeX] [Abstract] [DOI] [PDF]
    In this paper, we propose a highly accurate continuous-time trajectory estimation framework dedicated to SLAM (Simultaneous Localization and Mapping) applications, which enables fusing high-frequency and asynchronous sensor data effectively. We apply the proposed framework in a 3D LiDAR-inertial system for evaluations. The proposed method adopts a non-rigid registration method for continuous-time trajectory estimation while simultaneously removing the motion distortion in LiDAR scans. Additionally, we propose a two-state continuous-time trajectory correction method to efficiently and effectively tackle the computationally-intractable global optimization problem when loop closure happens. We examine the accuracy of the proposed approach on several publicly available datasets and the data we collected. The experimental results indicate that the proposed method outperforms the discrete-time methods regarding accuracy, especially when aggressive motion occurs. Furthermore, we open source our code at https://github.com/APRIL-ZJU/clins to benefit the research community.
    @inproceedings{lv2021clins,
    title = {CLINS: Continuous-Time Trajectory Estimation for LiDAR Inertial System},
    author = {Jiajun Lv and Kewei Hu and Jinhong Xu and Yong Liu and Xingxing Zuo},
    year = 2021,
    booktitle = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    pages = {6657-6663},
    doi = {10.1109/IROS51168.2021.9636676},
    abstract = {In this paper, we propose a highly accurate continuous-time trajectory estimation framework dedicated to SLAM (Simultaneous Localization and Mapping) applications, which enables fusing high-frequency and asynchronous sensor data effectively. We apply the proposed framework in a 3D LiDAR-inertial system for evaluations. The proposed method adopts a non-rigid registration method for continuous-time trajectory estimation while simultaneously removing the motion distortion in LiDAR scans. Additionally, we propose a two-state continuous-time trajectory correction method to efficiently and effectively tackle the computationally-intractable global optimization problem when loop closure happens. We examine the accuracy of the proposed approach on several publicly available datasets and the data we collected. The experimental results indicate that the proposed method outperforms the discrete-time methods regarding accuracy, especially when aggressive motion occurs. Furthermore, we open source our code at https://github.com/APRIL-ZJU/clins to benefit the research community.}
    }
  • Jiajun Lv, Jinhong Xu, Kewei Hu, Yong Liu, and Xingxing Zuo. Targetless Calibration of LiDAR-IMU System Based on Continuous-time Batch Estimation. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9968–9975, 2020.
    [BibTeX] [Abstract] [DOI] [arXiv] [PDF]
    Sensor calibration is the fundamental block for a multi-sensor fusion system. This paper presents an accurate and repeatable LiDAR-IMU calibration method (termed LI-Calib), to calibrate the 6-DOF extrinsic transformation between the 3D LiDAR and the Inertial Measurement Unit (IMU). Regarding the high data capture rate for LiDAR and IMU sensors, LI-Calib adopts a continuous-time trajectory formulation based on B-Spline, which is more suitable for fusing high-rate or asynchronous measurements than discrete-time based approaches. Additionally, LI-Calib decomposes the space into cells and identifies the planar segments for data association, which renders the calibration problem well-constrained in usual scenarios without any artificial targets. We validate the proposed calibration approach on both simulated and real-world experiments. The results demonstrate the high accuracy and good repeatability of the proposed method in common human-made scenarios. To benefit the research community, we open-source our code at https://github.com/APRIL-ZJU/lidar_IMU_calib.
    @inproceedings{lv2020targetlessco,
    title = {Targetless Calibration of LiDAR-IMU System Based on Continuous-time Batch Estimation},
    author = {Jiajun Lv and Jinhong Xu and Kewei Hu and Yong Liu and Xingxing Zuo},
    year = 2020,
    booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    pages = {9968--9975},
    doi = {10.1109/IROS45743.2020.9341405},
    abstract = {Sensor calibration is the fundamental block for a multi-sensor fusion system. This paper presents an accurate and repeatable LiDAR-IMU calibration method (termed LI-Calib), to calibrate the 6-DOF extrinsic transformation between the 3D LiDAR and the Inertial Measurement Unit (IMU). Regarding the high data capture rate for LiDAR and IMU sensors, LI-Calib adopts a continuous-time trajectory formulation based on B-Spline, which is more suitable for fusing high-rate or asynchronous measurements than discrete-time based approaches. Additionally, LI-Calib decomposes the space into cells and identifies the planar segments for data association, which renders the calibration problem well-constrained in usual scenarios without any artificial targets. We validate the proposed calibration approach on both simulated and real-world experiments. The results demonstrate the high accuracy and good repeatability of the proposed method in common human-made scenarios. To benefit the research community, we open-source our code at https://github.com/APRIL-ZJU/lidar_IMU_calib.},
    arxiv = {https://arxiv.org/pdf/2007.14759.pdf}
    }
  • Xingxing Zuo, Yulin Yang, Patrick Geneva, Jiajun Lv, Yong Liu, Guoquan Huang, and Marc Pollefeys. LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5112–5119, 2020.
    [BibTeX] [Abstract] [DOI] [arXiv] [PDF]
    Multi-sensor fusion of multi-modal measurements from commodity inertial, visual and LiDAR sensors to provide robust and accurate 6DOF pose estimation holds great potential in robotics and beyond. In this paper, building upon our prior work (i.e., LIC-Fusion), we develop a sliding-window filter based LiDAR-Inertial-Camera odometry with online spatiotemporal calibration (i.e., LIC-Fusion 2.0), which introduces a novel sliding-window plane-feature tracking for efficiently processing 3D LiDAR point clouds. In particular, after motion compensation for LiDAR points by leveraging IMU data, low-curvature planar points are extracted and tracked across the sliding window. A novel outlier rejection criterion is proposed in the plane-feature tracking for high-quality data association. Only the tracked planar points belonging to the same plane will be used for plane initialization, which makes the plane extraction efficient and robust. Moreover, we perform the observability analysis for the IMU-LiDAR subsystem under consideration and report the degenerate cases for spatiotemporal calibration using plane features. While the estimation consistency and identified degenerate motions are validated in Monte-Carlo simulations, different real-world experiments are also conducted to show that the proposed LIC-Fusion 2.0 outperforms its predecessor and other state-of-the-art methods.
    @inproceedings{zuo2020licfusion2l,
    title = {LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking},
    author = {Xingxing Zuo and Yulin Yang and Patrick Geneva and Jiajun Lv and Yong Liu and Guoquan Huang and Marc Pollefeys},
    year = 2020,
    booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    pages = {5112--5119},
    doi = {10.1109/IROS45743.2020.9340704},
    abstract = {Multi-sensor fusion of multi-modal measurements from commodity inertial, visual and LiDAR sensors to provide robust and accurate 6DOF pose estimation holds great potential in robotics and beyond. In this paper, building upon our prior work (i.e., LIC-Fusion), we develop a sliding-window filter based LiDAR-Inertial-Camera odometry with online spatiotemporal calibration (i.e., LIC-Fusion 2.0), which introduces a novel sliding-window plane-feature tracking for efficiently processing 3D LiDAR point clouds. In particular, after motion compensation for LiDAR points by leveraging IMU data, low-curvature planar points are extracted and tracked across the sliding window. A novel outlier rejection criterion is proposed in the plane-feature tracking for high-quality data association. Only the tracked planar points belonging to the same plane will be used for plane initialization, which makes the plane extraction efficient and robust. Moreover, we perform the observability analysis for the IMU-LiDAR subsystem under consideration and report the degenerate cases for spatiotemporal calibration using plane features. While the estimation consistency and identified degenerate motions are validated in Monte-Carlo simulations, different real-world experiments are also conducted to show that the proposed LIC-Fusion 2.0 outperforms its predecessor and other state-of-the-art methods.},
    arxiv = {https://arxiv.org/pdf/2008.07196.pdf}
    }
  • Jinhong Xu, Jiajun Lv, Zaishen Pan, Yong Liu, and Yinan Chen. Real-Time LiDAR Data Association Aided by IMU in High Dynamic Environment. In 2018 IEEE International Conference on Real-time Computing and Robotics (RCAR), pages 202–205, 2018.
    [BibTeX] [Abstract] [DOI] [PDF]
    In recent years, with the breakthroughs in sensor technology, SLAM technology has been developing towards high-speed and highly dynamic applications, in which the rotating multi-line LiDAR sensor plays an important role. However, rotating multi-line LiDAR sensors need to restructure their data in highly dynamic environments. We propose a LiDAR data correction method based on IMU and hardware synchronization, along with a hardware synchronization unit. This method can still output correct point cloud information when the LiDAR sensor is moving violently.
    @inproceedings{xu2018realtimeld,
    title = {Real-Time LiDAR Data Association Aided by IMU in High Dynamic Environment},
    author = {Jinhong Xu and Jiajun Lv and Zaishen Pan and Yong Liu and Yinan Chen},
    year = 2018,
    booktitle = {2018 IEEE International Conference on Real-time Computing and Robotics (RCAR)},
    pages = {202--205},
    doi = {10.1109/RCAR.2018.8621627},
    abstract = {In recent years, with the breakthroughs in sensor technology, SLAM technology has been developing towards high-speed and highly dynamic applications, in which the rotating multi-line LiDAR sensor plays an important role. However, rotating multi-line LiDAR sensors need to restructure their data in highly dynamic environments. We propose a LiDAR data correction method based on IMU and hardware synchronization, along with a hardware synchronization unit. This method can still output correct point cloud information when the LiDAR sensor is moving violently.}
    }
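
Illustrative Code Sketches

The continuous-time systems above (LI-Calib, CLINS, clic, Coco-LIC) all represent the trajectory with B-splines so that a pose can be queried at the exact timestamp of any LiDAR, IMU, or camera measurement. The following is a minimal Python sketch of that core query for a uniform cubic B-spline over positions only; the papers use cumulative B-splines on SO(3) and R^3, and Coco-LIC additionally places control points non-uniformly. The control points, times, and knot spacing below are toy values, not anything from the papers.

    import numpy as np

    # Basis matrix of a uniform cubic B-spline segment.
    M = (1.0 / 6.0) * np.array([
        [1.0, 4.0, 1.0, 0.0],
        [-3.0, 0.0, 3.0, 0.0],
        [3.0, -6.0, 3.0, 0.0],
        [-1.0, 3.0, -3.0, 1.0],
    ])

    def query_position(t, t0, dt, ctrl_pts):
        """Evaluate the spline position at an arbitrary time t.

        ctrl_pts: (N, 3) control points spaced dt seconds apart, with the
        first segment starting at t0. This is what lets a continuous-time
        estimator pull a pose at any (asynchronous) sensor timestamp.
        """
        s = (t - t0) / dt                                   # normalized spline time
        i = min(max(int(np.floor(s)), 0), len(ctrl_pts) - 4)  # segment index
        u = s - i                                           # offset in segment, [0, 1)
        return (np.array([1.0, u, u**2, u**3]) @ M) @ ctrl_pts[i:i + 4]

    ctrl = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1], [4, 2, 1]], float)
    print(query_position(t=0.123, t0=0.0, dt=0.1, ctrl_pts=ctrl))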
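
The fixed-lag smoother of the TMech 2023 paper keeps its sliding window bounded by marginalizing out old control points and states while preserving their information as a prior for later optimizations. A standard way to realize this, sketched below with toy matrix sizes, is the Schur complement on the Gauss-Newton system; the paper's actual implementation additionally derives analytical Jacobians.

    import numpy as np

    def marginalize(H, b, keep, marg):
        """Return the prior (H', b') on the kept states after marginalizing
        the states indexed by marg out of the Gauss-Newton system (H, b)."""
        Hkk = H[np.ix_(keep, keep)]
        Hkm = H[np.ix_(keep, marg)]
        Hmm = H[np.ix_(marg, marg)]
        Hmm_inv = np.linalg.inv(Hmm)            # assumes Hmm is invertible
        H_prior = Hkk - Hkm @ Hmm_inv @ Hkm.T   # Schur complement
        b_prior = b[keep] - Hkm @ Hmm_inv @ b[marg]
        return H_prior, b_prior

    # Toy 4-state system: marginalize states 0 and 1, keep 2 and 3.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 4))
    H = A.T @ A
    b = A.T @ rng.standard_normal(10)
    Hp, bp = marginalize(H, b, keep=np.array([2, 3]), marg=np.array([0, 1]))
    print(Hp, bp)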
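
The observability-aware state update in the T-RO 2022 calibration paper updates only the identifiable directions of the state space via truncated singular value decomposition, so degenerate motions do not corrupt the calibration. A minimal sketch, with an illustrative (not the paper's) singular-value threshold:

    import numpy as np

    def tsvd_step(J, r, sv_threshold=1e-6):
        """Gauss-Newton step minimizing ||J dx + r||, restricted to the
        identifiable directions (singular values above sv_threshold)."""
        U, S, Vt = np.linalg.svd(J, full_matrices=False)
        S_inv = np.zeros_like(S)
        well_observed = S > sv_threshold
        S_inv[well_observed] = 1.0 / S[well_observed]
        return -(Vt.T * S_inv) @ (U.T @ r)      # V diag(S_inv) U^T (-r)

    # Toy rank-deficient Jacobian: the third state is unobservable.
    J = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [1.0, 1.0, 0.0]])
    r = np.array([0.1, -0.2, 0.05])
    print(tsvd_step(J, r))                      # zero update along the third axis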
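
Ctrl-VIO's rolling-shutter model boils down to giving each image row its own timestamp, offset by the (online-calibrated) line delay, and evaluating the continuous-time trajectory at that per-row time rather than at a single frame time. A minimal sketch of the timing model, with a made-up line delay:

    def row_timestamp(t_frame, row, line_delay):
        """Exposure time of an image row under the rolling-shutter model:
        row k is exposed k * line_delay seconds after the first row."""
        return t_frame + row * line_delay

    line_delay = 30e-6                  # 30 us per row (illustrative value)
    for row in (0, 240, 479):           # a 480-row image starting at t = 10 s
        print(row, row_timestamp(10.0, row, line_delay))

Each feature's reprojection factor would then query the trajectory at its own row time, e.g. with a spline query like query_position above.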
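
The RCAR 2018 paper corrects rotating LiDAR data under aggressive motion using IMU and hardware synchronization; the same de-skewing idea underlies the motion compensation in LI-Calib and CLINS. Below is a minimal sketch that maps each point into the scan-start frame using a pose interpolated at the point's own timestamp. The poses are toy values, translation is interpolated linearly, and rotation by SLERP; the papers interpolate on the estimated spline trajectory instead.

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    # Sensor poses at scan start/end, e.g. from IMU propagation (toy values).
    times = [0.0, 0.1]
    rots = Rotation.from_euler("z", [0.0, 10.0], degrees=True)
    trans = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
    slerp = Slerp(times, rots)

    def deskew(points, stamps):
        """Map points (N, 3) with per-point timestamps into the scan-start
        frame, undoing the distortion caused by motion during the scan."""
        out = np.empty_like(points)
        R0, p0 = rots[0], trans[0]
        for k, (pt, t) in enumerate(zip(points, stamps)):
            a = (t - times[0]) / (times[1] - times[0])
            R_t = slerp([t])[0]                         # rotation at point time
            p_t = (1.0 - a) * trans[0] + a * trans[1]   # linear translation
            world = R_t.apply(pt) + p_t                 # point in the fixed frame
            out[k] = R0.inv().apply(world - p0)         # into scan-start frame
        return out

    pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
    print(deskew(pts, stamps=[0.02, 0.08]))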
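
Finally, both LIC-Fusion 2.0 (low-curvature planar points) and the IROS 2023 planes paper hinge on deciding whether a cluster of LiDAR points is planar. A minimal PCA-based sketch of that flatness test; the papers' actual pipelines use ring-field region growing and sliding-window plane tracking, and the threshold here is illustrative.

    import numpy as np

    def fit_plane(points, flatness_thresh=0.05):
        """Fit a plane n . x + d = 0 to points (N, 3) by PCA and report
        whether the cluster is actually flat (smallest eigenvalue clearly
        dominated by the in-plane ones)."""
        centroid = points.mean(axis=0)
        cov = np.cov((points - centroid).T)
        evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
        normal = evecs[:, 0]                    # direction of least variance
        is_plane = evals[0] < flatness_thresh * evals[1]
        return normal, -normal @ centroid, is_plane

    pts = np.array([[0, 0, 0], [1, 0, 0.01], [0, 1, -0.01],
                    [1, 1, 0.0], [0.5, 0.5, 0.005]], float)
    print(fit_plane(pts))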