Accurate fruit localisation using high resolution LiDAR-camera fusion and instance segmentation

Research output: Contribution to journal › Article › Research › peer-review

18 Citations (Scopus)


Accurate depth sensing is crucial to achieving a high success rate in robotic harvesting in natural orchard environments. Solid-state LiDAR, a recently introduced class of LiDAR sensor, can perceive high-resolution geometric information of a scene, which can be exploited to obtain accurate depth information. Meanwhile, fusing the sensory data from the LiDAR and the camera can significantly enhance the sensing ability of harvesting robots. This work first introduces a LiDAR-camera fusion-based visual sensing and perception strategy to perform accurate fruit localisation in apple orchards. Two state-of-the-art LiDAR-camera extrinsic calibration methods are evaluated to obtain an accurate extrinsic matrix between the LiDAR and the camera. The point clouds and colour images are then fused to perform fruit localisation using a one-stage instance segmentation network. Comprehensive experiments show that LiDAR-camera fusion achieves better visual sensing performance in natural environments and largely improves the accuracy and robustness of fruit localisation. Specifically, the standard deviations of fruit localisation using the LiDAR-camera system at ranges of 0.5, 1.2, and 1.8 m are 0.253, 0.230, and 0.285 cm, respectively, in afternoon conditions with intense sunlight; this measurement error is much smaller than that of the Realsense D455. Lastly, visualised point clouds of the apple trees are provided to demonstrate the highly accurate sensing results of the proposed LiDAR-camera fusion method.
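The fusion step described above amounts to transforming LiDAR points into the camera frame with the calibrated extrinsic matrix and projecting them through the camera intrinsics, so that each image pixel (e.g. inside a segmented fruit mask) can be associated with a metric depth. The following is a minimal sketch of that projection under a standard pinhole model; function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3-D LiDAR points into the camera image plane.

    points_lidar : (N, 3) array of points in the LiDAR frame (metres)
    T_cam_lidar  : (4, 4) extrinsic matrix mapping LiDAR frame -> camera frame
    K            : (3, 3) camera intrinsic matrix
    Returns (N, 2) pixel coordinates and (N,) depths in the camera frame.
    """
    # Homogenise the points and transform them into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    depths = pts_cam[:, 2]
    # Perspective projection: divide by depth after applying the intrinsics.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, depths

# Illustrative usage with identity extrinsics and a simple intrinsic matrix:
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv, depths = project_lidar_to_image(np.array([[0.0, 0.0, 2.0]]), np.eye(4), K)
# A point on the optical axis projects to the principal point (320, 240).
```

Given such a projection, the depth of a detected fruit could be estimated by aggregating (e.g. averaging) the depths of the LiDAR points falling inside its instance mask.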

Original language: English
Article number: 107450
Number of pages: 11
Journal: Computers and Electronics in Agriculture
Publication status: Published - Dec 2022


  • Deep learning
  • Harvesting robot
  • Instance segmentation
  • LiDAR fusion
  • Solid-state LiDAR
