TY - JOUR
T1 - Accurate fruit localisation using high resolution LiDAR-camera fusion and instance segmentation
AU - Kang, Hanwen
AU - Wang, Xing
AU - Chen, Chao
N1 - Funding Information:
We gratefully acknowledge the financial support from the Australian Research Council (ARC ITRH IH150100006).
Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/12
Y1 - 2022/12
N2 - Accurate depth sensing is crucial to securing a high success rate of robotic harvesting in natural orchard environments. Solid-state LiDAR, a recently introduced LiDAR technology, can perceive high-resolution geometric information of a scene, which can be utilised to obtain accurate depth information. Meanwhile, fusing the sensory data from the LiDAR and the camera can significantly enhance the sensing ability of harvesting robots. This work introduces a LiDAR-camera fusion-based visual sensing and perception strategy to perform accurate fruit localisation in apple orchards. First, two state-of-the-art LiDAR-camera extrinsic calibration methods are evaluated to obtain an accurate extrinsic matrix between the LiDAR and the camera. The point clouds and colour images are then fused to perform fruit localisation using a one-stage instance segmentation network. Comprehensive experiments show that the LiDAR-camera system achieves better visual sensing performance in natural environments, and that introducing LiDAR-camera fusion largely improves the accuracy and robustness of fruit localisation. Specifically, the standard deviations of fruit localisation using the LiDAR-camera system at 0.5, 1.2, and 1.8 m are 0.253, 0.230, and 0.285 cm, respectively, in the afternoon under intense sunlight. This measurement error is much smaller than that of the RealSense D455. Lastly, visualised point clouds of the apple trees are provided to demonstrate the highly accurate sensing results of the proposed LiDAR-camera fusion method.
AB - Accurate depth sensing is crucial to securing a high success rate of robotic harvesting in natural orchard environments. Solid-state LiDAR, a recently introduced LiDAR technology, can perceive high-resolution geometric information of a scene, which can be utilised to obtain accurate depth information. Meanwhile, fusing the sensory data from the LiDAR and the camera can significantly enhance the sensing ability of harvesting robots. This work introduces a LiDAR-camera fusion-based visual sensing and perception strategy to perform accurate fruit localisation in apple orchards. First, two state-of-the-art LiDAR-camera extrinsic calibration methods are evaluated to obtain an accurate extrinsic matrix between the LiDAR and the camera. The point clouds and colour images are then fused to perform fruit localisation using a one-stage instance segmentation network. Comprehensive experiments show that the LiDAR-camera system achieves better visual sensing performance in natural environments, and that introducing LiDAR-camera fusion largely improves the accuracy and robustness of fruit localisation. Specifically, the standard deviations of fruit localisation using the LiDAR-camera system at 0.5, 1.2, and 1.8 m are 0.253, 0.230, and 0.285 cm, respectively, in the afternoon under intense sunlight. This measurement error is much smaller than that of the RealSense D455. Lastly, visualised point clouds of the apple trees are provided to demonstrate the highly accurate sensing results of the proposed LiDAR-camera fusion method.
KW - Deep learning
KW - Harvesting robot
KW - Instance segmentation
KW - LiDAR fusion
KW - Solid-state LiDAR
UR - http://www.scopus.com/inward/record.url?scp=85140808033&partnerID=8YFLogxK
U2 - 10.1016/j.compag.2022.107450
DO - 10.1016/j.compag.2022.107450
M3 - Article
AN - SCOPUS:85140808033
VL - 203
JO - Computers and Electronics in Agriculture
JF - Computers and Electronics in Agriculture
SN - 0168-1699
M1 - 107450
ER -