Lessons from Deploying CropFollow++: Under-Canopy Agricultural Navigation with Keypoints

Sivakumar, Arun N., Gasparino, Mateus V., McGuire, Michael, Higuti, Vitor A. H., Akcal, M. Ugur, Chowdhary, Girish

arXiv.org Artificial Intelligence 

We present a vision-based navigation system for under-canopy agricultural robots using semantic keypoints. Autonomous under-canopy navigation is challenging due to the tight spacing between crop rows ($\sim 0.75$ m), degradation of RTK-GPS accuracy from multipath error, and noise in LiDAR measurements caused by excessive clutter. Our system, CropFollow++, introduces a modular and interpretable perception architecture with a learned semantic keypoint representation. We deployed CropFollow++ on multiple under-canopy cover-crop planting robots at large scale (25 km in total) across varied field conditions, and we discuss the key lessons learned from these deployments.
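The abstract does not specify how the learned keypoints are mapped to control, so purely as an illustration, the sketch below shows one plausible way semantic keypoints (e.g., a vanishing point of the crop rows and points on the left and right rows) could be converted into a steering command. The function name, gains, and proportional-controller structure are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def keypoints_to_steering(vp, left_pt, right_pt, img_w,
                          k_heading=1.0, k_offset=0.5):
    """Hypothetical mapping from row keypoints to a steering command.

    vp       -- (x, y) vanishing point of the crop rows in the image
    left_pt  -- (x, y) point on the left crop row near the image bottom
    right_pt -- (x, y) point on the right crop row near the image bottom
    Returns an angular-rate command (positive = turn left).
    """
    cx = img_w / 2.0
    # Heading error: horizontal offset of the vanishing point from the
    # image center, normalized to [-1, 1].
    heading_err = (cx - vp[0]) / cx
    # Lateral offset: deviation of the row midpoint from the image
    # center, normalized by the row spacing in pixels.
    row_mid = (left_pt[0] + right_pt[0]) / 2.0
    row_width = max(right_pt[0] - left_pt[0], 1.0)
    offset_err = (cx - row_mid) / row_width
    # Simple proportional control on both error terms (assumed gains).
    return k_heading * heading_err + k_offset * offset_err

# Example: rows vanish left of image center, robot drifted right of the
# row centerline, so the command turns the robot left.
omega = keypoints_to_steering(vp=(300, 200), left_pt=(150, 470),
                              right_pt=(510, 470), img_w=640)
print(f"angular rate command: {omega:+.3f}")
```

One appeal of such a keypoint representation, as the abstract suggests, is interpretability: each intermediate quantity (vanishing point, row edges, lateral offset) can be inspected and sanity-checked in the field, unlike an end-to-end policy.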
