A Segmented Robot Grasping Perception Neural Network for Edge AI

Bröcheler, Casper, Vroom, Thomas, Timmermans, Derrick, Akker, Alan van den, Tang, Guangzhi, Kouzinopoulos, Charalampos S., Möckel, Rico

arXiv.org Artificial Intelligence 

Robotic grasping, the ability of robots to reliably secure and manipulate objects of varying shapes, sizes, and orientations, is a complex task that requires precise perception and control. Deep neural networks have shown remarkable success in grasp synthesis by learning rich and abstract representations of objects. When deployed at the edge, these models can enable low-latency, low-power inference, making real-time grasping feasible in resource-constrained environments. This work implements Heatmap-Guided Grasp Detection, an end-to-end framework for the detection of 6-DoF grasp poses, on the GAP9 RISC-V System-on-Chip. The model is optimised using hardware-aware techniques, including input dimensionality reduction, model partitioning, and quantisation.

Object grasping synthesis is a fundamental challenge in robotics, underpinning applications such as automated warehouse operations, patient assistance in healthcare, and object sorting on assembly lines [1]. While humans excel at grasping objects of various shapes and sizes with precision, replicating this ability in robotics remains challenging.
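The quantisation step mentioned above typically maps floating-point weights to low-precision integers so that inference fits the memory and compute budget of a microcontroller-class chip such as the GAP9. The paper's actual toolchain is not detailed here; as a minimal, generic sketch, symmetric per-tensor int8 post-training quantisation can be illustrated as follows (the function names and example values are for illustration only):

```python
def quantise_int8(weights):
    """Symmetric per-tensor int8 quantisation: scale floats onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantise(q, scale):
    """Recover approximate float values from the int8 representation."""
    return [v * scale for v in q]

# Quantise a toy weight tensor and check the reconstruction error,
# which is bounded by half the quantisation step (scale / 2).
w = [0.5, -1.2, 0.03, 0.91]
q, scale = quantise_int8(w)
recon = dequantise(q, scale)
max_err = max(abs(a - b) for a, b in zip(w, recon))
```

The per-tensor scale keeps storage at one byte per weight plus a single float, which is why int8 quantisation is a standard choice for edge deployment.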