Non-Prehensile Aerial Manipulation using Model-Based Deep Reinforcement Learning
Dimmig, Cora A., Kobilarov, Marin
arXiv.org Artificial Intelligence
With the continued adoption of Uncrewed Aerial Vehicles (UAVs) across a wide variety of application spaces, robust aerial manipulation remains a key research challenge. Aerial manipulation tasks require interacting with objects in the environment, often without knowing their dynamical properties, such as mass and friction, a priori. Additionally, interacting with these objects can significantly affect the control and stability of the vehicle. We investigate an approach for robust control and non-prehensile aerial manipulation in unknown environments. In particular, we use model-based Deep Reinforcement Learning (DRL) to learn a world model of the environment while simultaneously learning a policy for interacting with it. We evaluate our approach on a series of push tasks, moving an object between goal locations, and demonstrate repeatable behaviors across a range of friction values.
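The core idea in the abstract, learning a model of unknown dynamics from interaction data and then using that learned model to choose actions, can be caricatured in a few lines. The sketch below is purely illustrative and is not the paper's method: it uses a hypothetical 1D push task with hidden linear dynamics, fits a linear "world model" by least squares instead of a learned neural world model, and acts by greedily inverting the fitted model toward a goal rather than learning a DRL policy.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): 1D "push" task where the
# true dynamics x' = x + 0.8 * a are hidden from the agent.
def env_step(x, a):
    return x + 0.8 * a

rng = np.random.default_rng(0)

# 1) Collect experience by applying random pushes.
states, actions, next_states = [0.0], [], []
for _ in range(100):
    a = rng.uniform(-1.0, 1.0)
    x_new = env_step(states[-1], a)
    actions.append(a)
    next_states.append(x_new)
    states.append(x_new)

# 2) Fit a linear "world model" x' ~ w0*x + w1*a by least squares.
Phi = np.column_stack([states[:-1], actions])
w, *_ = np.linalg.lstsq(Phi, np.array(next_states), rcond=None)
# With noiseless linear data, the fit recovers the true dynamics (1.0, 0.8).

# 3) Act using the learned model: greedily invert it toward the goal,
# clipping to the actuator limits.
goal = 1.0
x = 0.0
for _ in range(5):
    a = np.clip((goal - w[0] * x) / w[1], -1.0, 1.0)
    x = env_step(x, a)

print(w, x)  # fitted weights, and the final position near the goal
```

The paper's approach replaces each piece with a learned, nonlinear counterpart: a neural world model trained on real interaction data and a policy optimized inside that model, which is what lets it cope with unknown mass and friction.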
Jun-30-2024