DiffusionRL: Efficient Training of Diffusion Policies for Robotic Grasping Using RL-Adapted Large-Scale Datasets
Makarova, Maria, Liu, Qian, Tsetserukou, Dzmitry
–arXiv.org Artificial Intelligence
Diffusion models have proven to be a powerful tool in generative artificial intelligence, with successful applications in image synthesis, video generation, and audio generation [1, 2, 3, 4, 5]. Using an iterative denoising approach, these models learn to invert a diffusion process, transforming random noise into sophisticated, high-quality samples. In recent years, Reinforcement Learning (RL) and Imitation Learning (IL) have become particularly popular in robot learning for perceiving the environment and deciding which actions to perform [6]. However, RL is highly sensitive to hyper-parameter tuning [7], and effective IL training requires a large amount of diverse, high-quality data [8]. Moreover, the multimodal nature of complex robot tasks hinders the construction of stable controllers. More recently, researchers have begun to bring diffusion-based policy learning into robotics as well. The concept of diffusion policy was first introduced by Chi et al. [9]. The diffusion process has been applied to robot action sequence generation because such models can capture the complex multimodal action distributions characteristic of many robotics tasks, as mentioned above.
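The iterative denoising described above can be sketched in a few lines. The following is a minimal, self-contained illustration of DDPM-style ancestral sampling on a one-dimensional "action": a linear noise schedule is defined, and a hypothetical toy noise predictor (standing in for a trained network, here an oracle that knows the target action) is used to walk a pure-noise sample back to a clean one. The schedule constants and the `toy_eps_model` helper are illustrative assumptions, not from the paper.

```python
import math
import random

# Linear noise schedule beta_t over T steps (common DDPM-style defaults).
T = 50
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alphas = [1.0 - b for b in betas]
alpha_bars = []  # cumulative products: abar_t = prod_{s<=t} alpha_s
prod = 1.0
for a in alphas:
    prod *= a
    alpha_bars.append(prod)

def toy_eps_model(x_t, t, x0_target=1.0):
    """Hypothetical stand-in for a trained noise-prediction network.
    Given a known target x0, the noise implied by the forward process
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps is solved exactly."""
    abar = alpha_bars[t]
    return (x_t - math.sqrt(abar) * x0_target) / math.sqrt(1.0 - abar)

def sample(seed=0):
    """Ancestral sampling: start from pure noise, iteratively denoise."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)  # x_T ~ N(0, 1)
    for t in reversed(range(T)):
        eps = toy_eps_model(x, t)
        a, abar = alphas[t], alpha_bars[t]
        # Deterministic part of the reverse step (DDPM posterior mean).
        x = (x - (1.0 - a) / math.sqrt(1.0 - abar) * eps) / math.sqrt(a)
        if t > 0:
            # Stochastic part: inject fresh noise except at the final step.
            x += math.sqrt(betas[t]) * rng.gauss(0.0, 1.0)
    return x

print(sample())  # recovers the target action used by the oracle predictor
```

In a diffusion policy, the scalar `x` becomes a robot action sequence and the oracle is replaced by a network conditioned on observations, but the reverse-time loop has the same shape.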
May-27-2025