Cabrera, Miguel Altamirano
CognitiveOS: Large Multimodal Model based System to Endow Any Type of Robot with Generative AI
Lykov, Artem, Konenkov, Mikhail, Gbagbe, Koffivi Fidèle, Litvinov, Mikhail, Peter, Robinroy, Davletshin, Denis, Fedoseev, Aleksey, Kobzarev, Oleg, Alabbas, Ali, Alyounes, Oussama, Cabrera, Miguel Altamirano, Tsetserukou, Dzmitry
In cognitive robotics, the scientific community has recognized the high generalization capability of large language models (LLMs) as a key to developing a robot that could perform new tasks based on generalized knowledge derived from familiar actions expressed in natural language. However, efforts to apply LLMs in robotics faced challenges, particularly in understanding and processing the external world. Previous attempts to convey the model's understanding of the world through text-only approaches [1], [20], [8] struggled with ambiguities and the assumption that objects remain static unless interacted with. The introduction of multimodal transformer-based models such as GPT-4 [16] and Gemini [18], capable of processing images, opened up new possibilities for robotics [5], allowing robots to comprehend their environment and enhancing their 'Embodied Experience' [15]. Cognitive robots have been developed on various platforms, ranging from mobile manipulators [5], [3] to bio-inspired humanoid robots [21] and quadrupedal robots [6]. In the latter, cognitive abilities were developed using an 'Inner Monologue' approach [10], with improvements inspired by the 'Autogen' concept [25]. The cognition of the robot is facilitated through internal communication between agent models, leveraging their strengths to provide different cognitive capabilities to the system.
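The following minimal Python sketch illustrates the kind of agent-to-agent internal dialogue described above, where a vision agent posts scene descriptions and a planning agent proposes the next action over a shared transcript. The class names, message structure, and canned responses are illustrative assumptions, not the actual CognitiveOS interfaces.

```python
# Minimal sketch of an "Inner Monologue"-style loop between two agents.
# All names and behaviors here are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class Message:
    sender: str
    content: str


@dataclass
class Blackboard:
    """Shared internal dialogue visible to every agent."""
    history: list = field(default_factory=list)

    def post(self, sender: str, content: str) -> None:
        self.history.append(Message(sender, content))

    def transcript(self) -> str:
        return "\n".join(f"{m.sender}: {m.content}" for m in self.history)


def vision_agent(image_caption: str) -> str:
    # A real system would query a multimodal model on the camera frame;
    # here we simply pass through a caption to keep the sketch self-contained.
    return f"Scene: {image_caption}"


def planner_agent(transcript: str, goal: str) -> str:
    # A real system would query an LLM with the dialogue so far;
    # here we return a canned plan step.
    return f"Next action toward '{goal}' given:\n{transcript}"


def cognitive_step(board: Blackboard, image_caption: str, goal: str) -> str:
    board.post("vision", vision_agent(image_caption))
    action = planner_agent(board.transcript(), goal)
    board.post("planner", action)
    return action


board = Blackboard()
print(cognitive_step(board, "a red cube on the table", "pick up the red cube"))
```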
HaptiCharger: Robotic Charging of Electric Vehicles Based on Human Haptic Patterns
Alyounes, Oussama, Cabrera, Miguel Altamirano, Tsetserukou, Dzmitry
The growing demand for electric vehicles requires the development of automated car charging methods. At the moment, the process of charging an electric car is completely manual and requires physical effort, which makes it unsuitable for people with disabilities. Typically, research on automating the charging task focuses on detecting the position and orientation of the socket, which has achieved a relatively high accuracy of about 5 mm and 10 degrees. However, this accuracy is not enough to complete the charging process. In this work, we focus on designing a novel methodology for robust robotic plug-in and plug-out based on human haptics to overcome the error in the orientation of the socket. Participants were invited to perform the charging task, and their cognitive capabilities were recognized by measuring the applied forces along with the movements of the charger. Finally, an algorithm based on the humans' best strategies was developed to be applied to a robotic arm.
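As a rough illustration of how recorded human trials might be compared to extract a "best" strategy, the sketch below scores plug-in attempts by peak applied force and completion time. The data structure, metrics, and weighting are assumptions made for illustration, not the algorithm developed in the paper.

```python
# Illustrative comparison of human plug-in trials (assumed structure).

from dataclasses import dataclass
from typing import List


@dataclass
class Trial:
    participant: str
    forces_n: List[float]   # applied insertion force samples [N]
    duration_s: float       # time to complete the plug-in [s]
    success: bool


def score(trial: Trial) -> float:
    # Lower is better: penalize peak force and completion time
    # (the weighting is an arbitrary choice for this sketch).
    return max(trial.forces_n) + 10.0 * trial.duration_s


def best_strategy(trials: List[Trial]) -> Trial:
    successful = [t for t in trials if t.success]
    return min(successful, key=score)


trials = [
    Trial("P1", [12.0, 35.5, 28.0], 4.2, True),
    Trial("P2", [10.5, 22.0, 18.5], 5.1, True),
    Trial("P3", [40.0, 55.0, 60.0], 3.0, False),
]
print(best_strategy(trials).participant)  # -> P2
```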
TeslaCharge: Smart Robotic Charger Driven by Impedance Control and Human Haptic Patterns
Alyounes, Oussama, Cabrera, Miguel Altamirano, Tsetserukou, Dzmitry
The growing demand for electric vehicles requires the development of automated car charging methods. At the moment, the process of charging an electric car is completely manual and requires physical effort, which makes it unsuitable for people with disabilities. Typically, research focuses on detecting the position and orientation of the socket, which has achieved a relatively high accuracy of $\pm 5\,\mathrm{mm}$ and $\pm 10^\circ$. However, this accuracy is not enough to complete the charging process. In this work, we focus on designing a novel methodology for robust robotic plug-in and plug-out based on human haptics, to overcome the error in the position and orientation of the socket. Participants were invited to perform the charging task, and their cognitive capabilities were recognized by measuring the applied forces along with the movement of the charger. Three controllers were designed based on impedance control to mimic the human patterns of charging an electric car. The recorded data from humans were used to calibrate the parameters of the impedance controllers: inertia $M_d$, damping $D_d$, and stiffness $K_d$. A robotic validation was performed, where the designed controllers were applied to a UR10 robot. Using the proposed controllers and the human kinesthetic data, it was possible to successfully automate the operation of charging an electric car.
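For reference, impedance control of this kind is commonly expressed as $M_d \ddot{e} + D_d \dot{e} + K_d e = F_{ext}$, where $e$ is the deviation of the commanded pose from the reference trajectory and $F_{ext}$ is the measured contact force. The sketch below integrates this relation along a single axis to produce a compliant position command; the numerical parameters and update loop are placeholders, not the human-calibrated values or the controllers used on the UR10.

```python
# One-axis impedance controller sketch:
# M_d * x_ddot + D_d * x_dot + K_d * (x - x_ref) = F_ext.
# Parameter values are placeholders, not the calibrated M_d, D_d, K_d.

class ImpedanceController1D:
    def __init__(self, m_d: float, d_d: float, k_d: float, dt: float):
        self.m_d, self.d_d, self.k_d, self.dt = m_d, d_d, k_d, dt
        self.x = 0.0   # commanded position offset from the reference [m]
        self.v = 0.0   # its velocity [m/s]

    def step(self, f_ext: float, x_ref: float) -> float:
        # Solve the impedance equation for acceleration, then integrate.
        a = (f_ext - self.d_d * self.v - self.k_d * (self.x - x_ref)) / self.m_d
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x


ctrl = ImpedanceController1D(m_d=2.0, d_d=50.0, k_d=400.0, dt=0.002)
for f in [0.0, 5.0, 5.0, 2.0, 0.0]:   # a short burst of contact force [N]
    print(round(ctrl.step(f_ext=f, x_ref=0.0), 6))
```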
ArUcoGlide: a Novel Wearable Robot for Position Tracking and Haptic Feedback to Increase Safety During Human-Robot Interaction
Alabbas, Ali, Cabrera, Miguel Altamirano, Alyounes, Oussama, Tsetserukou, Dzmitry
The current capabilities of robotic systems make human collaboration necessary to accomplish complex tasks effectively. In this work, we are introducing a framework to ensure safety in a human-robot collaborative environment. The system is composed of a wearable 2-DOF robot, a low-cost and easy-to-install tracking system, and a collision avoidance algorithm based on the Artificial Potential Field (APF). The wearable robot is designed to hold a fiducial marker and maintain its visibility to the tracking system, which, in turn, localizes the user's hand with good accuracy and low latency and provides haptic feedback to the user. The system is designed to enhance the performance of collaborative tasks while ensuring user safety. Three experiments were carried out to evaluate the performance of the proposed system. The first one evaluated the accuracy of the tracking system. The second experiment analyzed human-robot behavior during an imminent collision. The third experiment evaluated the system in a collaborative activity in a shared working environment. The results show that the implementation of the introduced system reduces the operation time by 16% and increases the average distance between the user's hand and the robot by 5 cm.
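A minimal sketch of the repulsive term typically used in an APF-based avoidance scheme is shown below: the robot's commanded velocity is pushed away from the tracked hand once it enters an influence radius. The gains, radius, and function signature are illustrative assumptions, not the values or implementation used in the ArUcoGlide experiments.

```python
import numpy as np

def repulsive_velocity(robot_pos, hand_pos, influence_radius=0.30, gain=0.5):
    """Velocity pushing the robot away from the hand (classic APF repulsion)."""
    diff = np.asarray(robot_pos, float) - np.asarray(hand_pos, float)
    dist = np.linalg.norm(diff)
    if dist >= influence_radius or dist == 0.0:
        return np.zeros(3)          # outside the influence region: no repulsion
    direction = diff / dist          # unit vector away from the hand
    magnitude = gain * (1.0 / dist - 1.0 / influence_radius) / dist**2
    return magnitude * direction


robot = [0.40, 0.00, 0.30]   # robot end-effector position [m]
hand = [0.50, 0.05, 0.30]    # tracked hand position [m]
print(repulsive_velocity(robot, hand))
```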
MorphoArms: Morphogenetic Teleoperation of Multimanual Robot
Martynov, Mikhail, Darush, Zhanibek, Cabrera, Miguel Altamirano, Karaf, Sausar, Tsetserukou, Dzmitry
Nowadays, there are few unmanned aerial vehicles (UAVs) capable of flying, walking, and grasping. A drone with all these functionalities can significantly improve its performance in complex tasks such as monitoring, exploring different types of terrain, and rescue operations. This paper presents MorphoArms, a novel system that consists of a morphogenetic chassis and a hand gesture recognition teleoperation system. The mechanics, electronics, control architecture, and walking behavior of the morphogenetic chassis are described. This robot is capable of walking and grasping objects using four robotic limbs. Robotic limbs with four degrees of freedom are used as pedipulators when walking and as manipulators when performing actions in the environment. The robot control system is implemented using teleoperation, where commands are given by hand gestures. A motion capture system is used to track the user's hands and to recognize their gestures. The method of controlling the robot was experimentally tested in a study involving 10 users. The evaluation included three questionnaires (NASA TLX, SUS, and UEQ). The results showed that the proposed system was more user-friendly than 56% of the systems, and it was rated above average in terms of attractiveness, stimulation, and novelty.
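A minimal sketch of mapping recognized gesture labels to chassis commands is given below. The gesture names and command strings are hypothetical placeholders rather than the actual MorphoArms command set; in the real system the labels would come from gesture recognition on motion-capture data of the user's hands.

```python
# Hypothetical gesture-to-command mapping for gesture-based teleoperation.

from typing import Optional

GESTURE_TO_COMMAND = {
    "open_palm": "stop",
    "fist": "grasp",
    "point_forward": "walk_forward",
    "thumbs_up": "take_off",
}


def gesture_to_command(gesture: str) -> Optional[str]:
    """Translate a recognized gesture label into a robot command, if known."""
    return GESTURE_TO_COMMAND.get(gesture)


for g in ["point_forward", "fist", "wave"]:
    print(g, "->", gesture_to_command(g))
```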