Gear News of the Week: Samsung's Trifold Promise, Ikea's Sonos Split, and Hugging Face's New Robot
Samsung's Galaxy Unpacked event in Brooklyn earlier this week debuted seven new devices, from the Galaxy Z Fold7 to the Galaxy Watch8 series. But there weren't any surprises at the end, despite rumors that Samsung would unveil a trifold phone. Sensing disappointment, the company later confirmed that the phone is expected to land in 2025. "I expect we will be able to launch the trifold phone within this year," TM Roh, head of Samsung's mobile business, told The Korea Times. The trifold phone, rumored to be called the Galaxy G Fold, would have a normal screen on the front and two hinges that let you open it up into a tablet-sized screen.
- North America > United States (0.05)
- Asia > China (0.05)
- Information Technology > Artificial Intelligence > Robots (0.74)
- Information Technology > Communications > Mobile (0.50)
Sthymuli: a Static Educational Robot. Leveraging the Thymio II Platform
Bernal-Lecina, Manuel, Hernández, Alejandrina, Pannatier, Adrien, Pereyre, Léa, Mondada, Francesco
The use of robots in education represents a challenge for teachers and often carries a fixed vision of what robots can do for students. This paper presents the development of Sthymuli, a static educational robot designed to explore new classroom interactions between robots, students and teachers. We propose the use of the Thymio II educational platform as a base, ensuring a robust benchmark for a fair comparison between the commonly available wheeled robots and our exploratory approach with Sthymuli. This paper outlines the constraints and requirements for developing such a robot, the current state of development and future work.
Fed-EC: Bandwidth-Efficient Clustering-Based Federated Learning For Autonomous Visual Robot Navigation
Gummadi, Shreya, Gasparino, Mateus V., Vasisht, Deepak, Chowdhary, Girish
Centralized learning requires data to be aggregated at a central server, which poses significant challenges in terms of data privacy and bandwidth consumption. Federated learning presents a compelling alternative; however, vanilla federated learning methods deployed in robotics aim to learn a single global model that works ideally for all robots, and in practice one model may not be well suited for robots deployed in various environments. This paper proposes Federated-EmbedCluster (Fed-EC), a clustering-based federated learning framework deployed with vision-based autonomous robot navigation in diverse outdoor environments. The framework addresses a key federated learning challenge: the performance of a single global model deteriorates in the presence of non-IID data across real-world robots. Extensive real-world experiments validate that Fed-EC reduces communication size by 23x for each robot while matching the performance of centralized learning for goal-oriented navigation and outperforming local learning. Fed-EC can also transfer previously learnt models to new robots that join the cluster.
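The core mechanism the abstract describes, grouping robots with similar (non-IID) data and averaging models only within each group, can be sketched as follows. This is a minimal illustration under assumed names (`cluster_federated_average`, a per-robot environment embedding), not the authors' implementation:

```python
import numpy as np

def cluster_federated_average(robot_weights, robot_embeddings, n_clusters=2, n_iters=10):
    """Cluster robots by an environment embedding (farthest-point-initialized
    k-means), then run plain FedAvg within each cluster instead of globally.
    All names here are illustrative, not the Fed-EC authors' API."""
    embeds = np.asarray(robot_embeddings, dtype=float)
    # deterministic farthest-point initialization of cluster centers
    centers = [embeds[0]]
    for _ in range(n_clusters - 1):
        d = np.min([np.linalg.norm(embeds - c, axis=1) for c in centers], axis=0)
        centers.append(embeds[d.argmax()])
    centers = np.stack(centers)
    for _ in range(n_iters):
        # assign each robot to its nearest center, then recenter
        dists = np.linalg.norm(embeds[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = embeds[labels == k].mean(axis=0)
    # one aggregated model per cluster: each robot only ever exchanges
    # weights with its own cluster, which is the source of the bandwidth saving
    cluster_models = {}
    for k in range(n_clusters):
        members = [w for w, lbl in zip(robot_weights, labels) if lbl == k]
        if members:
            cluster_models[k] = np.mean(members, axis=0)
    return labels, cluster_models
```

In this sketch each cluster converges to its own model, which is how a clustered scheme can outperform a single global model when data is non-IID across robots.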
Dogs of war: Britain's new robots aiding Ukraine, terrorizing Russia as drones continue dominating battlefield
The United Kingdom has provided Ukraine with robotic "war dogs" that have started assisting troops on the battlefield and terrifying Russian troops who see them, according to reports. "The robot dog demonstrated its capabilities in delivering a range of critical equipment, showcasing its potential as an invaluable asset to military units," manufacturer Brit Alliance said of the units. "The robot dog exhibited exceptional mobility and agility, crucial for traversing complex and hostile environments," the company added. "Whether navigating through debris, climbing over obstacles, or moving stealthily across open ground, the robot dog has proven itself capable of maintaining a high level of operational effectiveness." The British second-generation Brit Alliance Dog (BAD2) has taken to the battlefield, utilizing remote-sensing technology and a thermal-infrared camera to navigate the tricky landscape and perform a wide range of wartime tasks, such as delivering equipment or reconnaissance.
- Asia > Russia (0.75)
- Europe > United Kingdom (0.71)
- Europe > Russia (0.44)
- (3 more...)
Real life Skynet? Controversial robot powered by OpenAI's ChatGPT can now have real-time conversations
A new automated humanoid robot powered by OpenAI's ChatGPT resembles something akin to the AI Skynet from the sci-fi film Terminator. While the new robot is not a killing machine, Figure 01 can perform basic autonomous tasks and carry out real-time conversations with humans, with the help of ChatGPT. The company, Figure AI, shared a demonstration video showing how ChatGPT helps the two-legged machine perceive objects, plan future actions and even reflect on its memory. Figure's cameras snap its surroundings and send the images to a large vision-language model trained by OpenAI, which then translates them back to the robot. The clip showed a man asking the humanoid to put away dirty laundry, wash dishes and hand him something to eat, and the robot performed the tasks; but unlike ChatGPT, Figure is more hesitant when it comes to answering questions. Figure AI hopes that its first AI humanoid robot will prove capable of jobs too dangerous for human laborers and might alleviate worker shortages. 'Two weeks ago, we announced Figure and OpenAI are joining forces to push the boundaries of robot learning,' Figure founder Brett Adcock wrote on X. 'Together we are developing next-generation AI models for our humanoid robots,' he added.
- Automobiles & Trucks (0.49)
- Leisure & Entertainment (0.35)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Generalization of Heterogeneous Multi-Robot Policies via Awareness and Communication of Capabilities
Howell, Pierce, Rudolph, Max, Torbati, Reza, Fu, Kevin, Ravichandar, Harish
Recent advances in multi-agent reinforcement learning (MARL) are enabling impressive coordination in heterogeneous multi-robot teams. However, existing approaches often overlook the challenge of generalizing learned policies to teams of new compositions, sizes, and robots. While such generalization might not be important in teams of virtual agents that can retrain policies on-demand, it is pivotal in multi-robot systems that are deployed in the real-world and must readily adapt to inevitable changes. As such, multi-robot policies must remain robust to team changes -- an ability we call adaptive teaming. In this work, we investigate if awareness and communication of robot capabilities can provide such generalization by conducting detailed experiments involving an established multi-robot test bed. We demonstrate that shared decentralized policies, that enable robots to be both aware of and communicate their capabilities, can achieve adaptive teaming by implicitly capturing the fundamental relationship between collective capabilities and effective coordination. Videos of trained policies can be viewed at: https://sites.google.com/view/cap-comm
- South America > Brazil > São Paulo (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.90)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
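The paper's central idea, letting a shared policy condition on each robot's capabilities and on capabilities communicated by teammates, can be sketched as a simple input-construction step. The function below is a hypothetical illustration of that idea, not the authors' actual architecture:

```python
import numpy as np

def capability_aware_obs(local_obs, own_capability, neighbor_capabilities):
    """Concatenate a robot's local observation with its own capability
    vector (e.g. max speed, payload) and an aggregate of the capability
    vectors its teammates communicate. A shared decentralized policy fed
    this input can, in principle, adapt when team composition changes."""
    own = np.asarray(own_capability, dtype=float)
    # permutation-invariant aggregation, so the input size does not
    # depend on how many teammates are currently in the team
    team = np.mean(np.asarray(neighbor_capabilities, dtype=float), axis=0)
    return np.concatenate([np.asarray(local_obs, dtype=float), own, team])
```

Because the aggregation is a mean, the same policy input shape works for teams of any size, which is one plausible route to the "adaptive teaming" the abstract describes.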
'Brainless' robot can navigate complex obstacles
Researchers who created a soft robot that could navigate simple mazes without human or computer direction have now built on that work, creating a "brainless" soft robot that can navigate more complex and dynamic environments. "In our earlier work, we demonstrated that our soft robot was able to twist and turn its way through a very simple obstacle course," says Jie Yin, co-corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at North Carolina State University. "However, it was unable to turn unless it encountered an obstacle. In practical terms this meant that the robot could sometimes get stuck, bouncing back and forth between parallel obstacles. "We've developed a new soft robot that is capable of turning on its own, allowing it to make its way through twisty mazes, even negotiating its way around moving obstacles."
Watch as a ROBOT tennis player zips around the court ahead of Wimbledon
The moment that tennis fans have been waiting for is finally almost here, with the Wimbledon Championships set to kick off next week. This year's tournament will see the likes of Petra Kvitova, Novak Djokovic and Carlos Alcaraz take to the grass. But in the near future, they could face stiff competition from an unlikely new contender: a robot. Scientists from Georgia Tech have developed a new robot named ESTHER (Experimental Sport Tennis Wheelchair Robot), which can zip around the court and even return human shots. The team believes the bot could serve as a training partner for professional players in the future, removing the psychological pressure of training against another human.
Amazon's New Robots Are Rolling Out an Automation Revolution
In a giant warehouse in Reading, Massachusetts, I meet a pair of robots that look like goofy green footstools from the future. Their round eyes and satisfied grins are rendered with light-emitting diodes. They sport small lidar sensors like tiny hats that scan nearby objects and people in 3D. Suddenly, one of them plays a chipper little tune, its mouth starts flashing, and its eyes morph into heart shapes. This means, I am told, that the robot is happy.
GNM: A General Navigation Model to Drive Any Robot
Shah, Dhruv, Sridhar, Ajay, Bhorkar, Arjun, Hirose, Noriaki, Levine, Sergey
Learning provides a powerful tool for vision-based navigation, but the capabilities of learning-based policies are constrained by limited training data. If we could combine data from all available sources, including multiple kinds of robots, we could train more powerful navigation models. In this paper, we study how a general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots, and enable broad generalization across environments and embodiments. We analyze the necessary design decisions for effective data sharing across robots, including the use of temporal context and standardized action spaces, and demonstrate that an omnipolicy trained from heterogeneous datasets outperforms policies trained on any single dataset. We curate 60 hours of navigation trajectories from 6 distinct robots, and deploy the trained GNM on a range of new robots, including an underactuated quadrotor. We find that training on diverse data leads to robustness against degradation in sensing and actuation. Using a pre-trained navigation model with broad generalization capabilities can bootstrap applications on novel robots going forward, and we hope that the GNM represents a step in that direction. For more information on the datasets, code, and videos, please check out our project page https://sites.google.com/view/drive-any-robot.
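One design decision the abstract highlights, a standardized action space shared across embodiments, can be illustrated with a simple normalization: express each robot's relative waypoint in units of how far that robot can travel in one control step. The function names and the `dt` parameter below are assumptions for illustration, not GNM's actual code:

```python
import numpy as np

def to_shared_action(waypoint_xy, robot_max_speed, dt=0.5):
    """Map a robot-specific relative waypoint (meters) into a shared,
    speed-normalized action space, so data from fast and slow robots
    can be pooled for training. Illustrative only."""
    return np.asarray(waypoint_xy, dtype=float) / (robot_max_speed * dt)

def from_shared_action(action, robot_max_speed, dt=0.5):
    """Invert the normalization to deploy a shared policy's action
    on a (possibly different) robot."""
    return np.asarray(action, dtype=float) * (robot_max_speed * dt)
```

The same normalized action then maps to a proportionally shorter waypoint on a slower robot, which is one way a single omnipolicy trained on heterogeneous data could drive new embodiments.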