Robot Planning & Action


This dexterous robot arm is off to the ISS next year

Mashable

The S1 is a robotic arm created by GITAI that is slated to be sent to the International Space Station in 2021. The arm will help out aboard Bishop, an airlock-extension module developed by NanoRacks. NanoRacks will launch the module in November 2020 on the SpaceX CRS-21 mission, with the S1 to follow sometime in 2021.


Helping robots avoid collisions

#artificialintelligence

George Konidaris still remembers his disheartening introduction to robotics. "When you're a young student and you want to program a robot, the first thing that hits you is this immense disappointment at how much you can't do with that robot," he says. Most new roboticists want to program their robots to solve interesting, complex tasks -- but it turns out that just moving them through space without colliding with objects is more difficult than it sounds. Fortunately, Konidaris is hopeful that future roboticists will have a more exciting start in the field. That's because roughly four years ago, he co-founded Realtime Robotics, a startup that's solving the "motion planning problem" for robots.
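
For readers new to the area, the sketch below is a minimal illustration of the collision-checking step at the heart of the "motion planning problem" the article describes: testing whether a candidate motion between two configurations stays clear of obstacles. The 2D setup, obstacle list, and function name are illustrative assumptions, not Realtime Robotics' actual approach (their planner runs such checks massively in parallel on dedicated hardware).

```python
# Minimal, illustrative collision check for a straight-line motion in 2D:
# sample points along the segment and test each against circular obstacles.
import numpy as np

def segment_is_collision_free(start, goal, obstacles, step=0.01):
    """obstacles: list of (center_xy, radius). Returns True if every
    interpolated point along start -> goal clears every obstacle."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    for t in np.arange(0.0, 1.0 + step, step):
        point = (1.0 - t) * start + t * goal
        for center, radius in obstacles:
            if np.linalg.norm(point - np.asarray(center)) <= radius:
                return False          # this motion would collide
    return True

obstacles = [((0.5, 0.5), 0.2), ((0.8, 0.2), 0.1)]
print(segment_is_collision_free((0.0, 0.0), (1.0, 1.0), obstacles))  # False: path is blocked
print(segment_is_collision_free((0.0, 1.0), (1.0, 1.0), obstacles))  # True: path is clear
```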


Boston Dynamics expands Spot sales to Canada and Europe, robot arm coming in January

#artificialintelligence

Boston Dynamics today opened commercial sales of Spot, its quadruped robot that can climb stairs and traverse rough terrain, in Canada, the EU, and the U.K. Additionally, CEO Robert Playter told VentureBeat in an interview that Spot is getting more payloads next year, including a recharging station and a robot arm. Boston Dynamics started selling the Spot Explorer developer kit to U.S. businesses for $74,500 in June. Spot Explorer includes the robot, two batteries, the battery charger, the tablet controller, a robot case, a power case, and Python client packages for Spot APIs. You can only buy up to two Spots via Boston Dynamics' shopping portal. If you want more units, the company has two other pricing tiers: Academic (discount for accredited educational institutions) and Enterprise (more sensors, software integration, communications infrastructure, and robot fleet management).


An In-Memory Physics Environment as a World Model for Robot Motion Planning

#artificialintelligence

This paper investigates using a physics simulation environment as a robot's "imagination": the robot builds an in-memory replica of the detected terrain in a physics simulator and "imagines" a simulated version of itself performing actions and navigating that terrain. The simulated physics captures the movement of the robot's parts and their interaction with objects and the terrain, avoiding the need to explicitly program many of those calculations.
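
A minimal sketch of the idea, assuming PyBullet and its bundled example assets (not the paper's actual implementation): spin up a headless, in-memory physics world, load a stand-in for the detected terrain and a copy of the robot, and "imagine" a candidate motion by stepping the simulation and inspecting the outcome before acting in the real world.

```python
# Hedged sketch of "physics simulation as imagination" using PyBullet.
import pybullet as p
import pybullet_data

sim = p.connect(p.DIRECT)                      # headless, in-memory physics world
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

terrain = p.loadURDF("plane.urdf")             # stand-in for the detected terrain
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])  # imagined copy of the robot

def imagine(base_velocity, steps=240):
    """Roll the simulated robot forward and report where it ends up and
    whether it is still in contact with the terrain (did not tip or fall)."""
    p.resetBaseVelocity(robot, linearVelocity=base_velocity)
    for _ in range(steps):                     # 240 steps ~ 1 simulated second
        p.stepSimulation()
    position, _ = p.getBasePositionAndOrientation(robot)
    touching_ground = len(p.getContactPoints(robot, terrain)) > 0
    return position, touching_ground

print(imagine([0.5, 0.0, 0.0]))                # evaluate a candidate motion "in imagination"
p.disconnect(sim)
```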


Machine Learning Helps Robot Swarms Coordinate - ScienceBlog.com

#artificialintelligence

Engineers at Caltech have designed a new data-driven method to control the movement of multiple robots through cluttered, unmapped spaces so they do not run into one another. Multi-robot motion coordination is a fundamental robotics problem, with applications ranging from urban search and rescue to the control of fleets of self-driving cars to formation-flying in cluttered environments. Two key challenges make multi-robot coordination difficult: first, robots moving in new environments must make split-second decisions about their trajectories despite having incomplete data about their future paths; second, as the number of robots in an environment grows, their interactions become increasingly complex (and more prone to collisions). To overcome these challenges, Soon-Jo Chung, Bren Professor of Aerospace, and Yisong Yue, professor of computing and mathematical sciences, along with Caltech graduate student Benjamin Rivière (MS '18), postdoctoral scholar Wolfgang Hönig, and graduate student Guanya Shi, developed "Global-to-Local Safe Autonomy Synthesis," or GLAS, a multi-robot motion-planning algorithm that imitates a complete-information planner using only local information, and "Neural-Swarm," a swarm-tracking controller augmented to learn the complex aerodynamic interactions of close-proximity flight. "Our work shows some promising results to overcome the safety, robustness, and scalability issues of conventional black-box artificial intelligence (AI) approaches for swarm motion planning with GLAS and close-proximity control for multiple drones using Neural-Swarm," says Chung. With GLAS and Neural-Swarm, a robot does not need a complete and comprehensive picture of the environment it is moving through, or of the paths its fellow robots intend to take.
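
As a rough illustration of the "imitate a complete-information planner from local observations" idea behind GLAS, the sketch below shows a generic behavior-cloning loop in PyTorch. The observation size, network shape, and randomly generated demonstrations are placeholders, not the published GLAS architecture or training data.

```python
# Conceptual behavior-cloning sketch: train a local-observation policy to
# reproduce the actions a centralized, complete-information planner chose.
import torch
import torch.nn as nn

local_obs_dim, action_dim = 16, 2              # assumed sizes, for illustration only

policy = nn.Sequential(                        # small policy over local observations
    nn.Linear(local_obs_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Demonstrations: local observations paired with the global planner's actions
# in the same situations (random placeholders here).
local_obs = torch.randn(1024, local_obs_dim)
expert_actions = torch.randn(1024, action_dim)

for epoch in range(100):                       # imitation (behavior cloning) loop
    predicted = policy(local_obs)
    loss = nn.functional.mse_loss(predicted, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, each robot runs the trained policy on its own local
# observation, approximating the global planner without global information.
```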


Enabling humanoid robot movement with imitation learning and mimicking of animal behaviors – TechCrunch

#artificialintelligence

Rish is an entrepreneur and investor. Previously, he was a VC at Gradient Ventures (Google's AI fund), co-founded a fintech startup building an analytics platform for SEC filings, and worked on deep-learning research as a graduate student in computer science at MIT.


Machine Learning Helps Robot Swarms Coordinate

#artificialintelligence

To test their new systems, Chung's and Yue's teams implemented GLAS and Neural-Swarm on quadcopter swarms of up to 16 drones and flew them in the open-air drone arena at Caltech's Center for Autonomous Systems and Technologies (CAST). The teams found that GLAS could outperform the current state-of-the-art multi-robot motion-planning algorithm by 20 percent in a wide range of cases. Meanwhile, Neural-Swarm significantly outperformed a commercial controller that cannot account for aerodynamic interactions; tracking errors, a key measure of how well the drones orient themselves and hold desired positions in three-dimensional space, were up to four times smaller with the new controller. The research appears in two recently published studies: "GLAS: Global-to-Local Safe Autonomy Synthesis for Multi-Robot Motion Planning with End-to-End Learning," published in IEEE Robotics and Automation Letters on May 11 by Chung, Yue, Rivière, and Hönig, and "Neural-Swarm: Decentralized Close-Proximity Multirotor Control Using Learned Interactions," published in the Proceedings of the IEEE International Conference on Robotics and Automation on June 1 by Chung, Yue, Shi, and Hönig.
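
For context, one generic way to quantify the tracking error the article refers to is the Euclidean distance between a drone's actual and desired 3D positions along a trajectory, as in the sketch below; the arrays are illustrative, and the papers may report a differently normalized variant.

```python
# Generic trajectory tracking-error metric: position error over time.
import numpy as np

def tracking_error(actual_xyz, desired_xyz):
    """actual_xyz, desired_xyz: arrays of shape (timesteps, 3).
    Returns the mean and maximum position error along the trajectory."""
    errors = np.linalg.norm(actual_xyz - desired_xyz, axis=1)
    return errors.mean(), errors.max()

t = np.linspace(0.0, 1.0, 200)
desired = np.stack([np.cos(t), np.sin(t), t], axis=1)       # desired path
actual = desired + 0.02 * np.random.randn(200, 3)            # noisy flown path
print(tracking_error(actual, desired))                       # (mean_error, max_error)
```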


Incredibly malleable robot arm snakes towards a future of very versatile technology

Mashable

Researchers from Imperial College London's REDS lab have created a robotic arm that is extremely flexible. The inside of the arm is layered with flaps of Mylar sheets, allowing the user to bend and adjust the arm's shape as needed.