high speed


Flying through Moving Gates without Full State Estimation

Römer, Ralf, Emmert, Tim, Schoellig, Angela P.

arXiv.org Artificial Intelligence

Autonomous drone racing requires powerful perception, planning, and control, and has become a benchmark and test field for autonomous, agile flight. Existing work usually assumes static race tracks with known maps, which enables offline planning of time-optimal trajectories, localization relative to the gates to reduce drift in visual-inertial odometry (VIO) state estimation, or training of learning-based methods for the particular race track and operating environment. In contrast, many real-world tasks like disaster response or delivery need to be performed in unknown and dynamic environments. To close this gap and make drone racing more robust to unseen environments and moving gates, we propose a control algorithm that does not require a race track map or VIO and uses only monocular measurements of the line of sight (LOS) to the gates. For this purpose, we adopt the law of proportional navigation (PN) to accurately fly through the gates despite gate motion or wind. We formulate the PN-informed vision-based control problem for drone racing as a constrained optimization problem and derive a closed-form optimal solution. We demonstrate through extensive simulations and real-world experiments that our method can navigate through moving gates at high speeds while remaining robust to different gate movements, model errors, wind, and delays.
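As a rough illustration of the guidance law the abstract refers to (not the paper's constrained-optimization formulation), classic 2D proportional navigation commands a lateral acceleration proportional to the LOS rotation rate and the closing speed. The navigation constant, speeds, and constant-velocity gate below are all illustrative assumptions:

```python
import numpy as np

def pn_accel(p, v, p_t, v_t, N=3.0):
    """True proportional navigation: a = N * Vc * lambda_dot,
    applied perpendicular to the line of sight (LOS)."""
    r = p_t - p                                    # LOS vector
    vr = v_t - v                                   # relative velocity
    r2 = max(float(r @ r), 1e-9)
    lam_dot = (r[0] * vr[1] - r[1] * vr[0]) / r2   # LOS rotation rate
    vc = -(r @ vr) / np.sqrt(r2)                   # closing speed
    los = r / np.sqrt(r2)
    perp = np.array([-los[1], los[0]])             # LOS rotated +90 deg
    return N * vc * lam_dot * perp

# toy 2D intercept: drone at 5 m/s, gate center drifting sideways at 1 m/s
p, v = np.array([0.0, 0.0]), np.array([5.0, 0.0])
p_t, v_t = np.array([20.0, 5.0]), np.array([0.0, 1.0])
dt, miss = 0.01, np.inf
for _ in range(1500):
    v = v + pn_accel(p, v, p_t, v_t) * dt
    p, p_t = p + v * dt, p_t + v_t * dt
    miss = min(miss, float(np.linalg.norm(p_t - p)))
print(miss)  # small miss distance despite the moving gate
```

Even in this toy setting, the command drives the LOS rotation rate toward zero, so the pursuer converges onto a collision course with the moving target without ever knowing its own absolute position.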


Fast and Modular Autonomy Software for Autonomous Racing Vehicles

Saba, Andrew, Adetunji, Aderotimi, Johnson, Adam, Kothari, Aadi, Sivaprakasam, Matthew, Spisak, Joshua, Bharatia, Prem, Chauhan, Arjun, Duff, Brendan Jr., Gasparro, Noah, King, Charles, Larkin, Ryan, Mao, Brian, Nye, Micah, Parashar, Anjali, Attias, Joseph, Balciunas, Aurimas, Brown, Austin, Chang, Chris, Gao, Ming, Heredia, Cindy, Keats, Andrew, Lavariega, Jose, Muckelroy, William III, Slavescu, Andre, Stathas, Nickolas, Suvarna, Nayana, Zhang, Chuan Tian, Scherer, Sebastian, Ramanan, Deva

arXiv.org Artificial Intelligence

Autonomous motorsports aim to replicate the human racecar driver with software and sensors. As in traditional motorsports, Autonomous Racing Vehicles (ARVs) are pushed to their handling limits in multi-agent scenarios at extremely high ($\geq 150$ mph) speeds. This Operational Design Domain (ODD) presents unique challenges across the autonomy stack. The Indy Autonomous Challenge (IAC) is an international competition aiming to advance autonomous vehicle development through ARV competitions. While far from challenging what a human racecar driver can do, the IAC is pushing the state of the art by facilitating full-sized ARV competitions. This paper details the MIT-Pitt-RW Team's approach to autonomous racing in the IAC. In this work, we present our modular and fast approach to agent detection, motion planning, and control in an autonomy stack. We also analyze the performance of the software stack in single- and multi-agent scenarios for rapid deployment in a fast-paced competition environment. We further cover what did and did not work when deployed on a physical system, the Dallara AV-21 platform, and potential improvements to address these shortcomings. Finally, we convey lessons learned and discuss limitations and future directions for improvement.


Japanese robot solves Rubik's Cube in record time

The Japan Times

A Mitsubishi Electric machine has cracked the notoriously challenging Rubik's Cube puzzle in less than a third of a second. In the blink of an eye, computer-controlled components moved the squares of the 3 x 3 x 3 cube until each side of the block was a single color, thus completing the puzzle. Humans present applauded the feat. Guinness World Records recognized the 0.305-second time achieved by the TOKUI Fast Accurate Synchronized Motion Testing Robot as a new world best, beating the previous record of 0.38 seconds. Mitsubishi Electric received a certificate from the records body on May 21. The fastest time by a human is 3.13 seconds, achieved in June 2023 by Max Park at an event in California.


UFO or drone? 'Flying cylinder' spotted soaring over New York City's LaGuardia Airport baffles passenger

Daily Mail - Science & tech

A woman has claimed that she witnessed a possible UFO while flying in a passenger airplane over New York City. Michelle Reyes shared the video online, which she captured from her window seat, showing a 'flying cylinder' whizzing by as she traveled over LaGuardia Airport. She told NewsNation that she observed the black object moving at high speeds - much faster than the airplane - and that another passenger had also witnessed it. A UFO expert analyzed the clip, finding no evidence that the video was fake or a hoax - but some have suggested the object was a drone. Reyes spoke to NewsNation's Ashleigh Banfield about the mysterious object she spotted while flying over New York City. 'The first thing I did was email the FAA to let them know what I saw,' Reyes said, noting she has yet to receive a response.


Researchers train robotic sensor to read braille at high speed

AIHub

Researchers have developed a robotic sensor that incorporates artificial intelligence techniques to read braille at speeds roughly double that of most human readers. The research team, from the University of Cambridge, used machine learning algorithms to teach a robotic sensor to quickly slide over lines of braille text. The robot was able to read the braille at 315 words per minute at close to 90% accuracy. Although the robot braille reader was not developed as an assistive technology, the researchers say the high sensitivity required to read braille makes it an ideal test in the development of robot hands or prosthetics with comparable sensitivity to human fingertips. The results are reported in the journal IEEE Robotics and Automation Letters.


RAMP: A Risk-Aware Mapping and Planning Pipeline for Fast Off-Road Ground Robot Navigation

Sharma, Lakshay, Everett, Michael, Lee, Donggun, Cai, Xiaoyi, Osteen, Philip, How, Jonathan P.

arXiv.org Artificial Intelligence

A key challenge in fast ground robot navigation in 3D terrain is balancing robot speed and safety. Recent work has shown that 2.5D maps (2D representations with additional 3D information) are ideal for real-time safe and fast planning. However, the prevalent approach of generating 2D occupancy grids through raytracing makes the generated map unsafe to plan in, due to inaccurate representation of unknown space. Additionally, existing planners such as MPPI do not consider speeds in known free and unknown space separately, leading to slower overall plans. The RAMP pipeline proposed here solves these issues using new mapping and planning methods. This work first presents ground point inflation with persistent spatial memory as a way to generate accurate occupancy grid maps from classified point clouds. Then we present an MPPI-based planner with embedded variability in horizon, to maximize speed in known free space while retaining cautionary penetration into unknown space. Finally, we integrate this mapping and planning pipeline with risk constraints arising from 3D terrain, and verify that it enables fast and safe navigation using simulations and hardware demonstrations.
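For context on the planner family the abstract builds on, a minimal MPPI sketch looks like this: sample perturbed control sequences, roll each out through the dynamics, and average the perturbations with exponential cost weights. This is a generic point-mass example, not the RAMP planner with its variable horizon or risk constraints; the dynamics, horizon, and temperature below are assumptions:

```python
import numpy as np

def mppi_step(x, u_seq, dynamics, cost, rng, K=256, sigma=0.5, lam=1.0):
    """One MPPI update: sample K perturbed control sequences, roll each
    out through the dynamics, and re-weight the perturbations by
    exp(-cost / lam) so that low-cost samples dominate the average."""
    H, U = u_seq.shape
    noise = rng.normal(0.0, sigma, size=(K, H, U))
    costs = np.zeros(K)
    for k in range(K):
        xk = x.copy()
        for t in range(H):
            xk = dynamics(xk, u_seq[t] + noise[k, t])
            costs[k] += cost(xk)
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_seq + np.einsum("k,khu->hu", w, noise)

# toy setup: 2D point robot, velocity commands, goal at (1, 1)
dynamics = lambda x, u: x + 0.1 * u
goal = np.array([1.0, 1.0])
cost = lambda x: float(np.sum((x - goal) ** 2))

rng = np.random.default_rng(0)
x, u = np.zeros(2), np.zeros((10, 2))
for _ in range(50):
    u = mppi_step(x, u, dynamics, cost, rng)
    x = dynamics(x, u[0])
    u = np.roll(u, -1, axis=0)    # receding-horizon warm start
print(x)  # ends up close to the goal (1, 1)
```

Because MPPI only needs forward rollouts and a cost, it handles non-differentiable costs (like occupancy penalties) naturally, which is one reason it is popular for off-road planning.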


Distributed Optimal Control Framework for High-Speed Convoys: Theory and Hardware Results

Bagree, Namya, Noren, Charles, Singh, Damanpreet, Travers, Matthew, Vundurthy, Bhaskar

arXiv.org Artificial Intelligence

Practical deployments of coordinated fleets of mobile robots in different environments have revealed the benefits of maintaining small distances between robots, especially as they move at higher speeds. This is counter-intuitive: as speed increases, reducing the space between robots also reduces the time available to each robot to respond to sudden motion variations in the surrounding robots. Yet in certain cases, the performance benefits of traveling at closer distances can outweigh the potential instability issues, for instance autonomous trucks on highways that save energy by vehicle ``drafting'', or smaller robots in cluttered environments that need to maintain close line-of-sight communication. To achieve this kind of closely coordinated fleet behavior, this work introduces a model predictive optimal control framework that directly accounts for the nonlinear dynamics of the vehicles in the fleet while planning motions for each robot. The robots are able to follow each other closely at high speeds by proactively making predictions and reactively biasing their responses based on state information from adjacent robots. This control framework is naturally decentralized and, as such, applies to an arbitrary number of robots without additional computational burden. We show that our approach achieves lower inter-robot distances at higher speeds than existing controllers. We demonstrate the success of our approach through simulated and hardware results on mobile ground robots.
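The prediction-plus-feedback idea can be caricatured with a far simpler decentralized controller than the paper's nonlinear MPC: each follower reacts only to its immediate predecessor, matching its velocity (a crude stand-in for prediction) while correcting the spacing error. The gains, desired gap, and braking profile below are assumptions chosen for the toy example:

```python
import numpy as np

# hedged sketch: PD control on spacing error plus predecessor velocity
# matching, per follower -- naturally decentralized like the paper's
# framework, but without its nonlinear-dynamics MPC
def follower_accel(p_self, v_self, p_pred, v_pred, gap=2.0, kp=2.0, kd=2.0):
    e = (p_pred - p_self) - gap             # spacing error to predecessor
    return kp * e + kd * (v_pred - v_self)  # spacing + velocity feedback

# leader and two followers on a line; the leader brakes for one second
n, dt = 3, 0.02
p = np.array([4.0, 2.0, 0.0])   # positions, index 0 is the leader
v = np.full(n, 5.0)
for t in range(500):
    a = np.zeros(n)
    a[0] = -3.0 if 2.0 < t * dt < 3.0 else 0.0   # leader braking phase
    for i in range(1, n):
        a[i] = follower_accel(p[i], v[i], p[i - 1], v[i - 1])
    v += a * dt
    p += v * dt
print(p[0] - p[1], p[1] - p[2])  # both gaps settle back near 2.0
```

Each follower needs only its neighbor's state, so the scheme scales to any fleet size; the paper's contribution is making this kind of coupling stable at small gaps and high speeds despite nonlinear vehicle dynamics.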


VI-IKD: High-Speed Accurate Off-Road Navigation using Learned Visual-Inertial Inverse Kinodynamics

Karnan, Haresh, Sikand, Kavan Singh, Atreya, Pranav, Rabiee, Sadegh, Xiao, Xuesu, Warnell, Garrett, Stone, Peter, Biswas, Joydeep

arXiv.org Artificial Intelligence

One of the key challenges in high-speed off-road navigation for ground vehicles is that the kinodynamics of the vehicle-terrain interaction can differ dramatically depending on the terrain. Previous approaches to addressing this challenge have considered learning an inverse kinodynamics (IKD) model conditioned on inertial information of the vehicle to sense the kinodynamic interactions. In this paper, we hypothesize that to enable accurate high-speed off-road navigation using a learned IKD model, in addition to inertial information from the past, one must also anticipate the kinodynamic interactions of the vehicle with the terrain in the future. To this end, we introduce Visual-Inertial Inverse Kinodynamics (VI-IKD), a novel learning-based IKD model that is conditioned on visual information from a terrain patch ahead of the robot in addition to past inertial information, enabling it to anticipate future kinodynamic interactions. We validate the effectiveness of VI-IKD in accurate high-speed off-road navigation experimentally on a 1/5-scale UT-AlphaTruck off-road autonomous vehicle in both indoor and outdoor environments and show that, compared to other state-of-the-art approaches, VI-IKD enables more accurate and robust off-road navigation on a variety of terrains at speeds of up to 3.5 m/s.
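The inverse-kinodynamics idea can be sketched with a toy linear stand-in for the learned model: given a hypothetical forward model in which a terrain "slip" feature scales the achieved turn rate, an inverse model regressed from data recovers the command needed to achieve a desired rate on a given terrain. Both the slip model and the feature choice are assumptions for illustration, not the VI-IKD architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical forward kinodynamics: achieved turn rate shrinks with a
# terrain slip feature s in [0, 1] (an assumption, not the paper's model)
def forward(u, s):
    return (1.0 - 0.4 * s) * u

# synthetic driving data: commands, terrain features, achieved rates
u = rng.uniform(-1.0, 1.0, 2000)                   # commanded turn rate
s = rng.uniform(0.0, 1.0, 2000)                    # terrain feature
w = forward(u, s) + rng.normal(0.0, 0.01, 2000)    # measured outcome

# inverse model: regress the command on [achieved rate, rate * terrain],
# a linear stand-in for a network conditioned on visual terrain features
X = np.stack([w, w * s], axis=1)
theta, *_ = np.linalg.lstsq(X, u, rcond=None)

# query: command needed for turn rate 0.5 on slippery terrain (s = 0.8)
u_cmd = float(np.array([0.5, 0.5 * 0.8]) @ theta)
print(u_cmd)  # close to the ideal 0.5 / (1 - 0.32) ~ 0.735
```

The point mirrored from the paper: without the terrain feature (here `s`, there the visual patch ahead of the robot), no single inverse mapping from desired motion to command can be correct on all terrains.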


'Artificial synapse' could make neural networks work more like brains

New Scientist

A resistor that works in a similar way to nerve cells in the body could be used to build neural networks for machine learning. Many large machine learning models rely on increasing amounts of processing power to achieve their results, but this has vast energy costs and produces large amounts of heat. One proposed solution is analogue machine learning, which works like a brain by using electronic devices similar to neurons to act as the parts of the model. However, these devices have so far not been fast, small or efficient enough to provide advantages over digital machine learning. Murat Onen at the Massachusetts Institute of Technology and his colleagues have created a nanoscale resistor that transmits protons from one terminal to another.


MIT researchers use simulation to train a robot to run at high speeds

#artificialintelligence

Four-legged robots are nothing novel - Boston Dynamics' Spot has been making the rounds for some time, as have countless alternative open source designs. But researchers at MIT claim to have broken the record for the fastest recorded robot run. Working out of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the team says it developed a system that allows the MIT-designed Mini Cheetah to learn to run by trial and error in simulation. While the speedy Mini Cheetah has limited direct applications in the enterprise, the researchers believe their technique could be used to improve the capabilities of other robotics systems, including those used in factories to assemble products before they're shipped to customers.