
Collaborating Authors

Lincoln Laboratory


Coding for underwater robotics

Robohub

During a summer internship at MIT Lincoln Laboratory, Ivy Mahncke, an undergraduate studying robotics engineering at Olin College of Engineering, took a hands-on approach to testing algorithms for underwater navigation. She first discovered her love for underwater robotics as an intern at the Woods Hole Oceanographic Institution in 2024. Drawn by the chance to tackle new problems and cutting-edge algorithm development, Mahncke began an internship with Lincoln Laboratory's Advanced Undersea Systems and Technology Group in 2025. She spent the summer developing and troubleshooting an algorithm to help a human diver and a robotic vehicle navigate collaboratively underwater. The lack of traditional localization aids -- such as the Global Positioning System, or GPS -- in an underwater environment posed navigation challenges that Mahncke and her mentors sought to overcome.
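The core difficulty here is easy to motivate with a toy example: without GPS, an underwater vehicle must propagate its own position estimate from onboard sensors (compass, speed log) between external fixes. The sketch below is a generic dead-reckoning update, not Mahncke's actual algorithm; the function name, the north-east heading convention, and the sample numbers are illustrative assumptions.

```python
import math

def dead_reckon(x, y, heading_deg, speed_mps, dt):
    """Advance an estimated 2-D position from heading and speed.

    A minimal dead-reckoning step: with no GPS underwater, the vehicle
    integrates compass heading and measured speed over time to keep a
    running position estimate (which drifts until corrected by a fix).
    Convention assumed here: heading 0 deg = north (+y), 90 deg = east (+x).
    """
    heading = math.radians(heading_deg)
    x += speed_mps * dt * math.sin(heading)
    y += speed_mps * dt * math.cos(heading)
    return x, y

# Example: 10 s at 1.5 m/s due east from the origin
x, y = dead_reckon(0.0, 0.0, 90.0, 1.5, 10.0)
print(round(x, 2), round(y, 2))  # → 15.0 0.0
```

In practice this estimate accumulates error, which is why diver-robot teams rely on periodic acoustic or visual fixes to correct the drift.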


Estimating See and Be Seen Performance with an Airborne Visual Acquisition Model

Underhill, Ngaire, Maki, Evan, Gill, Bilal, Weinert, Andrew

arXiv.org Artificial Intelligence

Separation provision and collision avoidance are fundamental components of the layered conflict management system that ensures safe and efficient operations. Pilots have visual separation responsibilities to see and be seen in order to maintain separation between aircraft. To safely integrate into the airspace, drones should be required to meet a minimum level of performance, baselined against the safety achieved by see-and-be-seen interactions between crewed aircraft. Drone interactions with crewed aircraft should be no more hazardous than interactions between traditional aviation aircraft. Accordingly, there is a need for a methodology to design and evaluate the detect-and-avoid systems drones must be equipped with to mitigate the risk of a midair collision, where the methodology explicitly addresses, both semantically and mathematically, the operating rules associated with see and be seen. In response, we simulated how onboard pilots safely operate through see-and-be-seen interactions using an updated version of the visual acquisition model originally developed by J.W. Andrews decades ago. Monte Carlo simulations were representative of two aircraft flying under visual flight rules, and results were analyzed with respect to drone detect-and-avoid performance standards.
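Andrews-style visual acquisition models treat pilot detection as a random process whose rate grows as the target closes. The toy Monte Carlo below only illustrates that idea and is not the paper's model: the rate form lambda = beta * area / range^2 is the commonly cited functional shape, and the values of beta, the target area, and the head-on closing geometry are made-up for illustration.

```python
import math
import random

def p_acquire(beta, area_m2, range_m, dt):
    """Per-step visual acquisition probability for a Poisson process
    with rate lambda = beta * area / range^2 (the functional form used
    in Andrews-style models; beta here is illustrative, not calibrated)."""
    lam = beta * area_m2 / range_m ** 2
    return 1.0 - math.exp(-lam * dt)

def simulate_encounter(beta=2e4, area_m2=10.0, closing_mps=100.0,
                       start_range_m=5000.0, dt=0.1, seed=0):
    """Head-on encounter: return the range at visual acquisition,
    or 0.0 if the pilot never acquires the target before closest approach."""
    rng = random.Random(seed)
    r = start_range_m
    while r > 0.0:
        if rng.random() < p_acquire(beta, area_m2, r, dt):
            return r
        r -= closing_mps * dt
    return 0.0

# Repeat the encounter to estimate the acquisition-range distribution
ranges = [simulate_encounter(seed=s) for s in range(1000)]
acquired = sum(r > 0 for r in ranges)
print(f"acquired in {acquired}/1000 runs")
```

Collecting the distribution of acquisition ranges over many such runs is the kind of output that can then be compared against detect-and-avoid performance standards.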


Handheld surgical robot can help stem fatal blood loss

Robohub

Matt Johnson (right) and Laura Brattain (left) test a new medical device on an artificial model of human tissue and blood vessels. The device helps users to insert a needle and guidewire quickly and accurately into a vessel, a crucial first step to halting rapid blood loss. After a traumatic accident, there is a small window of time when medical professionals can apply lifesaving treatment to victims with severe internal bleeding. Delivering this type of care is complex, and key interventions require inserting a needle and catheter into a central blood vessel, through which fluids, medications, or other aids can be given. First responders, such as ambulance emergency medical technicians, are not trained to perform this procedure, so treatment can only be given after the victim is transported to a hospital.


MIT Lincoln Laboratory wins nine R&D 100 Awards for 2021

#artificialintelligence

Nine technologies developed at MIT Lincoln Laboratory have been selected as R&D 100 Award winners for 2021. Since 1963, this awards program has recognized the 100 most significant technologies transitioned to use or introduced into the marketplace over the past year. The winners are selected by an independent panel of expert judges. R&D World, an online publication that serves research scientists and engineers worldwide, announces the awards. The winning technologies are diverse in their applications.


Reinforcement learning makes for shitty AI teammates in co-op games

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Artificial intelligence has proven that complicated board and video games are no longer the exclusive domain of the human mind. From chess to Go to StarCraft, AI systems that use reinforcement learning algorithms have outperformed human world champions in recent years. But despite the high individual performance of RL agents, they can become frustrating teammates when paired with human players, according to a study by AI researchers at MIT Lincoln Laboratory. The study, which involved cooperation between humans and AI agents in the card game Hanabi, shows that players prefer the classic and predictable rule-based AI systems over complex RL systems. The findings, presented in a paper published on arXiv, highlight some of the underexplored challenges of applying reinforcement learning to real-world situations and can have important implications for the future development of AI systems that are meant to cooperate with humans.


MIT study finds humans struggle when partnered with RL agents

#artificialintelligence

Artificial intelligence has proven that complicated board and video games are no longer the exclusive domain of the human mind. From chess to Go to StarCraft, AI systems that use reinforcement learning algorithms have outperformed human world champions in recent years. But despite the high individual performance of RL agents, they can become frustrating teammates when paired with human players, according to a study by AI researchers at MIT Lincoln Laboratory. The study, which involved cooperation between humans and AI agents in the card game Hanabi, shows that players prefer the classic and predictable rule-based AI systems over complex RL systems. The findings, presented in a paper published on arXiv, highlight some of the underexplored challenges of applying reinforcement learning to real-world situations and can have important implications for the future development of AI systems that are meant to cooperate with humans. Deep reinforcement learning, the algorithm used by state-of-the-art game-playing bots, starts by providing an agent with a set of possible actions in the game, a mechanism to receive feedback from the environment, and a goal to pursue.
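That last sentence describes the basic reinforcement learning loop: a set of actions, feedback from the environment, and a goal. It can be sketched with tabular Q-learning on a toy corridor task; this is a generic illustration, not the Hanabi agents from the study, and every constant below is arbitrary.

```python
import random

# Minimal tabular Q-learning: an agent with two actions, reward
# feedback from the environment, and a goal (reach the right end
# of a 1-D corridor). Hanabi agents are vastly more complex.
N_STATES, GOAL = 6, 5        # states 0..5, goal at state 5
ACTIONS = [-1, +1]           # step left or step right
ALPHA, GAMMA = 0.5, 0.9      # learning rate, discount factor

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        a = rng.choice(ACTIONS)          # explore with a random behavior policy
        s2 = min(max(s + a, 0), GOAL)    # environment transition (walls at ends)
        r = 1.0 if s2 == GOAL else 0.0   # feedback: reward only at the goal
        # Off-policy Q-learning update toward the best next-state value
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# Extract the greedy policy learned for each non-goal state
greedy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(greedy)  # → [1, 1, 1, 1, 1]
```

The learned policy steps right everywhere, which is exactly the goal-seeking behavior the reward signal encodes; deep RL replaces the table `q` with a neural network but keeps the same loop.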


Jeremy Kepner named SIAM Fellow

#artificialintelligence

Jeremy Kepner, a Lincoln Laboratory Fellow in the Cyber Security and Information Sciences Division and a research affiliate of the MIT Department of Mathematics, was named to the 2021 class of fellows of the Society for Industrial and Applied Mathematics (SIAM). The fellow designation honors SIAM members who have made outstanding contributions to the 17 mathematics-related research areas that SIAM promotes through its publications, conferences, and community of scientists. Kepner was recognized for "contributions to interactive parallel computing, matrix-based graph algorithms, green supercomputing, and big data." Since joining Lincoln Laboratory in 1998, Kepner has worked to expand the capabilities of computing at the laboratory and throughout the computing community. He has published broadly, served on technical committees of national conferences, and contributed to regional efforts to provide access to supercomputing.


Lincoln Laboratory convenes top network scientists for Graph Exploitation Symposium

#artificialintelligence

As the Covid-19 pandemic has shown, we live in a richly connected world, facilitating not only the efficient spread of a virus but also of information and influence. What can we learn by analyzing these connections? This is a core question of network science, a field of research that models interactions across physical, biological, social, and information systems to solve problems. The 2021 Graph Exploitation Symposium (GraphEx), hosted by MIT Lincoln Laboratory, brought together top network science researchers to share the latest advances and applications in the field. "We explore and identify how exploitation of graph data can offer key technology enablers to solve the most pressing problems our nation faces today," says Edward Kao, a symposium organizer and technical staff in Lincoln Laboratory's AI Software Architectures and Algorithms Group.


Artificial intelligence system could help counter the spread of disinformation

#artificialintelligence

Disinformation campaigns are not new--think of wartime propaganda used to sway public opinion against an enemy. What is new, however, is the use of the internet and social media to spread these campaigns. The spread of disinformation via social media has the power to change elections, strengthen conspiracy theories, and sow discord. Steven Smith, a staff member from MIT Lincoln Laboratory's Artificial Intelligence Software Architectures and Algorithms Group, is part of a team that set out to better understand these campaigns by launching the Reconnaissance of Influence Operations (RIO) program. Their goal was to create a system that would automatically detect disinformation narratives as well as those individuals who are spreading the narratives within social media networks.