This Defense Company Made AI Agents That Blow Things Up

WIRED

Scout AI is using technology borrowed from the AI industry to power lethal weapons--and recently demonstrated its explosive potential. Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate chores. The big difference is that instead of writing code, answering emails, or buying stuff online, Scout AI's agents are designed to seek and destroy things in the physical world with exploding drones. In a recent demonstration, held at an undisclosed military base in central California, Scout AI's technology was put in charge of a self-driving off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hiding in the area, and then blew it to bits using an explosive charge.


Data Quality in Imitation Learning

Neural Information Processing Systems

In supervised learning, the question of data quality and curation has been overshadowed in recent years by increasingly powerful and expressive models that can ingest internet-scale data.




SafeDICE: Offline Safe Imitation Learning with Non-Preferred Demonstrations

Neural Information Processing Systems

In this paper, we present a hyperparameter-free offline safe IL algorithm, SafeDICE, that learns a safe policy by leveraging non-preferred demonstrations in the space of stationary distributions. Our algorithm directly estimates the stationary distribution corrections of a policy that imitates the demonstrations while excluding the non-preferred behavior.
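The core idea above, downweighting behavior that resembles the non-preferred demonstrations when imitating, can be illustrated with a toy sketch. The snippet below is not the SafeDICE algorithm (which estimates corrections in the space of stationary distributions via optimization); it merely approximates the same intuition with empirical visitation counts over (state, action) pairs, and the function name, data layout, and `alpha` knob are all assumptions for illustration:

```python
from collections import Counter

def correction_weights(union_demos, nonpref_demos, alpha=1.0):
    """Toy stand-in for stationary-distribution corrections.

    union_demos:   list of (state, action) pairs from all demonstrations
    nonpref_demos: list of (state, action) pairs labeled non-preferred

    Upweights pairs common in the combined demonstrations and zeroes out
    pairs whose non-preferred visitation frequency dominates. SafeDICE
    itself estimates these corrections by optimization, not counting.
    """
    d_union = Counter(union_demos)
    d_bad = Counter(nonpref_demos)
    n_u, n_b = len(union_demos), len(nonpref_demos)
    weights = {}
    for sa, count in d_union.items():
        p_union = count / n_u
        p_bad = d_bad.get(sa, 0) / n_b if n_b else 0.0
        # Keep the mass not explained by non-preferred behavior.
        weights[sa] = max(0.0, p_union - alpha * p_bad)
    total = sum(weights.values()) or 1.0
    # Normalize so the weights form a distribution over (state, action).
    return {sa: w / total for sa, w in weights.items()}
```

Such weights could then be plugged into weighted behavioral cloning, so the learned policy imitates the demonstrations while the non-preferred behavior receives little or no weight.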