Next year, a squad of souped-up Dallara race cars will reach speeds of up to 200 miles per hour as they zoom around the legendary Indianapolis Motor Speedway to discover whether a computer could be the next Mario Andretti. The planned Indy Autonomous Challenge--taking place in October 2021 in Indianapolis--will challenge 31 university computer science and engineering teams to push the limits of current self-driving car technology. There will be no human racers sitting inside the cramped cockpits of the Dallara IL-15 race cars. Instead, onboard computer systems will take their place, outfitted with deep-learning software that enables the vehicles to drive themselves. To win, a team's autonomous car must complete 20 laps--a little less than 50 miles--and cross the finish line first in 25 minutes or less.
Researchers from MIT, Stanford University, and the University of Pennsylvania have devised a method for predicting rare failures of safety-critical machine learning systems and efficiently estimating how often they occur. Safety-critical machine learning systems make decisions for automated technology like self-driving cars, robotic surgery, pacemakers, and autonomous flight systems for helicopters and planes. Unlike AI that helps you write an email or recommends a song, failures in safety-critical systems can result in serious injury or death. Problems with such machine learning systems can also cause financially costly events, like SpaceX missing its landing pad. The researchers say their neural bridge sampling method gives regulators, academics, and industry experts a common reference for discussing the risks of deploying complex machine learning systems in safety-critical environments. In a paper titled "Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems," recently published on arXiv, the authors assert their approach can satisfy both the public's right to know that a system has been rigorously tested and an organization's desire to treat AI models like trade secrets.
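To see why specialized methods like this are needed at all, consider the core difficulty: if a system fails once in tens of thousands of runs, naive simulation almost never observes a failure, so its estimate of the failure rate is useless. The sketch below is a deliberately simplified illustration of that problem and of importance sampling, the classical family of techniques that bridge-sampling approaches build on; it is not the paper's neural bridge sampling algorithm, and the Gaussian "failure" model, threshold, and sample sizes are all invented for the example.

```python
# Simplified illustration (NOT the paper's algorithm): estimating a rare
# failure probability. A "failure" here is a standard-normal draw above
# 4 sigma, whose true probability is about 3.2e-5.
import math
import random

random.seed(0)

THRESHOLD = 4.0  # failure boundary for this toy model

def naive_estimate(n):
    """Plain Monte Carlo: at modest n, almost no samples ever reach the
    failure region, so the estimate is usually exactly zero."""
    hits = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) > THRESHOLD)
    return hits / n

def importance_estimate(n, shift=4.0):
    """Importance sampling: draw from a proposal N(shift, 1) centered on
    the failure region, then reweight each failing sample by the density
    ratio p(x)/q(x) so the estimate stays unbiased."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(shift, 1.0)  # proposal q = N(shift, 1)
        if x > THRESHOLD:
            # likelihood ratio of N(0,1) to N(shift,1) evaluated at x
            total += math.exp(-x * x / 2 + (x - shift) ** 2 / 2)
    return total / n

n = 10_000
print("naive     :", naive_estimate(n))       # usually 0 at this sample size
print("importance:", importance_estimate(n))  # near the true 3.2e-5
```

With the same 10,000 samples, the naive estimator typically reports zero failures while the importance-sampled estimator lands close to the true probability; the paper's contribution is, roughly, learning a good proposal distribution automatically for systems far too complex to shift by hand.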
Lockheed Martin has been selected as the main contractor to conduct a study on how to provide the US Navy with large, autonomous ships that can operate for extended periods without a crew. As part of the Navy's Large Unmanned Surface Vessel (LUSV) competition, Lockheed is working with Portland, Oregon-based shipbuilder Vigor Works, LLC, and will provide program management, platform integration, systems engineering, combat management, automation, and cyber expertise. With the biggest costs of building and operating a ship revolving around putting a crew aboard it, the US and other navies are very interested in creating unmanned or optionally manned ships that can carry out both routine and extremely hazardous duties, leaving sailors to handle the sort of executive and complex tasks that still require a human touch. These autonomous ships of the future could be anything from small patrol craft to sub hunters to full-blown combat submarines. Such craft could, ideally, leave port on their own, remain at sea for months at a time, and then return autonomously for refit and maintenance.
The safety driver behind the wheel of a self-driving Uber that struck and killed a woman in 2018 has been charged with a crime. Prosecutors in Maricopa County, Arizona, said Tuesday that the driver, Rafaela Vasquez, has been indicted for criminal negligence. But Uber, her employer and the company that built the automated system involved in the fatal collision, won't face charges. The attorney for neighboring Yavapai County declined to prosecute Uber last year, writing in a letter that the office found "no basis for criminal liability." Yavapai County attorney Sheila Polk declined to elaborate on her decision.
Not only did Uber have to halt its testing programme for a while, but rivals such as Google's Waymo became notably more cautious in their trials. Just today, it was reported that the Chinese tech giant Baidu is pushing back the full rollout of its robo-taxis until 2025, partly because of regulatory uncertainty.
Ignorance of history is a badge of honour in Silicon Valley. "The only thing that matters is the future," self-driving-car engineer Anthony Levandowski told The New Yorker in 2018. Levandowski, formerly of Google, Uber and Google's autonomous-vehicle subsidiary Waymo (and recently sentenced to 18 months in prison for stealing trade secrets), is no outlier. The gospel of 'disruptive innovation' depends on the abnegation of history. 'Move fast and break things' was Facebook's motto. Another word for this is heedlessness. And here are a few more: negligence, foolishness and blindness.