Robots in the workplace can perform hazardous or even 'impossible' tasks, such as toxic waste clean-up and desert and space exploration. AI researchers are also interested in the intelligent processing involved in moving about and manipulating objects in the real world.
Five men in white overalls lifted the stretcher off the ground, one of them taking care to lay a clear plastic IV bag connected to the patient onto his stomach. They marched him toward what looked like a black inflatable dinghy on small wheels, crossed with a fly. The stretcher was loaded in through a hatch on the side, and then the men stood back. The patient was actually a medical training mannequin, but that didn't stop him (it, rather) from taking part in the first "mission representative" demonstration of a new aircraft. That bean-shaped thing is called the Cormorant, and it was built by Israel-based Tactical Robotics to make battlefield evacuations--which today rely on helicopters--quicker and safer, thanks to a new design and the fact that there's no human pilot involved.
Just a day after the NTSB released its preliminary findings on the Uber crash in Arizona, Senators Edward J. Markey and Richard Blumenthal began an investigation into safety protocols for driverless car testing. In a letter sent to major auto manufacturers involved in autonomous driving systems, the senators asked several specific questions about the procedures the companies use to ensure the safety of others during testing. The senators want to know where testing is occurring, how companies determined whether the self-driving tech was safe for public roads, whether the technology relies on internal sensors or external inputs, and more. Copies of the letter were sent to the US offices of BMW, Daimler Trucks, Fiat Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Kia, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Subaru, Tesla, Toyota, Volkswagen, Volvo, Amazon, Apple, Intel, Lyft, NVIDIA Corporation, Uber and Waymo. "This latest fatality has raised many questions about the processes companies have in place to guard public safety when testing this type of technology on public roads," the senators wrote in the letter sent to Uber.
It doesn't bother me in the abstract--people have worked on far harder, more complex problems before, and we have figured out how to make them work in a way that is reasonable. Airline safety is a great example: despite a lot of early deaths, we continually improved our processes until flying became the safest form of transportation. So there is a model there that we can draw on with autonomous cars, and so long as we stick with it, I see no reason why it shouldn't work. As far as trusting particular models goes, no, I don't at all. And there are good mathematical reasons for that.
Quadrotors have a reputation for being both fun and expensive, but it's not always obvious how dangerous they can be. It's clear from the get-go that it's in everyone's best interest to avoid the spinny bits whenever possible, so quadrotor safety mostly comes down to trying your level best not to run into people. That's generally good advice, but the problems tend to happen when, for whatever reason, the drone escapes your control. Maybe it's your fault, maybe it's the drone's fault, but either way, those spinny bits can cause serious damage. Safety-conscious quadrotor pilots have few options for making their drones safer, and none of them are all that great, due either to mediocre effectiveness or to significant cost and performance tradeoffs.
The federal investigators examining Uber's fatal self-driving crash in March released a preliminary report this morning. It lays out the facts of the collision that killed a woman walking her bicycle in Tempe, Arizona, and explains what the vehicle actually saw that night. The National Transportation Safety Board won't determine the cause of the crash or issue safety recommendations to prevent similar crashes until it releases its final report, but this first look makes two things clear: Engineering a car that drives itself is very hard. And any self-driving car developer that is relying on a human operator to monitor its testing systems--to keep everyone on the road safe--should be extraordinarily careful about the design of that system. The report says that the Uber vehicle, a modified Volvo XC90 SUV, had been in autonomous mode for 19 minutes and was driving at about 40 mph when it hit 49-year-old Elaine Herzberg as she was walking her bike across the street.
More details have emerged about the self-driving Uber car crash that killed a woman in Arizona earlier this year. The National Transportation Safety Board (NTSB) released its preliminary findings Thursday about the March 18 fatal crash. Elaine Herzberg, 49, was struck and killed while walking a bicycle across a four-lane road in Tempe, Arizona. A 44-year-old Uber test driver was at the wheel of the modified 2017 Volvo XC90. The car was in autonomous mode and had been for the 19 minutes before the crash.
Your next car probably won't be autonomous. But it will still have artificial intelligence (AI). While most of the attention has been on advanced driver assistance systems (ADAS) and autonomous driving, AI will penetrate far deeper into the car, and these overlooked areas offer fertile ground for incumbents and startups alike. Where is that fertile ground?
The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled. Uber told the NTSB that "emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior"--in other words, to ensure a smooth ride. "The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator." It's not clear why the emergency braking capability even exists if it is disabled while the car is in operation.
Today, the NTSB released preliminary findings for an accident back in March, in which a self-driving Uber vehicle collided with a pedestrian. "At 1.3 seconds before impact, the self-driving system determined that emergency braking was needed to mitigate a collision," the release says. "According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator."
Just over an hour into Tuesday's California Public Utilities Commission public meeting on the future of self-driving taxis, the machines took over. "Please pardon the interruption," a kindly robotic voice said, cutting into a government official's prepared remarks. "Your conference contains less than three participants at this time. If you would like to continue, press star 1 now, or your conference will be terminated." In fact, there were three commissioners and two administrative judges sitting on the auditorium's dais.