You could argue that Waymo, the self-driving subsidiary of Alphabet, has the safest autonomous cars around. It has certainly covered the most miles. But in recent years, serious accidents involving early systems from Uber and Tesla have eroded public trust in the nascent technology. To win it back, putting in the miles on real roads just isn't enough. So today Waymo announced that its vehicles have clocked more than 10 million miles on public roads since 2009.
Machine-learning and artificial intelligence algorithms used in sophisticated applications such as autonomous cars are not foolproof and can be easily manipulated by introducing errors, Indian Institute of Science (IISc) researchers have warned. Machine-learning and AI software are trained on initial sets of data, such as images of cats, and learn to identify feline images as more such data are fed in. A common example is Google returning better results as more people search for the same information. AI applications are becoming mainstream in areas such as healthcare, payments processing, deploying drones to monitor crowds, and facial recognition in offices and airports. "If your data input is not clean and vetted, the AI machine could throw up surprising results, and that could end up being hazardous."
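The manipulation the IISc researchers describe can be made concrete with a toy sketch (not their experiment): a nearest-centroid classifier on one-dimensional data, where flipping the labels of a few training points ("data poisoning") changes the model's predictions. All data and labels here are invented for illustration.

```python
# Toy sketch of training-data poisoning: a nearest-centroid classifier
# whose predictions flip after a few training labels are corrupted.

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (feature, label) pairs with labels "cat"/"dog"
    cats = [x for x, y in data if y == "cat"]
    dogs = [x for x, y in data if y == "dog"]
    return centroid(cats), centroid(dogs)

def predict(model, x):
    cat_c, dog_c = model
    return "cat" if abs(x - cat_c) <= abs(x - dog_c) else "dog"

clean = [(0.0, "cat"), (1.0, "cat"), (2.0, "cat"),
         (8.0, "dog"), (9.0, "dog"), (10.0, "dog")]
clean_model = train(clean)

# Poison: mislabel two "dog" points as "cat", dragging the cat centroid right.
poisoned = clean[:3] + [(8.0, "cat"), (9.0, "cat"), (10.0, "dog")]
poisoned_model = train(poisoned)

print(predict(clean_model, 6.0))     # "dog" with clean training data
print(predict(poisoned_model, 6.0))  # flips to "cat" after poisoning
```

Two mislabeled points out of six are enough to move the decision boundary, which is the "hazardous" failure mode the quote warns about.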
The words "fly like an eagle" are famously part of a song, but they may also be words that make some scientists scratch their heads. Especially when it comes to soaring birds like eagles, falcons and hawks, which seem to ascend to great heights over hills, canyons and mountaintops with ease. Scientists know that upward currents of warm air assist the birds in their flight, but they don't know how the birds find and navigate these thermal plumes. To figure it out, researchers from the University of California San Diego used reinforcement learning to train gliders to autonomously navigate atmospheric thermals, soaring to heights of 700 meters (nearly 2,300 feet). The research results, published in the Sept. 19 issue of Nature, highlight the role of vertical wind accelerations and roll-wise torques as viable biological cues for soaring birds.
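A stripped-down sketch of the reinforcement-learning approach can show the idea. The actual study used vertical wind accelerations and roll-wise torques as observations; here we assume an invented two-state world ("lift" vs. "sink") and two actions ("circle" vs. "glide"), with made-up altitude-change rewards, and let tabular Q-learning discover the soaring strategy.

```python
# Toy tabular Q-learning: learn when to circle (exploit a thermal)
# and when to glide straight. Dynamics and rewards are invented.
import random

random.seed(0)

STATES = ["lift", "sink"]
ACTIONS = ["circle", "glide"]

# Made-up altitude change (reward): circling in lift climbs;
# circling in sinking air loses height fast.
REWARD = {("lift", "circle"): +2.0, ("lift", "glide"): +0.5,
          ("sink", "circle"): -2.0, ("sink", "glide"): -0.5}

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

state = "lift"
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward = REWARD[(state, action)]
    next_state = random.choice(STATES)  # thermals appear and vanish randomly
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)  # learned: circle in lift, glide in sink
```

The learned policy (circle in rising air, glide straight in sinking air) mirrors what soaring birds appear to do, which is exactly the behavior the researchers set out to reproduce.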
For any autonomous vehicle, the control module determines its road performance and safety, i.e., its precision and stability should stay within a carefully designed range. Nonetheless, control algorithms require vehicle dynamics (such as longitudinal dynamics) as inputs, which are unfortunately difficult to calibrate in real time. As a result, to achieve reasonable performance, most, if not all, research-oriented autonomous vehicles are calibrated manually, one by one. Since manual calibration is not sustainable at the mass-production stage, we introduce a machine-learning-based auto-calibration system for autonomous driving vehicles. In this paper, we show how we built a data-driven longitudinal calibration procedure using machine learning techniques. We first generate an offline calibration table from human driving data; the offline table serves as an initial guess for later use and requires only twenty minutes of data collection and processing. We then use an online-learning algorithm to update the initial (offline) table based on real-time performance analysis. This longitudinal auto-calibration system has been deployed on more than one hundred Baidu Apollo self-driving vehicles (including hybrid family vehicles and electric delivery-only vehicles) since April 2018. By August 27, 2018, it had been tested for more than two thousand hours and ten thousand kilometers (6,213 miles) and proven effective.
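A minimal sketch of this two-stage idea, not Apollo's actual code: an offline table mapping throttle command to expected longitudinal acceleration, refined online with an exponential-moving-average update from the vehicle's observed response. Table values and the update rate are invented for illustration.

```python
# Sketch of offline-table + online-update longitudinal calibration.

class LongitudinalCalibration:
    def __init__(self, offline_table, learning_rate=0.2):
        # offline_table: {throttle_command: expected acceleration (m/s^2)},
        # built from a short human-driving data collection.
        self.table = dict(offline_table)
        self.lr = learning_rate

    def expected_accel(self, command):
        return self.table[command]

    def online_update(self, command, observed_accel):
        # Blend the stored value toward what the vehicle actually did.
        old = self.table[command]
        self.table[command] = (1 - self.lr) * old + self.lr * observed_accel

# Offline table as the initial guess (values are made up).
calib = LongitudinalCalibration({0.2: 0.5, 0.5: 1.5, 0.8: 2.8})

# On the road, throttle 0.5 repeatedly yields only ~1.2 m/s^2
# (payload, wear, temperature...), so the table adapts toward it.
for _ in range(20):
    calib.online_update(0.5, 1.2)

print(round(calib.expected_accel(0.5), 2))  # 1.2 after adaptation
```

The offline table gives a usable starting point after only minutes of data, while the online update absorbs the slow drift that makes per-vehicle manual calibration unsustainable at scale.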
Artificial intelligence (AI) systems, blending data and advanced algorithms to mimic the cognitive functions of the human mind, have begun to simplify and enhance even the simplest aspects of our everyday experiences, and the automotive industry is no exception. A Tractica market intelligence study forecasts that demand for automotive AI hardware, software, and services will explode from $404 million in 2016 to $14 billion by 2025. Semi-autonomous and fully autonomous vehicles must rely heavily on AI systems to ensure the dependability of their fail-safe navigation and earn the trust of drivers and passengers. In February 2017, Ford invested $1 billion, Detroit's biggest such investment yet, in the self-driving car startup Argo AI, which was founded by two top engineers from Google and Uber. Tesla CEO Elon Musk speculates that AI will surpass solely human-based efforts by the year 2030.
Listen to your vehicle - this is advice that all car and motorcycle owners are given when they're getting to know their vehicle. Now, a new AI service developed by 3Dsignals, an Israel-based startup, is doing just that. The AI system can detect an impending failure in cars or other machines just by listening to the sounds they make. The system relies on deep learning to identify the noise patterns of a car. According to a report by IEEE Spectrum, 3Dsignals promises to reduce machinery downtime by 40% and improve efficiency.
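A hedged sketch of the underlying idea: 3Dsignals' actual system uses deep learning, but even a crude version, comparing a sound's band-energy "signature" against a baseline recorded from a healthy machine, shows how a new tone (say, a worn bearing) produces a detectable anomaly score. The signals and thresholds below are synthetic.

```python
# Toy acoustic anomaly detection: band-energy signature vs. a healthy baseline.
import math

def spectrum_energy(samples, n_bands=4):
    # Crude signature: naive DFT magnitudes grouped into frequency bands.
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    band = len(mags) // n_bands
    return [sum(mags[i * band:(i + 1) * band]) for i in range(n_bands)]

def anomaly_score(signature, baseline):
    # Euclidean distance from the healthy signature.
    return math.sqrt(sum((s - b) ** 2 for s, b in zip(signature, baseline)))

# Synthetic "healthy" low-frequency hum vs. the same hum plus a high tone.
healthy = [math.sin(2 * math.pi * 2 * t / 64) for t in range(64)]
faulty = [h + 0.8 * math.sin(2 * math.pi * 20 * t / 64)
          for t, h in enumerate(healthy)]

baseline = spectrum_energy(healthy)
print(anomaly_score(spectrum_energy(healthy), baseline))  # 0.0
print(anomaly_score(spectrum_energy(faulty), baseline))   # clearly > 0
```

A real system would replace the hand-built signature and distance with a learned model, but the workflow, baseline the healthy sound, then flag deviations, is the same.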
Don't hold your breath waiting for the first fully autonomous car to hit the streets anytime soon. Car manufacturers have projected for years that we might have fully automated cars on the roads by 2018. But for all the hype, it may be years, if not decades, before self-driving systems can reliably avoid accidents, according to a piece published Tuesday on The Verge. The million-dollar question is whether self-driving cars will keep getting better, like image search, voice recognition and other artificial intelligence "success stories," or whether they will run into a "generalization" problem like chatbots, some of which couldn't produce novel responses to questions. Generalization, author Russell Brandom explained in "Self-driving cars are headed toward an AI roadblock," can be difficult for conventional deep learning systems.
Machine learning practitioners are often ambivalent about the ethical aspects of their products. We believe anything that gets us from that current state to one in which our systems achieve some degree of fairness is an improvement that should be welcomed. This is true even when that progress does not get us 100% of the way to the goal of "complete" fairness, or does not perfectly align with our personal beliefs about which measure of fairness should be used. Building in some measure of fairness would still leave us in a better position than the status quo. Impediments to applying fairness and ethical concerns in real applications, whether abstruse philosophical debates or technical overhead such as the introduction of ever more hyper-parameters, should be avoided. In this paper we elaborate on this viewpoint and its importance.
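To make "some measure of fairness" concrete, here is a sketch of one of the simplest such measures, the demographic parity difference: the gap in positive-prediction rates between two groups. The predictions and group split below are invented for illustration.

```python
# Demographic parity difference: |P(positive | group A) - P(positive | group B)|.

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

def demographic_parity_diff(preds_a, preds_b):
    # 0.0 means the model gives positive outcomes to both groups
    # at the same rate.
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# 1 = approved, 0 = denied, split by a hypothetical protected attribute.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2/8 approved

print(demographic_parity_diff(group_a, group_b))  # 0.375
```

Cheap-to-compute checks like this are exactly the kind of low-overhead step the paragraph argues for: imperfect, contestable, but better than measuring nothing.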
Developing an autonomous vehicle requires a massive amount of data. Before any AV can safely navigate on the road, engineers must first train the artificial intelligence (AI) algorithms that enable the car to drive itself. Deep learning, a form of AI, is used to perceive the environment surrounding the car and to make driving decisions with superhuman levels of performance and precision. This is an enormous big data challenge. A single test vehicle can generate petabytes of data a year.
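The "petabytes per year" claim can be sanity-checked with a back-of-envelope calculation. The per-sensor data rates and driving hours below are assumptions, not measured figures from any AV program.

```python
# Back-of-envelope data volume for one test vehicle (all rates assumed).

GB = 1e9  # bytes per gigabyte (decimal)

sensor_rates_gb_per_hour = {
    "cameras": 500,    # several HD cameras (assumed)
    "lidar": 70,       # assumed
    "radar_and_can": 5,  # assumed
}
hours_per_day = 8    # assumed daily drive time
days_per_year = 250  # assumed test days

total_gb = sum(sensor_rates_gb_per_hour.values()) * hours_per_day * days_per_year
petabytes = total_gb * GB / 1e15
print(round(petabytes, 2))  # 1.15 PB/year under these assumptions
```

Even with these conservative assumptions, a single vehicle lands in petabyte territory, which is why AV development is framed as a big data problem.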
Video: Yandex's autonomous car hits Moscow's streets. Transportation is about to get a technology-driven reboot. The details are still taking shape, but future transport systems will certainly be connected, data-driven and highly automated. With harsh winters, drivers who constantly switch lanes, traffic jams and occasional crashes, the Russian capital of Moscow provides a challenging setting for testing autonomous cars. "In Moscow, the guys behind you honk the horn even before the traffic lights turn green," says Dmitry Polishchuk, head of Yandex's driverless car project.