If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Giving robots the ability to operate in the real world has been, and continues to be, one of the most difficult tasks in AI research. Since 1987, researchers at Carnegie Mellon University have been investigating one such task. Their research has focused on using adaptive, vision-based systems to increase the driving performance of the Navlab line of on-road mobile robots. This research has led to the development of a neural network system that can learn to drive on many road types simply by watching a human teacher. This article describes the evolution of this system from a research project in machine learning to a robust driving system capable of executing tactical driving maneuvers such as lane changing and intersection navigation.
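The core idea of learning to drive by watching a human teacher can be sketched in a few lines. The snippet below is a hypothetical, minimal illustration of that idea, not the actual Navlab system: a single linear "neuron" is trained on (image, human-steering) pairs until it imitates the teacher. The toy road images and the teacher's steering rule are invented here for the example.

```python
import numpy as np

# Hypothetical sketch of learning-to-steer by imitation: a single-layer
# network learns a steering command purely from examples recorded while
# a "human teacher" drives. All data here is synthetic.

H, W = 8, 16                                  # tiny grayscale "camera" frames

def make_frame(lane_col):
    """Synthetic road image: a bright lane marker in one column."""
    img = np.zeros((H, W))
    img[:, lane_col] = 1.0
    return img

def teacher(lane_col):
    """Invented teacher rule: steer toward the lane marker, in [-1, 1]."""
    return (lane_col - (W - 1) / 2) / ((W - 1) / 2)

cols = [i % W for i in range(200)]            # one recorded demonstration drive
X = np.stack([make_frame(c).ravel() for c in cols])
y = np.array([teacher(c) for c in cols])

w = np.zeros(H * W)                           # weights of one linear neuron
for _ in range(400):                          # gradient descent on squared error
    grad = X.T @ (X @ w - y) / len(X)
    w -= 1.0 * grad

# After training, the network reproduces the teacher's steering for
# every lane position, without ever being given an explicit road model.
preds = np.array([make_frame(c).ravel() @ w for c in range(W)])
print(np.abs(preds - np.array([teacher(c) for c in range(W)])).max())
```

The appeal of this approach, as the article above describes, is that nothing road-specific is programmed in: change the demonstrations and the same learner adapts to a different road type.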
This means that programmers must account for every type of road situation a car may encounter. MIT's Technology Review spoke with Amnon Shashua, CTO and cofounder of the technology firm Mobileye, to learn more about the initiative. Mobileye has been in the news of late for another reason: its system was the one in use by the Tesla vehicle involved in a recent car crash in Florida, an incident still under investigation by the NHTSA. Tesla publicly blamed Mobileye, and the resulting rift left the two companies no longer partners. Shashua does not believe that will harm the company's new initiative, though: building a system based on neural networks which, if all goes according to plan, will allow a car or truck to learn how to drive in much the same way that humans do.
Maintaining the highest level of user safety will be non-negotiable when it comes to the deployment of autonomous vehicles, whether they are used for personal or mass transport or for logistics in industrial environments. However, for reasons of sheer volume, it is on road vehicles that the biggest changes will be felt. Vehicle efficiency and road safety will improve, congestion will come down, and the technology and legislation to make this a reality are in development. It is generally agreed that the transition to autonomous driving will be gradual. In the US, the National Highway Traffic Safety Administration (NHTSA) has defined five levels of automation, from 0 to 4, which it refers to as the automation continuum.
A rather high-profile area generating headlines this year has been connected vehicles. The technological challenges that must be addressed before autonomous cars can be unleashed onto the streets are significant. Vision is one critical factor; your car needs to be able to identify all road hazards as well as navigate from A to B. So, how can a car achieve that in an often overcrowded highway space? Computer vision can be described as graphics in reverse. Rather than us viewing the computer's world, the computer turns around to look at ours.
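To make "the computer looks at our world" concrete, here is a deliberately tiny sketch of one low-level building block such a vision system rests on: finding intensity edges, the kind of cue that marks lane boundaries or obstacle outlines. The image and threshold are invented for illustration; production systems use far more sophisticated pipelines.

```python
import numpy as np

# Toy scene: dark road surface on the left, bright lane marking on the right.
img = np.zeros((6, 10))
img[:, 5:] = 1.0

# Horizontal gradient: difference between each pixel and its left neighbor.
# Large gradient magnitude = a vertical edge in the scene.
gx = img[:, 1:] - img[:, :-1]

edge_cols = np.where(np.abs(gx[0]) > 0.5)[0]
print(edge_cols)   # the edge sits between columns 4 and 5
```

Stacking many such simple measurements (gradients, colors, motion between frames) is what lets a vision system begin to separate "road" from "hazard" in the scene in front of the car.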
Ben Dickson is a software engineer and freelance writer. He writes regularly on business, technology and politics. The transportation industry is associated with high maintenance costs, disasters, accidents, injuries and loss of life. Hundreds of thousands of people across the world are losing their lives to car accidents and road disasters every year. According to the National Safety Council, 38,300 people were killed and 4.4 million injured on U.S. roads alone in 2015.