Autonomous Cars, LiDAR, and Artificial Intelligence, Oh, My! A Look at CES 2017


The Consumer Electronics Show, one of the world's tentpole technology events, is a flashy vehicle for the engineering underneath. In chipsets, the show was dominated by Nvidia, one of many companies seeking to power the artificial intelligence in next-generation cars and image processors. There were also plenty of opportunities for component manufacturers to find places in the new cars and fleets that dominated some of the show floors, and chipmakers and sensor manufacturers are keeping a close eye on what customers want in the automotive space. So far, just seven states (Nevada, California, Florida, Michigan, Hawaii, Washington, and Tennessee) and the District of Columbia have passed bills related to autonomous driving.

Artificial intelligence virtual consultant helps deliver better patient care


WASHINGTON, DC (March 8, 2017)--Interventional radiologists at the University of California at Los Angeles (UCLA) are using technology found in self-driving cars to power a machine learning application that helps guide patients' interventional radiology care, according to research presented today at the Society of Interventional Radiology's 2017 Annual Scientific Meeting. The researchers used cutting-edge artificial intelligence to create a "chatbot" interventional radiologist that can automatically communicate with referring clinicians and quickly provide evidence-based answers to frequently asked questions. This allows the referring physician to provide real-time information to the patient about the next phase of treatment, or basic information about an interventional radiology treatment. "We theorized that artificial intelligence could be used in a low-cost, automated way in interventional radiology as a way to improve patient care," said Edward W. Lee, M.D., Ph.D., assistant professor of radiology at UCLA's David Geffen School of Medicine and one of the authors of the study. "Because artificial intelligence has already begun transforming many industries, it has great potential to also transform health care."
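The article does not describe how the UCLA chatbot is built, but the core idea of automatically matching a referring clinician's question to an evidence-based answer can be sketched with a simple bag-of-words similarity lookup. The FAQ entries and the cosine-matching approach below are illustrative assumptions, not the researchers' actual model or data.

```python
# Minimal sketch of an FAQ-style chatbot: match an incoming question
# to the closest stored question and return its curated answer.
# The FAQ content here is invented for illustration only.
from collections import Counter
import math

FAQ = {
    "What should my patient do before an IVC filter placement?":
        "Patients should fast for several hours before the procedure.",
    "How long does recovery take after an embolization?":
        "Most patients resume normal activity within about a week.",
}

def _vec(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def answer(question):
    """Return the stored answer whose question best matches the query."""
    q = _vec(question)
    best = max(FAQ, key=lambda k: _cosine(q, _vec(k)))
    return FAQ[best]
```

A production system would replace the keyword matcher with a trained language model, but the routing pattern (question in, best curated answer out) is the same.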

Lie back and think of cybersecurity: IBM lets students loose on Watson


IBM is teaming up with eight North American universities to further tune its cognitive system to tackle cybersecurity problems. Watson for Cyber Security, a platform already in pre-beta, will be further trained in "learning the nuances of security research findings and discovering patterns and evidence of hidden cyber attacks and threats that could otherwise be missed". IBM will work with the eight universities for a year, beginning in the autumn, to push the project forward. The universities selected are California State Polytechnic University, Pomona; Pennsylvania State University; Massachusetts Institute of Technology; New York University; the University of Maryland, Baltimore County (UMBC); the University of New Brunswick; the University of Ottawa; and the University of Waterloo. The project is ultimately designed to bridge the cybersecurity skills gap, a perennial issue in the industry.

Watson Will Soon Be a Bus Driver In Washington D.C.


IBM has teamed up with Local Motors, a Phoenix-based automotive manufacturer that made the first 3D-printed car, to create a self-driving electric bus. Named "Olli," the bus has room for 12 people and uses IBM Watson's cloud-based cognitive computing system to provide information to passengers. In addition to automatically driving you where you want to go using Phoenix Wings autonomous driving technology, Olli can respond to questions and provide information, similar to Amazon's Echo home assistant. The bus debuts today in the Washington D.C. area for the public to use during select times over the next several months, and the IBM-Local Motors team hopes to introduce Olli to the Miami and Las Vegas areas by the end of the year. By using Watson's speech to text, natural language classifier, entity extraction, and text to speech APIs, the bus can provide several services beyond taking you to your destination.
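The article names four Watson services that Olli chains together: speech to text, natural language classification, entity extraction, and text to speech. The stub functions below are placeholders standing in for those cloud calls, not IBM's actual SDK; the intents, rules, and replies are invented to show how such a pipeline might be wired up.

```python
# Sketch of a Watson-style conversational pipeline for a bus like Olli.
# Each stub stands in for a cloud API named in the article.

def speech_to_text(audio):
    # Stub: a real system would call a transcription service here;
    # for this demo we assume the "audio" is already a transcript.
    return audio

def classify_intent(text):
    # Stub natural-language classifier: map an utterance to an intent.
    text = text.lower()
    if "where" in text or "take me" in text:
        return "set_destination"
    if "how long" in text or "when" in text:
        return "eta_query"
    return "small_talk"

def extract_entities(text):
    # Stub entity extraction: pull a destination out of the utterance.
    words = text.lower().split()
    if "to" in words:
        return {"destination": " ".join(words[words.index("to") + 1:])}
    return {}

def text_to_speech(text):
    # Stub: a real system would synthesize audio; we return the text.
    return text

def handle_utterance(audio):
    """Chain the four services: STT -> intent -> entities -> TTS."""
    transcript = speech_to_text(audio)
    intent = classify_intent(transcript)
    entities = extract_entities(transcript)
    if intent == "set_destination" and "destination" in entities:
        reply = f"Heading to {entities['destination']}."
    elif intent == "eta_query":
        reply = "We should arrive in about ten minutes."
    else:
        reply = "Happy to chat while we ride!"
    return text_to_speech(reply)
```

For example, `handle_utterance("take me to union station")` routes through the destination intent and replies "Heading to union station." The design point is that each stage is an independent service, so any one of them can be swapped out without touching the others.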

Propagation of Delays in the National Airspace System

The National Airspace System (NAS) is a large and complex system with thousands of interrelated components: administration, control centers, airports, airlines, aircraft, passengers, etc. The complexity of the NAS creates many difficulties in management and control. One of the most pressing problems is flight delay. Delay creates high costs for airlines, complaints from passengers, and difficulties for airport operations. As demand on the system increases, the delay problem becomes more and more prominent. For this reason, it is essential for the Federal Aviation Administration to understand the causes of delay and to find ways to reduce it. Major contributing factors to delay are congestion at the origin airport, weather, increasing demand, and air traffic management (ATM) decisions such as Ground Delay Programs (GDPs). Delay is an inherently stochastic phenomenon: even if all known causal factors could be accounted for, macro-level NAS delays could not be predicted with certainty from micro-level aircraft information. This paper presents a stochastic model that uses Bayesian Networks (BNs) to model the relationships among different components of aircraft delay and the causal factors that affect delays. A case study on delays of departure flights from Chicago O'Hare International Airport (ORD) to Hartsfield-Jackson Atlanta International Airport (ATL) reveals how local and system-level environmental and human-caused factors combine to affect components of delay, and how these components contribute to the final arrival delay at the destination airport.
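The paper's core idea, that delay propagates probabilistically through a chain of causal factors, can be illustrated with a toy Bayesian network evaluated by enumeration. The three-node structure (bad weather at ORD, then departure delay, then arrival delay at ATL) and all probabilities below are invented for illustration; the paper's actual network and conditional probability tables are not reproduced here.

```python
# Toy Bayesian network: bad weather at ORD -> departure delay -> arrival
# delay at ATL. All probabilities are illustrative, not from the paper.

P_WEATHER = {True: 0.3, False: 0.7}            # P(bad weather at ORD)
P_DEP = {True:  {True: 0.8,  False: 0.2},      # P(dep delay | weather)
         False: {True: 0.25, False: 0.75}}
P_ARR = {True:  {True: 0.7, False: 0.3},       # P(arr delay | dep delay)
         False: {True: 0.1, False: 0.9}}

def p_arrival_delay(bad_weather=None):
    """P(arrival delay at ATL), optionally conditioned on ORD weather.

    Enumerates over the unobserved variables and sums out the joint
    probability along each path through the network.
    """
    weathers = [bad_weather] if bad_weather is not None else [True, False]
    total = 0.0
    for w in weathers:
        pw = 1.0 if bad_weather is not None else P_WEATHER[w]
        for d in (True, False):   # sum out departure delay
            total += pw * P_DEP[w][d] * P_ARR[d][True]
    return total
```

With these numbers, `p_arrival_delay(True)` is 0.58 versus 0.25 under good weather, showing how an origin-airport factor propagates into arrival delay at the destination; the paper's model does the same kind of inference over a much richer set of nodes.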