People with certain visual impairments aren't allowed to drive, for fairly obvious reasons. Now, a study from the University of Washington (UW) has shown that artificial intelligence (AI) isn't immune to vision problems when operating motor vehicles either. The researchers have determined that machine learning models can be prone to a kind of physical-world attack that impedes their ability to process images. Concretely, AI systems can have trouble reading defaced street signs. For their study, the researchers focused on two potential types of physical attacks.
Nvidia researchers have used a pair of generative adversarial networks (GANs) along with some unsupervised learning to create an image-to-image translation network that could allow for artificial intelligence (AI) training times to be reduced. In a blog post, the company explained how its GANs are trained on different data sets, but share a "latent space assumption" that allows for the generation of images by passing the image representation from one GAN to the next. "The use of GANs isn't novel in unsupervised learning, but the Nvidia research produced results -- with shadows peeking through thick foliage under partly cloudy skies -- far ahead of anything seen before," the company said. The benefits of this work could allow for network training to require less labelled data, it said. "For self-driving cars alone, training data could be captured once and then simulated across a variety of virtual conditions: Sunny, cloudy, snowy, rainy, nighttime, etc," Nvidia said.
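The core of the "latent space assumption" Nvidia describes is that images from two domains (say, a daytime and a nighttime scene) can be mapped into one shared representation, so an image encoded from one domain can be decoded in the other. The following is a minimal numpy sketch of that idea only — the linear maps, dimensions, and function names here are illustrative stand-ins, not Nvidia's actual deep convolutional GAN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: each "image" is a flat 8-dim vector; the shared latent space is 4-dim.
IMG_DIM, LATENT_DIM = 8, 4

# One encoder/decoder pair per domain (e.g. daytime vs. nighttime scenes).
# In the real system these are trained adversarial networks; random linear
# maps stand in here purely to show the data flow.
enc_a = rng.normal(size=(LATENT_DIM, IMG_DIM))
enc_b = rng.normal(size=(LATENT_DIM, IMG_DIM))
dec_a = rng.normal(size=(IMG_DIM, LATENT_DIM))
dec_b = rng.normal(size=(IMG_DIM, LATENT_DIM))

def translate_a_to_b(image_a):
    """Encode a domain-A image into the shared latent space, then decode it as domain B."""
    z = enc_a @ image_a   # shared latent code
    return dec_b @ z      # the same content, rendered in domain B

image_a = rng.normal(size=IMG_DIM)
image_b = translate_a_to_b(image_a)
print(image_b.shape)  # same image dimensionality, now in domain B
```

This is why labelled data needs shrink: once the shared latent space is learned, one captured scene can be re-rendered across many virtual conditions instead of being collected separately for each.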
Deep learning, an advanced machine-learning technique, uses layered (hence "deep") neural networks (neural nets) that are loosely modelled on the human brain. Machine learning itself is a subset of artificial intelligence (AI), and is broadly about teaching a computer how to spot patterns and use mountains of data to make connections without being explicitly programmed for the specific task--a recommendation engine being a good example. Neural nets, for their part, enable image recognition, speech recognition, self-driving cars and smart-home automation devices, among other things. However, the success of deep learning is primarily dependent on the availability of huge data sets on which these neural nets can be trained, coupled with a lot of computing power, memory and energy. To address this issue, says a 14 November press release, researchers at the University of Waterloo, Canada, took a cue from nature to make this process more efficient, making deep-learning software compact enough to fit on mobile computer chips for use in everything from smartphones to industrial robots.
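The "learning patterns from data without task-specific programming" idea above can be shown with a toy example: XOR is a pattern no single linear layer can capture, but a small layered net learns it from examples alone. This is a minimal numpy sketch with illustrative sizes and learning rate, not any particular production system:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR truth table: the target pattern is given only as data, never as rules.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two weight layers -> one hidden layer of 4 units (the "deep" part, minimally).
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: gradient of mean squared error through the sigmoids.
    d_out = 2 * (out - y) * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(losses[0], losses[-1])  # error shrinks as the net fits the pattern
```

Even at this scale the data-hunger is visible: the net needs thousands of passes over its four examples, which hints at why real deep learning demands the huge data sets and computing power the article describes.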
In an era of great uncertainty and disruption for automotive manufacturers, Mercedes and its parent company Daimler are jumping in full throttle as leaders of the 4th Industrial Revolution. Not only are they designing new vehicles, but their services, influence in the transportation industry and factories are transforming to embrace the new opportunities and demands of their customers. Other companies should follow their lead to thrive in the new industrial revolution. What is the 4th Industrial Revolution? Often referred to as Industry 4.0, the 4th Industrial Revolution is the shift to smart factories that use a combination of cyber-physical systems, the Internet of Things and the Internet of Systems to connect the entire production chain and make decisions on their own.
Governor Andrew Cuomo of the State of New York declared last month that New York City will join 13 other states in testing self-driving cars: "Autonomous vehicles have the potential to save time and save lives, and we are proud to be working with GM and Cruise on the future of this exciting new technology." For General Motors, this represents a major milestone in the development of its Cruise software, since the knowledge gained on Manhattan's busy streets will be invaluable in accelerating its deep learning technology. In the spirit of one-upmanship, Waymo went one step further by declaring this week that it will be the first car company in the world to ferry passengers completely autonomously (without human engineers safeguarding the wheel). As unmanned systems speed toward consumer adoption, one challenge that Cruise, Waymo and others may encounter within the busy canyons of urban centers is the loss of Global Positioning System (GPS) satellite data. Robots require a complex suite of coordinating data systems that bounce between orbiting satellites to provide positioning and communication links to accurately navigate our world.
Earlier this year, we open-sourced a research project called AirSim, a high-fidelity system for testing the safety of artificial intelligence systems. AirSim provides realistic environments, vehicle dynamics and sensing for research into how autonomous vehicles that use AI can operate safely in the open world. Today, we are sharing an update to AirSim: We have extended the system to include car simulation, which will help advance the research and development of self-driving vehicles. The latest version is available now on GitHub as an open-source, cross-platform offering. The updated version of AirSim also includes many other features and enhancements, including additional tools for testing airborne vehicles.
Paint-on-the-floor pedestrian crossings don't cut it anymore. They are outdated and the cause of 20 incidents a day in the UK. Architectural firm Umbrellium reckons it's got a solution: a sensor-packed digital crossing that responds to your movements. "We've been designing a pedestrian crossing for the 21st century," says Usman Haque, Umbrellium's founding partner. "Crossings that you know were designed in the 1950s, when there was a different type of city and interaction."
Continued from: "Advanced image sensors take automotive vision beyond 20/20." And there are many others now in the race to process all of that vehicle sensor data. Among them, Toshiba has been evolving its Visconti line of image recognition processors in parallel with increasingly demanding European New Car Assessment Programme (Euro NCAP) requirements. Starting in 2014, the Euro NCAP began rating vehicles based on active safety technologies such as lane departure warning (LDW), lane keep assist (LKA), and autonomous emergency braking (AEB). These requirements extended to daytime pedestrian AEB and speed assist systems (SAS) in 2016.
This week at the Intel Shift Conference in New York, I had the opportunity to listen to my colleague Amir Khosrowshahi, CTO of the Intel AI Products group, speak to a gathering of business executives about the transformative impacts of AI. Amir explained how artificial intelligence (AI) can change what organizations do and how they do it, creating new business opportunities. Every company is in some phase of its AI adoption journey: evaluating and understanding the opportunities, testing AI use cases and their outcomes on the business, or fully integrating AI systems that increasingly drive business metrics. AI concepts have been around for more than 60 years, but we now have the technology to make AI a reality. AI is predicated on the simple idea that, with the right training, a computer can simulate human decision making.
After announcing plans this month to supply self-driving vehicles for Lyft's ride-hailing network, autonomous tech developer Drive.ai has scored financial backing from Southeast Asian rideshare powerhouse Grab and plans to expand into Singapore. The Singapore office will study that market as a potential place to deploy vehicles equipped with its software and self-driving hardware kits in government and business fleets, Tandon said. Amid the rush by auto and tech firms to perfect robotic vehicles, Tandon and his co-founders, who were all researchers from Stanford University's Artificial Intelligence Lab, founded Drive.ai to specialize in deep learning-based driving software for business, government and shared vehicle fleets. Small relative to well-funded programs at Waymo, General Motors' Cruise, Uber's Advanced Technologies Group and Ford's Argo AI, Mountain View, California-based Drive.ai has made quick progress.