Whether they drive themselves or improve the safety of their drivers, tomorrow's vehicles will be defined by software. Much of that software, however, won't be written line by line by developers; it will be learned from data. To prepare for that future, the transportation industry is integrating AI computers into cars, trucks and shuttles and training them using deep learning in the data center. A benefit of such a software-defined system is that it can handle the full range of automated driving, from Level 2 to Level 5. Speaking in Tokyo at the last stop on NVIDIA's seven-city GPU Technology Conference world tour, NVIDIA founder and CEO Jensen Huang demonstrated how the NVIDIA DRIVE platform provides this scalable architecture for autonomous driving. "The future is surely a software defined car," said Huang.
Deep learning, an advanced machine-learning technique, uses layered (hence "deep") neural networks (neural nets) that are loosely modelled on the human brain. Machine learning itself is a subset of artificial intelligence (AI), and is broadly about teaching a computer to spot patterns and draw connections from mountains of data without being explicitly programmed for the specific task -- a recommendation engine being a good example. Neural nets, for their part, enable image recognition, speech recognition, self-driving cars and smart-home automation devices, among other things. The success of deep learning, however, depends on the availability of huge data sets on which these neural nets can be trained, coupled with a lot of computing power, memory and energy. To address this, says a 14 November press release, researchers at the University of Waterloo, Canada, took a cue from nature to make the process more efficient, producing deep-learning software compact enough to fit on mobile computer chips for use in everything from smartphones to industrial robots.
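The layered structure described above can be made concrete with a toy example. Below is a minimal sketch of a two-layer neural network trained by a single gradient-descent step; the sizes, data and learning rate are all illustrative, and real systems use frameworks and far more data.

```python
import numpy as np

# A minimal sketch of a layered ("deep") neural network: two dense layers
# with a ReLU activation in between, trained by one step of gradient
# descent on mean-squared error. All sizes and values are illustrative.
rng = np.random.default_rng(0)

X = rng.normal(size=(8, 4))           # 8 samples, 4 features each
y = rng.normal(size=(8, 1))           # regression targets

W1 = rng.normal(size=(4, 16)) * 0.1   # first-layer weights
W2 = rng.normal(size=(16, 1)) * 0.1   # second-layer weights

def forward(X):
    h = np.maximum(X @ W1, 0.0)       # hidden layer with ReLU
    return h, h @ W2                  # hidden activations, network output

h, pred = forward(X)
loss_before = float(np.mean((pred - y) ** 2))

# Backpropagation: the chain rule applied layer by layer. Repeating this
# update over huge data sets is what "training" a deep network means.
grad_out = 2.0 * (pred - y) / len(X)
grad_W2 = h.T @ grad_out
grad_h = grad_out @ W2.T
grad_W1 = X.T @ (grad_h * (h > 0))    # ReLU passes gradient only where h > 0
W1 -= 0.01 * grad_W1
W2 -= 0.01 * grad_W2

loss_after = float(np.mean((forward(X)[1] - y) ** 2))
```

Even this tiny network illustrates why deep learning is data- and compute-hungry: each training step touches every weight in every layer, and useful networks repeat such steps millions of times.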
Governor Andrew Cuomo of New York declared last month that New York City will join 13 other states in testing self-driving cars: "Autonomous vehicles have the potential to save time and save lives, and we are proud to be working with GM and Cruise on the future of this exciting new technology." For General Motors, this represents a major milestone in the development of its Cruise software, since the knowledge gained on Manhattan's busy streets will be invaluable in accelerating its deep learning technology. In the spirit of one-upmanship, Waymo went a step further by declaring this week that it will be the first company in the world to ferry passengers completely autonomously, without human safety engineers behind the wheel. As unmanned systems speed toward consumer adoption, one challenge that Cruise, Waymo and others may encounter within the busy canyons of urban centers is the loss of Global Positioning System (GPS) satellite data. To navigate our world accurately, robots rely on a complex suite of coordinated data systems, bouncing signals off orbiting satellites for positioning and communication links.
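When GPS signals are blocked by tall buildings, a vehicle must keep estimating its own position from onboard sensors until a fix returns. A minimal sketch of that idea, using wheel-odometry dead reckoning with hypothetical sensor readings (production systems fuse IMU, lidar and map data with a Kalman filter rather than integrating naively):

```python
import math

# Dead-reckoning sketch: while GPS is unavailable, integrate measured
# speed and yaw rate to propagate the last known position estimate.
# All sensor values below are hypothetical.
x, y, heading = 0.0, 0.0, 0.0   # last known GPS fix (metres, radians)

# (speed m/s, yaw rate rad/s) samples taken at 1-second intervals
samples = [(10.0, 0.0), (10.0, 0.1), (9.0, 0.1), (9.0, 0.0)]

dt = 1.0
for speed, yaw_rate in samples:
    heading += yaw_rate * dt                 # update orientation first
    x += speed * math.cos(heading) * dt      # advance along new heading
    y += speed * math.sin(heading) * dt
```

The weakness this sketch makes visible is drift: every sample adds a little sensor error to the estimate, which is why sustained GPS outages in urban canyons are a genuine problem for autonomous vehicles.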
At the Consumer Electronics Show in Las Vegas two years ago, a leading car maker unveiled a machine that it said was a vision of the future. It certainly looked the part, with a sleek silver body shell, a steering wheel that retracted into the dashboard and four lounge-style chairs that could rotate to face one another. The most startling feature, though, was its self-driving ability. It was filmed navigating through San Francisco shortly before its futuristic doors swung open to journalists. We stepped onto the car's wooden floor and looked at a calming forest projected onto the windows as the car drove itself along the runway of a nearby airbase.
After announcing plans this month to supply self-driving vehicles for Lyft's ride-hailing network, autonomous-tech developer Drive.ai has scored financial backing from Southeast Asian rideshare powerhouse Grab and plans to expand into Singapore. The Singapore office will study that market as a potential place to deploy vehicles equipped with the company's software and self-driving hardware kits in government and business fleets, Tandon said. Amid the rush by auto and tech firms to perfect robotic vehicles, Tandon and his co-founders, all former researchers at Stanford University's Artificial Intelligence Lab, founded Drive.ai to specialize in deep learning-based driving software for business, government and shared vehicle fleets. Though small relative to well-funded programs at Waymo, General Motors' Cruise, Uber's Advanced Technologies Group and Ford's Argo AI, Mountain View, California-based Drive.ai has made quick progress.
By the middle of 2018, Nvidia believes it will have a system capable of Level 5 autonomy in the hands of the auto industry, enabling fully self-driving vehicles. Pegasus is rated at 320 trillion operations per second, which the company claims is a thirteen-fold increase over the previous generation. In May, Nvidia took the wraps off its Tesla V100 accelerator aimed at deep learning. The company said the V100 delivers 1.5 times the general-purpose FLOPS of Pascal, a 12-fold improvement for deep learning training, and six times the performance for deep learning inference.
Facebook seems to have a strategy of leveraging its capabilities in social marketing, AR and VR and, perhaps surprisingly, its advanced AI and deep learning capabilities to support the development of autonomous vehicles. Potential car buyers spend anywhere from 30 to 50 minutes every day on Facebook, which has helped the social network make significant inroads in digital prospecting and omni-channel commerce. Facebook believes that car companies are focusing more on the connected car than on the connected consumer. With every new car-buying journey now beginning online, Facebook's huge trove of data on customers' social behavior makes it possible to personalize and fully customize that experience.
Dr. Weng-Keen Wong of the NSF drew much the same distinction between specific- and general-case algorithms in his talk "Research in Deep Learning: A Perspective From NSF"; the distinction was also raised by Nvidia's Dale Southard during the disruptive-technology panel. Tim Barr's (Cray) "Perspectives on HPC-Enabled AI" showed how Cray's HPC technologies can be leveraged for machine and deep learning in vision, speech and language. Fresh off the integration of SGI technology into its stack, HPE gave a talk that not only highlighted the newer software platforms that learning systems leverage, but demonstrated that the company's portfolio of systems and experience in both HPC and hyperscale environments is impressive indeed. Stand-alone image recognition is really cool, but as expounded upon above, the true benefit of deep learning comes from an integrated workflow in which data sources are ingested by a general-purpose deep learning platform, with outcomes that benefit business, industry and academia.
We are in the crawling stages of Artificial Intelligence and Deep Learning. So that everyone is aware: Deep Learning is a subset of Machine Learning, and Machine Learning is a subset of Artificial Intelligence. Companies like Tesla, Uber, and Google are using Deep Learning to make self-driving vehicles a reality. We hope you like these Artificial Intelligence and Deep Learning quotes.
The South Korean electronics maker has recently been approved to test its deep-learning-based autonomous vehicles on public roads in Korea. For small companies and students, the race course offered a large, safe testing environment.