Video is the world's largest source of data, generated every day by more than 500 million cameras worldwide, a number slated to double by 2020. The potential, if we could actually analyze all that data, is off the charts: footage from government property and public transit, commercial buildings, roadways, traffic stops, retail locations, and more. The result would be what NVIDIA calls AI Cities, a thinking robot with billions of eyes trained on residents and programmed to help keep people safe.
AI has become a hot topic among tech corporations, startups, investors, the media, and the public. That attention rests on machine learning platforms that have already been doing hard work for years. Last month, NVIDIA announced the addition of Huawei and Alibaba as adopters of Metropolis, its AI platform for smart cities. More than 50 organizations are already using Metropolis and, according to NVIDIA, there will be 1 billion video cameras worldwide by 2020 that could be connected to AI platforms to make cities smarter. When connected to AI, cameras can be used to recognize shapes, faces and even the emotions of individuals, which has varied applications: autonomous cars, video surveillance (traffic flow, crime monitoring), and consumer behavior analysis (reactions to ads, for example).
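The camera-to-AI pipeline described here follows a common shape: frames come off a video stream, a deep learning detector labels what it sees, and downstream logic reacts to confident detections. A minimal, hypothetical sketch of that flow (the `fake_detector` stub stands in for a real GPU-backed model; none of the names below are NVIDIA's actual Metropolis APIs):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str         # e.g. "face", "car"
    confidence: float  # model confidence, 0.0 to 1.0

# Stand-in for a real deep learning detector; it returns canned
# results so the pipeline logic can be shown end to end.
def fake_detector(frame: bytes) -> List[Detection]:
    return [Detection("face", 0.97), Detection("car", 0.88)]

def analyze_stream(frames, detector: Callable[[bytes], List[Detection]],
                   threshold: float = 0.9):
    """Run the detector on each frame; keep only confident detections."""
    alerts = []
    for i, frame in enumerate(frames):
        for det in detector(frame):
            if det.confidence >= threshold:
                alerts.append((i, det.label, det.confidence))
    return alerts

# Two dummy frames; a real deployment would decode a live video feed.
alerts = analyze_stream([b"frame0", b"frame1"], fake_detector)
```

With the 0.9 threshold, the low-confidence "car" detection is filtered out and only the "face" detections are reported; a real system would tune that threshold per application (traffic counting tolerates more noise than crime monitoring).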
Nvidia has open-sourced the design of one of the AI chips it built to power deep learning. By releasing the design, Nvidia wants other AI chip makers to help bridge the gap between demand for deep learning hardware and what is available. With other manufacturers using its chip design technology, Nvidia plans to boost sales of its other hardware and software. The module whose design Nvidia released, known as the Deep Learning Accelerator (DLA), is used for autonomous vehicles and associated technologies.
Blue River Technology, a Silicon Valley startup acquired for $305 million last month by Deere & Co., is using computer vision powered by Nvidia Corp. to help lettuce farmers boost productivity and reduce or reallocate labor costs. Willy Pell, who oversees new technology at Blue River, believes machines outfitted to perceive the world and act on what they sense without human intervention will drive the next wave of Silicon Valley investment. Machines outfitted with camera eyes and silicon brains soon will be able to take over "all kinds of repetitive tasks." Deepu Talla, Nvidia's vice president in charge of AI for applications such as robotics and drones, believes both local and remote processing will be necessary.
Nvidia built its business on graphics chips; then researchers found those chips were also good at powering deep learning, the software technique behind recent enthusiasm for artificial intelligence. Longtime chip kingpin Intel and a stampede of startups are building and offering chips to power smart machines. This week Nvidia released as open source the designs to a chip module it made to power deep learning in cars, robots, and smaller connected devices such as cameras. In a tweet this week, one Intel engineer called Nvidia's open source tactic a "devastating blow" to startups working on deep learning chips.
While Talla's unit works to put the DLA in cars, robots, and drones, he expects others to build chips that put it into diverse markets ranging from security cameras to kitchen gadgets to medical devices.
NVIDIA has announced that it has brought together a dozen software partners for its Metropolis Software Partner Program. The NVIDIA Metropolis intelligent video analytics platform applies deep learning to video streams for applications such as public safety, traffic management and resource optimization. To make it into the program, NVIDIA says, partners must have production-ready, field-proven solutions. The company says deep learning solutions are fueling a growing array of use cases, such as helping first responders react to emergencies more quickly and delivering more personalized experiences to shoppers.
For months now, major companies have been hooking up--Uber and Daimler, Lyft and General Motors, Microsoft and Volvo--but Intel CEO Brian Krzanich's announcement on Monday that the giant chipmaker is helping Waymo, Google's self-driving car project, build robocar technology registers as some seriously juicy gossip. Krzanich said Monday that Waymo's newest self-driving Chrysler Pacificas, delivered last December, use Intel technology to process what's going on around them and make safe decisions in real time. And last year, Google announced it had created its own specialized chip that could help AVs recognize common driving situations and react efficiently and safely. "Our self-driving cars require the highest-performance compute to make safe driving decisions in real-time," Waymo CEO John Krafcik said in a statement.
From setting shutter speeds, apertures and ISOs to choosing just the right filter, Arsenal's creator was quickly reminded that photography is an intensely technical undertaking. Having earlier founded a company that used recurrent neural networks for natural language processing, he thought he might have a solution: machine learning. Just how much pent-up demand there was for simpler DSLR camera settings became clear when Arsenal introduced itself on Kickstarter earlier this year. The companion mobile app takes over, literally seeing what the camera's sensors are picking up and enabling the photographer to establish ideal settings with the slide of a finger -- and without having to know anything about shutter speed, aperture or white balance.
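The settings such an app juggles are tied together by simple exposure arithmetic: opening the aperture one stop while halving the shutter time leaves the exposure essentially unchanged. A quick worked sketch using the standard exposure-value formula (this is textbook photography math, not Arsenal's actual code):

```python
import math

def exposure_value(aperture: float, shutter_s: float) -> float:
    """EV = log2(N^2 / t), with f-number N and shutter time t in seconds."""
    return math.log2(aperture ** 2 / shutter_s)

# f/2 at 1/200 s and f/2.8 at 1/100 s admit almost exactly the same light
# (the small residual comes from "f/2.8" being a rounded f-number).
ev_a = exposure_value(2.0, 1 / 200)
ev_b = exposure_value(2.8, 1 / 100)
```

Software that "sees what the camera's sensors are picking up" can therefore trade one setting against another along this constant-EV surface, choosing, say, a faster shutter for a moving subject without changing overall brightness.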
Volvo and NVIDIA have announced that they're teaming up with Zenuity to develop the next generation of self-driving vehicle systems, which will be built on NVIDIA's Drive PX AI module. What's more, NVIDIA hopes that integrating additional autonomous safety features like automatic braking will help increase the scores of AI-equipped vehicles in the DOT's New Car Assessment Program (NCAP) crash-test safety certification. With an increasing number of vehicles trading data with each other as they travel, why not have them talk to the infrastructure around them as well? "We'll be able to protect areas of potential congestion and really work with infrastructure, vehicles and navigation systems to optimize traffic flow and ultimately reduce congestion."