Key Points:
– AI already impacts many aspects of our daily lives at work and at home.
– Over the next decade, AI enterprise software revenue will grow from $644 million to nearly $39 billion.
– Here are the top 10 ways we predict AI will impact business over the next decade, including vehicular object detection, predictive maintenance, and intelligent recruitment.
Artificial intelligence already impacts many aspects of our daily lives at work, at home and as we move about. Over the next decade, analyst firm Tractica predicts that annual global AI enterprise software revenue will grow from $644 million in 2016 to nearly $39 billion by 2025, with services-related revenue reaching almost $150 billion. These functional areas apply to many use cases and industries, and they generate benefits for both businesses and individuals.
In this tutorial, we're going to cover implementing the TensorFlow Object Detection API in the realistic simulation environment of GTA V. For example, we can detect cars, people, stop signs, trucks, and traffic lights. If you want to learn more about the object detection API, or how to track your own custom objects, check out the TensorFlow Object Detection API tutorial. Other objects we can successfully detect in GTA: traffic lights, stop signs, dogs, fire hydrants, benches, and more.
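Each frame, the detector hands back arrays of boxes, scores, and class IDs, and the game loop keeps only the confident hits. The sketch below shows that post-processing step; the array names mirror the TensorFlow Object Detection API's standard outputs, but the values, label map, and the `filter_detections` helper are illustrative stand-ins, not real model output.

```python
# Sketch: filtering raw detector output by confidence, as the GTA V loop
# would do on each captured frame. Array layout follows the TF Object
# Detection API convention (boxes, scores, classes); values are made up.

# COCO-style label map (subset); IDs here are illustrative.
LABELS = {1: "person", 3: "car", 8: "truck", 10: "traffic light", 13: "stop sign"}

def filter_detections(boxes, scores, classes, threshold=0.5):
    """Keep only detections whose score clears the threshold."""
    results = []
    for box, score, cls in zip(boxes, scores, classes):
        if score >= threshold:
            results.append({"label": LABELS.get(cls, "unknown"),
                            "score": score,
                            "box": box})  # [ymin, xmin, ymax, xmax], normalized
    return results

# Fake single-frame output for illustration.
boxes = [[0.1, 0.2, 0.5, 0.6], [0.3, 0.1, 0.9, 0.4], [0.0, 0.0, 0.1, 0.1]]
scores = [0.92, 0.58, 0.12]
classes = [3, 10, 1]

for det in filter_detections(boxes, scores, classes):
    print(f"{det['label']}: {det['score']:.2f}")
```

In the GTA V setup, the kept boxes are then drawn over the captured game frame; raising the threshold trades missed detections for fewer false positives.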
The new cameras are there to feed clearer, closer shots of buildings and street signs into Google's image recognition algorithms. Thanks to recent research inside the maps division, when a Street View car captures photos of a stretch of road, algorithms can now automatically create new addresses in the company's maps database by locating and transcribing any street names and numbers. Higher-quality images coming from the new hardware now atop Google's Street View vehicles will allow those systems to extract information like that more reliably. This summer, Google began certifying some cameras as "Street View ready," meaning you can upload your own panoramas through the Street View mobile app to live on the company's service.
Using computer vision technology, facial analysis of the driver and/or passenger can be used to determine who out of a pool of drivers is sitting behind the wheel. Analyzing driver demographics in real time will also ensure the radio or music streaming service (such as Spotify, Pandora or Apple Music) displays the most relevant ads. Using computer vision for iris (gaze) tracking, eye openness detection, eyelid blink rate tracking, and head pose detection, in-car sensing technology can determine the driver's drowsiness and inattentiveness in real time. Now it's up to new technologies, like in-car sensing, computer vision, and AI to combat distracted driving and ensure our safety until fully autonomous vehicles hit the market.
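One common building block for the eye-openness part of such a system is the eye aspect ratio (EAR), computed from six landmarks around each eye: it stays roughly constant while the eye is open and drops toward zero as the eyelid closes. The sketch below is a minimal illustration of that idea; the landmark coordinates, threshold, and the `is_drowsy` helper are invented for the example, not taken from any production in-car sensing system.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio over six 2-D eye landmarks ordered p1..p6 around
    the eye: EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). The ratio is
    larger when the eye is open and falls toward zero as it closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def is_drowsy(ear_history, threshold=0.2, min_frames=3):
    """Flag drowsiness only when EAR stays below the threshold for several
    consecutive frames -- a sustained eye closure, not a normal blink."""
    if len(ear_history) < min_frames:
        return False
    return all(ear < threshold for ear in ear_history[-min_frames:])

# Illustrative landmark sets (pixel coordinates), not real tracker output.
open_eye   = [(0, 5), (3, 8), (7, 8), (10, 5), (7, 2), (3, 2)]
closed_eye = [(0, 5), (3, 5.5), (7, 5.5), (10, 5), (7, 4.5), (3, 4.5)]
```

Requiring several consecutive low-EAR frames is what separates drowsiness from blinking, which closes the eye for only a frame or two at typical camera rates.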
Over the past few years, machine learning and AI have pushed forward the capacity of computers to recognize images, understand context, and make decisions. A report from IHS Technology expects that the number of AI systems in vehicles will jump from 7 million in 2015 to 122 million by 2025, bringing new opportunities to enhance the capabilities of connected cars as more data becomes available. In addition, AI will push advanced driver assistance systems (ADAS) into the mainstream. For that, vehicles need AI, which is what enables the camera-based machine vision systems, radar-based detection units, driver condition evaluation and sensor fusion electronic control units (ECUs) that make autonomous vehicles work.
"Although this Statement focuses on the enormous safety potential of these new technologies, they offer an even wider range of possible benefits. Vehicle control systems that automatically accelerate and brake with the flow of traffic can conserve fuel more efficiently than the average driver. By eliminating a large number of vehicle crashes, highly effective crash avoidance technologies can reduce fuel consumption by also eliminating the traffic congestion that crashes cause every day on our roads. Reductions in fuel consumption, of course, yield corresponding reductions in greenhouse gas emissions. To the extent vehicles can communicate with each other and with the highway infrastructure, the potential for safer and more efficient driving will be increased even more."
Enter Toyota's newest companion: the palm-sized Kirobo Mini robot. Toyota calls Kirobo Mini a "communication partner" and it can read facial expressions and recall past vehicle trips. Kirobo Mini connects to a mobile phone via Bluetooth and uses a camera and voice and facial-expression recognition software in a setup that someday may work its way into vehicles. Toyota introduced a first-generation talking Kirobo robot in 2013.
Like much of the technology needed to support and enable autonomous vehicles, intelligent vision systems already exist and are used in other industries, for example in industrial robots. Automotive deployment will require processing power that is only now becoming available, through advances in System-on-Chip platforms, advanced software, deep learning algorithms and open source projects. It is enabled by the development of Heterogeneous System Architectures (HSA): platforms that combine powerful general-purpose Microprocessing Units (MPUs) with highly parallel Graphics Processing Units (GPUs). The software infrastructure used to develop intelligent vision systems, such as OpenCV (Open Source Computer Vision) and OpenCL (Open Computing Language), requires high-performance processing platforms to execute its advanced algorithms.
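To see why vision workloads map so naturally onto GPUs, consider a classic building block: a 3x3 Sobel edge filter, where every output pixel is computed independently from a small neighborhood. The sketch below is a pure-Python illustration of that per-pixel structure, not how OpenCV or an OpenCL kernel would actually be written; the function names and the tiny synthetic frame are invented for the example.

```python
# Sketch: a 3x3 Sobel edge filter, the kind of per-pixel, embarrassingly
# parallel work that vision libraries run and that HSA platforms offload
# to GPU cores. On a GPU each output pixel would be one work-item.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3x3(image, kernel, x, y):
    """Apply a 3x3 kernel centered on interior pixel (x, y)."""
    return sum(kernel[j][i] * image[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

def sobel_magnitude(image):
    """Gradient magnitude at each interior pixel; edges score high."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):        # on a GPU this double loop becomes
        for x in range(1, w - 1):    # thousands of concurrent work-items
            gx = convolve3x3(image, SOBEL_X, x, y)
            gy = convolve3x3(image, SOBEL_Y, x, y)
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Tiny synthetic frame: dark left half, bright right half -> vertical edge.
frame = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
edges = sobel_magnitude(frame)
```

Because no output pixel depends on any other, a GPU can evaluate all of them at once, which is the parallelism that makes real-time vision on HSA platforms feasible.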
In the autonomous car, AI will advance machine vision systems, and it will also migrate into sensor fusion electronic control units (ECUs). In a phone interview with EE Times, Luca De Ambroggi, principal analyst for automotive semiconductors at IHS, told us, "AI is viewed as a key enabler for real autonomous vehicles." EE Times asked the IHS analyst to break down automotive AI, including its advancements, applications inside vehicles, and the hardware available to process AI algorithms. EE Times: What hardware is best suited to implement AI applications [for autonomous cars] today?
The market research firm expects the attach rate of AI-based systems in new vehicles to increase from 8 percent in 2015 (the vast majority of today's AI systems in cars are focused on speech recognition) to 109 percent in 2025; IHS sees multiple AI systems of various types being installed in many cars. In the human-machine interface in vehicles, IHS believes AI will play a role in speech and gesture recognition, eye tracking, driver monitoring and natural language interfaces.