It's been an amazing year leading the Internet of Things (IoT) Group at Intel. During this time we have been working hard to define and develop a data-driven technology foundation for industry innovation. Our strategy is to drive end-to-end distributed computing in every vertical by focusing on silicon platforms and workload consolidation at the edge. Critical to our success is aligning our ecosystem of partners and developers to deliver these benefits. This focused effort is paying off: Intel's IoT business grew by 20 percent in 2017 and continued with strong growth in the first quarter of this year.
This sponsored post explores one of the solutions making computer vision a reality today for more applications: the Intel Distribution of OpenVINO (Open Visual Inference and Neural Network Optimization) toolkit. With the demand for intelligent vision solutions increasing everywhere from edge to cloud, enterprises of every type are demanding visually enabled – and intelligent – applications for surveillance, retail, manufacturing, smart cities and homes, office automation, autonomous driving, and more coming every day. Increasingly, AI applications are powered by smart vision inputs. OpenVINO includes Intel's deep learning deployment toolkit, which provides a model optimizer that imports and converts trained models from a number of frameworks (Caffe, TensorFlow, MXNet, ONNX, Kaldi). Until now, most intelligent computer vision applications have required a wealth of machine learning, deep learning, and data science knowledge to enable simple object recognition, much less facial recognition or collision avoidance.
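As a rough sketch of what this deployment flow looks like in practice: the Model Optimizer converts a trained model into OpenVINO's Intermediate Representation (IR), which the inference engine then runs on a target device. The file names and output directory below are illustrative, and exact flags vary by toolkit version:

```shell
# Convert a trained TensorFlow frozen graph into OpenVINO's
# Intermediate Representation: an .xml topology file plus a
# .bin weights file. Paths and model name are illustrative.
python3 mo.py \
    --input_model frozen_inference_graph.pb \
    --output_dir ./ir \
    --data_type FP16   # half-precision weights for edge targets
```

The resulting IR pair (`.xml`/`.bin`) can then be loaded by the inference engine on CPUs, integrated GPUs, or other supported accelerators without retraining the model.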
What's New: Today at the Intel Industrial Summit 2020, Intel announced new enhanced internet of things (IoT) capabilities. The 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series bring new artificial intelligence (AI), security, functional safety and real-time capabilities to edge customers. With a robust hardware and software portfolio, an unparalleled ecosystem and 15,000 customer deployments globally, Intel is providing robust solutions for the $65 billion edge silicon market opportunity by 2024.

"By 2023, up to 70% of all enterprises will process data at the edge.1 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series processors represent our most significant step forward yet in enhancements for IoT, bringing features that address our customers' current needs, while setting the foundation for capabilities with advancements in AI and 5G."
–John Healy, Intel vice president of the Internet of Things Group and general manager of Platform Management and Customer Engineering

Why It's Important: Intel works closely with customers to build proofs of concept, optimize solutions and collect feedback along the way. Innovations delivered with 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series processors are a response to challenges felt across the IoT industry: edge complexity, total cost of ownership and a range of environmental conditions.
The Raspberry Pi Foundation has announced it's bringing the OpenVX 1.3 API to Raspberry Pi devices to improve computer vision on the popular single-board computers. The new open and royalty-free API comes from the Khronos Group, which has backed standards like Vulkan and OpenCL. Khronos members include most big-name software and hardware vendors – AMD, Apple, Arm, Epic Games, Google, Samsung, Intel, Nvidia and so on – as well as companies with a stake in its standards, like Boeing and IKEA. "The Khronos Group and Raspberry Pi have come together to work on an open-source implementation of OpenVX 1.3, which passes the conformance on Raspberry Pi," explained Kiriti Nagesh Gowda, AMD's MTS software development engineer. "The open-source implementation passes the Vision, Enhanced Vision, & Neural Net conformance profiles specified in OpenVX 1.3 on Raspberry Pi."
TensorFlow* is one of the leading deep learning and machine learning frameworks today. Earlier in 2017, Intel worked with Google to incorporate optimizations for Intel Xeon and Xeon Phi processor-based platforms using the Intel Math Kernel Library (Intel MKL). These optimizations resulted in orders-of-magnitude improvements in performance – up to 70x higher performance for training and up to 85x higher performance for inference. In this blog we provide a performance update for a number of deep learning models running on the Intel Xeon Scalable processor. The Intel Xeon Scalable processor provides up to 28 cores, which brings additional computing power to the table compared to the 22 cores of its predecessor.