If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Nvidia has gone small to create the latest member of its Jetson series aimed at autonomous and embedded systems, unveiling the Nano on Tuesday. Powered by a 128-core Maxwell GPU capable of 472 GFLOPS at half precision and a quad-core ARM Cortex-A57 CPU, with 4GB of LPDDR4 memory and 16GB of flash storage, the $130 Jetson Nano has been billed as a low-powered AI computer. For H.264 and H.265 video, the Nano can process eight 1080p streams in parallel while running object detection on all eight streams simultaneously at 30 frames per second. At the start of the month, Google launched its Coral board, which uses Google's Edge TPU. Nvidia senior manager of product for autonomous machines Jesse Clayton said the Edge TPU was "fast for a few small classification networks", but that the architecture was not suited for "large, deep neural networks".
These new devices are made by Coral, Google's new platform for enabling embedded developers to build amazing experiences with local AI. Coral's first products are powered by Google's Edge TPU chip, and are purpose-built to run TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices. As a developer, you can use Coral devices to explore and prototype new applications for on-device machine learning inference. Coral's Dev Board is a single-board Linux computer with a removable System-On-Module (SOM) hosting the Edge TPU. It allows you to prototype applications and then scale to production by including the SOM in your own devices.
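As a rough sketch of the workflow Coral targets (the model architecture and tensor shapes here are illustrative assumptions, and this plain-CPU example omits the Edge TPU delegate step), a trained model is converted to TensorFlow Lite and then run with the Lite interpreter:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# A tiny illustrative model; a real application would use a trained network.
inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

# Convert to TensorFlow Lite, the format Coral devices are built to run.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Run on-device-style inference with the Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 3), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])  # shape (1, 2)
```

On Coral hardware, the same interpreter would additionally load the Edge TPU delegate so that supported operations execute on the accelerator rather than the CPU.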
During Injong Rhee's keynote at last year's Google Next conference in San Francisco, Google announced two new upcoming hardware products: a development board and a USB accelerator stick. Both products were built around Google's Edge TPU, their purpose-built ASIC designed to run machine learning inference at the edge. Almost a year on, the hardware quietly launched "into Beta" under the name "Coral" earlier today, and both the development board and the USB accelerator are now available for purchase. The new hardware will officially be announced during the TensorFlow Dev Summit later this week. Machine learning development is done in two stages: a model is first trained, typically in the cloud, and then deployed to run inference on new data.
TensorFlow is the world's most popular open source machine learning library. Since its initial release in 2015, the Google Brain product has been downloaded over 41 million times. At this week's 2019 TensorFlow Dev Summit, Google announced a major upgrade to the framework: the TensorFlow 2.0 Alpha. TensorFlow 2.0 focuses on simplicity and ease of use, with updates like eager execution, intuitive higher-level APIs, and flexible model building on any platform. Last August, Google Brain software engineer Martin Wicke posted in Google Groups that TensorFlow 2.0 would be a major milestone, which raised expectations across the machine learning community. According to the official TensorFlow 2.0 guide, Google has delivered on those expectations.
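The headline changes can be seen in a few lines (a minimal sketch, assuming TensorFlow 2.x): eager execution means operations run immediately and return values, with no Session or graph-building step, and Keras is the recommended higher-level API.

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# Eager execution is on by default in TF 2.0: ops run immediately
# and return concrete values, without building a graph or session.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)  # evaluated right away; y.numpy() is [[7, 10], [15, 22]]

# The intuitive higher-level Keras API for model building.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

In TensorFlow 1.x, the same matrix multiply would have required constructing a graph and evaluating it inside a `tf.Session`.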
The TensorFlow Dev Summit 2019 continued to roll out the goodies, with new software updates and hardware announcements. When it comes to AI and machine learning, Google is no stranger to innovation. Currently in Beta, Coral consists of a development board and a USB accelerator stick. Its low power demands suit embedded applications, and it can be deployed offline or in areas with limited Internet connectivity. See what powerful machine learning these pieces of hardware can do.
Google has officially released its Edge TPU (TPU stands for tensor processing unit) processors in its new Coral development board and USB accelerator. The Edge TPU is Google's inference-focused application specific integrated circuit (ASIC) that targets low-power "edge" devices and complements the company's "Cloud TPU," which targets data centers. Last July, Google announced that it was working on a low-power version of its Cloud TPU to cater to Internet of Things (IoT) devices. The Edge TPU's main promise is to free IoT devices from cloud dependence when it comes to intelligent analysis of data. For instance, a surveillance camera would no longer need to identify objects it sees in real time through cloud analysis and could instead do so on its own, locally, thanks to the Edge TPU.
No, the first-generation Edge TPU is capable of accelerating ML inferencing only. You need to create a quantized TensorFlow Lite model and then compile the model for compatibility with the Edge TPU. We will provide a cloud-based compiler tool that accepts your .tflite file, and we will also provide several pre-compiled vision models that perform image classification and object detection. The first-generation Edge TPU can execute deep feed-forward neural networks (DFF) such as convolutional neural networks (CNN), making it ideal for a variety of vision-based ML applications.
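The quantization step the compiler expects can be illustrated with the standard affine (uniform) scheme that TensorFlow Lite uses to produce integer models — a simplified pure-Python sketch, not the converter's actual implementation; in practice the TF Lite converter chooses the scale and zero point for you.

```python
# Affine quantization: real_value ≈ scale * (quantized_value - zero_point).
# Float tensors are mapped onto the int8 range the Edge TPU operates on.

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    """Map float values to clamped int8 using a scale and zero point."""
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(qvalues, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [scale * (q - zero_point) for q in qvalues]

# Example: floats in [0.0, 2.55] mapped onto the full int8 range.
scale, zero_point = 0.01, -128
q = quantize([0.0, 1.0, 2.55], scale, zero_point)   # [-128, -28, 127]
approx = dequantize(q, scale, zero_point)           # ≈ [0.0, 1.0, 2.55]
```

Because every weight and activation becomes an 8-bit integer, the model shrinks roughly 4x versus float32 and can run on integer-only accelerator hardware, at the cost of small rounding error visible in the round trip above.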
Andrew Hobbs delves into Google's latest edge computing developments at Cloud Next 2018, and sits down with Product Lead Indranil Chakraborty to discuss how LG is driving remarkable results with Google's new Edge TPU. In July this year, Google announced its new Edge TPU and Cloud IoT Edge products. The 40x40mm tensor processing unit delivers high performance in a small physical and power footprint, enabling high-accuracy machine learning inferences at the edge. It represents the company's response to the need for more and more data streams to be processed at the point of origin, bypassing the latency and bandwidth issues that cloud solutions introduce. AI models trained in the cloud increasingly need to be run at the edge.
Machine learning can become a robust analytical tool for vast volumes of data. The combination of machine learning and edge computing can filter most of the noise collected by IoT devices and leave the relevant data to be analyzed by the edge and cloud analytic engines. The advances in Artificial Intelligence have allowed us to see self-driving cars, speech recognition, active web search, and facial and image recognition. Machine learning is the foundation of those systems. It is so pervasive today that we probably use it dozens of times a day without knowing it.
Google has designed a low-power version of its homegrown AI math accelerator, dubbed it the Edge TPU, and promised to ship it to developers by October. Announced at Google Next 2018 today, the ASIC is a cut-down edition of its Tensor Processing Unit (TPU) family of in-house-designed coprocessors. TPUs are used internally at Google to power its machine-learning-based services, or are rentable via its public cloud. These chips are specifically designed to train neural networks and perform inference. Now the web giant has developed a cut-down, inference-only version suitable for running in Internet-of-Things gateways.