Artificial-intelligence (AI) research covers a number of topics, including machine learning (ML). ML itself covers a lot of ground, from rule-based expert systems to the current hot trend: neural networks. Neural networks are changing how developers solve problems, whether in self-driving cars or the industrial Internet of Things (IIoT). They come in many forms, but deep neural networks (DNNs) are the most important at this point. A DNN consists of multiple layers: an input layer, an output layer, and multiple hidden layers in between (Figure 1).
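The layered structure described above can be sketched as a forward pass through a small network. The layer sizes, the ReLU and softmax activations, and the random stand-in weights below are illustrative assumptions, not details from the article:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 3]  # input layer, two hidden layers, output layer

# Random weights and biases stand in for trained parameters.
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Data flows through each hidden layer in turn...
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    # ...and the output layer turns the result into class probabilities.
    return softmax(x @ weights[-1] + biases[-1])

probs = forward(rng.normal(size=4))
print(probs)  # three class probabilities that sum to 1
```

In a real DNN the weights would be learned from training data; the point here is only the shape of the computation, with each layer feeding the next.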
Understanding which AI technologies to use to advance a project can be challenging given the rapid growth and evolution of the science. This article outlines the differences between machine learning and deep learning, and how to determine which one to apply. In both machine learning and deep learning, engineers use software tools, such as MATLAB, to enable computers to identify trends and characteristics in data by learning from an example data set. In the case of machine learning, training data is used to build a model that the computer can use to classify test data, and ultimately real-world data. Traditionally, an important step in this workflow is the development of features – additional metrics derived from the raw data – that help make the model more accurate.
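That workflow – derive a feature from raw data, fit a model on training examples, then classify new data – can be sketched in a few lines. The raw signals, the "energy" feature, and the nearest-centroid model below are all illustrative assumptions, not taken from the article:

```python
def energy(signal):
    """Hand-crafted feature: mean squared amplitude of a raw signal."""
    return sum(s * s for s in signal) / len(signal)

# Labeled training signals (the raw data).
train = [([0.1, 0.2, 0.1], "quiet"), ([0.2, 0.1, 0.2], "quiet"),
         ([2.0, 1.8, 2.2], "loud"), ([1.9, 2.1, 2.0], "loud")]

# "Training": average the feature per class to get one centroid per label.
centroids = {}
for label in {lab for _, lab in train}:
    feats = [energy(sig) for sig, lab in train if lab == label]
    centroids[label] = sum(feats) / len(feats)

def classify(signal):
    # Assign the label whose centroid is closest in feature space.
    f = energy(signal)
    return min(centroids, key=lambda label: abs(centroids[label] - f))

print(classify([2.1, 1.9, 2.0]))   # -> loud
print(classify([0.15, 0.1, 0.2]))  # -> quiet
```

The model here is deliberately trivial; the takeaway is that in classic machine learning the engineer chooses the feature, and the quality of that choice largely determines the model's accuracy.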
Neural networks are computer systems loosely inspired by the structure of animal brains, and much like human brains, they can be trained to obey the whims of the almighty domestic cat. The build uses a Raspberry Pi, fitted with a Pi Camera board, to image the area around the back door of the house. A Python script regularly captures images and passes them to a TensorFlow neural network for object recognition. The network returns object types and positions to the Python script, which uses that information to determine whether there is a cat in the frame, and whether it is inside or outside.
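The decision step after the network runs could look something like the sketch below. The detection format (label, x-centre in pixels) and the `DOOR_X` threshold dividing "inside" from "outside" are assumptions for illustration, not the project's actual API:

```python
DOOR_X = 320  # hypothetical pixel column of the door line in the frame

def cat_status(detections):
    """detections: list of (label, x_center) tuples from the object detector."""
    cats = [x for label, x in detections if label == "cat"]
    if not cats:
        return "no cat"
    # Which side of the door line is the (first) cat on?
    return "outside" if cats[0] > DOOR_X else "inside"

print(cat_status([("person", 100), ("cat", 450)]))  # -> outside
print(cat_status([("cat", 120)]))                   # -> inside
print(cat_status([("dog", 200)]))                   # -> no cat
```

The heavy lifting happens in the TensorFlow model; once it hands back labels and positions, the cat logic itself is just a comparison.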
Turns out he's even better seated behind his workbench, as the completely custom auto-tracking gimbal he came up with is nothing short of a work of art. There's quite a bit going on here, and as you might expect, it took several iterations before [Gabriel] got all the parts working together. The rather GLaDOS-looking body of the gimbal is entirely 3D printed, and holds the motors, camera, and a collection of ultrasonic receivers. The Nvidia Jetson TX1 that does the computational heavy lifting is riding shotgun in its own swanky-looking 3D-printed enclosure, but [Gabriel] notes a future revision of the hardware should be able to reunite them. In the current version of the system, the target wears an ultrasonic emitter that is picked up by the receivers in the gimbal.
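The write-up doesn't spell out the tracking math, but one plausible sketch of how a pair of ultrasonic receivers could localize the emitter is the classic time-difference-of-arrival bearing estimate. The receiver spacing, speed of sound, and sample timings below are all assumptions, not measurements from [Gabriel]'s build:

```python
import math

SPEED_OF_SOUND = 343.0    # m/s in room-temperature air
RECEIVER_SPACING = 0.10   # assumed distance between the two receivers, in metres

def bearing_degrees(dt):
    """Bearing of the emitter from the arrival-time difference dt (seconds)
    between the two receivers, using the far-field approximation
    sin(theta) = c * dt / spacing."""
    s = SPEED_OF_SOUND * dt / RECEIVER_SPACING
    s = max(-1.0, min(1.0, s))  # clamp against timing noise
    return math.degrees(math.asin(s))

print(bearing_degrees(0.0))     # ping arrives simultaneously: target dead ahead
print(bearing_degrees(1.5e-4))  # ping hits one receiver first: target off to one side
```

With a bearing in hand, the gimbal controller only needs to drive the pan motor until the error angle goes to zero – a natural fit for a simple PID loop.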
A few short months ago, Intel acquired Nervana Systems for $400 million, intending to use the technology Nervana developed to compete in the deep learning market currently dominated by GPU-based solutions from NVIDIA. Artificial intelligence is a big market for Intel, and the company sees it as pivotal ground where it must plant a stake or risk falling behind as it did on the mobile front. With Nervana's technology, Intel expects to produce "a breakthrough 100-fold increase in performance in the next three years to train complex neural networks," Intel CEO Brian Krzanich wrote in a recent editorial. Nervana's technology, codenamed Lake Crest, will be a PCIe add-in card incorporating HBM that directly targets current GPU solutions, and it is expected to arrive sometime in the first half of 2017. Intel believes the GPGPU architecture is not uniquely advantageous for AI, and that its own approach can support much larger models and scale much further.