Sensor technology allows machines to interact with real-world inputs, whether those machines are smartphones responding to their users, autonomous vehicles navigating a busy street, or robots aiding in manufacturing. Not surprisingly, three-dimensional (3D) sensors, which allow a machine to understand the size, shape, and distance of the objects within its field of view, have attracted a lot of attention in recent months, thanks to their inclusion in Apple's most advanced smartphone to date, the iPhone X, which uses a single camera to measure distance. The handset's TrueDepth system, which replaces the fingerprint-based TouchID system, projects approximately 30,000 dots onto the user's face. An infrared (IR) camera then captures an image of the dots, which yields depth information based on the density of the pattern (closer objects display a dot pattern that is spread out, whereas objects that are farther away create a denser pattern of dots). Together, the positions of these dots form a depth map of 3D data, supplying the system with the information it needs to check for a facial-identity match and unlock the device.
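Apple has not published TrueDepth's algorithm, but the underlying principle of projected-dot structured light is standard triangulation: each dot, projected from a known baseline away from the IR camera, shifts laterally in the captured image by an amount (the disparity) inversely proportional to the distance of the surface it lands on. The sketch below illustrates only that geometric relationship; the focal length and baseline values are hypothetical.

```python
# Illustrative structured-light triangulation (not Apple's implementation).
# A dot's image-space shift (disparity) is inversely proportional to the
# depth of the surface it strikes: depth = focal_length * baseline / disparity.

FOCAL_PX = 580.0    # hypothetical IR camera focal length, in pixels
BASELINE_MM = 25.0  # hypothetical projector-to-camera baseline, in mm

def depth_mm(disparity_px: float) -> float:
    """Depth of the surface a dot landed on, from its observed disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_MM / disparity_px

def depth_map(disparities):
    """Convert a grid of per-dot disparities into a per-dot depth map."""
    return [[depth_mm(d) for d in row] for row in disparities]

# Dots on nearby surfaces shift more (larger disparity), so they resolve
# to smaller depths:
print(depth_mm(36.25))  # 400.0 mm
print(depth_mm(29.0))   # 500.0 mm
```

A real system would first match each observed dot to its position in a stored reference pattern before computing disparity; that correspondence step is the hard part and is omitted here.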
Among the challenges facing autonomous vehicle developers is the need to gather and process vast amounts of camera data quickly and efficiently. Last month, BlinkAI Technologies Inc. announced its RoadSight product, which is designed to improve camera performance in low-light conditions. While some autonomous vehicle makers are using multiple cameras rather than more expensive lidar technology, that approach has raised safety concerns: the National Transportation Safety Board recently found that a contributing factor in the fatal 2018 Uber crash was that the automated driving system did not recognize a jaywalking pedestrian in a low-light setting. BlinkAI spun out of the MIT-Harvard Martinos Center for Biomedical Imaging and emerged from "stealth mode" over the past few months.
Machine vision has come a long way from the simpler days of cameras attached to frame-grabber boards, all arranged along an industrial production line. While the basic concepts remain the same, emerging embedded-systems technologies such as artificial intelligence (AI), deep learning, the Internet of Things (IoT), and cloud computing have all opened up new possibilities for machine vision system developers. To keep pace, companies that once focused only on box-level machine vision systems are now moving toward AI-based edge computing systems that provide all the needed machine vision interfacing while adding new levels of compute performance to process imaging in real time and over remote network configurations.

AI IN MACHINE VISION

ADLINK Technology appears to be moving in this direction of applying deep learning and AI to machine vision. The company has a number of products, listed as "preliminary" at present, that provide AI machine vision solutions. These systems are designed to be "plug and play" (PnP) so that machine vision system developers can bring AI enablement to their existing applications right away, with no need to replace existing hardware.
Machine-learning (ML) technology is radically changing how robots work and dramatically extending their capabilities. The latest crop of ML technologies is still in its infancy, but it looks like we're at the end of the beginning where robots are concerned, and much more looms on the horizon. ML is just one aspect of improved robotics: robotics has demanding computational requirements, and those are being helped by improvements in multicore processing power.
Advances in 3D imaging have allowed vision users to overcome some challenging inspection tasks. In the machine vision marketplace, 3D imaging continues to mature, tackling applications that 2D imaging cannot. "In a manufacturing setting, the fusion of 2D with 3D is necessary to measure how well components go together into an assembly and assess the product for final fit, finish, and packaging," says Terry Arden, CEO of LMI Technologies. Accuracy has improved as well, according to David Dechow, Principal Vision Systems Architect at Integro Technologies, a systems integrator specializing in machine vision with broad experience helping companies implement 3D and 2D imaging for industrial automation. And for inspection tasks in 3D space, which may include measurement or reconstruction, precision is even more essential than for most tasks in robotic guidance or bin picking.
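One concrete form the 3D measurement tasks described above can take is checking how flush a component sits on an assembly: fit a plane to reference surface points from a 3D sensor, then measure each inspected point's deviation from that plane. The sketch below is a toy illustration of that idea, not any vendor's method; the point coordinates and the 0.25 mm tolerance are made-up values, and real systems work on dense point clouds.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns (unit normal, centroid)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid

def deviations(points, normal, centroid):
    """Signed point-to-plane distances for the inspected points."""
    return (np.asarray(points, dtype=float) - centroid) @ normal

# Hypothetical data: reference surface points lying on z = 0, and three
# measured points on the seated component (z values in mm).
reference = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (10, 10, 0)]
component = [(2, 2, 0.05), (8, 2, 0.10), (5, 8, 0.40)]

normal, centroid = fit_plane(reference)
dev = deviations(component, normal, centroid)
in_tolerance = bool(np.all(np.abs(dev) < 0.25))  # hypothetical 0.25 mm limit
print(np.abs(dev), in_tolerance)  # the 0.40 mm point fails the check
```

Plane fitting via SVD is a common least-squares choice because it minimizes perpendicular distances; production systems typically add outlier rejection (e.g. RANSAC) before the fit.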