Deep learning, embedded vision, hyperspectral/multispectral imaging, 3D imaging, computational imaging, and polarization imaging have emerged as some of the most popular machine vision technologies today. Featured in the November/December issue, the results of a first-of-its-kind market survey highlight how much these technologies are used, where, how, and by whom. A roundtable discussion on December 4 featuring three top experts in machine vision (David Dechow, Daniel Lau, and Perry West) will provide a forum for questions on these topics, how attendees might use them, and how they may improve machine vision systems today. Submit your questions ahead of time by contacting editor Jimmy Carroll at firstname.lastname@example.org, and register for the webcast here.
Attending the recent Automate show in Chicago was an extraordinary experience that gave me and more than 20,000 other attendees an opportunity to peer into the future of industrial robotics. Being part of a company at the forefront of the industrial robotics and manufacturing automation industries still provides only one perspective, and Automate brought together leaders from all corners of the industry, such as Fanuc, ABB, Kuka, Keyence and Cognex, to showcase advances and share insights. The range of technologies on display designed to enhance processes, improve product quality and lower manufacturing costs was astonishing. I walked away from the show with a deeper awareness of two notions: the rise of robots is upon us, and machine vision provides robots with the artificial intelligence that will forge the future of robotics in our increasingly globalized society. As many in automation are aware, robots are becoming an increasingly popular answer to dangerous or repetitive tasks: grinding, deburring, bin-picking, part inspections, etc.
FREMONT, CA: Machine vision is one of the most important additions to the manufacturing sector, providing automated inspection capabilities as part of QC procedures. Meanwhile, the world of automation is becoming more complex over time. With rapid developments in many areas, such as imaging techniques, robot interfaces, CMOS sensors, machine and deep learning, embedded vision, data transmission standards, and image processing capabilities, vision technology can benefit the manufacturing industry at multiple levels. New imaging techniques have brought new application opportunities.
The increased sophistication of artificial neural networks (ANNs), coupled with the availability of AI-powered chips, has driven an unparalleled enterprise interest in computer vision (CV). This exciting new technology will find myriad applications across industries, and according to GlobalData forecasts, the market will reach $28bn by 2030. The increasing adoption of AI-powered computer vision solutions, consumer drones, and the rising adoption of Industry 4.0 will drive this phenomenal change. Deep learning has brought about a change in the role of machine vision in smart manufacturing and industrial automation. The integration of deep learning enables machine vision systems to adapt to manufacturing variations.
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup precludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks. The concepts of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting, classes such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
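The fusion idea in the abstract — extending a ConvNet to higher-dimensional input by stacking the 25-channel VIS-NIR cube alongside the RGB frame — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the frame resolution, filter count, and early-fusion scheme here are illustrative assumptions.

```python
import numpy as np

# Hypothetical frame size; the paper's actual sensor resolutions are not given here.
H, W = 64, 64
rgb = np.random.rand(H, W, 3).astype(np.float32)        # RGB frame
spectral = np.random.rand(H, W, 25).astype(np.float32)  # 25-channel VIS-NIR cube

# Early fusion: stack both modalities along the channel axis, so a
# standard ConvNet simply sees a 28-channel input instead of a 3-channel one.
fused = np.concatenate([rgb, spectral], axis=-1)
assert fused.shape == (H, W, 28)

def conv2d(x, kernels):
    """Valid-mode 2D convolution; kernels shaped (kh, kw, c_in, c_out)."""
    kh, kw, c_in, c_out = kernels.shape
    h_out, w_out = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h_out, w_out, c_out), dtype=x.dtype)
    for i in range(h_out):
        for j in range(w_out):
            patch = x[i:i + kh, j:j + kw, :]  # (kh, kw, c_in)
            # Sum each 3x3x28 patch against every filter at once.
            out[i, j] = np.tensordot(patch, kernels,
                                     axes=([0, 1, 2], [0, 1, 2]))
    return out

# The only architectural change fusion requires: first-layer filters
# must have depth 28 (here: 16 filters of size 3x3x28, chosen arbitrarily).
kernels = np.random.randn(3, 3, 28, 16).astype(np.float32)
features = conv2d(fused, kernels)
print(features.shape)  # (62, 62, 16)
```

The rest of the network is unchanged, which is why the abstract notes that ConvNets extend "easily" to multispectral input: only the first layer's filter depth depends on the number of input channels.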