A company's reputation and bottom line can suffer when defective products are released. If a defect is not detected and the flawed product removed early in the production process, the damage can run to hundreds of dollars per unit. To mitigate this, many manufacturers install cameras to monitor their products as they move along the production line. But the resulting data is not always useful: cameras alone often struggle to identify defects when processing high volumes of images moving at high speed.
A study by McKinsey & Company found that AI-driven quality testing can increase productivity by up to 50% and defect detection rates by up to 90% compared to human inspection. Although machines with automated optical inspection (AOI), powered by machine vision, have replaced most manual processes on the modern assembly line, quality control remains a huge and costly challenge. The European Commission reports that in some industries 50% of production can be abandoned due to defects, and that the defect rate can reach up to 90% in complex production environments. The critical limitation of machine-vision AOI systems is in detecting surface defects, where even a slight variation (often invisible to the human eye) can hamper an entire production run and render hundreds to thousands of products useless before the defect is discovered. The economic impact can be devastating.
Machine vision quality assurance systems have excelled at automating the location, identification, and inspection of manufactured components through computational image analysis. But when the component is part of a larger assembly, a complex package, or a kit--such as an automotive assembly or a surgical intubation kit--defects, random product placement, variations in lighting, and other factors can quickly overwhelm a traditional machine vision system. For this reason, final inspection of assemblies, packages, and kits is usually conducted manually, to the detriment of overall quality and productivity. While manual operators typically outperform automated quality inspection solutions at inspecting complex assemblies with multiple attached or connected components, it is hard for operators to stay sharp: studies show most operators can only focus on a single task for 15 to 20 minutes at a time.
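To see why traditional machine vision handles isolated components well but breaks down on complex assemblies, consider a minimal sketch of the classic rule-based check: compare each captured frame against a "golden" reference image of a known-good part and reject the part if too many pixels deviate. All names, image sizes, and thresholds below are illustrative, not a real AOI API; any shift in lighting or part placement registers as pixel deviation, which is exactly what overwhelms this approach on variable scenes.

```python
# Illustrative rule-based inspection: pixel differencing against a
# "golden" reference image. Images are toy grayscale grids (0-255).

def defect_score(frame, golden, pixel_tol=10):
    """Fraction of pixels deviating from the reference by more than
    pixel_tol gray levels."""
    total = 0
    deviant = 0
    for row_f, row_g in zip(frame, golden):
        for p_f, p_g in zip(row_f, row_g):
            total += 1
            if abs(p_f - p_g) > pixel_tol:
                deviant += 1
    return deviant / total

def is_defective(frame, golden, pixel_tol=10, area_tol=0.02):
    """Reject the part if more than area_tol of its area deviates."""
    return defect_score(frame, golden, pixel_tol) > area_tol

# Toy 4x4 "images": a clean part and one with a dark scratch.
golden = [[200] * 4 for _ in range(4)]
clean = [[198, 201, 200, 199] for _ in range(4)]  # within tolerance
scratched = [row[:] for row in golden]
scratched[1] = [40, 40, 200, 200]  # scratch pixels deviate strongly

print(is_defective(clean, golden))      # passes inspection
print(is_defective(scratched, golden))  # flagged as defective
```

The hand-tuned tolerances are the weak point: a camera angle change or a randomly placed component in a kit shifts thousands of pixels at once, and the rule cannot distinguish that from a genuine defect.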
In the race to increase production in the face of an intermittent human workforce, manufacturers are looking at how to supplement their cameras with AI, giving human inspectors the ability to spot defective products immediately and correct the problem. While machine vision has been around for more than 60 years, the recent surge in the popularity of deep learning has brought this sometimes misunderstood technology to the attention of major manufacturers globally. As CEO of a deep learning software company, I've seen how deep learning is a natural next step from machine vision, and how it has the potential to drive innovation for manufacturers. How does deep learning differ from machine vision, and how can manufacturers leverage this natural evolution of camera technology to cope with real-world demands?

In the 1960s, several groups of scientists, many of them in the Boston area, set forth to solve "the machine vision problem."
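The core difference the question above points at can be shown in miniature: a rule-based machine vision system relies on a hand-tuned pass/fail threshold, while a learning-based system fits that decision from labeled examples. The sketch below uses a single logistic neuron as a stand-in for a deep network, and assumes each part has already been reduced to one feature (say, the fraction of pixels deviating from a reference image of a good part); the data, feature, and function names are all illustrative.

```python
# Illustrative learned classifier: instead of hand-tuning a threshold,
# fit the pass/fail boundary from human-labeled examples.
import math

# (deviation-fraction feature, label): 1 = defective, 0 = good.
samples = [(0.00, 0), (0.01, 0), (0.02, 0), (0.03, 0),
           (0.08, 1), (0.12, 1), (0.20, 1), (0.35, 1)]

w, b = 0.0, 0.0                          # parameters to be learned
for _ in range(5000):                    # plain stochastic gradient descent
    for x, y in samples:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted P(defective)
        grad = p - y                     # gradient of log-loss w.r.t. logit
        w -= 0.5 * grad * x
        b -= 0.5 * grad

def predict_defective(score):
    """Classify a part from its deviation score using learned w, b."""
    return 1.0 / (1.0 + math.exp(-(w * score + b))) > 0.5

print(predict_defective(0.01))  # a clean part
print(predict_defective(0.25))  # a heavily deviating part
```

A deep network applies the same idea at scale, learning the features themselves from raw pixels rather than from a single hand-crafted score, which is what lets it tolerate the lighting and placement variation that defeats fixed rules.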