Over the past few months, I've been working on a fascinating project with one of the world's largest pharmaceutical companies, applying SAS Viya computer vision to help identify potential quality issues on the production line as part of the validated inspection process. Since the application of these AI and ML techniques is of real interest to many high-tech manufacturing organisations as part of their Manufacturing 4.0 initiatives, I thought I'd take the opportunity to share my experiences with a wider audience, and I hope you enjoy this blog post. For obvious reasons, I can't share specifics of the organisation or product, so please don't ask me to. But I hope you find this article interesting and informative, and if you would like to know more about the techniques, please feel free to contact me. Quality inspections are a key part of the manufacturing process, and while many of these inspections can be automated using a range of techniques, tests and measurements, some issues are still best identified by the human eye.
Researchers from the ICAI Group (Computational Intelligence and Image Analysis) at the University of Malaga (UMA) have designed an unprecedented method capable of improving brain images obtained through magnetic resonance imaging using artificial intelligence. The new model increases image quality from low resolution to high resolution without distorting the patients' brain structures, using a deep learning artificial neural network, a model based on the functioning of the human brain, that "learns" the process. "Deep learning is based on very large neural networks, and so is its capacity to learn, reaching the complexity and abstraction of a brain," explains researcher Karl Thurnhofer, lead author of the study, who adds that, thanks to this technique, the identification task can be performed on its own, without supervision, an effort the human eye would not be capable of. Published in the scientific journal "Neurocomputing," the study represents a scientific breakthrough: the algorithm developed at UMA yields more accurate results in less time, with clear benefits for patients. "So far, the acquisition of quality brain images has depended on the time the patient remained immobilized in the scanner; with our method, image processing is carried out later on the computer," explains Thurnhofer.
When people create, it's not very often they achieve what they're looking for on the first try. Creating, whether it be a painting, a paper, or a machine learning model, is a process that has a starting point from which new elements and ideas are added and old ones are modified and discarded, sometimes again and again, until the work accomplishes its intended purpose: to evoke emotion, to convey a message, to complete a task. Since I began my work as a researcher, machine learning systems have gotten very good at a particular form of creation that has caught my attention: image generation. Looking at some of the images generated by systems such as BigGAN and ProGAN, you wouldn't be able to tell they were produced by a computer. In these advancements, my colleagues and I see an opportunity to help people create visuals and better express themselves through the medium, from improving the user experience of designing avatars in the gaming world to making it easier to edit personal photos and produce digital art in software like Photoshop, which can be challenging for those unfamiliar with such programs' capabilities.
Image recognition is typically a process within image processing that identifies people, patterns, logos, objects, places, colors, and shapes: anything that can be seen in an image. Advanced image recognition builds on this by employing AI and deep learning to achieve greater automation across identification processes. Because vision and speech are two crucial elements of human interaction, data science can imitate these human capabilities using computer vision and speech recognition technologies. These techniques are already being applied across different fields, particularly in e-commerce. Advances in machine learning and the use of high-bandwidth data services are strengthening the applications of image recognition.
Archaeologists recently discovered a Roman shipwreck in the eastern Mediterranean. The ship and its cargo are both in good condition, despite being 2,000 years old. The wreck, named the Fiskardo after the nearby Roman-era port of the same name, is the largest shipwreck found in the region to date. The Fiskardo is filled with amphorae, large terracotta pots that were used across the Roman Empire to transport goods such as wine, grain, and olive oil. CNN reported, "The survey was carried out by the Oceanus network of the University of Patras, using artificial intelligence image-processing techniques."
Provo • A Utah city police department is considering a partnership with an artificial intelligence company in an effort to help the law enforcement agency work more efficiently. The Springville police may work with technology firm Banjo to help improve response times to emergencies, The Daily Herald reported. The Park City company can gather real-time data from various sources, including 911 dispatch calls, traffic cameras, emergency alarms, and social media posts, and report related information to the police, officials said. The Springville City Council heard a presentation by a Banjo representative during its Jan. 7 meeting but did not immediately make a decision about using the technology. Banjo entered an agreement last July with the Utah Attorney General's Office and the Utah Department of Public Safety to let the agencies use Banjo's technology to "reduce time and resources typically required to generate leads, and instead focus their efforts on incident response," according to a report to the state Legislature.
With a new three-year NSF grant, Ming Hsieh Department of Electrical and Computer Engineering researchers hope to solve the problem of scalable parallelism for AI. Co-PIs Professor Viktor Prasanna, Charles Lee Powell Chair in Electrical and Computer Engineering, and Professor Xuehai Qian, both from USC Viterbi, along with USC Viterbi alum and Northeastern University assistant professor Yanzhi Wang and USC Viterbi senior research associate Ajitesh Srivastava, were awarded the $800,000 grant last month. Parallelism is the ability of an algorithm to perform several computations at the same time rather than sequentially. For artificial intelligence challenges that require fast solutions, such as the image processing behind autonomous vehicles, parallelism is an essential step in making these technologies practical in everyday life. Parallelism in neural networks has been explored, but the challenge has been scaling it up to the point where it is applicable to time-critical, real-time tasks.
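The idea of parallelism described above can be illustrated with a minimal sketch (my own toy example, not the researchers' method): a per-row image computation whose rows are independent, so a sequential map and a multi-process map produce identical results while the latter can run rows simultaneously. The row data, threshold, and worker count are illustrative assumptions.

```python
# Minimal data-parallelism sketch: independent per-row work distributed
# across worker processes instead of run sequentially.
from multiprocessing import Pool

def count_bright_pixels(row, threshold=128):
    """Sequential per-row work: count pixels above a brightness threshold."""
    return sum(1 for p in row if p > threshold)

def process_sequential(image):
    """Handle rows one after another."""
    return [count_bright_pixels(row) for row in image]

def process_parallel(image, workers=4):
    # Each row is independent, so the map parallelises with no coordination.
    with Pool(workers) as pool:
        return pool.map(count_bright_pixels, image)

if __name__ == "__main__":
    image = [[(r * 37 + c * 11) % 256 for c in range(64)] for r in range(32)]
    assert process_parallel(image) == process_sequential(image)
```

The hard part the grant targets, scaling, is exactly what this toy hides: real neural-network workloads have dependencies between computations, so they cannot simply be split into independent rows like this.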
Probabilistic models of natural images are usually evaluated by measuring performance on rather indirect tasks, such as denoising and inpainting. A more direct way to evaluate a generative model is to draw samples from it and to check whether statistical properties of the samples match the statistics of natural images. This method is seldom used with high-resolution images, because current models produce samples that are very different from natural images, as assessed by even simple visual inspection. We investigate the reasons for this failure and we show that by augmenting existing models so that there are two sets of latent variables, one set modelling pixel intensities and the other set modelling image-specific pixel covariances, we are able to generate high-resolution images that look much more realistic than before. The overall model can be interpreted as a gated MRF where both pair-wise dependencies and mean intensities of pixels are modulated by the states of latent variables.
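The two-latent-set construction can be sketched schematically; the notation below is my own illustration of a gated MRF of this kind, not the paper's exact parameterization:

```latex
% Schematic gated-MRF energy (illustrative notation):
% h^c gates pairwise pixel dependencies, h^m sets mean intensities.
E(\mathbf{x}, \mathbf{h}^c, \mathbf{h}^m)
  = \tfrac{1}{2}\,\mathbf{x}^\top C(\mathbf{h}^c)\,\mathbf{x}
    - \mathbf{b}(\mathbf{h}^m)^\top \mathbf{x},
\qquad
p(\mathbf{x}, \mathbf{h}^c, \mathbf{h}^m)
  \propto \exp\bigl\{-E(\mathbf{x}, \mathbf{h}^c, \mathbf{h}^m)\bigr\}.
```

Conditioned on the latent states, $\mathbf{x}$ is Gaussian with precision matrix $C(\mathbf{h}^c)$ and mean $C(\mathbf{h}^c)^{-1}\mathbf{b}(\mathbf{h}^m)$, so the first set of latent variables modulates image-specific pixel covariances while the second modulates pixel intensities.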
We address the problem of adaptive sensor control in dynamic resource-constrained sensor networks. We focus on a meteorological sensing network comprising radars that can perform sector scanning rather than always scanning 360 degrees. We compare three sector scanning strategies. The sit-and-spin strategy always scans 360 degrees. The limited lookahead strategy additionally uses the expected environmental state K decision epochs in the future, as predicted from Kalman filters, in its decision-making.
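The limited-lookahead idea can be sketched in a few lines (my simplification, not the paper's algorithm): track each sector's uncertainty with a scalar Kalman filter, let uncertainty grow via the predict step while a sector goes unscanned, shrink it via the measurement update when scanned, and pick the sector whose K-step prediction is worst. The sector count, noise variances, and K below are illustrative assumptions.

```python
# Scalar-Kalman sketch of limited-lookahead sector selection.
PROCESS_NOISE = 0.5   # variance added per epoch by the predict step
MEAS_NOISE = 0.1      # radar measurement noise variance

def predict(var, steps=1):
    """Kalman predict: uncertainty grows while a sector goes unscanned."""
    return var + steps * PROCESS_NOISE

def update(var):
    """Kalman measurement update after scanning a sector."""
    gain = var / (var + MEAS_NOISE)
    return (1.0 - gain) * var

def choose_sector(variances, k=3):
    """Limited lookahead: scan the sector predicted to be worst in k epochs."""
    predicted = [predict(v, k) for v in variances]
    return max(range(len(predicted)), key=predicted.__getitem__)

def step(variances, k=3):
    """One decision epoch: scan the chosen sector, let the rest drift."""
    target = choose_sector(variances, k)
    return [update(predict(v)) if i == target else predict(v)
            for i, v in enumerate(variances)]

variances = [1.0, 4.0, 2.0]   # initial per-sector uncertainty
variances = step(variances)   # the most uncertain sector gets scanned
```

Sit-and-spin, by contrast, would update every sector each epoch regardless of predicted state, spending scan time on sectors whose estimates are already good.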
We present a new analysis for the combination of binary classifiers. We propose a theoretical framework based on the Neyman-Pearson lemma to analyze combinations of classifiers. In particular, we give a method for finding the optimal decision rule for a combination of classifiers and prove that it has the optimal ROC curve. We also show how our method generalizes and improves on previous work on combining classifiers and generating ROC curves. Published at the Neural Information Processing Systems Conference.
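The Neyman-Pearson view of combination can be sketched concretely (my illustration, not the paper's construction): score each joint outcome of the base classifiers by its likelihood ratio P(outcome | positive) / P(outcome | negative), and form combined decision rules by thresholding that ratio. The operating points below, and the assumption that the classifiers are conditionally independent given the class, are illustrative.

```python
# Likelihood-ratio ranking of joint classifier outcomes, assuming the
# classifiers' decisions are conditionally independent given the class.
def outcome_likelihood(decisions, rates):
    """P(joint decisions | class), given each classifier's detection rate."""
    p = 1.0
    for d, rate in zip(decisions, rates):
        p *= rate if d == 1 else (1.0 - rate)
    return p

def likelihood_ratio(decisions, tprs, fprs):
    """LR = P(decisions | positive) / P(decisions | negative)."""
    return outcome_likelihood(decisions, tprs) / outcome_likelihood(decisions, fprs)

# Two classifiers with illustrative (TPR, FPR) operating points.
tprs = [0.9, 0.8]
fprs = [0.2, 0.1]

# Rank the four joint outcomes; thresholding down this ranking traces the
# ROC curve of the combined rule, which the Neyman-Pearson lemma says is
# optimal among all rules built on these outcomes.
outcomes = [(1, 1), (1, 0), (0, 1), (0, 0)]
ranked = sorted(outcomes, key=lambda o: likelihood_ratio(o, tprs, fprs),
                reverse=True)
```

With these numbers, "both classifiers say positive" has the largest likelihood ratio, so the most conservative combined rule on the optimal ROC curve is to declare a positive only when the classifiers agree.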