Image Processing


Computer Vision and Image Analytics

#artificialintelligence

Over the past few months, I've been working on a fascinating project with one of the world's largest pharmaceutical companies to apply SAS Viya computer vision to help identify potential quality issues on the production line as part of the validated inspection process. As I know the application of these types of AI and ML techniques is of real interest to many high-tech manufacturing organisations as part of their Manufacturing 4.0 initiatives, I thought I'd take the opportunity to share my experiences with a wider audience, so I hope you enjoy this blog post. For obvious reasons, I can't share specifics of the organisation or product, so please don't ask me to. But I hope you find this article interesting and informative, and if you would like to know more about the techniques then please feel free to contact me. Quality inspections are a key part of the manufacturing process, and while many of these inspections can be automated using a range of techniques, tests and measurements, some issues are still best identified by the human eye.


Artificial intelligence to improve resolution of brain magnetic resonance imaging

#artificialintelligence

Researchers of the ICAI Group (Computational Intelligence and Image Analysis) at the University of Malaga (UMA) have designed an unprecedented method that improves brain images obtained through magnetic resonance imaging using artificial intelligence. The new model increases image quality from low resolution to high resolution without distorting the patients' brain structures, using a deep learning artificial neural network, a model inspired by the functioning of the human brain, that "learns" this process. "Deep learning is based on very large neural networks, and so is its capacity to learn, reaching the complexity and abstraction of a brain," explains researcher Karl Thurnhofer, lead author of the study, who adds that, thanks to this technique, the identification task can be performed by the network alone, without supervision, an identification effort that the human eye would not be capable of. Published in the scientific journal Neurocomputing, the study represents a scientific breakthrough, since the algorithm developed at the UMA yields more accurate results in less time, with clear benefits for patients. "So far, the acquisition of quality brain images has depended on the time the patient remained immobilized in the scanner; with our method, image processing is carried out later on the computer," explains Thurnhofer.
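
The article does not include code, but the core idea, training a convolutional network to map low-resolution MRI slices to high-resolution ones, can be illustrated with a minimal single-image super-resolution sketch. The network below is a generic SRCNN-style residual model, not the UMA group's actual architecture; the layer widths, upscale factor, and training settings are assumptions for illustration.

```python
# Minimal single-image super-resolution sketch (SRCNN-style), illustrative only.
# This is NOT the UMA group's model; layer widths and scale factor are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSRNet(nn.Module):
    def __init__(self, channels=1, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),        # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, low_res):
        # Upsample the low-resolution slice first, then let the CNN refine it.
        upsampled = F.interpolate(low_res, scale_factor=self.scale,
                                  mode="bicubic", align_corners=False)
        return upsampled + self.body(upsampled)  # residual refinement

# Training step sketch: pairs of (low-res, high-res) MRI slices.
model = SimpleSRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

low = torch.rand(8, 1, 64, 64)     # stand-in low-resolution slices
high = torch.rand(8, 1, 128, 128)  # stand-in high-resolution targets
loss = loss_fn(model(low), high)
loss.backward()
optimizer.step()
```

In a real pipeline the high-resolution targets come from long, motion-free acquisitions, which is exactly the dependence on scanner time that the article says the method removes at inference.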


Tell, draw, repeat--iterative text-based image generation

#artificialintelligence

When people create, it's not very often they achieve what they're looking for on the first try. Creating--whether it be a painting, a paper, or a machine learning model--is a process that has a starting point from which new elements and ideas are added and old ones are modified and discarded, sometimes again and again, until the work accomplishes its intended purpose: to evoke emotion, to convey a message, to complete a task. Since I began my work as a researcher, machine learning systems have gotten really good at a particular form of creation that has caught my attention: image generation. Looking at some of the images generated by systems such as BigGAN and ProGAN, you wouldn't be able to tell they were produced by a computer. In these advancements, my colleagues and I see an opportunity to help people create visuals and better express themselves through the medium--from improving the experience of designing avatars in the gaming world to making it easier to edit personal photos and produce digital art in software like Photoshop, which can be challenging for those unfamiliar with such programs' capabilities.


Why Does Data Science Matter in Advanced Image Recognition?

#artificialintelligence

Image recognition is typically an image processing task: identifying people, patterns, logos, objects, places, colors, shapes, and anything else that can be found in an image. Advanced image recognition, in turn, is a framework that employs AI and deep learning to achieve greater automation across these identification processes. Since vision and speech are two crucial elements of human interaction, data science can imitate these human abilities using computer vision and speech recognition technologies. It is already being applied in a range of fields, particularly in e-commerce. Advances in machine learning and the use of high-bandwidth data services are strengthening the applications of image recognition.
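
As a concrete illustration of the kind of automated identification described above, the sketch below classifies a single image with a pretrained convolutional network from torchvision. It is a generic example, not tied to any system mentioned in the article; the file path is a placeholder.

```python
# Generic image-recognition sketch using a pretrained classifier (illustrative only).
# "photo.jpg" is a placeholder path, not a file from the article.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)               # add batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
    top_prob, top_class = probs.topk(5)              # five most likely categories

labels = weights.meta["categories"]
for p, c in zip(top_prob[0], top_class[0]):
    print(f"{labels[c]}: {p:.3f}")
```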


Locating a 2,000-year-old Roman Shipwreck with Image Processing and AI

#artificialintelligence

Archaeologists recently discovered a Roman shipwreck in the eastern Mediterranean. The ship and its cargo are both in good condition, despite being 2,000 years old. The wreck, named the Fiskardo after the nearby Roman Empire port of the same name, is the largest shipwreck found in the region to date. The Fiskardo is filled with amphorae -- large terracotta pots that were used in the Roman Empire for transporting goods such as wine, grain, and olive oil. CNN reported, "The survey was carried out by the Oceanus network of the University of Patras, using artificial intelligence image-processing techniques."


Speeding Up A.I. - USC Viterbi School of Engineering

#artificialintelligence

With a new three-year NSF grant, Ming Hsieh Department of Electrical and Computer Engineering researchers hope to solve the problem of scalable parallelism for AI. Co-PIs Professor Viktor Prasanna, Charles Lee Powell Chair in Electrical and Computer Engineering, and Professor Xuehai Qian, both from USC Viterbi, along with USC Viterbi alum and Northeastern University assistant professor Yanzhi Wang and USC Viterbi senior research associate Ajitesh Srivastava, were awarded the $800,000 grant last month. Parallelism is the ability of an algorithm to perform several computations at the same time, rather than sequentially. For artificial intelligence challenges that require fast solutions, such as the image processing involved in autonomous vehicles, parallelism is an essential step toward making these technologies practical for everyday life. Parallelism in neural networks has been explored, but the problem has been scaling it up to the point where it is applicable to time-critical, real-time tasks.
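
The sketch below is only a generic illustration of the parallelism idea the article defines (the same per-image work run sequentially and then spread across worker processes); it is not the USC team's method, and the workload function is a stand-in.

```python
# Generic data-parallelism illustration: the same per-image computation is run
# sequentially, then across worker processes. Not the USC team's approach.
import time
from concurrent.futures import ProcessPoolExecutor

def process_image(seed: int) -> int:
    # Stand-in for a per-image computation (e.g., a filtering or detection step).
    total = 0
    for i in range(2_000_000):
        total += (i * seed) % 7
    return total

if __name__ == "__main__":
    workload = list(range(8))                    # eight "images"

    start = time.perf_counter()
    sequential = [process_image(w) for w in workload]
    print(f"sequential: {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:          # one worker per CPU core by default
        parallel = list(pool.map(process_image, workload))
    print(f"parallel:   {time.perf_counter() - start:.2f}s")

    assert sequential == parallel                # same results, computed concurrently
```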


Generating more realistic images using gated MRF's

Neural Information Processing Systems

Probabilistic models of natural images are usually evaluated by measuring performance on rather indirect tasks, such as denoising and inpainting. A more direct way to evaluate a generative model is to draw samples from it and to check whether statistical properties of the samples match the statistics of natural images. This method is seldom used with high-resolution images, because current models produce samples that are very different from natural images, as assessed by even simple visual inspection. We investigate the reasons for this failure and we show that by augmenting existing models so that there are two sets of latent variables, one set modelling pixel intensities and the other set modelling image-specific pixel covariances, we are able to generate high-resolution images that look much more realistic than before. The overall model can be interpreted as a gated MRF where both pair-wise dependencies and mean intensities of pixels are modulated by the states of latent variables.
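
The abstract's two sets of latent variables can be made concrete with a schematic equation: in this family of gated MRFs, conditioning on the latents makes the image Gaussian, with one set of latents shifting the mean intensities and the other reshaping the covariance. The form below is a simplified sketch of that structure, with notation of my own choosing, not the paper's exact parameterization.

```latex
% Schematic conditional of a gated MRF (simplified; notation is illustrative).
% v: image pixels, h^m: "mean" latents, h^c: "covariance" latents.
p(\mathbf{v} \mid \mathbf{h}^m, \mathbf{h}^c)
  = \mathcal{N}\!\left(\mathbf{v};\; \Sigma(\mathbf{h}^c)\, W \mathbf{h}^m,\; \Sigma(\mathbf{h}^c)\right),
\qquad
\Sigma(\mathbf{h}^c)^{-1} = \mathrm{diag}(\mathbf{d}) + \sum_{k} h^c_k\, \mathbf{c}_k \mathbf{c}_k^{\top}.
```

In words, W maps the mean latents onto pixel intensities, while each filter c_k contributes an image-specific precision term when its covariance latent h^c_k is active; this is how the latent states modulate both the mean intensities and the pair-wise dependencies described in the abstract.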


Intel researchers propose AI that recognizes faces from thermal images

#artificialintelligence

Is thermal imagery detailed enough to enable an AI model to recognize people's facial features? That's the question Intel and Gdańsk University of Technology researchers sought to answer in a study recently presented at the Institute of Electrical and Electronics Engineers' 12th International Conference on Human System Interaction. These researchers investigated the performance of a model trained on visible-light data that was subsequently retrained on thermal images. As the researchers point out in a paper describing their work, thermal imagery is often used in lieu of RGB camera data within environments where privacy is preferred or otherwise mandated, like medical facilities. That's because it's able to obscure personally identifying details like eye color and jawline.
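
The approach described, training on visible-light data and then retraining on thermal images, is a standard transfer-learning recipe. The sketch below shows that recipe generically with a torchvision backbone; the dataset path, class count, and hyperparameters are placeholders, and this is not the researchers' actual pipeline.

```python
# Generic transfer-learning sketch: a backbone pretrained on visible-light images
# is fine-tuned on thermal face images. Paths, class count, and hyperparameters
# are placeholders; this is not the Intel/Gdansk pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_IDENTITIES = 10  # assumed number of people in the thermal dataset

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # thermal frames as 3-channel input
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Placeholder ImageFolder layout: one sub-folder of thermal images per identity.
train_set = datasets.ImageFolder("thermal_faces/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # visible-light pretraining
model.fc = nn.Linear(model.fc.in_features, NUM_IDENTITIES)        # new identity head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one pass shown; real training runs several epochs
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```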


PiCoDes: Learning a Compact Code for Novel-Category Recognition

Neural Information Processing Systems

We introduce PiCoDes: a very compact image descriptor which nevertheless allows high performance on object category recognition. In particular, we address novel-category recognition: the task of defining indexing structures and image representations which enable a large collection of images to be searched for an object category that was not known when the index was built. Instead, the training images defining the category are supplied at query time. We explicitly learn descriptors of a given length (from as small as 16 bytes per image) which have good object-recognition performance. In contrast to previous work in the domain of object recognition, we do not choose an arbitrary intermediate representation, but explicitly learn short codes.
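
To make the idea of searching with very short codes concrete, the sketch below packs a binary descriptor into a few bytes per image and ranks a collection by Hamming distance to category examples supplied at query time. The random projection stands in for the learned descriptor and is only a placeholder; it does not reproduce the PiCoDes training procedure.

```python
# Sketch of novel-category search with compact binary codes (illustrative only).
# A random projection stands in for the learned PiCoDes descriptor; the real method
# learns the projections so that short codes preserve category information.
import numpy as np

rng = np.random.default_rng(0)
N_BITS = 128                       # 128 bits = 16 bytes per image, as in the abstract
FEATURE_DIM = 512                  # assumed dimensionality of the raw image features

projection = rng.standard_normal((FEATURE_DIM, N_BITS))  # placeholder for learned weights

def encode(features: np.ndarray) -> np.ndarray:
    """Map raw image features to packed binary codes (16 bytes each)."""
    bits = (features @ projection) > 0
    return np.packbits(bits, axis=-1)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two packed codes."""
    return int(np.unpackbits(a ^ b).sum())

# Index a collection once; the codes are tiny, so large collections fit in memory.
collection = rng.standard_normal((1000, FEATURE_DIM))
codes = encode(collection)

# At query time, a novel category is defined by a handful of example images.
category_examples = rng.standard_normal((5, FEATURE_DIM))
query_codes = encode(category_examples)

# Rank the collection by its minimum Hamming distance to any category example.
distances = [min(hamming(c, q) for q in query_codes) for c in codes]
top_matches = np.argsort(distances)[:10]
print(top_matches)
```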