Sensing and Signal Processing


Symposium_Overview

#artificialintelligence

After updating or downloading the app, to use it for EI 2020, tap the "More" option in the bottom right corner of the app screen and select "Change Event" from the list of options. Find EI 2020 in the list of available events and download its content to begin planning your EI 2020 itinerary, or go directly to the online ScholarOne MyItinerary website. Electronic Imaging 2020 brings together 17 technical conferences covering all aspects of electronic imaging. (Note: the Photography, Mobile, and Immersive Imaging (PMII) and Image Sensors and Imaging Systems (IMSE) conferences have merged into the Imaging Sensors and Systems (ISS) conference for 2020.) IS&T EI proceedings are Open Access.


The role of artificial intelligence in medical imaging research BJR Open

#artificialintelligence

Researchers have successfully applied AI in radiology to identify findings both detectable and undetectable by the human eye. Radiology is now moving from a subjective perceptual skill to a more objective science.2,3 In radiation oncology, AI has been successfully applied to automatic tumor and organ segmentation4–8 and to tumor monitoring during treatment for adaptive therapy. In 2012, the Dutch researcher Lambin first proposed the concept of "radiomics," defining it as the extraction of a large number of image features from radiographic images with a high-throughput approach.9 As AI has become more popular and more medical images than ever are being generated, radiomics has had good reason to evolve rapidly.
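The high-throughput idea behind radiomics can be sketched in a few lines: compute many quantitative features from an image array automatically. Real radiomics pipelines extract hundreds of shape, intensity, and texture features; the sketch below computes only a handful of first-order intensity statistics, and the synthetic "scan" and function name are illustrative, not from any published pipeline.

```python
import numpy as np

def radiomic_features(image):
    """Extract a few simple first-order features from a 2D image array.

    A minimal stand-in for high-throughput radiomic feature extraction:
    real pipelines compute hundreds of shape, intensity, and texture
    features; this sketch shows basic intensity statistics only.
    """
    flat = image.ravel().astype(float)
    # Histogram-based entropy over 32 intensity bins
    hist, _ = np.histogram(flat, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return {
        "mean": flat.mean(),
        "std": flat.std(),
        "skewness": ((flat - flat.mean()) ** 3).mean() / (flat.std() ** 3 + 1e-12),
        "entropy": entropy,
    }

# Example: a synthetic 64x64 "scan" with a brighter circular region
rng = np.random.default_rng(0)
scan = rng.normal(100, 10, (64, 64))
yy, xx = np.mgrid[:64, :64]
scan[(yy - 32) ** 2 + (xx - 32) ** 2 < 100] += 50  # simulated bright lesion
features = radiomic_features(scan)
```

Run over many patients' scans, a feature table like this is what downstream radiomics models are trained on.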


DeepHuman's Project Page

#artificialintelligence

We propose DeepHuman, a deep learning based framework for 3D human reconstruction from a single RGB image. Since this problem is highly intractable, we adopt a stage-wise, coarse-to-fine method consisting of three steps, namely inner body estimation, outer surface reconstruction and frontal surface detail refinement. Once an inner body is estimated from the given image, our method generates a dense semantic representation from the inner body to encode body shape and pose and to bridge the 2D image plane and 3D space. An image-guided volume-to-volume translation CNN is introduced to reconstruct the outer surface given the input image and the dense semantic representation. One key feature of our network is that it fuses different scales of image features into the 3D space through volumetric feature transformation, which helps to recover details of the subject's outer surface geometry.
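The volumetric feature transformation described above can be illustrated with a minimal sketch: lifting a 2D image-feature map into a 3D feature volume so that each voxel column shares the features of the pixel it projects to. This is a simplified, orthographic-camera stand-in, not the paper's actual operation; the function name and shapes are assumptions made for illustration.

```python
import numpy as np

def lift_features_to_volume(feat2d, depth):
    """Lift a 2D feature map (C, H, W) into a 3D feature volume (C, D, H, W)
    by replicating each pixel's features along the depth axis, assuming an
    orthographic camera. A simplified stand-in for fusing image features
    into 3D space via volumetric feature transformation.
    """
    c, h, w = feat2d.shape
    # Broadcast along a new depth dimension: every voxel in a depth column
    # receives the image features of the pixel it projects to.
    return np.broadcast_to(feat2d[:, None, :, :], (c, depth, h, w)).copy()

feat = np.random.default_rng(1).normal(size=(8, 16, 16))  # toy feature map
vol = lift_features_to_volume(feat, depth=16)
```

In the actual network this lifting would be done at multiple feature scales inside a volume-to-volume translation CNN; the sketch only shows the geometric idea of the 2D-to-3D bridge.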


Segmentation: U-Net, Mask R-CNN, and Medical Applications

#artificialintelligence

Segmentation has numerous applications in medical imaging (locating tumors, measuring tissue volumes, studying anatomy, planning surgery, etc.), self-driving cars (localizing pedestrians, other vehicles, brake lights, etc.), satellite image interpretation (buildings, roads, forests, crops), and more. This post will introduce the segmentation task. The first section discusses the difference between semantic segmentation and instance segmentation; the final section surveys many example applications in medical image segmentation and video segmentation. Here is another illustration of the difference: in semantic segmentation, all "chair" pixels have the same label, while in instance segmentation the model has identified each specific chair. The U-Net paper (available here: Ronneberger et al. 2015) introduces a semantic segmentation model architecture that has become very popular, with over 10,000 citations (fifty different follow-up papers are listed in this repository).
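The semantic-vs-instance distinction can be made concrete with a toy label map; the scene and label values below are invented purely for illustration.

```python
import numpy as np

# Toy 4x6 scene with two "chairs" (drawn as rectangles) on a background.
# Semantic segmentation: every chair pixel gets the same class label.
# Instance segmentation: each chair additionally gets a unique instance ID.
semantic = np.zeros((4, 6), dtype=int)   # 0 = background, 1 = chair
instance = np.zeros((4, 6), dtype=int)   # 0 = background, k = chair #k

semantic[1:3, 0:2] = 1; instance[1:3, 0:2] = 1   # first chair
semantic[1:3, 4:6] = 1; instance[1:3, 4:6] = 2   # second chair

n_chair_pixels = int((semantic == 1).sum())      # pixels of class "chair"
n_instances = len(np.unique(instance)) - 1       # distinct chairs (minus bg)
```

A semantic model like U-Net predicts only the first map; an instance model like Mask R-CNN effectively predicts the second, separating the two chairs even though they share a class.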


Utah police look to artificial intelligence for assistance

#artificialintelligence

A Utah city police department is considering a partnership with an artificial intelligence company in an effort to help the law enforcement agency work more efficiently. The Springville police may work with technology firm Banjo to help improve the response time to emergencies, The Daily Herald reported. The Park City company can gather real-time data from various sources including 911 dispatch calls, traffic cameras, emergency alarms, and social media posts and report related information to the police, officials said. The Springville City Council heard a presentation by a Banjo representative during its Jan. 7 meeting but did not immediately make a decision about using the technology. Banjo entered an agreement last July with the Utah Attorney General's Office and the Utah Department of Public Safety to let the agencies use Banjo's technology to "reduce time and resources typically required to generate leads, and instead focus their efforts on incident response," according to a report to the state Legislature.


Research Infographic

#artificialintelligence

Researchers of the ICAI Group (Computational Intelligence and Image Analysis) of the University of Malaga (UMA) have designed an unprecedented method that is capable of improving brain images obtained through magnetic resonance imaging using artificial intelligence. This new model manages to increase image quality from low resolution to high resolution without distorting the patients' brain structures, using a deep learning artificial neural network, a model based on the functioning of the human brain, that "learns" this process. "Deep learning is based on very large neural networks, and so is its capacity to learn, reaching the complexity and abstraction of a brain," explains researcher Karl Thurnhofer, main author of this study, who adds that, thanks to this technique, the identification can be performed on its own, without supervision, an effort the human eye would not be capable of. Published in the scientific journal Neurocomputing, this study represents a scientific breakthrough, since the algorithm developed by the UMA yields more accurate results in less time, with clear benefits for patients. "So far, the acquisition of quality brain images has depended on the time the patient remained immobilized in the scanner; with our method, image processing is carried out later on the computer," explains Thurnhofer.
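The UMA paper itself should be consulted for the actual model and results; as a hedged illustration of how low-to-high resolution reconstruction is commonly evaluated, here is a minimal PSNR sketch against a naive nearest-neighbour upscaling baseline, the kind of baseline a learned super-resolution model is expected to beat. All data below is synthetic and the function names are illustrative.

```python
import numpy as np

def psnr(reference, reconstruction, max_val=255.0):
    """Peak signal-to-noise ratio: a standard metric for judging how closely
    an upscaled image matches a high-resolution reference (higher is better)."""
    mse = np.mean((reference.astype(float) - reconstruction.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def nearest_upscale(img, factor):
    """Naive nearest-neighbour upscaling: the trivial baseline a learned
    super-resolution network should outperform."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
hi = rng.integers(0, 256, (32, 32)).astype(float)  # stand-in "high-res" slice
lo = hi[::2, ::2]                                  # simulated low-res acquisition
score = psnr(hi, nearest_upscale(lo, 2))           # baseline reconstruction quality
```

A trained super-resolution network would replace `nearest_upscale` and should score a higher PSNR on held-out scans.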


Computer Vision and Image Analytics

#artificialintelligence

Over the past few months, I've been working on a fascinating project with one of the world's largest pharmaceutical companies to apply SAS Viya computer vision to help identify potential quality issues on the production line as part of the validated inspection process. As I know the application of these types of AI and ML techniques is of real interest to many high-tech manufacturing organisations as part of their Manufacturing 4.0 initiatives, I thought I'd take the opportunity to share my experiences with a wider audience, so I hope you enjoy this blog post. For obvious reasons, I can't share specifics of the organisation or product, so please don't ask me to. But I hope you find this article interesting and informative, and if you would like to know more about the techniques then please feel free to contact me. Quality inspections are a key part of the manufacturing process, and while many of these inspections can be automated using a range of techniques, tests and measurements, some issues are still best identified by the human eye.
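Without revealing anything about the real system, the core idea of automated visual inspection can be sketched as comparing a sample image against a defect-free "golden" reference. The thresholds, images, and function name below are purely illustrative; a production system such as the SAS Viya model discussed here would use a trained computer vision model rather than a fixed pixel threshold.

```python
import numpy as np

def flag_defect(golden, sample, pixel_tol=30, area_tol=5):
    """Flag a production-line image as defective when enough pixels deviate
    from a defect-free "golden" reference image. The two tolerances are
    illustrative, not tuned values."""
    diff = np.abs(golden.astype(int) - sample.astype(int))
    defect_pixels = int((diff > pixel_tol).sum())
    return defect_pixels > area_tol, defect_pixels

golden = np.full((20, 20), 128, dtype=np.uint8)   # clean reference image
good = golden.copy()
bad = golden.copy()
bad[5:9, 5:9] = 220                               # simulated surface defect

ok_flag, _ = flag_defect(golden, good)            # clean part passes
bad_flag, n = flag_defect(golden, bad)            # defective part is flagged
```

The area tolerance is the key design choice: it trades off sensitivity to real defects against false alarms from lighting or alignment noise, which is exactly where a learned model earns its keep over a fixed threshold.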


Tell, draw, repeat--iterative text-based image generation

#artificialintelligence

When people create, it's not very often they achieve what they're looking for on the first try. Creating--whether it be a painting, a paper, or a machine learning model--is a process that has a starting point from which new elements and ideas are added and old ones are modified and discarded, sometimes again and again, until the work accomplishes its intended purpose: to evoke emotion, to convey a message, to complete a task. Since I began my work as a researcher, machine learning systems have gotten really good at a particular form of creation that has caught my attention: image generation. Looking at some of the images generated by systems such as BigGAN and ProGAN, you wouldn't be able to tell they were produced by a computer. In these advancements, my colleagues and I see an opportunity to help people create visuals and better express themselves through the medium--from improving the user experience of designing avatars in the gaming world to making it easier to edit personal photos and produce digital art in software like Photoshop, which can be challenging for those unfamiliar with such programs' capabilities.


Why Does Data Science Matter in Advanced Image Recognition?

#artificialintelligence

Image recognition is typically an image processing task: identifying people, patterns, logos, objects, places, colors, and shapes, everything that can be seen in an image. Advanced image recognition, in turn, is a framework for employing AI and deep learning to achieve greater automation across identification processes. As vision and speech are two crucial elements of human interaction, data science is able to imitate these human tasks using computer vision and speech recognition technologies. It has already started emulating them and has been leveraged in different fields, particularly e-commerce. Advancements in machine learning and the use of high-bandwidth data services are fortifying the applications of image recognition.