Image Processing
A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging
Aguerrebere, Cecilia, Almansa, Andrés, Delon, Julie, Gousseau, Yann, Musé, Pablo
Recently, impressive denoising results have been achieved by Bayesian approaches which assume Gaussian models for the image patches. This improvement in performance can be attributed to the use of per-patch models. Unfortunately, such an approach is particularly unstable for most inverse problems beyond denoising. In this work, we propose the use of a hyperprior to model image patches, in order to stabilize the estimation procedure. The proposed restoration scheme has two main advantages. First, it is adapted to diagonal degradation matrices, and in particular to missing-data problems (e.g. inpainting of missing pixels or zooming). Second, it can deal with signal-dependent noise models, which are particularly suited to digital cameras. As such, the scheme is especially adapted to computational photography. To illustrate this point, we provide an application to high dynamic range imaging from a single image taken with a modified sensor, which shows the effectiveness of the proposed scheme.
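The paper's contribution is the hyperprior used to stabilize the per-patch Gaussian parameters, which the abstract only alludes to. As a minimal sketch of why diagonal degradation matrices (e.g. missing-pixel masks) are convenient in this setting, here is the standard Gaussian MAP (Wiener-type) restoration of a patch under an assumed prior N(mu, Sigma); the function name and toy sizes are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_map_restore(y, A_diag, noise_var, mu, Sigma):
    """MAP estimate of a patch x from y = A x + n, with x ~ N(mu, Sigma),
    n ~ N(0, diag(noise_var)), and A a diagonal degradation operator
    (e.g. a 0/1 mask for missing pixels).  noise_var may be computed from
    the observation itself, which covers signal-dependent noise models."""
    A = np.diag(A_diag)                       # diagonal degradation matrix
    S = A @ Sigma @ A.T + np.diag(noise_var)  # covariance of the observation
    K = Sigma @ A.T @ np.linalg.inv(S)        # Wiener-like gain
    return mu + K @ (y - A @ mu)

# toy usage: a 4-pixel "patch" with the last two pixels missing
mu = np.zeros(4)
Sigma = np.eye(4) + 0.5 * np.ones((4, 4))     # correlated patch prior (assumed)
mask = np.array([1.0, 1.0, 0.0, 0.0])         # observed / missing pixels
y = mask * np.array([1.2, 0.8, 0.0, 0.0])
x_hat = gaussian_map_restore(y, mask, noise_var=0.01 * np.ones(4), mu=mu, Sigma=Sigma)
```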
Time Stretch Inspired Computational Imaging
Jalali, Bahram, Suthar, Madhuri, Asghari, Mohamad, Mahjoubfar, Ata
We show that dispersive propagation of light followed by phase detection has properties that can be exploited for extracting features from waveforms. This discovery is spearheading the development of a new class of physics-inspired algorithms for feature extraction from digital images, with unique properties and superior dynamic range compared to conventional algorithms. In certain cases, these algorithms have the potential to be an energy-efficient and scalable substitute for synthetically fashioned computational techniques in practice today.
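The abstract describes emulating dispersive propagation (a frequency-dependent phase) followed by phase detection. As a rough, hedged illustration of that idea only, the sketch below multiplies the image spectrum by an all-pass phase kernel and reads out the phase of the result; the quadratic kernel and the function name are assumptions for illustration and not necessarily the phase profile used by the authors.

```python
import numpy as np

def phase_kernel_features(img, strength=0.5):
    """Toy sketch of dispersive-propagation-style feature extraction:
    apply an all-pass phase kernel in the frequency domain (emulating
    dispersion), transform back, and read out the phase, whose local
    variations emphasize edges and fine features."""
    f = np.fft.fft2(img)
    u = np.fft.fftfreq(img.shape[0])[:, None]
    v = np.fft.fftfreq(img.shape[1])[None, :]
    kernel = np.exp(-1j * strength * (u ** 2 + v ** 2))  # quadratic phase: an assumption
    return np.angle(np.fft.ifft2(f * kernel))

features = phase_kernel_features(np.random.rand(64, 64))
```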
AI can predict whether a patient is going to die
The medical machines of the future will be able to predict when you will die by analysing images of your organs, according to a new study. Researchers have developed an artificial intelligence system that can tell which patients will die almost as accurately as trained specialists. It is the first study of its kind to use medical images and artificial intelligence to show how machines can spot things some doctors can't. The findings could lead to earlier detection and treatment of a wide range of serious illnesses. An AI system developed by the University of Adelaide is able to predict whether a patient will die within five years by analysing data gathered from CT scans of their chests.
Harnessing the potential of artificial intelligence and machine learning
ARTIFICIAL intelligence (AI) and machine learning (ML) have rapidly matured over the years and are already the norm in many fields, helping companies deploy smart systems of engagement to improve efficiency, enhance security, gain insights, and deliver superior customer experiences. AI and ML are expected to completely redefine operations ― across the front, middle and back office ― creating new opportunities to bolster competitive advantage. Fast becoming mainstream throughout the automation and analytics spectrum, AI and ML are already being used across industries for diverse purposes, ranging from robo-advice in financial services (robo-advisers already have more than US$50 billion in assets under management today), sales forecasting in retail, and supply chain optimization in logistics, to robotic process automation and even medical image analysis. While the ability to learn from data and to make predictions, generate explanations, detect anomalies and make recommendations opens up substantial opportunities to unlock value, organisations are often unsure where and how to embark on their AI and ML journey.
The "Do, Think, Learn" continuum
To begin with, businesses should focus on the "Do, Think, Learn" continuum to identify the types of systems that need to be deployed.
Artificial intelligence predicts patient lifespans
The research, now published in the Nature journal Scientific Reports, has implications for the early diagnosis of serious illness and for medical intervention. Researchers from the University's School of Public Health and School of Computer Science, along with Australian and international collaborators, used artificial intelligence to analyse the medical imaging of 48 patients' chests. This computer-based analysis was able to predict which patients would die within five years with 69% accuracy -- comparable to 'manual' predictions by clinicians. This is the first study of its kind using medical images and artificial intelligence. "Predicting the future of a patient is useful because it may enable doctors to tailor treatments to the individual," says lead author Dr Luke Oakden-Rayner, a radiologist and PhD student with the University of Adelaide's School of Public Health.
Unsupervised Learning of Disentangled Representations from Video
Denton, Emily, Birodkar, Vighnesh
We present a new model, DrNET, that learns disentangled image representations from video. Our approach leverages the temporal coherence of video and a novel adversarial loss to learn a representation that factorizes each frame into a stationary part and a temporally varying component. The disentangled representation can be used for a range of tasks. For example, applying a standard LSTM to the time-varying components enables prediction of future frames. We evaluate our approach on a range of synthetic and real videos, demonstrating the ability to coherently generate hundreds of steps into the future.
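To make the "standard LSTM on the time-varying components" concrete, here is a hedged sketch of a pose-prediction module: the content vector from one frame is held fixed and concatenated with the past pose vectors, and an LSTM predicts the next pose at each step. Class name and dimensions are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PosePredictor(nn.Module):
    """Sketch: an LSTM that rolls the time-varying (pose) component forward,
    conditioned on the fixed content vector extracted from an earlier frame."""
    def __init__(self, content_dim=128, pose_dim=16, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(content_dim + pose_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, content, poses):
        # content: (B, content_dim), poses: (B, T, pose_dim)
        ctx = content.unsqueeze(1).expand(-1, poses.size(1), -1)
        h, _ = self.lstm(torch.cat([ctx, poses], dim=-1))
        return self.out(h)          # predicted pose for each next step

pred = PosePredictor()
next_poses = pred(torch.randn(4, 128), torch.randn(4, 10, 16))
# a frame decoder (not shown) would combine the content vector with each predicted pose
```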
Has AI Gone Too Far? - Automated Inference of Criminality Using Face Images
Summary: This new study claims to be able to identify criminals based on their facial characteristics. Even if the data science is good, has AI pushed too far into areas of societal taboo? This isn't the first time data science has been restricted in favor of social goals, but this study may be a trip wire that starts a long and difficult discussion about the role of AI. Has AI gone too far? This might seem like a nonsensical question to data scientists who strive every day to expand the capabilities of AI, until you read the headlines created by this just-released peer-reviewed scientific paper: Automated Inference on Criminality Using Face Images (Xiaolin Wu, McMaster Univ.
junyanz/pytorch-CycleGAN-and-pix2pix
To train a model on your own datasets, you need to create a data folder with two subdirectories, trainA and trainB, that contain images from domain A and domain B. For example, mapping landscape paintings to landscape photographs works much better than mapping portrait paintings to landscape photographs. A and B should each have their own subfolders train, val, test, etc. In /path/to/data/A/train, put training images in style A.
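As a small convenience sketch (not part of the repository), the following creates the unpaired-data folders named above under a dataset root; the test split is an assumption, so adjust to whatever splits you actually use.

```python
from pathlib import Path

# Assumed layout for unpaired (CycleGAN-style) training, as described above:
# /path/to/data/trainA, trainB (and, optionally, testA, testB).
root = Path("/path/to/data")
for split in ("train", "test"):
    for domain in ("A", "B"):
        (root / f"{split}{domain}").mkdir(parents=True, exist_ok=True)
# then copy your domain-A images into trainA/ and domain-B images into trainB/
```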
Conditional CycleGAN for Attribute Guided Face Image Generation
Lu, Yongyi, Tai, Yu-Wing, Tang, Chi-Keung
State-of-the-art techniques in Generative Adversarial Networks (GANs), such as CycleGAN, are able to learn the mapping from one image domain $X$ to another image domain $Y$ using unpaired image data. We extend CycleGAN to ${\it Conditional}$ CycleGAN such that the mapping from $X$ to $Y$ is subject to an attribute condition $Z$. Using face image generation as an application example, where $X$ is a low resolution face image, $Y$ is a high resolution face image, and $Z$ is a set of attributes related to facial appearance (e.g. gender, hair color, smile), we present our method to incorporate $Z$ into the network, such that the hallucinated high resolution face image $Y'$ not only satisfies the low resolution constraint inherent in $X$, but also the attribute condition prescribed by $Z$. Using a face feature vector extracted from a face verification network as $Z$, we demonstrate the efficacy of our approach on identity-preserving face image super-resolution. Our approach is general and applicable to high-quality face image generation where specific facial attributes can be controlled easily in the automatically generated results.
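A common way to inject an attribute condition into an image-to-image generator, and one plausible reading of "incorporate $Z$ into the network", is to tile $Z$ spatially and concatenate it with the input as extra channels. The sketch below shows that pattern in PyTorch; the layer sizes and class name are illustrative and not the architecture from the paper.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Sketch of a generator conditioned on an attribute vector Z: Z is tiled
    over the spatial grid and concatenated with the input image as extra
    channels (layer sizes here are illustrative, not the paper's)."""
    def __init__(self, in_ch=3, z_dim=18, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + z_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, z):
        # x: (B, 3, H, W) low-res face, z: (B, z_dim) attribute vector
        z_map = z[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, z_map], dim=1))

g = CondGenerator()
y = g(torch.randn(2, 3, 64, 64), torch.randn(2, 18))
```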
Real Time Image Saliency for Black Box Classifiers
In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier. We train a masking model to manipulate the scores of the classifier by masking salient parts of the input image. Our model generalises well to unseen images and requires a single forward pass to perform saliency detection, making it suitable for use in real-time systems. We test our approach on the CIFAR-10 and ImageNet datasets and show that the produced saliency maps are easily interpretable, sharp, and free of artifacts. We suggest a new metric for saliency and test our method on the ImageNet object localisation task, achieving results that outperform other weakly supervised methods.
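For intuition about how a masking model can "manipulate the scores of the classifier", here is a reduced, hedged sketch of such an objective: replace the region the mask marks as salient with a blurred version of the image, push the classifier's log-probability of the target class down on that edited image, and penalise large masks. The paper's full objective also includes preservation and regularisation terms; function and argument names here are illustrative.

```python
import torch

def saliency_mask_loss(classifier, x, mask, target, area_weight=1.0):
    """Reduced sketch of a masking objective.
    classifier: any nn.Module returning logits
    x: (B, 3, H, W) images, mask: (B, 1, H, W) saliency in [0, 1]
    target: (B,) int64 class indices"""
    blurred = torch.nn.functional.avg_pool2d(x, 11, stride=1, padding=5)
    x_removed = (1 - mask) * x + mask * blurred        # salient pixels replaced by blur
    logp = torch.log_softmax(classifier(x_removed), dim=1)
    destroy = logp.gather(1, target[:, None]).mean()   # drive target log-prob down
    return destroy + area_weight * mask.mean()         # keep the mask small
```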