Kim, Namil (NAVER LABS Corp.) | Choi, Yukyung (Clova NAVER Corp.) | Hwang, Soonmin (Korea Advanced Institute of Science and Technology (KAIST)) | Kweon, In So (Korea Advanced Institute of Science and Technology (KAIST))
To understand the real world, it is essential to perceive in all-day conditions, including cases that are not suitable for RGB sensors, especially at night. To move beyond these limitations, the innovation introduced here is a multispectral solution: depth estimation from a thermal sensor without an additional depth sensor. Based on an analysis of multispectral properties and their relevance to depth prediction, we propose an efficient and novel multi-task framework, the Multispectral Transfer Network (MTN), to estimate a depth image from a single thermal image. By exploiting geometric priors and chromaticity cues, our model can generate a pixel-wise depth image in an unsupervised manner. Moreover, we propose a new type of multi-task module, called the Interleaver, as a means of incorporating the chromaticity and fine details of skip-connections into the depth estimation framework without sharing feature layers. Lastly, we describe techniques for training stably, covering large disparities, and extending thermal images to data-driven methods for all-day conditions. In experiments, we demonstrate the superior performance and generalization of our depth estimation on the proposed multispectral stereo dataset, which includes various driving conditions.
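The abstract does not specify the unsupervised training objective, but a common signal for unsupervised depth from a stereo pair is a photometric reconstruction loss: warp one view toward the other using the predicted disparity and penalize the pixel difference. The sketch below illustrates only that core idea in NumPy; the function names and the integer-shift warp are my own simplifications, not the MTN architecture.

```python
import numpy as np

def warp_by_disparity(right, disparity):
    """Reconstruct the left view: left(x) ~= right(x - d(x)).

    `right` and `disparity` are (H, W) arrays; disparities are rounded
    to integer pixel shifts for simplicity (real models interpolate).
    """
    h, w = right.shape
    cols = np.arange(w)
    out = np.empty_like(right, dtype=float)
    for y in range(h):
        src = np.clip(cols - disparity[y].astype(int), 0, w - 1)
        out[y] = right[y, src]
    return out

def photometric_loss(left, right, disparity):
    """Mean absolute reconstruction error of the warped right view."""
    return np.mean(np.abs(left - warp_by_disparity(right, disparity)))
```

With a synthetic pair where the left image is the right image shifted by a constant disparity, the loss is lowest at the true disparity, which is the gradient signal an unsupervised depth network trains on.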
In this paper, we present an automated method for classifying astronomical objects in multispectral wide-field images. The method is divided into three main tasks. The first consists of locating and matching the objects across the multispectral images. In the second task, we create a new representation for each astronomical object from its multispectral images, and we also extract a feature set using principal component analysis. In the last task, we classify the astronomical objects using neural networks, locally weighted linear regression, and random forests. Preliminary results show that the method achieves over 93% accuracy in classifying stars and galaxies.
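The second and third tasks above form a standard pipeline: PCA to compress the per-object multispectral representation, then a classifier on the projected features. A minimal NumPy sketch of that pipeline follows; it substitutes a nearest-centroid classifier for the paper's neural networks, locally weighted regression, and random forests, and all function names are illustrative assumptions.

```python
import numpy as np

def pca_features(X, k):
    """Project samples (rows of X) onto the top-k principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Rows of Vt are the principal directions of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, mu, Vt[:k]

def nearest_centroid_fit(Z, y):
    """Per-class mean of the PCA features (stand-in for the classifiers)."""
    classes = np.unique(y)
    centroids = np.stack([Z[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(Z, classes, centroids):
    """Assign each sample to the class with the closest centroid."""
    dists = np.linalg.norm(Z[:, None, :] - centroids[None], axis=2)
    return classes[np.argmin(dists, axis=1)]
```

On well-separated synthetic "star"/"galaxy" feature vectors, even this simple stand-in classifies almost perfectly after projecting to two components, which is why PCA is a reasonable dimensionality-reduction front end here.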
A fully automated artificial intelligence (AI)-based multispectral absorbance imaging system effectively classified the function and potency of induced pluripotent stem cell-derived retinal pigment epithelial cells (iPSC-RPE) from patients with age-related macular degeneration (AMD). The findings from the system could be applied to assessing future cellular therapies, according to research presented at the 2018 ARVO annual meeting. The software, which uses convolutional neural network (CNN) deep learning algorithms, effectively evaluated release criteria for the iPSC-RPE cell-based therapy in a standard, reproducible, and cost-effective fashion. The AI-based analysis was as specific and sensitive as traditional molecular and physiological assays, without the need for human intervention. "Cells can be classified with high accuracy using nothing but absorbance images," wrote lead investigator Nathan Hotaling and colleagues from the National Institutes of Health in their poster.