computational imaging



FourierNets enable the design of highly non-local optical encoders for computational imaging

Neural Information Processing Systems

Differentiable simulations of optical systems can be combined with deep learning-based reconstruction networks to enable high performance computational imaging via end-to-end (E2E) optimization of both the optical encoder and the deep decoder. This has enabled imaging applications such as 3D localization microscopy, depth estimation, and lensless photography via the optimization of local optical encoders. More challenging computational imaging applications, such as 3D snapshot microscopy which compresses 3D volumes into single 2D images, require a highly non-local optical encoder. We show that existing deep network decoders have a locality bias which prevents the optimization of such highly non-local optical encoders. We address this with a decoder based on a shallow neural network architecture using global kernel Fourier convolutional neural networks (FourierNets). We show that FourierNets surpass existing deep network based decoders at reconstructing photographs captured by the highly non-local DiffuserCam optical encoder. Further, we show that FourierNets enable E2E optimization of highly non-local optical encoders for 3D snapshot microscopy. By combining FourierNets with a large-scale multi-GPU differentiable optical simulation, we are able to optimize non-local optical encoders 170× to 7372× larger than prior state of the art, and demonstrate the potential for ROI-type specific optical encoding with a programmable microscope.
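The global-kernel Fourier convolution at the heart of a FourierNet can be illustrated with a minimal NumPy sketch (the function and variable names here are illustrative, not from the paper): a kernel as large as the image is applied as a pointwise multiplication of learned complex weights in the Fourier domain, which is what removes the locality bias of small spatial kernels.

```python
import numpy as np

def fourier_conv2d(image, spectral_weights):
    """Convolve an image with a global (image-sized) kernel via
    pointwise multiplication in the Fourier domain."""
    # FFT of the input; the learned kernel is parameterized directly
    # by its Fourier-domain weights, so no kernel FFT is needed.
    img_f = np.fft.fft2(image)
    out_f = img_f * spectral_weights
    # Back to the spatial domain; keep the real part, since a
    # real-valued output is expected for image reconstruction.
    return np.fft.ifft2(out_f).real

# A toy 64x64 "measurement" and a random learned spectrum.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
weights = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
recon = fourier_conv2d(img, weights)
print(recon.shape)  # (64, 64)
```

With all-ones spectral weights the operation reduces to the identity, which is a quick sanity check that the FFT round-trip is correct.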



Data-driven discovery of mechanical models directly from MRI spectral data

Heesterbeek, D. G. J., van Riel, M. H. C., van Leeuwen, T., Berg, C. A. T. van den, Sbrizzi, A.

arXiv.org Artificial Intelligence

Finding interpretable biomechanical models can provide insight into the functionality of organs with regard to physiology and disease. However, identifying broadly applicable dynamical models for in vivo tissue remains challenging. In this proof-of-concept study, we propose a reconstruction framework for data-driven discovery of dynamical models from experimentally obtained undersampled MRI spectral data. The method makes use of the previously developed spectro-dynamic framework, which allows for reconstruction of displacement fields at the high spatial and temporal resolution required for model identification. The proposed framework combines this method with data-driven discovery of interpretable models using Sparse Identification of Non-linear Dynamics (SINDy). The reconstruction algorithm is designed so that a symbiotic relation is created between the reconstruction of the displacement fields and the model identification. Our method does not rely on periodicity of the motion. It is successfully validated using spectral data of a dynamic phantom gathered on a clinical MRI scanner. The dynamic phantom is programmed to perform motion adhering to 5 different (non-linear) ordinary differential equations. The proposed framework performed better than a 2-step approach in which the displacement fields were first reconstructed from the undersampled data without any information on the model, followed by data-driven discovery of the model using the reconstructed displacement fields. This study serves as a first step in the direction of data-driven discovery of in vivo models.
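The SINDy step can be sketched compactly: given a library of candidate functions Θ(X) evaluated on the data and the measured derivatives, it finds a sparse coefficient matrix by sequentially thresholded least squares. A minimal illustration (the helper name and the toy system dx/dt = -2x are ours, not from the paper):

```python
import numpy as np

def sindy(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: find sparse Xi
    such that dXdt ≈ Theta @ Xi."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0  # zero out negligible library terms
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                # Refit each state's equation using only the surviving terms.
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi

# Toy example: recover dx/dt = -2x from a library [1, x, x^2].
x = np.linspace(-1.0, 1.0, 50)
Theta = np.column_stack([np.ones_like(x), x, x**2])
dxdt = (-2.0 * x).reshape(-1, 1)
Xi = sindy(Theta, dxdt)
print(Xi.ravel())  # only the x-term survives, with coefficient -2
```

The thresholding parameter controls sparsity; in practice it is tuned so that only the dominant library terms remain.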


CodedVO: Coded Visual Odometry

Shah, Sachin, Rajyaguru, Naitri, Singh, Chahat Deep, Metzler, Christopher, Aloimonos, Yiannis

arXiv.org Artificial Intelligence

Autonomous robots often rely on monocular cameras for odometry estimation and navigation. However, the scale ambiguity problem presents a critical barrier to effective monocular visual odometry. In this paper, we present CodedVO, a novel monocular visual odometry method that overcomes the scale ambiguity problem by employing custom optics to physically encode metric depth information into imagery. By incorporating this information into our odometry pipeline, we achieve state-of-the-art performance in monocular visual odometry with a known scale. We evaluate our method in diverse indoor environments and demonstrate its robustness and adaptability. We achieve a 0.08m average trajectory error in odometry evaluation on the ICL-NUIM indoor odometry dataset.


Integration of Programmable Diffraction with Digital Neural Networks

Rahman, Md Sadman Sakib, Ozcan, Aydogan

arXiv.org Artificial Intelligence

Optical imaging and sensing systems based on diffractive elements have seen massive advances over the last several decades. Earlier generations of diffractive optical processors were, in general, designed to deliver information to an independent system that was separately optimized, primarily driven by human vision or perception. With the recent advances in deep learning and digital neural networks, there have been efforts to establish diffractive processors that are jointly optimized with digital neural networks serving as their back-end. These jointly optimized hybrid (optical+digital) processors establish a new "diffractive language" between input electromagnetic waves that carry analog information and neural networks that process the digitized information at the back-end, providing the best of both worlds. Such hybrid designs can process spatially and temporally coherent, partially coherent, or incoherent input waves, providing universal coverage for any spatially varying set of point spread functions that can be optimized for a given task, executed in collaboration with digital neural networks. In this article, we highlight the utility of this exciting collaboration between engineered and programmed diffraction and digital neural networks for a diverse range of applications. We survey some of the major innovations enabled by the push-pull relationship between analog wave processing and digital neural networks, also covering the significant benefits that could be reaped through the synergy between these two complementary paradigms.


A theoretical framework for self-supervised MR image reconstruction using sub-sampling via variable density Noisier2Noise

Millard, Charles, Chiew, Mark

arXiv.org Artificial Intelligence

In recent years, there has been growing attention to leveraging the statistical modeling capabilities of neural networks for reconstructing sub-sampled Magnetic Resonance Imaging (MRI) data. Most proposed methods assume the existence of a representative fully-sampled dataset and use fully-supervised training. However, for many applications fully-sampled training data is not available, and may be highly impractical to acquire. The development and understanding of self-supervised methods, which use only sub-sampled data for training, are therefore highly desirable. This work extends the Noisier2Noise framework, which was originally constructed for self-supervised denoising tasks, to variable-density sub-sampled MRI data. We use the Noisier2Noise framework to analytically explain the performance of Self-Supervised Learning via Data Undersampling (SSDU), a recently proposed method that performs well in practice but until now lacked theoretical justification. Further, we propose two modifications of SSDU that arise as a consequence of the theoretical developments. Firstly, we propose partitioning the sampling set so that the subsets have the same type of distribution as the original sampling mask. Secondly, we propose a loss weighting that compensates for the sampling and partitioning densities. On the fastMRI dataset we show that these changes significantly improve SSDU's image restoration quality and robustness to the partitioning parameters.
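The SSDU construction the authors analyze can be sketched as follows: the acquired sampling set is partitioned into an input subset and a loss subset, and the network is trained to predict the held-out measurements from the rest. This toy NumPy sketch uses a uniform Bernoulli partition for brevity; the paper's first modification replaces this with a variable-density partition matching the original mask's distribution, and its second adds a density-compensating loss weighting (all names here are illustrative):

```python
import numpy as np

def partition_mask(omega, p_lambda, rng):
    """Split an acquired sampling mask `omega` into an input subset
    and a disjoint loss subset by Bernoulli selection."""
    select = (rng.random(omega.shape) < p_lambda) & omega
    input_mask = omega & ~select  # fed to the network
    loss_mask = select            # held out; loss is computed here
    return input_mask, loss_mask

rng = np.random.default_rng(1)
# A toy 1D variable-density mask: higher sampling probability
# at low spatial frequencies (center of k-space).
n = 128
freq = np.abs(np.fft.fftfreq(n))
p_sample = np.clip(1.0 - 2.0 * freq, 0.1, 1.0)
omega = rng.random(n) < p_sample

inp, loss = partition_mask(omega, 0.3, rng)
```

The two subsets are disjoint by construction and their union recovers the original sampling set, which is the property the self-supervised loss relies on.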


AI IN HEALTH: Applications, challenges, limitations,

#artificialintelligence

What are possible applications for AI in this field, and how can we develop and use the technology in a way that is transparent and compatible with the public interest, while stimulating and driving innovation in the sector? BIOTOPIA is delighted to welcome a panel of experts from Helmholtz Center Munich.




Year In Review 2019: Wonders in vision systems design never cease

#artificialintelligence

As in any tech-centric industry, new techniques and technologies in machine vision and image processing often create enthusiasm that readily morphs into hype. The line between hype and efficacy lies in successful implementation. Vision Systems Design, throughout 2019, has chronicled the space where the hype behind new technologies ends and the tally of useful applications begins. Our recent Solutions in Vision 2020 global audience survey focused on some of the hottest vision technologies (deep learning, hyperspectral/multispectral imaging, polarization, embedded vision, 3D imaging, and computational imaging), who is using them now, and when vision professionals expect to be using them in the future. We have also covered these technologies throughout the year, demonstrating their current importance and examining the directions in which they will continue to mature in the vision industry.