Di, Zichao Wendy
Integrating Generative and Physics-Based Models for Ptychographic Imaging with Uncertainty Quantification
Ekmekci, Canberk, Bicer, Tekin, Di, Zichao Wendy, Deng, Junjing, Cetin, Mujdat
Ptychography is a scanning coherent diffractive imaging technique that enables imaging of nanometer-scale features in extended samples. One main challenge is that widely used iterative image reconstruction methods often require a significant amount of overlap between adjacent scan locations, leading to large data volumes and prolonged acquisition times. To address this key limitation, this paper proposes a Bayesian inversion method for ptychography that performs effectively even with less overlap between neighboring scan locations. Furthermore, the proposed method can quantify the inherent uncertainty in the ptychographic object that arises from the ill-posed nature of the ptychographic inverse problem. At a high level, the proposed method first uses a deep generative model to learn the prior distribution of the object and then draws samples from the posterior distribution of the object with a Markov chain Monte Carlo algorithm. Our results from simulated ptychography experiments show that the proposed framework consistently outperforms a widely used iterative reconstruction algorithm in cases of reduced overlap. Moreover, the proposed framework provides uncertainty estimates that closely correlate with the true error, which is not available in practice. The project website is available here.
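To make the two-stage idea concrete, here is a minimal, self-contained sketch of latent-space posterior sampling; it is not the paper's implementation. A fixed random linear map G stands in for the trained deep generative model, the ptychographic forward model is reduced to a single 1-D far-field intensity pattern with a Gaussian noise approximation, and random-walk Metropolis stands in for whatever MCMC scheme the paper actually uses.

```python
# A minimal sketch, not the authors' implementation. The trained generator is
# replaced by a fixed random linear map G, and the ptychographic forward model
# is reduced to one 1-D far-field intensity pattern with Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_object, sigma = 8, 64, 0.1

G = rng.normal(size=(d_object, d_latent)) / np.sqrt(d_latent)  # stand-in generator
probe = np.exp(-np.linspace(-2, 2, d_object) ** 2)             # toy illumination probe

def forward(obj):
    """Toy far-field intensity of the probe-object product."""
    return np.abs(np.fft.fft(probe * obj)) ** 2

y = forward(G @ rng.normal(size=d_latent)) + sigma * rng.normal(size=d_object)

def log_post(z):
    """Unnormalized log posterior: Gaussian likelihood plus N(0, I) latent prior."""
    r = y - forward(G @ z)
    return -0.5 * np.sum(r ** 2) / sigma ** 2 - 0.5 * np.sum(z ** 2)

# Random-walk Metropolis in the latent space of the generative model.
z, lp, samples = np.zeros(d_latent), log_post(np.zeros(d_latent)), []
for it in range(20000):
    z_new = z + 0.05 * rng.normal(size=d_latent)
    lp_new = log_post(z_new)
    if np.log(rng.uniform()) < lp_new - lp:                    # accept/reject step
        z, lp = z_new, lp_new
    if it >= 5000 and it % 10 == 0:                            # thin after burn-in
        samples.append(G @ z)                                  # object-space sample

samples = np.asarray(samples)
recon, uncertainty = samples.mean(axis=0), samples.std(axis=0)
```

The design point the sketch preserves is that sampling happens in the low-dimensional latent space, so the generative prior constrains every posterior sample, while the pixel-wise spread of the mapped samples supplies the uncertainty estimate.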
Uncertainty quantification for ptychography using normalizing flows
Dasgupta, Agnimitra, Di, Zichao Wendy
Ptychography, an essential tool for high-resolution, nondestructive material characterization, presents a challenging large-scale, nonlinear, and non-convex inverse problem; at the same time, its intrinsic photon statistics create clear opportunities for statistics-based deep learning approaches, an avenue that remains underexplored. In this work, we explore normalizing flows to obtain a surrogate for the high-dimensional posterior, which also enables characterization of the uncertainty associated with the reconstruction: a capability that is extremely desirable for judging reconstruction quality in the absence of ground truth, spotting spurious artifacts, and guiding future experiments with the returned uncertainty patterns. We demonstrate the performance of the proposed method on a synthetic sample with added noise and in various physical experimental settings.
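As a rough illustration of the flow-as-posterior-surrogate idea, and not the paper's model, the sketch below fits the simplest possible flow, a single affine shift-and-scale layer, to a 2-D toy posterior by minimizing the reverse KL divergence; real normalizing flows stack many invertible layers, and the ptychographic posterior is far higher-dimensional. PyTorch is assumed here only for automatic differentiation.

```python
# A minimal sketch of flow-based variational inference, not the paper's model.
# A 2-D banana-shaped density stands in for the ptychographic posterior, and a
# single affine layer stands in for a deep normalizing flow.
import torch

torch.manual_seed(0)

def log_target(x):
    """Unnormalized log density of a toy banana-shaped posterior."""
    return -0.5 * (x[:, 0] ** 2 + (x[:, 1] - x[:, 0] ** 2) ** 2 / 0.5)

mu = torch.zeros(2, requires_grad=True)       # flow shift parameters
log_s = torch.zeros(2, requires_grad=True)    # flow log-scale parameters
opt = torch.optim.Adam([mu, log_s], lr=5e-2)

for step in range(2000):
    eps = torch.randn(256, 2)                 # samples from the base Gaussian
    x = mu + torch.exp(log_s) * eps           # push through the flow
    # log q(x) by change of variables: base log density minus log|det Jacobian|
    log_q = (-0.5 * eps.pow(2) - 0.9189385).sum(1) - log_s.sum()
    loss = (log_q - log_target(x)).mean()     # reverse KL(q || posterior)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling the trained surrogate is cheap; the spread of the samples is the
# uncertainty estimate that would accompany the reconstruction.
with torch.no_grad():
    samples = mu + torch.exp(log_s) * torch.randn(5000, 2)
    print(samples.mean(0), samples.std(0))
```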
Gaussian Process for Tomography
Dasgupta, Agnimitra, Graziani, Carlo, Di, Zichao Wendy
Tomographic imaging refers to the reconstruction of a 3D object from its 2D projections, obtained by sectioning the object with some kind of penetrating wave from many different directions. It has had a revolutionary impact in fields ranging from biology, physics, and chemistry to astronomy [1, 2]. The technique requires accurate image reconstruction, however, and the resulting reconstruction problem is an ill-posed optimization problem because of insufficient measurements [3]. A direct consequence of ill-posedness is that the reconstruction does not have a unique solution, so quantifying the solution quality is challenging given the absence of ground truth and the presence of measurement noise. Moreover, ill-posedness creates a requirement for regularization that introduces new parameters into the problem. The choice of regularization parameters can lead to substantial variations in the reconstruction, and ascertaining their optimal values is difficult without access to ground truth [4]. The transition from an optimization perspective on tomographic inversion to a Bayesian statistical perspective provides a useful reframing of these issues: the ill-posedness of the optimization view is replaced by quantified uncertainty in the statistical view, while regularization appears in the guise of parameter estimation.
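The conjugate-Gaussian case makes this reframing concrete. In the minimal sketch below (assumptions: a tiny 1-D "image", a random matrix A standing in for the projection operator, and a squared-exponential Gaussian process prior), the posterior mean plays the role of the regularized reconstruction, the posterior covariance quantifies the uncertainty that ill-posedness would otherwise hide, and the kernel hyperparameters occupy the slot formerly held by regularization parameters.

```python
# A minimal sketch of the Bayesian view for a linear tomographic model, not
# the paper's full method. A random matrix A stands in for the projection
# (Radon) operator, and a squared-exponential GP prior regularizes the image.
import numpy as np

rng = np.random.default_rng(1)
n, m, sigma = 32, 12, 0.05                    # pixels, measurements, noise std

x_grid = np.linspace(0, 1, n)
K = np.exp(-0.5 * (x_grid[:, None] - x_grid[None, :]) ** 2 / 0.1 ** 2)  # prior cov

A = rng.normal(size=(m, n)) / np.sqrt(n)      # stand-in projection operator
f_true = np.sin(4 * np.pi * x_grid)
y = A @ f_true + sigma * rng.normal(size=m)   # insufficient, noisy measurements

# Conjugate Gaussian update for f ~ N(0, K) under y = A f + noise.
S = A @ K @ A.T + sigma ** 2 * np.eye(m)      # marginal covariance of y
gain = np.linalg.solve(S, A @ K).T            # equals K A^T S^{-1} (K, S symmetric)
f_mean = gain @ y                             # reconstruction: posterior mean
f_cov = K - gain @ A @ K                      # posterior covariance
f_std = np.sqrt(np.diag(f_cov))               # per-pixel uncertainty
```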