
Supplementary Material

Neural Information Processing Systems

In the following sections, we provide additional details on various elements of the paper that could not be fully expanded upon in the main text. This begins with an in-depth explanation of the proposed dataset, covering both the methods by which each of its components was produced and its exact contents; these are CAD objects, and so possess geometry and texture information. Finally, a comprehensive examination of the prediction of vision charts is provided, again with detailed explanations of architectures, experimental procedures, hyper-parameters, and additional results.



Neural Information Processing Systems

The current version of Predify assumes that there is no gap between the encoders; one can easily override this default by providing all the details for a PCoder.

A.3 Execution Time

Since we used a variable number of GPUs for the different experiments, an exact execution time is hard to pinpoint. We expect that the results could be further improved with a more extensive and systematic hyperparameter search. In other words, their training hyperparameters appear to have been optimised for their predictive coding network, but not, or not as much, for their feedforward baseline.





A Appendix

Neural Information Processing Systems

Both VGG16 and EfficientNetB0 are converted to predictive coding networks, PVGG16 and PEfficientNetB0, using the Predify package. The current version of Predify assumes that there is no gap between the encoders; one can easily override this default by providing all the details for a PCoder. To verify the functionality of Predify's default settings, we applied them to both networks. VGG16 consists of five convolution blocks and a classification head; each convolution block contains two or three convolution+ReLU layers with a max-pooling layer on top.
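To make the recurrent dynamics concrete, here is a minimal sketch of a single PCoder-style update in the spirit of the predictive coding networks described above. This is a hedged illustration, not the Predify API: the real package wraps PyTorch modules, while here the encoder (`W_f`) and decoder (`W_d`) are plain linear maps, the error-gradient term is omitted, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W_f = rng.standard_normal((8, 4)) * 0.1  # encoder: layer below (dim 4) -> this layer (dim 8)
W_d = rng.standard_normal((8, 6)) * 0.1  # decoder: layer above (dim 6) -> this layer (dim 8)

def pcoder_step(r_below, r_n, r_above, beta=0.4, lam=0.3):
    """One recurrent update: feedforward drive + feedback prediction + memory."""
    ff = W_f @ r_below               # feedforward term, f_n(r_{n-1})
    fb = W_d @ r_above               # feedback prediction from the layer above
    mem = (1.0 - beta - lam) * r_n   # leak/memory term keeping the previous state
    return beta * ff + lam * fb + mem
```

Note that with `beta=1.0` and `lam=0.0` the update reduces to a pure feedforward pass, which is one way such a network can fall back to its backbone behaviour.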


Switchable Deep Beamformer

Khan, Shujaat, Huh, Jaeyoung, Ye, Jong Chul

arXiv.org Machine Learning

Recent proposals of deep beamformers using deep neural networks have attracted significant attention as computationally efficient alternatives to adaptive and compressive beamformers. Moreover, deep beamformers are versatile in that image post-processing algorithms can be combined with the beamforming. Unfortunately, with the current technology, a separate beamformer must be trained and stored for each application, demanding significant scanner resources. To address this problem, here we propose a {\em switchable} deep beamformer that can produce various types of output such as DAS, speckle removal, deconvolution, etc., using a single network with a simple switch. In particular, the switch is implemented through Adaptive Instance Normalization (AdaIN) layers, so that various outputs can be generated by merely changing the AdaIN code. Experimental results using B-mode focused ultrasound confirm the flexibility and efficacy of the proposed methods for various applications.


Bridging Maximum Likelihood and Adversarial Learning via $\alpha$-Divergence

Zhao, Miaoyun, Cong, Yulai, Dai, Shuyang, Carin, Lawrence

arXiv.org Machine Learning

Maximum likelihood (ML) and adversarial learning are two popular approaches for training generative models, and from many perspectives these techniques are complementary. ML learning encourages the capture of all data modes, and it is typically characterized by stable training. However, ML learning tends to distribute probability mass diffusely over the data space, $e.g.$, yielding blurry synthetic images. Adversarial learning is well known to synthesize highly realistic natural images, despite practical challenges like mode dropping and delicate training. We propose an $\alpha$-Bridge to unify the advantages of ML and adversarial learning, enabling the smooth transfer from one to the other via the $\alpha$-divergence. We reveal that generalizations of the $\alpha$-Bridge are closely related to approaches developed recently to regularize adversarial learning, providing insights into that prior work, and further understanding of why the $\alpha$-Bridge performs well in practice.
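As a point of reference, one standard parameterization of the $\alpha$-divergence (the paper's exact convention may differ) is:

```latex
D_{\alpha}(p \,\|\, q) \;=\; \frac{1}{\alpha(1-\alpha)}
\left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right)
```

In the limit $\alpha \to 1$ this recovers $\mathrm{KL}(p \,\|\, q)$, the ML objective when $p$ is the data distribution, while $\alpha \to 0$ recovers the reverse $\mathrm{KL}(q \,\|\, p)$, whose mode-seeking behavior is closer in spirit to adversarial learning; sweeping $\alpha$ thus interpolates smoothly between the two regimes.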