Neural Information Processing Systems

Inference networks of traditional Variational Autoencoders (VAEs) are typically amortized, resulting in relatively inaccurate posterior approximation compared to instance-wise variational optimization. Recent semi-amortized approaches were proposed to address this drawback; however, their iterative gradient update procedures can be computationally demanding.
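The trade-off described above (fast but biased amortized inference vs. slower instance-wise refinement) can be sketched on a toy conjugate model. Everything below is an illustrative assumption, not the paper's method: prior z ~ N(0, 1), likelihood x ~ N(z, 1), a Gaussian q(z) = N(mu, s^2) with fixed variance, and a deliberately biased "encoder" mu = 0.4x. For this model dELBO/dmu = x - 2*mu, so the optimum is mu* = x/2, and a few gradient steps recover it:

```python
def refine_mu(x, mu_init, lr=0.1, steps=50):
    """Instance-wise (semi-amortized style) refinement of an amortized estimate.
    Toy conjugate model: z ~ N(0,1), x ~ N(z,1), q(z) = N(mu, s^2), s fixed.
    The ELBO gradient w.r.t. mu is (x - mu) - mu = x - 2*mu, optimum mu* = x/2."""
    mu = mu_init
    for _ in range(steps):
        mu = mu + lr * (x - 2.0 * mu)  # gradient ascent on the ELBO
    return mu

x = 3.0
mu_amortized = 0.4 * x               # hypothetical biased encoder output: 1.2
mu_refined = refine_mu(x, mu_amortized)   # converges toward x/2 = 1.5
```

Each refinement step costs an extra gradient evaluation per data point, which is exactly the overhead the abstract flags as "computationally demanding" for semi-amortized methods.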





A Dirichlet Distribution Computations

A.1 Dirichlet distribution

The Dirichlet distribution with concentration parameters α = (α_1, ..., α_K), α_k > 0, has density Dir(θ; α) = (Γ(Σ_k α_k) / Π_k Γ(α_k)) Π_k θ_k^(α_k − 1) on the probability simplex Σ_k θ_k = 1.
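The normalization constant above involves only gamma functions, so the log-density can be computed with the standard library alone. A minimal sketch (the helper name `dirichlet_logpdf` is hypothetical):

```python
import math

def dirichlet_logpdf(theta, alpha):
    """log Dir(theta; alpha) = log Gamma(sum_k a_k) - sum_k log Gamma(a_k)
    + sum_k (a_k - 1) log theta_k, for theta on the probability simplex."""
    assert abs(sum(theta) - 1.0) < 1e-9, "theta must lie on the simplex"
    log_norm = math.lgamma(sum(alpha)) - sum(math.lgamma(a) for a in alpha)
    return log_norm + sum((a - 1.0) * math.log(t) for a, t in zip(alpha, theta))
```

For the uniform case alpha = (1, 1, 1), the density is constant and equal to Γ(3) = 2, so the log-density is log 2 for any point on the simplex, which is a quick sanity check of the normalizer.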

Neural Information Processing Systems

The novel Bayesian loss described in formula 7 can be computed in closed form. For vector datasets, all models share an architecture of 3 linear layers with ReLU activation. For PostNet, we used 1D batch normalization after the encoder. All metrics have been scaled by 100, so we obtain numbers in [0, 100] for all scores instead of [0, 1].
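A standard identity that makes such Dirichlet-based losses tractable in closed form is E_{p~Dir(α)}[−log p_c] = ψ(α_0) − ψ(α_c), where ψ is the digamma function and α_0 = Σ_k α_k; whether this is exactly the paper's formula 7 is not confirmed here, so the sketch below is an assumption for illustration (the `digamma` approximation and `expected_ce` helper are ours):

```python
import math

def digamma(x):
    """Digamma psi(x) via the recurrence psi(x) = psi(x+1) - 1/x
    plus an asymptotic expansion; accurate to ~1e-9 for x > 0."""
    result = 0.0
    while x < 6.0:
        result -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return result + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def expected_ce(alpha, label):
    """Closed-form expected cross-entropy of class `label` under Dir(alpha):
    E[-log p_label] = psi(alpha_0) - psi(alpha_label)."""
    return digamma(sum(alpha)) - digamma(alpha[label])
```

For alpha = (1, 1) and label 0, p_0 is uniform on (0, 1) and E[−log p_0] = 1, matching ψ(2) − ψ(1) = 1 exactly.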



We found many of the comments truly helpful for improving the quality of the paper, and some of them actually enlightened us, correcting some of our initial claims that turned out to be wrong.

Neural Information Processing Systems

We are very grateful to all reviewers for their detailed, insightful, and constructive comments and questions. Our responses (blue) to the reviewers' comments/questions (black/bold/italic) are as follows. We believe these points are very important, and we will pursue them in our ongoing study. We will refine our claims and also refer to these semi-amortized VI (SAVI) methods; it turns out that it was our faulty claim. The column "FC" is excerpted from


Flow Stochastic Segmentation Networks

Ribeiro, Fabio De Sousa, Todd, Omar, Jones, Charles, Kori, Avinash, Mehta, Raghav, Glocker, Ben

arXiv.org Machine Learning

We introduce the Flow Stochastic Segmentation Network (Flow-SSN), a generative segmentation model family featuring discrete-time autoregressive and modern continuous-time flow variants. We prove fundamental limitations of the low-rank parameterisation of previous methods and show that Flow-SSNs can estimate arbitrarily high-rank pixel-wise covariances without assuming the rank or storing the distributional parameters. Flow-SSNs are also more efficient to sample from than standard diffusion-based segmentation models, thanks to most of the model capacity being allocated to learning the base distribution of the flow, constituting an expressive prior. We apply Flow-SSNs to challenging medical imaging benchmarks and achieve state-of-the-art results. Code available: https://github.com/biomedia-mira/flow-ssn.
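Sampling from a continuous-time flow of the kind the abstract mentions amounts to drawing a base sample and numerically integrating an ODE dx/dt = v(x, t) from t = 0 to t = 1. The sketch below is a generic Euler-integration illustration, not the Flow-SSN architecture; the velocity field, step count, and helper names are all assumptions:

```python
import random

def sample_flow(velocity, x0, steps=100):
    """Euler-integrate dx/dt = velocity(x, t) over t in [0, 1],
    starting from a base-distribution sample x0."""
    dt = 1.0 / steps
    x, t = x0, 0.0
    for _ in range(steps):
        x += velocity(x, t) * dt
        t += dt
    return x

# Toy velocity field (assumption): a constant drift of 3 transports the
# base distribution N(0, 1) to N(3, 1) over unit time.
drift = lambda x, t: 3.0
sample = sample_flow(drift, random.gauss(0.0, 1.0))
```

The cost of a sample scales with the number of integration steps, which is the lever behind the abstract's efficiency claim: a more expressive learned base distribution lets the flow itself stay cheap to integrate.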


Relevance for Stability of Verification Status of a Set of Arguments in Incomplete Argumentation Frameworks (with Proofs)

Xiong, Anshu, Zhang, Songmao

arXiv.org Artificial Intelligence

The notion of relevance was proposed by Odekerken et al. in 2024 for the stability of the justification status of a single argument in incomplete argumentation frameworks (IAFs). To extend the notion, in this paper we study relevance for the stability of the verification status of a set of arguments, i.e., the uncertainties in an IAF that have to be resolved in some situations so that the question of whether a given set of arguments is an extension receives the same answer in every completion of the IAF. Further, we propose the notion of strong relevance to describe the uncertainties whose resolution is necessary in all situations reaching stability. A complexity analysis reveals that detecting (strong) relevance for the stability of sets of arguments can be accomplished in polynomial time under most of the semantics discussed in the paper. We also discuss the difficulty of finding tractable methods for relevance detection under grounded semantics.
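The stability question above can be made concrete with a brute-force sketch: check whether a set S is a stable extension (conflict-free, and attacking every outside argument) in every completion obtained by including or excluding each uncertain attack. This naive enumeration is exponential in the number of uncertain elements, which is exactly what motivates the polynomial-time detection results; all function names below are illustrative assumptions, and only uncertain attacks (not uncertain arguments) are modelled:

```python
from itertools import product

def is_stable(args, attacks, S):
    """S is a stable extension iff S is conflict-free and every
    argument outside S is attacked by some member of S."""
    if any((a, b) in attacks for a in S for b in S):
        return False  # conflict inside S
    return all(any((a, b) in attacks for a in S) for b in args - S)

def verification_is_stable(args, certain, uncertain, S):
    """Enumerate all completions of an IAF with uncertain attacks: the
    verification status of S is stable iff every completion agrees."""
    results = set()
    for choice in product([False, True], repeat=len(uncertain)):
        attacks = certain | {u for u, keep in zip(uncertain, choice) if keep}
        results.add(is_stable(args, attacks, S))
    return len(results) == 1

# Example: with the attack (a, b) merely uncertain, {'a'} is a stable
# extension in one completion but not the other, so its status is unstable.
unstable = verification_is_stable({'a', 'b'}, set(), [('a', 'b')], {'a'})
```

Resolving the uncertain attack (a, b) fixes the answer in every remaining completion, so that attack is relevant for stability in the sense studied in the paper.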