Interpreting Images by Propagating Bayesian Beliefs
Neural Information Processing Systems
A central theme of computational vision research has been the realization that reliable estimation of local scene properties requires propagating measurements across the image. Many authors have therefore suggested solving vision problems using architectures of locally connected units updating their activity in parallel. Unfortunately, the convergence of traditional relaxation methods on such architectures has proven to be excruciatingly slow, and in general they do not guarantee that the stable point will be a global minimum. In this paper we show that an architecture in which Bayesian beliefs about image properties are propagated between neighboring units yields convergence times which are several orders of magnitude faster than traditional methods and avoids local minima. In particular, our architecture is non-iterative in the sense of Marr [5]: at every time step, the local estimates at a given location are optimal given the information which has already been propagated to that location. We illustrate the algorithm's performance on real images and compare it to several existing methods.

1 Theory

The essence of our approach is shown in figure 1. Figure 1a shows the prototypical ill-posed problem: interpolation of a function from sparse data.
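To make the interpolation example concrete, the sketch below runs Gaussian belief propagation on a 1-D chain MRF: sparse noisy samples provide local evidence, a pairwise smoothness potential links neighbors, and one forward plus one backward message sweep yields the exact posterior marginal at every node. This is a minimal illustration of the propagation idea under assumed Gaussian potentials, not the paper's exact algorithm; the function name, `lam_obs`, and `w` are choices of this sketch.

```python
import numpy as np

def gabp_chain_interpolate(y, observed, lam_obs=10.0, w=1.0):
    """Interpolate a 1-D signal from sparse samples via Gaussian belief
    propagation on a chain.

    y        : length-N array (only entries where observed=True are used)
    observed : boolean mask of observed positions
    lam_obs  : precision (inverse variance) of each observation
    w        : weight of the smoothness potential exp(-w/2 (x_i - x_{i+1})^2)
    """
    n = len(y)
    lam = np.where(observed, lam_obs, 0.0)   # local evidence precision
    mu = np.where(observed, y, 0.0)          # local evidence mean

    def sweep(order):
        # p_msg[i], m_msg[i]: precision and mean of the message arriving at i
        p_msg = np.zeros(n)
        m_msg = np.zeros(n)
        p, m = 0.0, 0.0
        for i in order:
            p_msg[i], m_msg[i] = p, m
            # combine the incoming message with local evidence at node i
            P = p + lam[i]
            M = (p * m + lam[i] * mu[i]) / P if P > 0 else 0.0
            # marginalize through the pairwise smoothness potential
            p = w * P / (w + P)
            m = M
        return p_msg, m_msg

    pf, mf = sweep(range(n))               # forward pass
    pb, mb = sweep(range(n - 1, -1, -1))   # backward pass

    # posterior marginal mean: product of both messages and local evidence
    P = pf + pb + lam
    return (pf * mf + pb * mb + lam * mu) / P
```

After the two sweeps, each node's estimate is optimal given everything propagated to it, echoing the non-iterative flavor described above; on a chain this takes one pass in each direction rather than many relaxation iterations.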
Dec-31-1997