Submodular Field Grammars: Representation, Inference, and Application to Image Parsing

Neural Information Processing Systems

Natural scenes contain many layers of part-subpart structure, and distributions over them are thus naturally represented by stochastic image grammars, with one production per decomposition of a part. Unfortunately, in contrast to language grammars, where the number of possible split points for a production $A \rightarrow BC$ is linear in the length of $A$, in an image there are an exponential number of ways to split a region into subregions. This makes parsing intractable and requires image grammars to be severely restricted in practice, for example by allowing only rectangular regions. In this paper, we address this problem by associating with each production a submodular Markov random field whose labels are the subparts and whose labeling segments the current object into these subparts. We call the result a submodular field grammar (SFG). Finding the MAP split of a region into subregions is now tractable, and by exploiting this we develop an efficient approximate algorithm for MAP parsing of images with SFGs. Empirically, we present promising improvements in accuracy when using SFGs for scene understanding, and show exponential improvements in inference time compared to traditional methods, while returning comparable minima.
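The key tractability claim above is that when each production's pairwise energy is submodular, the MAP split of a region into subparts reduces to a minimum s-t cut. The sketch below illustrates this for the simplest case: two subpart labels (B and C) with a Potts smoothness term, solved with a plain Edmonds-Karp max-flow. This is a minimal illustration of the general idea, not the paper's algorithm; the pixel layout, cost values, and function names are invented for the example.

```python
# Minimal sketch of submodular binary MAP labeling via min-cut.
# Assumes two labels (B, C) and a Potts pairwise term; all names and
# numbers are illustrative, not taken from the SFG paper.
from collections import defaultdict, deque

def max_flow(graph, s, t):
    """Edmonds-Karp max-flow; returns (flow value, last BFS tree).

    After termination, the keys of the returned BFS tree are exactly
    the nodes still reachable from s in the residual graph, i.e. the
    s-side of a minimum cut."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in graph[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:          # no augmenting path left
            return flow, parent
        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, graph[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            graph[u][v] -= bottleneck
            graph[v][u] += bottleneck  # residual edge
            v = u
        flow += bottleneck

def map_split(unary, edges, smoothness):
    """MAP split of a region into subparts B and C.

    unary[i] = (cost of labeling pixel i as B, cost of labeling it C);
    edges    = neighboring pixel pairs sharing a Potts penalty."""
    graph = defaultdict(lambda: defaultdict(int))
    S, T = "s", "t"
    for i, (cost_b, cost_c) in unary.items():
        graph[S][i] += cost_c   # cut s->i  <=>  pixel i takes label C
        graph[i][T] += cost_b   # cut i->t  <=>  pixel i takes label B
    for i, j in edges:          # submodular (Potts) pairwise term
        graph[i][j] += smoothness
        graph[j][i] += smoothness
    _, reachable = max_flow(graph, S, T)
    # Pixels on the source side of the min cut take label B.
    return {i: ("B" if i in reachable else "C") for i in unary}

if __name__ == "__main__":
    # Four pixels in a row; the left pair prefers B, the right pair C.
    unary = {0: (0, 10), 1: (0, 10), 2: (10, 0), 3: (10, 0)}
    edges = [(0, 1), (1, 2), (2, 3)]
    print(map_split(unary, edges, smoothness=2))
    # -> {0: 'B', 1: 'B', 2: 'C', 3: 'C'}
```

Because the cut cost equals the labeling energy, the min cut directly yields the optimal split; the same reduction underlies the region-splitting step that makes SFG parsing tractable, though the paper handles multi-label subparts and nests these subproblems inside a full parse.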




5f268dfb0fbef44de0f668a022707b86-AuthorFeedback.pdf

Neural Information Processing Systems

The reason that the method MSO in "Efficient multi-objective molecular optimization in a continuous latent space" achieved a higher penalized logP with unlimited property evaluations than ours (26.1 vs. 15.18) is due to different experimental settings. With a larger Lmax, the best penalized logP score can be significantly increased. We have started running the experiments on GuacaMol as suggested. We will fix these two figures in the final version. All generated molecules in the appendix have been double-checked by both RDKit and human experts.