CoSE: Compositional Stroke Embeddings

Neural Information Processing Systems

We present a generative model for stroke-based drawing tasks that can model complex free-form structures. While previous approaches rely on sequence-based models for drawings of basic objects or handwritten text, we propose a model that treats drawings as a collection of strokes that can be composed into complex structures such as diagrams (e.g., flow-charts). At the core of the approach lies a novel auto-encoder that projects variable-length strokes into a latent space of fixed dimension. This representation space allows a relational model, operating in latent space, to better capture the relationships between strokes and to predict subsequent strokes. We demonstrate qualitatively and quantitatively that our proposed approach models the appearance of individual strokes, as well as the compositional structure of larger diagram drawings. Our approach is suitable for interactive use cases such as auto-completing diagrams. We make code and models publicly available at https://eth-ait.github.io/cose.
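The abstract describes a two-stage pipeline: an auto-encoder maps each variable-length stroke to a fixed-dimension embedding, and a relational model operating on the set of embeddings predicts the next stroke. The following sketch illustrates only that data flow; the `encode_stroke` and `predict_next_embedding` functions below are hypothetical placeholders (summary statistics and mean pooling), not the paper's learned networks.

```python
# Illustrative sketch of the CoSE data flow, NOT the authors' implementation:
# variable-length strokes -> fixed-dimension embeddings -> prediction of the
# next stroke embedding from the set of existing ones.

def encode_stroke(stroke, dim=4):
    """Map a variable-length stroke (list of (x, y) points) to a
    fixed-dimension vector. Placeholder for the learned auto-encoder."""
    n = len(stroke)
    mean_x = sum(p[0] for p in stroke) / n
    mean_y = sum(p[1] for p in stroke) / n
    # Simple summary statistics stand in for a learned latent code.
    feats = [mean_x, mean_y,
             stroke[-1][0] - stroke[0][0],   # net horizontal displacement
             stroke[-1][1] - stroke[0][1]]   # net vertical displacement
    return feats[:dim]

def predict_next_embedding(embeddings):
    """Placeholder relational model: predict the next stroke embedding from
    the existing set (here a simple mean; the paper uses a learned model
    with attention over the set of stroke embeddings)."""
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / len(embeddings)
            for i in range(dim)]

strokes = [
    [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],               # 3-point stroke
    [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (0.0, 3.0)],   # 4-point stroke
]
embs = [encode_stroke(s) for s in strokes]
nxt = predict_next_embedding(embs)
# Embeddings have the same dimension regardless of stroke length,
# which is what lets a set-based model operate on them uniformly.
print(len(embs[0]), len(embs[1]), len(nxt))  # 4 4 4
```

The key property mirrored here is that strokes of different lengths land in the same fixed-dimension space, so the downstream model can treat a drawing as an unordered collection rather than one long point sequence.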


Review for NeurIPS paper: CoSE: Compositional Stroke Embeddings

Neural Information Processing Systems

Weaknesses: The rebuttal and discussion clarified my concerns about [1,2] (although I would strongly encourage that these works be cited for a more complete related-work section). However, I remain unconvinced by the novelty of the approach -- the fact that transformer-based models work better than simple VAE-based models is not surprising to the general NeurIPS audience. That said, I do agree that from the point of view of stroke-based generative models the work is novel and makes a good contribution to this specific field. The novelty with respect to [1] is not clear -- both methods use a transformer-based architecture to model long-range dependencies in strokes. The advantage of an autoregressive structure on top of transformers is also not clear, as transformers already contain self-attention layers to capture long-range dependencies.



CoSE: Compositional Stroke Embeddings

Aksan, Emre, Deselaers, Thomas, Tagliasacchi, Andrea, Hilliges, Otmar

arXiv.org Machine Learning
