Collaborating Authors

 Deselaers, Thomas


CoSE: Compositional Stroke Embeddings

arXiv.org Machine Learning

We present a generative model for stroke-based drawing tasks which is able to model complex free-form structures. While previous approaches rely on sequence-based models for drawings of basic objects or handwritten text, we propose a model that treats drawings as a collection of strokes that can be composed into complex structures such as diagrams (e.g., flow-charts). At the core of the approach lies a novel auto-encoder that projects variable-length strokes into a latent space of fixed dimension. This representation space allows a relational model, operating in latent space, to better capture the relationship between strokes and to predict subsequent strokes. We demonstrate qualitatively and quantitatively that our proposed approach is able to model the appearance of individual strokes, as well as the compositional structure of larger diagram drawings. Our approach is suitable for interactive use cases such as auto-completing diagrams.
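
To make the pipeline described in this abstract concrete, the following toy sketch traces the same data flow in Python: variable-length strokes are encoded into fixed-size latent vectors, a stand-in relational model predicts the embedding of the next stroke from the embeddings of the strokes drawn so far, and a decoder maps that embedding back to points. The encoder, relational model, and decoder bodies are illustrative placeholders, not the learned architecture from the paper.

    import numpy as np

    LATENT_DIM = 8  # assumed size of the fixed-dimensional stroke embedding

    def encode_stroke(points):
        """Map a variable-length stroke, an (L, 2) array of (x, y) points,
        to a fixed-size latent vector (a hand-rolled summary standing in
        for the paper's learned auto-encoder)."""
        feats = np.concatenate([points.mean(axis=0), points.std(axis=0),
                                points[0], points[-1]])
        return feats[:LATENT_DIM]

    def predict_next_embedding(embeddings):
        """Stand-in for the relational model: attend over the embeddings of
        the strokes drawn so far and produce the next stroke's embedding."""
        scores = embeddings @ embeddings[-1]             # similarity to the last stroke
        weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
        return weights @ embeddings                      # weighted combination

    def decode_stroke(embedding, num_points=20):
        """Stand-in decoder: expand a latent vector back into a stroke."""
        t = np.linspace(0.0, 1.0, num_points)[:, None]
        start, end = embedding[4:6], embedding[6:8]      # first/last-point features
        return start + t * (end - start)                 # straight-line placeholder

    # Usage: embed the strokes drawn so far, then predict and decode the next one.
    drawing = [np.random.rand(np.random.randint(5, 30), 2) for _ in range(3)]
    latents = np.stack([encode_stroke(s) for s in drawing])
    next_stroke = decode_stroke(predict_next_embedding(latents))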


IndyLSTMs: Independently Recurrent LSTMs

arXiv.org Machine Learning

We introduce Independently Recurrent Long Short-term Memory cells: IndyLSTMs. These differ from regular LSTM cells in that the recurrent weights are not modeled as a full matrix, but as a diagonal matrix, i.e. the output and state of each LSTM cell depend on the inputs and its own output/state, as opposed to the input and the outputs/states of all the cells in the layer. The number of parameters per IndyLSTM layer, and thus the number of FLOPs per evaluation, is linear in the number of nodes in the layer, as opposed to quadratic for regular LSTM layers, resulting in potentially both smaller and faster models. We evaluate their performance experimentally by training several models on the popular IAM-OnDB and CASIA online handwriting datasets, as well as on several of our in-house datasets. We show that IndyLSTMs, despite their smaller size, consistently outperform regular LSTMs both in terms of accuracy per parameter and in best accuracy overall. We attribute this improved performance to the IndyLSTMs being less prone to overfitting.
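
The change relative to a standard LSTM can be shown in a few lines. The sketch below, written in plain NumPy under the formulation summarized above, replaces the (N x N) recurrent weight matrix of each gate with a length-N vector, so each unit's gates see only that unit's own previous output; the variable names are illustrative.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def indylstm_step(x, h_prev, c_prev, Wx, u, b):
        """One time step for a layer of N IndyLSTM units.

        x      : (D,)    input vector
        h_prev : (N,)    previous outputs
        c_prev : (N,)    previous cell states
        Wx     : (4N, D) input weights (full, as in a regular LSTM)
        u      : (4, N)  recurrent weights, one vector per gate instead of an
                         (N, N) matrix per gate (the IndyLSTM change)
        b      : (4N,)   biases
        """
        z = Wx @ x + b                    # input contribution, shape (4N,)
        zi, zf, zo, zg = np.split(z, 4)
        i = sigmoid(zi + u[0] * h_prev)   # element-wise recurrence: each unit
        f = sigmoid(zf + u[1] * h_prev)   # sees only its own previous output
        o = sigmoid(zo + u[2] * h_prev)
        g = np.tanh(zg + u[3] * h_prev)
        c = f * c_prev + i * g            # cell update, same as a regular LSTM
        h = o * np.tanh(c)
        return h, c

Because the recurrence uses one vector per gate, the recurrent parameters per layer drop from 4*N*N to 4*N, which is the linear scaling in the number of nodes described above.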


Fast Multi-language LSTM-based Online Handwriting Recognition

arXiv.org Machine Learning

Given a user input in the form of an ink, i.e. a list of touch or pen strokes, output the textual interpretation of this input. A stroke is a sequence of points (x, y, t) with position (x, y) and timestamp t. Figure 1 illustrates example inputs to our online handwriting recognition system in different languages and scripts. The left column shows examples in English with different writing styles, with different types of content, and that may be written on one or multiple lines. The center column shows examples from five different alphabetic languages similar in structure to English: German, Russian, Vietnamese, Greek, and Georgian. The right column shows scripts that are significantly different from English: Chinese has a much larger set of more complex characters, and users often overlap characters with one another. Korean, while an alphabetic language, groups letters into syllable blocks, leading to a large effective alphabet. Hindi writing often contains a connecting 'Shirorekha' line, and characters can form larger structures (grapheme clusters) which influence the written shape of the components. Arabic is written right-to-left (with embedded left-to-right sequences used for numbers or English names), and characters change shape depending on their position within a word. Emoji are non-text Unicode symbols that we also recognize.

Online handwriting recognition has recently been gaining importance for multiple reasons: (a) An increasing number of people in emerging markets are obtaining access to computing devices, many exclusively using mobile devices with touchscreens. Many of these users have native languages and scripts that are not as easily typed as English, e.g.
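
For concreteness, here is a minimal sketch of the input representation defined above: an ink is a list of strokes, and a stroke is a sequence of (x, y, t) points with position (x, y) and timestamp t. The class names and the toy two-stroke example are illustrative, not taken from the paper.

    from dataclasses import dataclass
    from typing import List, Tuple

    Point = Tuple[float, float, float]  # (x, y, t): position (x, y) and timestamp t

    @dataclass
    class Stroke:
        points: List[Point]

    @dataclass
    class Ink:
        strokes: List[Stroke]

    # A toy two-stroke ink, e.g. the two pen-down segments of a lowercase "t".
    ink = Ink(strokes=[
        Stroke(points=[(0.2, 0.9, 0.00), (0.2, 0.1, 0.15)]),  # vertical bar
        Stroke(points=[(0.0, 0.6, 0.40), (0.4, 0.6, 0.55)]),  # cross bar
    ])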


Predicted Variables in Programming

arXiv.org Machine Learning

We present Predicted Variables (PVars), an approach to making machine learning (ML) a first-class citizen in programming languages. There is a growing divide in approaches to building systems: using human experts (e.g. programming) on the one hand, and using behavior learned from data (e.g. ML) on the other hand. PVars aim to make ML in programming as easy as 'if' statements and thereby hybridize ML with programming. We leverage the existing concept of variables and create a new type, a predicted variable. PVars are akin to native variables with one important distinction: PVars determine their value using ML when evaluated. We describe PVars and their interface, how they can be used in programming, and demonstrate the feasibility of our approach on three algorithmic problems: binary search, Quicksort, and caches. We show experimentally that PVars are able to improve over the commonly used heuristics and lead to better performance than the original algorithms. As opposed to previous work applying ML to algorithmic problems, PVars have the advantage that they can be used within the existing frameworks and do not require the existing domain knowledge to be replaced. PVars allow for a seamless integration of ML into existing systems and algorithms. Our PVars implementation currently relies on standard Reinforcement Learning (RL) methods. To learn faster, PVars use the heuristic function, which they are replacing, as an initial function. We show that PVars quickly pick up the behavior of the initial function and then improve performance beyond that without ever performing substantially worse, allowing for a safe deployment in critical applications.
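
The interface described in this abstract can be sketched as follows; this is a guess at the shape of the idea, not the paper's actual API. A predicted variable returns a value when evaluated, starts out by mimicking the heuristic it replaces, and accepts reward feedback so that an RL learner could later improve on it. The binary-search example mirrors the first problem mentioned above, with the split point supplied by the predicted variable instead of the fixed midpoint.

    import random

    class PredictedVariable:
        def __init__(self, initial_fn):
            self.initial_fn = initial_fn  # heuristic being replaced, used as the initial policy
            self.history = []             # (observation, value, reward) tuples

        def value(self, observation):
            # Placeholder policy: follow the heuristic, with small exploration
            # noise standing in for the RL method that would refine it.
            v = self.initial_fn(observation)
            v = min(1.0, max(0.0, v + random.uniform(-0.05, 0.05)))
            self.history.append((observation, v, None))
            return v

        def feedback(self, reward):
            if not self.history:
                return
            obs, v, _ = self.history[-1]
            self.history[-1] = (obs, v, reward)  # a learner would train on these tuples

    # Binary search with the split point supplied by a predicted variable
    # instead of the fixed midpoint heuristic.
    pivot = PredictedVariable(initial_fn=lambda obs: 0.5)

    def pvar_search(sorted_list, target):
        lo, hi, steps = 0, len(sorted_list) - 1, 0
        while lo <= hi:
            frac = pivot.value({"lo": lo, "hi": hi, "target": target})
            mid = lo + int(frac * (hi - lo))
            steps += 1
            if sorted_list[mid] == target:
                pivot.feedback(-steps)  # fewer comparisons means higher reward
                return mid
            if sorted_list[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        pivot.feedback(-steps)
        return -1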