Handwriting Recognition


Machine Learning Framework Algorithm to recognise handwriting

#artificialintelligence

Manually transcribing large amounts of handwritten data is an arduous process that is bound to be fraught with errors. Automated handwriting recognition can drastically cut down the time required to transcribe large volumes of text and can also serve as a framework for developing future machine learning applications. Handwritten character recognition is an ongoing field of research encompassing artificial intelligence, computer vision, and pattern recognition. A handwriting recognition algorithm can acquire and detect characters from sources such as paper documents, photographs, and touch-screen devices, and convert them to a machine-readable form. There are two basic types of handwriting recognition systems: online and offline.


Unconstrained On-line Handwriting Recognition with Recurrent Neural Networks

Neural Information Processing Systems

On-line handwriting recognition is unusual among sequence labelling tasks in that the underlying generator of the observed data, i.e. the movement of the pen, is recorded directly. However, the raw data can be difficult to interpret because each letter is spread over many pen locations. As a consequence, sophisticated pre-processing is required to obtain inputs suitable for conventional sequence labelling algorithms, such as HMMs. In this paper we describe a system capable of directly transcribing raw on-line handwriting data. The system consists of a recurrent neural network trained for sequence labelling, combined with a probabilistic language model.
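For readers who want a concrete starting point, the following is a minimal sketch of the general recipe the abstract describes, a recurrent network trained with CTC on raw pen-point sequences. It is not the authors' system; the feature layout, network sizes, and alphabet size (80 characters) are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact system): a bidirectional LSTM that maps
# raw pen-point features (x, y, pen-up flag) to character probabilities, trained
# with CTC so no per-point alignment between strokes and letters is required.
import torch
import torch.nn as nn

class OnlineHandwritingRNN(nn.Module):
    def __init__(self, num_chars, input_dim=3, hidden_dim=128):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, num_layers=2,
                           bidirectional=True, batch_first=True)
        # +1 output class for the CTC "blank" symbol
        self.proj = nn.Linear(2 * hidden_dim, num_chars + 1)

    def forward(self, points):            # points: (batch, time, input_dim)
        features, _ = self.rnn(points)
        return self.proj(features).log_softmax(dim=-1)

model = OnlineHandwritingRNN(num_chars=80)
ctc_loss = nn.CTCLoss(blank=80)

# Dummy batch: 4 pen trajectories of 200 points, target strings of length 12.
points = torch.randn(4, 200, 3)
targets = torch.randint(0, 80, (4, 12))
log_probs = model(points).transpose(0, 1)          # CTC expects (time, batch, classes)
loss = ctc_loss(log_probs, targets,
                input_lengths=torch.full((4,), 200),
                target_lengths=torch.full((4,), 12))
loss.backward()
```

CTC is what removes the need for a per-point alignment between pen positions and letters; the probabilistic language model mentioned in the abstract would be applied on top of the per-frame character probabilities during decoding.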


Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks

Neural Information Processing Systems

Offline handwriting recognition, the transcription of images of handwritten text, is an interesting task in that it combines computer vision with sequence learning. In most systems the two elements are handled separately, with sophisticated preprocessing techniques used to extract the image features and sequential models such as HMMs used to provide the transcriptions. By combining two recent innovations in neural networks, multidimensional recurrent neural networks and connectionist temporal classification, this paper introduces a globally trained offline handwriting recogniser that takes raw pixel data as input. Unlike competing systems, it does not require any alphabet-specific preprocessing and can therefore be used unchanged for any language. Evidence of its generality and power is provided by data from a recent international Arabic recognition competition, where it outperformed all entries (91.4% accuracy compared to 87.2% for the competition winner) despite the fact that neither author understands a word of Arabic.
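Multidimensional LSTMs are not part of standard deep learning libraries, so the sketch below substitutes a common CNN plus bidirectional LSTM stack while keeping the two properties the abstract emphasises: raw pixel input and CTC-style training with nothing alphabet-specific beyond the alphabet size. Layer sizes and the fixed line height are illustrative assumptions.

```python
# Sketch only: a CNN + BLSTM line recognizer that reads raw pixels of a text-line
# image and emits per-column character probabilities suitable for CTC training.
import torch
import torch.nn as nn

class OfflineLineRecognizer(nn.Module):
    def __init__(self, num_chars, height=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_dim = 64 * (height // 4)            # channels * remaining image height
        self.rnn = nn.LSTM(feat_dim, 128, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(256, num_chars + 1)   # +1 for the CTC blank

    def forward(self, images):                   # images: (batch, 1, height, width)
        f = self.conv(images)                    # (batch, 64, height/4, width/4)
        f = f.permute(0, 3, 1, 2).flatten(2)     # one feature vector per image column
        f, _ = self.rnn(f)
        return self.proj(f).log_softmax(-1)      # (batch, width/4, num_chars + 1)

logits = OfflineLineRecognizer(num_chars=80)(torch.randn(2, 1, 32, 256))
print(logits.shape)                              # torch.Size([2, 64, 81])
```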


AiThority Interview with Sasha Apartsin, VP of AI at Harmon.ie

#artificialintelligence

I started developing AI algorithms for handwriting recognition at my part-time student job while doing my undergraduate degree in Computer Science. Since then, over the last 20 years or so, I have strived to combine my work in industry with academic research. I did my graduate degree in Computer Vision and completed my Ph.D. in Machine Learning while having quite an intensive industry career in parallel with my studies. In industry, I've worked on all kinds of data and applications, including medical imaging, educational multimedia, mobile advertising, financial time series, and video, text, and speech processing for public safety, among other projects. When I began working with the product and business aspects of R&D, I felt that I needed to strengthen the relevant skills, so I went back to school and got an additional Master's degree in Technology Management.


Air-Writing Translater: A Novel Unsupervised Domain Adaptation Method for Inertia-Trajectory Translation of In-air Handwriting

arXiv.org Artificial Intelligence

As a new way of human-computer interaction, inertial-sensor-based in-air handwriting can provide a natural and unconstrained interaction to express more complex and richer information in 3D space. However, most of the existing in-air handwriting work focuses mainly on handwritten character recognition, so it suffers from the poor readability of inertial signals and a lack of labeled samples. To address these two problems, we use an unsupervised domain adaptation method to reconstruct the trajectory of the inertial signal and to generate inertial samples from online handwritten trajectories. In this paper, we propose an Air-Writing Translater model to learn the bidirectional translation between the trajectory domain and the inertial domain in the absence of paired inertial and trajectory samples. Through semantic-level adversarial training and a latent classification loss, the proposed model learns to extract domain-invariant content between inertial signal and trajectory while preserving semantic consistency during translation across the two domains. We carefully design the architecture so that the proposed framework can accept inputs of arbitrary length and translate between different sampling rates. We also conduct experiments on two public datasets, 6DMG (an in-air handwriting dataset) and CT (a handwritten trajectory dataset); the results on the two datasets demonstrate that the proposed network succeeds in both Inertia-to-Trajectory and Trajectory-to-Inertia translation tasks.

In-air handwriting refers to a novel way of human-computer interaction (HCI) in which meaningful characters are written freely in 3D space and then converted into user-to-computer commands. Compared with general motion gestures, in-air handwriting is more complicated and provides more abundant expressions. As modern MEMS (Micro-Electro-Mechanical System) inertial sensors become smaller and more energy efficient, they have been universally employed in portable and wearable devices such as smartphones and wristbands. Unlike optical devices, inertial sensors do not suffer from illumination interference and obstruction. Therefore, inertial-sensor-based in-air handwriting has widely attracted researchers' attention [1]-[4]. Most of the existing work focuses mainly on in-air handwriting recognition (IAHR) [5]-[8], but IAHR research usually faces two problems. First, the inertial signal is abstract and lacks readability, because it is a series of temporal sequences representing motion shifting, as illustrated in Fig. 1(a).
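The sketch below illustrates the general shared-latent adversarial setup the abstract describes (two domain encoders, a domain discriminator trained adversarially, and decoders for reconstruction); it is not the authors' architecture, and the channel counts, latent size, and loss terms are illustrative assumptions.

```python
# Illustrative sketch of shared-latent adversarial translation between the inertial
# and trajectory domains: encoders map both domains into a common latent space, a
# discriminator is trained to tell the domains apart, and the encoders are trained
# to fool it while decoders reconstruct each domain from the shared code.
import torch
import torch.nn as nn

def seq_encoder(in_dim, latent_dim=64):
    return nn.GRU(in_dim, latent_dim, batch_first=True)

def seq_decoder(out_dim, latent_dim=64):
    return nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

enc_inertial = seq_encoder(in_dim=6)     # accelerometer + gyroscope channels (assumed)
enc_traject  = seq_encoder(in_dim=2)     # (x, y) pen trajectory
dec_inertial = seq_decoder(out_dim=6)
dec_traject  = seq_decoder(out_dim=2)
discriminator = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def latent(encoder, x):
    h, _ = encoder(x)                    # (batch, time, latent_dim)
    return h

inertial = torch.randn(8, 150, 6)        # unpaired batches from the two domains
traject  = torch.randn(8, 120, 2)
z_i, z_t = latent(enc_inertial, inertial), latent(enc_traject, traject)

# Discriminator learns to separate the domains from the latent code...
d_loss = bce(discriminator(z_i.detach()).mean(1), torch.ones(8, 1)) + \
         bce(discriminator(z_t.detach()).mean(1), torch.zeros(8, 1))
# ...while the encoders are trained to fool it (plus reconstruction terms).
fool_loss = bce(discriminator(z_t).mean(1), torch.ones(8, 1))
recon_loss = nn.functional.mse_loss(dec_inertial(z_i), inertial) + \
             nn.functional.mse_loss(dec_traject(z_t), traject)
```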



Fast Multi-language LSTM-based Online Handwriting Recognition

arXiv.org Machine Learning

Given a user input in the form of an ink, i.e. a list of touch or pen strokes, output the textual interpretation of this input. A stroke is a sequence of points (x, y, t) with position (x, y) and timestamp t. Figure 1 illustrates example inputs to our online handwriting recognition system in different languages and scripts. The left column shows examples in English with different writing styles, with different types of content, and that may be written on one or multiple lines. The center column shows examples from five different alphabetic languages similar in structure to English: German, Russian, Vietnamese, Greek, and Georgian. The right column shows scripts that are significantly different from English: Chinese has a much larger set of more complex characters, and users often overlap characters with one another.

Hindi writing often contains a connecting 'Shirorekha' line, and characters can form larger structures (grapheme clusters) which influence the written shape of the components. Arabic is written right-to-left (with embedded left-to-right sequences used for numbers or English names) and characters change shape depending on their position within a word. Emoji are non-text Unicode symbols that we also recognize. Online handwriting recognition has recently been gaining importance for multiple reasons: (a) an increasing number of people in emerging markets are obtaining access to computing devices, many exclusively using mobile devices with touchscreens, and many of these users have native languages and scripts that are not as easily typed as English.
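The input representation described above can be made concrete in a few lines of code; the sketch below flattens an ink (a list of strokes of (x, y, t) points) into a delta-plus-pen-up feature sequence, a common encoding for LSTM recognizers. The exact feature format used in the paper may differ; the names here are illustrative.

```python
# Sketch of the ink representation: an "ink" is a list of strokes, each stroke a
# sequence of (x, y, t) points; it is flattened into (dx, dy, pen_up) steps.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Stroke:
    points: List[Tuple[float, float, float]]   # (x, y, t)

def ink_to_features(ink: List[Stroke]) -> List[Tuple[float, float, float]]:
    """Concatenate strokes into (dx, dy, pen_up) steps."""
    features, prev = [], None
    for stroke in ink:
        for i, (x, y, t) in enumerate(stroke.points):
            dx, dy = (0.0, 0.0) if prev is None else (x - prev[0], y - prev[1])
            pen_up = 1.0 if i == len(stroke.points) - 1 else 0.0  # stroke ends here
            features.append((dx, dy, pen_up))
            prev = (x, y)
    return features

ink = [Stroke([(0, 0, 0.00), (1, 2, 0.02), (2, 3, 0.04)]),
       Stroke([(4, 0, 0.10), (5, 1, 0.12)])]
print(ink_to_features(ink))
```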


Handwriting Recognition of Historical Documents with few labeled data

arXiv.org Machine Learning

Historical documents present many challenges for offline handwriting recognition systems, among them the segmentation and labeling steps. Carefully annotated text lines are needed to train an HTR system. In some scenarios, transcripts are only available at the paragraph level, with no text-line information. In this work, we demonstrate how to train an HTR system with few labeled data. Specifically, we train a deep convolutional recurrent neural network (CRNN) system on only 10% of the manually labeled text-line data from a dataset and propose an incremental training procedure that covers the rest of the data. Performance is further increased by augmenting the training set with specially crafted multiscale data. We also propose a model-based normalization scheme which considers the variability in the writing scale at the recognition phase. We apply this approach to the publicly available READ dataset. Our system achieved the second-best result during the ICDAR2017 competition.
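The following sketch shows the generic shape of such an incremental training loop, not the authors' exact procedure: train on the small manually labeled set, transcribe the unlabeled lines with the current model, keep confident transcriptions as pseudo-labels, and retrain. The `train` and `transcribe` callables and the confidence threshold are placeholders to be supplied by a real CRNN/HTR system.

```python
# Generic incremental (self-training-style) loop over unlabeled text lines.
def incremental_training(labeled, unlabeled, train, transcribe,
                         rounds=3, confidence_threshold=0.9):
    model = train(labeled)
    for _ in range(rounds):
        newly_labeled, still_unlabeled = [], []
        for line_image in unlabeled:
            text, confidence = transcribe(model, line_image)
            if confidence >= confidence_threshold:
                newly_labeled.append((line_image, text))   # keep as pseudo-label
            else:
                still_unlabeled.append(line_image)
        labeled = labeled + newly_labeled
        unlabeled = still_unlabeled
        model = train(labeled)                             # retrain on enlarged set
    return model
```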


AI still fails on robust handwritten digit recognition (and how to fix it)

#artificialintelligence

We learn such a generative model for each digit. Then, when a new input comes along, we check which digit model can best approximate the new input. This procedure is typically called analysis-by-synthesis, because we analyse the content of the image according to the model that can best synthesise it. That's really the key difference: feedforward networks have no way to check their predictions, so you have to trust them. Our analysis-by-synthesis model, on the other hand, checks whether certain image features are really present in the input before jumping to a conclusion.
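A toy version of this analysis-by-synthesis loop makes the contrast with a feedforward classifier concrete. The per-digit generative models below are stand-in autoencoders (assumed to be trained one per digit), not the models used in the article.

```python
# Toy analysis-by-synthesis classifier: one generative model per digit, and a new
# image is assigned to the digit whose model reconstructs ("synthesises") it best.
import torch
import torch.nn as nn

def make_digit_model():
    return nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784))

# Assume each of these has been trained only on images of its own digit.
digit_models = {d: make_digit_model() for d in range(10)}

def classify(image):                        # image: flattened 28x28 tensor, shape (784,)
    errors = {}
    for digit, model in digit_models.items():
        reconstruction = model(image)
        errors[digit] = torch.mean((reconstruction - image) ** 2).item()
    return min(errors, key=errors.get)      # digit whose model explains the input best

print(classify(torch.rand(784)))
```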


A categorisation and implementation of digital pen features for behaviour characterisation

arXiv.org Artificial Intelligence

The research described in this paper is motivated by the development of applications for the behaviour analysis of handwriting and sketch input. Our goal is to provide other researchers with a reproducible, categorised set of features that can be used for behaviour characterisation in different scenarios. We use the term feature to describe properties of strokes and gestures which can be calculated based on the raw sensor input from capture devices, such as digital pens or tablets. In this paper, a large number of features known from the literature are presented and categorised into different subsets.
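As a small illustration of what such features can look like in code, the sketch below computes a handful of simple stroke properties from raw (x, y, t) pen samples; the specific features chosen here are illustrative and not the paper's categorised set.

```python
# Simple stroke features computed from raw digital-pen samples (x, y, t).
from math import hypot
from typing import List, Tuple

Point = Tuple[float, float, float]          # (x, y, t)

def stroke_features(points: List[Point]) -> dict:
    xs, ys, ts = zip(*points)
    path_length = sum(hypot(x2 - x1, y2 - y1)
                      for (x1, y1, _), (x2, y2, _) in zip(points, points[1:]))
    duration = ts[-1] - ts[0]
    return {
        "path_length": path_length,
        "duration": duration,
        "average_speed": path_length / duration if duration > 0 else 0.0,
        "bounding_box": (min(xs), min(ys), max(xs), max(ys)),
    }

print(stroke_features([(0, 0, 0.00), (3, 4, 0.05), (6, 8, 0.10)]))
```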