
Handwriting Recognition


Digital Peter: Dataset, Competition and Handwriting Recognition Methods

arXiv.org Artificial Intelligence

This paper presents a new dataset of Peter the Great's manuscripts and describes a segmentation procedure that converts the initial document images into lines. The new dataset may be useful to researchers for training handwritten text recognition models and as a benchmark for comparing different models. It consists of 9 694 images and text files corresponding to lines in historical documents. The open machine learning competition Digital Peter was held on the basis of this dataset. The baseline solution for the competition, as well as more advanced handwritten text recognition methods, is described in the article. The full dataset and all code are publicly available.
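
As a rough illustration of how such a line-level dataset might be consumed, the sketch below pairs line images with their transcriptions. The directory layout and file naming (one .jpg per line image with a matching .txt transcription) are assumptions for the example, not the published structure of the dataset.

```python
# Minimal sketch (assumed layout): images/<id>.jpg with a matching texts/<id>.txt per line.
import os
from PIL import Image
from torch.utils.data import Dataset


class LineDataset(Dataset):
    """Pairs cropped text-line images with their ground-truth transcriptions."""

    def __init__(self, image_dir, text_dir, transform=None):
        self.image_dir = image_dir
        self.text_dir = text_dir
        self.transform = transform
        # One sample per image file; the matching .txt holds the transcription.
        self.ids = sorted(os.path.splitext(f)[0] for f in os.listdir(image_dir))

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        sample_id = self.ids[idx]
        image = Image.open(os.path.join(self.image_dir, sample_id + ".jpg")).convert("L")
        with open(os.path.join(self.text_dir, sample_id + ".txt"), encoding="utf-8") as f:
            text = f.read().strip()
        if self.transform is not None:
            image = self.transform(image)
        return image, text
```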


Fine-tuning Handwriting Recognition systems with Temporal Dropout

arXiv.org Artificial Intelligence

This paper introduces a novel method to fine-tune handwriting recognition systems based on Recurrent Neural Networks (RNN). Long Short-Term Memory (LSTM) networks are good at modeling long sequences, but they tend to overfit over time. To improve the system's ability to model sequences, we propose dropping information at random positions in the sequence. We call our approach Temporal Dropout (TD). We apply TD both at the image level and to internal network representations. We show that TD improves the results on two different datasets, and our method outperforms the previous state of the art on the Rodrigo dataset.
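
The core idea, dropping information at random positions along the time axis, can be sketched as a small module like the one below. This is an illustrative stand-in rather than the authors' implementation, and the tensor layout (batch, time, features) is an assumption.

```python
# Illustrative sketch of temporal dropout: zero out randomly chosen time steps.
# Tensor layout (batch, time, features) is an assumption for this example.
import torch
import torch.nn as nn


class TemporalDropout(nn.Module):
    def __init__(self, drop_prob=0.1):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x):  # x: (batch, time, features)
        if not self.training or self.drop_prob == 0.0:
            return x
        # One keep/drop decision per time step, shared across the feature dimension.
        keep = (torch.rand(x.size(0), x.size(1), 1, device=x.device) > self.drop_prob).float()
        return x * keep
```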


Motion-Based Handwriting Recognition

arXiv.org Artificial Intelligence

It is prevalent in today's world for people to write on a touch screen with a smart pen, as there is a strong need to digitize handwritten content to make review and indexing easier. However, despite the success of character recognition on digital devices [1, 2, 3], requiring a digitizer as the writing surface poses a possibly unnecessary restriction to overcome. Recently, there has been a great deal of research on various ways of leveraging inertial motion unit (IMU) data to predict users' gestures or activities [7, 8, 9, 10, 11], but few studies make use of IMU data to predict handwritten letters, due to the lack of a relevant dataset. Oh et al. analyzed the use of inertial sensor based data to ...


Amazon Textract adds handwriting recognition and support for new languages

#artificialintelligence

Amazon today announced small enhancements to Textract, its service that extracts printed text and other data from documents, as well as tables and forms, using machine learning. As of today, Textract supports handwriting in English documents, in addition to files typed in Spanish, Portuguese, French, German, and Italian. Amazon rightly notes that many documents, like medical intake forms or employment applications, contain a combination of handwritten and printed text. While rivals like Google have offered handwriting recognition as a service for some time, Amazon says customer requests spurred the launch of its own solution, which works with both free-form text and text embedded in tables and forms. Amazon Web Services (AWS) customers can use the Textract handwriting recognition feature in conjunction with Amazon's Augmented AI (A2I) for improved performance.
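
For readers who want to try the feature, a minimal call through the AWS SDK for Python (boto3) might look like the sketch below. The file name is a placeholder, and filtering words by TextType reflects the response schema for Textract word blocks as I understand it; treat the details as an assumption to verify against the official documentation.

```python
# Minimal sketch: send one document image to Amazon Textract and pull out the
# words it labels as handwriting. "form.png" is a placeholder file name.
import boto3

textract = boto3.client("textract")

with open("form.png", "rb") as f:
    response = textract.detect_document_text(Document={"Bytes": f.read()})

# WORD blocks carry a TextType of either PRINTED or HANDWRITING.
handwritten = [
    block["Text"]
    for block in response["Blocks"]
    if block["BlockType"] == "WORD" and block.get("TextType") == "HANDWRITING"
]
print(handwritten)
```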


Handwriting Recognition - Open Electronics

#artificialintelligence

In this project, I build a pen device that can be used to recognize handwritten numerals. As its input, it takes multidimensional accelerometer and gyroscope sensor data. Its output is a simple classification that tells us whether one of several classes of movement, in this case the digits 0 through 9, has recently occurred.
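
As a rough sketch of the kind of model such a device could feed, the snippet below classifies a fixed-length window of accelerometer and gyroscope readings into ten digit classes. The window length, six-channel input, and recurrent architecture are assumptions made for illustration, not the project's actual design.

```python
# Illustrative classifier for a window of IMU readings (accelerometer + gyroscope).
# Shapes are assumptions: 128 time steps x 6 channels in, 10 digit classes out.
import torch
import torch.nn as nn


class DigitFromMotion(nn.Module):
    def __init__(self, channels=6, hidden=64, num_classes=10):
        super().__init__()
        self.rnn = nn.GRU(channels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):  # x: (batch, time, channels)
        _, h = self.rnn(x)                   # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)  # concatenate both directions
        return self.head(h)                  # (batch, num_classes)


model = DigitFromMotion()
window = torch.randn(1, 128, 6)  # one dummy 128-step window of 6 IMU channels
print(model(window).shape)       # torch.Size([1, 10])
```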


Machine Learning Framework Algorithm to recognise handwriting

#artificialintelligence

Manually transcribing large amounts of handwritten data is an arduous process that's bound to be fraught with errors. Automated handwriting recognition can drastically cut down on the time required to transcribe large volumes of text, and can also serve as a framework for developing future applications of machine learning. Handwritten character recognition is an ongoing field of research encompassing artificial intelligence, computer vision, and pattern recognition. A handwriting recognition algorithm can acquire and detect characters from pictures or touch-screen devices and convert them into a machine-readable form. There are two basic types of handwriting recognition systems: online and offline.
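
The online/offline distinction is easiest to see in the input data itself: an online recognizer consumes a time-ordered stream of pen samples, while an offline recognizer consumes a static image. The sketch below only illustrates these two shapes with dummy data; the field layout is illustrative and not tied to any particular library.

```python
# Illustrative contrast between the two input representations.
import numpy as np

# Online: a time-ordered stream of pen samples (x, y, pen-down flag) captured
# while writing, e.g. from a touch screen or digitizer.
online_stroke = np.array([
    [0.10, 0.20, 1],   # pen touches down
    [0.12, 0.24, 1],
    [0.15, 0.30, 1],
    [0.15, 0.30, 0],   # pen lifts
])

# Offline: a static grayscale image of already-written text, e.g. a scanned page,
# with no timing or stroke-order information available.
offline_image = np.zeros((32, 128), dtype=np.uint8)  # height x width in pixels

print(online_stroke.shape, offline_image.shape)
```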


Unconstrained On-line Handwriting Recognition with Recurrent Neural Networks

Neural Information Processing Systems

On-line handwriting recognition is unusual among sequence labelling tasks in that the underlying generator of the observed data, i.e. the movement of the pen, is recorded directly. However, the raw data can be difficult to interpret because each letter is spread over many pen locations. As a consequence, sophisticated pre-processing is required to obtain inputs suitable for conventional sequence labelling algorithms, such as HMMs. In this paper we describe a system capable of directly transcribing raw on-line handwriting data. The system consists of a recurrent neural network trained for sequence labelling, combined with a probabilistic language model.
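A stripped-down version of the ingredients described, a recurrent network over raw pen samples trained with a sequence-labelling objective, might look like the following. This is a generic bidirectional-LSTM-plus-CTC sketch for illustration, not the paper's exact architecture, and the feature count and alphabet size are made-up values.

```python
# Generic sketch: a bidirectional LSTM over raw pen samples with a CTC objective.
# Feature count (3: x, y, pen-up) and alphabet size (80) are illustrative choices.
import torch
import torch.nn as nn

num_features, hidden, alphabet = 3, 128, 80  # index 0 is the CTC blank

rnn = nn.LSTM(num_features, hidden, batch_first=True, bidirectional=True)
classifier = nn.Linear(2 * hidden, alphabet)
ctc = nn.CTCLoss(blank=0)

pen = torch.randn(4, 200, num_features)        # batch of 4 strokes, 200 samples each
targets = torch.randint(1, alphabet, (4, 30))  # dummy character labels
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)

out, _ = rnn(pen)                             # (batch, time, 2 * hidden)
log_probs = classifier(out).log_softmax(-1)   # (batch, time, alphabet)
loss = ctc(log_probs.transpose(0, 1), targets, input_lengths, target_lengths)
loss.backward()
print(float(loss))
```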


Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks

Neural Information Processing Systems

Offline handwriting recognition---the transcription of images of handwritten text---is an interesting task, in that it combines computer vision with sequence learning. In most systems the two elements are handled separately, with sophisticated preprocessing techniques used to extract the image features and sequential models such as HMMs used to provide the transcriptions. By combining two recent innovations in neural networks---multidimensional recurrent neural networks and connectionist temporal classification---this paper introduces a globally trained offline handwriting recogniser that takes raw pixel data as input. Unlike competing systems, it does not require any alphabet specific preprocessing, and can therefore be used unchanged for any language. Evidence of its generality and power is provided by data from a recent international Arabic recognition competition, where it outperformed all entries (91.4% accuracy compared to 87.2% for the competition winner) despite the fact that neither author understands a word of Arabic.
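To give a feel for the "raw pixel data as input" part of such a system, the sketch below turns a text-line image into a per-column feature sequence suitable for a CTC objective. A small convolutional encoder stands in for the paper's multidimensional LSTM hierarchy, and the image size and alphabet are illustrative assumptions.

```python
# Sketch of turning a raw line image into a per-column feature sequence for CTC.
# A plain CNN replaces the multidimensional RNN here purely for brevity.
import torch
import torch.nn as nn

alphabet = 80  # index 0 reserved for the CTC blank

encoder = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)
classifier = nn.Linear(64 * 8, alphabet)   # 8 = image height 32 after two poolings

image = torch.rand(1, 1, 32, 256)          # one grayscale text-line image
features = encoder(image)                  # (1, 64, 8, 64)
columns = features.permute(0, 3, 1, 2).flatten(2)  # (1, 64 columns, 64 * 8)
log_probs = classifier(columns).log_softmax(-1)    # per-column character scores
print(log_probs.shape)                     # torch.Size([1, 64, 80])
```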


AiThority Interview with Sasha Apartsin, VP of AI at Harmon.ie

#artificialintelligence

I started developing AI algorithms for handwriting recognition at my part-time student job while doing my undergraduate degree in Computer Science. Since then, over the last 20 years or so, I have strived to combine my work in industry with academic research. I did my graduate degree in Computer Vision and completed my Ph.D. in Machine Learning while having quite an intensive career in industry in parallel with my studies. In industry, I've worked on all kinds of data and applications, including medical imaging, educational multimedia, mobile advertising, financial time series, video, text and speech processing for public safety, and other projects. When I began working with the product and business aspects of R&D, I felt that I needed to strengthen the relevant skills, so I went back to school and got an additional Master's degree in Technology Management.


Air-Writing Translater: A Novel Unsupervised Domain Adaptation Method for Inertia-Trajectory Translation of In-air Handwriting

arXiv.org Artificial Intelligence

As a new way of human-computer interaction, inertial sensor based in-air handwriting provides a natural and unconstrained way to express more complex and richer information in 3D space. However, most existing in-air handwriting work focuses on handwritten character recognition, which makes these works suffer from the poor readability of inertial signals and a lack of labeled samples. To address these two problems, we use an unsupervised domain adaptation method to reconstruct the trajectory of the inertial signal and to generate inertial samples from online handwritten trajectories. In this paper, we propose an Air-Writing Translater model to learn the bidirectional translation between the trajectory domain and the inertial domain in the absence of paired inertial and trajectory samples. Through semantic-level adversarial training and a latent classification loss, the proposed model learns to extract domain-invariant content between inertial signals and trajectories while preserving semantic consistency during translation across the two domains. We carefully design the architecture so that the proposed framework can accept inputs of arbitrary length and translate between different sampling rates. We also conduct experiments on two public datasets, 6DMG (an in-air handwriting dataset) and CT (a handwritten trajectory dataset); the results on both demonstrate that the proposed network succeeds in both Inertia-to-Trajectory and Trajectory-to-Inertia translation tasks.

In-air handwriting refers to a novel way of human-computer interaction (HCI) in which the user freely writes meaningful characters in 3D space that are then converted into user-to-computer commands. Compared with general motion gestures, in-air handwriting is more complicated and provides more abundant expressions. As modern MEMS (Micro-Electro-Mechanical System) inertial sensors become smaller and more energy efficient, they have been universally employed in portable and wearable devices such as smartphones and wristbands. Unlike optical devices, inertial sensors do not suffer from illumination interference and obstruction. Therefore, inertial sensor based in-air handwriting has widely attracted researchers' attention [1]-[4]. Most of the existing work focuses on in-air handwriting recognition (IAHR) [5]-[8], but IAHR research typically faces two problems. First, the inertial signal is abstract and lacks readability, because it is a series of temporal sequences representing motion shifts, as illustrated in Fig. 1(a).
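
To make the setup concrete, the sketch below shows the rough shape of such a cross-domain translation model: one encoder-decoder per direction plus a critic for adversarial training. It is a schematic illustration that substitutes a generic adversarial-plus-cycle-consistency objective for the paper's semantic-level adversarial training and latent classification loss, and the channel counts (6 inertial channels, 2 trajectory coordinates) are assumptions.

```python
# Schematic sketch of bidirectional inertia <-> trajectory translation with an
# adversarial critic. Channel counts are assumptions (6 IMU channels, 2D trajectory).
import torch
import torch.nn as nn


def seq_translator(in_ch, out_ch, hidden=64):
    # 1D convolutions keep the model agnostic to sequence length.
    return nn.Sequential(
        nn.Conv1d(in_ch, hidden, 5, padding=2), nn.ReLU(),
        nn.Conv1d(hidden, out_ch, 5, padding=2),
    )


inertia_to_traj = seq_translator(6, 2)   # translate inertial signal -> pen trajectory
traj_to_inertia = seq_translator(2, 6)   # translate pen trajectory -> inertial signal
critic = nn.Sequential(                  # scores whether a trajectory looks real
    nn.Conv1d(2, 64, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 1),
)

imu = torch.randn(8, 6, 300)        # unpaired inertial sequences
real_traj = torch.randn(8, 2, 300)  # unpaired trajectory sequences

fake_traj = inertia_to_traj(imu)
cycle_imu = traj_to_inertia(fake_traj)

adv_loss = nn.functional.binary_cross_entropy_with_logits(
    critic(fake_traj), torch.ones(8, 1))             # push translations to look real
cycle_loss = nn.functional.l1_loss(cycle_imu, imu)   # keep content consistent
print(float(adv_loss), float(cycle_loss))
```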