Amazon today announced enhancements to Textract, its machine-learning service that extracts printed text, tables, forms, and other data from documents. As of today, Textract supports handwriting in English documents, in addition to typed text in Spanish, Portuguese, French, German, and Italian. Amazon rightly notes that many documents, like medical intake forms or employment applications, contain a combination of handwritten and printed text. While rivals like Google and Microsoft have offered handwriting recognition as a service for some time, Amazon says customer requests spurred the launch of its own solution, which works with both free-form text and text embedded in tables and forms. Amazon Web Services (AWS) customers can use the Textract handwriting recognition feature in conjunction with Amazon's Augmented AI (A2I) for improved performance.
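Since documents can mix handwritten and printed text, Textract tags each detected WORD block with a `TextType` of either `PRINTED` or `HANDWRITING`. A minimal sketch of separating the two from a Textract-style response; the `response` dict here is a hand-made stand-in for what a real `detect_document_text` call would return, not actual API output:

```python
def split_by_text_type(response):
    """Group WORD blocks from a Textract response by TextType."""
    printed, handwritten = [], []
    for block in response.get("Blocks", []):
        if block.get("BlockType") != "WORD":
            continue  # skip PAGE, LINE, and other block types
        if block.get("TextType") == "HANDWRITING":
            handwritten.append(block["Text"])
        else:
            printed.append(block["Text"])
    return printed, handwritten

# Illustrative response fragment (hypothetical, abbreviated)
response = {
    "Blocks": [
        {"BlockType": "WORD", "Text": "Name:", "TextType": "PRINTED"},
        {"BlockType": "WORD", "Text": "Jane", "TextType": "HANDWRITING"},
        {"BlockType": "LINE", "Text": "Name: Jane"},
    ]
}

printed, handwritten = split_by_text_type(response)
print(printed)      # → ['Name:']
print(handwritten)  # → ['Jane']
```

In a real pipeline the `response` would come from a boto3 Textract client call, and the handwritten words could then be routed to an A2I human-review loop.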
I started developing AI algorithms for handwriting recognition at a part-time student job while completing my undergraduate degree in Computer Science. Since then, over the last 20 years or so, I have striven to combine industry work with academic research. I earned my graduate degree in Computer Vision and completed my Ph.D. in Machine Learning while maintaining quite an intensive industry career in parallel with my studies. In industry, I've worked on all kinds of data and applications, including medical imaging, educational multimedia, mobile advertising, financial time series, and video, text, and speech processing for public safety, among other projects. When I began working on the product and business aspects of R&D, I felt I needed to strengthen the relevant skills, so I went back to school and earned an additional Master's degree in Technology Management.
JOURNAL OF XXX CLASS FILES, VOL. 1, NO. 1, JUNE 2019

Air-Writing Translater: A Novel Unsupervised Domain Adaptation Method for Inertia-Trajectory Translation of In-air Handwriting

Songbin Xu, Yang Xue, Xin Zhang, Lianwen Jin

As a new way of human-computer interaction, inertial-sensor-based in-air handwriting provides a natural and unconstrained way to express more complex and richer information in 3D space. However, most existing in-air handwriting work focuses on handwritten character recognition, and so suffers from the poor readability of inertial signals and a lack of labeled samples. To address these two problems, we use an unsupervised domain adaptation method to reconstruct the trajectory of an inertial signal and to generate inertial samples from online handwritten trajectories. In this paper, we propose an Air-Writing Translater model that learns the bidirectional translation between the trajectory domain and the inertial domain in the absence of paired inertial and trajectory samples. Through semantic-level adversarial training and a latent classification loss, the proposed model learns to extract domain-invariant content shared by inertial signals and trajectories, while preserving semantic consistency during translation across the two domains. We carefully design the architecture so that the proposed framework can accept inputs of arbitrary length and translate between different sampling rates. We conduct experiments on two public datasets, 6DMG (an in-air handwriting dataset) and CT (a handwritten trajectory dataset); the results demonstrate that the proposed network succeeds in both the Inertia-to-Trajectory and the Trajectory-to-Inertia translation tasks.

INTRODUCTION

In-air handwriting refers to a novel mode of human-computer interaction (HCI) in which a user freely writes meaningful characters in 3D space, which are then converted into user-to-computer commands.
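The abstract describes learning a bidirectional mapping without paired samples. A common mechanism in unpaired translation (not necessarily the paper's exact loss, which combines semantic-level adversarial training with a latent classification loss) is cycle consistency: translating a sample to the other domain and back should recover the original. A toy numpy sketch, with linear maps standing in for the two translator networks; all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy translators: linear maps standing in for the inertia->trajectory
# and trajectory->inertia networks.
W = rng.standard_normal((3, 3))   # inertial -> trajectory
V = np.linalg.inv(W)              # trajectory -> inertial (a perfect inverse)

def to_trajectory(x):
    return x @ W

def to_inertial(t):
    return t @ V

def cycle_loss(x):
    """Mean squared error between x and its round-trip translation."""
    return float(np.mean((to_inertial(to_trajectory(x)) - x) ** 2))

x = rng.standard_normal((100, 3))  # fake inertial samples
print(cycle_loss(x))               # ~0 when the two maps invert each other
```

In an actual model the two translators would be learned networks and the loss would be minimized jointly with the adversarial terms, rather than the maps being exact inverses by construction.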
Compared with general motion gestures, in-air handwriting is more complex and supports richer expression. As modern MEMS (Micro-Electro-Mechanical System) inertial sensors become smaller and more energy-efficient, they have been widely adopted in portable and wearable devices such as smartphones and wristbands. Unlike optical devices, inertial sensors do not suffer from illumination interference or occlusion. Inertial-sensor-based in-air handwriting has therefore attracted wide attention from researchers. Most of the existing work focuses on in-air handwriting recognition (IAHR), but IAHR research faces two problems. First, the inertial signal is abstract and hard to read, because it is a temporal sequence representing motion shifts, as illustrated in Fig. 1(a).
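To see why raw inertial signals are hard to read, consider the classical alternative to a learned translator: recovering a trajectory by double-integrating acceleration. A minimal 1D numpy sketch under an idealized assumption of a noise-free signal; real sensor bias and noise accumulate under this scheme, which is part of why learned reconstruction is attractive:

```python
import numpy as np

def integrate_twice(acc, dt):
    """Recover 1D position from acceleration by rectangle-rule
    double integration: acceleration -> velocity -> position."""
    vel = np.cumsum(acc) * dt
    pos = np.cumsum(vel) * dt
    return pos

dt = 0.01
t = np.arange(0, 1, dt)
acc = np.full_like(t, 2.0)       # constant 2 m/s^2 over 1 s
pos = integrate_twice(acc, dt)

# Analytically x(T) = 0.5 * a * T^2 = 1.0 m at T = 1 s; the rectangle
# rule lands close to that (with a small discretization bias).
print(pos[-1])
```

With a noisy signal, any constant bias in `acc` grows quadratically in `pos`, so small sensor errors quickly swamp the true trajectory.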
Inking and navigating with a digital pen or stylus in Windows 10 will become easier with the Fall Creators Update, for those of you who use a tablet as, you know, a tablet. The improvements include two major elements: navigation, including using the pen or stylus to select and scroll text; and better interpretation of inked words as text, via a more accurate and responsive handwriting panel. Combined, it's a love letter of sorts to Surface and other tablet users who use the pen to input data. It's amazing how well Windows can interpret your chicken scratch into text that can be edited in Word and elsewhere. General Windows 10 users won't be able to take advantage of the new features until the Fall Creators Update launches on Oct. 17.
One of the most significant features of Windows 10's Anniversary Update was the addition of pen computing, known as Windows Ink, which we criticized at the time as falling short of the average consumer's needs. We don't know whether any new inking features will be announced at Microsoft's Windows event on Wednesday (or the event that follows on November 2). Recently, however, we took a deeper dive into its capabilities when we tried the new Math and Replay features within Windows 10's OneNote UWP app. Math translates and solves inked equations, while Replay records your series of ink strokes and can play them back. But the devil's in the details, and the challenges both features face show how Windows Ink is struggling with the realities of handwriting recognition and data wrangling.