Text Recognition


CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition

arXiv.org Artificial Intelligence

The attention-based encoder-decoder framework is becoming popular in scene text recognition, largely due to its superiority in integrating recognition clues from both the visual and semantic domains. However, recent studies show that the two clues may be misaligned on difficult text (e.g., text with rare shapes), and they introduce constraints such as character position to alleviate the problem. Despite some success, a content-free positional embedding can hardly associate stably with meaningful local image regions. In this paper, we propose a novel module called Multi-Domain Character Distance Perception (MDCDP) to establish a position encoding related to both the visual and semantic domains. MDCDP uses the positional embedding to query both visual and semantic features following the attention mechanism. It naturally encodes a positional clue that describes both the visual and semantic distances among characters. We develop a novel architecture named CDistNet that stacks MDCDP several times to guide precise distance modeling. Thus, visual-semantic alignment is well established even when various difficulties are present. We apply CDistNet to two augmented datasets and six public benchmarks. The experiments demonstrate that CDistNet achieves state-of-the-art recognition accuracy, and visualization shows that it attains proper attention localization in both the visual and semantic domains. We will release our code upon acceptance.
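
As a rough illustration of the idea described above, here is a minimal PyTorch sketch in which a learned positional embedding serves as the attention query over both visual and semantic features; the class name, dimensions, and fusion layer are our own assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class MDCDPSketch(nn.Module):
    """Hypothetical sketch: a learned positional embedding queries visual and
    semantic features via attention, tying position to both domains."""
    def __init__(self, d_model=512, n_heads=8, max_len=32):
        super().__init__()
        self.pos_emb = nn.Parameter(torch.randn(max_len, d_model))  # content-free position queries
        self.vis_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.sem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = nn.Linear(2 * d_model, d_model)                 # merge the two attended clues

    def forward(self, visual_feats, semantic_feats):
        # visual_feats: (B, H*W, d_model); semantic_feats: (B, T, d_model)
        q = self.pos_emb.unsqueeze(0).expand(visual_feats.size(0), -1, -1)
        vis, _ = self.vis_attn(q, visual_feats, visual_feats)       # positions attend to image regions
        sem, _ = self.sem_attn(q, semantic_feats, semantic_feats)   # positions attend to characters
        return self.fuse(torch.cat([vis, sem], dim=-1))             # position encoding tied to both domains
```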


Parallel Scale-wise Attention Network for Effective Scene Text Recognition

arXiv.org Artificial Intelligence

This paper proposes a new text recognition network for scene-text images. Many state-of-the-art methods employ the attention mechanism in either the text encoder or the decoder for text alignment. Although encoder-based attention yields promising results, these schemes have noticeable limitations: they perform feature extraction (FE) and visual attention (VA) sequentially, which restricts the attention mechanism to the final single-scale output of FE, and the attention process is applied directly only to single-scale feature maps. To address these issues, we propose a new multi-scale, encoder-based attention network for text recognition that performs multi-scale FE and VA in parallel. The multi-scale channels also undergo regular fusion with each other to develop coordinated knowledge together. Quantitative evaluation and robustness analysis on the standard benchmarks demonstrate that the proposed network outperforms the state of the art in most cases.
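
A minimal PyTorch sketch of what per-scale attention with cross-scale fusion might look like; the paper's actual attention and fusion schemes are not detailed in the abstract, so the sigmoid spatial attention and mean fusion below are purely our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelScaleAttention(nn.Module):
    """Hypothetical sketch: visual attention computed per scale in parallel,
    then fused, instead of attending only to one final feature map."""
    def __init__(self, channels=128, num_scales=3):
        super().__init__()
        self.attn_maps = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_scales)
        )

    def forward(self, feats):
        # feats: list of (B, C, H_i, W_i) feature maps from different scales
        target = feats[0].shape[-2:]
        fused = []
        for f, conv in zip(feats, self.attn_maps):
            a = torch.sigmoid(conv(f))  # per-scale spatial attention map
            f = F.interpolate(f * a, size=target, mode='bilinear', align_corners=False)
            fused.append(f)
        return torch.stack(fused).mean(0)  # simple fusion across scales
```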


Hamming OCR: A Locality Sensitive Hashing Neural Network for Scene Text Recognition

arXiv.org Artificial Intelligence

Recently, inspired by the Transformer, self-attention-based scene text recognition approaches have achieved outstanding performance. However, we find that the model size expands rapidly as the lexicon grows: the number of parameters in the softmax classification layer and the output embedding layer is proportional to the vocabulary size. This hinders the development of lightweight text recognition models, especially for Chinese and multilingual text. We therefore propose a lightweight scene text recognition model named Hamming OCR. In this model, a novel Hamming classifier, which adopts a locality sensitive hashing (LSH) algorithm to encode each character, replaces the softmax regression, and the generated LSH code directly replaces the output embedding. We also present a simplified Transformer decoder that reduces the number of parameters by removing the feed-forward network and using a cross-layer parameter sharing technique. Compared with traditional methods, the number of parameters in both the classification and embedding layers is independent of the vocabulary size, which significantly reduces the storage requirement without loss of accuracy. Experimental results on several datasets, including four public benchmarks and a Chinese text dataset synthesized by SynthText with more than 20,000 characters, show that Hamming OCR achieves competitive results.
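
The Hamming-classifier idea can be illustrated with a short NumPy sketch: each character receives a fixed binary code from random-hyperplane LSH, the network head predicts the bits, and classification is a nearest-Hamming-distance lookup. The sizes and the random-projection construction below are our assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim, code_bits = 20000, 256, 128

# Random-hyperplane LSH: project character embeddings and threshold at zero.
char_embs = rng.standard_normal((vocab_size, emb_dim))
hyperplanes = rng.standard_normal((code_bits, emb_dim))
char_codes = (char_embs @ hyperplanes.T > 0).astype(np.uint8)  # (vocab, code_bits)

def classify(predicted_bits: np.ndarray) -> int:
    """Return the character whose LSH code is nearest in Hamming distance."""
    dists = np.count_nonzero(char_codes != predicted_bits, axis=1)
    return int(dists.argmin())

# A network head would output `code_bits` logits, thresholded to bits;
# the parameter count depends on code_bits, not on vocab_size.
pred = (rng.standard_normal(code_bits) > 0).astype(np.uint8)
print(classify(pred))
```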


Build a Handwritten Text Recognition System using TensorFlow

#artificialintelligence

Offline Handwritten Text Recognition (HTR) systems transcribe text contained in scanned images into digital text; an example is shown in Figure 1. We will build a Neural Network (NN) trained on word images from the IAM dataset. Because the input layer (and therefore all the other layers) can be kept small for word images, NN training is feasible on the CPU (though a GPU would, of course, be faster). This implementation is the bare minimum needed for HTR with TensorFlow.
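
Here is a minimal Keras sketch of the kind of network the article builds: CNN features, a bidirectional RNN, and per-step character distributions trained with CTC. The layer sizes, input shape, and 80-class output are illustrative, not the article's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(32, 128, 1))                    # 128x32 grayscale word image
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(2)(x)                                # -> (16, 64, 32)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(2)(x)                                # -> (8, 32, 64)
x = layers.Permute((2, 1, 3))(x)                             # time steps run along the width axis
x = layers.Reshape((32, 8 * 64))(x)                          # (time, features) for the RNN
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
outputs = layers.Dense(80, activation='softmax')(x)          # e.g. 79 characters + CTC blank
model = tf.keras.Model(inputs, outputs)                      # train with a CTC loss, e.g. tf.nn.ctc_loss
model.summary()
```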


Machine Learning with Python: NLP and Text Recognition

#artificialintelligence

Student and freelance AI / Big Data developer with a passion for full stack. In this article, I apply a series of natural language processing techniques to a dataset containing reviews about businesses. After that, I train a model using Logistic Regression to forecast whether a review is "positive" or "negative". The natural language processing field offers a series of tools that are very useful for extracting, labeling, and forecasting information from raw text data. This collection of techniques is mainly used in emotion recognition, text tagging (for example, to automate the sorting of complaints from clients), chatbots, and voice assistants.
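
A minimal scikit-learn sketch of the described pipeline: reviews are vectorized (here with TF-IDF, one common choice; the article's exact features are an assumption) and fed to Logistic Regression. The tiny inline dataset is illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy review dataset: 1 = positive, 0 = negative.
reviews = ["great food and friendly staff", "terrible service, never again",
           "loved it, will come back", "cold meal and rude waiter"]
labels = [1, 0, 1, 0]

# Vectorize raw text, then classify with Logistic Regression.
clf = make_pipeline(TfidfVectorizer(stop_words='english'), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["the staff was friendly and the food great"]))  # -> [1]
```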


ML Kit Android: Implementing Text Recognition -- Firebase

#artificialintelligence

Now that Firebase is set up, we can start building our Text Recognition app. We need the Firebase ML Vision dependency, which we add to our app-level build.gradle. After capturing the image from the camera, we set the image into the ImageView. Our app is ready to use: run the app and tap the camera icon to launch the camera on your Android device, take a picture of some text, then tap the tick icon and watch Firebase do the magic for you.


Dropbox text recognition makes it easier to find images and PDFs

Engadget

There's nothing worse than having to pore over a pile of PDFs containing documents scanned as images when you need to find a specific file quickly. Dropbox is making that easier by introducing automatic text recognition, which extracts text from photos and PDFs and makes it searchable. According to the cloud storage provider, there are 20 billion image and PDF files stored on Dropbox, and around 10 to 20 percent of those are photos of documents, so the new feature can be very useful. To look for a specific photo or PDF, you simply type in a keyword or phrase as you would on a search engine.


Facebook is making AI that can identify offensive memes

#artificialintelligence

Facebook's moderators can't possibly look through every single image that gets posted on the enormous platform, so Facebook is building AI to help them out. In a blog post today, Facebook describes a system it's built called Rosetta that uses machine learning to identify text in images and videos and then transcribe it into something that's machine readable. In particular, Facebook is finding this tool helpful for transcribing the text on memes. Text transcription tools are nothing new, but Facebook faces different challenges because of the size of its platform and the variety of the images it sees. Rosetta is said to be live now, extracting text from 1 billion images and video frames per day across both Facebook and Instagram.


Double Supervised Network with Attention Mechanism for Scene Text Recognition

arXiv.org Artificial Intelligence

In this paper, we propose the Double Supervised Network with Attention Mechanism (DSAN), a novel end-to-end trainable framework for scene text recognition. It incorporates a text attention module during feature extraction that forces the model to focus on text regions, and the whole framework is supervised by two branches: one comes from context-level modelling, and the other from an extra supervision enhancement branch that tackles inexplicit semantic information at the character level. The two supervisions benefit each other and yield better performance. The proposed approach can recognize text of arbitrary length and does not need any predefined lexicon. Our method outperforms the current state-of-the-art methods on three text recognition benchmarks, IIIT5K, ICDAR2013, and SVT, reaching accuracies of 88.6%, 92.3%, and 84.1% respectively, which suggests the effectiveness of the proposed method.
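
At training time, the double-supervision scheme amounts to a combined loss over the two branches. A hypothetical PyTorch sketch; the function name and the weighting factor `lam` are our assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def dsan_style_loss(context_logits, char_logits, context_targets, char_targets, lam=1.0):
    """Combine the two supervision branches into one training objective.
    Logits are (N, C) class scores; targets are (N,) class indices."""
    context_loss = F.cross_entropy(context_logits, context_targets)  # context-level modelling branch
    char_loss = F.cross_entropy(char_logits, char_targets)           # character-level enhancement branch
    return context_loss + lam * char_loss
```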


STN-OCR: A single Neural Network for Text Detection and Text Recognition

#artificialintelligence

STN-OCR, a single semi-supervised Deep Neural Network (DNN), consists of a spatial transformer network, which is used to detect text regions in images, and a text recognition network, which recognizes the textual content of the identified regions. STN-OCR is an end-to-end scene text recognition system, but it is not easy to train. The model is mostly able to detect text in differently arranged lines in images while also recognizing the content of these words. An overview of the system is shown in Figure 1. Compared with most current text recognition systems, which extract all the information from the image at once, STN-OCR behaves more like a human.
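
The spatial-transformer step STN-OCR is built on can be illustrated with PyTorch's affine-grid sampling: predicted affine parameters warp the input so a text region is cropped and rectified before recognition. In the sketch below, `theta` is fixed for illustration; in STN-OCR a localization network predicts it per text region.

```python
import torch
import torch.nn.functional as F

image = torch.rand(1, 1, 64, 256)               # (B, C, H, W) input image
theta = torch.tensor([[[0.5, 0.0, 0.0],         # affine params: here a fixed
                       [0.0, 0.5, 0.0]]])       # zoom into the image centre
grid = F.affine_grid(theta, size=(1, 1, 32, 128), align_corners=False)
text_region = F.grid_sample(image, grid, align_corners=False)  # (1, 1, 32, 128)
# `text_region` would then be fed to the text recognition network.
```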