Attention networks for image-to-text
The paper approaches the problem of image-to-text with attention-based encoder-decoder networks that are trained to handle sequences of characters rather than words. We experiment with different attention mechanisms for the decoder on lines of text from a popular handwriting database. The model trained with softmax attention achieves the lowest test error, outperforming several other RNN-based models. Our results show that softmax attention learns a precise linear alignment, whereas sigmoid attention produces an alignment that is also roughly linear but much less precise.
Dec-11-2017
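
As a rough illustration of the two decoder attention variants compared in the abstract, the sketch below contrasts softmax-normalized and sigmoid-gated attention weights over a toy sequence of encoder states. The dot-product scoring, function names, and dimensions here are illustrative assumptions only and are not taken from the paper's architecture.

```python
import numpy as np

def attention_weights(query, keys, mode="softmax"):
    """Score each encoder timestep against the decoder query and
    normalize the scores into attention weights.

    mode="softmax": weights compete and sum to 1 across timesteps.
    mode="sigmoid": each weight is squashed independently into (0, 1).
    """
    # Dot-product scores between the decoder query and every encoder key.
    scores = keys @ query                      # shape: (T,)
    if mode == "softmax":
        e = np.exp(scores - scores.max())      # numerically stable softmax
        return e / e.sum()
    if mode == "sigmoid":
        return 1.0 / (1.0 + np.exp(-scores))   # element-wise sigmoid gate
    raise ValueError(f"unknown mode: {mode}")

def context_vector(query, keys, values, mode="softmax"):
    """Weighted sum of encoder states used by the decoder at one step."""
    w = attention_weights(query, keys, mode)
    return w @ values                          # shape: (d,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, d = 12, 8                               # 12 encoder timesteps, dim 8
    keys = values = rng.normal(size=(T, d))    # toy encoder states
    query = rng.normal(size=d)                 # toy decoder state
    for mode in ("softmax", "sigmoid"):
        w = attention_weights(query, keys, mode)
        print(mode, "weights sum to", round(float(w.sum()), 3))
```

The key contrast is in the normalization: softmax forces the weights to compete across timesteps, while the sigmoid variant gates each timestep independently, which is one plausible reason its alignments can be less sharply localized.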