Efficient, Lexicon-Free OCR Using Deep Learning


Marcin Namysl, Iuliu Konya
Fraunhofer IAIS, 53757 Sankt Augustin, Germany
[email protected], [email protected]

arXiv:1906.01969v1 [cs.CV] 5 Jun 2019

Abstract—Contrary to popular belief, Optical Character Recognition (OCR) remains a challenging problem when text occurs in unconstrained environments, like natural scenes, due to geometrical distortions, complex backgrounds, and diverse fonts. In this paper, we present a segmentation-free OCR system that combines deep learning methods, synthetic training data generation, and data augmentation techniques. We render synthetic training data using large text corpora and over 2 000 fonts. To simulate text occurring in complex natural scenes, we augment extracted samples with geometric distortions and with a proposed data augmentation technique: alpha-compositing with background textures. Our models employ a convolutional neural network encoder to extract features from text images. Inspired by the recent progress in neural machine translation and language modeling, we examine the capabilities of both recurrent and convolutional neural networks in modeling the interactions between input elements. The proposed OCR system surpasses the accuracy of leading commercial and open-source engines on distorted text samples.

Index Terms—OCR, CNN, LSTM, CTC, synthetic data

I. INTRODUCTION

Optical character recognition (OCR) is one of the most widely studied problems in the field of pattern recognition and computer vision. It is not limited to printed documents, but also covers handwritten documents [1] as well as natural scene text [2]. The accuracy of various OCR methods has recently greatly improved due to advances in deep learning [3]–[5]. Moreover, many current open-source and commercial products reach a high recognition accuracy and good throughput for run-of-the-mill printed document images. While this has led the research community to regard OCR as a largely solved problem, we show that even the most successful and widespread OCR solutions are able to robustly handle neither large font varieties nor distorted texts, potentially superimposed on complex backgrounds. Such unconstrained environments for digital documents have already become predominant, due to the wide availability of mobile phones and various specialized video recording devices.

In contrast to popular OCR engines, methods used in scene text recognition [6], [7] exploit computationally expensive network models, aiming to achieve the best possible recognition rates on popular benchmarks. Such methods are tuned to deal with significantly smaller amounts of text per image and are often constrained to predefined lexicons. Commonly used evaluation protocols substantially limit the diversity of symbols to be recognized, e.g., by ignoring all non-alphanumeric characters or neglecting case sensitivity [8]. Hence, models designed for scene text are generally inadequate for printed document OCR, for which high throughput and support for a great variety of symbols are essential.

In this paper, we address the general OCR problem and try to overcome the limitations of both printed- and scene text recognition systems. To this end, we present a fast and robust deep learning multi-font OCR engine, which currently recognizes 132 different character classes. Our models are trained almost exclusively using synthetically generated documents. We employ segmentation-free text recognition methods that require a much lower data labeling effort, making the resulting framework more readily extensible to new languages and scripts. Subsequently, we propose a novel data augmentation technique that improves the robustness of neural models for text recognition. Several large and challenging datasets, consisting of both real and synthetically rendered documents, are used to evaluate all OCR methods. The comparison with leading established commercial engines (ABBYY FineReader¹, OmniPage Capture²) and open-source engines (Tesseract 3 [9], Tesseract 4³) shows that the proposed solutions obtain significantly better recognition results with comparable execution time.

¹ https://www.abbyy.com/en-eu/ocr-sdk/
² https://www.nuance.com/print-capture-and-pdf-solutions/optical-character-recognition/omnipage/omnipage-for-developers.html
³ https://github.com/tesseract-ocr/tesseract/wiki/4.0-with-LSTM

The remaining part of this paper is organized as follows: in Section II, we highlight related research papers, while in Section III, we describe the datasets used in our experiments, as well as the data augmentation routines. In Section IV, we present the detailed system architecture, which is then evaluated and compared against several state-of-the-art OCR engines in Section V. Our conclusions, alongside a few worthwhile avenues for further investigation, are the subject of the final Section VI.

II. RELATED WORK

In this section, we review related approaches for printed-, handwritten-, and scene text recognition. These can be broadly categorized into segmentation-based and segmentation-free methods.

Segmentation-based OCR methods recognize individual character hypotheses, explicitly or implicitly generated by a character segmentation method. The output is a recognition lattice containing various segmentation and recognition alternatives weighted by the classifier. The lattice is then decoded, e.g., via a greedy or beam search method, and the decoding process may also make use of an external language model or allow the incorporation of certain (lexicon) constraints.

The PhotoOCR system for text extraction from smartphone imagery, proposed by Bissacco et al. [10], is a representative example of a segmentation-based OCR method. They used a deep neural network trained on extracted histogram of oriented gradients (HOG) features for character classification and incorporated a character-level language model into the score function.

The accuracy of segmentation-based methods suffers heavily from segmentation errors and from the lack of context information wider than a single cropped character-candidate image during classification. Improper incorporation of an external language model or lexicon constraints can degrade accuracy [11]. While offering high flexibility in the choice of segmenters, classifiers, and decoders, segmentation-based approaches require a similarly high effort to be tuned optimally for specific application scenarios. Moreover, the precise weighting of all involved hypotheses must be re-computed from scratch as soon as one component is updated (e.g., the language model), whereas the process of data labeling (e.g., at the character/pixel level) is usually a painstaking endeavor, with a high cost in terms of human annotation labor.

Segmentation-free OCR methods eliminate the need for pre-segmented inputs. Cropped words or entire text lines are usually geometrically normalized (Section III-B) and can then be recognized directly. Previous works on segmentation-free OCR [12] employed Hidden Markov Models (HMMs) to avoid the difficulties of segmentation-based techniques. Most of the recently developed segmentation-free solutions employ recurrent and convolutional neural networks.

Multi-directional, multi-dimensional recurrent neural networks (MDRNNs) currently enjoy a high popularity among researchers in the field of handwriting recognition [1] because […] in the field of printed document OCR. Instead, in order to overcome the issue of sensitivity to stroke variations along the vertical axis, researchers have proposed different solutions. For example, Breuel et al. [14] combined a standard 1-D LSTM network architecture with a text line normalization method for performing OCR of printed Latin and Fraktur scripts. In a similar manner, by normalizing the positions and baselines of letters, Yousefi et al. [16] achieved superior performance and faster convergence with a 1-D LSTM network over a 2-D variant for Arabic handwriting recognition.

An additional advantage of segmentation-free approaches is their inherent ability to work directly on grayscale or full-color images. This increases the robustness and accuracy of text recognition, as any information loss caused by a previously mandatory binarization step can be avoided. Asad et al. [17] applied the 1-D LSTM network directly to original, blurred document images and were able to obtain state-of-the-art recognition results.

The introduction of convolutional neural networks (CNNs) allowed for a further jump in recognition rates. Since CNNs are able to extract latent representations of input images, thus increasing robustness to local distortions, they can be successfully employed as a substitute for MD-LSTM layers. Breuel et al. [18] proposed a model that combined CNNs and LSTMs for printed text recognition: features extracted by the CNNs were combined and fed into an LSTM network with a Connectionist Temporal Classification (CTC) [19] output layer. A few recent methods have completely forgone the use of the computationally expensive recurrent layers and rely purely on convolutional layers for modeling the local context. Borisyuk et al. [20] presented a scalable OCR system called Rosetta, which employs a fully convolutional network (FCN) model followed by the CTC layer in order to extract the text depicted in input images uploaded daily at Facebook.

In the current work, we build upon the previously mentioned techniques and propose an end-to-end segmentation-free OCR system. Our approach is purely data-driven and can be adapted […]
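The alpha-compositing augmentation proposed in the abstract blends rendered text samples with background textures to simulate text in natural scenes. As a rough illustration, here is a minimal per-pixel sketch in plain Python; the function names and the alpha range are illustrative choices for this sketch, not parameters from the paper:

```python
import random

def alpha_composite(text_img, background, alpha):
    """Blend a grayscale text sample over a background texture patch.

    Both images are equally sized lists of rows of float pixels in [0, 1].
    Per pixel: out = alpha * text + (1 - alpha) * background.
    """
    return [
        [alpha * t + (1.0 - alpha) * b for t, b in zip(t_row, b_row)]
        for t_row, b_row in zip(text_img, background)
    ]

def augment_sample(text_img, textures, rng=random):
    """Composite one training sample onto a randomly chosen texture.

    The blending range [0.6, 0.95] is an arbitrary choice for this
    sketch, not the value used by the authors.
    """
    background = rng.choice(textures)
    alpha = rng.uniform(0.6, 0.95)
    return alpha_composite(text_img, background, alpha)

# A 1x2 "text image" blended over a 1x2 texture at alpha = 0.75.
print(alpha_composite([[1.0, 0.0]], [[0.0, 1.0]], 0.75))  # -> [[0.75, 0.25]]
```

A full training pipeline would apply the geometric distortions mentioned in the abstract before compositing, so that the network sees both kinds of variation together.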
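Both the lattice decoding described for segmentation-based systems and the CTC output layers used by the segmentation-free models above require a decoding step to turn per-timestep class scores into a string. The simplest strategy is greedy (best-path) CTC decoding: take the most probable label at each time step, collapse consecutive repeats, and drop the blank symbol. A minimal sketch, with a made-up alphabet and scores (nothing here is taken from the paper):

```python
BLANK = 0  # CTC reserves one class index for the "no character" blank

def ctc_greedy_decode(logits, alphabet):
    """Greedy CTC decoding.

    logits: list of per-timestep score lists, one score per class.
    alphabet: mapping from non-blank class index to character.
    """
    # 1. Pick the best class at each time step.
    path = [max(range(len(frame)), key=frame.__getitem__) for frame in logits]
    # 2. Collapse runs of identical labels (repeats encode one character
    #    unless separated by a blank).
    collapsed = [lab for i, lab in enumerate(path) if i == 0 or lab != path[i - 1]]
    # 3. Drop blanks and map the remaining labels to characters.
    return "".join(alphabet[lab] for lab in collapsed if lab != BLANK)

alphabet = {1: "c", 2: "a", 3: "t"}
# Six time steps over four classes (blank, c, a, t); the best path is
# "c c - a a t", which collapses to "cat".
logits = [
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.8, 0.1, 0.05, 0.05],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.6, 0.2],
    [0.1, 0.1, 0.1, 0.7],
]
print(ctc_greedy_decode(logits, alphabet))  # -> cat
```

Beam search decoding, also mentioned above, keeps the k most probable label prefixes per time step instead of a single path, which is where an external language model or lexicon constraint can be folded into the scores.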
