OCR4all—An Open-Source Tool Providing a (Semi-)Automatic OCR Workflow for Historical Printings
Christian Reul 1,*, Dennis Christ 1, Alexander Hartelt 1, Nico Balbach 1, Maximilian Wehner 1, Uwe Springmann 2, Christoph Wick 1, Christine Grundig 3, Andreas Büttner 4 and Frank Puppe 1

1 Artificial Intelligence and Applied Computer Science, University of Würzburg, 97074 Würzburg, Germany; [email protected] (D.C.); [email protected] (A.H.); [email protected] (N.B.); [email protected] (M.W.); [email protected] (C.W.); [email protected] (F.P.)
2 Center for Information and Language Processing, LMU Munich, 80538 Munich, Germany; [email protected]
3 Institute for Modern Art History, University of Zurich, 8006 Zurich, Switzerland; [email protected]
4 Institute for Philosophy, University of Würzburg, 97074 Würzburg, Germany; [email protected]
* Correspondence: [email protected]

Received: 6 September 2019; Accepted: 1 November 2019; Published: 13 November 2019

Abstract: Optical Character Recognition (OCR) on historical printings is a challenging task, mainly due to the complexity of the layout and the highly variant typography. Nevertheless, in the last few years, great progress has been made in the area of historical OCR, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, character recognition, and post-processing. However, these tools are often of limited use to non-technical users such as humanities scholars, particularly when several tools have to be combined in a workflow. In this paper, we present an open-source OCR software called OCR4all, which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users.
This is mostly due to the fact that the ground truth required for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality. To deal with this issue in the short run, OCR4all offers a comfortable GUI that allows error corrections not only in the final output but already in early stages, in order to minimize error propagation. In the long run, this constant manual correction produces large quantities of valuable, high-quality training material, which can be used to improve fully automatic approaches. Moreover, extensive configuration capabilities are provided to set the degree of automation of the workflow and to adapt the carefully selected default parameters to specific printings, if necessary. In our experiments, the fully automatic application to 19th century novels showed that OCR4all can considerably outperform the commercial state-of-the-art tool ABBYY Finereader on moderate layouts if suitably pretrained mixed OCR models are available. Furthermore, on very complex early printed books, even users with minimal or no experience were able to capture the text with manageable effort and great quality, achieving excellent Character Error Rates (CERs) below 0.5%. The architecture of OCR4all allows the easy integration (or substitution) of newly developed tools for its main components via standardized interfaces such as PageXML, thus aiming at continually higher automation for historical printings.

Keywords: optical character recognition; document analysis; historical printings

Appl. Sci. 2019, 9, 4853; doi:10.3390/app9224853; www.mdpi.com/journal/applsci

1. Introduction

While Optical Character Recognition (OCR) is regularly considered to be a solved problem [1], gathering the textual content of historical printings using OCR can still be a very challenging and cumbersome task [2], due to various reasons.
Among the problems that need to be addressed for early printings is the often intricate layout containing images, ornaments, marginal notes, and swash capitals. Furthermore, the non-standardized typography represents a big challenge for OCR approaches. While modern fonts can be recognized with excellent accuracy by so-called omnifont or polyfont models, very early printings like incunabula (books printed before 1501), but also handwritten texts, usually require book-specific training in order to reach Character Error Rates (CERs) well below 10% or even 5%, as shown by Springmann et al. [3] (printings) and Fischer et al. [4] (manuscripts). For a successful supervised training process, the Ground Truth (GT), in the form of line images and their corresponding transcriptions, has to be prepared manually as training examples.

In the last few years, some progress has been made in the area of historical OCR, especially concerning the character recognition problem. An important milestone was the introduction of recurrent neural networks with Long Short-Term Memory (LSTM) [5] trained using a Connectionist Temporal Classification (CTC) [6] decoder, which Breuel et al. applied to the task of OCR [7]. The LSTM approach was later extended by deep Convolutional Neural Networks (CNNs), pushing the recognition accuracy even further [8,9].

The present paper describes our efforts to collect these recent advances into an easy-to-use and platform-independent software environment called OCR4all that enables an interested party to obtain a textual digital representation of the contents of these printings. OCR4all covers all steps of an OCR workflow, from preprocessing, document analysis (segmentation of text and non-text regions on a page), and model training to character recognition of the text regions. Our focus throughout is on an easy-to-use and efficient method, employing automatic methods where feasible and resorting to manual intervention where necessary.
A special feature of our process model is that manual interventions lead to the production of high-quality GT, which is used as additional training data, thus enabling a spiral towards continuously higher automation. In the following, we give a short overview of the steps of a typical OCR workflow and of how we address the challenges that arise for early printings.

1.1. Steps of a Typical OCR Workflow

The character recognition itself represents only one subtask within an OCR workflow, which usually consists of four main steps (see Figure 1), each of which can often be split up into further substeps. We use the term "OCR" for a separate main step within the OCR workflow, as other designations like "recognition" would be misleading, since the step comprises more sub-tasks than the text recognition alone.

Figure 1. Main steps of a typical OCR workflow. From left to right: original image, preprocessing, segmentation, OCR, post-correction.

1. Preprocessing: First of all, the input images have to be prepared for further processing. Generally, this includes a step that simplifies the representation of the original color image by converting it into binary, as well as a deskewing operation in order to get the pages into an upright position. Additional routines like cropping, dewarping, denoising, or despeckling may be performed.

2. Segmentation: Next, one or several segmentation steps have to be conducted, depending mostly on the material at hand and the requirements of the user. After separating the text regions from non-text areas, individual text lines or even single glyphs have to be identified. Optionally, a more fine-grained classification of non-text elements (images, ornaments, etc.), as well as of text elements (headings, marginalia, etc.), can already be performed on the layout level. Another important sub-task is the determination of the reading order, which defines the succession of text elements (regions and/or lines) on a page.
3. OCR: The recognition of the segmented lines (or glyphs) leads to a textual representation of the printed input. Depending on the material at hand and the user requirements, this can be performed either by making use of existing mixed models or by training book-specific models after producing the required GT.

4. Post-processing: The raw OCR output can be further improved during a post-processing step, for example by incorporating dictionaries or language models. Depending on the accuracy requirements of the users, this step can support or replace the manual final correction phase.

As for the final output, plain text, that is, the (post-processed) OCR output, has to be considered the minimal solution. Additionally, several formats that can incorporate a more sophisticated output, also containing layout or confidence information, have been proposed, for example ALTO (https://www.loc.gov/standards/alto), hOCR [10], or PAGE [11].

1.2. Challenges for the Users

To produce training data for the OCR, one has to find and transcribe text lines manually (considering a line-based approach), which is a highly non-trivial task when dealing with very old fonts and historical languages. However, the combination of all steps for automatic transcription with manual support can be supported by components of open-source tools such as OCRopus, Tesseract, or Calamari. While these tools are highly functional and very powerful, their usage can be quite complicated, as they:

• in most cases lack a comfortable GUI, which leaves the users with the often unfamiliar command line usage
• usually rely on different input/output formats, which requires the users to invest additional effort
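The dictionary-based post-processing mentioned in step 4 can be sketched as follows: each OCR token is replaced by the nearest dictionary entry if it lies within a small edit distance. This is a minimal, assumed illustration of the general idea, not the post-correction implemented in OCR4all or any of the tools named above; the lexicon and the distance threshold are hypothetical.

```python
# Minimal sketch of dictionary-based OCR post-correction: replace a token
# with the closest lexicon word if it is within max_dist edits. A purely
# illustrative approach; real systems also weigh language-model context.

def edit_distance(a, b):
    """Classic Levenshtein distance via row-wise dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(token, lexicon, max_dist=1):
    """Return the nearest lexicon entry within max_dist, else the token."""
    best = min(lexicon, key=lambda w: edit_distance(token, w))
    return best if edit_distance(token, best) <= max_dist else token

lexicon = {"historical", "printing", "workflow"}
print(correct("printinq", lexicon))  # -> printing  (one substitution away)
print(correct("xyz", lexicon))       # -> xyz       (no close match, kept as-is)
```

The threshold keeps the correction conservative: tokens with no sufficiently close lexicon entry, such as proper names absent from the dictionary, are left untouched rather than over-corrected.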