OCR4all—An Open-Source Tool Providing a (Semi-)Automatic OCR Workflow for Historical Printings


Christian Reul 1,*, Dennis Christ 1, Alexander Hartelt 1, Nico Balbach 1, Maximilian Wehner 1, Uwe Springmann 2, Christoph Wick 1, Christine Grundig 3, Andreas Büttner 4 and Frank Puppe 1

1 Artificial Intelligence and Applied Computer Science, University of Würzburg, 97074 Würzburg, Germany; [email protected] (D.C.); [email protected] (A.H.); [email protected] (N.B.); [email protected] (M.W.); [email protected] (C.W.); [email protected] (F.P.)
2 Center for Information and Language Processing, LMU Munich, 80538 Munich, Germany; [email protected]
3 Institute for Modern Art History, University of Zurich, 8006 Zurich, Switzerland; [email protected]
4 Institute for Philosophy, University of Würzburg, 97074 Würzburg, Germany; [email protected]
* Correspondence: [email protected]

Received: 6 September 2019; Accepted: 1 November 2019; Published: 13 November 2019

Abstract: Optical Character Recognition (OCR) on historical printings is a challenging task, mainly due to the complexity of the layout and the highly variant typography. Nevertheless, in the last few years, great progress has been made in the area of historical OCR, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, character recognition, and post-processing. The drawback of these tools is often their limited usability for non-technical users such as humanist scholars, in particular when several tools must be combined in a workflow. In this paper, we present an open-source OCR software called OCR4all, which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly because the ground truth required for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality. To deal with this issue in the short run, OCR4all offers a comfortable GUI that allows error corrections not only in the final output but already in early stages, in order to minimize error propagation. In the long run, this constant manual correction produces large quantities of valuable, high-quality training material, which can be used to improve fully automatic approaches. Furthermore, extensive configuration capabilities are provided to set the degree of automation of the workflow and to adapt the carefully selected default parameters to specific printings, if necessary. In our experiments, fully automatic application to 19th-century novels showed that OCR4all can considerably outperform the commercial state-of-the-art tool ABBYY Finereader on moderate layouts if suitably pretrained mixed OCR models are available. Furthermore, on very complex early printed books, even users with minimal or no experience were able to capture the text with manageable effort and high quality, achieving excellent Character Error Rates (CERs) below 0.5%. The architecture of OCR4all allows the easy integration (or substitution) of newly developed tools for its main components via standardized interfaces such as PageXML, thus aiming at continually higher automation for historical printings.
Keywords: optical character recognition; document analysis; historical printings

Appl. Sci. 2019, 9, 4853; doi:10.3390/app9224853

1. Introduction

While Optical Character Recognition (OCR) is regularly considered to be a solved problem [1], gathering the textual content of historical printings using OCR can still be a very challenging and cumbersome task [2], for various reasons. Among the problems that need to be addressed for early printings is the often intricate layout containing images, ornaments, marginal notes, and swash capitals. Furthermore, the non-standardized typography represents a major challenge for OCR approaches. While modern fonts can be recognized with excellent accuracy by so-called omnifont or polyfont models, very early printings like incunabula (books printed before 1501), but also handwritten texts, usually require book-specific training in order to reach Character Error Rates (CERs) well below 10% or even 5%, as shown by Springmann et al. [3] (printings) and Fischer et al. [4] (manuscripts). For a successful supervised training process, the Ground Truth (GT), in the form of line images and their corresponding transcriptions, has to be prepared manually as training examples.

In the last few years, considerable progress has been made in the area of historical OCR, especially concerning the character recognition problem. An important milestone was the introduction of recurrent neural networks with Long Short-Term Memory (LSTM) [5] trained using a Connectionist Temporal Classification (CTC) [6] decoder, which Breuel et al. applied to the task of OCR [7]. The LSTM approach was later extended by deep Convolutional Neural Networks (CNNs), pushing the recognition accuracy even further [8,9].

The present paper describes our efforts to collect these recent advances into an easy-to-use and platform-independent software environment called OCR4all that enables an interested party to obtain a textual digital representation of the contents of these printings. OCR4all covers all steps of an OCR workflow, from preprocessing and document analysis (segmentation of text and non-text regions on a page) over model training to character recognition of the text regions. Our focus throughout is on an easy-to-use and efficient method, employing automatic methods where feasible and resorting to manual intervention where necessary. A special feature of our process model is that manual interventions lead to the production of high-quality GT that is used as additional training data, thus enabling a spiral towards continuously higher automation. In the following, we give a short overview of the steps of a typical OCR workflow and of how we address the challenges that arise for early printings.

1.1. Steps of a Typical OCR Workflow

Character recognition in itself represents only one subtask within an OCR workflow, which usually consists of four main steps (see Figure 1), each of which can often be split into further substeps. We use the term "OCR" as a separate main step within the OCR workflow, as other notations like "recognition" would be misleading, since the step comprises more subtasks than the text recognition alone.

Figure 1. Main steps of a typical OCR workflow. From left to right: original image, preprocessing, segmentation, OCR, postcorrection.

1. Preprocessing: First of all, the input images have to be prepared for further processing. Generally, this includes a step that simplifies the representation of the original color image by converting it into binary, as well as a deskewing operation in order to get the pages into an upright position. Additional routines like cropping, dewarping, denoising, or despeckling may be performed (a minimal binarization and deskewing sketch follows after this list).

2. Segmentation: Next, one or several segmentation steps have to be conducted, mostly depending on the material at hand and the requirements of the user. After separating the text regions from non-text areas, individual text lines or even single glyphs have to be identified (see the line segmentation sketch below). Optionally, a more fine-grained classification for non-text (images, ornaments, etc.), as well as for text elements (headings, marginalia, etc.), can be performed already on the layout level. Another important subtask is the determination of the reading order, which defines the succession of text elements (regions and/or lines) on a page.

3. OCR: The recognition of the segmented lines (or glyphs) leads to a textual representation of the printed input. Depending on the material at hand and the user requirements, this can be performed by making use of existing mixed models and/or by training book-specific models after producing the required GT (see the recognizer sketch below).

4. Post-processing: The raw OCR output can be further improved in a post-processing step, for example by incorporating dictionaries or language models. This step can support or replace the manual final correction phase, depending on the accuracy requirements of the users.

As for the final output, plain text, that is, the (post-processed) OCR output, has to be considered the minimal solution. Additionally, several formats that can incorporate a more sophisticated output, also containing layout or confidence information, have been proposed, for example ALTO (https://www.loc.gov/standards/alto), hOCR [10], or PAGE [11].
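The following three sketches make steps 1 to 3 concrete in Python. They are minimal illustrations under simplifying assumptions, not the components OCR4all actually ships. The first covers step 1: a global Otsu binarization and a crude skew estimate taken from the minimum-area rectangle around the ink pixels; the file name is a placeholder.

```python
# Step 1 sketch: binarization and deskewing with OpenCV.
# Illustrative only; OCR4all uses its own preprocessing components.
import cv2
import numpy as np

def preprocess(path="page.png"):  # placeholder file name
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Global Otsu threshold; badly degraded historical scans often
    # need adaptive or learned binarization instead.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Estimate the page skew from the minimum-area rectangle that
    # encloses all ink (black) pixels.
    coords = np.column_stack(np.where(binary == 0)).astype(np.float32)
    angle = cv2.minAreaRect(coords)[-1]
    if angle > 45:          # angle conventions differ across OpenCV
        angle -= 90         # versions; normalize to a small rotation
    h, w = binary.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(binary, rot, (w, h),
                          flags=cv2.INTER_NEAREST,
                          borderMode=cv2.BORDER_CONSTANT, borderValue=255)
```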
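For step 2, a deliberately naive line segmentation on the binarized page: dilating the ink horizontally smears each text line into one connected blob, and the bounding boxes of those blobs, sorted top to bottom, yield lines in a trivial reading order. Real layout analysis of historical printings (text/non-text separation, marginalia, complex reading orders) requires far more than this; the kernel size is an assumption tied to the print size.

```python
# Step 2 sketch: text line segmentation by horizontal smearing.
import cv2

def segment_lines(binary):                # binary: white page, black ink
    ink = 255 - binary                    # findContours wants white foreground
    # Wide, flat kernel merges the glyphs of one line into a single blob;
    # (40, 3) is an illustrative size that depends on resolution and font.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 3))
    smeared = cv2.dilate(ink, kernel)
    contours, _ = cv2.findContours(smeared, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]   # (x, y, w, h)
    return sorted(boxes, key=lambda b: b[1])          # naive reading order
```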
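For step 3, a minimal sketch of the CNN/LSTM line recognizer trained with a CTC loss described in the introduction, here in PyTorch. All layer sizes, the alphabet size, and the input height are illustrative assumptions, not the actual configuration of OCR4all or its recognition backend.

```python
# Step 3 sketch: a CNN/LSTM text line recognizer with CTC, in PyTorch.
import torch
import torch.nn as nn

class LineRecognizer(nn.Module):
    def __init__(self, img_height=48, num_chars=100):
        super().__init__()
        # Convolutional feature extractor (the deep-CNN extension of
        # the plain LSTM approach mentioned in the introduction).
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Bidirectional LSTM reads the feature columns left to right.
        self.lstm = nn.LSTM(64 * (img_height // 4), 256,
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 256, num_chars + 1)  # +1: CTC blank label

    def forward(self, x):                 # x: (batch, 1, height, width)
        f = self.conv(x)                  # (batch, 64, height/4, width/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # one step per column
        out, _ = self.lstm(f)
        return self.fc(out).log_softmax(-1)  # per-step char probabilities

# CTC training needs no per-character alignment, only line transcriptions;
# the model output must be transposed to (time, batch, classes) for the loss.
model = LineRecognizer()
ctc_loss = nn.CTCLoss(blank=100, zero_infinity=True)  # blank = last class
```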
1.2. Challenges for the Users

To produce training data for the OCR, one has to find and transcribe text lines (considering a line-based approach) manually, which is a highly non-trivial task when dealing with very old fonts and historical languages. However, the combination of all steps, with automatic transcription complemented by manual support, can be covered by components of open-source tools such as OCRopus, Tesseract, or Calamari. While these tools are highly functional and very powerful, their usage can be quite complicated, as they:

• in most cases lack a comfortable GUI, which leaves the users with the often unfamiliar command line usage
• usually rely on different input/output formats, which requires the users to invest additional effort (a sketch for reading one such format, PAGE XML, follows below)
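To illustrate the format question raised by the last point, the following is a minimal sketch, assuming a PAGE XML file produced by one of these tools, of how the recognized region texts can be read. The namespace is the widely used 2013 PAGE schema, and both it and the file name are assumptions that may differ for real files.

```python
# Sketch: extract region-level text from a PAGE XML file.
import xml.etree.ElementTree as ET

# Commonly used 2013 PAGE schema; other files may declare other versions.
NS = {"pc": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}

def region_texts(path="page.xml"):        # placeholder file name
    root = ET.parse(path).getroot()
    texts = []
    for region in root.iter(f"{{{NS['pc']}}}TextRegion"):
        # Recognized text sits in TextEquiv/Unicode (regions may also
        # carry per-line TextEquiv elements inside TextLine children).
        uni = region.find("pc:TextEquiv/pc:Unicode", NS)
        if uni is not None and uni.text:
            texts.append(uni.text)
    return texts
```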