The Phone Reader


THE PHONE READER

Submitted in partial fulfilment of the requirements of the degree of
BACHELOR OF SCIENCE (HONOURS)
of Rhodes University

Michèle Marilyn Bihina Bihina

Grahamstown, South Africa
November 2012

Abstract

The Phone Reader is an Android application that reads out text extracted from a photo taken with an Android mobile phone. It uses the Tesseract OCR engine to provide accurate character recognition in the image, and the Apertium translation engine to translate the extracted text. It aims to help people with reading disabilities, illiterate users and non-native speakers hear the content of text they have difficulty reading. The system provides a user-friendly client interface that communicates with a remote server; the server processes uploaded images to extract the text contained in them.

ACM Computing Classification System

Thesis classification under the ACM Computing Classification System (1998 version, valid through 2012):

B.4.1 [Data Communications Devices]: Receivers (voice, data, image)
I.2.7 [Natural Language Processing]: Speech recognition and synthesis
I.4 [Image Processing and Computer Vision]: Image processing software
I.7.5 [Document Capture]: Graphics Recognition and Interpretation; Optical character recognition (OCR); Scanning

General Terms: Image processing, Android application, OCR (Optical Character Recognition), Text recognition, Text-To-Speech, Text translation

Acknowledgments

I would like to thank God for all the strength He has given me during this year. I would like to give my deep and sincere thanks to all the members of my family who have helped and supported me this year, especially my mother and my uncle J.V. Nkolo. I also want to thank the following people for their support:

* My supervisor, Mr James Connan, for all the help he has offered me during the development of this project.
* Rhodes University and the Department of Computer Science for the opportunity to pursue my Honours degree.
* All my friends who have assisted and advised me this year.

And finally, I would like to acknowledge the financial and technical support of Telkom, Tellabs, Stortech, Genband, Easttel, Bright Ideas 39 and THRIP through the Telkom Centre of Excellence in the Department of Computer Science at Rhodes University.

Contents

Abstract
ACM Computing Classification System
Acknowledgments
1 Introduction
  1.1 Problem statement
  1.2 Objectives
  1.3 Methodology
  1.4 Progression
  1.5 Structure of the thesis
2 Literature Review
  2.1 Introduction
  2.2 Image processing
    2.2.1 Definitions
    2.2.2 Image processing methods
  2.3 Text reading systems
    2.3.1 Mobile text reading applications requiring OCR
    2.3.2 Mobile text reading applications not requiring OCR
  2.4 Object recognition systems
    2.4.1 Systems using crowd-sourcing for object recognition
    2.4.2 Mobile applications using visual search on specific types of objects
  2.5 Text and object recognition systems offering extended functionalities
  2.6 Common tools used by object recognition and text reading mobile applications
    2.6.1 Mobile operating systems
    2.6.2 OCR
    2.6.3 Text-To-Speech engine
  2.7 Plan of action
  2.8 Conclusion
3 Design of the system
  3.1 Introduction
  3.2 Textual description
  3.3 System design
    3.3.1 System architecture
    3.3.2 UML approach
  3.4 Conclusion
4 Implementation
  4.1 Introduction
  4.2 System requirements
  4.3 Description of the tools used for the system
  4.4 Code documentation
    4.4.1 Image processing techniques used with ImageMagick
    4.4.2 Java implementation of the classes of the system
  4.5 Conclusion
5 Tests and Results
  5.1 Different font sizes for the same text
  5.2 Lighting conditions
  5.3 Testing the translation accuracy
6 Conclusion
  6.1 Goals achieved by the system
  6.2 Limits of the system
  6.3 Future work
  6.4 Conclusion
A User's guide

List of Tables

1.1 Progression of the Phone Reader project
4.1 System specifications

List of Figures

3.1 Flowchart diagram of the system
3.2 Architecture of the system
3.3 Use case diagram
3.4 Class diagram on the client side
3.5 Class diagram on the server side
4.1 Function to apply the Unsharp method to an image
4.2 Initialization of the Camera Activity class
4.3 Function to open camera mode
4.4 Display bitmap image on phone screen
4.5 Snippet of code for the event ACTION_UP
4.6 Map a language to its code
4.7 PHP script to upload an image to the server
4.8 URL to download the text file
4.9 Calling the image processing methods from the main class
4.10 Calling the OCR function
4.11 Calling the translation function
4.12 Function to perform Text-To-Speech
5.1 OCR results of a text with font size 12
5.2 OCR results of a text with font size 14
5.3 OCR results of a text with font size 16
5.4 OCR results under low light
5.5 OCR results under low light using the camera flash
5.6 Result of the pre-processing of an image taken with flash activated
5.7 OCR result of a pre-processed image taken with flash activated
5.8 Representation of accurately translated words in a text
A.1 Screen 1
A.2 Screen 2
A.3 Screen 3
A.4 Screen 4
A.5 Screen 5
A.6 Screen 6
A.7 Screen 7

Chapter 1: Introduction

In today's society, mobile phones offer a wide variety of functionalities that are not always related to calling or sending messages. These functionalities include web browsing, playing games or music, banking, taking photos and much more. The Phone Reader is an Android application that aims to allow the user to hear text contained in a picture taken with a mobile phone.
It is an application meant to help those who cannot read a text they encounter, such as non-native speakers, the visually impaired and the blind, the latter estimated at 285 million people in 2010 by the World Health Organization [22]. This project is mainly concerned with image processing to recognize the characters in an image.

1.1 Problem statement

Reading or understanding a text can at times be a challenge: if it is written in a foreign language, if the reader is illiterate, or if the reader has reading disabilities. Solving this problem is the goal of the Phone Reader project, which aims to develop a mobile application that can read a text aloud to the user through an Android device. To use it, the user photographs the text with his phone, chooses a language for the translation if necessary, and sends the photo to the server, which extracts the text from the photo and produces its speech.

1.2 Objectives

The Phone Reader is meant to help different types of people who are unable to read a text. The following list presents cases in which the Phone Reader can be used:

• Blind people can use it when they have a text to read.
• Non-native speakers (like tourists) can use it when they do not understand a text written in a foreign language, or when they are unsure of the correct pronunciation of words.
• Illiterate or dyslexic users can use it when they have difficulties reading a text.

1.3 Methodology

The system was programmed for Android [26], a Linux-based operating system for mobile devices developed by the Open Handset Alliance. The phone sends requests to an Apache server by uploading photos to it. The Apache server processes the client's request by pre-processing the uploaded image, then extracting its text with an OCR (Optical Character Recognition) program. A TTS (Text-To-Speech) engine produces the speech on the phone after any required translation of the extracted text has been performed. The programming languages used are Java and PHP.
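The server-side flow described above (pre-process the photo, extract its text with Tesseract, then translate it with Apertium) can be sketched as a small Java helper that builds the command lines for the two tools. This is an illustrative sketch, not code from the thesis: the class and method names are hypothetical, and the options shown assume the standard `tesseract` and `apertium` command-line interfaces.

```java
import java.util.List;

// Hypothetical sketch of the server-side pipeline: Tesseract extracts the
// text from the uploaded photo, then Apertium translates it. The command
// layouts below follow the two tools' usual CLIs; a real server would run
// them with ProcessBuilder and send the resulting text back to the phone.
public class PhoneReaderPipeline {

    // "tesseract <image> <outputBase> -l <lang>" writes <outputBase>.txt
    static List<String> ocrCommand(String imagePath, String outputBase, String lang) {
        return List.of("tesseract", imagePath, outputBase, "-l", lang);
    }

    // "apertium <pair>" reads text on stdin and writes the translation to
    // stdout, e.g. the pair "en-es" for English to Spanish
    static List<String> translateCommand(String languagePair) {
        return List.of("apertium", languagePair);
    }

    // Launch one stage of the pipeline (not exercised in main, since it
    // requires the external tools to be installed on the server).
    static Process start(List<String> command) throws java.io.IOException {
        return new ProcessBuilder(command).redirectErrorStream(true).start();
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", ocrCommand("photo.jpg", "result", "eng")));
        System.out.println(String.join(" ", translateCommand("en-es")));
        // prints:
        // tesseract photo.jpg result -l eng
        // apertium en-es
    }
}
```

In this arrangement the phone only uploads the image and plays the speech; all heavy processing stays on the server, which matches the client-server split described in the methodology.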
1.4 Progression

The following table presents the steps that need to be accomplished in order to develop the Phone Reader:

Table 1.1: Progression of the Phone Reader project

Step 1:  Review existing technologies
Step 2:  Determine system requirements
Step 3:  Configure web service
Step 4:  Implement and evaluate preprocessing
Step 5:  Implement OCR
Step 6:  Implement translation
Step 7:  Implement user interface/phone client
Step 8:  Implement TTS functionality on the phone
Step 9:  Test the system
Step 10: Documentation

1.5 Structure of the thesis

This thesis has six chapters, the first of which is this introduction. Chapter 2, the literature review, surveys the work related to this project; we examine which tools have been used elsewhere and which tools we could use for our system. Chapter 3 describes how the system has been designed and presents an overview of its structure. Chapter 4 covers the implementation; it describes all the technical aspects of the Phone Reader: the system requirements and the programming. Chapter 5 presents and discusses the results obtained from different tests performed with the system. Chapter 6 is the conclusion; it presents the system's performance and how it can be improved.