Automated Reading of High Volume Water Meters


by Jessica Ulyate

Thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Engineering at Stellenbosch University

Supervisor: Dr. R. Wolhuter
Department of Electrical and Electronic Engineering

March 2011

Declaration

By submitting this thesis electronically, I declare that the entirety of the work contained therein is my own original work, that I am the sole author thereof (save to the extent explicitly otherwise stated), that reproduction and publication thereof by Stellenbosch University will not infringe any third party rights, and that I have not previously, in its entirety or in part, submitted it for obtaining any qualification.

March 2011

Copyright © 2011 Stellenbosch University
All rights reserved.

Abstract

Accurate water usage information is very important for municipalities in order to provide accurate billing for high volume water users. Meter readings are currently obtained by sending a person out to every meter to take a manual reading. This is very costly in terms of both time and money, and it is also very error prone. In order to improve on this system, an image based telemetry system was developed that can be retrofitted to currently installed bulk water meters. Images of the meter dials are captured and transmitted to a central server, where they are further processed and enhanced. Character recognition is performed on the enhanced images in order to extract the meter readings. Tests showed that characters can be recognised with 100% accuracy in cases for which the character recognition software has been trained, and with 70% accuracy in cases for which it was not trained. Thus, an overall recognition accuracy of 85% was achieved. These results can be improved upon in future work by statistically analysing the results and utilising the inherent heuristic information from the meter dials. Overall, the feasibility of the approach was demonstrated and a way forward was indicated.
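The processing chain summarised in the abstract (image enhancement on the server followed by character recognition) is developed in Chapters 4 and 5 using a median filter, histogram normalisation, thresholding and the Tesseract OCR engine. The short sketch below is purely illustrative of that chain: it is written in Python with OpenCV and pytesseract rather than taken from the thesis's own program code (which is supplied on the accompanying CD), and the kernel size, thresholding method, input file name and Tesseract options are assumptions, not values from the thesis.

# Illustrative sketch only: enhance a captured dial image and run character
# recognition, loosely following the stages named in the abstract and in
# Chapters 4-5. OpenCV + pytesseract stand in for the thesis's own code;
# every parameter value below is an assumption, not taken from the thesis.
import cv2
import pytesseract

def read_meter_dials(image_path: str) -> str:
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Image enhancement: suppress noise with a median filter and spread the
    # intensity range (cf. histogram modification, Section 5.3).
    img = cv2.medianBlur(img, 3)      # assumed 3x3 kernel
    img = cv2.equalizeHist(img)

    # Thresholding separates the printed digits from the dial background.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Character recognition with Tesseract, restricted to digits.
    return pytesseract.image_to_string(
        binary,
        config="--psm 7 -c tessedit_char_whitelist=0123456789").strip()

print(read_meter_dials("meter_dials.png"))   # hypothetical input image

In the thesis itself the dial area is first located by rectangle detection and the individual digits are isolated by region growing before recognition (Sections 4.3 to 4.5); those steps are omitted here for brevity.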
Samevatting (Afrikaans summary)

It is important for municipalities to have accurate water consumption figures so that they can send accurate accounts to high volume water users. At present a person physically visits every meter to obtain meter readings. This is, however, very inefficient in terms of time and money, and the method is also very prone to errors. In order to improve on this system, an image based telemetry system was designed that can be fitted to currently installed high volume water meters. Images of the meters are sent to a central server, where they are processed and the image quality is improved. Character recognition software is used to obtain the meter readings from the enhanced images. Tests showed that characters can be recognised with 100% accuracy in cases for which the character recognition software was trained, and with 70% accuracy in cases for which it was not trained. An overall recognition accuracy of 85% was thus achieved. These results can be improved in future by analysing the results statistically and exploiting the inherent heuristic information of the meter digits. In conclusion, the thesis demonstrated the feasibility of the approach and indicated a way forward for future work.

Acknowledgements

I would like to express my sincere gratitude towards the following people:

• My promoter, Dr. R. Wolhuter, for his guidance and insight as well as the many hours spent reviewing my work.
• Prof. J. du Preez for his insights and invaluable suggestions.
• My friends and family for all their encouragement throughout this project.

Contents

Declaration
Contents
List of Figures
List of Tables
List of Abbreviations

1 Introduction
  1.1 Motivation
  1.2 Objectives
  1.3 Operational Contributions
  1.4 Overview

2 Proposed Methodology
  2.1 Introduction
  2.2 Hardware
  2.3 Software
  2.4 Character Recognition
  2.5 Conclusion

3 Literature Study
  3.1 Introduction
  3.2 Bulk Water Meters
  3.3 Character Recognition Software
  3.4 Tesseract In Depth
  3.5 Conclusion

4 Implementation Outline
  4.1 Introduction
  4.2 Image Procurement
  4.3 Rectangle Detection
  4.4 Image Enhancement
  4.5 Character Extraction
  4.6 Character Recognition
  4.7 Conclusion

5 Detailed Work
  5.1 Introduction
  5.2 Image Capturing
  5.3 Histogram Modification
  5.4 Region Growing
  5.5 Centreline Approximation
  5.6 Region Filtering
  5.7 Conclusion

6 Implementation Problems
  6.1 Introduction
  6.2 Rectangle Detection
  6.3 Image Enhancement
  6.4 Character Extraction
  6.5 Conclusion

7 Experimental Investigation
  7.1 Introduction
  7.2 Motivation
  7.3 Setup
  7.4 Test 1: Median Filter
  7.5 Test 2: Histogram Normalisation
  7.6 Test 3: Tesseract Accuracy I
  7.7 Test 4: Tesseract Accuracy II
  7.8 Conclusion

8 Conclusion
  8.1 Conclusion
  8.2 Overview of the Project and Operational Contributions
  8.3 Future Work

Appendices
  A Program Code (CD)

List of References
List of Figures

3.1 The Proline Promag 10H digital flow meter.
3.2 The OPTO Pulser installed on a bulk water meter.
4.1 The process of transmitting the image from the bulk water meter to the central server.
4.2 The process of capturing an image and uploading it from the device to an FTP server.
4.3 High contrast border around the meter dials.
4.4 Cropped image after rectangle detection.
4.5 Example of a 3x3 median filter. The m indicates the median value.
4.6 Image before applying smoothing.
4.7 Image after applying smoothing.
4.8 Image before applying thresholding.
4.9 Image after applying thresholding.
4.10 Meter dials with two region growing lines illustrated, as well as the regions found.
4.11 Image before extracting numbers.
4.12 Reconstructed image.
4.13 A part of the training image used for Tesseract.
5.1 A block diagram showing how the COMedia UART camera is connected to the Wavecom processor.
5.2 The physical setup of the block diagram in figure 5.1.
5.3 A close-up image of the development board.
5.4 A typical command exchange between the controller and the camera.
5.5 The image and image histogram before normalisation.
5.6 The image and image histogram after normalisation.
5.7 How pixels adjacent to the centre pixel are numbered.
5.8 How the initial pixel is determined, the first step.
5.9 How the initial pixel is determined, the second step.
5.10 Image showing the spacing between regions and the widths of regions.
5.11 Centrelines illustrated.
5.12 The meter dials after thresholding in case 1.
5.13 The meter dials after thresholding in case 2.
5.14 The meter dials after thresholding in case 3.
5.15 The meter dials after thresholding in case 4.
5.16 The meter dials after thresholding in case 5.
6.1 Example of number extraction with the Radon transform.
7.1 The device setup for capturing images.
7.2 A thresholded image where the median filter was applied in the previous step.
7.3 A thresholded image where the median filter was not applied in the previous step.

List of Tables

4.1 Half-numbers and the corresponding letters that Tesseract outputs when the number is found.
5.1 The estimated implementation costs.
7.1 Results from case 1. Dial images and character recognition outputs are displayed.
7.2 Results from case 2. Dial images and character recognition outputs are displayed.
7.3 Results from case 3. Dial images and character recognition outputs are displayed.
7.4 Results from case 4. Dial images and character recognition outputs are displayed.
7.5 Results from case 3, version 2. Dial images and character recognition outputs are displayed.
7.6 Results from case 4, version 2. Dial images and character recognition outputs are displayed.

List of Abbreviations

CMOS   Complementary Metal Oxide Semiconductor
FTP    File Transfer Protocol
GSM    Global System for Mobile Communications
GUI    Graphical User Interface
LED    Light Emitting Diode
OCR    Optical Character Recognition
OpenCV Open Source Computer Vision
UART   Universal Asynchronous Receiver/Transmitter

Chapter 1

Introduction

1.1 Motivation

The project was prompted by a request from the Cape Town Municipality for a more efficient way to obtain bulk water meter readings. Accurate meter readings are required by the Municipality in order to bill water usage accurately, especially where large quantities of water are involved.
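As a closing note on the accuracy figures quoted in the abstract (100% for trained cases, 70% for untrained cases, 85% overall), the short sketch below shows one straightforward way of computing per-character recognition accuracy of the kind evaluated in the Chapter 7 tests. The readings and misrecognitions used here are invented purely for illustration and are not the thesis's test data, and the unweighted averaging over the two groups is only one possible way of arriving at an overall figure, chosen because it is consistent with the 85% quoted above.

# Illustrative only: per-character recognition accuracy for trained and
# untrained cases, in the spirit of the abstract's 100% / 70% / 85% figures.
# All readings below are invented for illustration, not thesis test data.

def character_accuracy(expected: str, recognised: str) -> float:
    """Fraction of dial characters recognised correctly, position by position."""
    matches = sum(e == r for e, r in zip(expected, recognised))
    return matches / len(expected)

def mean_accuracy(cases):
    return sum(character_accuracy(e, r) for e, r in cases) / len(cases)

# (expected reading, recognised reading) -- hypothetical examples
trained_cases   = [("00471", "00471"), ("13580", "13580")]
untrained_cases = [("41207", "41Z07"), ("90318", "9O3l8")]   # typical digit/letter confusions

acc_trained   = mean_accuracy(trained_cases)     # 1.00
acc_untrained = mean_accuracy(untrained_cases)   # 0.70
overall = (acc_trained + acc_untrained) / 2      # 0.85 with an unweighted average

print(f"trained:   {acc_trained:.0%}")
print(f"untrained: {acc_untrained:.0%}")
print(f"overall:   {overall:.0%}")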