A Study on the Accuracy of OCR Engines for Source Code Transcription from Programming Screencasts


Abdulkarim Khormi¹,², Mohammad Alahmadi¹,³, Sonia Haiduc¹
¹Florida State University, Tallahassee, FL, United States
{khormi, alahmadi, shaiduc}@cs.fsu.edu
²Jazan University, Jizan, Saudi Arabia
³University of Jeddah, Jeddah, Saudi Arabia

ABSTRACT
Programming screencasts can be a rich source of documentation for developers. However, despite the availability of such videos, the information available in them, and especially the source code being displayed, is not easy to find, search, or reuse by programmers. Recent work has identified this challenge and proposed solutions that identify and extract source code from video tutorials in order to make it readily available to developers or other tools. A crucial component in these approaches is the Optical Character Recognition (OCR) engine used to transcribe the source code shown on screen. Previous work has simply chosen one OCR engine, without consideration for its accuracy or that of other engines on source code recognition. In this paper, we present an empirical study on the accuracy of six OCR engines for the extraction of source code from screencasts and code images. Our results show that the transcription accuracy varies greatly from one OCR engine to another and that the most widely chosen OCR engine in previous studies is by far not the best choice. We also show how other factors, such as font type and size, can impact the results of some of the engines. We conclude by offering guidelines for programming screencast creators on which fonts to use to enable a better OCR recognition of their source code, as well as advice on OCR choice for researchers aiming to analyze source code in screencasts.

CCS CONCEPTS
• Software and its engineering → Documentation; • Computer vision → Image recognition;

KEYWORDS
Optical Character Recognition, Code Extraction, Programming video tutorials, Software documentation, Video mining

ACM Reference Format:
Abdulkarim Khormi, Mohammad Alahmadi, Sonia Haiduc. 2020. A Study on the Accuracy of OCR Engines for Source Code Transcription from Programming Screencasts. In 17th International Conference on Mining Software Repositories (MSR '20), October 5–6, 2020, Seoul, Republic of Korea. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3379597.3387468

1 INTRODUCTION
Nowadays, programmers spend 20-30% of their time online seeking information to support their daily tasks [6, 11]. Among the plethora of online documentation sources available to them, programming screencasts are one source that has been rapidly growing in popularity in recent years [17]. For example, YouTube alone hosts vast amounts of videos that cover a large variety of programming topics and have been watched billions of times. However, despite their widespread availability, programming screencasts present some challenges that need to be addressed before they can be used by developers to their full potential. One important limitation is the lack of support for code reuse. Given that copy-pasting code is the most common action performed by developers online [6], programming screencasts should also strive to support this developer behavior. However, very few programming screencast creators make their source code available alongside the video, and online platforms like YouTube do not currently have support for automatic source code transcription. Therefore, developers often have no choice but to manually transcribe the code they see on the screen into their project, which is not only time consuming, but also cumbersome, as it requires pausing-and-playing the video numerous times [23]. Researchers have recently recognized this problem and proposed novel techniques to automatically detect, locate, and extract source code from programming tutorials [1, 2, 14, 19, 20, 24, 32, 34].

While different approaches to achieve these goals were proposed, they all have one component in common: an Optical Character Recognition (OCR) engine, which is used to transcribe the source code found in a frame of the screencast into text. To ensure the most accurate extraction of source code from programming screencasts, one needs to carefully choose the OCR engine that performs the best specifically on images containing code. However, no empirical evaluation currently exists on the accuracy of OCR engines for code transcription. Previous work that analyzed programming screencasts [14, 25, 32] simply picked one OCR engine, namely Tesseract¹, and applied it to programming screencast frames. However, without validating its accuracy or comparing it to other OCR engines on the task of code transcription, there is no evidence that Tesseract is the best choice for transcribing source code. Choosing a suboptimal OCR engine can lead to noise that is difficult to correct, and therefore code that is hard to reuse without expensive post-processing steps or manual fixing. To this end, there is a need for an empirical evaluation that examines the accuracy of different OCR engines for extracting source code from images, with the goal of finding the engine with the lowest risk of code transcription error.

In this paper, we address this need and present a set of experiments evaluating six OCR engines (Google Drive OCR, ABBYY FineReader, GOCR, Tesseract, OCRAD, and Cuneiform) applied to images of code written in three programming languages: Java, Python, and C#. We evaluate the OCR engines on two code image datasets. The first one contains 300 frames extracted from 300 YouTube programming screencasts (100 videos per programming language, one frame per video). The evaluation on this dataset is relevant to researchers working on the analysis of programming screencasts and software developers wanting to reuse code from screencasts. The experiments on this dataset allow us to determine the best OCR engine for source code extraction from screencasts, therefore reducing the risk of error in the OCR results and increasing the potential for reuse of the extracted code. The second dataset is composed of 3,750 screenshots of code (1,250 per programming language) written in an IDE using different font types and font sizes. The code was extracted from a variety of GitHub projects, then locally opened in a popular IDE, where a screenshot of the code region was taken. This dataset, unlike the first one, allowed us to manipulate the font used to write the code in the IDE and change its size and type, in order to observe how these factors impact the accuracy of OCR engines. The results on this dataset are relevant for creators of programming screencasts, revealing font types and sizes that enable better OCR on the code in their screencasts. At the same time, researchers can use these results as guidelines for identifying screencasts that lend themselves to OCR the best.

Our results show that there are significant differences in accuracy between OCR engines. Google Drive OCR (Gdrive) and ABBYY FineReader performed the best, while the default choice in previous work, Tesseract, was never in the top three best performing OCR engines. We also found that applying OCR on code written using the DejaVu Sans Mono font produced the best results compared to the other fonts.

¹ https://github.com/tesseract-ocr
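For illustration only (this is not the authors' tooling), a single frame can be grabbed from a downloaded screencast with OpenCV and saved as an image that an OCR engine can consume; the file names and the one-minute timestamp below are hypothetical choices.

    # Illustrative sketch: extract one frame from a downloaded programming
    # screencast so it can be fed to an OCR engine. Requires OpenCV
    # (pip install opencv-python); "video.mp4" and the 60-second mark are
    # hypothetical, not values used in the paper.
    import cv2

    def grab_frame(video_path: str, at_seconds: float, out_path: str) -> bool:
        cap = cv2.VideoCapture(video_path)
        if not cap.isOpened():
            return False
        cap.set(cv2.CAP_PROP_POS_MSEC, at_seconds * 1000)  # seek to the desired timestamp
        ok, frame = cap.read()                              # decode a single frame
        cap.release()
        if ok:
            cv2.imwrite(out_path, frame)                    # save it as an image for OCR
        return ok

    if __name__ == "__main__":
        grab_frame("video.mp4", 60.0, "frame.png")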
OCR has also been recently adopted by researchers in software engineering for extracting source code from programming video tutorials [14, 24, 32]. It brings great promise in this area of research, allowing the text found in programming tutorials, including the source code, to be extracted, indexed, searched, and reused. While researchers in software engineering have only used one OCR engine thus far (i.e., Tesseract), there are several other OCR engines that use different computer vision algorithms, leading to high-quality text extraction and real-time performance. In this paper, we investigate and compare six well-known OCR engines: Google Docs OCR, ABBYY FineReader, GOCR, Tesseract, OCRAD, and Cuneiform.

Google Drive² is a file storage and synchronization service developed by Google, which interacts with Google Docs, a cloud service allowing users to view, edit, and share documents. Google Drive provides an API, which allows converting images into text using the Google Docs OCR engine. The text extraction process is performed through a cloud vision API that Google uses in the back-end with a custom machine learning model in order to yield high accuracy results.
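As a rough sketch of how this workflow could be scripted (assuming google-api-python-client and previously obtained OAuth credentials; the paper does not state the exact parameters the authors used), an image can be imported into Google Docs through the Drive v3 API, which triggers OCR, and the resulting document can then be exported as plain text:

    # Hedged sketch: OCR an image by importing it as a Google Doc via the
    # Drive v3 API, then exporting the document as plain text. The helper name,
    # file names, and language hint are illustrative assumptions.
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    def google_docs_ocr(image_path: str, creds) -> str:
        drive = build("drive", "v3", credentials=creds)
        meta = {"name": "ocr-temp",
                "mimeType": "application/vnd.google-apps.document"}  # convert (OCR) on import
        media = MediaFileUpload(image_path, mimetype="image/png")
        doc = drive.files().create(body=meta, media_body=media,
                                   ocrLanguage="en", fields="id").execute()
        try:
            text = drive.files().export(fileId=doc["id"],
                                        mimeType="text/plain").execute()
        finally:
            drive.files().delete(fileId=doc["id"]).execute()  # remove the temporary document
        return text.decode("utf-8")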
ABBYY FineReader³ is a commercial OCR software that offers intelligent text detection for digital documents. It can process digital documents of many types, such as PDF or, in our case, a video frame containing source code. FineReader uses an advanced algorithm to preserve the layout of the image during the text extraction process. It offers a paid plan, in which users can create a new pattern to train the engine. However, we used the built-in default patterns of the free trial version of FineReader 12 in our work. FineReader offers a software development kit (SDK)⁴ to facilitate the OCR extraction process for developers. We used the SDK in our study to enable the efficient processing of our datasets.

GOCR⁵ is a free OCR engine that converts images of several types into text.
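The four open-source engines in the study (Tesseract, GOCR, OCRAD, and Cuneiform) all expose command-line interfaces, so a comparison like the one in this paper can be scripted around them. The sketch below is only an illustration: the flags are typical defaults that may differ between versions, GOCR and OCRAD expect PNM-family images (hence the ImageMagick conversion), the input file name is hypothetical, and Google Docs OCR and ABBYY FineReader are omitted because they are accessed through a cloud API and an SDK, respectively.

    # Hedged sketch: run four open-source OCR engines on the same code image
    # and collect their transcriptions for comparison. Assumes the tools and
    # ImageMagick are installed; "frame.png" is a hypothetical input.
    import subprocess

    def run(cmd: list[str]) -> str:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    def transcribe_all(image: str = "frame.png") -> dict[str, str]:
        ppm = image.rsplit(".", 1)[0] + ".ppm"
        subprocess.run(["convert", image, ppm], check=True)    # ImageMagick: PNG -> PPM for GOCR/OCRAD
        results = {
            "tesseract": run(["tesseract", image, "stdout"]),  # print transcription to stdout
            "gocr": run(["gocr", ppm]),
            "ocrad": run(["ocrad", ppm]),
        }
        subprocess.run(["cuneiform", "-o", "cuneiform_out.txt", image], check=True)
        with open("cuneiform_out.txt", encoding="utf-8", errors="replace") as f:
            results["cuneiform"] = f.read()
        return results

    if __name__ == "__main__":
        for engine, text in transcribe_all().items():
            print(f"--- {engine} ---\n{text}")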