A Comparison of OCR Methods on Natural Images in Different Image Domains

DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2020

A comparison of OCR methods on natural images in different image domains
Swedish title: En jämförelse av OCR-metoder i olika domäner

AGNES FORSBERG
MELVIN LUNDQVIST

KTH Royal Institute of Technology
School of Electrical Engineering and Computer Science
Degree Project in Computer Science, DD142X
Date: June 8, 2020
Supervisor: Kevin Smith
Examiner: Pawel Herman

Abstract

Optical character recognition (OCR) is a blanket term for methods that convert printed or handwritten text into machine-encoded text. As the digital world keeps growing, the amount of digital images containing text increases, and so does the need for OCR methods that can handle more than plain text documents. There are OCR engines that can convert images of clean documents with a recognition rate of over 99%. OCR for natural images is receiving more and more attention, but because natural images can be far more diverse than plain text documents, they also introduce complications. To combat these issues, it needs to be clear in which areas today's OCR methods struggle. This thesis aims to answer this by testing three popular, readily available OCR methods on a dataset comprised only of natural images containing text. The results show that one of the methods, GOCR, cannot handle natural images, as its test results were very far from correct. For the other two methods, ABBYY FineReader and Tesseract, the results were better, but they also show that there is still a long way to go, especially when it comes to images with special fonts. However, when the images are less complicated, some of the methods performed above our expectations.

Sammanfattning

Optical character recognition (OCR) är en samlingsterm för metoder som konverterar tryckt eller handskriven text till maskinkod. När den digitala världen växer så växer även antalet digitala bilder med text, och även behovet av OCR-metoder som kan hantera mer än vanliga textdokument. Det finns idag OCR-motorer som kan konvertera bilder av rena dokument till maskinkod med över 99% korrekthet. OCR för fotografier får mer och mer uppmärksamhet, men eftersom fotografier har mycket större mångfald än rena textdokument leder detta också till problem. För att hantera detta krävs klarhet inom vilka områden dagens OCR-metoder har problem. Denna uppsats ämnar svara på denna fråga genom att undersöka och testa tre populära, enkelt tillgängliga OCR-metoder på ett dataset som endast innehåller fotografier av naturliga miljöer med text. Resultaten visade att en av metoderna, GOCR, inte kan hantera fotografier; GOCR:s testresultat var långt från det korrekta. För de andra metoderna, ABBYY FineReader och Tesseract, var resultaten bättre men visade att det fortfarande finns mycket arbete att göra inom området, särskilt när det kommer till bilder med speciella typsnitt. När det däremot kommer till bilder som är mindre komplicerade blev vi förvånade över hur bra resultatet var för några av metoderna.

Contents

1 Introduction
  1.1 Problem statement
  1.2 Scope
  1.3 Hypothesis
  1.4 Outline
2 Background
  2.1 Optical character recognition
  2.2 Tesseract
  2.3 ABBYY FineReader Engine
  2.4 GOCR
3 Method
  3.1 The NEOCR dataset
  3.2 Image domains
  3.3 Image filtering
  3.4 Experiment
4 Results
  4.1 Font
  4.2 Texture
  4.3 Arrangement
  4.4 Contrast
  4.5 Blurriness
  4.6 Comparative study
5 Discussion
  5.1 Method discussion
  5.2 Discussion of results
  5.3 Future research
  5.4 Ethical and sustainability considerations
  5.5 Societal considerations
  5.6 Conclusion
References

1 Introduction

Optical character recognition (OCR) is the process of converting images of typewritten, handwritten or printed text into editable machine-encoded text [7]. The first true OCR machine was installed in 1954 and was used to read typewritten sales reports [4]. As the digital world grows and the amount of digital images containing text increases, the potential use of OCR technology is expanding. Nowadays there are many capable OCR engines for converting pictures of clean documents into editable text that can be used by computer software, some with an over 99% recognition rate [6]. OCR is therefore very helpful for converting physical office documents into digital ones, but it is far from faultless in scenarios such as recognising text in natural scene images [7]. Recognising text in real-world images is getting more and more attention, but it comes with numerous complications, since text in natural images is far more diverse than in plain text documents [6].

1.1 Problem statement

OCR engines are very complex and consist of several steps. To develop OCR technology for natural images further, it must be clear in which cases OCR methods perform poorly, in order to know what should be the focus. Therefore, the goal of this study is to evaluate the current state of three different OCR methods and identify possible scenarios in which each respective method demonstrates a lower recognition rate, by answering the following question:

How accurate are the OCR methods Tesseract, ABBYY FineReader and GOCR on natural images, overall and in different domains?

1.2 Scope

The study is solely comparative and will not take into account the accessibility or computational cost of the OCR methods.

The study is restricted to investigating the performance of the three popular off-the-shelf OCR methods Tesseract (version 4.0.0-beta.1)¹, ABBYY FineReader Engine (version 12 for Linux)² and GOCR (version 0.52-20181015)³. The experiment will use the accessible software of the OCR methods and will not make use of additional support, such as language input, training or neural networks. The string distance function Levenshtein distance will be used to evaluate the performance. Furthermore, the OCR methods will only be evaluated on their overall performance and when applied to data in the presence of the domains (confounding factors) font, texture, arrangement, contrast and blurredness (from here on referred to as blurriness).

¹ https://tesseract-ocr.github.io/tessdoc/Home.html
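As a concrete illustration of this Levenshtein-based evaluation, the sketch below computes a character-level recognition score from the distance between an OCR output and its ground-truth annotation. It is only a sketch: the function names are ours, and normalising the distance by the length of the ground truth is an assumption made for illustration, not necessarily the exact formula used in the experiment (see section 3.4).

```python
# Sketch: character-level recognition accuracy from Levenshtein distance.
# Normalising by the ground-truth length is an assumption for illustration.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                    # deletion
                            curr[j - 1] + 1,                # insertion
                            prev[j - 1] + (ca != cb)))      # substitution
        prev = curr
    return prev[-1]

def recognition_accuracy(ocr_output: str, ground_truth: str) -> float:
    """1.0 means a perfect match; 0.0 means the output is entirely wrong."""
    if not ground_truth:
        return 1.0 if not ocr_output else 0.0
    dist = levenshtein(ocr_output, ground_truth)
    return max(0.0, 1.0 - dist / len(ground_truth))

if __name__ == "__main__":
    # Two substitution errors out of twelve characters -> roughly 0.83.
    print(recognition_accuracy("Sa1e 50% oft", "Sale 50% off"))
```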
All images for the study are included in the dataset NEOCR version 1.0 and will be filtered for specific domains. For more specifics on the characteristics of the images in NEOCR, see section 3.1.

1.3 Hypothesis

GOCR seems to be a rather simple OCR method. In terms of results, we believe this will be a disadvantage on complex images of natural environments. ABBYY FineReader commercially appears to be a tool for digital documents only; however, the SDK engine used for these experiments supposedly has support for text recognition in images as well. It is also used in many large-scale applications⁴. Because of this, we expect ABBYY FineReader to have better accuracy than the other methods. Tesseract is open source but maintained by Google, and it is one of the most used systems in the world. However, we do not know how well it performs on natural images. We expect it to have good accuracy, but not as good as the ABBYY FineReader Engine.

In addition, we expect all methods to perform better on images with standard fonts, horizontal arrangement and high contrast, while performing worse on images with heavy texture and blurriness, since text in these types of images is more difficult to distinguish for the eye and therefore, hypothetically, also for OCR methods.

² https://abbyy.technology/en:products:fre:linux
³ https://www-e.ovgu.de/jschulen/ocr/download.html
⁴ https://www.abbyy.com/en-gb/case-studies/?product=3250

1.4 Outline

The next section presents a more detailed background required for the comprehension of the rest of the report, drawing on previous studies. Section three describes how the experiment was carried out: it explains how the images in the dataset were filtered and gives a technical specification of how the OCR tools were accessed and how their outputs were retrieved and processed. The fourth section presents and compares the results of the experiment. The fifth section discusses the course of action and the reliability and usefulness of the results, both for answering the research question and for future research. Finally, the conclusion of the study is presented.

2 Background

In this section a theoretical background for the area of study is given. Specifically, OCR technology is introduced further, and the three OCR methods Tesseract, ABBYY FineReader and GOCR are described.

2.1 Optical character recognition

Optical character recognition (OCR) uses technology to distinguish printed or handwritten text characters in digital images. It was originally invented as a tool that read text out loud for the blind or visually impaired. Nowadays it is most frequently used to transform historic documents and books into PDFs. OCR methods use algorithms
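To make the notion of an off-the-shelf OCR engine more concrete, the sketch below shows one way the two freely available tools from section 1.2 can be invoked from Python through their command-line interfaces. This is an illustration under stated assumptions rather than the procedure used in the experiment: it presumes the tesseract and gocr binaries are installed and on the system path, the image file names are hypothetical, and ABBYY FineReader Engine is left out because it is distributed as an SDK rather than a command-line tool.

```python
# Sketch: obtaining raw text output from two of the off-the-shelf engines.
# Assumes the `tesseract` and `gocr` binaries are installed and on PATH;
# the image file names are hypothetical.
import subprocess

def run_tesseract(image_path: str) -> str:
    # Using "stdout" as the output base name makes tesseract print
    # the recognised text to standard output.
    result = subprocess.run(["tesseract", image_path, "stdout"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def run_gocr(image_path: str) -> str:
    # GOCR reads PNM/PGM/PBM images; other formats may need converting first.
    result = subprocess.run(["gocr", image_path],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_tesseract("sign.png"))
    print(run_gocr("sign.pnm"))
```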