Recognition of Printed and Handwritten Kannada Characters Using SVM Classifier

Total Pages: 16

File Type: pdf, Size: 1020 KB

Recognition of Printed and Handwritten Kannada Characters using SVM Classifier

Urmila B. Basarkod (P.G. Student) and Shivanand Patil (Assistant Professor)
Department of Computer Science and Engineering, KLE Dr. M. S. Sheshgiri College of Engineering and Technology, Udyambag, Belagavi, Karnataka, India
E-mail: [email protected]
Journal of VLSI Design and Signal Processing, Volume 4, Issue 2, Pages 1-9, © MAT Journals 2018

Abstract
Optical character recognition (OCR) is the process of converting a scanned text image into a computer-editable format, and one of the major challenges it faces is recognising the characters in the image. The proposed system is application software for recognising printed and handwritten Kannada characters in an image. The input image is pre-processed to make it noise free using a median filter and is then converted to a binary image. Segmentation extracts one character at a time by performing horizontal segmentation followed by vertical segmentation. The correlation coefficient is used to extract features from the image, the character is then classified using an SVM classifier, and finally the classified character is post-processed using its Unicode value to display the recognised character. We obtained recognition accuracies of 100% for printed and 99% for handwritten Kannada characters.

Keywords: Kannada script; optical character recognition; median filter; local binary pattern; correlation coefficient; SVM classifier

INTRODUCTION
In the present scenario a large amount of information is stored in images, so character recognition has gained significant popularity and has emerged as an active area of research. Image processing is mainly concerned with extracting valuable information from images. Recognition begins by identifying the text in an image; this process of identifying and recognising text is called optical character recognition (OCR). OCR is the mechanical or electronic conversion of printed or handwritten text images into a form that a computer can manipulate. It involves five stages of processing: pre-processing, segmentation, feature extraction, classification and post-processing, and the result of each stage depends on the result of the previous stage.

Kannada is a Dravidian language spoken predominantly by the people of Karnataka and is the official language of the state. The Kannada script consists of 49 letters in three categories: 13 swaras (vowels), 34 vyanjanas (consonants) and 2 yogavahakas.

Fig. 1: Kannada letters

Some languages of Asia, such as Japanese and Chinese, and many European languages already have OCR systems. However, less effort has been put into OCR for Indian languages, particularly South Indian languages such as Kannada. This work therefore concentrates on the recognition of Kannada characters from an image.

LITERATURE SURVEY
In [1] the author provides a general method for recognising characters from an image. Recognising text from an image is a step-by-step process: pre-processing performs basic processing of the input image, such as binarisation to convert a grayscale image into a binary image and removal of noise; segmentation divides the image into lines and then into characters; feature extraction computes the characteristics of the image; classification compares characters against a database; and post-processing comes last. The paper provides a brief survey on the extraction of characters from given images. In [3] it is observed that images usually contain some noise, and this noise degrades image quality, so making an image noise free is important in order to maximise its quality; the process of removing noise or improving image quality is called de-noising. The author discusses the kinds of noise that can be present in an image and some de-noising techniques that can be used to minimise them. In [5] recognition of text from an image involves five steps, among which classification is one of the necessary steps, and the efficiency and accuracy of text recognition depend heavily on the classification phase. The paper gives beginners a brief account of classifiers, explaining Support Vector Machine (SVM), Artificial Neural Network (ANN), Decision Tree (DT) and KNN classifiers, which are highly popular classifiers for recognising text in the field of image processing. In [6] the authors used an OCR system for handwritten Kannada and Telugu digit recognition based on zoning features, with the K-nearest neighbour classifier and the Support Vector Machine classifier applied independently. They considered an image containing a digit, divided it into 64 zones, computed the pixel density of each zone and applied the classifiers independently, obtaining accuracies of 95.50% and 96.22%, and 99.83% and 99.80%, for handwritten Kannada and Telugu digit recognition respectively. In [7] the author considered an image containing Kannada vowels or English characters, normalised it to 32*32, divided it into 64 zones and counted the pixel density of each zone; these features are given to KNN and SVM classifiers. For handwritten Kannada character recognition the average accuracies were 92.71% and 96.0% with KNN and SVM respectively, and for uppercase English alphabets 97.51% and 98.26% with KNN and SVM respectively. In [8] handwritten Kannada and English characters are recognised from an image based on spatial features and assigned to a KNN classifier. The algorithm depends on the type and size of the structuring element used for feature extraction, but is independent of image normalisation, thinning, noise and slant of the characters. It achieved average accuracies of 96.2% and 90.01% for handwritten numerals and English uppercase alphabets respectively.

PROPOSED SYSTEM
The aim of the proposed system is to recognise printed and handwritten Kannada characters from the considered image using optical character recognition. The process consists of the five steps shown in Fig. 2.

Fig. 2: System Architecture

Step 1: Pre-processing of the image
The image is converted from an RGB colour image into a grayscale image. A 2-D median filter is then used to filter the image, after which it is converted to a binary image of 0's and 1's using thresholding. BWareaopen is then used to remove noise from the binarised image, and after this the image is passed on for segmentation.
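The paper names MATLAB-style operations for this step (a 2-D median filter and BWareaopen) but gives no code. The following is a minimal sketch of an equivalent pre-processing pipeline, assuming OpenCV and scikit-image; the 3x3 filter size, the Otsu threshold and the minimum blob area are illustrative choices, not values taken from the paper.

```python
# Sketch of Step 1, assuming OpenCV + scikit-image stand in for the
# MATLAB-style medfilt2 / bwareaopen operations named in the paper.
import cv2
import numpy as np
from skimage.morphology import remove_small_objects

def preprocess(path, min_blob_area=20):
    img = cv2.imread(path)                         # input colour image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # RGB/BGR -> grayscale
    filtered = cv2.medianBlur(gray, 3)             # 2-D median filter (3x3, illustrative)
    # Threshold to a 0/1 image; Otsu picks the threshold, ink becomes foreground (1).
    _, binary = cv2.threshold(filtered, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Drop tiny connected components, similar in spirit to bwareaopen.
    cleaned = remove_small_objects(binary.astype(bool), min_size=min_blob_area)
    return cleaned.astype(np.uint8)                # binary image passed to segmentation
```

The cleaned binary image is what the segmentation step below would operate on.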
Step 2: Segmentation process
The pre-processed image is segmented horizontally and then vertically to extract single characters. Horizontal segmentation divides the image into lines; then, for each line, vertical segmentation divides the line into single characters that are independent of the other characters in the line. This process is repeated for all the characters in a line and for all the lines in the image.

Step 3: Feature Extraction
Feature extraction is one of the major steps, as it describes the characteristics of an image, and the accuracy of the classification/recognition depends on the result of feature extraction. To extract potential features from the considered image, the image is processed with LBP (local binary pattern) and normalised to a size of 42*24; after normalisation the correlation coefficient is used for extraction of text from the image. The correlation coefficient lies between -1 and 1: a value of 0 means there is no similarity, while a value of 1 means both images are exactly the same.
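A minimal sketch of how Steps 2 and 3 could be realised, assuming NumPy, OpenCV and scikit-image: projection profiles give the horizontal and vertical cuts, each character is normalised to 42*24, and the correlation coefficient against stored templates picks the closest match. The helper names, the LBP parameters and the template dictionary are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of Steps 2-3 (assumptions: NumPy/OpenCV/scikit-image;
# the projection-based cuts and the template dictionary are hypothetical).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def split_on_projection(binary, axis):
    """Return (start, end) runs of rows/columns that contain foreground pixels."""
    profile = binary.sum(axis=axis)
    nonzero = profile > 0
    runs, start = [], None
    for i, flag in enumerate(nonzero):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append((start, i)); start = None
    if start is not None:
        runs.append((start, len(nonzero)))
    return runs

def segment_characters(binary):
    """Horizontal segmentation into lines, then vertical segmentation into characters."""
    chars = []
    for top, bottom in split_on_projection(binary, axis=1):    # text lines
        line = binary[top:bottom, :]
        for left, right in split_on_projection(line, axis=0):  # characters in the line
            chars.append(line[:, left:right])
    return chars

def features(char_img, size=(24, 42)):
    """Normalise to 42x24 and compute an LBP map, as described in Step 3."""
    norm = cv2.resize(char_img.astype(np.uint8) * 255, size,
                      interpolation=cv2.INTER_NEAREST)
    return local_binary_pattern(norm, P=8, R=1, method="uniform")

def best_match(char_img, templates):
    """Pick the template whose correlation coefficient is highest (1 = identical)."""
    f = features(char_img).ravel()
    scores = {label: np.corrcoef(f, t.ravel())[0, 1] for label, t in templates.items()}
    return max(scores, key=scores.get)
```

Here np.corrcoef returns 1 for identical images and values near 0 when there is no similarity, matching the description above.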
Step 4: Classification
The SVM (Support Vector Machine) classifier is used to classify the characters. It separates two linearly separable classes by finding a hyperplane over the data points of the considered classes, and this hyperplane provides the proper classification of all data points. Consider two classes A and B: all points belonging to class A are labelled +1 and all points belonging to class B are labelled -1, as shown in the figure below.

Fig. 3: SVM Classification

Step 5: Post-Processing
Post-processing is the last step of this project. It takes the character that the SVM classifier has assigned to a class (for example class A) and assigns it a unique code value: the test character is compared with the ASCII codes or Unicode values of the Kannada characters, and the matched character is displayed on the screen.

Experimental Result
The experiment was carried out for printed and handwritten Kannada characters using SVM as the classifier. The results were obtained using 118 training examples and tested with 118 examples. The pictures below show result snapshots of the proposed system; the same procedure is followed for handwritten characters. Table 1 shows a comparison of different methods implemented by other authors with the proposed system.

Checking for printed characters:

Fig. 4: Selecting an image
Fig. 5: Performing gray scale conversion
Fig. 6: Performing binarisation
Fig. 7: Removing noise from the image
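To make Step 4 concrete, the fragment below trains a two-class SVM on points labelled +1 (class A) and -1 (class B), as in the description of Fig. 3. It is only a sketch assuming scikit-learn's SVC; the sample points and the linear kernel are illustrative, since the paper does not state its training setup.

```python
# Two-class SVM sketch for Step 4 (assumption: scikit-learn's SVC stands in
# for the paper's SVM classifier; the points below are toy data, not the
# authors' feature vectors).
import numpy as np
from sklearn.svm import SVC

X_train = np.array([[2.0, 3.0], [3.0, 3.5], [1.5, 2.5],    # class A samples (+1)
                    [7.0, 8.0], [8.0, 7.5], [7.5, 9.0]])   # class B samples (-1)
y_train = np.array([+1, +1, +1, -1, -1, -1])

clf = SVC(kernel="linear")        # separating hyperplane over the data points
clf.fit(X_train, y_train)
print(clf.predict([[2.5, 3.0]]))  # -> [1], i.e. the test point falls in class A
```

In the actual system the inputs would be the feature vectors produced in Step 3, with one class per Kannada character, rather than two-dimensional toy points.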
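Step 5 maps the classifier's decision to a code value and displays the matched character. The sketch below assumes a plain label-to-code-point lookup; the Kannada code points are real (U+0C85, U+0C95, U+0CAE), but the mapping itself is a hypothetical example rather than the authors' table.

```python
# Post-processing sketch for Step 5: map a predicted class index to a Kannada
# Unicode code point and display the recognised character (mapping is illustrative).
label_to_codepoint = {0: 0x0C85,   # ಅ  KANNADA LETTER A
                      1: 0x0C95,   # ಕ  KANNADA LETTER KA
                      2: 0x0CAE}   # ಮ  KANNADA LETTER MA

predicted_label = 1                # e.g. the class index returned by the SVM stage
print(chr(label_to_codepoint[predicted_label]))   # prints the recognised character: ಕ
```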
Recommended publications
  • UNICODE for Kannada General Information & Description
    UNICODE for Kannada (U+0C80 to U+0CFF): General Information & Description. Written by: C V Srinatha Sastry. Issued by: Director, Directorate of Information Technology, Government of Karnataka, Multi Storied Buildings, Vidhana Veedhi, BANGALORE 560 001, INDIA. Introduction: The Kannada script is a South Indian script. It is used to write the Kannada language of Karnataka State in India. It is also used in many parts of Tamil Nadu, Kerala, Andhra Pradesh and Maharashtra States of India. In addition, the Kannada script is also used to write the Tulu, Konkani and Kodava languages. Kannada, along with other Indian language scripts, shares a large number of structural features. The Kannada block of the Unicode Standard (0C80 to 0CFF) is based on ISCII-1988 (Indian Standard Code for Information Interchange). The Unicode Standard (version 3) encodes Kannada characters in the same relative positions as those coded in the ISCII-1988 standard. The writing system that employs the Kannada script constitutes a cross between syllabic writing systems and phonemic writing systems (alphabets). The effective unit of writing Kannada is the orthographic syllable consisting of a consonant (Vyanjana) and vowel (Swara) (CV) core and, optionally, one or more preceding consonants, with a canonical structure of ((C)C)CV. The orthographic syllable need not correspond exactly with a phonological syllable, especially when a consonant cluster is involved, but the writing system is built on phonological principles and tends to correspond quite closely to pronunciation. The orthographic syllable is built up of alphabetic pieces, the actual letters of the Kannada script.
  • Assessment of Options for Handling Full Unicode Character Encodings in MARC21 a Study for the Library of Congress
    1 Assessment of Options for Handling Full Unicode Character Encodings in MARC21 A Study for the Library of Congress Part 1: New Scripts Jack Cain Senior Consultant Trylus Computing, Toronto 1 Purpose This assessment intends to study the issues and make recommendations on the possible expansion of the character set repertoire for bibliographic records in MARC21 format. 1.1 “Encoding Scheme” vs. “Repertoire” An encoding scheme contains codes by which characters are represented in computer memory. These codes are organized according to a certain methodology called an encoding scheme. The list of all characters so encoded is referred to as the “repertoire” of characters in the given encoding schemes. For example, ASCII is one encoding scheme, perhaps the one best known to the average non-technical person in North America. “A”, “B”, & “C” are three characters in the repertoire of this encoding scheme. These three characters are assigned encodings 41, 42 & 43 in ASCII (expressed here in hexadecimal). 1.2 MARC8 "MARC8" is the term commonly used to refer both to the encoding scheme and its repertoire as used in MARC records up to 1998. The ‘8’ refers to the fact that, unlike Unicode which is a multi-byte per character code set, the MARC8 encoding scheme is principally made up of multiple one byte tables in which each character is encoded using a single 8 bit byte. (It also includes the EACC set which actually uses fixed length 3 bytes per character.) (For details on MARC8 and its specifications see: http://www.loc.gov/marc/.) MARC8 was introduced around 1968 and was initially limited to essentially Latin script only.
  • Proposal for a Kannada Script Root Zone Label Generation Ruleset (LGR)
    Proposal for a Kannada Script Root Zone Label Generation Ruleset (LGR) LGR Version: 3.0 Date: 2019-03-06 Document version: 2.6 Authors: Neo-Brahmi Generation Panel [NBGP] 1. General Information/ Overview/ Abstract The purpose of this document is to give an overview of the proposed Kannada LGR in the XML format and the rationale behind the design decisions taken. It includes a discussion of relevant features of the script, the communities or languages using it, the process and methodology used and information on the contributors. The formal specification of the LGR can be found in the accompanying XML document: proposal-kannada-lgr-06mar19-en.xml Labels for testing can be found in the accompanying text document: kannada-test-labels-06mar19-en.txt 2. Script for which the LGR is Proposed ISO 15924 Code: Knda ISO 15924 N°: 345 ISO 15924 English Name: Kannada Latin transliteration of the native script name: Native name of the script: ಕನ್ನಡ Maximal Starting Repertoire (MSR) version: MSR-4 Some languages using the script and their ISO 639-3 codes: Kannada (kan), Tulu (tcy), Beary, Konkani (kok), Havyaka, Kodava (kfa) 3. Background on Script and Principal Languages Using It 3.1 Kannada language Kannada is one of the scheduled languages of India. It is spoken predominantly by the people of Karnataka State of India. It is one of the major languages among the Dravidian languages. Kannada is also spoken by significant linguistic minorities in the states of Andhra Pradesh, Telangana, Tamil Nadu, Maharashtra, Kerala, Goa and abroad.
  • The Unicode Standard, Version 4.0--Online Edition
    This PDF file is an excerpt from The Unicode Standard, Version 4.0, issued by the Unicode Consortium and published by Addison-Wesley. The material has been modified slightly for this online edition, however the PDF files have not been modified to reflect the corrections found on the Updates and Errata page (http://www.unicode.org/errata/). For information on more recent versions of the standard, see http://www.unicode.org/standard/versions/enumeratedversions.html. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and Addison-Wesley was aware of a trademark claim, the designations have been printed in initial capital letters. However, not all words in initial capital letters are trademark designations. The Unicode® Consortium is a registered trademark, and Unicode™ is a trademark of Unicode, Inc. The Unicode logo is a trademark of Unicode, Inc., and may be registered in some jurisdictions. The authors and publisher have taken care in preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein. The Unicode Character Database and other files are provided as-is by Unicode®, Inc. No claims are made as to fitness for any particular purpose. No warranties of any kind are expressed or implied. The recipient agrees to determine applicability of information provided. Dai Kan-Wa Jiten used as the source of reference Kanji codes was written by Tetsuji Morohashi and published by Taishukan Shoten.
  • An Introduction to Indic Scripts
    An Introduction to Indic Scripts Richard Ishida W3C [email protected] HTML version: http://www.w3.org/2002/Talks/09-ri-indic/indic-paper.html PDF version: http://www.w3.org/2002/Talks/09-ri-indic/indic-paper.pdf Introduction This paper provides an introduction to the major Indic scripts used on the Indian mainland. Those addressed in this paper include specifically Bengali, Devanagari, Gujarati, Gurmukhi, Kannada, Malayalam, Oriya, Tamil, and Telugu. I have used XHTML encoded in UTF-8 for the base version of this paper. Most of the XHTML file can be viewed if you are running Windows XP with all associated Indic font and rendering support, and the Arial Unicode MS font. For examples that require complex rendering in scripts not yet supported by this configuration, such as Bengali, Oriya, and Malayalam, I have used non-Unicode fonts supplied with Gamma's Unitype. To view all fonts as intended without the above you can view the PDF file whose URL is given above. Although the Indic scripts are often described as similar, there is a large amount of variation at the detailed implementation level. To provide a detailed account of how each Indic script implements particular features on a letter by letter basis would require too much time and space for the task at hand. Nevertheless, despite the detail variations, the basic mechanisms are to a large extent the same, and at the general level there is a great deal of similarity between these scripts. It is certainly possible to structure a discussion of the relevant features along the same lines for each of the scripts in the set.
  • Supported Languages in Unicode SAP Systems Index Language
    Supported Languages in Unicode SAP Systems. Column headings in the source table: Language; Language Description; Country; Script; SAP Language Code; Code according to ISO 639-2; Code according to ISO 639-1; SAP Code Page (ASCII); Basis of SAP Code Page (ASCII); Non-SAP Code Page = Blended Code Page; Support in SAP systems; Translated as of Release; Additional Information. Download info: this list is constantly being revised; you can download the latest version of this file from SAP Note 73606 or from SDN -> /i18n. Every row in this excerpt is marked "Unicode only" for support, with the additional information "Read SAP Note 895560": Abkhazian (Georgia, Cyrillic, AB, abk, ab, code page 1500 / ISO8859-5); Achinese (AC, ace); Acoli (AH, ach); Adangme (AD, ada); Adygei (A1, ady); Afar (Djibouti, Eritrea and Ethiopia, Latin, AA, aar, aa); Afrihili (AI, afh); Afrikaans (South Africa, Botswana, Malawi, Namibia and Zambia, Latin, AF, afr, af, code page 1100 / ISO8859-1, releases 6100 and 6500).
  • Information, Characters, Unicode
    Information, Characters, Unicode. Hidden Moral: Small mistakes can be catastrophic! Style: Care about every character of your program. Tip: printf — Care about every character in the program's output. (Be reasonably tolerant and defensive about the input. "Fail early" and clearly.) Imperative: Thou shalt care about every Ěaracter in your program. Imperative: Thou shalt know every Ěaracter in the input. Thou shalt care about every Ěaracter in your output. Information – Characters: In modern computing, natural-language text is very important information. ("number-crunching" is less important.) Characters of text are represented in several different ways and a known character encoding is necessary to exchange text information. For many years an important encoding standard for characters has been US ASCII – a 7-bit encoding. Since 7 does not divide 32, the ubiquitous word size of computers, 8-bit encodings are more common. Very common is ISO 8859-1 aka "Latin-1," and other 8-bit encodings of character sets for languages other than English. Currently, a very large multi-lingual character repertoire known as Unicode is gaining importance. US ASCII (7-bit), or bottom half of Latin1: NUL SOH STX ETX EOT ENQ ACK BEL BS HT LF VT FF CR SO SI DLE DC1 DC2 DC3 DC4 NAK SYN ETB CAN EM SUB ESC FS GS RS US !"#$%&'()*+,-./ 0123456789:;<=>? @ABCDEFGHIJKLMNO PQRSTUVWXYZ[\]^_ `abcdefghijklmno pqrstuvwxyz{|}~ DEL Control Characters: Notice that the first two rows are filled with so-called control characters.
  • Proposal for an Oriya Script Root Zone Label Generation Ruleset (LGR)
    Proposal for an Oriya Script Root Zone Label Generation Ruleset (LGR) LGR Version: 3.0 Date: 2019-03-06 Document version: 2.12 Authors: Neo-Brahmi Generation Panel [NBGP] 1 General Information/ Overview/ Abstract The purpose of this document is to give an overview of the proposed Root Zone Level Generation Rules for the Oriya script. It includes a discussion of relevant features of the script, the communities or languages using it, the process and methodology used and information on the contributors. The formal specification of the LGR can be found in the accompanying XML document: proposal-oriya-lgr-06mar19-en.xml Labels for testing can be found in the accompanying text document: oriya-test-labels-06mar19-en.txt 2 Script for which the LGR is proposed ISO 15924 Code: Orya ISO 15924 Key N°: 327 ISO 15924 English Name: Oriya (Odia) Latin transliteration of native script name: oṛiā Native name of the script: ଓଡ଼ିଆ Maximal Starting Repertoire (MSR) version: MSR-4 3 Background on Script and Principal Languages using it Oriya (amended later as Odia) is an Eastern Indic language spoken by about 40 million people (3,75,21,324 as per census 2011, http://censusindia.gov.in/2011Census/Language-2011/Statement-4.pdf) mainly in the Indian state of Orissa, and also in parts of West Bengal, Jharkhand, Chhattisgarh and Andhra Pradesh. Oriya (Odia) is one of the many official languages of India. It is the official language of Odisha, and the second official language of Jharkhand.
  • Globalization Support Oracle Unicode Database Support
    Globalization Support: Oracle Unicode database support. An Oracle White Paper, May 2005. Contents: Introduction; Requirement; What is UNICODE? (Supplementary Characters; Unicode Encodings: UTF-8 Encoding, UCS-2 Encoding, UTF-16 Encoding); Unicode and Oracle (AL24UTFFSS; UTF8; UTFE; AL32UTF8; AL16UTF16).
  • The Unicode Standard, Version 3.0, Issued by the Unicode Consor- Tium and Published by Addison-Wesley
    The Unicode Standard Version 3.0 The Unicode Consortium ADDISON–WESLEY An Imprint of Addison Wesley Longman, Inc. Reading, Massachusetts · Harlow, England · Menlo Park, California Berkeley, California · Don Mills, Ontario · Sydney Bonn · Amsterdam · Tokyo · Mexico City Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and Addison-Wesley was aware of a trademark claim, the designations have been printed in initial capital letters. However, not all words in initial capital letters are trademark designations. The authors and publisher have taken care in preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein. The Unicode Character Database and other files are provided as-is by Unicode®, Inc. No claims are made as to fitness for any particular purpose. No warranties of any kind are expressed or implied. The recipient agrees to determine applicability of information provided. If these files have been purchased on computer-readable media, the sole remedy for any claim will be exchange of defective media within ninety days of receipt. Dai Kan-Wa Jiten used as the source of reference Kanji codes was written by Tetsuji Morohashi and published by Taishukan Shoten. ISBN 0-201-61633-5 Copyright © 1991-2000 by Unicode, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher or Unicode, Inc.
  • Overview and Rationale
    Integration Panel: Maximal Starting Repertoire — MSR-4, Overview and Rationale. REVISION – November 09, 2018. Table of Contents: 1 Overview; 2 Maximal Starting Repertoire (MSR-4): 2.1 Files (2.1.1 Overview, 2.1.2 Normative Definition, 2.1.3 Code Charts), 2.2 Determining the Contents of the MSR, 2.3 Process of Deciding the MSR; 3 Scripts: 3.1 Comprehensiveness and Staging, 3.2 What Defines a Related Script?, 3.3 Separable Scripts, 3.4 Deferred Scripts, 3.5 Historical and Obsolete Scripts, 3.6 Selecting Scripts and Code Points for the MSR, 3.7 Scripts Appropriate for Use in Identifiers, 3.8 Modern Use Scripts (3.8.1 Common and Inherited, 3.8.2 Scripts included in MSR-1, 3.8.3 Scripts added in MSR-2, 3.8.4 Scripts added in MSR-3 or MSR-4, 3.8.5 Modern Scripts Ineligible for the Root Zone), 3.9 Scripts for Possible Future MSRs, 3.10 Scripts Identified in UAX#31 as Not Suitable for identifiers; 4 Exclusions of Individual Code Points or Ranges: 4.1 Historic and Phonetic Extensions to Modern Scripts, 4.2 Code Points That Pose Special Risks, 4.3 Code Points with Strong Justification to Exclude, 4.4 Code Points That May or May Not be Excludable from the Root Zone LGR, 4.5 Non-spacing Combining Marks; 5 Discussion of Particular Code Points: 5.1 Digits and Hyphen, 5.2 CONTEXT O Code Points, 5.3 CONTEXT J Code Points, 5.4 Code Points Restricted for Identifiers, 5.5 Compatibility with IDNA2003, 5.6 Code Points for Which the
  • Kannada Range: 0C80–0CFF
    Kannada Range: 0C80–0CFF This file contains an excerpt from the character code tables and list of character names for The Unicode Standard, Version 14.0 This file may be changed at any time without notice to reflect errata or other updates to the Unicode Standard. See https://www.unicode.org/errata/ for an up-to-date list of errata. See https://www.unicode.org/charts/ for access to a complete list of the latest character code charts. See https://www.unicode.org/charts/PDF/Unicode-14.0/ for charts showing only the characters added in Unicode 14.0. See https://www.unicode.org/Public/14.0.0/charts/ for a complete archived file of character code charts for Unicode 14.0. Disclaimer These charts are provided as the online reference to the character contents of the Unicode Standard, Version 14.0 but do not provide all the information needed to fully support individual scripts using the Unicode Standard. For a complete understanding of the use of the characters contained in this file, please consult the appropriate sections of The Unicode Standard, Version 14.0, online at https://www.unicode.org/versions/Unicode14.0.0/, as well as Unicode Standard Annexes #9, #11, #14, #15, #24, #29, #31, #34, #38, #41, #42, #44, #45, and #50, the other Unicode Technical Reports and Standards, and the Unicode Character Database, which are available online. See https://www.unicode.org/ucd/ and https://www.unicode.org/reports/ A thorough understanding of the information contained in these additional sources is required for a successful implementation.