Unicode Demystified: A Tutorial Introduction to the Unicode Standard


Unicode Demystified: A Tutorial Introduction to the Unicode Standard
Richard Gillam, Senior Developer, Trilogy Software, Inc.

Shameless Plug

The Code Page Problem
• Characters in most languages are traditionally represented by single-byte values
– Allows for 256 characters max
– Real limit for most encodings is 192 characters
– This includes letters, digits, punctuation, symbols
• When a system is used for a new language, the encoding has to be adapted to use that language's characters
• Encodings proliferate
– Each language or group of languages gets its own encoding
– Different vendors or standards committees devise different encodings, so generally each language has several, often incompatible, encodings

Multi-byte encodings
• Some languages (Chinese, Japanese, Korean, etc.) have more than 256 characters
• Encoding standards for these languages use sequences of bytes for many characters
– In many standards, not all characters are the same number of bytes
– Can't tell whether a given byte is a whole character or part of a character
– Corruption of one byte can corrupt the whole data stream

Interoperability problems
• Can't easily mix languages in a document or system
• Data is not tagged with its encoding, so loss can occur when transferring between systems
• Most encodings are ASCII-based, so problems are often not seen with English-only data
• Two possible solutions:
– Systematic tagging of textual data with an encoding ID
– A universal encoding standard with all languages' characters

Encoding space
• An ASCII character is 7 bits wide
• Most encodings press the eighth bit into service
• Early versions of Unicode used 16 bits
• Unicode now uses 21 bits, divided into a plane number, a row number, and a character number

Unicode
• The 21-bit encoding space allows for 1,114,112 characters
• 95,156 code point values are assigned to characters in Unicode 3.2
• 137,216 code point values are set aside for application use
• 2,114 code point values are set aside for non-character use
• 879,626 code point values are reserved for future character assignments

The Unicode Encoding Space
[Diagrams of the seventeen planes, numbered 0 through 10 hex: plane 0 is the Basic Multilingual Plane; planes 1 through 10 are the supplementary planes, with plane 1 the Supplementary Multilingual Plane, plane 2 the Supplementary Ideographic Plane, plane E the Supplementary Special-Purpose Plane, and planes F and 10 the Private Use Planes]

The Basic Multilingual Plane
[Chart of the BMP by row: 0–1 General Scripts Area; 2–3 Symbols Area and CJK Punctuation; 4–9 Han; A Yi; B–D Hangul; D Surrogates Area; E Private Use Area; F Compatibility Area]

The General Scripts Area
00/01: Latin
02/03: IPA, Diacriticals, Greek
04/05: Cyrillic, Armenian, Hebrew
06/07: Arabic, Syriac, Thaana
08/09: Devanagari, Bengali
0A/0B: Gurmukhi, Gujarati, Oriya, Tamil
0C/0D: Telugu, Kannada, Malayalam, Sinhala
0E/0F: Thai, Lao, Tibetan
10/11: Myanmar, Georgian, Hangul
12/13: Ethiopic, Cherokee
14/15: Canadian Aboriginal Syllabics
16/17: Ogham, Runic, Philippine, Khmer
18/19: Mongolian
1E/1F: Latin, Greek

Unicode Coverage
• European scripts
– Latin, Greek, Cyrillic, Armenian, Georgian, IPA
• Bidirectional (Middle Eastern) scripts
– Hebrew, Arabic, Syriac, Thaana
• Indic (Indian and Southeast Asian) scripts
– Devanagari, Bengali, Gurmukhi, Gujarati, Oriya, Tamil, Telugu, Kannada, Malayalam, Sinhala, Thai, Lao, Khmer, Myanmar, Tibetan, Philippine
• East Asian scripts
– Chinese (Han) characters, Japanese (Hiragana and Katakana), Korean (Hangul), Yi
• Other modern scripts
– Mongolian, Ethiopic, Cherokee, Canadian Aboriginal
• Historical scripts
– Runic, Ogham, Old Italic, Gothic, Deseret
• Punctuation and symbols
– Numerals, math symbols, scientific symbols, arrows, blocks, geometric shapes, Braille, musical notation, etc.

Characters and Glyphs
[Figure slides contrasting characters with the glyphs that render them]

Ligatures
[Figure slides showing the "fi" ligature: a single glyph rendering two characters]

Split Vowels
[Figure slide: two visually separate marks that are parts of the same character]

Character Positioning
[Figure slides showing marks positioned relative to their base characters]

Combining characters
One character, or two? é
Actually, either. Unicode is generative, with accent marks represented with their own code point values:
é = U+0065 (e) + U+0301 (accent)
…but common combinations of letters and accents are also given their own code points for convenience:
é = U+00E9
This can be tough, because the two representations are to be treated as absolutely identical:
U+0065 U+0301 = U+00E9
Things can get really wild for characters with more than one accent mark:
ộ = 006F (o) + 0302 (circumflex) + 0323 (dot)
  = 006F (o) + 0323 (dot) + 0302 (circumflex)
  = 00F4 (o-circumflex) + 0323 (dot)
  = 1ECD (o-dot) + 0302 (circumflex)
  = 1ED9 (o-circumflex-dot)
Unicode provides normalization rules to aid in comparison.
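These equivalences can be checked directly. The following is a small sketch using Python's standard-library unicodedata module; any Unicode-aware library exposes the same normalization operations:

```python
import unicodedata

# Two representations of "é": precomposed U+00E9, and "e" + combining acute U+0301.
composed = "\u00e9"
decomposed = "e\u0301"

# They are different code point sequences...
assert composed != decomposed

# ...but normalizing both to NFC (composed) or NFD (decomposed) makes them comparable.
assert unicodedata.normalize("NFC", decomposed) == composed
assert unicodedata.normalize("NFD", composed) == decomposed

# The multi-accent example: every equivalent spelling of o-circumflex-dot
# normalizes to the same fully composed character, U+1ED9.
spellings = [
    "\u006f\u0302\u0323",  # o + circumflex + dot
    "\u006f\u0323\u0302",  # o + dot + circumflex
    "\u00f4\u0323",        # o-circumflex + dot
    "\u1ecd\u0302",        # o-dot + circumflex
    "\u1ed9",              # o-circumflex-dot
]
assert {unicodedata.normalize("NFC", s) for s in spellings} == {"\u1ed9"}
```

A string comparison that first normalizes both sides therefore treats all five spellings as identical, which is exactly the behavior the standard calls for.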
These rules provide for a preferred (normalized) representation. Of the equivalent spellings above, the fully decomposed form is 006F (o) + 0323 (dot) + 0302 (circumflex), and the fully composed form is 1ED9 (o-circumflex-dot).

Combining characters
• Certain characters are designated as combining characters
• Combining characters are grouped into classes by how they combine
• Many accented characters are represented as combining character sequences
• Composite characters with equivalent combining character sequences are said to decompose to the equivalent sequence
• The standard provides for four normalized forms to aid in comparison and processing
• The standard provides for a canonical ordering for multiple combining marks attached to the same character

Character semantics
• The Unicode standard includes an extensive database that specifies a large number of character properties, including:
– Name
– Type (e.g., letter, digit, punctuation mark)
– Decomposition
– Case and case mappings (for cased letters)
– Numeric value (for digits and numerals)
– Combining class (for combining characters)
– Directionality
– Line-breaking behavior
– Cursive joining behavior
– For Chinese characters, mappings to various other standards and many other properties

Storage formats
• UTF-32: The 21-bit abstract Unicode value is simply zero-padded to 32 bits.
• UTF-16: For characters in the BMP, the 21-bit value is simply truncated to 16 bits. For other characters, the 21-bit value is turned into a sequence of two 16-bit values called a surrogate pair. A particular numeric value is either a BMP character, a high surrogate, or a low surrogate.
• UTF-8: For ASCII characters, the 21-bit value is truncated to 8 bits. For other characters, the 21-bit value is turned into a sequence of two, three, or four 8-bit values. Different numeric ranges are used for ASCII characters and leading and trailing bytes.
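The three storage formats are easy to compare side by side. This sketch uses Python's built-in codecs (the big-endian variants, so the byte layout is visible without a byte order mark) on one ASCII character and one character outside the BMP:

```python
cha = "A"            # U+0041: in ASCII and in the BMP
clef = "\U0001D11E"  # U+1D11E MUSICAL SYMBOL G CLEF: a supplementary-plane character

# UTF-32: the code point zero-padded to 32 bits.
assert cha.encode("utf-32-be").hex() == "00000041"
assert clef.encode("utf-32-be").hex() == "0001d11e"

# UTF-16: a BMP character is a single 16-bit unit; a supplementary
# character becomes a surrogate pair (high surrogate, then low surrogate).
assert cha.encode("utf-16-be").hex() == "0041"
assert clef.encode("utf-16-be").hex() == "d834dd1e"

# UTF-8: ASCII stays one byte; U+1D11E needs a four-byte sequence.
# The lead byte (F0) falls in a range reserved for four-byte sequences,
# and the trailing bytes (9D 84 9E) fall in the trailing-byte range.
assert cha.encode("utf-8").hex() == "41"
assert clef.encode("utf-8").hex() == "f09d849e"
```

Note how each format trades space for simplicity: UTF-32 is fixed-width but doubles the size of BMP-only text, while UTF-8 keeps ASCII data unchanged at the cost of variable-length sequences.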
Different ranges are used for leading bytes of different-length sequences.

Serialization formats
• UTF-16 and UTF-32 can be written to a serial device in different byte orders. The standard provides three serialization formats for UTF-16 and UTF-32:
– A big-endian version (UTF-16BE and UTF-32BE) where the most-significant byte is written first
– A little-endian version (UTF-16LE and UTF-32LE) where the least-significant byte is written first
– A self-describing version where the text is preceded by a byte order mark that the receiving process can use to determine endianness

The Unicode standard
• The Unicode standard consists of:
– The standard text, published in book form (this includes a complete set of printed code charts)
– The Unicode Character Database, a set of data files providing complete property information on every character
– Various Web-published supplemental materials:
• Unicode Standard Annexes (UAX): Amendments to the standard since the last book was published
• Unicode Technical Standards (UTS): Allied standards maintained separately from Unicode itself
• Unicode Technical Reports (UTR): Non-normative documents providing background info, implementation hints, or other useful information
• Unicode Technical Notes (UTN): Other articles of interest

Dealing with Unicode
• The basic character and string classes in Windows 2000 and XP are Unicode-based, and Windows provides an extensive set of APIs for working with Unicode text
• The basic character and string classes in Java are also Unicode-based, and the Java Class Library provides an extensive set of APIs for working with Unicode text
• Several third-party packages, including the open-source International Components for Unicode, are also available

For more information
• The published standard is available in bookstores
• Virtually everything related to the standard is available at http://www.unicode.org
• Two good books: Unicode Demystified by yours truly, and Unicode: A Primer by Tony Graham
• Ask questions at [email protected]
• Contact me at [email protected]