A Sample of the Complexities Associated with Special Processing of Unicode Codepoints Used in Domain Names


INTRODUCTION

In early 2013, Verisign and other generic Top-Level Domain (gTLD) registries began participating in a procedure developed by the Internet Corporation for Assigned Names and Numbers (ICANN) to test gTLD operational stability and security prior to the delegation and launch of a new gTLD. This procedure is known as Pre-Delegation Testing (PDT) [1].

This document describes some of the complexities Verisign encountered with Unicode codepoint processing during pre-delegation testing, caused by ambiguities that arise when local language communities follow different practices for normalizing combining characters and composed characters. This need for context introduces ambiguity, adds complexity to pre-delegation testing, and makes it more difficult to develop consistent Internationalized Domain Name (IDN) implementations. The ambiguity and complexity may ultimately have profound effects on universal resolvability and deterministic navigation on the Internet, particularly as internationalization is more widely deployed and further opportunities arise for divergent interpretations and confusability across the various systems used to derive URIs.

Certain Unicode codepoints have rules or handling that require special processing. Occasionally, Verisign and Internetstiftelsen i Sverige (IIS), the Swedish company selected by ICANN to run the tests, had different interpretations of the rules found in the Internationalized Domain Names for Applications standard (IDNA, RFCs 5890 [2] and 5891 [3]) or other applicable standards. The PDT system designed by IIS provides the “ability of the software and system to allow for interaction with the applicants” [4], and Verisign took advantage of this ability to discuss ambiguities with IIS. Below is a sample of the codepoints that required discussion with IIS.

CODEPOINT SAMPLES

1. Hyphen-Minus (U+002D)

This codepoint is allowed in all scripts. It is a COMMON [5] script character and is PROTOCOL VALID (PVALID, RFC 5892 [6]) for IDN registration. It has a Bidirectional (Bidi) property [7] of ES (European Number Separator), which is allowed in both Left-To-Right (LTR) and Right-To-Left (RTL) labels. This codepoint commonly occurs across all scripts and is accepted for registration.

Why this matters: The PDT system flagged COMMON script characters as potential issues, but the Hyphen-Minus character is PVALID and widely used in domain names.

2. Digits (U+0030 – U+0039)

The digit characters (U+0030 – U+0039) are allowed in all scripts. They are COMMON script characters and are PVALID for IDN registration. They have a Bidi property of EN (European Number), which is allowed in both LTR and RTL labels. These codepoints commonly occur across all scripts and are accepted for registration.

Why this matters: The PDT system flagged COMMON script characters as potential issues, but the digit characters are PVALID and widely used in domain names.
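These Bidi properties can be inspected with Python's standard unicodedata module, whose bidirectional() function returns a character's Unicode Bidi_Class. The following is a minimal sketch; the sample characters are our own choice for illustration and include the Arabic-Indic digits discussed in sample 4 below.

    import unicodedata

    # Print the Unicode Bidi_Class for a few of the codepoints discussed in this paper.
    # Expected output: U+002D -> ES, U+0030 -> EN, U+0660 -> AN, U+06F0 -> EN.
    for cp in (0x002D, 0x0030, 0x0660, 0x06F0):
        ch = chr(cp)
        print(f"U+{cp:04X} {unicodedata.name(ch)}: {unicodedata.bidirectional(ch)}")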
3. The Middle Dot (U+00B7)

This PVALID codepoint is PROHIBITED by Verisign in all registrations. Originally it was included in Verisign’s Latin table, but the PDT tester wanted to restrict this codepoint to the Catalan language only. Unicode does not recognize Catalan as a script separate from Latin, and so Verisign provides no special support for this language.

Why this matters: Verisign prohibited use of this character in accordance with guidance received from the PDT tester. A contextual rule is required to make this codepoint an allowable exception per RFC 5892.

4. Arabic-Indic Digits (U+0660 – U+0669)

These codepoints are limited to the Arabic and Thaana script tables by a Unicode Script Extension. Further, a contextual rule prevents these codepoints from mixing with the EXTENDED ARABIC-INDIC DIGITS (U+06F0 – U+06F9). The ARABIC-INDIC DIGITS have a Bidi property of AN (Arabic Number), while the EXTENDED ARABIC-INDIC DIGITS and the DIGITS (U+0030 – U+0039) have a Bidi property of EN. The Bidi Rule specified in RFC 5893 [8] (Section 2, Rule #4) prevents EN and AN characters from appearing in the same label, so the ARABIC-INDIC DIGITS cannot mix with the EXTENDED ARABIC-INDIC DIGITS or with the DIGIT characters. The Verisign Registry implements these rules.

Why this matters: Verisign implemented use of these characters in accordance with guidance received from the PDT tester. Contextual rules apply, and simple inclusion or exclusion is not protocol-compliant.

5. Hebrew Punctuation Geresh and Gershayim (U+05F3, U+05F4)

These are Hebrew codepoints. A contextual rule requires that they be preceded by another Hebrew codepoint, so they cannot immediately follow a digit character. These Hebrew characters are Right-To-Left, and a Bidi rule (RFC 5893, Section 2, Rule #1) prevents an RTL label from beginning with an EN character such as a digit. A label containing U+05F3 or U+05F4 therefore cannot begin with a digit and cannot place a digit immediately before U+05F3 or U+05F4, but there is no public standard that would generally prevent these codepoints from appearing with DIGITS. Verisign’s implementation allows these characters to be followed by a digit.

Why this matters: Verisign implemented use of these characters in accordance with guidance received from the PDT tester. Contextual rules apply, and simple inclusion or exclusion is not protocol-compliant.
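The contextual and Bidi requirements in samples 3 through 5 can be expressed as small label-level predicates. The sketch below is illustrative only, not Verisign's registry code: the function names are ours, and Hebrew script membership is approximated by the Hebrew block (U+0590 – U+05FF), whereas a real implementation would consult Unicode Script property data.

    import unicodedata

    def middle_dot_ok(label):
        """RFC 5892, Appendix A.3: U+00B7 is allowed only between two U+006C ('l')."""
        for i, ch in enumerate(label):
            if ch == "\u00B7":
                if i == 0 or i == len(label) - 1:
                    return False
                if label[i - 1] != "l" or label[i + 1] != "l":
                    return False
        return True

    def geresh_ok(label):
        """RFC 5892, Appendices A.7/A.8: U+05F3 and U+05F4 must be preceded by a
        Hebrew-script character (approximated here by the Hebrew block)."""
        for i, ch in enumerate(label):
            if ch in ("\u05F3", "\u05F4"):
                if i == 0 or not 0x0590 <= ord(label[i - 1]) <= 0x05FF:
                    return False
        return True

    def digits_ok(label):
        """RFC 5893, Section 2, Rule #4: EN and AN may not appear in the same
        label, which keeps the ARABIC-INDIC DIGITS (AN) apart from both the
        ASCII DIGITS (EN) and the EXTENDED ARABIC-INDIC DIGITS (EN)."""
        classes = {unicodedata.bidirectional(ch) for ch in label}
        return not {"EN", "AN"} <= classes

    print(middle_dot_ok("col\u00B7lecci\u00F3"))  # True: the dot sits between 'l' and 'l'
    print(geresh_ok("1\u05F3\u05D0"))             # False: geresh follows a digit
    print(digits_ok("\u0661\u0662"))              # True: AN digits only
    print(digits_ok("\u06611"))                   # False: AN mixed with EN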
6. Katakana Middle Dot (U+30FB)

This is a COMMON codepoint with a Script Extension rule: it can appear only with characters from the Bopomofo, Hangul, Han, Hiragana, Katakana, or Yi scripts. Verisign also has a Japanese Language table that includes this codepoint as well as other scripts such as Latin. When the Japanese Language table is used, Verisign’s implementation prevents the Katakana Middle Dot from appearing with Latin characters.

Why this matters: Verisign implemented use of this character in accordance with guidance received from the PDT tester. The PDT tester recommended that this character not be combined with Latin characters in a Japanese language context, and simple inclusion or exclusion is not protocol-compliant.

7. Katakana-Hiragana Prolonged Sound Mark (U+30FC)

This is a COMMON codepoint with a Script Extension rule: it can appear only with characters from the Hiragana or Katakana scripts. Verisign has a Japanese Language table that includes this codepoint as well as other scripts such as Latin. When the Japanese Language table is used, Verisign’s implementation prevents the Prolonged Sound Mark from appearing with Latin characters.

Why this matters: Verisign implemented use of this character in accordance with guidance received from the PDT tester. The PDT tester recommended that this character not be combined with Latin characters in a Japanese language context, and simple inclusion or exclusion is not protocol-compliant.

VERISIGN POSITION

Verisign reviewed our PDT test results with the PDT testing agent to resolve the issues of interpretation noted in this paper. As a result of these reviews, multiple changes were made to Verisign’s IDNA implementation. We also discussed with the PDT testing agent Verisign’s concerns about embedding language rules in Label Generation Rulesets, and as a result, rules about consonant/vowel patterns in the Thai language were not implemented. Verisign believes that additional work is required in this area to explore the role of context in resolution and to reduce implementation complexity.

REFERENCES

[1] https://newgtlds.icann.org/en/applicants/pdt
[2] https://tools.ietf.org/html/rfc5890
[3] https://tools.ietf.org/html/rfc5891
[4] https://newgtlds.icann.org/en/applicants/pdt/vendor-selection-summary-02sep16-en.pdf
[5] http://unicode.org/reports/tr24/#Special_Explicit
[6] https://tools.ietf.org/html/rfc5892
[7] http://www.unicode.org/reports/tr9/#Bidirectional_Character_Types
[8] https://tools.ietf.org/html/rfc5893
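As an illustration of the Japanese Language table restriction described in samples 6 and 7, a minimal sketch follows. The helper name is ours, and detecting Latin characters through character names is a simplification for illustration; production code would consult Unicode Script_Extensions data.

    import unicodedata

    def japanese_table_ok(label):
        """Reject labels that combine U+30FB or U+30FC with Latin characters."""
        has_mark = any(ch in ("\u30FB", "\u30FC") for ch in label)
        has_latin = any(unicodedata.name(ch, "").startswith("LATIN") for ch in label)
        return not (has_mark and has_latin)

    print(japanese_table_ok("\u30AB\u30FC\u30C9"))  # True: Katakana only
    print(japanese_table_ok("abc\u30FB"))           # False: Latin mixed with the middle dot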