Learn About Encodings in Python with Data from How ISIS Uses Twitter Dataset (2016)

© 2019 SAGE Publications, Ltd. All Rights Reserved. This PDF has been generated from SAGE Research Methods Datasets.

Student Guide

Introduction

This dataset example introduces researchers to the concept of encodings for text data. Everything stored in a computer is stored as binary numbers, that is, sequences of 0s and 1s; technically, computers do not store letters, numbers, or other characters directly. To handle text data in computers, therefore, we need a mapping between the characters used by humans and the binary numbers “understood” by computers. Such mappings are called encodings. For example, the letter “A” is normally encoded as 65 in decimal, or 01000001 in binary.

Encodings are not text-analysis methods per se, and hence they are often overlooked by researchers and teachers. They are nonetheless such a foundationally important concept that they can cause serious trouble in analyses conducted far downstream from the text–binary interface, especially when the analysis involves languages other than English. For example, if you downloaded a text file in Chinese from the Internet and did not open it with the correct encoding, you would see question marks, strange symbols, or anything but the Chinese text you intended to analyze.

This example describes the concept of encodings and discusses several popular encodings in use.
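The character-to-number mapping described above can be inspected directly in Python, which exposes it through the built-in functions ord() and chr(). A minimal sketch:

```python
# ord() maps a character to its numeric code point;
# chr() maps a code point back to its character.
print(ord("A"))                  # 65
print(chr(65))                   # A

# The same code point written as an 8-bit binary number.
print(format(ord("A"), "08b"))   # 01000001
```

These two functions are handy for quick checks whenever an encoding problem is suspected, because they let you see the numbers hiding behind the characters.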
We illustrate encoding processes using a subset of data from the 2016 How ISIS Uses Twitter dataset (https://www.kaggle.com/fifthtribe/how-isis-uses-twitter/home). The data were collected by a digital agency, Fifth Tribe, and are released under the CC0: Public Domain license through the platform Kaggle. Specifically, we demonstrate how to load this text file with different encodings.

What Is Encoding?

Character Set

Before discussing encodings, we need to introduce a related concept called a character set, which is the set of objects to be encoded. Take plain English, for example. We probably want to encode the letters A–Z (in both upper and lower cases); we then need the digits 0–9; and we probably also need symbols such as “,” and “.”. All these objects together constitute a character set. The standard character set for plain English is the so-called ASCII set. It contains all the objects just mentioned as well as some other special symbols (dollar and pound signs, some mathematical operators, etc.). Each object in a character set has a unique numeric ID called its code point.

However, English is not the only language in the world, and almost any other language needs additional or alternative symbols: Arabic, Chinese, French, German, Hindi, Japanese, Russian, Swedish, etc. For a while, almost every language had its own character set, and the world accumulated a wealth of character sets and corresponding encodings. Beyond their sheer number and variety, this multiplicity made working with text files containing multiple languages very inconvenient. Eventually, Unicode was developed to redress this situation.
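The boundary of the ASCII character set is easy to check in Python: every ASCII character has a code point below 128, while characters from other scripts fall outside that range. A small sketch:

```python
# All characters in plain English text fall within the ASCII range (0-127).
english = "Hello, world!"
print(all(ord(c) < 128 for c in english))   # True

# Arabic characters have code points far beyond the ASCII range,
# so ASCII simply has no way to represent them.
arabic = "سلام"
print([ord(c) for c in arabic])             # [1587, 1604, 1575, 1605]
print(all(ord(c) < 128 for c in arabic))    # False
```

This is precisely why a single-language character set like ASCII cannot handle a multilingual dataset such as the one analyzed in this guide.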
Unicode contains virtually all the characters and symbols of all the languages on earth; it is currently the largest character set in use.

Encoding

Now we turn to encodings. ASCII is also an encoding scheme itself: each character in ASCII is mapped to a binary sequence directly by converting its code point (i.e., its numeric ID) to a binary number. For example, the code point for the letter “A” is 65, so it is encoded as the binary number 01000001.

For Unicode, the scheme had to become more complicated, which has created some confusion about encodings. First, Unicode, despite the word code in its name, is not actually an encoding scheme; it is a character set. Second, there is not one but three standard encoding schemes for mapping a code point in the Unicode set to a binary sequence in the computer: UTF-32, UTF-16, and UTF-8, with UTF-8 being by far the most popular. Most webpages on the Internet, for example, are encoded in UTF-8. For most users of text analysis, it is not necessary to know the details of the mappings; nonetheless, we discuss them briefly in the following paragraphs for the curious.

It might seem straightforward to map a code point in Unicode to a binary sequence directly by converting the code point into binary, just as we do for ASCII. The huge size of Unicode complicates this, however. Given how many objects it contains (consider all the Asian, Arabic, and Cyrillic symbols in addition to the Roman ones in ASCII), the largest code points in Unicode need more than two bytes in binary, so a fixed-width encoding must use 32 bits (four bytes) per character. We can certainly encode every code point as a 32-bit binary sequence, and that is exactly what UTF-32 does, but a lot of space is wasted. Recall that the code point for “A” is 65, which is 01000001 in ASCII encoding.
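Python's str.encode() method makes the difference between the three Unicode encodings concrete. The sketch below uses the big-endian variants ("utf-16-be", "utf-32-be") so that the output contains only the character's bytes, without the byte-order mark that plain "utf-16" and "utf-32" prepend:

```python
# Encode the same character under the three standard Unicode encodings
# and compare how many bytes each one uses.
for enc in ("utf-8", "utf-16-be", "utf-32-be"):
    data = "A".encode(enc)
    print(f"{enc:10} {data.hex():>8} ({len(data)} byte(s))")
# utf-8            41 (1 byte(s))
# utf-16-be      0041 (2 byte(s))
# utf-32-be  00000041 (4 byte(s))
```

The leading zero bytes in the UTF-16 and UTF-32 outputs are exactly the padding discussed in the next paragraph.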
The code point for “A” is also 65 in Unicode, but if we encode it as a 32-bit binary sequence, it becomes 00000000000000000000000001000001. Although this encoding scheme is simple, it is not economical: most of the space is wasted on the 0s padded on to reach 32 digits. Variable-length encodings such as UTF-8 and UTF-16 were developed to address this issue. If a code point can be encoded with a single byte (8 bits), then UTF-8 will encode it with a single byte (e.g., the letter “A”); if a single byte is not enough for some code point, UTF-8 will use two bytes (16 bits); if two are not enough, UTF-8 will try three, and so on. UTF-16 sits between UTF-8 and UTF-32: it uses at least two bytes, and four bytes when necessary. In the variable-length encodings, several bits are reserved to signal whether the currently encoded object is single-byte or multi-byte and, if multi-byte, how many bytes are in use.

Detecting Encodings

Since there are multiple encodings, a common question arises when opening a text file: Which encoding does this file use? (Technically, the process of opening a text file and displaying it on the screen is decoding, but if we know the encoding, then we know how to decode, so we use the two terms interchangeably in this guide.) This question matters because if you open a text file with the wrong encoding scheme, you might see strange symbols or might fail to open the file altogether. Unfortunately, there is no foolproof way to tell which encoding a file uses merely by looking at its filename or content (a pile of 0s and 1s, remember?). If there are no metadata about the file's encoding, the best one can do is trial and error: start with ASCII, for example, and if the text does not display correctly, try UTF-8, and so on.
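The trial-and-error strategy can be sketched as a small helper function. This is a minimal illustration, not a robust detector; note that some encodings (e.g., Latin-1) accept every possible byte sequence, so the order of the candidates matters and such encodings belong last:

```python
def guess_decode(raw, candidates=("ascii", "utf-8", "latin-1")):
    """Try each candidate encoding in turn; return the first that decodes."""
    for enc in candidates:
        try:
            return enc, raw.decode(enc)
        except UnicodeDecodeError:
            continue  # this encoding failed; try the next one
    raise ValueError("none of the candidate encodings worked")

# Arabic text stored as UTF-8 bytes: ASCII fails, UTF-8 succeeds.
raw = "سلام".encode("utf-8")
enc, text = guess_decode(raw)
print(enc, text)   # utf-8 سلام
```

In practice, third-party libraries such as chardet automate this guessing with statistical heuristics, but the underlying idea is the same.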
Illustrative Example: Encodings for Arabic Tweets

This example shows how to read a text file using different encodings, with data on more than 17,000 ISIS-related tweets from more than 100 Twitter users from all over the world. This dataset has been very helpful in developing effective counter-messaging measures against terrorism worldwide. A necessary precondition of any useful analysis of these data is loading their text content correctly.

The Data

This example uses a subset of data from the 2016 How ISIS Uses Twitter dataset (https://www.kaggle.com/fifthtribe/how-isis-uses-twitter/home). The data were collected by a digital agency, Fifth Tribe, and are released under the CC0: Public Domain license through the platform Kaggle. The variable we examine is

  • tweet: the text content of each tweet.

There are 17,410 tweets (rows) in the dataset, posted between January 6, 2015, and May 13, 2016, before and after the November 2015 Paris Attacks. At least two languages (English and Arabic) are used in the tweets, making these data appropriate for demonstrating encodings.

Analyzing the Data

The data file is encoded with UTF-8. Hence, if we open it with UTF-8 encoding, it displays correctly on the screen. If we open it with ASCII, some of the text is not decoded correctly. When the file is opened with UTF-8, the Arabic characters in the tweets are rendered correctly as Arabic script; when it is opened with ASCII, the computer interprets the same bytes as seemingly “random” symbols, and the Arabic characters are lost. Exactly which symbols appear depends on the system you are using, but they will not be the original Arabic characters.
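The contrast described above can be reproduced end to end in Python. The sketch below writes a tiny stand-in file (the real dataset is a CSV downloaded from Kaggle; the file name and sample row here are illustrative assumptions), then opens it with the correct and the incorrect encoding:

```python
import os
import tempfile

# Stand-in for the dataset: one header row and one tweet containing Arabic.
sample = "username,tweet\nuser1,سلام عليكم\n"
path = os.path.join(tempfile.gettempdir(), "tweets.csv")
with open(path, "w", encoding="utf-8") as f:
    f.write(sample)

# Correct encoding: the Arabic text round-trips intact.
with open(path, encoding="utf-8") as f:
    print(f.read())

# Wrong encoding: ASCII cannot decode the Arabic bytes at all,
# so Python raises an error instead of showing garbled symbols.
try:
    with open(path, encoding="ascii") as f:
        f.read()
except UnicodeDecodeError as e:
    print("ASCII failed at byte offset", e.start)
```

Python's strict default error handling raises UnicodeDecodeError outright; passing errors="replace" to open() would instead substitute replacement characters, which is closer to the “random symbols” behavior seen in some viewers.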