Compendium of Technical White Papers


Compendium of Technical White Papers from Kx

Contents

Machine Learning
1. Machine Learning in kdb+: kNN classification and pattern recognition with q ... 2
2. An Introduction to Neural Networks with kdb+ ... 16

Development Insight
3. Compression in kdb+ ... 36
4. Kdb+ and Websockets ... 52
5. C API for kdb+ ... 76
6. Efficient Use of Adverbs ... 112

Optimization Techniques
7. Multi-threading in kdb+: Performance Optimizations and Use Cases ... 134
8. Kdb+ tick Profiling for Throughput Optimization ... 152
9. Columnar Database and Query Optimization ... 166

Solutions
10. Multi-Partitioned kdb+ Databases: An Equity Options Case Study ... 192
11. Surveillance Technologies to Effectively Monitor Algo and High Frequency Trading ... 204

Technical Whitepaper 1

Machine Learning in kdb+: kNN classification and pattern recognition with q

Author: Emanuele Melis works for Kx as a kdb+ consultant. Currently based in the UK, he has been involved in designing, developing and maintaining solutions for equities data at a world-leading financial institution. Keen on machine learning, Emanuele has delivered talks and presentations on pattern recognition implementations using kdb+.

CONTENTS

MACHINE LEARNING IN KDB+: KNN CLASSIFICATION AND PATTERN RECOGNITION WITH Q ... 2
1.1 LOADING THE DATASET ... 4
1.2 CALCULATING DISTANCE METRIC ... 5
1.3 K-NEAREST NEIGHBORS AND PREDICTION ... 8
1.3.1 Nearest Neighbor k=1 ... 8
1.3.2 k>1 ... 9
1.3.3 Prediction test ... 9
1.4 ACCURACY CHECKS ... 10
1.4.1 Running with k=5 ... 10
1.4.2 Running with k=3 ... 11
1.4.3 Running with k=1 ... 11
1.5 FURTHER APPROACHES ... 11
1.5.1 Use slave threads ... 11
1.5.2 Code optimization ... 12
1.5.3 Euclidean or Manhattan Distance? ... 12
1.6 CONCLUSIONS ... 14

1.1 Loading the Dataset

Once downloaded, this is how the dataset looks in a text editor:

[Figure 1: CSV dataset]

The last number on the right-hand side of each line is the class label; the other sixteen numbers are the class attributes, the eight pairs of Cartesian coordinates sampled from each handwritten digit.

We start by loading both test and training sets into a q session:

q) test:`num xkey flip ((`$'16#.Q.a),`num)!((16#"i"),"c"; ",") 0:`pendigits.tes
q) show test
num| a b c d e f g h i j k l m n o p
---| --------------------------------------------------------
8  | 88 92 2 99 16 66 94 37 70 0 0 24 42 65 100 100
8  | 80 100 18 98 60 66 100 29 42 0 0 23 42 61 56 98
8  | 0 94 9 57 20 19 7 0 20 36 70 68 100 100 18 92
9  | 95 82 71 100 27 77 77 73 100 80 93 42 56 13 0 0
9  | 68 100 6 88 47 75 87 82 85 56 100 29 75 6 0 0
..

q) tra:`num xkey flip ((`$'16#.Q.a),`num)!((16#"i"),"c"; ",") 0:`pendigits.tra
q) show tra
num| a b c d e f g h i j k l m n o p
---| --------------------------------------------------------
8  | 47 100 27 81 57 37 26 0 0 23 56 53 100 90 40 98
2  | 0 89 27 100 42 75 29 45 15 15 37 0 69 2 100 6
1  | 0 57 31 68 72 90 100 100 76 75 50 51 28 25 16 0
4  | 0 100 7 92 5 68 19 45 86 34 100 45 74 23 67 0
1  | 0 67 49 83 100 100 81 80 60 60 40 40 33 20 47 0
..

For convenience, the two resulting dictionaries are flipped into tables and keyed on the class attribute so that we can later leverage q-sql and some of its powerful features. Keying at this stage is done for display purposes only: rows taken from the sets will be flipped back into dictionaries, and the class label will be dropped [2] while computing the distance metric.

The column names do not carry any information, so the first 16 letters of the alphabet are used for the 16 integers representing the class attributes, while the class label, stored as a character, is assigned the mnemonic tag "num".

[2] Un-keying and flipping a table into a dictionary to drop the first key is more efficient than functionally deleting a column.
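To make the column naming and footnote [2] concrete, a minimal sketch; the functional-delete comparison is added here for illustration and is not taken from the paper:

q) `$'16#.Q.a                / first 16 letters of the alphabet, cast to symbols
`a`b`c`d`e`f`g`h`i`j`k`l`m`n`o`p
q) 1_flip 0!1#tra            / un-key, flip into a column dictionary, drop the num entry
q) delete num from 0!1#tra   / functional alternative: same data, kept as a table, more overhead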
1.2 Calculating distance metric

As mentioned previously, in a k-NN classifier the distance metric between instances is the distance between their feature arrays. In our dataset, the instances are rows of the tra and test tables, and their attributes are the columns. To better explain this, we demonstrate with two instances taken from the training and test sets:

q) show tra1:1#tra
num| a b c d e f g h i j k l m n o p
---| -----------------------------------------------
8  | 47 100 27 81 57 37 26 0 0 23 56 53 100 90 40 98

q) show tes1:1#test
num| a b c d e f g h i j k l m n o p
---| ----------------------------------------------
8  | 88 92 2 99 16 66 94 37 70 0 0 24 42 65 100 100

[Figure 2-1: tra1 and tes1 point plot. Figure 2-2: tra1 and tes1 visual approximation.]

Both instances belong to the class digit "8", as per their class labels. However, this is not obvious just by looking at the plotted points, and the num column is not used by the classifier, which will instead calculate the distance between matching columns of the two instances; that is, how far the a to p columns of tes1 are from their counterparts in tra1. While only an arbitrary measure, the classifier will use these distances to identify the nearest neighbour(s) in the training set and make a prediction.

In q, this is achieved using a dyadic function whose arguments can be two tables. Applied with the right adverbs, it operates column by column, returning one table that stores the result of each iteration in the a to p columns. The main benefit of this approach is that it relieves the developer from the burden of looping and indexing lists when doing point-to-point computation.

The metric used to determine the distance between the feature points of two instances is the Manhattan distance, d(x,y) = Σ(i=1..16) |x_i - y_i|, calculated as the sum of the absolute differences of their Cartesian coordinates. Using a Cartesian distance metric is intuitive and convenient, as the columns in our set do indeed represent X or Y coordinates:

q) dist:{abs x-y}
q) tra1 dist' tes1
num| a b c d e f g h i j k l m n o p
---| ---------------------------------------------
8  | 41 8 25 18 41 29 68 37 70 23 56 29 58 25 60 2

Now that the resulting table represents the distance metric between each attribute of the two instances, we can sum all the values and obtain the distance between the two instances in the feature space:

q) sums each tra1 dist' tes1
num| a b c d e f g h i j k l m n o p
---| -----------------------------------------------------------
8  | 41 49 74 92 133 162 230 267 337 360 416 445 503 528 588 590
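The last column, p, holds the final running sum: the Manhattan distance between tra1 and tes1 is 590. As a standalone cross-check working on the attribute lists directly, a small sketch (the helper names p1 and p2 are illustrative, not from the paper):

q) p1:1_value first 0!tra1   / attribute values of the training instance, label dropped
q) p2:1_value first 0!tes1   / attribute values of the test instance
q) sum abs p1-p2             / Manhattan distance between the two instances
590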