Overview, Learning, Inductive Learning, Training and Testing

Overview

• Learning in general
• Inductive learning (chapter 18)
• Statistical learning: neural networks (chapter 20; old 19)

Learning

• Adapt through interaction with the world: from rote memory to developing a complex strategy.
• Types of learning:
  1. Supervised learning (dense feedback)
  2. Unsupervised learning (no feedback)
  3. Reinforcement learning (sparse feedback, environment altering), etc.
• Advantages (two, among many):
  1. Fault tolerance
  2. No need for a complete specification to begin with
• Becoming a central focus of AI.

Inductive Learning

• Given example pairs (x, f(x)), return a function h that approximates the function f:
  – pure inductive inference, or induction.
• The function h is called a hypothesis.

Training and Testing

• Different types of error:
  – Training error
  – Validation error
  – Test error
• Issues:
  – Generalization
  – Bias-variance dilemma
  – Overfitting, underfitting
  – Model complexity

Inductive Learning and Inductive Bias

(Figure: panel (a) shows a set of training points; panels (b) to (d) show different hypotheses fit to the same points.)

• Given (a) as the training data, we can come up with several different hypotheses: (b) to (d).
• Selection of one hypothesis over another is called an inductive bias (don't confuse this with other things called bias):
  – exact match to the training data
  – preference for an imprecise but smooth approximation
  – etc.

Decision Trees

(Figure: a decision tree for the restaurant domain; the root tests Patrons? (None/Some/Full), followed by tests on Hungry?, Type? (French/Italian/Thai/Burger), and Fri/Sat?, with Yes/No leaves.)

• Learn to approximate discrete-valued target functions.
• Step-by-step decision making (a disjunction of conjunctions).
• Applications: medical diagnosis, assessing the credit risk of loan applicants, etc.

Decision Trees: What They Represent

Wait or not (Yes/No)? The decision tree above corresponds to:

  (Patrons = Some)
  ∨ (Patrons = Full ∧ Hungry = No ∧ Type = French)
  ∨ (Patrons = Full ∧ Hungry = No ∧ Type = Thai ∧ Fri/Sat = Yes)
  ∨ (Patrons = Full ∧ Hungry = No ∧ Type = Burger)

Decision trees represent a disjunction of conjunctions.

Decision Trees: What They Represent (cont'd)

• In other words, for each instance (or example) there are attributes (Patrons, Hungry, etc.), and each instance has a full attribute-value assignment.
• A given instance is classified into one of the discrete classes by the decision tree.
• For training, many (instance, class) pairs are used.
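The disjunction above reads directly as a chain of attribute tests. Below is a minimal Python sketch of the Wait/No-Wait rule exactly as written on the slide; the dictionary encoding, key names, and function name are illustrative choices, not part of the original slides.

def will_wait(x):
    """Classify one restaurant instance, given as a dict of attribute values.

    Follows the disjunction of conjunctions stated above: wait if Patrons=Some,
    or if Patrons=Full and Hungry=No with Type French or Burger, or with Type
    Thai on a Friday/Saturday.
    """
    if x["Patrons"] == "Some":
        return True
    if x["Patrons"] == "Full" and x["Hungry"] == "No":
        if x["Type"] in ("French", "Burger"):
            return True
        if x["Type"] == "Thai" and x["FriSat"] == "Yes":
            return True
    return False

# Hypothetical instance: a full restaurant, not hungry, Thai food on a Saturday.
print(will_wait({"Patrons": "Full", "Hungry": "No", "Type": "Thai", "FriSat": "Yes"}))  # True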
Constructing Decision Trees from Examples

  Example  Alt  Bar  Fri  Hun  Pat   Price  Rain  Res  Type     Est    WillWait (Goal)
  X1       Yes  No   No   Yes  Some  $$$    No    Yes  French   0–10   Yes
  X2       Yes  No   No   Yes  Full  $      No    No   Thai     30–60  No
  X3       No   Yes  No   No   Some  $      No    No   Burger   0–10   Yes
  X4       Yes  No   Yes  Yes  Full  $      No    No   Thai     10–30  Yes
  X5       Yes  No   Yes  No   Full  $$$    No    Yes  French   >60    No
  X6       No   Yes  No   Yes  Some  $$     Yes   Yes  Italian  0–10   Yes
  X7       No   Yes  No   No   None  $      Yes   No   Burger   0–10   No
  X8       No   No   No   Yes  Some  $$     Yes   Yes  Thai     0–10   Yes
  X9       No   Yes  Yes  No   Full  $      Yes   No   Burger   >60    No
  X10      Yes  Yes  Yes  Yes  Full  $$$    No    Yes  Italian  10–30  No
  X11      No   No   No   No   None  $      No    No   Thai     0–10   No
  X12      Yes  Yes  Yes  Yes  Full  $      No    No   Burger   30–60  Yes

• Given a set of examples (the training set), both positive and negative, the task is to construct a decision tree that describes a concise decision path.
• Using the resulting decision tree, we want to classify new instances (either as yes or no).

Constructing Decision Trees: Trivial Solution

• A trivial solution is to explicitly construct a path for each given example.
• The problem with this approach is that it cannot deal with situations where some attribute values are missing or new kinds of situations arise.
• Consider also that some attributes may not count much toward the final classification.

Finding a Concise Decision Tree

• Memorizing all cases may not be the best way.
• We want to extract a decision pattern that can describe a large number of cases in a concise way.
• Such an inductive bias is called Ockham's razor: the most likely hypothesis is the simplest one that is consistent with all observations.

Finding a Concise Decision Tree (cont'd)

• Basic idea: pick attributes that can clearly separate positive and negative cases.
• These attributes are more important than others: the final classification depends heavily on their values.
• In terms of a decision tree, we want to make as few tests as possible before reaching a decision, i.e., the depth of the tree should be shallow.

Finding a Concise Decision Tree (cont'd)

(Figure: splitting the examples +: X1, X3, X4, X6, X8, X12 and −: X2, X5, X7, X9, X10, X11. (a) Splitting on Patrons? (None/Some/Full) gives nearly pure subsets; (b) splitting on Type? (French/Italian/Thai/Burger) leaves every subset mixed; (c) after splitting on Patrons?, the Full branch is split further on Hungry?.)

Decision Tree Learning Algorithm

function DECISION-TREE-LEARNING(examples, attributes, default) returns a decision tree
    inputs: examples, set of examples
            attributes, set of attributes
            default, default value for the goal predicate
    if examples is empty then return default
    else if all examples have the same classification then return the classification
    else if attributes is empty then return MAJORITY-VALUE(examples)
    else
        best ← CHOOSE-ATTRIBUTE(attributes, examples)
        tree ← a new decision tree with root test best
        for each value v_i of best do
            examples_i ← {elements of examples with best = v_i}
            subtree ← DECISION-TREE-LEARNING(examples_i, attributes − best, MAJORITY-VALUE(examples))
            add a branch to tree with label v_i and subtree subtree
        end
        return tree

Resulting Decision Tree

(Figure: the decision tree induced from the twelve training examples.)

• Some attributes are not tested at all.
• Odd paths can be generated (the Thai food branch).
• Sometimes the tree can be incorrect for new examples (exceptional cases).

Accuracy of Decision Trees

(Figure: learning curve; % correct on the test set rises from about 0.4 toward 1.0 as the training set size grows from 0 to 100.)

• Divide the examples into a training set and a test set.
• Train using the training set.
• Measure the accuracy of the resulting decision tree on the test set.

Choosing the Best Attribute to Test First

• Use Shannon's information theory to choose the attribute that gives the maximum information gain.
• Pick an attribute such that the information gain (or entropy reduction) is maximized.
• Entropy measures the average surprisal of events: less probable events are more surprising.

Entropy and Information Gain

  Entropy(E) = − Σ_{i ∈ C} P_i log₂(P_i)

  Gain(E, A) = Entropy(E) − Σ_{v ∈ Values(A)} (|E_v| / |E|) · Entropy(E_v)

• E: set of examples.
• A: a single attribute.
• E_v: set of examples where attribute A = v.
• |S|: cardinality of set S.
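As a concrete check of these definitions, here is a small Python sketch that computes the entropy of the twelve restaurant examples and the gain of splitting on Patrons. The data encoding is an illustrative assumption, but the values follow the table above.

import math
from collections import Counter

def entropy(labels):
    """Entropy(E) = -sum_i P_i log2(P_i) over the class distribution of E."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, attribute):
    """Gain(E, A) = Entropy(E) - sum_v (|E_v| / |E|) * Entropy(E_v)."""
    n = len(examples)
    remainder = 0.0
    for v in {x[attribute] for x in examples}:
        subset = [lab for x, lab in zip(examples, labels) if x[attribute] == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# The Patrons values and WillWait labels of X1..X12 from the table above.
examples = [{"Patrons": p} for p in
            ["Some", "Full", "Some", "Full", "Full", "Some",
             "None", "Some", "Full", "Full", "None", "Full"]]
labels = ["Yes", "No", "Yes", "Yes", "No", "Yes",
          "No", "Yes", "No", "No", "No", "Yes"]
print(entropy(labels))                                # 1.0 (6 Yes, 6 No)
print(information_gain(examples, labels, "Patrons"))  # about 0.54

Splitting on Patrons gives roughly 0.54 bits of gain, which is why it ends up at the root of the learned tree; splitting on Type gives zero gain, since every Type value has an equal Yes/No split in the table.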
Issues in Decision Tree Learning

• Noise and overfitting.
• Missing attribute values in examples.
• Multi-valued attributes with a large number of possible values.
• Continuous-valued attributes.

Key Points

Decision tree learning:
• What is the embodied principle (or bias)?
• How to choose the best attribute? Given a set of examples, choose the best attribute to test first.
• What are the issues? Noise, overfitting, etc.

Neural Networks

Neural networks are one particular form of learning from data:
• simple processing elements, called units or neurons
• a connective structure and associated connection weights
• learning: adaptation of the connection weights

Neural networks mimic the human (or animal) nervous system.

Many Faces of Neural Networks

• Abstract mathematical/statistical model
• Optimization algorithm
• Pattern recognition algorithm
• Tools for understanding the function of the brain
• Robust engineering application

The Central Nervous System

• Cortex: thin outer sheet where most of the neurons are.
• Sub-cortical nuclei: thalamus, hippocampus, basal ganglia, etc.
• Midbrain, pons, and medulla connect to the spinal cord.
• Cerebellum (hindbrain, or "small brain").

Function of the Nervous System

Functions of the nervous system:
• Perception
• Cognition
• Motor control
• Regulation of essential bodily functions

The Central Nervous System: Facts*

Facts: human neocortex
• Thickness: 1.6 mm
• Area: 36 cm × 36 cm (about 1.4 ft²)
• Neurons: 10 billion (10^10)**
• Connections: 60 trillion (6 × 10^13) to 100 trillion
• Connections per neuron: 10^4
• Energy usage per operation: 10^-16 J (compare to 10^-6 J in modern computers)

* Sources: Neural Networks: A Comprehensive Foundation by Simon Haykin (1994) and Foundations of Vision by Brian Wandell (1995). The figures may differ slightly from those in Russell & Norvig. No need to memorize them.
** Note: a more recent and accurate estimate is 86 billion neurons (see https://www.brainfacts.org/the-brain-facts-book).

How the Brain Differs from Computers

• Densely connected.
• Massively parallel.
• Highly nonlinear.
• Asynchronous: no central clock.
• Fault tolerant.
• Highly adaptable.
• Creative.

Why are these crucial?

Neurons: Basic Functional Unit of the Brain

(Figure: a neuron, showing the cell body, nucleus, dendrites and dendritic arbor, axon, and axon terminals.)

• Dendrites receive input from upstream neurons.
• Ions flow in to make the cell positively charged.
• Once a firing threshold is reached, a spike is generated and transmitted along the axon.

Propagation of Activation Across the Synapse

(Figure: a synapse, showing the presynaptic neuron's axon terminal, the synaptic cleft, neurotransmitter, and the postsynaptic neuron; the action potential gives rise to a postsynaptic potential.)

1. The action potential reaches the axon terminal.
2. Neurotransmitters are released into the synaptic cleft and bind to the postsynaptic cell's receptors.
3. Binding allows ion channels to open (Na+); Na+ ions flow in and make the postsynaptic cell depolarize.
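The artificial "units" mentioned in the Neural Networks slide abstract exactly this picture: inputs weighted by connection strengths are summed at the cell, and the unit emits a spike once a firing threshold is reached. Below is a minimal Python sketch of such a threshold unit; the weights and threshold are hypothetical illustration values, not taken from the slides, and nothing here is learned yet.

def unit_output(inputs, weights, threshold):
    """Return 1 (spike) if the weighted sum of inputs reaches the threshold, else 0."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Three upstream units, two of them active; fires because 0.6 + 0.3 >= 0.8.
print(unit_output(inputs=[1, 0, 1], weights=[0.6, 0.5, 0.3], threshold=0.8))  # 1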