On the Inductive Bias of Deep Learning (Tarek Mansour)

Total Pages: 16

File Type: pdf, Size: 1020 KB

Deep Neural Networks are Lazy: On the Inductive Bias of Deep Learning

by Tarek Mansour
S.B., C.S. and Mathematics, M.I.T. (2018)

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Engineering in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, February 2019.

© Tarek Mansour, MMXIX. All rights reserved. The author hereby grants to MIT permission to reproduce and to distribute publicly paper and electronic copies of this thesis document in whole or in part in any medium now known or hereafter created.

Author: Department of Electrical Engineering and Computer Science, February 1, 2019
Certified by: Aleksander Madry, Associate Professor of Computer Science, Thesis Supervisor
Accepted by: Katrina LaCurts, Chairman, Department Committee on Graduate Theses

Abstract

Deep learning models exhibit superior generalization performance despite being heavily overparametrized. Although widely observed in practice, there is currently very little theoretical backing for such a phenomenon. In this thesis, we propose a step towards understanding generalization in deep learning. We present evidence that deep neural networks have an inherent inductive bias that makes them inclined to learn generalizable hypotheses and avoid memorization. In this respect, we propose results that suggest that the inductive bias stems from neural networks being lazy: they tend to learn simpler rules first. We also propose a definition of simplicity in deep learning based on the implicit priors ingrained in deep neural networks.

Thesis Supervisor: Aleksander Madry
Title: Associate Professor of Computer Science

Acknowledgments

I would like to start by thanking my advisor Aleksander Madry for his guidance and mentorship during both my undergraduate and graduate careers at MIT. Aleksander introduced me to deep learning science and constantly pushed me to think critically about problems that arise in research. He played a big role in shaping me as an engineer as well as a scientist. This thesis would not have been possible without his mentoring and support. Having Aleksander as a mentor was a phenomenal experience; I could not have hoped for a better advisor.

I would like to thank Kai Yuanqing Xiao for his significant contributions to the research presented in this thesis. He helped me throughout and played a key role in developing the ideas proposed here. This work would not have been possible without him.

I would like to thank the Theory of Computation group. They provided a great environment for research through reading groups and constant discussions about deep learning science. I really enjoyed being part of such an interesting group of people. I would also like to thank my MIT friends for the constant support they have given me throughout.

I would like to thank my family for everything. Without them, I would not be where I am today. This thesis is dedicated to them.

Contents

1 Introduction
1.1 The Statistical Learning Problem
1.1.1 Preliminaries and Notation: The Learning Setup
1.1.2 Generalization and the Bias-Variance Tradeoff
1.1.3 Feature Maps
1.2 Deep Learning
1.2.1 Preliminaries and Notation
1.2.2 The Science of Deep Learning
1.2.3 Generalization in Deep Learning
1.3 Contributions: the Inductive Bias
1.3.1 The Inductive Bias: a Definition
1.3.2 Laziness, or Learning Simple Things First
1.3.3 Simplicity is Not General
1.4 Outline

2 Related Works
2.1 The Quest to Uncover Deep Learning Generalization
2.1.1 Stochastic Gradient Descent (SGD) as a Driver of Generalization
2.1.2 Overparametrization as a Feature
2.1.3 Interpolation is not Equivalent to Overfitting
2.2 Memorization in Deep Learning
2.2.1 Noise Robustness in Deep Learning
2.2.2 Memorization is Secondary
2.3 Priors in Deep Learning
2.3.1 Priors as Network Biases

3 On the Noise Robustness of Deep Learning Models
3.1 Introduction
3.1.1 Benign Noise and Adversarial Noise
3.2 Generalization with High Output Domain Noise
3.2.1 Non-Linear Networks
3.2.2 Linear Networks
3.3 Generalization with High Input and Output Domain Noise
3.3.1 Input Domain Noise as an "Easier" Task
3.3.2 Towards the "Laziness" Property of Deep Neural Networks

4 Learning Simple Things First: On the Inductive Bias in Deep Learning Models
4.1 Introduction
4.2 A Surprising Behavior: Generalization is Oblivious to Fake Images When it Matters
4.2.1 Data Generation: the Gaussian Directions and CIFAR_p
4.2.2 Generalization with Gaussian Directions
4.2.3 Generalization in CIFAR_p
4.3 Data Manifold Awareness
4.3.1 Differential Treatment of Real and Synthetic Images
4.3.2 Towards Identifying the Data Manifold: Unsupervised Learning
4.3.3 Towards Inductive Bias: Low Dimensional Compression
4.4 Learning Simple Things First
4.4.1 Data Generation: the Linear/Quadratic Dataset
4.4.2 The Simplicity Bias: A Proof of Concept
4.5 Laziness: a Force that Drives Generalization

5 Inductive Bias through Priors: Simplicity is Preconditioned by Priors
5.1 Introduction
5.1.1 Priors as a Summary of Initial Beliefs
5.1.2 Priors in Deep Learning
5.1.3 Priors Matter for Deep Learning
5.2 Simplicity, or Proximity to the Prior
5.2.1 Bias through Non-Linear Activations
5.2.2 Bias through Architecture
5.2.3 Feature Engineering through Priors

6 Conclusion

List of Figures

3-1 Adversarial example. The initial image (left) is correctly classified as a panda whereas the perturbed image (right) is classified as a gibbon, even though it looks exactly like the initial one to the human eye [GSS14].
3-2 Test accuracy on true-label test points in the uniform-label MNIST dataset. The generalization error stays relatively low until very high values of α (~50), then drops sharply. We attribute the drop to difficulty in optimization rather than a fundamental limitation of the training process. (A sketch of this dataset's construction follows this figure list.)
3-3 Test accuracy on true-label test points in the uniform-label CIFAR10 dataset. The generalization accuracy drops slowly but stays relatively high for high noise levels.
3-4 Test accuracy on true-label test points in the uniform-label MNIST dataset, with a linear model. The model is very robust to noise and its generalization accuracy is affected minimally.
3-5 Test accuracy on true-label test points in the white-noise MNIST and CIFAR10 datasets. The added noisy images have no effect on the generalization accuracy. The accuracy on the uniform-label dataset is added for comparison.
4-1 Images obtained after adding random Gaussian directions to CIFAR10 images, with values of ε increasing from left to right: 0, 50, 500, 5000. For small ε the images are modified negligibly.
4-2 Test accuracy vs. ε for the Gaussian Directions dataset with α = 9. Past ε = 45 the test accuracy matches the accuracy obtained on the CIFAR10 dataset without any augmentation.
4-3 Training run on a Gaussian Directions dataset with α = 9 and ε = 45. The network treats the real and fake images as two distinct entities: it learns on the true dataset first to reach good training-set performance, then starts memorizing the fake labels.
4-4 The Gaussian Directions dataset. True training samples (blue) are surrounded by a number of generated data points (red). (A generation sketch follows this figure list.)
4-5 Training run on a CIFAR_0.5 dataset. As in the Gaussian Directions case, the network learns on the true dataset first.
4-6 PCA analysis of the activations at the last hidden layer. The top images show the activations for the entire test dataset; the bottom images show the activations for real images (x) with their fake counterparts (o). There is very little variation along the first three PCs for the fake data: the neural network maps the fake data to a very restricted subspace. (An analysis sketch follows this figure list.)
4-7 PCA analysis of the activations at the last hidden layer (single-component view). The fake-input activations are significantly concentrated, whereas the real inputs exhibit high variance.
4-8 The Linear/Quadratic dataset. The image on the left shows the four different types of data; the image on the right shows their assigned labels.
4-9 Train accuracies on the Linear/Quadratic dataset. The training accuracy grows first for the L points, which require a simpler classifier.
5-1 Train and test accuracies of the comparative run for ReLU and Quad activations. The linear dataset is easier for ReLU, and the quadratic dataset is easier for Quad.
5-2 Train and test accuracies of the comparative run for max-pool and no-max-pool networks. The network without max-pooling layers achieves high train and test accuracy faster than the network with the pooling layers.
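The chapter 3 captions (Figures 3-2 through 3-4) refer to "uniform label" datasets, where the clean training set is augmented with points whose labels are drawn uniformly at random and α controls the amount of noise added relative to the clean data. The exact construction is defined in the thesis itself, not on this page; the following is a minimal sketch under that reading.

```python
# Hedged sketch of a "uniform label" dataset: append alpha noisy copies of each
# clean image, each carrying a uniformly random label. This is an assumed
# reading of the captions, not the thesis code.
import numpy as np

def uniform_label_dataset(images, labels, alpha=1, num_classes=10, seed=0):
    """Append alpha copies of each image, each with a uniformly random label."""
    rng = np.random.default_rng(seed)
    noisy_x = np.concatenate([images] * alpha)
    noisy_y = rng.integers(num_classes, size=len(noisy_x))
    return np.concatenate([images, noisy_x]), np.concatenate([labels, noisy_y])
```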
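The captions of Figures 4-1 and 4-4 describe how the Gaussian Directions dataset is built: each real CIFAR10 training point is surrounded by fake points obtained by stepping along a random Gaussian direction of norm ε, each carrying a random label. As a minimal sketch of that construction, assuming numpy image arrays and borrowing the parameter names alpha and eps from the captions:

```python
# Hypothetical sketch of the Gaussian Directions construction (Figures 4-1, 4-4):
# alpha fake points per real point, each displaced by a random direction rescaled
# to norm eps and given a random fake label. Names are illustrative.
import numpy as np

def gaussian_directions_dataset(images, labels, alpha=9, eps=45.0,
                                num_classes=10, seed=0):
    """Augment (images, labels) with alpha randomly-labeled fake points per image."""
    rng = np.random.default_rng(seed)
    fake_images, fake_labels = [], []
    for x in images:
        for _ in range(alpha):
            d = rng.standard_normal(x.shape)       # random Gaussian direction
            d *= eps / np.linalg.norm(d)           # rescale the step to norm eps
            fake_images.append(x + d)              # fake image near the real one
            fake_labels.append(rng.integers(num_classes))  # random fake label
    xs = np.concatenate([images, np.stack(fake_images)])
    ys = np.concatenate([labels, np.array(fake_labels)])
    return xs, ys
```

With α = 9 and small ε the fake points sit close to the real data; per the captions, past ε = 45 they stop affecting test accuracy altogether.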
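Similarly, Figures 4-6 and 4-7 describe a PCA analysis of the activations at the last hidden layer, comparing real and fake inputs. A hedged sketch of that kind of probe is below; `model_activations`, which should return the last hidden layer's activations for a batch of inputs, is an assumed helper and not part of the thesis code.

```python
# Sketch of the PCA probe behind Figures 4-6/4-7: project last-hidden-layer
# activations onto their top principal components; low variance of the fake
# inputs' projections would indicate they map to a restricted subspace.
import numpy as np

def top_pc_projection(activations, k=3):
    """Project activations, shape (n_samples, n_features), onto their top-k PCs."""
    centered = activations - activations.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt are PCs
    return centered @ vt[:k].T

# Assumed usage, with the hypothetical model_activations helper:
#   real_proj = top_pc_projection(model_activations(real_images))
#   fake_proj = top_pc_projection(model_activations(fake_images))
#   print(real_proj.var(axis=0), fake_proj.var(axis=0))
```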
Recommended publications
  • Incorporating Prior Domain Knowledge into Inductive Machine Learning: Its Implementation in Contemporary Capital Markets
  • The Dangers of Algorithmic Autonomy: Efficient Machine Learning Models of Inductive Biases Combined with the Strengths of Program Synthesis (PBE) to Combat Implicit Biases of the Brain
  • Cognitive Biases and Interpretability of Inductively Learnt Rules
  • Data Augmentation and Image Understanding (PhD thesis, Institute of Cognitive Science, University of Osnabrück)
  • Causality for Machine Learning (arXiv:1911.10500)
  • Description of Structural Biases and Associated Data in Sensor-Rich Environments
  • Inductive Bias
  • (Advanced) Natural Language Processing (Jacob Andreas, MIT 6.806-6.864, Spring 2020)
  • The Induction Problem: a Machine Learning Vindication Argument
  • Can Neural Networks Acquire a Structural Bias from Raw Linguistic Data? (arXiv:2007.06761)
  • Overcoming Bias in DNA Mixture Interpretation
  • Towards Unbiased Artificial Intelligence: Literary Review of Debiasing Techniques