Using Global Constraints and Reranking to Improve Cognates Detection

Michael Bloodgood
Department of Computer Science
The College of New Jersey
Ewing, NJ 08628
[email protected]

Benjamin Strauss
Computer Science and Engineering Dept.
The Ohio State University
Columbus, OH 43210
[email protected]

This paper was published in the Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1983-1992, Vancouver, Canada, July 2017. © 2017 Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/P17-1181

Abstract

Global constraints and reranking have not been used in cognates detection research to date. We propose methods for using global constraints by performing rescoring of the score matrices produced by state of the art cognates detection systems. Using global constraints to perform rescoring is complementary to state of the art methods for performing cognates detection and results in significant performance improvements beyond current state of the art performance on publicly available datasets with different language pairs and various conditions such as different levels of baseline state of the art performance and different data size conditions, including with more realistic large data size conditions than have been evaluated with in the past.

1 Introduction

This paper presents an effective method for using global constraints to improve performance for cognates detection. Cognates detection is the task of identifying words across languages that have a common origin. Automatic cognates detection is important to linguists because cognates are needed to determine how languages evolved. Cognates are used for protolanguage reconstruction (Hall and Klein, 2011; Bouchard-Côté et al., 2013). Cognates are important for cross-language dictionary look-up and can also improve the quality of machine translation, word alignment, and bilingual lexicon induction (Simard et al., 1993; Kondrak et al., 2003).

A word is traditionally only considered cognate with another if both words proceed from the same ancestor. Nonetheless, in line with the conventions of previous research in computational linguistics, we set a broader definition. We use the word 'cognate' to denote, as in (Kondrak, 2001): "...words in different languages that are similar in form and meaning, without making a distinction between borrowed and genetically related words; for example, English 'sprint' and the Japanese borrowing 'supurinto' are considered cognate, even though these two languages are unrelated." These broader criteria are motivated by the ways scientists develop and use cognate identification algorithms in natural language processing (NLP) systems. For cross-lingual applications, the advantage of such technology is the ability to identify words for which similarity in meaning can be accurately inferred from similarity in form; it does not matter if the similarity in form is from strict genetic relationship or later borrowing (Mericli and Bloodgood, 2012).

Cognates detection has received a lot of attention in the literature. Research on the use of statistical learning methods to build systems that can automatically perform cognates detection has yielded many interesting and creative approaches for gaining traction on this challenging task. Currently, the highest-performing state of the art systems detect cognates based on the combination of multiple sources of information. Some of the most indicative sources of information discovered to date are word context information, phonetic information, word frequency information, temporal information in the form of word frequency distributions across parallel time periods, and word burstiness information. See section 3 for fuller explanations of each of these sources of information that state of the art systems currently use. Scores for all pairs of words from language L1 × language L2 are generated by generating component scores based on these sources of information and then combining them in an appropriate manner. Simple methods of combination give equal weighting to each score, while state of the art performance is obtained by learning an optimal set of weights from a small seed set of known cognates. Once the full matrix of scores is generated, the word pairs with the highest scores are predicted as being cognates.
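To make the score-combination step just described concrete, here is a minimal sketch (our illustration, not code from the paper; the component scorers, word lists, and weights are hypothetical stand-ins) that combines several component score matrices with a weighted sum and ranks word pairs by the combined score. With uniform weights it reduces to the simple equal-weighting scheme; the learned-weight variant would instead fit the weights on a small seed set of known cognates.

```python
import numpy as np

def combine_scores(component_matrices, weights):
    """Combine per-source score matrices into one |L1| x |L2| matrix.

    component_matrices: one matrix per information source (e.g. context,
    phonetic, frequency, temporal, burstiness similarity scores).
    weights: one weight per component; uniform weights give the simple
    equal-weighting combination described above.
    """
    combined = np.zeros_like(np.asarray(component_matrices[0]), dtype=float)
    for weight, matrix in zip(weights, component_matrices):
        combined += weight * np.asarray(matrix, dtype=float)
    return combined

def top_pairs(score_matrix, l1_words, l2_words, k=10):
    """Return the k word pairs with the highest combined scores."""
    flat = np.argsort(score_matrix, axis=None)[::-1][:k]
    rows, cols = np.unravel_index(flat, score_matrix.shape)
    return [(l1_words[i], l2_words[j], score_matrix[i, j])
            for i, j in zip(rows, cols)]

# Toy usage with two hypothetical component scorers and equal weights.
rng = np.random.default_rng(0)
context_scores = rng.random((3, 4))    # stand-ins for real component scores
phonetic_scores = rng.random((3, 4))
scores = combine_scores([context_scores, phonetic_scores], weights=[0.5, 0.5])
print(top_pairs(scores, ["nuit", "eau", "chat"],
                ["night", "water", "cat", "dog"], k=3))
```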
The methods we propose in the current paper consume as input the final score matrix that state of the art methods create. We test if our methods can improve performance by generating new rescored matrices, rescoring all of the pairs of words by taking into account global constraints that apply to cognates detection. Thus, our methods are complementary to previous methods for creating cognates detection systems. Using global constraints and performing rescoring to improve cognates detection has not been explored yet. We find that rescoring based on global constraints improves performance significantly beyond current state of the art levels.

The cognates detection task is an interesting task to apply our methods to for a few reasons:

• It's a challenging unsolved task where ongoing research is frequently reported in the literature trying to improve performance;

• There is significant room for improvement in performance;

• It has a global structure in its output classifications since if a word lemma[1] w_i from language L1 is cognate with a word lemma w_j from language L2, then w_i is not cognate with any other word lemma from L2 different from w_j, and w_j is not cognate with any other word lemma w_k from L1.

• There are multiple standard datasets freely and publicly available that have been worked on with which to compare results.

• Different datasets and language pairs yield initial score matrices with very different qualities. Some of the score matrices built using the existing state of the art best approaches yield performance that is quite low (11-point interpolated average precision of only approximately 16%), while the state of the art score matrices for other language pairs and datasets are already able to achieve 11-point interpolated average precision of 57%.

[1] A lemma is a base form of a word. For example, in English the words 'baked' and 'baking' would both map to the lemma 'bake'. Lemmatizing software exists for many languages and lemmatization is a standard preprocessing task conducted before cognates detection.
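The figures in the last bullet are reported as 11-point interpolated average precision, a standard retrieval metric: precision is interpolated at the eleven recall levels 0.0, 0.1, ..., 1.0 and the interpolated values are averaged. Below is a minimal sketch of that computation over a ranked list of candidate pairs; it is our own illustration (variable names are not from the paper) and assumes at least one true cognate pair exists in the gold standard.

```python
def eleven_point_interpolated_ap(ranked_labels, num_relevant):
    """11-point interpolated average precision for one ranked list.

    ranked_labels: 1/0 relevance judgments (here: cognate or not) for the
    predicted pairs, sorted by descending system score.
    num_relevant: total number of true cognate pairs in the gold standard.
    """
    recalls, precisions = [], []
    hits = 0
    for rank, label in enumerate(ranked_labels, start=1):
        hits += label
        recalls.append(hits / num_relevant)
        precisions.append(hits / rank)

    interpolated = []
    for level in [i / 10 for i in range(11)]:   # 0.0, 0.1, ..., 1.0
        # Interpolated precision: best precision at any recall >= level.
        candidates = [p for p, r in zip(precisions, recalls) if r >= level]
        interpolated.append(max(candidates) if candidates else 0.0)
    return sum(interpolated) / 11

# Example: a ranked list where 3 of the first 4 predictions are true cognates.
print(eleven_point_interpolated_ap([1, 0, 1, 1, 0], num_relevant=4))
```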
Although we are not aware of work using global constraints to perform rescoring to improve cognates detection, there are related methodologies for reranking in different settings. Methodologically related work includes past work in structured prediction and reranking (Collins, 2002; Collins and Roark, 2004; Collins and Koo, 2005; Taskar et al., 2005a,b). Note that in these past works, there are many instances with structured outputs that can be used as training data to learn a structured prediction model. For example, a seminal application in the past was using online training with structured perceptrons to learn improved systems for performing various syntactic analyses and tagging of sentences such as POS tagging and base noun phrase chunking (Collins, 2002). Note that in those settings the unit at which there are structural constraints is a sentence. Also note that there are many sentences available so that online training methods such as discriminative training of structured perceptrons can be used to learn structured predictors effectively in those settings. In contrast, for the cognates setting the unit at which there are structural constraints is the entire set of cognates for a language pair and there is only one such unit in existence (for a given language pair). We call this a single overarching global structure to make the distinction clear. The method we present in this paper deals with a single overarching global structure on the predictions of all instances in the entire problem space for a task. For this type of setting, there is only a single global structure in existence, contrasted with the situation of there being many sentences each imposing a global structure on the tagging decisions for that individual sentence. Hence, previous structured prediction methods that require numerous instances each having a structured output on which to train parameters via methods such as perceptron training are inapplicable to the cognates setting. In this paper we present methods for rescoring effectively in settings with a single overarching global structure and show their applicability to improving the performance of cognates detection. Still, we note that philosophically our method builds on previous structured prediction methods since in both cases there is a similar intuition in that we're using higher-level structural properties to inform and accordingly alter our system's predictions of values for subitems within a structure.

In section 2 we present our methods for performing rescoring of matrices based on global constraints such as those that apply for cognates detection. The key intuition behind our approach is that the scoring of word pairs for cognateness ought not be made independently as is currently done, but rather that global constraints ought to be taken into account to inform and potentially alter system scores for word pairs based on the scores of other word pairs. In section 3 we provide results of

where the task is to extract (x, y) pairs such that (x, y) are in some relation R. Here are examples:

• X might be a set of states and Y might be a set of cities and the relation R might be "is the capital of";

• X might be a set of images and Y might be a set of people's names and the relation R might be "is a picture of";

• X might be a set of English words and Y might be a set of French words and the relation R might be "is cognate with".
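As a toy illustration of the key intuition stated above, and of the one-to-one constraint from the earlier bullet list applied to the "is cognate with" relation, the sketch below rescores each entry of a score matrix by how much it beats its strongest competitor in the same row and column. This is our own simplified example under those assumptions, not the rescoring method the paper develops in section 2.

```python
import numpy as np

def competition_rescore(scores):
    """Toy global rescoring of an |L1| x |L2| cognate score matrix.

    Because a lemma in L1 can be cognate with at most one lemma in L2
    (and vice versa), every other entry in the same row or column is a
    competitor. Each score is reduced by its strongest competitor, so a
    pair only keeps a high score if it clearly beats the alternatives
    for both of its words.
    """
    scores = np.asarray(scores, dtype=float)
    n_rows, n_cols = scores.shape
    rescored = np.empty_like(scores)
    for i in range(n_rows):
        for j in range(n_cols):
            row_rivals = np.delete(scores[i, :], j)
            col_rivals = np.delete(scores[:, j], i)
            row_best = row_rivals.max() if row_rivals.size else 0.0
            col_best = col_rivals.max() if col_rivals.size else 0.0
            rescored[i, j] = scores[i, j] - max(row_best, col_best)
    return rescored
```

Under such a rescoring, a pair retains a high score only if it dominates the alternatives for both of its words, which is exactly the kind of global information that independent pairwise scoring ignores.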