January

MTAC Begins
Jan. 1943

The first computing journal was probably "Mathematical Tables and Other Aids to Computation" (MTAC), which was founded by Raymond Clare Archibald in Washington D.C. during this month.

As the name suggests, it initially focussed on maths, but also found space to publish the landmark article, "The Electronic Numerical Integrator and Computer (ENIAC)" by Herman H. Goldstine [Sept 13] and Adele Goldstine [Dec 21] in July 1946.

By 1960, reflecting the increasing obsolescence of tables, the journal changed its name to "Mathematics of Computation".

ERA Founded
Jan. 1946

During WWII, code-breaking work in the US Navy was run by a clandestine group with the deliberately vague title "Communications Supplementary Activity – Washington" (CSAW). For example, CSAW was responsible for building versions of the UK's Colossus [Jan 18] for breaking Japanese codes.

After the war, budgets were cut for most military projects, including CSAW, and the Navy was worried that the group's expertise would be lost. The answer was private enterprise – Engineering Research Associates, Inc. (ERA) was formed in Jan. 1946. It was based in the hangars of a former aircraft factory in St. Paul, Minnesota.

The technical side of ERA was headed by Howard Engstrom, William Norris [July 14], Ralph Meader, and around forty other former members of CSAW.

In 1947 the Navy awarded ERA the "Task 13" contract to build the ERA Atlas for the NSA [Oct 24]. In 1950 ERA started selling this machine commercially as the ERA 1101 [Dec 10], 1101 being binary for 13, of course.

Seymour Cray [Sept 28] joined the company in 1951, and his first design credit was the ERA 1103 [Oct 00].

In 1952, Remington Rand [Jan 25] acquired ERA, and continued to sell the 1101, although now as the "UNIVAC 1101"; naturally, the 1103 became the "UNIVAC 1103".

The ERA group within Remington maintained close ties to the NSA, creating the "Bogart" for them in 1954. It was the first computer to employ solid state diodes, and also used core memory [May 11]. Disappointingly, it wasn't named after the actor Humphrey Bogart, but John B. Bogart, city editor of The New York Sun newspaper. Bogart is chiefly remembered for the quote: "When a dog bites a man, that's not news. But if a man bites a dog, that's news."

The Anacom
Jan. 1948

Westinghouse's Edwin L. Harder led the team that built the first general-purpose analog computer, the Anacom (short for ANAlog COMputer). A description of the device by Harder and G.D. McCann appeared in the Jan. 1948 issue of AIEE Transactions.

Harder perusing a prototype Anacom (1946). Photo by Edwin Harder.

The Anacom comfortably filled a 40-foot long room. It noisily employed mechanical relays until 1953, when the machine was upgraded to use vacuum tube-based switches.

Before the rise of digital computers, the Anacom was the workhorse calculating device at Westinghouse, used for oil-flow problems, nuclear reactor design, and other problems involving differential equations.

The Anacom continued to be employed until the end of the 1980's for analyzing nonlinear electric power systems, although it became increasingly unique. Vannevar Bush's [March 11] much better known differential analyzers [June 23] were all decommissioned by the early 1960's.

For the oldest working digital computer, see [April 00].

Faster Than Thought
Jan. 1953

The 'popular' textbook, "Faster than Thought: A Symposium on Digital Computing Machines," edited by Bertram Vivian Bowden (later Baron Bowden), was published in the UK. Bowden is sometimes called England's first computer salesman due to his involvement in promoting the Ferranti Mark 1 [Feb 12].

The book's preface begins: "During the last year or two most people must have heard of the remarkable devices often called "Electronic Brains"; every schoolboy knows that there are in existence some very complicated machines which are capable of astounding feats of arithmetic. This book contains descriptions of several of these monsters…"

Incidentally, the use of "electronic brain" had become popular after a speech by Lord Louis Mountbatten [Oct 31] in 1946.

"Faster than Thought" wasn't the first 'popular' book on digital computers (e.g. see [Feb 22], [March 27], and [Nov 26]), but it was remarkable for its range of contributors, a stellar cast of mostly British researchers, who contributed 26 chapters covering the history of computing and current application areas. They included John Bennett [July 31], Tom Kilburn [Aug 11], Christopher Strachey [Nov 16], Alan Turing [June 23], Maurice Wilkes [June 26], and Frederic Williams [June 26].

Turing and Strachey collaborated on a chapter about games, which looked at chess [June 25], draughts (checkers), and Nim (specifically the inner workings of the Nimrod [May 5]).

Appendix 1 was a copy of Ada Lovelace's [Dec 10] "Sketch of the Analytical Engine Invented by Charles Babbage. with Notes by the Translator" [July 10], the first account in English of Babbage's [Dec 26] Analytical Engine [Dec 23].

The book remained in print until 1968.

George in the Whirlwind
Jan. 1954

The "Algebraic System" (sometimes known as "George") was perhaps the first compiler for a "high-level" language, in that it translated mathematical formulae into machine code. "George" was implemented by J. Halcombe Laning and Neal Zierler on the Whirlwind [April 20], and described in "A Program for Translation of Mathematical Equations for Whirlwind I." John Backus [Dec 3] called it "an elegant concept elegantly realized."

Laning and Zierler were members of Charles Adams' [Feb 6] Science and Engineering Computation Group at MIT, which was responsible for many of the programming firsts associated with the Whirlwind.

Other possibilities for first compiler are those for Hopper's A-2 [May 00] and IBM's Speedcoding [Sept 9]. Another caveat is that "George" wasn't a general-purpose programming language, instead focusing on solving algebraic equations.

Unfortunately, "George" could generate code that took ten times longer to run than hand-crafted machine code for the same task. It was only with FORTRAN [Dec 00] that this problem of speed vs. abstraction was solved.

RECOMP II is Portable
Jan. 1958

The Autonetics RECOMP II was an early transistorized computer, which was proudly advertised as being 'portable'. However, the computer weighed around 200 pounds, and was 4.7 cubic feet large. Tellingly, the ads showed two men carrying it across a building site.

Part of an Autonetics RECOMP II ad (1958). Evan Koblentz. (c) North American Aviation, Inc.

It was designed to sit under a (big) desk, and came with several largish peripherals that sat on top of the desk – a tape reader, typewriter, tape punch, and console.

The RECOMP II may have been the first commercial transistorized computer, but the IBM 608 [Oct 7] probably shipped first, in Dec. 1957, although it was marketed as a calculator. IBM's first transistorized stored-program 'computer' was the IBM 7070 from 1960, introduced as part of the 7000 series [April 26].

For those of you wondering about the "II", the RECOMP I was designed for the military, and completed the previous year.

For the world's first 'mobile' computer, in the very general sense of being able to move about, see the DYSEAC [April 00]. For the first mass-produced portable microcomputer, see [April 3].

RPG Introduced
Jan. 1961

In 1959, IBM assigned the task of designing software for ordinary business users to Barbara Wood and Bernard Silkowitz. Their answer was the "Report Program Generator" (RPG), introduced a few months after the first IBM 1401's [Oct 5] had shipped.

A user filled out "specification sheets" for a business problem, such as a payroll calculation, which listed the input, the output format, and the calculation to be executed in between.

RPG was part of an attempt to move customers away from IBM Electric Accounting Machine (EAM) equipment, towards computers. IBM's flagship EAM product was the 407, which goes some way to explaining RPG's design. RPG paralleled how a user had to wire a 407's control panel, which had specific areas for input, calculations, and output.

Dial F for Frankenstein
Jan. 1964

"Dial F For Frankenstein", a short story by Arthur C. Clarke [Dec 16], appeared in the Jan. 1964 issue of Playboy magazine. It recounts how a complex telephone network becomes sentient, and thereafter causes global chaos. Namely, "At 0150 GMT on December 1, 1975, every telephone in the world started to ring"! The next day sees chaos all over – radio stations shutting down, stock markets and banks shutting down, traffic signaling systems down, the electricity grid behaving erratically, military weapons launched without human authorization, planes almost crashing, …

At some point, Tim Berners-Lee [June 8] read the story (perhaps in H.

… for talking about a minicomputer is probably due to DEC's John Leng [Aug 26], by way of fashion designer, Mary Quant.

… called AI winter [Oct 28] of the late 1980's.

Perceptrons Published
Jan. 1969

"Perceptrons: an Introduction to Computational Geometry" was written by Marvin Minsky [Aug 9] and Seymour Papert [Feb 29]. It should not be confused with the revised edition from 1987, which spent a considerable number of pages addressing the criticisms of this first edition.

In particular, Minsky and Papert rejected the use of multi-layer neural nets, which they termed a "sterile" extension of the perceptron idea. This was before it was realized just how powerful such multi-layer extensions actually were.

The book's hypnotic cover (pink spirals on a neon red background) refers to one of the perceptron's limitations – defining a function that correctly determines a shape's connectedness.

Frank Rosenblatt [July 7] published the first paper on perceptrons in 1958. He and Minsky knew each other at the Bronx High School of Science.

Minsky later compared the first edition of his and Papert's book to the fictional "Necronomicon"
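The ERA Founded entry notes in passing that 1101 is binary for 13, which is how the "Task 13" machine got its commercial name. A one-line Python check of that arithmetic:

```python
# 13 = 8 + 4 + 0 + 1, so its binary digits are 1101 --
# the model number ERA chose for the Task 13 machine.
assert format(13, "b") == "1101"
print(format(13, "b"))  # -> 1101
```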
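The George entry describes a compiler whose whole job was translating mathematical formulae into machine code. George's actual Whirlwind translation scheme is not documented in the entry, so as a rough sketch of what such a translator does, here is a toy Python example (using the standard `ast` module; the stack-machine opcodes are invented for illustration):

```python
import ast

# Toy illustration only: translate one infix algebraic expression into
# linear instructions for an imaginary stack machine. This is the general
# shape of "formula -> machine code" translation, not George's real scheme.
OPS = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}

def compile_formula(expr):
    """Walk the expression tree, emitting operands before their operator."""
    code = []
    def walk(node):
        if isinstance(node, ast.BinOp):
            walk(node.left)
            walk(node.right)
            code.append(OPS[type(node.op)])
        elif isinstance(node, ast.Name):
            code.append(f"LOAD {node.id}")
        elif isinstance(node, ast.Constant):
            code.append(f"PUSH {node.value}")
        else:
            raise ValueError("unsupported construct")
    walk(ast.parse(expr, mode="eval").body)
    return code

print(compile_formula("a + b * x"))
# -> ['LOAD a', 'LOAD b', 'LOAD x', 'MUL', 'ADD']
```

Even a sketch this small hints at the speed problem the entry mentions: a mechanical translation emits a fixed instruction pattern per operator, where a human coder would exploit registers and special cases.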
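RPG's specification-sheet model, as the entry describes it, had the user declare the input, the calculation, and the output format rather than write procedural code. A loose Python sketch of that declarative idea (the field names and sheet layout here are invented for illustration, not IBM's):

```python
# Loose sketch of RPG's declarative style: a "specification sheet" names the
# input fields, one calculation, and an output format; the generator supplies
# the control flow. All names are hypothetical, not IBM's actual sheet format.
spec = {
    "input": ["name", "hours", "rate"],
    "calc": lambda rec: rec["hours"] * rec["rate"],  # payroll-style calculation
    "output": "{name:<10} {pay:>8.2f}",
}

def run_report(sheet, records):
    """Apply the sheet's calculation to each record and format the result."""
    lines = []
    for rec in records:
        pay = sheet["calc"](rec)
        lines.append(sheet["output"].format(name=rec["name"], pay=pay))
    return lines

for line in run_report(spec, [{"name": "Ada", "hours": 40, "rate": 12.5}]):
    print(line)
```

The fixed columnar output format echoes the 407 control panel the entry mentions, with its dedicated areas for input, calculations, and output.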
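The Perceptrons entry notes that multi-layer extensions turned out to be far more powerful than Minsky and Papert allowed. A minimal sketch of why: a single threshold unit cannot compute XOR, a function that is not linearly separable, but two layers of the same units can. The weights below are hand-picked for illustration, not learned:

```python
# Sketch under stated assumptions: weights are hand-set, not trained.
# One threshold unit cannot separate XOR's outputs with a single line,
# but stacking two layers of identical units solves it -- the kind of
# multi-layer extension the book dismissed as "sterile".
def unit(weights, bias, inputs):
    """A perceptron-style threshold unit: fire iff the weighted sum exceeds 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_two_layer(x1, x2):
    h1 = unit([1, 1], -0.5, [x1, x2])     # hidden unit: "at least one input on"
    h2 = unit([1, 1], -1.5, [x1, x2])     # hidden unit: "both inputs on"
    return unit([1, -2], -0.5, [h1, h2])  # output: OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_two_layer(a, b))
```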