1956 and the Origins of Artificial Intelligence Computing

UC Berkeley Previously Published Works

Title: Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing
Author: Skinner, Rebecca Elizabeth
ISBN: 9781476357638
Publication Date: 2012-05-01
Permalink: https://escholarship.org/uc/item/88q1j6z3

Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing
Rebecca E. Skinner
Copyright 2012 by Rebecca E. Skinner
ISBN 978-0-9894543-1-5

Contents

Chapter .5. Preface
Chapter 1. Introduction
Chapter 2. The Ether of Ideas in the Thirties and the War Years
Chapter 3. The New World and the New Generation in the Forties
Chapter 4. The Practice of Physical Automata
Chapter 5. Von Neumann, Turing, and Abstract Automata
Chapter 6. Chess, Checkers, and Games
Chapter 7. The Impasse at the End of the Cybernetic Interlude
Chapter 8. Newell, Shaw, and Simon at the Rand Corporation
Chapter 9. The Declaration of AI at the Dartmouth Conference
Chapter 10. The Inexorable Path of Newell and Simon
Chapter 11. McCarthy and Minsky Begin Research at MIT
Chapter 12. AI's Detractors Before Its Time
Chapter 13. Conclusion: Another Pregnant Pause
Chapter 14. Acknowledgements
Chapter 15. Bibliography
Chapter 16. Endnotes

Chapter .5. Preface

Introduction

Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing is a history of the origins of AI. AI, the field that seeks to do things that would be considered intelligent if a human being did them, is a universal of human thought, developed over centuries. Various efforts to carry it out appear in the forms of robotic machinery and of more abstract tools and systems of symbols intended to artificially contrive knowledge. The latter sounds like alchemy, and in a sense it certainly is: there is no gold more precious than knowledge. That this is a constant historical dream, deeply rooted in the human experience, is not in doubt. However, it remained no more than a dream until the machinery that could put it into effect was relatively cheap, robust, and available for ongoing experimentation. With the invention of the digital computer during the years leading up to and including the Second World War, AI became a tangible possibility: software that used symbols to enact the steps of problem-solving could be designed and executed. However, envisioning our possibilities when they are in front of us is often a more formidable challenge than bringing about their material reality. AI in the general sense of intelligence cultivated through computing had been discussed with increasing confidence through the early 1950s. As we will see, bringing it into reality as a concept took repeated hints, fits, and starts until it finally appeared as such in 1956.

Our story is an intellectual saga with several supporting leads, a large peripheral cast, and the giant sweep of postwar history in the backdrop. There is no single 'great man' in this opus. As far as the foundation of AI is concerned, all of the founders were great. Even the peripheral cast was composed of people who were major figures in other fields. Nor, frankly, is there a villain.

Themes and Thesis

The book tells the story of the development of the cognitive approach to psychology, of computer science (software), and of software that undertook to do 'intelligent' things at mid-century.
To this end, I study the early development of computing and psychology in the middle decades of the century, ideas about 'Giant Brains', and the formation of the field of study known as AI. Why did this particular culture spring out of this petri dish, at this time? In addition to 'why', I consider the accompanying where, how, and who. This work is expository: I am concerned with the enrichment of the historical record. Notwithstanding the focus on the story, the author of necessity participates in the thematic concerns of historians of computer science. Several themes draw our attention.

The role of the military in the initial birth and later development of the computer and its ancillary technologies should not be erased, eroded, or diminished. Make no mistake: war is abhorrent. But sustained nation-building and military drives can yield staggering technological advances, and war is a powerful driver of technological innovation (1). This is particularly the case with the development of 'general-purpose technologies', that is, those which present an entirely new way of processing material or information (2). Such technologies of necessity create new industries and destroy old ones, along with means of locomotion, generation of energy, and processing of information (steel rather than iron, the book, the electric generator, the automobile, the digital computer). In the process, these fundamental technologies bring about new forms of communication, cultural activities, and numerous ancillary industries. We repeat, for effect: AI is the progeny of the Second World War, as is the digital computer, the microwave oven, the transistor radio and portable music devices, desktop and laptop computers, cellular telephones, the iPod, the iPad, computer graphics, and thousands of software applications. The theory of the Cold War's creative power and fell hand in shaping the Computer Revolution is prevalent in the current academic discourse on this topic: that paradoxical creative power cannot be denied (3).

The role of the Counterculture in creating the Computer Revolution is affectively appealing. In its strongest form, this theory holds that revolutionary hackers created most, if not all, of the astonishing inventions in computer applications (4). For many of the computer applications of the Sixties and Seventies, including games, software systems, security, and vision, this theory holds a good deal of force. However, the period under discussion in this book refutes the larger statement: the thesis has its chronology backwards. The appearance of the culturally revolutionary 't-shirts' was preceded by a decade and a half of hardware, systems, and software language work by the culturally conservative 'white-shirts'. (Throughout the Fifties, IBM insisted on a dress code of white shirts, worn by white Protestant men) (5). Yet there is one way in which those who lean heavily on the cultural aspect of the Computer Revolution are absolutely correct: an appropriate and encouraging organizational culture was also essential to the development of the computer in every aspect, along the course of the entire chronology.

This study emphasizes instead the odd mélange of different institutional contexts in computing, and how they came together to pursue one general endeavor. AI in its origins started with individual insights and projects, rather than being cultivated in any single research laboratory. The establishment of AI preceded its social construction.
We could say that AI's initial phase as revolutionary science (or exogenous shock, in the economist's terms) preceded its institutionalization and the establishment of an overall "ecology of knowledge" (6). However, once AI was started, it too relied heavily on its institutional settings. In turn, the founders of AI established and cultivated research environments that would continue to foster innovation. The cultivation of such an environment is more evident in the later history of AI than in the tentative movements of the 1950s.

Yet another salient theme of this work is the sheer audacity of the paradigm transition that AI itself entailed. The larger manner of thinking about intelligence as a highly tangible quality, and about thinking as something that could have qualitative aspects to it, required a vast change between the late 1930s and the mid-1950s. As with any such alteration of focus, this required the new agenda to be made visible, and envisioned, while it still seemed like an extreme and far-fetched concept (7).

A final theme is the larger role of AI in the history of the Twentieth century. It is certainly true that the Cold War's scientific and research environment was a 'Closed World' in which an intense, intellectually charged, politically obsessive culture thrived. The stakes involved in the Cold War itself were the highest possible ones; the intensity of its inner circles is thus no surprise. However, in this case, the larger cultural themes of the Twentieth century had created their own 'closed world'. Between them, Marxian political philosophy and the Freudian influence on culture had robbed the arts, politics, and literature of their vitality. This elite literary 'closed world' saw science and technology as aesthetically unappealing and as inevitably hijacked by the political forces that funded research. The resolution of the Cold War, and the transformation of the economy and ultimately of culture by the popularization of computing, would not take place for decades. Yet the overcoming of the cultural impasse of the Twentieth century would be a long-latent theme in which AI and computing would later play a part (8).

Outline of the Text

In Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing, we examine the way in which AI was formed at its start, its originators, the world they lived in and how they chose this unique path, the computers that they used and the larger cultural beliefs about those machines, and the context in which they managed to find both the will and the way to achieve this. 1956 was the tipping point, rather than the turning point, for this entry into an alternative way of seeing the world. The chapter outline delineates the book's course. The Introduction and Conclusion chapters frame the book, discuss history outside the time frame covered in BTSM, and establish AI as a constant in world intellectual history (Chapter One). The other chapters are narrative, historical, and written within the context of their time.