Building the Second Mind, 1961-1980: From the Ascendancy of ARPA to the Advent of Commercial Expert Systems. Copyright 2013 Rebecca E. Skinner


Building the Second Mind, 1961-1980: From the Ascendancy of ARPA to the Advent of Commercial Expert Systems
Copyright 2013 Rebecca E. Skinner
ISBN 978-0-9894543-4-6

Foreword

Part I. Introduction
Preface
Chapter 1. Introduction: The Status Quo of AI in 1961

Part II. Twin Bolts of Lightning
Chapter 2. The Integrated Circuit
Chapter 3. The Advanced Research Projects Agency and the Foundation of the IPTO
Chapter 4. Hardware, Systems and Applications in the 1960s

Part III. The Belle Epoque of the 1960s
Chapter 5. MIT: Work in AI in the Early and Mid-1960s
Chapter 6. CMU: From the General Problem Solver to the Physical Symbol System and Production Systems
Chapter 7. Stanford University and SRI

Part IV. The Challenges of 1970
Chapter 8. The Mansfield Amendment, "The Heilmeier Era", and the Crisis in Research Funding
Chapter 9. The AI Culture Wars: The War Inside AI and Academia
Chapter 10. The AI Culture Wars: Popular Culture

Part V. Big Ideas and Hardware Improvements in the 1970s
Chapter 11. Hardware, Software, and Applications in the 1970s
Chapter 12. AI at MIT in the 1970s: The Semantic Fallout of NLR and Vision
Chapter 13. Big Ideas in the 1970s
Chapter 14. Conclusion: The Status Quo in 1980
Chapter 15. Acknowledgements

Bibliography
Endnotes

Foreword to the Beta Edition

This book continues the story initiated in Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing, carrying it through the fortunate phase of the second decade of AI computing. How quickly we forget the time during which the concept of AI itself was considered a strange and impossible effort! The successes and rapid progress of AI software programs, along with related work in adjacent fields such as robotics and complementary fields such as hardware, are thus our theme and the meat of our narrative.

In provenance, the entire Building the Second Mind effort traces back to the author's Ph.D. dissertation, which studied the commercialization of AI during the 1980s. Curiously, portions of that original dissertation now make up several chapters of the third volume. This work is neither highly technical nor entirely comprehensive: my interests are skewed in favor of knowledge representation. It draws heavily upon both academic and journalistic nonfiction sources regarding AI. Much work in AI is necessarily specialist, but the value of synoptic works should not be underestimated. Everyone is someone's popularizer.

New technological advances allow alterations to traditional publishing conventions, leaving social and professional convention to adjust. Paper publishing schedules and the pace of review have historically slowed publication and precluded revisions. This author instead asks for editorial input from the Cloud for this Beta edition. This will result in far more reviews from a larger pool of knowledgeable people, and it will make it far easier for this text to include hyperlinks, references to oral histories and to museum and scholarly websites, as well as corrections and editorial improvements. I will take comments for three months and then release a revised ePub edition later in 2012.
Preface

Building the Second Mind, 1961-1980: From the Ascendancy of ARPA to the Advent of Commercial Expert Systems tells the story of the development, during the 1960s and 1970s, of AI, the field that sought to get computers to do things that would be considered intelligent if a person did them. In the late 1950s, the field was founded and began to undertake extremely rudimentary logic and problem-solving programs. In the 1960s, the immense growth in funding given to computing, the development of integrated circuits for mainframe computing, and the increasing numbers of people in AI and in computer science in general advanced the field tremendously. This is evidenced in more complex problem-solving, the development of an understanding of knowledge representation, and the appearance of fields such as graphical representation and problem-solving for more knowledge-intensive domains. Finally, the early integrated circuits of the 1960s gave way to microprocessors and microcomputers in the early 1970s, heralding a near future in which computers would be increasingly fast and ubiquitous. In a clear virtuous cycle, the falling cost of processing power and storage would encourage the proliferation of applications.

This work is the sequel to Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing, which studied the field from the distant prehistory of abstract thinking through its early formation as a field of study in the mid-1950s. Watching the advances of the 1960s and 1970s offers a satisfying vindication of the early efforts of AI's founders: John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon.

Enthralled with Technology

"O brave new world, that has such people in it!" Aldous Huxley, Brave New World (quoting Shakespeare's The Tempest).

The early 1960s was an expansive, ebullient era. The confrontation and showdown of the United States with Cuba, and by proxy with the USSR, during the Cuban Missile Crisis of October 1962 proved that the USA could at least hold its own and keep the Soviets from maintaining a military presence in the Western Hemisphere. The NASA mission to land a man on the Moon, initiated in 1961 and achieved in 1969, was one of the most memorable events of both the Cold War and scientific and technical history. The conquest of diseases through vaccines continued. The cultural fascination with the products of technological modernity continued as well: new materials in the form of novel fabrics and building materials, and jarring and atonal music and art, ensured that cultural enlightenment included an element of discomfort and shock. High Modernist architecture provided exposed metal and glass, with enough sharp edges and right angles to fill a geometry textbook. Freeways plowed through cities: an elevated highway wrecked the aesthetics of the San Francisco waterfront until an earthquake took it down, and a highway was nearly run through the dense streets of Greenwich Village in New York City. The sleek and metallic and manufactured and aerodynamic were lauded; the quaint and candlelit and rustic had no place. Modernism, including faith in AI and the progress offered by computation, would see profound challenges. Some of the central cultural shocks that began to break Modernism apart had not yet taken place. Jane Jacobs's influential masterwork The Death and Life of Great American Cities appeared in 1961 (1), and Rachel Carson's Silent Spring was published in 1962 (2).
Jacobs helped to bring about the new urbanist movement of densely populated, village-like urban centers, and Silent Spring decried environmental pollution. Both helped to attach caveats to technological growth, but the early 1960s saw only the beginning of questions about its munificence. Building the Second Mind will be an advocate of computer technology as an augmentation technology for human abilities and as a research technology for cognitive science. General belief in, if not blind adherence to, scientific progress is an occupational requirement for writing this work. However, the caveats to technological progress issued by such challenges to Modernism are significant, and will be considered later in this book.

The Context of AI's Progress

Artificial Intelligence was without a doubt one of the great triumphs of postwar technological optimism. During the 1960s and 1970s, the field of AI developed tremendously. It started to devise ways to represent visual data by computation, as in the 1966 MIT Summer Vision Project; began to examine the mysteries of creativity (Simon article); tore apart sentences for their meaning (Schank's conceptual primitives, Bobrow's algebra word-problem parser); and developed means to represent high-level scientific problem-solving and classification knowledge (Buchanan and Feigenbaum's Dendral program), among other things.

AI's earlier history, in the years of the postwar research and development frenzy, had been more precarious. After the Second World War, not only did the idea of making machines think seem strange and disturbed; it had often been perceived as morally bad, the whole concept sinister. The appearance of arguments in its favor, or at least those urging measured equivocation, was a welcome salve. But by the mid- and late 1960s, the situation was immensely different. The field had achieved success with chess-playing programs, with the Logic Theorist, with calculus-solving programs, with early robotic arms, and most recently with the Dendral program (which inferred the molecular structures of organic compounds from mass spectrometry data). With the appearance of the Advanced Research Projects Agency (ARPA, known as DARPA starting in 1972), AI had attained research centers, tenure-track professors, and influence in money-intensive timesharing projects. Throughout the 1970s, the tasks the field was capable of became far more erudite, as better forms of abstraction and embodiment of knowledge were created. As Edward Feigenbaum, a Stanford researcher who will be important to this book, said, the field had a great deal of low-hanging fruit, easily attained, to pick. This book tells this story.

2. Themes

The thematic concerns of this work should not overshadow its historical nature. We recognize that there is a voluminous and enthusiastic body of knowledge regarding the history and sociology of technology and science. Our intended contribution to this body of knowledge is first and foremost expository rather than theoretical. We will revisit, and in several cases repeat, the themes of the last book. In other cases they are revised because the lessons of history have changed. First, the successive successes of AI are salient in their most unalloyed form in this phase of the work.