1956 and the Origins of Artificial Intelligence Computing


Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing

Rebecca E. Skinner
Copyright 2012 by Rebecca E. Skinner
ISBN 978-0-9894543-1-5

Contents

Chapter .5. Preface
Chapter 1. Introduction
Chapter 2. The Ether of Ideas in the Thirties and the War Years
Chapter 3. The New World and the New Generation in the Forties
Chapter 4. The Practice of Physical Automata
Chapter 5. Von Neumann, Turing, and Abstract Automata
Chapter 6. Chess, Checkers, and Games
Chapter 7. The Impasse at the End of the Cybernetic Interlude
Chapter 8. Newell, Shaw, and Simon at the RAND Corporation
Chapter 9. The Declaration of AI at the Dartmouth Conference
Chapter 10. The Inexorable Path of Newell and Simon
Chapter 11. McCarthy and Minsky Begin Research at MIT
Chapter 12. AI's Detractors Before Its Time
Chapter 13. Conclusion: Another Pregnant Pause
Chapter 14. Acknowledgements
Chapter 15. Bibliography
Chapter 16. Endnotes

Chapter .5. Preface

Introduction

Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing is a history of the origins of AI. AI, the field that seeks to do things that would be considered intelligent if a human being did them, is a universal of human thought, developed over centuries. Various efforts to carry this out appear in the forms of robotic machinery and of more abstract tools and systems of symbols intended to artificially contrive knowledge. The latter sounds like alchemy, and in a sense it certainly is: there is no gold more precious than knowledge. That this is a constant historical dream, deeply rooted in the human experience, is not in doubt. However, it remained no more than a dream until the machinery that could put it into effect became relatively cheap, robust, and available for ongoing experimentation. The digital computer was invented during the years leading to and including the Second World War, and AI became a tangible possibility: software that used symbols to enact the steps of problem-solving could be designed and executed. However, envisioning our possibilities when they are in front of us is often a more formidable challenge than bringing about their material reality. AI in the general sense of intelligence cultivated through computing had been discussed with increasing confidence through the early 1950s. As we will see, bringing it into reality as a concept took repeated hints, fits, and starts until it finally appeared as such in 1956.

Our story is an intellectual saga with several supporting leads, a large peripheral cast, and the giant sweep of postwar history in the backdrop. There is no single 'great man' in this opus. As far as the foundation of AI is concerned, all of the founders were great; even the peripheral cast was composed of people who were major figures in other fields. Nor, frankly, is there a villain.

Themes and Thesis

The book tells the story of the development of the cognitive approach to psychology, of computer science (software), and of software that undertook to do 'intelligent' things during mid-century. To this end, I study the early development of computing and psychology in the middle decades of the century, ideas about 'Giant Brains', and the formation of the field of study known as AI. Why did this particular culture spring out of this petri dish, at this time? In addition to 'why', I consider the accompanying where, how, and who.
This work is expository: I am concerned with the enrichment of the historical record. Notwithstanding the focus on the story, the author of necessity participates in the thematic concerns of historians of computer science. Several themes draw our attention.

The role of the military in the initial birth and later development of the computer and its ancillary technologies should not be erased, eroded, or diminished. Make no mistake: war is abhorrent. But sustained nation-building and military drives can yield staggering technological advances. War is a powerful driver of technological innovation (1). This is particularly the case with the development of 'general-purpose technologies', that is, those which present an entirely new way of processing material or information (2). Such technologies of necessity create new industries and destroy old ones, transforming the means of locomotion, the generation of energy, and the processing of information (steel rather than iron, the book, the electric generator, the automobile, the digital computer). In the process, these fundamental technologies bring about new forms of communication, cultural activities, and numerous ancillary industries. We repeat, for effect: AI is the progeny of the Second World War, as is the digital computer, the microwave oven, the transistor radio and portable music devices, desktop and laptop computers, cellular telephones, the iPod, the iPad, computer graphics, and thousands of software applications. The theory of the Cold War's creative power and fell hand in shaping the Computer Revolution is prevalent in current academic discourse on this topic, and its paradoxically creative role cannot be denied (3).

The role of the Counterculture in creating the Computer Revolution is affectively appealing. In its strongest form, this theory holds that revolutionary hackers created most, if not all, of the astonishing inventions in computer applications (4). For many of the computer applications of the Sixties and Seventies, including games, software systems, security, and vision, this theory holds a good deal of force. However, the period under discussion in this book refutes the larger statement. The thesis has its chronology backwards: the appearance of the culturally revolutionary 't-shirts' was preceded by a decade and a half of hardware, systems, and software-language work by the culturally conservative 'white-shirts'. (Throughout the Fifties, IBM insisted on a dress code of white shirts, worn by white Protestant men) (5).

Yet there is one way in which those who lean heavily on the cultural aspect of the Computer Revolution are absolutely correct. An appropriate and encouraging organizational culture was essential to the development of the computer in every aspect, along the course of the entire chronology. This study emphasizes the odd mélange of a number of different institutional contexts in computing, and how they came together to study one general endeavor. AI in its origins grew from individual insights and projects, rather than being cultivated in any single research laboratory. The establishment of AI preceded its social construction: we could say that AI's initial phase as revolutionary science (or exogenous shock, in the economist's terms) preceded its institutionalization and the establishment of an overall "ecology of knowledge" (6). However, once AI was started, it too relied heavily on its institutional settings.
In turn, the founders of AI established and cultivated research environments that would continue to foster innovation. The cultivation of such an environment is more evident in the later history of AI than in the tentative movements of the 1950s.

Yet another salient theme of this work is the sheer audacity of the paradigm transition that AI itself entailed. The larger manner of thinking about intelligence as a highly tangible quality, and about thinking as something that could have qualitative aspects to it, required a vast change between the late 1930s and the mid-1950s. As with any such alteration of focus, this required the new agenda to be made visible, and envisioned, while it still seemed like an extreme and far-fetched concept (7).

A final theme is the larger role of AI in the history of the Twentieth century. It is certainly true that the Cold War's scientific and research environment was a 'Closed World' in which an intense, intellectually charged, politically obsessive culture thrived. The stakes involved in the Cold War itself were the highest possible ones; the intensity of its inner circles is thus no surprise. However, in this case, the larger cultural themes of the Twentieth century had created their own 'closed world'. Between them, Marxian political philosophy and the Freudian influence on culture had robbed the arts, politics, and literature of their vitality. This elite literary 'closed world' saw science and technology as aesthetically unappealing and inevitably hijacked by the political forces that funded research. The resolution of the Cold War, and the transformation of the economy and ultimately of culture by the popularization of computing, would not take place for decades. Yet the overcoming of the cultural impasse of the Twentieth century would be a long-latent theme in which AI and computing would later play a part (8).

Outline of the Text

In Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing, we examine the way in which AI was formed at its start: its originators, the world they lived in and how they chose this unique path, the computers that they used and the larger cultural beliefs about those machines, and the context in which they managed to find both the will and the way to achieve this. 1956 was the tipping point, rather than the turning point, for this entry into an alternative way of seeing the world.

The chapter outline delineates the book's course. The Introduction and Conclusion chapters frame the book and discuss history outside of the time frame covered in BTSM; the Introduction establishes AI as a constant in world intellectual history (Chapter One). The other chapters are narrative, historical, and written within the context of their time. The intellectual environment of the Thirties and the war years is foreign to us. It lacked computers, and likewise lacked any sort of computational metaphor for intelligence. Studies of intelligence without any real reference to psychology nevertheless abounded (Chapter Two).
Recommended publications
  • Bernard M. Oliver Oral History Interview
    http://oac.cdlib.org/findaid/ark:/13030/kt658038wn (online items available)
    Bernard M. Oliver oral history interview. Call Number: SCM0111. Creator: Oliver, Bernard M., 1916- . Dates: 1985-1986. Physical Description: 0.02 linear feet (1 folder). Language(s): English.
    Summary: Transcript of an interview conducted by Arthur L. Norberg covering Oliver's early life, his education, and work experiences at Bell Laboratories and Hewlett-Packard. Subjects include television research, radar, information theory, organizational climate and objectives at both companies, Hewlett-Packard's associations with Stanford University, and Oliver's association with William Hewlett and David Packard.
    Finding aid prepared by Daniel Hartwig, Stanford University Libraries, November 2010. Repository: Department of Special Collections and University Archives, Green Library, 557 Escondido Mall, Stanford, CA 94305-6064. Email: [email protected]. Phone: (650) 725-1022. URL: http://library.stanford.edu/spc. Reproduction is prohibited; requests to publish or quote must be submitted in writing to the Head of Special Collections and University Archives.
  • "Shakey the Robot" [Videorecording]
    http://oac.cdlib.org/findaid/ark:/13030/kt2s20358k (no online items)
    Guide to "Shakey the Robot" [videorecording]. Call Number: V0209. Creator: Stanford University. Dates: 1995. Physical Description: 0.125 linear feet (1 VHS videotape). Language(s): English.
    Summary: Program of the Bay Area Computer History Perspectives held at SRI International on October 24, 1995, on a 1970 SRI project to create a robot (dubbed Shakey) that used rudimentary artificial intelligence to interact with its surroundings. Speakers were Nils Nilsson, Charles Rosen, Bertram Raphael, Bruce Donald, Richard Fikes, Peter Hart, and Stuart Russell. The video includes clips from the original "Shakey the Robot" movie.
    Finding aid prepared by Daniel Hartwig, Stanford University Libraries, November 2010. Repository: Department of Special Collections and University Archives, Stanford University Libraries. This collection is open for research; requests to publish or quote must be submitted in writing to the Head of Special Collections and University Archives.
  • Artificial Consciousness and the Consciousness-Attention Dissociation
    Harry Haroutioun Haladjian and Carlos Montemayor, "Artificial consciousness and the consciousness-attention dissociation" (review article), Consciousness and Cognition 45 (2016) 210–225. Received 6 July 2016; accepted 12 August 2016.
    Affiliations: Laboratoire Psychologie de la Perception, CNRS (UMR 8242), Université Paris Descartes, Centre Biomédical des Saints-Pères, 45 rue des Saints-Pères, 75006 Paris, France; San Francisco State University, Philosophy Department, 1600 Holloway Avenue, San Francisco, CA 94132 USA.
    Abstract: Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also to reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans.
    Keywords: artificial intelligence; artificial consciousness; consciousness; visual attention; phenomenology; emotions; empathy.
  • Computability and Complexity
    Computability and Complexity. Lecture notes, Herbert Jaeger, Jacobs University Bremen (version of April 5, 2018, based on the Spring 2017 lecture notes).
    1 Introduction
    1.1 Motivation
    This lecture will introduce you to the theory of computation and the theory of computational complexity. The theory of computation offers full answers to the questions:
    • what problems can in principle be solved by computer programs?
    • what functions can in principle be computed by computer programs?
    • what formal languages can in principle be decided by computer programs?
    Full answers to these questions have been found in the last 70 years or so, and we will learn about them. (And it turns out that these questions are all the same question.) The theory of computation is well-established, transparent, and basically simple (you might disagree at first). The theory of complexity offers many insights into questions like:
    • for a given problem / function / language that has to be solved / computed / decided by a computer program, how long does the fastest program actually run?
    • how much memory space has to be used at least?
    • can you speed up computations by using different computer architectures or different programming approaches?
    The theory of complexity is historically younger than the theory of computation; the first surge of results came in the 1960s.
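    The notion of a program "deciding a language" can be made concrete in a few lines. Below is a minimal illustrative sketch, a Turing machine written in Python with a state encoding chosen here for illustration rather than taken from the notes, that decides the language { 0^n 1^n }:

        # A toy Turing machine deciding { 0^n 1^n }; states and symbols are my own.
        def accepts(s):
            tape = list(s) + ["_"]              # "_" marks blank cells
            pos, state = 0, "q0"
            # delta[(state, symbol)] = (next_state, symbol_to_write, head_move)
            delta = {
                ("q0", "0"): ("q1", "X", +1),   # cross off a leading 0
                ("q0", "Y"): ("q3", "Y", +1),   # no 0s left: verify only Ys remain
                ("q0", "_"): ("acc", "_", 0),   # empty input is in the language
                ("q1", "0"): ("q1", "0", +1),   # scan right for a matching 1
                ("q1", "Y"): ("q1", "Y", +1),
                ("q1", "1"): ("q2", "Y", -1),   # cross it off, head back left
                ("q2", "0"): ("q2", "0", -1),
                ("q2", "Y"): ("q2", "Y", -1),
                ("q2", "X"): ("q0", "X", +1),   # back at the frontier, repeat
                ("q3", "Y"): ("q3", "Y", +1),
                ("q3", "_"): ("acc", "_", 0),   # only crossed-off symbols remain
            }
            while state != "acc":
                key = (state, tape[pos])
                if key not in delta:            # no transition defined: reject
                    return False
                state, tape[pos], move = delta[key]
                pos += move
                if pos == len(tape):
                    tape.append("_")            # extend the tape on demand
            return True

        print(accepts("0011"), accepts("0101"))  # True False

    Whether a language admits such a machine at all is exactly the kind of question the notes answer in full.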
  • Historical Perspective and Further Reading 162.E1
    2.21 Historical Perspective and Further Reading
    This section surveys the history of instruction set architectures over time, and we give a short history of programming languages and compilers. ISAs include accumulator architectures, general-purpose register architectures, stack architectures, and a brief history of ARMv7 and the x86. We also review the controversial subjects of high-level-language computer architectures and reduced instruction set computer architectures. The history of programming languages includes Fortran, Lisp, Algol, C, Cobol, Pascal, Simula, Smalltalk, C++, and Java, and the history of compilers includes the key milestones and the pioneers who achieved them.
    Accumulator Architectures
    Hardware was precious in the earliest stored-program computers. Consequently, computer pioneers could not afford the number of registers found in today's architectures. In fact, these architectures had a single register for arithmetic instructions. Since all operations would accumulate in one register, it was called the accumulator, and this style of instruction set is given the same name. (Accumulator: archaic term for register. On-line use of it as a synonym for "register" is a fairly reliable indication that the user has been around quite a while.) For example, EDSAC in 1949 had a single accumulator.
    The three-operand format of RISC-V suggests that a single register is at least two registers shy of our needs. Having the accumulator as both a source operand and the destination of the operation fills part of the shortfall, but it still leaves us one operand short. That final operand is found in memory.
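    The contrast between the two styles is easiest to see side by side. The following is a minimal sketch, with Python standing in for assembly and the register and variable names invented for the example, of computing c = a + b both ways:

        # Accumulator style: one implicit register; each instruction names at most
        # one memory operand.
        memory = {"a": 2, "b": 3, "c": 0}
        acc = memory["a"]                        # LOAD  a
        acc = acc + memory["b"]                  # ADD   b
        memory["c"] = acc                        # STORE c

        # Three-operand register style (as in RISC-V): two sources and a
        # destination, all named explicitly.
        regs = {"x1": 2, "x2": 3, "x3": 0}
        regs["x3"] = regs["x1"] + regs["x2"]     # add x3, x1, x2

        print(memory["c"], regs["x3"])           # 5 5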
  • John McCarthy
    JOHN MCCARTHY: the uncommon logician of common sense. Excerpt from Out of their Minds: the lives and discoveries of 15 great computer scientists by Dennis Shasha and Cathy Lazere, Copernicus Press. August 23, 2004.
    "If you want the computer to have general intelligence, the outer structure has to be common sense knowledge and reasoning." — John McCarthy
    When a five-year-old receives a plastic toy car, she soon pushes it and beeps the horn. She realizes that she shouldn't roll it on the dining room table or bounce it on the floor or land it on her little brother's head. When she returns from school, she expects to find her car in more or less the same place she last put it, because she put it outside her baby brother's reach. The reasoning is so simple that any five-year-old child can understand it, yet most computers can't. Part of the computer's problem has to do with its lack of knowledge about day-to-day social conventions that the five-year-old has learned from her parents, such as don't scratch the furniture and don't injure little brothers. Another part of the problem has to do with a computer's inability to reason as we do daily, a type of reasoning that's foreign to conventional logic and therefore to the thinking of the average computer programmer.
    Conventional logic uses a form of reasoning known as deduction. Deduction permits us to conclude, from statements such as "All unemployed actors are waiters" and "Sebastian is an unemployed actor," the new statement that "Sebastian is a waiter." The main virtue of deduction is that it is "sound" — if the premises hold, then so will the conclusions.
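    The soundness of such a rule is mechanical, which is precisely what makes it programmable. Below is a minimal illustrative sketch (Python, with a toy encoding of rules and facts chosen here, not taken from the excerpt) that forward-chains the syllogism:

        # "All unemployed actors are waiters" + "Sebastian is an unemployed actor"
        # yields "Sebastian is a waiter". The encoding is a toy one of my own.
        rules = [("unemployed_actor", "waiter")]       # "every X is also a Y"
        facts = {("Sebastian", "unemployed_actor")}    # "individual has property"

        changed = True
        while changed:                   # apply rules until no new facts appear
            changed = False
            for x, y in rules:
                for individual, prop in list(facts):
                    if prop == x and (individual, y) not in facts:
                        facts.add((individual, y))
                        changed = True

        print(("Sebastian", "waiter") in facts)        # True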
  • Uluslararası Ders Kitapları Ve Eğitim Materyalleri Dergisi
    Uluslararası Ders Kitapları ve Eğitim Materyalleri Dergisi. The Effect of Artificial Intelligence on Society and the View of Artificial Intelligence in the Context of Film (A.I.). İpek Sucu, İstanbul Gelişim Üniversitesi, Reklam Tasarımı ve İletişim Bölümü (Advertising Design and Communication Department).
    ABSTRACT: We live in the age of technology: what is produced is consumed and discoveries are adopted quickly, and, in parallel with their popularity, our interest in the new and different is at its peak. The sense of wonder and satisfaction that mankind has shown in all ages of human history has been the key to discovery. "Just as the discovery of fire was the most important invention in the early ages, artificial intelligence is also the most important project of our time." (Aydın and Değirmenci, 2018: 25). The technology closest to human nature and the human brain is "artificial intelligence" technology. The concept of artificial intelligence has been mentioned frequently of late; in fact, its emergence goes back to ancient times. Various artificial intelligence programs have been created, and robots have begun to be built in step with technological developments. Concepts such as deep learning and natural language processing are also on the agenda, as are films about artificial intelligence; such features were introduced to robots, and the current concept of "artificial intelligence" was reached. In this study, the definition, development, and applications of artificial intelligence, its current state, and the relationship between artificial intelligence and new media are examined; the film A.I. Artificial Intelligence (2001) is analyzed and evaluated within the scope of the subject, including the question of whether robots can have certain emotions like people.
  • The Mother of All Demos
    UC Irvine, Embodiment and Performativity. Title: The Mother of All Demos. Permalink: https://escholarship.org/uc/item/91v563kh. Author: Claudia Salamanca, PhD student, Rhetoric Department, University of California Berkeley ([email protected]). Publication date: 2009-12-12. Peer reviewed.
    ABSTRACT: This paper analyses the documentation of the special session delivered by Douglas Engelbart and William English on December 9, 1968 at the Fall Joint Computer Conference in San Francisco.
    Categories and Subject Descriptors: A.0 [Conference Proceedings]. General Terms: Documentation, Performance, Theory. Keywords: demo, medium performance, fragmentation, technology, augmentation system, condensation, space, body, mirror, session.
    … guide situated at the mission control and from there he takes us into another location: a location that Levy calls the final frontier. This description offered by Levy, as well as the performance itself, shows a movement in time and space. The name "The Mother of All Demos" refers to a temporality under which all previous demos are subcategories of this performance. Furthermore, the name also points to a futurality that is constantly in production: all future demos are also included. What was delivered on December 9, 1968 captured the past but also our future. In order to explain this extended temporality, Engelbart's demo needs to be addressed not only from the perspective of its technological breakthroughs but also the modes in which they were delivered. This mode of futurality goes beyond the future simple tense continuously invoked by rhetorics of progress and technology. The purpose of this paper is to interrogate "The Mother of All Demos" as a performance, inquiring into what this session made and is still making possible.
  • Claude Elwood Shannon (1916–2001) Solomon W
    Claude Elwood Shannon (1916–2001). Solomon W. Golomb, Elwyn Berlekamp, Thomas M. Cover, Robert G. Gallager, James L. Massey, and Andrew J. Viterbi.
    Solomon W. Golomb: While his incredibly inventive mind enriched many fields, Claude Shannon's enduring fame will surely rest on his 1948 work "A mathematical theory of communication" [7] and the ongoing revolution in information technology it engendered. Shannon, born April 30, 1916, in Petoskey, Michigan, obtained bachelor's degrees in both mathematics and electrical engineering at the University of Michigan in 1936. He then went to M.I.T., and after spending the summer of 1937 at Bell Telephone Laboratories, he wrote one of the greatest master's theses ever, published in 1938 as "A symbolic analysis of relay and switching circuits" [8], in which he showed that the symbolic logic of George Boole's nineteenth century Laws of Thought provided the perfect mathematical model for switching theory (and indeed for the subsequent …
    Done in complete isolation from the community of population geneticists, this work went unpublished until it appeared in 1993 in Shannon's Collected Papers [5], by which time its results were known independently and genetics had become a very different subject. After his Ph.D. thesis Shannon wrote nothing further about genetics, and he expressed skepticism about attempts to expand the domain of information theory beyond the communications area for which he created it. Starting in 1938 Shannon worked at M.I.T. with Vannevar Bush's "differential analyzer", the ancestral analog computer. After another summer (1940) at Bell Labs, he spent the academic year 1940–41 working under the famous mathematician Hermann Weyl at the Institute for Advanced Study in Princeton, where he also began thinking about recasting communications on a proper mathematical foundation.
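    Shannon's correspondence is easy to check by machine: contacts in series behave as Boolean AND, contacts in parallel as Boolean OR. The following minimal sketch (Python; the example network is invented here, not drawn from the thesis) verifies it exhaustively for one small circuit:

        from itertools import product

        # Contact x in parallel with the series pair (y, z): x + yz in Boole's
        # notation. Each edge is passable only when its labeled contact is closed.
        edges = [("in", "out", "x"), ("in", "m", "y"), ("m", "out", "z")]

        def current_flows(closed):
            """Does any path of closed contacts connect 'in' to 'out'?"""
            reached, frontier = {"in"}, ["in"]
            while frontier:
                node = frontier.pop()
                for a, b, contact in edges:
                    if a == node and closed[contact] and b not in reached:
                        reached.add(b)
                        frontier.append(b)
            return "out" in reached

        for x, y, z in product([False, True], repeat=3):
            assert current_flows({"x": x, "y": y, "z": z}) == (x or (y and z))
        print("relay network agrees with its Boolean expression on all inputs")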
  • John McCarthy – Father of Artificial Intelligence
    Asia Pacific Mathematics Newsletter. John McCarthy – Father of Artificial Intelligence. V Rajaraman.
    In this article we summarise the contributions of John McCarthy to Computer Science. Among his contributions are: suggesting that the best method of using computers is in an interactive mode, a mode in which computers become partners of users, enabling them to solve problems. This logically led to the idea of time-sharing of large computers by many users and computing becoming a utility — much like a power utility.
    Introduction: I first met John McCarthy when he visited IIT, Kanpur, in 1968. During his visit he saw that our computer centre, which I was heading, had two batch-processing second-generation computers — an IBM 7044/1401 and an IBM 1620, both of them being used for "production jobs". The IBM 1620 was used primarily to teach programming to all students of IIT, and the IBM 7044/1401 was used by research students and faculty, besides a large number of guest users from several neighbouring universities and research laboratories. There was no interactive computer available for computer science and electrical engineering students to do hardware and software research. McCarthy was a great believer in the power of time-sharing computers. In fact one of his first important contributions was a memo he wrote in 1957 urging the Director of the MIT Computer Centre to modify the IBM 704 into a time-sharing machine [1]. He later persuaded Digital Equipment Corporation (who made the first minicomputers and the PDP series of computers) to design a minicomputer with a time-sharing operating system.
  • Machine Learning
    Graham Capital Management Research Note, September 2017. Machine Learning. Erik Forseth (Quantitative Researcher), Ed Tricker (Head of Research and Development).
    Abstract: Machine learning is more fashionable than ever for problems in data science, predictive modeling, and quantitative asset management. Developments in the field have revolutionized many aspects of modern life. Nevertheless, it is sometimes difficult to understand where real value ends and speculative hype begins. Here we attempt to demystify the topic. We provide a historical perspective on artificial intelligence and give a light, semi-technical overview of prevailing tools and techniques. We conclude with a brief discussion of the implications for investment management. Keywords: machine learning, AI.
    1. Introduction. "Machine Learning is the study of computer algorithms that improve automatically through experience." – Tom Mitchell (1997)
    Machine Learning (ML) has emerged as one of the most exciting areas of research across nearly every dimension of contemporary digital life. From medical diagnostics to recommendation systems—such as those employed by Amazon or Netflix—adaptive computer algorithms have radically altered … by the data they are fed—which attempt to find a solution to some mathematical optimization problem. There remains a vast gulf between the space of tasks which can be formulated in this way, and the space of tasks that require, for example, reasoning or abstract planning. There is a fundamental divide between the capability of a computer model to map inputs to outputs, versus our own human intelligence [Chollet (2017)]. In grouping these methods under the moniker "artificial intelligence," we risk imbuing them with faculties they do not have.
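    The note's framing (a model shaped by the data it is fed, fit by solving an optimization problem) can be made concrete in a few lines. Below is a minimal illustrative sketch (Python; the data points and step size are invented for the example) of fitting a line to input-output pairs by gradient descent on squared error:

        # Toy data, roughly y = 2x + 1 with noise; invented for illustration.
        data = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]

        w, b = 0.0, 0.0      # model: predict y as w*x + b
        lr = 0.02            # hand-picked learning rate

        for _ in range(5000):
            # gradient of mean squared error with respect to w and b
            gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
            gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
            w -= lr * gw
            b -= lr * gb

        print(round(w, 2), round(b, 2))   # about 2.04 and 0.99, the least-squares line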
  • Communicative Capital for Prosthetic Agents Patrick M
    This is an unpublished technical report undergoing peer review, not a final typeset article. First draft: July 23, 2016. Current draft: November 9, 2017.
    Communicative Capital for Prosthetic Agents. Patrick M. Pilarski (1,2,*), Richard S. Sutton (2), Kory W. Mathewson (1,2), Craig Sherstan (1,2), Adam S. R. Parker (1,2), and Ann L. Edwards (1,2). (1) Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, AB, Canada; (2) Reinforcement Learning and Artificial Intelligence Laboratory, Department of Computing Science, University of Alberta, Edmonton, AB, Canada. *Correspondence: Patrick M. Pilarski, Division of Physical Medicine and Rehabilitation, Department of Medicine, 5-005 Katz Group Centre for Pharmacy and Health Research, University of Alberta, Edmonton, AB, Canada, T6G 2E1. [email protected]
    ABSTRACT: This work presents an overarching perspective on the role that machine intelligence can play in enhancing human abilities, especially those that have been diminished due to injury or illness. As a primary contribution, we develop the hypothesis that assistive devices, and specifically artificial arms and hands, can and should be viewed as agents in order for us to most effectively improve their collaboration with their human users. We believe that increased agency will enable more powerful interactions between human users and next generation prosthetic devices, especially when the sensorimotor space of the prosthetic technology greatly exceeds the conventional control and communication channels available to a prosthetic user. To more concretely examine an agency-based view on prosthetic devices, we propose a new schema for interpreting the capacity of a human-machine collaboration as a function of both the human's and machine's degrees of agency.