Assessing Artificial Consciousness: A Collective Review Article

Igor Aleksander, Uziel Awret, Selmer Bringsjord, Ron Chrisley, Robert Clowes, Joel Parthemore, Susan Stuart, Steve Torrance and Tom Ziemke

Journal of Consciousness Studies, 15, No. 7, 2008, pp. 95–110
Copyright (c) Imprint Academic. For personal use only -- not for reproduction.

Background

While the recent special issue of JCS on machine consciousness (Volume 14, Issue 7) was in preparation, a collection of papers on the same topic, entitled Artificial Consciousness and edited by Antonio Chella and Riccardo Manzotti, was published.[1] The editors of the JCS special issue, Ron Chrisley, Robert Clowes and Steve Torrance, thought it would be a timely and productive move to have authors of papers in their collection review the papers in the Chella and Manzotti book, and include these reviews in the special issue of the journal. Eight of the JCS authors (plus Uziel Awret) volunteered to review one or more of the fifteen papers in Artificial Consciousness; these individual reviews were then collected together with a minimal amount of editing to produce a seamless chapter-by-chapter review of the entire book. Because the number and length of contributions to the JCS issue was greater than expected, the collective review of Artificial Consciousness had to be omitted, but here at last it is. Each paper's review is written by a single author, so any comments made may not reflect the opinions of all nine of the joint authors!

Correspondence: Ron Chrisley, COGS/Dept of Informatics, University of Sussex, Falmer, Brighton BN1 9QH, U.K. Email: [email protected]

[1] Artificial Consciousness, ed. A. Chella & R. Manzotti (Imprint Academic, 2007)

The Chapters Reviewed

It's entirely fitting that Vincenzo Tagliasco begins his survey of the history of artificial consciousness ('Artificial Consciousness: A Technological Discipline') with an anecdote about his own introduction to the field: consciousness is, after all, ultimately a personal affair. Descartes comes in for kind words; engineers are quite happy to get on with the metaphor of humans as machines, and leave the worries about dualism to the philosophers. That's Tagliasco's main point: researchers in the field are more engineers than theoreticians (though philosophers are welcome!). They want to build things and see what interesting properties they exhibit. Making a copy of human consciousness isn't on the table. Producing a robot that evolves into one with recognizably conscious behaviour, on the other hand, is. 'An artificial conscious being would', he writes, 'be a being which appears to be conscious because [it] acts and behaves as a conscious human being.'

What is 'artificial consciousness': artificial consciousness or artificial consciousness? Is it 'real' consciousness achieved by artificial means, or something that resembles consciousness in the way an arrangement of artificial flowers resembles the real thing? From a distance they're quite impressive; just don't examine them too closely! Agreeing on one set of terms or one approach in what is a young discipline is, Tagliasco believes, a distraction at best. The first step must be to make sure researchers aren't just talking past each other. The advantage of the engineering perspective is in putting theory into practice: 'technology', Tagliasco writes, 'overcomes ambiguity' — a point that philosophers might do well to remember!
John Taylor's paper, 'Through Machine Attention to Machine Consciousness', aims to make three contributions: a philosophical analysis of the architectural requirements for consciousness, a demonstration that a particular, independently-motivated, control-theoretic model of attention can meet these requirements, and a discussion of the specific issues that must be resolved in attempting to implement such a model in an artificial system such as a robot. Consciousness is taken to involve not only control of attention, but two other components, one concerning contentful, world-directed states, the other (following the phenomenological tradition) being the (contentless) pre-reflective self. The pre-reflective self is that which confers our ownership of our perceptions, that which 'gives us the sense of "being there", of "what it is like to be"'. It is this which, Taylor claims, allows our representations to mean anything to us. Taylor then presents the CODAM (Corollary Discharge of Attention Movement) model, a control-theoretic architecture comprising a plant, goal modules, inverse models and forward models, supplemented with a sensory working memory buffer, and the corollary discharge buffer (WMcd), which is a prediction of the attended input given the attentional movement. It is shown how this model can explain various perceptual and attentional phenomena, such as those seen in the Posner movement benefit paradigm (Rushworth et al., 1997) and the attentional blink (Vogel et al., 1998) on the one hand, and pure conscious experience (Forman, 1999) on the other.
This explanatory capacity is meant to establish a connection between WMcd and pre-reflective consciousness: modelling pure conscious experience establishes the contentless nature of the WMcd, while the anticipation of input used in explaining the Posner benefit and attentional blink is meant to confer the 'ownership' aspects of the pre-reflective self.

But this is left as a tantalizing suggestion, leaving the reader to wonder how the equation is supposed to be secured. Why would (correct) anticipation confer a sense of ownership on perceptions? The increase in reaction times associated with the attentional blink notwithstanding, we still experience unexpected, unanticipated inputs as our own. On the other hand, the model seems too simple to be sufficient for consciousness — Taylor doesn't (here) make it clear why a complex thermostat couldn't implement CODAM. This leaves one wondering what else must be added to elevate CODAM from a model of some aspects of consciousness-related processing to actually being sufficient for experience, as (ambitious) machine consciousness requires. The final section enumerates some implementation issues that those attempting machine consciousness should consider (although the Searle-like argument for why CODAM will only succeed in producing consciousness if implemented in hardware rather than software is too brief to persuade anyone). Nevertheless, the model seems a very good place to start, and the questions it raises seem to be the right ones to ask. Of further interest is a handy table listing aspects of conscious experience and known aspects of the nervous system that seem to support such phenomena.
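The control loop at the heart of CODAM, as described above, can be caricatured in a few lines of code. The following is a minimal, purely illustrative sketch, not Taylor's actual model: the scalar signals and the 0.9 gain are invented placeholders, and only the module names (inverse model, forward model, WMcd) follow the review's description.

```python
class CODAMSketch:
    """Toy sketch of the CODAM control loop (illustrative only).

    Module names follow the review's description of Taylor's model;
    the scalar dynamics here are invented placeholders.
    """

    def __init__(self):
        self.wm_sensory = 0.0  # sensory working memory buffer
        self.wm_cd = 0.0       # corollary discharge buffer (WMcd)

    def step(self, goal, sensory_input):
        # Inverse model: an attention "movement" that would drive the
        # currently buffered state toward the goal.
        control = goal - self.wm_sensory
        # Forward model: predict the attended input that this movement
        # will produce; the prediction is the corollary discharge.
        self.wm_cd = 0.9 * control
        # The actual attended input then arrives in working memory.
        self.wm_sensory = sensory_input
        # On Taylor's story, "ownership" tracks the match between the
        # prediction (WMcd) and the input that subsequently arrives.
        return self.wm_cd * self.wm_sensory
```

The sketch makes the reviewer's thermostat worry vivid: nothing in this loop distinguishes it from any predictive controller, which is exactly why one asks what more CODAM needs in order to be sufficient for experience.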
In 'What's Life Got To Do With It?', Tom Ziemke claims, and he is not wrong, that in our attempt to create embodied AI, autonomous agents, and artificial consciousness we have paid too little attention to theoretical biology and have not yet grasped the crucial role that the living body plays in the constitution of the self and of forms or aspects of consciousness. He claims that when 'we refer to both living organisms and robots as "autonomous agents", it is important to keep in mind that their "autonomy" and "agency" are fundamentally different'. We should adopt a position of 'caveat spectator' and not take similarity of behaviour for similarity of underpinning. The underpinning, the biology, the internal constitution and regulation are crucial because 'the way an organism constructs itself also shapes the way it constructs its self'. Thus, he asks, what has life got to do with consciousness and the development of a sense of self?

In answer to this question he weaves together von Uexküll's organismic biology and the concept of autonomy, Maturana and Varela's theory of autopoiesis, and work in evolutionary and epigenetic robotics — all of which have at their heart the construction of knowledge 'in sensorimotor interaction with the environment with the goal of achieving some "fit" or "equilibrium" between internal behavioural/conceptual structures and experiences of the environment' — with Damasio's somatic theory and pre-conscious proto-self, which is able to continuously map the physical state and structure of the organism in its dynamic engagement with its environment. It is this organisation, this self-construction and -preservation, that Ziemke now emphasizes, for it is this natural autopoiesis within an operationally open system of agent–environment interaction that is the source of more developed notions of core- and extended-consciousness.
The use of 'natural' to qualify 'autopoiesis' is not Ziemke's; his distinction is between auto- and allo-poietic systems, with man-made artefacts like robots and software agents conforming to the latter category, for their 'components are produced by other processes that are independent of the organisation of the system'. And here is where a criticism of Ziemke's enterprise arises. There is an autonomy that an autopoietic system possesses and an allopoietic system lacks, and this autonomy is crucial for the development of consciousness in any system. But if there is no way for an allopoietic system to ever become an autopoietic one, then it would seem that the construction of truly conscious machines, other than biological machines, is beyond us,
