arXiv:0712.1126v1 [nlin.AO] 7 Dec 2007

Consciousness, brains and the replica problem

Ricard V. Solé∗

ICREA-Complex Systems Lab, Universitat Pompeu Fabra, Parc de Recerca Biomedica de Barcelona, Dr Aiguader 88, 08003 Barcelona, Spain

∗Electronic address: [email protected]

Although the conscious state is considered an emergent property of the underlying brain activity and thus somehow resides on a neural hardware, there is a simple, non-univocal mapping between both. Given a neural hardware, multiple conscious patterns are consistent with it. Here we show, by means of a gedankenexperiment, that this has an important logic consequence: any scenario involving the transient shutdown of brain activity leads, in a fundamental way, to the irreversible death of the conscious experience. In other words, unless the continuous stream of consciousness is guaranteed, the previous self vanishes and is replaced by a new one.

PACS numbers:

I. INTRODUCTION

The problem of consciousness has become a hot topic of scientific enquiry over the last two decades (Crick and Koch, 1995; Searle, 2000, 2003). But in spite of this, the old questions remain open in the neurosciences, and the phenomenon itself differs from other biological phenomena in that it is a subjective, first-person ontology (Searle, 2000). Such special status generates a number of nontrivial questions, some of them right in the boundaries between science and philosophy. Most neuroscientists (even with some few exceptions, with different perspectives) would agree that consciousness is a self-organized, emergent property of neuronal activity and wiring, although the nature of the brain-mind mapping and organization is largely unknown (Dennett, 1991; Hesslow, 1994; von Wright, 1994; Svensson, 1994; Locke, 1995; Crick and Koch, 2003). Multiple new questions emerge from the previous scenarios, including the nature of consciousness after long-term cryogenization recovery or technological replacement (Moravec, 1988; Egan, 1994; Minsky, 1994). Similar problems arise in different contexts, such as teleportation (Penrose, 1989). How can a transient shutdown of brain activity affect the conscious experience? All the previous situations mostly inhabit the realm of speculation and might never be achieved. The potential implications of such speculation are, however, a matter of philosophical enquiry, since no experimentally feasible scenario is at work.

Recently, advances in suspended animation suggest the possibility of preserving human life in a reversible state where cellular activity would be completely halted or deeply slowed. Such a state has been obtained experimentally using different organisms (Alam et al., 2005; Nystul and Roth, 2004; Blackstone et al., 2005). In fact, nothing prevents reaching similar states using humans: evidence from accidental, long-term suspended animation is available from a number of case studies. In these cases, humans experiencing severe hypothermia and showing a lack of any vital sign (no pulse nor brain activity) over several hours were able to recover without any long-term complications. Ongoing research on profound hypothermia, together with the use of appropriate organ preservation fluids, confirms that such reversible states can be induced in a repeatable manner (Alam et al., 2005). The method, used in swine animal models, results in clinical brain death, but the surviving animals displayed no detectable neurological deficits or cognitive impairment.

How can a shutdown of brain activity alter the nature of the self-conscious experience? In principle, you might think that your consciousness is temporally stopped, just to be back afterwards. But is that really the case? In other words, is it really your consciousness that comes back, or does the previous one end altogether? To put the question in a more specific form, we will consider a mental (Gedanken) experiment, which we call the replica problem. Together with it, we consider something much more fundamental when dealing with scenarios involving consciousness at work and its relation to hardware: the death of subjective consciousness (no matter if brain death is permanent or transient) needs to be considered. Below we show, using a logic argument, that any transient shutdown of brain activity implies the irreversible death of the previous conscious experience.

II. THE REPLICA PROBLEM

The following experiment is an imaginary one, not expected ever to be possible. It is thus a logic argument used as a Gedankenexperiment, one to show the unexpected consequences of a one-to-many brain-mind mapping. It is important to stress that this is a thought experiment, and thus we are aware that the ideal realization of the experiment described below is not expected to be possible. In this context, quantum mechanics forbids the ideal experiment (Scarani et al., 2005), but this should not have a real relevance to our discussion, particularly because quantum effects are not expected to be relevant to neural dynamics. However, although the special case considered here would require a large scale, high-level technology not available today, some equivalent scenarios (such as the induction of profound hypothermia discussed above) are likely to be soon applied to human beings.

[Figure 1 (schematic): (a) a copy of the brain {A, C} is made, giving {A′, φc}, followed by transfer and reactivation, yielding {A′, C′} (with A = A′); (b) direct shutdown and recovery of {A, C}. See the caption of FIG. 1 below.]
FIG. 1: (a) The extended replica problem, as defined in the text. Here we start with an individual defined as the brain-mind pair {A, C}. A copy of the brain hardware is made, with no activity and thus no consciousness, here indicated as {A′, φc}. Since the new physical hardware is an exact copy, no experiment would be able to distinguish between A and A′. If activated (dashed line, lower right) the copied system would obviously display a separated conscious experience, here indicated as C′. If A's brain is extracted and replaced by A′, we would have exactly the same hardware (so effectively A = A′) and no difference would be measurable. However, once activated again, it would not exhibit the initial subjective conscious experience, but a different one. The previous experiment is equivalent to the situation shown in (b), where we simply shut down brain activity and afterwards reverse the unconscious state into a conscious one.

Let us take a given individual brain A, experiencing a given (self-)conscious activity. We can indicate that the conscious experience C is somehow generated by this brain A using a mapping:

$A \longrightarrow C$  (1)

where C must be interpreted as an emergent property of brain activity and involves both subjectivity and self-awareness. Let us now imagine that, thanks to a very advanced technology, a full copy of A can be obtained instantaneously at t = t0. Considering instantaneous formation is not strictly necessary, but it makes the argument simpler, since it liberates us from considering the further divergence of the two replicated systems. Let us call this new brain hardware A′. This replica, if active, would generate a different conscious experience, which we indicate as C′. Clearly we have now:

$A' \longrightarrow C'$  (2)

The important point here is that, although exactly the same hardware is being used, we have C ≠ C′ (different subjective conscious experiences). This is true despite the fact that no single experiment made by some external observer would be able (at t = t0) to distinguish between A and A′. The existence of a replica of A generates a somewhat strange situation, since it clearly indicates that brain activity does not univocally define consciousness. This is what we name the replica problem. This problem has been explored by a number of authors (see http://www.benbest.com/philo/doubles.html) and is our starting point.

Let us now consider A, with an associated conscious experience C. The brain-mind pair {A, C} thus fully defines the individual. Let us assume that brain activity is stopped through some process S. If no brain activity is present, no conscious experience exists. The individual's brain is dead, and will be indicated as φc, meaning 'no consciousness' (here the symbol φc indicates lack of consciousness, without explicit reference to a given C). Now let us imagine that the brain is reactivated through some other process R. The standard view considers the following causal set of events:

$\{A, C\} \xrightarrow{S} \{A, \phi_c\} \xrightarrow{R} \{A, C\}$  (3)

This logical chain of events corresponds to a common reasoning: my brain is frozen and stops working, but once a reverse process is used, brain activity returns and I wake up. Is that a correct answer? Which consciousness is experienced: the previous one (C) or a new one (C′)? As shown below, a new consciousness is effectively at work, i.e. the correct sequence is in fact:

$\{A, C\} \xrightarrow{S} \{A, \phi_c\} \xrightarrow{R} \{A, C'\}$  (4)

and thus, in terms of consciousness, we never "wake up". The reason is that the hardware does not univocally define the conscious experience, and thus there is no reason why the conscious activity emerging after recovering the stopped brain would be the same. However, you might argue that it is the same brain that is at work, and thus it cannot properly be related to the replica problem, where two identical, but different brains are being used.
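The logical core of the replica problem is the gap between structural indistinguishability and individual identity. As a loose programming analogy (an illustrative sketch only, not part of the original argument; the `Brain` class and its `wiring` field are hypothetical stand-ins), two objects can be equal under every structural test while still being distinct individuals:

```python
# Illustrative analogy only: equality of structure does not imply identity.
# The Brain class below is a hypothetical stand-in for the hardware A.

class Brain:
    """Minimal stand-in for a brain hardware state (hypothetical)."""

    def __init__(self, wiring):
        # 'wiring' plays the role of a complete structural description of A
        self.wiring = wiring

    def __eq__(self, other):
        # Any external "experiment" can only probe structure, so an exact
        # copy A' is indistinguishable from A under this comparison.
        return isinstance(other, Brain) and self.wiring == other.wiring


A = Brain(wiring=(1, 0, 1, 1))        # the original hardware A
A_prime = Brain(wiring=(1, 0, 1, 1))  # an exact copy A', made at t = t0

print(A == A_prime)   # True:  structurally indistinguishable
print(A is A_prime)   # False: nevertheless two distinct tokens
```

In the paper's terms, `==` corresponds to what an external observer can measure, while `is` tracks which token of the hardware a given conscious stream is bound to; the argument turns on the fact that the first relation does not determine the second.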

An additional experiment allows us to better understand the implications of the replica problem. This extended replica problem can be used to see clearly why the new conscious experience is necessarily a different one. The basic steps to be described below are summarized in figure 1.

III. THE EXTENDED REPLICA PROBLEM

We now describe a special mental experiment involving the formation of a replica. In figure 1, individuals involving an active (and thus conscious) brain are indicated as framed black circles. If brain activity is stopped, the non-conscious state is indicated as an empty circle. If no brain is present, an empty box is shown.

Let us assume that we start with {A, C} and we make a material (but not active) copy A′ of the initial brain. We have a new brain-mind system {A′, φc} with no conscious activity (φc) and physically separated from the initial one (see upper part of figure 1a). If activated, A′'s brain would generate its own subjective conscious state, i.e.

$\{A', \phi_c\} \xrightarrow{R} \{A', C'\}$  (5)

with C′ obviously different from C (lower right, fig. 1a). Now we shut down the activity of A, i.e.

$\{A, C\} \xrightarrow{S} \{A, \phi_c\}$  (6)

And now let us replace A by A′, i.e.

$\{A, \phi_c\} \rightarrow \{A', \phi_c\}$  (7)

Since the two brains are physically identical, no measurement would be able to detect any difference between the previous and the new hardware, and thus we have the equivalence:

$\{A, \phi_c\} \equiv \{A', \phi_c\}$  (8)

The logic implication is that they can be exchanged by each other (and any other exact copy) and would not be distinguished. But it is now obvious that the implanted brain, though identical, is not going to maintain the subjective conscious experience that we had at the beginning: it was a copy, and following the previous implications we would have

$\{A, \phi_c\} \xrightarrow{R} \{A, C'\}$  (9)

The sequence of events described above is logically equivalent to starting from {A, C}, stopping A from being active and restoring its function (ending up in {A, C′}, as indicated in figure 1b). This completes our argument. To summarize: any process that either stops brain activity (and thus leaves us with a "just hardware" individual) or replaces a given brain structure by a completely new one (after stopping consciousness in its previous physical support) leads to a state of "dead consciousness". As a consequence of the non-unique mapping between brain structure and consciousness, the death of a given conscious experience will be irreversible.

IV. DISCUSSION

In this paper we have seen how the one-to-many mapping between brain and mind implies that any scenario involving a transient shutdown of brain activity leads to the death of consciousness, as defined by a subjective, first-person ontology. The subjective nature of the self makes brain transfer and teleportation non-viable as a reliable way of transferring the self to the new individual. These are, however, science fiction scenarios. As shown above, the same situation must be applied to surgery involving profound hypothermia: you (meaning your self) would never truly wake up once the normal brain function is recovered again. Someone else will, with exactly the same external features and memories as you, but experiencing a different consciousness. Under this view, no true immortality (the immortal nature of your self) is possible.

Although future technology might allow building a copy of our brains and making our memories and feelings survive, something will inevitably be lost. The argument provided here suggests that the "self" persists (it is alive) provided that the stream of consciousness flows in a continuous manner and is never interrupted. If it is interrupted, death of the self occurs in a non-reversible manner. This seems to provide an interesting twist to the mind-body problem. Although the argument presented here is a logical one, further extensions of this study would involve brain states not necessarily associated to a complete lack of activity. More quantitative analyses could be made, involving different features of consciousness (Seth et al., 2006) and the possible localization of the conscious self-representation (Lou et al., 2004 and references therein). In this context, further questions arise: What are the minimum requirements in terms of brain activity able to sustain a conscious pattern? Are there partial changes inducing a loss of self-awareness related to our previous discussion?

Acknowledgments

The author would like to thank the members of the Complex Systems Lab for useful discussions. Special thanks to Bernat Corominas-Murtra for comments on the manuscript. I also thank the editor and referees of Minds and Machines who kindly rejected this paper with no rational explanation.

References

1. Alam, H. B. et al. 2005. Profound Hypothermia Protects Neurons and Astrocytes, and Preserves Cognitive Functions in a Swine Model of Lethal Hemorrhage. J. Surg. Res. 126, 172-181.

2. Blackstone, E., Morrison, M. and Roth, M. B. 2005. Hydrogen sulfide induces a suspended animation-like state in mice. Science 308, 518.

3. Brooks, R. A. 2002. Flesh and Machines: How Robots Will Change Us. Pantheon Books, New York.

4. Crick, F. C. and Koch, C. 1995. Why neuroscience may be able to explain consciousness. Sci. Am. 273, 84-85.

5. Crick, F. C. and Koch, C. 2003. A framework for consciousness. Nature Neurosci. 6, 119-126.

6. Dennett, D. C. 1991. Consciousness Explained. Little, Brown and Company, Boston.

7. Egan, G. 1994. Permutation City. Millennium, London.

8. Hesslow, G. 1994. Will neuroscience explain consciousness? J. Theor. Biol. 171, 29-39.

9. Locke, J. 1995. An Essay Concerning Human Understanding. Prometheus Books, New York.

10. Lou, H. C. et al. 2004. Parietal cortex and representation of the mental self. Proc. Natl. Acad. Sci. USA 101, 6827-6832.

11. Minsky, M. 1994. Will robots inherit the Earth? Sci. Am. 271, 108-113.

12. Moravec, H. 1988. Mind Children: The Future of Robot and Human Intelligence. Harvard U. Press, Cambridge, MA.

13. Nystul, T. and Roth, M. B. 2004. Carbon monoxide-induced suspended animation protects against hypoxic damage in Caenorhabditis elegans. Proc. Natl. Acad. Sci. USA 101, 9133-9136.

14. Penrose, R. 1989. The Emperor's New Mind. Vintage Books, London.

15. Safar, P., Tisherman, S. A., Behringer, W., Capone, A., Prueckner, S., Radovsky, A., Stezoski, W. S. and Woods, R. J. 2000. Suspended animation for delayed resuscitation from prolonged cardiac arrest that is unresuscitable by standard cardiopulmonary-cerebral resuscitation. Crit. Care Med. 28 (Suppl), N214-218.

16. Scarani, V., Iblisdir, S. and Gisin, N. 2005. Quantum cloning. Rev. Mod. Phys. 77, 1225-1256.

17. Seth, A. K., Izhikevich, E., Reeke, G. N. and Edelman, G. M. 2006. Theories and measures of consciousness: an extended framework. Proc. Natl. Acad. Sci. USA 103, 10799-10804.

18. Svensson, G. 1994. Reflections on the problem of identifying mind and brain. J. Theor. Biol. 171, 93-100.

19. Tisherman, S. A. 2004. Suspended animation for resuscitation from exsanguinating hemorrhage. Crit. Care Med. 32(2 Suppl), S46-50.

20. von Wright, G. H. 1994. On mind and matter. J. Theor. Biol. 171, 101-110.