BioSystems 148 (2016) 4–11

Review article

The body of knowledge: On the role of the living body in grounding embodied cognition

Tom Ziemke (a,b,∗)

a Interaction Lab, School of Informatics, University of Skövde, 54128 Skövde, Sweden
b Cognition & Interaction Lab, Human-Centered Systems Division, Department of Computer & Information Science, Linköping University, 58183 Linköping, Sweden
Article history: Received 6 August 2016; Accepted 9 August 2016; Available online 16 August 2016

Keywords: Allostasis; Cognitive systems; Grounding; Homeostasis; Embodied AI; Embodied cognition; Emotion; Intentionality; Predictive regulation; Representation

Abstract: Embodied cognition is a hot topic in both cognitive science and AI, despite the fact that there still is relatively little consensus regarding what exactly constitutes ‘embodiment’. While most embodied AI and cognitive robotics research views the body as the physical/sensorimotor interface that makes it possible to ground computational cognitive processes in sensorimotor interactions with the environment, more biologically-based notions of embodied cognition emphasize the fundamental role that the living body – and more specifically its homeostatic/allostatic self-regulation – plays in grounding both sensorimotor interactions and embodied cognitive processes. Adopting the latter position – a multi-tiered affectively embodied view of cognition in living systems – it is further argued that modeling organisms as layered networks of bodily self-regulation mechanisms can make significant contributions to our scientific understanding of embodied cognition.

© 2016 The Author. Published by Elsevier Ireland Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Contents
1. Introduction ...... 4
2. What’s that thing called embodiment? ...... 5
3. Does life matter to embodied cognition? ...... 6
4. Discussion and conclusion ...... 9
Acknowledgement ...... 9
References ...... 9
∗ Correspondence address: IDA, Linköping University, 58183 Linköping, Sweden. E-mail addresses: [email protected], [email protected]
http://dx.doi.org/10.1016/j.biosystems.2016.08.005

1. Introduction

At some point in the long and winding write-up of this paper, its title was “What makes embodied AI embodied?”. That title eventually disappeared again, but the question is still highly relevant to this paper – and the answer is not as straightforward as one might think. Robots are, no doubt, considered ‘embodied’ by most AI researchers, and are in fact the obvious AI approach to modeling natural embodied cognition or synthesizing artificial equivalents thereof (cf., e.g. Ziemke, 2003; Morse et al., 2011). Much embodied AI research, however, also makes use of simulated robots or other types of non-physical agents, e.g. so-called virtual agents or different types of more abstract artificial-life agents. Hence, one might ask (cf. Ziemke, 2004) whether embodied AI really is about embodied (i.e. physical, robotic, etc.) models of cognition, or rather about models – any type of model: robotic ones obviously, but also purely computational ones – of embodied cognition (whatever that is), or maybe both? If you find that question somewhat confusing, you are not alone. As discussed in more detail in Section 2, despite more than 25 years of research on embodied cognition and AI, and by now a number of books on the topic (e.g. Varela et al., 1991; Clark, 1997; Pfeifer and Scheier, 1999; Gallagher, 2005; Ziemke et al., 2006; Johnson, 2007; Thompson, 2007; Shapiro, 2010; Lindblom,
2015), there still is a perplexing diversity of notions of embodied cognition as well as claims concerning its nature and relevance.

Given that this paper is part of a journal special issue on the relation between embodied AI and synthetic biology, it should come as no surprise that it is argued here that synthetic biology research might be able to make significant contributions to embodied AI (the details of how are beyond the scope of this paper though) – and thereby also might help to clarify the role that biological embodiment plays in natural cognition. To what degree the underlying biological mechanisms really do play a role in cognitive processes and capacities is another open question in the cognitive sciences, and in fact not everybody would agree that they actually do play any role at all, other than that of a particular physical implementation that could just as well be replaced by another, non-biological – e.g. computational and/or robotic – implementation. Different arguments supporting the view that the underlying biology in general, and bodily self-regulation in particular, actually does play a crucial role in embodied cognition are discussed in more detail in Section 3.

Section 4 then, finally, presents some discussion and conclusions. It will be argued that embodied cognition is not only grounded in sensorimotor interaction with the environment – a claim that most proponents of embodied cognition, and even some of its opponents, would agree to – but that at least natural cognition is furthermore also deeply rooted in the underlying biological mechanisms, and more specifically layered/nested networks of homeostatic/allostatic bodily self-regulation mechanisms. Hence, the potential contribution of synthetic biology to embodied cognition and AI, it will be argued, lies first and foremost in modeling/understanding/synthesizing the nature of organisms as such layered networks. This would be an important complement to current work in embodied AI and cognitive architectures/robotics, much of which is predominantly concerned with layered architectures for dealing with the complexities of perceiving and acting in the external environment.

2. What’s that thing called embodiment?

The embodied approach in cognitive science and AI has received increasingly much attention in recent years. In fact, “Embodied Cognition is sweeping the planet”, at least according to Fred Adams’ backcover book endorsement of the paperback edition of Shapiro’s (2010) book on the topic. Research on embodied cognition has received significant attention in the cognitive sciences for at least 25 years now, if you count from the appearance of Varela, Thompson and Rosch’s book “The Embodied Mind” in 1991. It should be noted though that despite this, at least at this point in time, there actually is no such thing as the embodied mind thesis or paradigm. This is reflected, for example, by recent paper titles such as “Embodied cognition is not what you think it is” (Wilson and Golonka, 2013) and recent debates about the alleged “poverty of embodied cognition” (Goldinger et al., 2016; Killeen, 2016) that reveal deep misunderstandings and wildly different (mis-)conceptions of even the most basic tenets of embodied cognition research.

From the embodied AI researcher’s perspective, on the other hand, what is and what is not embodied might seem relatively straightforward: the computer programs of traditional AI research are widely considered ‘disembodied’, whereas robots obviously are embodied – at least in some sense (cf. Ziemke, 2001b; Ziemke and Thill, 2014). Much early embodied AI research was to some degree driven by criticisms of traditional AI formulated by philosophers such as Dreyfus (1979), Searle (1980) and Harnad (1990). A key point in these criticisms was the lack of interaction between the internal representations – at the time typically symbolic ones – of AI programs and the external world they were supposed to represent. Dreyfus (1979), for example, inspired by Heidegger’s notion of being-in-the-world, argued that any computer program “is not always-already-in-a-situation. Even if it represents all human knowledge in its stereotypes, including all possible types of human situations, it represents them from the outside . . . It isn’t situated in any one of them, and it may be impossible to program it to behave as if it were”. Searle’s (1980) criticism of computational AI systems, based on his famous Chinese Room Argument, was that “the operation of such a machine is defined solely in terms of computational processes over formally defined elements”, and that such “formal properties are not by themselves constitutive of intentionality” – which is the characteristic of human cognition that allows it to be about the world. Harnad’s (1990) argument was based on Searle’s, but he referred to the problem of intentionality as a lack of ‘intrinsic meaning’ in purely computational systems, which he argued could be resolved by what he termed symbol grounding, i.e. the grounding of internal symbolic representations in sensorimotor interactions with the environment.

Embodied approaches to AI – using robotic or simulated ‘autonomous agents’ – at least at a first glance, allow computer programs and the representations they are using, if any, to be grounded in interactions with the physical environment through the robot/agent platform’s sensorimotor capacities. Brooks, for example, one of the pioneers of embodied AI, formulated what he called “the two cornerstones of the new approach to Artificial Intelligence, situatedness and embodiment” (Brooks, 1991). Embodiment from this perspective simply means that “robots have bodies and experience the world directly – their actions are part of a dynamic with the world and have immediate feedback on their own sensations” (Brooks, 1991). According to Brooks, such systems are physically grounded, and hence internally “everything is grounded in primitive sensor motor patterns of activation” (Brooks, 1993). Situatedness, accordingly, means that “robots are situated in the world – they do not deal with abstract descriptions, but with the here and now of the world directly influencing the behavior of the system” (Brooks, 1991).

Hence, from the embodied AI perspective, things might seem relatively uncomplicated: robots are embodied and situated in roughly the same sense that humans and other animals are, and thereby they at least potentially can overcome traditional AI’s problems with intentionality or intrinsic meaning. The problem of computer programs dealing with ungrounded representations is solved through physical grounding and either not having any representations at all (a la Brooks) or acquiring internal representations through symbol/representation grounding (a la Harnad), i.e. developing such representations in the course of interaction with the external world (e.g. learning a map of the environment). It should be noted though that this does not necessarily resolve the philosophical problems discussed above. Searle, for example, already back in 1980, presented – and rejected – what he called the ‘robot reply’ to his own Chinese Room Argument. This entailed pretty much exactly what is now called embodied AI, namely computer programs running inside robots that interact with their environment through sensors and actuators. In the terms of Searle’s argument, to the person inside the Chinese Room, it does not make any difference whether or not inputs to and outputs from the Chinese Room are connected to the sensors and motors of a robot – the person inside the room still lacks the intentionality that characterizes human cognition.

At this point it should be noted that for the purposes of this paper it does not actually matter at all whether or not the reader is familiar with the details of Searle’s Chinese Room Argument, let alone convinced of its validity. The argument has been discussed for more than 35 years now (e.g. Harnad, 1989, 1990; Ziemke, 1999; Zlatev, 2001; Preston and Bishop, 2002) without reaching much consensus. What is more interesting here though is that there are quite many
embodied AI researchers who – like Searle – take the Chinese Room Argument to be a valid argument against traditional AI, but at the same time – unlike Searle – consider the physical and sensorimotor embodiment provided by current robots to be sufficient to overcome the problem (e.g. Harnad, 1989, 1990; Brooks, 1991, 1993; Clark, 1999; Zlatev, 2001; Chrisley and Ziemke, 2002; cf. Ziemke, 1999). In Harnad’s (1989) terms, this type of embodied AI has gone from a computational functionalism to a robotic functionalism. Zlatev (2001), for example, explicitly formulated the functionalist position that there is “no good reason to assume that intentionality is an exclusively biological property (pace e.g. Searle)”, and “thus a robot with bodily structures, interaction patterns and development similar to those of human beings . . . could possibly recapitulate [human] ontogenesis, leading to the emergence of intentionality, consciousness and meaning”. Others, including Searle naturally, do indeed believe that there are good reasons to assume that human-like – or, more generally, organism-like – intentionality is in fact a biological property, and that it does in fact require a biological body (e.g. Varela et al., 1991; Stewart, 1996; Varela, 1997; Ziemke, 2001a,b, 2007, 2008; Sharkey and Ziemke, 2001; Zlatev, 2002; Bickhard, 2009; Froese and Ziemke, 2009; Vernon et al., 2015). The latter perspective is elaborated in more detail in Section 3.

But, before we get there, let us have a quick look at Chemero’s (2009) characterization of the current embodied cognition research landscape, which is illustrated in Fig. 1. Chemero points out that there currently are at least two very different positions/traditions that are both referred to as ‘embodied cognitive science’. One of these, which Chemero refers to as radical embodied cognitive science, is grounded in the anti-representationalist and anti-computationalist traditions of eliminativism, American naturalism, and Gibsonian ecological psychology. The other, more mainstream version of embodied cognitive science, on the other hand, in line with what was referred to as robotic functionalism above, is derived from traditional representationalist and computationalist theoretical frameworks, and therefore also still is more or less compatible with these – as illustrated maybe most prominently by the notion of symbol/representation grounding, as opposed to the more radical position of anti-representationalism. As Chemero rightly points out, although – or maybe because – the mainstream version of embodied cognitive science can be considered a “watered-down” version of its more radical counterpart, it currently receives significantly more attention in the cognitive sciences.

Fig. 1. Current notions of embodied cognitive science and their historical roots. Adapted from Chemero (2009: 30).

The position of radical embodied cognition, according to Chemero (2009), can be summarized in two positive claims and one negative one:

1. Representational and computational views of embodied cognition are wrong.
2. Embodied cognition should be explained using a particular set of tools T, including dynamical systems theory.
3. The explanatory tools in set T do not posit mental representations.

To summarize the discussion so far, it should now be clearer why exactly it is still surprisingly difficult to pinpoint what embodied cognition is, or what kind of embodiment an artificial cognitive system might require. There are different positions along at least a couple of dimensions of embodiment: physicality, the view of representation, and the role of the underlying biology. Embodied AI researchers emphasize the importance of physical grounding, but in their research practice they commonly make use of software simulations (cf. Ziemke, 2003), and the computer programs controlling their robots – physical or simulated – are for the most part still just as computational as the computer programs of traditional AI. Radical embodied cognitive science, at least according to Chemero, is strictly anti-representationalist, whereas mainstream embodied cognitive science more or less still embraces the traditional computationalist/representationalist framework, but emphasizes the need for representations to be grounded, i.e. a robotic functionalism instead of the traditional computational functionalism. Naturally, the role of the biological mechanisms underlying (embodied) cognition is also fundamentally different on the left and the right side of Chemero’s diagram (cf. Fig. 1). While on the right/mainstream side, the biological embodiment of natural cognition would be considered as just one possible ‘implementation’, which could as well be replaced by alternative, e.g. computational and/or robotic implementations, the left/radical side is at least more open to the idea of the living body actually having some fundamental role in constituting embodied cognition. Which leads us to the next section.

3. Does life matter to embodied cognition?

As pointed out in the previous section, the nature, role, and conception of ‘the body’ are actually still far from well-defined in embodied cognitive science – and embodied AI in particular. As discussed in more detail elsewhere (Ziemke, 2000, 2004, 2007), on the one hand, much embodied AI research, in particular the widespread emphasis of the importance of physical embodiment (e.g. Brooks, 1991, 1993; Steels, 1994; Pfeifer, 1995; Pfeifer and Scheier, 1999), is actually to a high degree compatible with the view of robotic functionalism (Harnad, 1989), according to which embodiment is mainly about symbol/representation grounding (Harnad, 1990; cf. Anderson, 2003; Chrisley, 2003; Pezzulo et al., 2013), whereas cognition can still be conceived of as computation. On the other hand, much of the rhetoric in the embodied AI field, in particular early embodied AI researchers’ rejection of traditional notions of representation and cognition as computation (e.g. Brooks, 1991; Pfeifer, 1995; Pfeifer and Scheier, 1999), suggests sympathy for more radical notions of embodied cognition that view all of cognition as embodied and/or rooted in the mechanisms of the living body (e.g. Maturana and Varela, 1987; Varela et al., 1991; Thompson, 2007; Johnson, 2007; Froese and Ziemke, 2009). More specifically, part of the problem with embodied AI is that, despite its strong biological inspiration, early embodied AI research very much focused on establishing itself as a new paradigm within AI and cognitive science, i.e. as an alternative to the traditional functionalist/computationalist paradigm (e.g. Beer, 1995; Pfeifer, 1995; Pfeifer and Scheier, 1999). Less effort was made, on the other hand, to make the connection to other theories, outside AI, e.g. in theoretical biology or phenomenology, addressing core conceptual issues of autonomy, embodiment, etc. Similarly,
much embodied AI research distinguishes itself from its traditional AI counterpart in its interactive view of knowledge. For example, work on adaptive robotics, in particular evolutionary robotics (Nolfi and Floreano, 2000) and developmental/epigenetic robotics (e.g. Zlatev and Balkenius, 2001; Berthouze and Ziemke, 2003; Lungarella et al., 2003), is largely compatible with the constructivist/enactivist/interactivist view (e.g. Piaget, 1954; Varela et al., 1991; Bickhard, 1993, 2009; Ziemke, 2001a) of knowledge construction in sensorimotor interaction with the environment, with the goal of achieving some ‘fit’ or ‘equilibrium’ between internal, conceptual/behavior-generating mechanisms and the external environment (for a more detailed discussion of this aspect see Ziemke, 2001a).

However, the organic roots of these processes, which were emphasized in, for example, the theoretical biology of von Uexküll (1928, 1982) or Maturana and Varela’s (1980, 1987) theory of autopoiesis (cf. below), are usually ignored in embodied AI, most of which still operates with a view of the body that is largely compatible with mechanistic theories in psychology and a view of control mechanisms that is still largely compatible with computationalism (cf. Ziemke, 2000, 2001a). That is, the robot body is typically viewed as some kind of input and output device that provides physical grounding to the internal computational mechanisms. As discussed above, this view of the physical body as the computational mind’s sensorimotor interface to the world pervades much of cognitive science and philosophy of mind. Thus, in practice, embodied AI, as a result of its history and interdisciplinary influences, has become a theoretical hybrid, combining a mechanical/behaviorist view of the body with the constructivist notion of interactive knowledge as well as the functionalist hardware-software distinction and its view of the activity of the nervous system as computational (cf. Ziemke, 2000, 2001a,b, 2004, 2007).

As Greenspan and Baars (2005) have pointed out (cf. also Sharkey and Ziemke, 1998, 2001; Ziemke, 2001a), the mechanistic/reductionistic approach to biology and psychology of leading early 20th-century researchers like Loeb (1918) and Pavlov (1927) paved the way for the strong dominance of behaviorism in psychology, as pursued by Watson (1925) and Skinner (1938). Cognitive science, with its traditional emphasis on representation and computation, is widely considered to have replaced the overly mechanistic view of behaviorism. However, as Costall (2006) pointed out, “it is not the case that mainstream cognitive psychology entirely replaced the traditional mechanistic model. It retains the old mechanistic image of the body. The new mechanism of mind has been merely assimilated to the old dualism of mind and body, along with the existing conception of the body as a passive machine”. Although Costall’s critique is directed mainly at traditional cognitive science and AI, rather than embodied AI, it should be noted that the mind/body or hardware/software dualism that he accuses modern psychology of is the exact same dualism that Searle (1980) accuses computationalist theories of. And, as discussed above, as Searle pointed out already back then in his robot reply, whether or not the computational mind resides in a slightly less passive robotic body, i.e. a physical container that allows the computational mind to interact with its environment through sensors and actuators, really does not make much of a difference when it comes to such computational/robotic systems as models of human cognition, intentionality, etc.

So, what is the alternative then? We have already seen various glimpses of theoretical frameworks that emphasize the biological nature and/or the organismic roots of natural embodied cognition, but what exactly are these frameworks?

Let us start with the theory of autopoiesis (Varela et al., 1974; Maturana and Varela, 1980, 1987; Varela, 1979, 1997). According to Varela (1997, p. 75): “An autopoietic system – the minimal living organization – is one that continuously produces the components that specify it, while at the same time realizing it (the system) as a concrete unit in space and time, which makes the network of production of components possible. More precisely defined: An autopoietic system is organized (defined as a unity) as a network of processes of production (synthesis and destruction) of components such that these components: (i) continuously regenerate and realize the network that produces them, and (ii) constitute the system as a distinguishable unity in the domain in which they exist”. Prime examples of autopoiesis are living cells and organisms, also referred to as “first-order” and “second-order autopoietic unities” respectively (e.g. Maturana and Varela, 1987).

Somewhat controversially, Maturana and Varela (1980, 1987) actually consider all living systems to be cognitive systems. Naturally, this has been criticized by a number of authors who wish to reserve the term ‘cognitive’ for higher-level psychological processes (cf., e.g., Clark, 1997; Barandiaran and Moreno, 2006). Varela, however, defended the use of the term ‘cognitive’ as follows: “The reader may balk at my use of the term cognitive for cellular systems. But from what I have said it should be clear that the constitution of a cognitive domain links organisms and their worlds in a way that is the very essence of intentionality as used in modern cognitive science, and as it was originally introduced in phenomenology. My proposal makes explicit the process through which intentionality arises: it amounts to an explicit hypothesis about how to transform the philosophical notion of intentionality into a principle of natural science. The use of the term cognitive here is thus justified because it is at the very base of how intentionality arises in nature” (Varela, 1997, pp. 80–81).

For a competing, but closely related theoretical framework, let us also have a look at Christensen and Hooker’s (2000) theory of autonomy, which aims to propose a naturalistic theory of intelligent agency as an embodied feature of organized, typically living, dynamical systems. According to their view, agents are entities that engage in goal-directed, normatively constrained interaction with their environment. More specifically, “[l]iving systems are a particular kind of cohesive system . . . in which there are dynamical bonds amongst the elements of the system which individuate the system from its environment”. Some examples: a gas has no internal cohesion; its shape and condition are imposed by the environment. A rock, on the other hand, has internal bonds and behaves as an integral whole. However, these cohesive bonds are passive and rigid (i.e. stable deep-energy-well interactions are constraining the constituents spatially), and they are local (i.e. there are no essential constraints on the boundary of the system). A cell, finally, has cohesive bonds and acts as an integrated whole, but those bonds are active (i.e. chemical bonds formed by shallow-energy-well interactions and continually actively remade), flexible (i.e. interactions can vary, are sensitive to system and environmental changes), and holistic (i.e. binding forces depend on globally organized interactions; i.e. local processes must interact globally to ensure the cell’s survival). Autonomous systems then, according to Christensen and Hooker (2000), are cohesive systems of the same general type as the cell. Their examples of autonomous systems include cells and organisms, as for autopoiesis, but also molecular catalytic bi-cycles, species, and colonies (for details see Christensen and Hooker, 2000). Regarding the differences between their theory of autonomy and the theory of autopoiesis, Christensen and Hooker (2000) argue that the paradigm case of autopoiesis is the operationally closed system that produces all its components within itself, whereas their theory of autonomy emphasizes agent-environment interaction and a “directive organisation [that] induces pattern-formation of energy flows from the environmental milieu into system-constitutive processes”. However, as Varela (1997: 82) pointed out, in the theory of autopoiesis the term operational closure “is used in its mathematical sense of recursivity, and not in the sense of closedness or isolation from interaction, which would be, of course, nonsense”.
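The contrast Christensen and Hooker draw between passive, rock-like cohesion and active, cell-like cohesion can be made concrete with a deliberately minimal numerical sketch. The following toy model is my own illustrative construction under stated assumptions, not an implementation of anything in their paper: a single "integrity" variable stands in for the system's cohesive bonds, which either persist passively (the rock) or decay continuously and are rebuilt only by channelling an environmental energy flow into repair (the cell).

```python
# Toy sketch (illustrative only, not from Christensen and Hooker, 2000):
# rock-like cohesion is passive and never remade; cell-like cohesion
# decays continuously and persists only while an environmental energy
# flow is channelled into rebuilding it.

def integrity_after(steps, decay, repair_rate, energy_influx):
    """Return total bond integrity (1.0 = intact, ~0.0 = disintegrated)."""
    integrity = 1.0
    for _ in range(steps):
        integrity -= decay * integrity                      # bonds degrade
        repair = min(energy_influx, repair_rate * (1.0 - integrity))
        integrity = min(1.0, integrity + repair)            # active remaking
    return integrity

# A rock: very slow decay, but no self-repair whatsoever.
rock = integrity_after(steps=1000, decay=0.0001, repair_rate=0.0, energy_influx=0.0)

# A cell: fast decay, actively compensated by energy-driven repair.
cell = integrity_after(steps=1000, decay=0.05, repair_rate=0.5, energy_influx=1.0)

# The same cell with the energy flow cut off: it disintegrates.
starved = integrity_after(steps=1000, decay=0.05, repair_rate=0.5, energy_influx=0.0)

print(f"rock ~{rock:.2f}, cell ~{cell:.2f}, starved cell ~{starved:.4f}")
```

The point of the sketch is only the qualitative asymmetry: the rock persists because nothing needs doing, whereas the cell ends up with comparable integrity only for as long as energy flows into its own self-maintenance, and collapses as soon as that flow stops.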
8 T. Ziemke / BioSystems 148 (2016) 4–11
What these theoretical frameworks share is the view that living
organisms have a particular organization, and that they take this
organization to be fundamental to natural cognition. This is also
the case for Bickhard’s (1993, 2009) notion of cognitive systems as
recursively self-maintaining, far-from-thermodynamic-equilibrium
systems. As Bickhard (personal communication) points out, current
robots have “no intrinsic stake in the world nor in their existence
in the world nor in their existence as social agents”. Discussing
the case of a robot that regularly recharges its battery, which is a
common scenario in embodied AI research (e.g. Montebelli et al.,
2013), Bickhard (2009) emphasizes that the “contrast with the bio-
logical case arises in the fact that most of the robot’s body is not
far-from-equilibrium, cannot be self-maintained, and certainly not
recursively self-maintained. Conversely, the only part of the robot Fig. 2. Damasio’s illustration of ‘levels of automated homeostatic regulation, from
simple to complex’, constituting what Panksepp (2005) called “a multi-tiered affec-
that is far from equilibrium, the battery, is not self-maintaining”.
tively embodied view of mind”. Illustration from Ziemke (2008: 406); adapted from
Interestingly, despite all similarities in the above theoretical
Damasio (2003: 32).
frameworks, while the theory of autopoiesis and the underlying
biology of cognition adopt a strictly anti-representationalist view
of cognition, in Bickhard’s interactivist theory of mind, with which ing the work of Damasio (1998, 1999, 2003), Panksepp (2005), and
also Christensen and Hooker sympathize, so-called interactive rep- Prinz (2004), as well their historical predecessors, James (1884) and
resentations play a crucial role. This indicates that Chemero’s above Lange (1885). What these theories agree on, in a nutshell, is that
illustration of the embodied cognition research landscape might emotions arise from multiple, nested levels of homeostatic reg-
not necessarily provide a complete picture, and there might be ulation of bodily activity (cf. Fig. 2), and that emotional feelings
room for conceptions of cognition as a biological phenomenon are feelings of such bodily changes (cf. Prinz, 2004). According to
that reject the traditional functionalist/computationalist view, and maybe also reject the traditional notion of representation, but without necessarily committing to the eliminativism/anti-representationalism that characterizes radical embodied cognition according to Chemero. We will get back to this point.

The above characterizations of cognitive systems as autonomous, self-maintaining, and to some degree self-producing, living systems of course have a number of historical precursors. These include the concept of autonomy in von Uexküll’s (1928, 1982) theoretical biology and theory of meaning (cf. Ziemke, 2000, 2001a; Ziemke and Sharkey, 2001), as well as Spinoza’s 17th-century concept of the conatus, which Damasio (2003) summarized as follows:

“It is apparent that the continuous attempt at achieving a state of positively regulated life is a deep and defining part of our existence – the first reality of our existence as Spinoza intuited when he described the relentless endeavour (conatus) of each being to preserve itself. . . . Interpreted with the advantages of current hindsight, Spinoza’s notion implies that the living organism is constructed so as to maintain the coherence of its structures and functions against numerous life-threatening odds. The conatus subsumes both the impetus for self-preservation in the face of danger and opportunities and the myriad actions of self-preservation that hold the parts of the body together. In spite of the transformations the body must undergo as it develops, renews its constituent parts, and ages, the conatus continues to form the same individual and respect the same structural design.” (Damasio, 2003, p. 36)

Damasio has criticized “the prevalent absence of a notion of organism in the sciences of mind and brain” as a problem, which he elaborated as follows: “It is not just that the mind remained linked to the brain in a rather equivocal relationship, but that the brain remained consistently separated from the body and thus not part of the deeply interwoven mesh of body and brain that defines a complex living organism” (Damasio, 1998, p. 84). His own theoretical framework, much in line with his above interpretation of Spinoza, is based on the view that “[nature has] built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it” (Damasio, 1994, p. 128). This view is shared by somatic theories of emotion and consciousness. According to Panksepp (2005), somatic theories constitute “a multi-tiered affectively embodied view of mind” (Panksepp, 2005, p. 63).

Such somatic theories can be considered a biologically-based, but representational, view of cognition, although with a non-traditional twist: here the representations are not body-internal representations of body- or agent-external objects or states of affairs, but rather the brain’s representations, first and foremost, of bodily activity, and thus, indirectly, of course also of the environment. According to Prinz, “emotions can represent core relational themes without explicitly describing them. Emotions track bodily states that reliably co-occur with important organism–environment relations, so emotions reliably co-occur with important organism–environment relations. Each emotion is both an internal body monitor and a detector of dangers, threats, losses, or other matters of concern. Emotions are gut reactions; they use our bodies to tell us how we are faring in the world” (Prinz, 2004, p. 69). Similarly, Damasio has proposed that the essence of feelings of emotion lies in the mapping of bodily emotional states in the body-sensing regions of the brain, including somato-sensory cortex (Damasio, 1999, 2004). Such mental imagery of emotional bodily reactions is also crucial to Damasio’s concept of the as-if body loop, a neural internal simulation mechanism (using the brain’s body maps, bypassing the actual body), whose cognitive function and adaptive value are as follows: “Whereas emotions provide an immediate reaction to certain challenges and opportunities . . . [t]he adaptive value of feelings comes from amplifying the mental impact of a given situation and increasing the probabilities that comparable situations can be anticipated and planned for in the future so as to avert risks and take advantage of opportunities” (Damasio, 2004, pp. 56–57).

Since the notion of homeostatic bodily regulation has been mentioned in the above discussion of somatic theories, it should be noted that the term homeostasis is used here in the wider sense of Damasio and Carvalho (2013), i.e. the “process of maintaining the internal milieu physiological parameters (such as temperature, pH and nutrient levels) of a biological system within the range that facilitates survival and optimal function”, rather than in the narrower sense that emphasizes constancy. As discussed in more detail elsewhere (Vernon et al., 2015), of particular relevance to theories and models of embodied cognition is in fact the concept of predictive (self-)regulation or allostasis (Sterling, 2004, 2012; Schulkin, 2011). In line with the notion of the as-if body loop
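Damasio’s description of the as-if body loop can be caricatured computationally. The following Python sketch is purely illustrative: the body map, the valence function, and all numbers are assumptions made up for this toy example, not anything proposed by Damasio. The point is only the structure of the mechanism: candidate actions are appraised by internally simulating their predicted bodily consequences, without executing any of them.

```python
# Toy caricature of an 'as-if body loop': candidate actions are appraised by
# internally simulating their predicted bodily consequences via a body map,
# bypassing the actual body. The body map, the valence function, and all
# numbers are assumptions for illustration only.

def body_map(state, action):
    """Forward model of the body: predicted internal state after an action."""
    return {"energy": state["energy"] + action["energy_delta"],
            "damage": state["damage"] + action["risk"]}

def valence(state):
    """Somatic appraisal: higher is better for the organism."""
    return state["energy"] - 2.0 * state["damage"]

def choose(state, actions):
    # 'As-if' evaluation: simulate, appraise, select; nothing is executed.
    return max(actions, key=lambda a: valence(body_map(state, a)))

state = {"energy": 0.5, "damage": 0.0}
actions = [
    {"name": "approach_food", "energy_delta": +0.4, "risk": 0.1},
    {"name": "flee",          "energy_delta": -0.1, "risk": 0.0},
    {"name": "fight",         "energy_delta": +0.2, "risk": 0.3},
]
print(choose(state, actions)["name"])  # → approach_food
```

Here the simulated outcome of approaching food yields the highest valence (0.7, versus 0.4 for fleeing and 0.1 for fighting), so that action is selected; the “loop” never touches the actual body, only its internal map.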
discussed above, Sterling (2012) points out: “The brain monitors a very large number of external and internal parameters to anticipate changing needs, evaluate priorities, and prepare the organism to satisfy them before they lead to errors. The brain even anticipates its own local needs, increasing flow to certain regions — before there is an error signal”. In a similar vein, Seth (2013) has argued that “an organism should maintain well-adapted predictive models of its own physical body . . . and of its internal physiological condition” (Seth, 2013, p. 567).

To sum up this section before we move to the next: there are a number of overlapping theoretical frameworks that take cognition to be a genuinely biological phenomenon occurring in living organisms, and that therefore emphasize the fundamental role played by the living body in general, and by mechanisms of homeostatic/allostatic self-regulation in particular, in natural embodied cognition. This clearly goes beyond the somewhat mechanistic view of the physical body as the computational mind’s sensorimotor interface to the world, which pervades much of mainstream (embodied) cognitive science and in particular embodied AI. While the controversial issue of ‘representation’ naturally is too complex to discuss in detail – let alone resolve – in this paper, the above discussion hopefully illustrates at least to some degree the potential role and nature of non-traditional ‘representations’¹ – in the sense of predictive models – in biologically-based, non-functionalist conceptions of embodied cognition.

4. Discussion and conclusion

As Black (2014) has recently pointed out, we “seem to have an innate propensity to see bodies wherever we look” (p. 16). This is due to the fact that we “consistently anthropomorphise machines, our attempts to conceptualise unfamiliar new artefacts falling back on the most fundamental and sophisticated frameworks for understanding animation we have – those related to the human body” (Black, 2014, p. 38). Hence, the question “What is a body?” or “What is embodied?” is actually rarely asked. In embodied AI research, robots are usually considered ‘embodied’ as a matter of fact, simply because, unlike most traditional AI systems, they are physical and can interact with their environment through sensors and actuators. The fact that robot bodies in most cases actually have very little in common with the bodies of living organisms is not given equally much attention.

As discussed in Section 2, most work in embodied cognitive science falls into the category Chemero (2009) refers to as mainstream embodied cognitive science, which still is more or less compatible with traditional computationalist and representationalist conceptions of cognition, which to some degree reduce the body to the computational mind’s physical/sensorimotor interface to the world that it represents internally. Radical embodied cognitive science rejects these traditional notions, but at least in Chemero’s formulation of the main claims/tenets of radical embodied cognition, it also does not emphasize the biological nature of embodied cognition as such, but focuses on explaining perception and action in dynamical-systems terms rather than representational terms. Accordingly, most research in embodied AI, of both the representationalist and the anti-representationalist type, has focused on sensorimotor interaction between agents and their physical and social environments, i.e. on grounding cognition in sensorimotor interaction. Naturally, physical robots and simulated robotic agents are the tools of choice for this type of embodied AI.

From the perspective of the theories discussed in Section 3, though, embodied cognition is not only grounded in sensorimotor interaction with the environment. At least in the case of natural cognition, that sensorimotor interaction with the environment is itself deeply rooted in the underlying biological mechanisms, and more specifically in layered/nested networks of bodily self-regulation mechanisms. According to Damasio and others, the connection lies in emotional mechanisms playing a crucial role in this self-regulation, fulfilling on the one hand a survival-related (bioregulatory, adaptive, homeostatic/allostatic) function and, on the other hand, constituting the basis of higher-level cognition, self and consciousness. Panksepp (2005) refers to this as “a multi-tiered affectively embodied view of mind”, which clearly goes beyond the physical/sensorimotor embodiment that current robots are limited to.

If we adopt the latter view of embodied cognition as a first and foremost biological phenomenon, then embodied AI clearly still lacks complex models of such multi-level networks/hierarchies of self-regulation mechanisms – reaching from low-level bioregulatory mechanisms to higher levels of embodied emotion and cognition – and possibly also the development of computational cognitive architectures for robotic systems taking some of these mechanisms into account (cf. Ziemke and Lowe, 2009; Lowe and Ziemke, 2011; Vernon et al., 2015). Accordingly, an important potential contribution of synthetic biology research to embodied AI, and to the understanding of natural embodied cognition, probably lies first and foremost in modeling/understanding/synthesizing the nature of organisms as such layered networks. This would be an important complement to current work in embodied AI and cognitive robotics, much of which is predominantly concerned with layered architectures for dealing with the complexities of perceiving and acting in the world.

In closing, and at the risk of pointing out the obvious, it should also be noted that the two types of embodied AI discussed here, focusing on grounding embodied cognition in sensorimotor interaction, and on grounding sensorimotor interaction in bodily regulation, respectively, are of course highly complementary. This applies not only to embodied AI, but also to our understanding of embodied cognition in general. Hence, Johnson’s (2007) description of his own work on the embodiment of language, which initially focused on understanding the grounding of language in sensorimotor interaction, also characterizes very well the development of embodied approaches in cognitive science and AI in general:

“In retrospect I now see that the structural aspects of our bodily interactions with our environments upon which I was focusing were themselves dependent on even more submerged dimensions of bodily understanding. It was an important step to probe below concepts, propositions, and sentences into the sensorimotor processes by which we understand our world, but what is now needed is a far deeper exploration into the qualities, feelings, emotions, and bodily processes that make meaning possible.” (Johnson, 2007)

Acknowledgement

This work was supported in part by the Knowledge Foundation, Stockholm, under SIDUS grant agreement no. 20140220 (AIR, “Action and intention recognition in human interaction with autonomous systems”). The author would like to thank Luisa Damiano, Sam Thellman, and the anonymous reviewers for useful feedback on earlier versions of this paper.

¹ Considering the emphasis on prediction, it might in fact be more accurate to think of them as pre-presentations, or simply pre-sentations.

References

Anderson, M., 2003. Embodied cognition: a field guide. Artif. Intell. 149 (1), 91–130.
Barandiaran, X., Moreno, A., 2006. On what makes certain dynamical systems cognitive: a minimally cognitive organization program. Adapt. Behav. 14 (2), 171–185.
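The contrast between reactive (homeostatic) and predictive (allostatic) regulation that Sterling and Seth describe can be illustrated with a toy simulation. The following Python sketch is an assumption-laden caricature — the regulated variable, the gain, and the perfectly known future demand are all made up for illustration — but it captures the structural difference: the allostatic controller compensates for an anticipated demand before any error signal arises, whereas the homeostatic one corrects only afterwards.

```python
# Toy contrast between reactive (homeostatic) and predictive (allostatic)
# regulation. The plant, the gain, and the perfectly known future demand
# are assumptions made up for illustration only.

def simulate(controller, demand, setpoint=37.0, gain=0.5):
    """Run a regulator against a sequence of demands; return mean |error|."""
    state, errors = setpoint, []
    for t, d in enumerate(demand):
        u = controller(state, setpoint, demand, t, gain)
        state = state + u - d              # the demand perturbs the state
        errors.append(abs(state - setpoint))
    return sum(errors) / len(errors)

def homeostatic(state, setpoint, demand, t, gain):
    # Corrects only AFTER an error signal has appeared.
    return gain * (setpoint - state)

def allostatic(state, setpoint, demand, t, gain):
    # Additionally compensates for the anticipated demand, i.e. acts
    # "before there is an error signal" (cf. Sterling, 2012).
    return gain * (setpoint - state) + demand[t]   # assumes a perfect predictor

demand = [0.0] * 5 + [1.0] * 10 + [0.0] * 5       # a step increase in load
print(simulate(homeostatic, demand))  # > 0: error corrected only afterwards
print(simulate(allostatic, demand))   # 0.0: the error never arises
```

With a perfect predictor the allostatic regulator keeps the mean error at zero; any imperfect predictor would land between the two extremes, which is the sense in which well-adapted predictive models pay off for the organism.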
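The idea of a layered network of self-regulation mechanisms can likewise be caricatured in a few lines of code. In this toy Python sketch, the class names, thresholds, and the crude ‘appraisal’ signal are invented for illustration and are not taken from any published cognitive architecture: a low-level bioregulatory layer monitors an internal energy variable, and its affective appraisal modulates behavior selection in a higher sensorimotor layer.

```python
# Toy sketch of a layered network of bodily self-regulation mechanisms.
# Class names, thresholds, and the 'appraisal' signal are invented for
# illustration; this is not any published cognitive architecture.

class BioregulatoryLayer:
    """Lowest tier: keeps an internal energy variable within a viable range."""
    def __init__(self, energy=1.0, low=0.35):
        self.energy, self.low = energy, low

    def step(self, consumed, metabolic_cost=0.1):
        self.energy = min(1.0, self.energy + consumed - metabolic_cost)
        # A minimal 'affective' appraisal of the internal bodily state.
        return "distress" if self.energy < self.low else "comfort"

class SensorimotorLayer:
    """Higher tier: behavior selection, modulated by the layer below."""
    def select(self, appraisal):
        return "forage" if appraisal == "distress" else "explore"

body, behavior = BioregulatoryLayer(), SensorimotorLayer()
behaviors = []
for food in [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5]:
    behaviors.append(behavior.select(body.step(consumed=food)))
print(behaviors)
# → ['explore', 'explore', 'explore', 'explore', 'explore', 'explore',
#    'forage', 'explore']
```

The point of the sketch is the direction of grounding: the higher layer’s sensorimotor choices are driven from below, by the state of the (simulated) living body, rather than the body serving merely as an interface for the higher layer.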
Beer, R.D., 1995. A dynamical systems perspective on agent–environment interaction. Artif. Intell. 72, 173–215.
Berthouze, L., Ziemke, T. (Eds.), 2003. Connect. Sci. 15 (4), 147–150.
Bickhard, M.H., 1993. Representational content in humans and machines. J. Exp. Theor. Artif. Intell. 5, 285–333.
Bickhard, M.H., 2009. The biological foundations of cognitive science. New Ideas Psychol. 27, 75–84.
Black, D., 2014. Embodiment and Mechanisation: Reciprocal Understanding of Body and Machine from the Renaissance to the Present. Ashgate, Farnham, UK.
Brooks, R.A., 1991. Intelligence without reason. In: Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91). Morgan Kaufmann, San Mateo, CA, pp. 569–595.
Brooks, R.A., 1993. The engineering of physical grounding. In: Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society. Lawrence Erlbaum Associates, Boulder, Colorado, pp. 153–154.
Chemero, A., 2009. Radical Embodied Cognitive Science. MIT Press, Cambridge, MA.
Chrisley, R., Ziemke, T., 2002. Embodiment. In: Encyclopedia of Cognitive Science. Macmillan Publishers, London, pp. 1102–1108.
Chrisley, R., 2003. Embodied artificial intelligence. Artif. Intell. 149 (1), 131–150.
Christensen, W.D., Hooker, C.A., 2000. Autonomy and the emergence of intelligence: organised interactive construction. Commun. Cogn. – Artif. Intell. 17 (3–4), 133–157.
Clark, A., 1997. Being There. MIT Press, Cambridge, MA.
Clark, A., 1999. An embodied cognitive science? Trends Cogn. Sci. 9, 345–351.
Costall, A., 2006. Bringing the body back to life: James Gibson’s ecology of agency. In: Ziemke, T., Zlatev, J., Frank, R. (Eds.), Body, Language and Mind. Volume 1: Embodiment. Mouton de Gruyter, Berlin.
Damasio, A., Carvalho, G.B., 2013. The nature of feelings: evolutionary and neurobiological origins. Nat. Rev. Neurosci. 14, 143–152, http://dx.doi.org/10.1038/nrn3403.
Damasio, A.R., 1994. Descartes’ Error. Vintage Books.
Damasio, A.R., 1998. Emotion in the perspective of an integrated nervous system. Brain Res. Rev. 26, 83–86.
Damasio, A.R., 1999. The Feeling of What Happens: Body, Emotion and the Making of Consciousness. Vintage, London.
Damasio, A.R., 2003. Looking for Spinoza: Joy, Sorrow and the Feeling Brain. Harcourt, Orlando, FL.
Damasio, A.R., 2004. Emotions and feelings: a neurobiological perspective. In: Manstead, A., Frijda, N., Fischer, A. (Eds.), Feelings and Emotions – The Amsterdam Symposium. Cambridge University Press.
Dreyfus, H., 1979. What Computers Can’t Do. MIT Press, Cambridge, MA.
Froese, T., Ziemke, T., 2009. Enactive artificial intelligence: investigating the systemic organization of life and mind. Artif. Intell. 173, 466–500, http://dx.doi.org/10.1016/j.artint.2008.12.001.
Gallagher, S., 2005. How the Body Shapes the Mind. Oxford University Press, Oxford.
Goldinger, S.D., Papesh, M.H., Barnhart, A.S., Hansen, W.A., Hout, M.C., 2016. The poverty of embodied cognition. Psychon. Bull. Rev. 23, 959–978, http://dx.doi.org/10.3758/s13423-015-0860-1.
Greenspan, R.J., Baars, B.J., 2005. Consciousness eclipsed: Jacques Loeb, Ivan P. Pavlov, and the rise of reductionistic biology after 1900. Conscious. Cogn. 14, 219–230.
Harnad, S., 1989. Minds, machines and Searle. J. Exp. Theor. Artif. Intell. 1 (1), 5–25.
Harnad, S., 1990. The symbol grounding problem. Phys. D 42, 335–346.
James, W., 1884. What is an emotion? Mind 9, 188–205.
Johnson, M., 2007. The Meaning of the Body: Aesthetics of Human Understanding. University of Chicago Press, Chicago.
Killeen, P., 2016. The House of the Mind. Commentary on Goldinger et al. (2016), The poverty of embodied cognition. Available at: https://asu.academia.edu/PeterKilleen (accessed 05.08.16).
Lange, C.G., 1885. Om Sindsbevægelser – Et Psyko-fysiologisk Studie. Jacob Lunds, Copenhagen.
Lindblom, J., 2015. Embodied Social Cognition. Springer Verlag, Heidelberg, Germany.
Loeb, J., 1918. Forced Movements, Tropisms, and Animal Conduct. Lippincott Company, Philadelphia.
Lowe, R., Ziemke, T., 2011. The feeling of action tendencies: on the emotional regulation of goal-directed behavior. Front. Psychol. 2, 346, http://dx.doi.org/10.3389/fpsyg.2011.00346.
Lungarella, M., Metta, G., Pfeifer, R., Sandini, G., 2003. Developmental robotics: a survey. Connect. Sci. 15 (4), 151–190.
Maturana, H.R., Varela, F.J., 1980. Autopoiesis and Cognition. Reidel, Dordrecht.
Maturana, H.R., Varela, F.J., 1987. The Tree of Knowledge – The Biological Roots of Human Understanding. Shambhala, Boston, MA.
Montebelli, A., Lowe, R., Ziemke, T., 2013. Towards metabolic robotics: insights from modeling embodied cognition in a bio-mechatronic symbiont. Artif. Life 19 (3–4), 299–315.
Morse, A., Herrera, C., Clowes, R., Montebelli, A., Ziemke, T., 2011. The role of robotic modeling in cognitive science. New Ideas Psychol. 29 (3), 312–324.
Nolfi, S., Floreano, D., 2000. Evolutionary Robotics. MIT Press, Cambridge, MA.
Panksepp, J., 2005. Affective consciousness: core emotional feelings in animals and humans. Conscious. Cogn. 14, 30–80.
Pavlov, I.P., 1927. Conditioned Reflexes. Oxford University Press, London.
Pezzulo, G., Barsalou, L.W., Cangelosi, A., Fischer, M.H., McRae, K., Spivey, M.J., 2013. Computational Grounded Cognition: a new alliance between grounded cognition and computational modeling. Front. Psychol. 3, 612, http://dx.doi.org/10.3389/fpsyg.2012.00612.
Pfeifer, R., Scheier, C., 1999. Understanding Intelligence. MIT Press, Cambridge, MA.
Pfeifer, R., 1995. Cognition – perspectives from autonomous agents. Rob. Auton. Syst. 15, 47–70.
Piaget, J., 1954. The Construction of Reality in the Child. Basic Books, New York. Originally appeared as: Piaget, J., 1937. La construction du réel chez l’enfant. Delachaux et Niestlé, Neuchâtel, Switzerland.
Preston, J., Bishop, M. (Eds.), 2002. Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press, Oxford.
Prinz, J., 2004. Gut Reactions – A Perceptual Theory of Emotion. Oxford University Press, Oxford.
Schulkin, J., 2011. Social allostasis: anticipatory regulation of the internal milieu. Front. Evolut. Neurosci. 2, 111, http://dx.doi.org/10.3389/fnevo.2010.00111.
Searle, J.R., 1980. Minds, brains, and programs. Behav. Brain Sci. 3 (3), 417–457.
Seth, A.K., 2013. Interoceptive inference, emotion, and the embodied self. Trends Cogn. Sci. 17, 565–573, http://dx.doi.org/10.1016/j.tics.2013.09.007.
Shapiro, L., 2010. Embodied Cognition. Routledge.
Sharkey, N.E., Ziemke, T., 1998. A consideration of the biological and psychological foundations of autonomous robotics. Connect. Sci. 10 (3–4), 361–391.
Sharkey, N.E., Ziemke, T., 2001. Mechanistic vs. phenomenal embodiment: can robot embodiment lead to strong AI? Cognit. Syst. Res. 2 (4), 251–262.
Skinner, B.F., 1938. The Behavior of Organisms. Appleton-Century-Crofts, New York.
Steels, L., 1994. The artificial life roots of artificial intelligence. Artif. Life 1, 75–110.
Sterling, P., 2004. Principles of allostasis. In: Schulkin, J. (Ed.), Allostasis, Homeostasis, and the Costs of Adaptation. Cambridge University Press, Cambridge, pp. 17–64.
Sterling, P., 2012. Allostasis: a model of predictive regulation. Physiol. Behav. 106, 5–15, http://dx.doi.org/10.1016/j.physbeh.2011.06.004.
Stewart, J., 1996. Cognition = life: implications for higher-level cognition. Behav. Process. 35, 311–326.
Thompson, E., 2007. Mind in Life. Harvard University Press, Cambridge, MA.
Varela, F.J., Maturana, H.R., Uribe, R., 1974. Autopoiesis: the organization of living systems: its characterization and a model. Biosystems 5, 187–196.
Varela, F.J., Thompson, E., Rosch, E., 1991. The Embodied Mind: Cognitive Science and Human Experience. MIT Press, Cambridge, MA.
Varela, F.J., 1979. Principles of Biological Autonomy. Elsevier, New York.
Varela, F.J., 1997. Patterns of life: intertwining identity and cognition. Brain Cogn. 34, 72–87.
Vernon, D., Lowe, R., Thill, S., Ziemke, T., 2015. Embodied cognition and circular causality: on the role of constitutive autonomy in the reciprocal coupling of perception and action. Front. Psychol. 6, 1660, http://dx.doi.org/10.3389/fpsyg.2015.01660.
von Uexküll, J., 1928. Theoretische Biologie. Springer Verlag, Berlin.
von Uexküll, J., 1982. The theory of meaning. Semiotica 42 (1), 25–82. Originally appeared as: von Uexküll, J., 1940. Bedeutungslehre. Verlag J.A. Barth, Leipzig.
Watson, J., 1925. Behaviorism. Norton, New York.
Wilson, A.D., Golonka, S., 2013. Embodied cognition is not what you think it is. Front. Psychol. 4, 58, http://dx.doi.org/10.3389/fpsyg.2013.00058.
Ziemke, T., Lowe, R., 2009. On the role of emotion in embodied cognitive architectures: from organisms to robots. Cogn. Comput. 1, 104–117, http://dx.doi.org/10.1007/s12559-009-9012-0.
Ziemke, T., Sharkey, N.E., 2001. A stroll through the worlds of robots and animals. Semiotica 134 (1–4), 701–746.
Ziemke, T., Thill, S., 2014. Robots are not embodied! Conceptions of embodiment and their implications for social human–robot interaction. In: Seibt, Hakli, Nørskov (Eds.), Sociable Robots and the Future of Social Relations. IOS Press, Amsterdam, pp. 49–53.
Ziemke, T., Zlatev, J., Frank, R. (Eds.), 2006. Body, Language and Mind. Volume 1: Embodiment. Mouton de Gruyter, Berlin.
Ziemke, T., 1999. Rethinking grounding. In: Riegler, A., Peschl, M., von Stein, A. (Eds.), Understanding Representation in the Cognitive Sciences. Plenum Press, New York.
Ziemke, T., 2000. Situated Neuro-robotics and Interactive Cognition. Doctoral Dissertation. University of Sheffield, UK.
Ziemke, T., 2001a. The construction of ‘Reality’ in the robot. Found. Sci. 6 (1), 163–233.
Ziemke, T., 2001b. Are robots embodied? In: Balkenius, C., Zlatev, J., Breazeal, C., Dautenhahn, K., Kozima, H. (Eds.), Proceedings of the First International Workshop on Epigenetic Robotics: Modelling Cognitive Development in Robotic Systems, vol. 85. Lund University Cognitive Studies, Lund, Sweden, pp. 75–83.
Ziemke, T., 2003. What’s that thing called embodiment? In: Alterman, R., Kirsh, D. (Eds.), Proceedings of the 25th Annual Conference of the Cognitive Science Society. Lawrence Erlbaum, Mahwah, NJ, pp. 1305–1310.
Ziemke, T., 2004. Embodied AI as science: models of embodied cognition, embodied models of cognition, or both? In: Iida, F., Pfeifer, R., Steels, L., Kuniyoshi, Y. (Eds.), Embodied Artificial Intelligence. Springer, Heidelberg, pp. 27–36.
Ziemke, T., 2007. What’s life got to do with it? In: Chella, A., Manzotti, R. (Eds.), Artificial Consciousness. Imprint Academic, Exeter, pp. 48–66.
Ziemke, T., 2008. On the role of emotion in biological and robotic autonomy. Biosystems 91, 401–408.
Zlatev, J., 2001. The epigenesis of meaning in human beings, and possibly robots. Minds Mach. 11 (2), 155–195.
Zlatev, J., 2002. Meaning = life (+culture): an outline of a unified biocultural theory of meaning. Evol. Commun. 4, 253–296.
Zlatev, J., Balkenius, C., 2001. Introduction: why epigenetic robotics? In: Balkenius, C., Zlatev, J., Breazeal, C., Dautenhahn, K., Kozima, H. (Eds.), Proceedings of the First International Workshop on Epigenetic Robotics: Modelling Cognitive Development in Robotic Systems, vol. 85. Lund University Cognitive Studies, Lund, Sweden, pp. 1–4.