A Story of Resonance

Vicente Raja
MA in Philosophy, Universidad de Murcia (Spain)
MA in Philosophy, University of Cincinnati (USA)

A thesis submitted for the degree of Doctor of Philosophy

(PhD)

Chair: Prof. Anthony Chemero

May 24th, 2018

Department of Philosophy
McMicken College of Arts and Sciences
University of Cincinnati

Abstract

I will tell you a story. A story that is also a theory. A story of embodied cognition that is as well a story of neuroscience. A story about perception, action, and other psychological events. A story about the role of the brain in these events. A story of resonance and an ecological cognitive architecture. Ecological psychology, I contend, must be complemented with a story about the role of the CNS in perception, action, and cognition. To arrive at such a story while staying true to the tenets of ecological psychology, it will be necessary to flesh out the central metaphor according to which organisms perceive their environment by resonating to information in energy patterns: what is needed is a theory of resonance. Here I offer the two main elements of such a theory: a framework (Anderson’s neural reuse) and a methodology based on behavioral and coordination dynamics. In doing so, I examine the significance of embodiment, the explanatory strategy of ecological psychology, the compatibility of different cognitive architectures with ecological psychology, and the plausibility of resonance in both biological and explanatory terms. Finally, I review some future directions for research on resonance.


“Such then would be the scope of pragmatism—first, a method; and second, a genetic theory of what is meant by truth” W. James, Pragmatism

“We say that we are only criticizing some antiquated, specifically philosophical dogmas. But, of course, what we call dogmas are exactly what our opponents call common sense. Adherence to these dogmas is what they call being rational.” R. Rorty, Philosophy and Social Hope

“… sea water, this man is painting the sea with the sea—and it is a thought that brings a shiver.” A. Baricco, Ocean Sea

“Using words to talk of words is like using a pencil to draw a picture of itself, on itself.” Patrick Rothfuss, The Name of the Wind

“None should ask you for the name of the one who tells the story.” Blind Guardian, The Bard’s Song


Acknowledgements

My academic life is a lucky life. During all these years I have had the opportunity to meet many people who have influenced and continue to influence my thought and my work. I am sure this dissertation would have been different if I had not met all these fantastic friends and scholars. I am indebted to and really grateful to all of them.

First, I have to thank the four people I take to be my mentors and my major inspiration. The first of them is my advisor, Tony Chemero. During my years in grad school, I have lived one of the most awesome experiences a student can live: my favorite living philosopher has also been my advisor. It is difficult to say to what extent I am influenced by his work and how much I have enjoyed each one of our conversations. At this point, it is not even important that he is obviously wrong about Dio and Black Sabbath, or about The Rolling Stones and Led Zeppelin. I take my dissertation to be just a footnote to his Radical Embodied Cognitive Science.

The second is Paco Calvo. To say that he is one of the main reasons for whatever academic achievements I have made is an understatement. He was the first person in academia who trusted me, but not only that. He was the one who introduced me to the field of the cognitive sciences, he taught me the importance of being radical, he gave me my first opportunity to give a talk and to write an academic paper, and he has always been smart and supportive… This is a never-ending list. I still ask him for advice when I have doubts about the path to follow. I just hope to eventually be able to work hard enough at the MINT Lab to give back some of his help.

The third is Monica Vilhauer. She was my philosophy professor the first time I came to the US. She taught me how to live philosophy with passion and how to write a philosophical paper. Her classes on Gadamer and Nietzsche are still among my favorite experiences in academia. And, more importantly, it was in her classes that I decided I wanted to follow the path of academia. I just wanted to be as good a philosopher as she is.

And last but not least is Patricio Peñalver. He was the professor of the first philosophy class I took and a long-lasting influence on me. With him, I learned that philosophy is interesting, profound, fun, and ironic. He was always willing to argue in class with a smile on his face. I have no doubt that if my first philosophy professor had been someone else, I would not be writing these lines right now.

Of course, I have to thank the members of my dissertation committee. Valerie Hardcastle, Tom Polger, and Michael Anderson have improved this dissertation to an extent that would have been unimaginable without their questions, challenges, advice, and inspiration. Also, I have to thank the Department of Philosophy at the University of Cincinnati for its support at many levels during the last four years. Both in the Department of Philosophy and in the Department of Psychology at the University of Cincinnati I have found faculty and staff who have made my years in grad school a great experience. Angela Potochnik, Zvi Biener, Steven Wilson, Larry Jost, John McEvoy, Jenefer Robinson, Heidi Kloos, Mike Riley, Mike Richardson, Kevin Shockley, and Paula Silva have helped me and taught me on innumerable occasions. And very special thanks to Vesna Kocani. The Department of Philosophy at the University of Cincinnati would not be the same without her, and I do not know what would have happened to me without her help.

All my grad and postgrad fellows deserve a place here. My days in grad school have been infinitely better thanks to my friends Satoshi, Ed, Gui, Chris, Patric, Frank, and Jonathan, and all the hours we have spent discussing the perils of our ideas. Also, I cannot imagine better people to hang out with or to discuss philosophy or psychology with than Ethan, Louie, Maurice, Patrick, Walter, Richard, Chris, Johan, Kyle, Valentina, Tracy, Alex, Matt, Sahar, Amanda, Cory, Mirabel, Scott, and Jenny. Because of them, I am very happy I chose Cincinnati to pursue my PhD.

There are many other people I am very grateful to for accompanying me on this journey. First, without a doubt, Lore, Manolo, and Miguel. They make continuing to think about information, perception, affordances, or anything else much more fun. And, of course, Valeria. She arrived at the end to give meaning to everything. If I had to choose what I want to carry into the rest of my life from my years studying philosophy, they would be at the top of the list.

On the other hand, I have no words to describe what Chari, Markitos, Franin, and Tomás mean to me. Better to say nothing, because deep down they already know. I like being the way I am because I am a little like them. Nor do I want to forget Los Quintos—Frutos, Cañi, Pelli, Chechu, Jose Luis, Pipa, Pep, Fran, and all the others—or Lidia, Ana, Vane, Mari, Inés, mi paya Mari, and everyone else I am leaving out because I am a disaster.

Of course, my greatest thanks go to my family. My grandmothers and grandfathers, my uncles and aunts, my cousins, my brother… All of them have contributed in one way or another to my work. And, without a doubt, my parents. There is not enough gratitude in this world, in universes yet to be discovered, or in the greatest fantasy the human mind can imagine. My greatest gift is that they can be proud of me.

I am sure I am forgetting very important people to whom I am grateful. I hope they can forgive me.

The ones here and the ones I am forgetting are fundamental parts of this work. Of course, you should blame them for whatever is wrong in the following lines, but… Who cares? I am happy they are my friends.


Table of Contents

List of Figures ...... 10

Opening: A Story of Resonance ...... 11

Chapter 1: Philosophical Maneuvers in Embodiment ...... 20
  1.1 Embodiment ...... 21
  1.2 Gibson ...... 29
  1.3 Gibson, Mechanism, and Aristotelianism ...... 38
  1.4 Further Consequences ...... 41

Chapter 2: Ecological Psychology and Resonance ...... 44
  2.1 The Explanatory Strategy of Ecological Psychology ...... 45
  2.2 Resonance ...... 53
  2.3 Ecological Psychology as Anti-Computationalism ...... 60

Interlude: The Significance of Embodiment ...... 67
  I.1 Kepler or the Instrumental Use of the Body ...... 68
  I.2 Gibson or the Embodied Use of the Body ...... 71
  I.3 The Significance of Embodiment Misunderstood ...... 76

Chapter 3: A Framework for the Story of Resonance ...... 81
  3.1 ACT-R, SPA, Predictive Processing and the Like ...... 83
  3.2 Neural Reuse ...... 101
  3.3 Resonance with Neural Reuse ...... 112

Chapter 4: On Plausibility ...... 116
  4.1 The Place of Resonance in Contemporary ...... 118
  4.2 Biological Possibility ...... 133
  4.3 Explanatory Possibility ...... 139

Chapter 5: Methods for the New Theory of Resonance ...... 144
  5.1 Behavioral Dynamics and Resonance ...... 147
  5.2 Multi-Scale Dynamics and Fractal Analysis ...... 170
  5.3 Revisiting Neural Reuse ...... 176

Coda: The Future ...... 185
  C.1 Resonance and Radical Embodied Cognitive Neuroscience ...... 188
  C.2 Resonance and the Future ...... 191

References ...... 197


List of Figures

Figure 1: Psychological Explanation in Computational Theories ...... 45
Figure 2: Psychological Explanation in Ecological Psychology ...... 51
Figure 3: Smart Instrument ...... 57
Figure 4: Cognitive Architectures ...... 85
Figure 5: ACT-R ...... 87
Figure 6: SPA Basic Unit ...... 89
Figure 7: SPA Units Connected ...... 90
Figure 8: Predictive Processing ...... 97
Figure 9: Functional Fingerprint for Neural Reuse ...... 104
Figure 10: Modularity, Holism, and Neural Reuse ...... 105
Figure 11: A 2-Layered ART Architecture ...... 128
Figure 12: Tau in Nucleus Rotundus Neurons ...... 137
Figure 13: Tau-Coupling in Babies’ Visual Cortex ...... 138
Figure 14: Taga’s Model of Bipedal Locomotion ...... 142
Figure 15: Geometrical Representations of Dynamic Systems ...... 151
Figure 16: Dynamic Systems Theory in Cognitive Science ...... 152
Figure 17: Behavioral Dynamics ...... 155
Figure 18: Fajen and Warren’s Model of Steering ...... 157
Figure 19: Fajen and Warren’s Model & Sensory Substitution ...... 159
Figure 20: Behavioral Dynamics and Resonance ...... 165
Figure 21: A Model for Resonance ...... 169
Figure 22: Schemas of Neural Coordination ...... 180
Figure 23: Coordination Dynamics and TALoNS ...... 192
Figure 24: Randall Beer’s Minimal Cognitive Agents ...... 193


Opening: A Story of Resonance

“We do not know much yet about the neural action of resonance at higher centers, but it too may prove to be the reaching of some optimal state of equilibrium. If the neurophysiologists stopped looking for the storehouse of memory perhaps they would find it.” J. J. Gibson, The Senses Considered as Perceptual Systems

I will tell you a story. A story that is also a theory. A story of embodied cognition that is as well a story of neuroscience. A story about perception, action, and other psychological events. A story about the role of the brain in these events. A story of resonance.

The reason why I choose to tell a story, and not just to put forward an argument or carry out a conceptual analysis of some concept from the philosophical tradition, as would be expected from a dissertation in the field, is not arbitrary. To put it simply, I am about to tell you a story of resonance because I think Stephen Jay Gould was to some extent right in his Wonderful Life when he claimed:

How should scientists operate when they must try to explain the result of history, those inordinately complex events that can occur but once in detailed glory? Many large domains of nature—cosmology, geology, and evolution among them—must be studied with the tools of history. The appropriate methods focus on narrative, not experiment as usually conceived. (Gould 1989: 277; emphasis is mine).


This claim should not be understood, I think, as dismissive of the empirical sciences, but as a defense of a very specific kind of approach to a specific kind of system. The approach is the narrative one—storytelling—and the systems are complex ones.

Complex systems are the most common systems in nature and in our societies. Complex systems include, for example, the Sun, the Earth, the oceans, a forest, the growth of a species within a niche, the global economy, migratory processes, a cycling race, a boiling cauldron, a tiger, a dog, a brain, a cell, me, and you, among others. All of these are systems composed of many interacting parts that undergo qualitative changes through time; namely, all of them are systems with a history. It is precisely such a history that makes narratives the best way to explain them. An explanation of a complex system cannot be a snapshot. It must be a story. A story that tells not only how the system currently is, but also how it got to be that way. This is the reason why, for instance, we study the first moments of the universe, the evolution of species, or the temporal evolution of markets.

The storytelling approach to complex systems provides a better understanding of these systems and of the reasons why they are the way they are. However, I think there is an even more interesting benefit of storytelling, one that is especially relevant for philosophy and other theoretical enterprises. Through the story of a concept or an approach to a phenomenon, a narrative account of a science or a scientific paradigm can be built up, so that the very concept(s) at play in such a field or paradigm may be seen in a new light (e.g., in terms of their coherence or their integration in the overall field).1 A story tells not only about the facts recounted, but also about the story itself, about its pertinence and its relevance.

1 The reader might perceive the echoes of this kind of narrative approach in the philosophy of the 20th century, perhaps remembering the works of Michel Foucault (1966, 1969).


In the story I am about to tell you there is a main character, resonance; a world in which the story develops, the sciences of the mind; and a plot that has to do with the role of the brain in our lives. Like any other main character—like Sal Paradise, like Frodo, like Jim Hawkins, like Harry, or like Kvothe—resonance will come from some place that puts it at the center of the stage, will find friends and enemies, and will live different kinds of adventures. By telling you the story of resonance I hope to capture all these events. I will try to get you to know what resonance is, why it is needed at least in some approaches to the sciences of the mind, what its place within these sciences is, and how we can give a scientific account of it. This story goes like this…

There are different ways to approach the study of the mind. It is fair to say that almost nobody studies the mind from a dualist perspective anymore and that behaviorism, despite several honorable exceptions, is past its golden age. At least since the 1950s, the sciences of the mind have been dominated by what Paul Thagard (2005) called the central analogy of cognitive science: the idea that the mind is to the brain what software is to hardware in a computer, so we can study the mind/brain as if it were a computer. This analogy and its correlates, such as information processing and representation, have been at the core of most of the developments within the cognitive sciences and their philosophical counterpart (i.e., cognitivism or functionalism) over the last seven decades.

Despite its guiding role in the development of the cognitive sciences, the computer metaphor has been challenged, more or less strongly, since soon after it became part of the establishment. Among these challenges, the idea of embodiment or embodied cognition has received a great deal of attention in recent decades. At the core of embodied cognition is the idea that body and environment play a constitutive role in cognition, so an explanation reduced to the computational properties of the brain is inadequate to understand psychological events. To be fair, not all embodied approaches to cognition are against the idea of computation. Some of them just consider that aspects of the body and the environment must be included in the computational process that accounts for cognition. However, a growing part of embodied cognitive science, sometimes referred to as radical embodied cognitive science (see Chemero 2009), explicitly rejects the computer metaphor and proposes a completely new understanding of our psychological lives.

The rejection of the computer metaphor and the embrace of body and environment as constitutive parts of cognitive events allow for a redefinition of many central concepts of the cognitive sciences and open new possibilities for research based on both classic and innovative methodologies. Nevertheless, the focus on body, environment, and their interactions has historically placed embodied cognitive science in a precarious situation regarding the brain. While classic, computational cognitive science claims to have both theoretical and empirical tools to account for the role of the brain in cognition—i.e., the manipulation of mental representations using computational rules—embodied cognitive science lacks an equivalent story in terms of embodiment.

An especially relevant example of this issue is ecological psychology (Gibson 1966, 1979). Ecological psychology is at the root of one of the most important radical embodied research programs in the cognitive sciences, ecological dynamics (Chemero 2009), and, therefore, it is one of our most promising paradigms for embodied cognition. However, it is fair to say that neither the classic ecological approach nor contemporary ecological dynamics provides a theory regarding the role of the brain (the CNS) in psychological events. The only reference to the activity of the CNS we can find within the different ecological approaches is the use of the resonance metaphor: organisms resonate to ecological information, and neural activity somehow plays an important role in such a resonant state. However, the resonance metaphor remains a metaphor and has never been upgraded into a theory that really accounts for the role of the CNS in psychological events. Thus, the main aim of the story I am developing here is to provide such a theory. To do so, I aim to complement ecological psychology (Gibson 1966, 1979) with an operational account of resonance and, therefore, to offer a complete ecological picture of perception, cognition, and action.

More concretely, throughout this story it is my contention that ecological psychology must be accompanied by a theory about the system that enables organisms to resonate with environmental information. Put crudely, resonance is what is going on at the intra-organismic scale, especially in the CNS, with regard to what is going on at the ecological (organism-environment system) scale. It is the aspect of perception, cognition, and action that may be accounted for in terms of the organism’s neurobiological and other physiological processes with respect to agency (e.g., the ongoing intra-organismic activity when it—the organism—detects some information, i.e., when perception happens), just as, say, ecological optics is the aspect of perception that may be accounted for in terms of environmental features with respect to agency.

My main thesis is that the best way to explain resonance while staying true to the main tenets of ecological psychology is to understand how the information generated at the ecological scale also constrains the activity at the intra-organismic scale. Just as a surface resonates to a sound that reaches it by vibrating at the same frequency or at a lawfully related frequency, the intra-organismic system of resonance may be explained by appeal to the very same variable we use to explain the organism-environment interaction, or to a variable lawfully related to it. In other words, resonance is enabled by a system whose functioning may be captured by appeal to the same informational variables we use to explain organisms’ behavior in the environment. This entails that a complete explanation will make reference to both the ecological and the intra-organismic scales, as well as to a specific understanding of the role of brain, body, and environment in perception, action, and cognition. In this sense, the story of resonance is also a story of embodiment. The central question is how this can be.
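To fix intuitions about the physical analogy, here is a minimal worked example (a standard textbook case added only for illustration; the symbols x, F, \omega, \omega_0, and \zeta are mine and carry no ecological meaning). A damped oscillator with natural frequency \omega_0 and damping ratio \zeta, driven at frequency \omega, obeys

\ddot{x} + 2\zeta\omega_0\dot{x} + \omega_0^{2}x = F\cos(\omega t),

and its steady-state response is

x_{ss}(t) = \frac{F}{\sqrt{(\omega_0^{2}-\omega^{2})^{2} + (2\zeta\omega_0\omega)^{2}}}\cos(\omega t - \varphi), \qquad \tan\varphi = \frac{2\zeta\omega_0\omega}{\omega_0^{2}-\omega^{2}}.

The driven variable x ends up oscillating at the driving frequency \omega, with an amplitude and phase lawfully determined by \omega, \omega_0, and \zeta. This is the sense in which one and the same variable, or a lawfully related one, can describe both the driver and the driven system.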

The story of resonance starts by describing the world in which our main character is going to live its adventures. Namely, it starts by analyzing the philosophical maneuvers of embodiment as the conceptual space in which resonance itself finds its place. In what follows, and echoing William James’ definition of pragmatism (1907), I take embodiment to be, first, an explanatory strategy and, second, a thesis regarding the specific constitutive role of body and environment in psychological events. I devote the first half of the story—Chapter 1, Chapter 2, and an Interlude—to setting these ideas up.

In Chapter 1, I address the philosophical maneuvers that give rise to embodiment as an explanatory strategy. Contrary to the common understanding, I locate these philosophical maneuvers in the 17th century as fundamental components of the origin of Modern science (Raja et al. 2017). Embodiment consists of a shift in the explanatory strategy of natural philosophy (science) from the appeal to intrinsic teleological properties of bodies—the (in)famous Aristotelian-Scholastic substantial forms—to laws of interaction between bodies. Such a shift is already present in Kepler’s work on optics and in Galileo’s physics,2 becomes explicit in Descartes’ natural philosophy, and definitively takes center stage with Newton’s Philosophiæ Naturalis Principia Mathematica (1687). Thus, embodiment is not only a concept relevant within the field of cognitive science; it is present in the very constitution of the sciences since the Modern age.

2 The theoretical conditions of possibility of embodiment do not appear only with Kepler or Galileo, though. They can be traced, at least, to the 13th and 14th centuries in Oxford, where the Oxford Calculators (the Mertonian Calculatores, e.g., Robert Grosseteste, Thomas Bradwardine, or Richard Swineshead) argued strongly in favor of the use of mathematics for research in natural philosophy, highly influenced by studies coming from the Muslim world, such as Al-Battani’s work on trigonometry in the 8th and 9th centuries (Raja, in preparation).

When, then, was embodiment introduced into the cognitive sciences? In Chapter 2, I tackle this question. It is my contention that J. J. Gibson is the first psychologist/cognitive scientist who fully embraces the idea of embodiment. Actually, the very name of Gibson’s ecological psychology makes reference to this fact: that psychology is ecological means that behavior must be explained in terms of the interaction between organisms and environment and not by appeal to intrinsic properties of organisms (e.g., representations or internal mental states). By making this move, Gibson pioneers the rejection of the computer metaphor and its correlates, such as representations, computation, and brain-centrism. However, as the computer metaphor is lost with embodiment, Gibson needs to offer, at least, a new metaphor for the role of the CNS in psychological events. It is at this very moment that the main character appears in our story. Gibson proposes the resonance metaphor as a way to start re-thinking neural activity in perception, action, and cognition. Resonance appears for the first time as a relevant component of an embodied cognitive science. In the second part of Chapter 2, I offer a preliminary description of resonance as the main character of the story and evaluate some of the theoretical implications that will be relevant in the rest of the story.

Before I turn to the second half of the story of resonance, I analyze the second aspect of the definition of embodiment provided above. Namely, I offer an account of the specific constitutive role of body and environment in cognitive events. First, I address the Gibsonian idea of the structure of ecological information and how it is directly related to the rejection of the bundle and constancy hypotheses already carried out by the Gestalt psychologists (Koffka, Wertheimer, Köhler; see Ellis 1938). Second, I address the Gibsonian idea of the exploratory agent and how it is related to phenomenological ideas of practice and skillful interaction with the environment coming from Heidegger and Merleau-Ponty. And finally, I analyze how the misunderstanding of the specific constitutive role of body and environment in psychological events creates some problems for some critics of embodiment. This part of the story is the Interlude.

Once the world in which resonance appears has been described and the reasons why embodiment and ecological psychology need resonance have been established, the second part of the story—Chapter 3, Chapter 4, and Chapter 5—is devoted to building up a theory of resonance for the 21st century while staying true to the general notions provided in the first part. In Chapter 3, I explore different possibilities regarding the kind of CNS needed to enable a process of resonance. This is not to say, of course, that I look for an ad hoc theory of the brain to support my own theory of resonance. On the contrary, the need for a specific kind of CNS follows from the analysis of embodiment provided in the first two Chapters and the Interlude. An understanding of the CNS compatible with a theory of resonance relevant for embodiment and ecological psychology must meet two requirements: anti-computationalism and de-localization of cognitive function. I propose Michael Anderson’s (2014) version of neural reuse as the principle for the functional organization of the brain that provides that kind of understanding. By the end of Chapter 3, and thanks to the inclusion of neural reuse as a framework, I offer a concrete definition and a general model of resonance. We have been fully introduced to the main character of the story.

In Chapter 4, the story goes on by analyzing the plausibility of resonance within the sciences of the mind. In doing so I have two specific aims. First, to show that the model of resonance provided in Chapter 3 meets both the biological and the explanatory requirements of the cognitive sciences. To achieve this aim, I review different models and studies that are either compatible with the idea and the model of resonance developed in this story or explicitly based on a process of resonance. And second, to evaluate different possibilities for studying resonance in the sciences of the mind. The models reviewed not only demonstrate the plausibility of resonance but are also instances of possible mechanisms for resonance already in place in experimental psychology and neuroscience.

The last chapter of the story of resonance, Chapter 5, reconciles my theory of resonance with contemporary research in ecological psychology. Of course, resonance is a character that appears in the story of embodiment through ecological psychology, so at some general level reconciliation is not needed. However, at a more concrete level, resonance has gained its own personality through this story and must be integrated again within ecological research. Not in Gibson’s research, but in contemporary ecological psychology. To do so, I take Warren’s behavioral dynamics (2006) as a basis and describe a way of including resonance within it. After that, I analyze how fractal analysis (Van Orden et al. 2003) and coordination dynamics (Tognoli and Kelso 2007) provide coherence to the whole system.

Finally, in the Coda I summarize the theory of resonance as depicted in the five Chapters and the Interlude. Also, I briefly consider three directions for future research: further integration of resonance, neural reuse, behavioral dynamics, and coordination dynamics; the development of a simulation of a minimal cognitive agent—see Beer (2003)—with resonant properties; and the application of resonance to “real” cognition such as remembering or imagining. And at that point, this story of resonance ends. For the time being.


Chapter 1: Philosophical Maneuvers in Embodiment3

At first glance, it seems that radically different approaches to scientific explanation are in vogue at different times.4 For example, before Gibson’s The Senses Considered as Perceptual Systems (1966), embodied approaches to cognitive scientific explanation were scarce. Since then, however, they have grown in popularity and are currently a mainstay of the cognitive sciences. I argue that these approaches are not as novel or distinctly contemporary as they might first appear. In fact, the origins of embodiment trace back to the scientific revolution. I claim that the process of embodiment drove the creation of modern physics and modern science more generally. In other words, I show that embodiment is not a feature of cognitive science alone, but a philosophical maneuver—a strategy for constructing scientific explanations—that is central to the history of science. Embodiment lies at the heart of what we understand by science.

From this perspective, The Senses Considered as Perceptual Systems was both the impetus for contemporary embodied cognitive science and the culmination of a centuries-long embodiment movement. To illustrate this point, in this chapter I first analyze different types of embodiment in the scientific revolution, paying special attention to the main motivation for early-modern embodiment approaches, namely, anti-Aristotelianism. Second, I explain why embodied approaches did not become central to the sciences of the mind until the latter half of the 20th century. And finally, I show how embodiment finds its role in psychology through Gibson’s work, and how previous considerations on the nature of embodiment can be found in that work. In this chapter, I will pay special attention to the philosophical maneuvers carried out by Gibson in the development of ecological psychology. The place and significance of ecological psychology within the so-called embodied approaches to cognitive science will be addressed in Chapter 2.

3 A version of this chapter already appeared as an article in Ecological Psychology (see Raja, Biener, and Chemero 2017).

4 This is also a familiar scholarly opinion; see Kuhn (1970).

1.1 Embodiment

Embodied cognition (see Clark 1997, Shapiro 2014) is one of the central components of contemporary cognitive science. Put simply, it is the idea that in order to understand cognitive processes we must understand them as features of the whole body (including the brain) and its relations with the environment (i.e., the body embedded in a given environment, the body using different tools in a given environment, and so on). Although the details of the conception may differ across practitioners, the specification of the whole body, or the whole-body-in-an-environment, as the correct unit of analysis is common (for a review of diverse conceptions of embodied cognition, see Calvo and Gomila 2008). This basic idea is not an achievement of cognitive science alone. Embodiment played a central role in the very foundation of modern science (particularly physics) in the 16th and 17th centuries.

Although different, overlapping types of embodiment can be found in the rise and development of early modern science, two are particularly important to us: embodiment in physico-mathematics, particularly optics, and embodiment as an anti-Aristotelian explanatory strategy. Between them, they capture some of the most salient features of contemporary approaches to embodiment. Although anti-Aristotelianism is the focus of this chapter, it is worth briefly considering physico-mathematical embodiment for two reasons. First, these types of embodiment are interrelated. Second, anti-Aristotelian approaches were multifaceted and general. Physico-mathematical embodiment provides a specific example through which we can discover the salient features of more general anti-Aristotelian approaches.

1.1.1 Physico-Mathematical Embodiment

Johannes Kepler was an influential proponent of physico-mathematical embodiment. In his Astronomiae pars Optica of 1604, he made clear that the physical organ of vision—the eye—could not be considered independently of the remainder of the human body, and that the human body could not be considered independently of its environment. He wrote:

[T]he eyes are attached to the head, so, through the head, they are attached to the body; through the body, to the ship or the house, or to the entire region and its perceptible horizon (p. 336).

The resemblance between this idea and some proposals concerning visual perception in the work of Gibson (1966, 1979) is surprising.5 In Kepler's case, however, the assertion is also significant because it promoted the idea that the human body is an instrument that can be studied for its reliability, bounds of accuracy, etc. Studying the human body thus became intertwined with a general justification of the scientific reliability of optical (and other) instruments and observations (Gal and Chen-Morris, 2013). It was a step towards the naturalization of mind.

Kepler also championed the mathematical study of light and vision. He revolutionized the field of optics by turning it “into a mathematical-physical study of the production of images by light” (Gal and Chen-Morris 2013: 35). Before him, optics was closely related to vision, but the study of perception, the study of the physical properties of light, and the study of geometrical optics were mostly separate. Perceived images, although carried by light, were thought to be objects in their own right. These so-called “species” or “ideas” were likenesses of the perceived object which were transported (more or less) by the medium of light, but which carried information that was not exhausted by the light that transported them. This mismatch meant that one could not understand perceived images simply by studying the light that generated them.

5 I address this parallelism in depth, along with its implications and significance for contemporary embodied approaches in cognitive science, in Chapter 2, section 2.3.

Kepler, however, championed the idea that studying light is a way to study perceived images. Because the study of the former had been a mathematical activity since antiquity (at least partly; see Lindberg 1978), Kepler’s position implied that even perceived images could be studied mathematically and, furthermore, relatively independently of the biological underpinnings of the process of vision. In a sense, the eye became one more surface that light could influence.

Kepler was followed (and eventually overshadowed) by René Descartes, who exemplified most clearly the idea of physico-mathematical embodiment. According to Descartes, mathematics could be used to explain all natural phenomena. Before him, mathematics was thought to be the science of abstract or otherwise perfect entities (e.g., celestial bodies, which were thought to be constituted by the incorruptible fifth element, the quintessence). But Descartes believed that mathematics held the key to explaining all physical phenomena, including the changing, natural bodies of the sublunary realm. In other words, in his hands mathematics became the main tool of natural philosophy.

Why this approach to mathematics counts as embodiment depends on Descartes’s metaphysical justification for the usefulness of mathematics. For him, bodies were simply “the objects of geometry made real” (Garber 1992: 63). They were essentially and wholly “extension”, and “extension” was the mathematical concept used to describe the nature of space and continuous magnitude. Descartes’s new metaphysics was a striking departure from scholastic-Aristotelian metaphysics. For Aristotelians, bodies (and “substances” more generally) were compounds of matter and form. Matter was the stuff from which substances were made. Form was the essence of substances; it gave substances their unity and identity and made them the kind of substances they were. Although some believed that matter could be studied mathematically, form—the metaphysically primary component of any being—was not susceptible to mathematical analysis. Extension, in contrast, was (and is) a mathematical concept. Descartes’s reduction of all physical substances to extension meant that mathematics was not only useful for understanding the physical world, it was the only tool by which we could understand the physical world. This mathematization further entailed a plenist view of the universe: since extension and matter were identical, the concept of empty space was a contradiction in terms. Without empty space, however, the movement of any one bit of matter necessitated a compensatory movement in some other bit. In other words, there could be no motion that would not elicit changes in its environment, and no bit of matter that could be completely isolated from changes in its environment.

Descartes also believed that all physical processes arose from the collision of bits of matter. The collision model had an important consequence: it suggested that in order to understand physical change, one had to study two or more bodies. In this way, the model, like the plenist view of the universe, suggested that in physics the appropriate unit of analysis was the object-in-its-environment. And because this analysis had to be rendered in the language of mathematics, the models further suggested that physical analysis concerned the mathematics of the object-in-its-environment. This is what I mean by “physico-mathematical” embodiment. It is the early-modern push to turn physics into a system of mathematical relations that governed the interactions between bodies. Since the mid-20th century, this push has often been characterized as a “mathematization” or a “mechanization” of natural philosophy (e.g., Dijksterhuis 1961). These are important labels. However, I am pointing out that mathematization and mechanization brought with them a contextualism/situationalism that has been under-appreciated, one that is central to embodied approaches.

1.1.2 Embodiment as Anti-Aristotelianism (and its Limits)

Physico-mathematical embodiment is a species of a more general type of embodiment prevalent in early-modernity: embodiment as a rejection of the scholastic-Aristotelian approach to scientific explanation. In scholastic-Aristotelianism, natural change was primarily explained by means of its end states, what Aristotelians called “final” or “telic” causes. At first glance, some processes might appear to have external end-states for Aristotelians: for example, a falling stone seeks its proper place at the center of the universe, a location that is independent of features of the stone. But this incorrectly locates the engine of Aristotelian change. In falling, the stone does not seek to attain an external state, but acts according to its form in a way that seeks to actualize the potential(s) inherent in it. Take also the case of a human being: a human being cannot act towards the good without a host of external circumstances that allow him/her to do so. Those circumstances determine what actions are possible, as well as their moral valence. However, acting towards the good is the actualization of a potential for virtue inherent in the human, a potential whose possession is independent of the situations in which it can be actualized. For both the stone and the human, what drives change is the expression of internal form, a constitutive component of both substances.

Embodiment as anti-Aristotelianism consists in ceasing to appeal to internal teleological states as explanations. As we saw in the case of physico-mathematical embodiment, it consists in trying to explain natural behavior in terms of bodies as wholes and their interactions with other bodies. Following Gal and Chen-Morris (2013), we can see this rejection of teleology—with the rejection of intentionality being a species thereof—in Kepler’s reformulation of optics, where images are causal effects produced by light bouncing off an object and falling on a surface, but where no forms or visual rays are invoked (p. 39). The key idea is that by changing the subject matter of optics, Kepler is able to avoid explanations of optical phenomena in terms of forms (“species” or “ideas”) which travel from the object of vision to the organ of vision. This opens optical phenomena to explanation in mathematical/physical terms—i.e., without needing to appeal to a telos which guides the phenomena in some specific way. Similarly, once Descartes rejects forms in his metaphysics, he does away with teleological or intentional features of natural bodies. To see the connection between teleology, intentionality, and forms, we must consider the relation of forms to the scholastic-Aristotelian account of scientific explanation.

As I noted above, in Aristotelian metaphysics bodies (and “substances” more generally) are compounds of matter and form, but form is of primary importance. Form constitutes the “essence” of being, and this essence determines what any particular thing is, what it does, and what it is for (see Matthen 2009).6 Form thus determines telos (i.e., goal, intention, function, etc.), and an appeal to form is necessarily an appeal to teleology. See, for example, how it works for development:

The formal cause of generation is the definition of the animal’s substantial being, while the final cause is the adult form, which is the goal of the process of development… Aristotle tells us that these two causes refer to the same thing. This is plain enough, since the form specified in the account of an animal’s substantial being is also the telos of its natural development (Henry 2009: 379).

6 I am giving a canonical interpretation of Aristotle. There are, of course, some divergent opinions concerning Aristotelian essences. The view I outline here is the consensus on this point. See Cohen (2009) and Lewis (2009) for further analysis on the topic.

In this case, the form of the animal determines both what the animal is and also what the final state of the animal is. Consequently, for scientific explanations to capture the world as it really is, they must appeal to forms—internal, intrinsic features of substances. Since forms determine teleology, scientific explanations must also be teleological.7 Such explanations were almost always qualitative.

According to Garber (1992), this internal, qualitative kind of explanation was Descartes’ target in natural philosophy. He thought of forms as “little minds attached to bodies, causing the behavior characteristic of different sorts of substances” (p. 287). This was problematic because in his metaphysics there was an unbridgeable gap between material things (res extensa) and thinking things (res cogitans). Little minds could not be attached to bodies, since minds and bodies were essentially different. Thus, Descartes rejected internal, qualitative explanation, and with it (almost) the entirety of Aristotelian natural philosophy. I say “almost” because Descartes allowed for one exception: human beings.

To see the significance of Descartes’s treatment of human beings, we must step back a bit. I have contended that embodiment—the rejection of internal teleological states and the promotion of mathematical, relational explanation—lies at the root of early modern physics. Given the historical significance of early modern physics, embodiment is thus one of the main concepts that constitute modern science as we understand it today.8 However, if this is so, why is embodiment a recent phenomenon in cognitive science? What is special about cognitive science and its subject matter that made it seem dis-embodied until the last few decades? The story has to do, again, with Descartes. He is implicated in both the embodiment of modern physics and the dis-embodiment of the sciences of the mind.

7 Again, some objections could be raised to this claim by noting that Aristotle himself proposes four causes (material, efficient, formal, and final) that must all be addressed to give a full account of any phenomenon. However, final causes are privileged within his schema. See Matthen (2009) and Lennox (2009).

8 What happened in physics happened later in other sciences: appeal to whole bodies rather than inner-body entities became increasingly common. Perhaps molecular biology and genetics still hold a different status in this sense—i.e., they still appeal to genes (and not whole organisms) as the cause and agent of evolution and development. Nevertheless, there are dissenting voices (e.g., see Lewontin 2000, Oyama et al. 2001).

My story, as before, starts with Kepler in his Astronomiae pars Optica (1604). Although Kepler changed the subject matter of optics, he left vision itself out of the equation, i.e., he left the mental field, the psychological processes, out of the range of mathematical or physical analyses of the images produced by light:

How this image or picture is joined together with the visual spirits that reside in the retina and in the nerve, and whether it is arraigned within by the spirits into the caverns of the cerebrum to the tribunal of the soul or of the visual faculty; whether the visual faculty, like a magistrate, given by the soul, descending from the headquarters of the cerebrum outside to the visual nerve itself and the retina, as to lower courts, might go forth to meet this image—this, I say, I leave to the natural philosophers to argue about (p. 180).

Thus, whatever vision is—which remains a mystery according to Kepler—it is not to be explained by optics. Descartes’s case is similar. In a sense, he generalizes Kepler’s separation of perception from the physical processes leading to it. This generalization is made possible by his underlying metaphysics. Because Descartes separates what is into two mutually exclusive categories (res extensa and res cogitans), he “invents the eye of the mind, modeled on but completely independent from the eye of the flesh” (Gal and Chen-Morris 2013: 65). While he promotes embodiment in the material realm, he holds that embodiment is impossible in the mental realm. The same Cartesian metaphysics that made possible the shift towards embodiment in the physical sciences also blocks its application to the sciences of the mind.

Until the last two decades, the framework within cognitive science did not differ much from the Cartesian one—famously, Jerry Fodor (1987) claimed that the only difference between his proposal and Descartes’s was the computer metaphor. It was not until ecological psychology and enactivism came around that embodiment gained importance in the field. Cognitive science before the embodied turn, of course, did not use the vocabulary of “forms” or “ideas”, but representations played similar roles. Representations are entities within bodies, which are not bodies themselves, and to which cognitive scientists appeal in order to explain the behaviors of bodies. After the embodied turn—at least in the radical embodied approaches (see Chemero 2009, Hutto and Myin 2013, or O’Regan and Noë 2001)—the explanation of behavior ceases to appeal to representations and begins to appeal to lawful or law-like relations between bodies and their environments. This turn started, I contend, with Gibson’s The Senses Considered as Perceptual Systems.

1.2 Gibson

The main tenets of ecological psychology, arguably the first proposal of embodiment in psychology and cognitive science, were developed by J. J. Gibson during the late 1950s and early 1960s. During these years, as he noted in the preface of The Senses Considered as Perceptual Systems, he had to write his seminal book twice. The difficulty was that in order to offer a coherent theory of perception, he had to integrate new empirical evidence on the anatomy and physiology of the senses with his studies of 17th- and 18th-century perceptual theories. The resulting theory questioned the assumptions of the computational approach that was coming into focus at the time and that is still held by the majority of cognitive scientists. This questioning was a direct consequence, I claim, of Gibson’s embodied approach: while the computational approach to cognitive science is an example of dis-embodiment or, in the terminology used in the previous section, a facet of Aristotelian explanation, the Gibsonian approach represents the anti-Aristotelianism that was applied to physics in the 17th century. Gibson applies this anti-Aristotelianism to the sciences of the mind, more specifically, to perception.

In what follows, I will analyze the philosophical maneuvers that Gibson employs. These consist in a re-description of the environment, the organism, and their interactions in terminology suitable for a non-teleological, non-intentional, full-fledged mathematical theory of perception. These maneuvers are parallel to those of Descartes described above, i.e., his description of the categories of res extensa and res cogitans. However, before describing the content of The Senses Considered as Perceptual Systems, I need to say a few words about the book’s form. In a book on perception, one expects to find chapters on anatomy, physiology, the brain, etc. Gibson’s book starts with a chapter on the environment. Moreover, when at the beginning of its second half it focuses on visual perception, there is another chapter on the environment. So, Gibson spends an important part of the book talking about the environment (this is even clearer in Gibson 1979). Far from being a mere curiosity, these chapters reflect the aim of making the environment an integral part of perceptual theory. In other words, the environment is as important a part of the perceptual phenomenon as the organism. Thus, from the very structure of Gibson’s book we can see that the focus is moving away from intra-organismic considerations toward incorporating an embodied-embedded twist. This movement is clearer in the details of Gibson’s philosophical maneuvers. I now turn to those.


1.2.1 The Environment: Ecological optics

In what I have characterized as a movement toward embodiment in optics, Kepler developed a fully physical-mathematical model in which optics is understood in terms of the interaction of light and surfaces, and in which the phenomenon of vision has no place. In 1966, Gibson develops a new understanding of optics that makes possible the study of vision in the same physical-mathematical fashion. Gibson’s ecological optics is the first philosophical maneuver towards embodiment in vision.

Ecological optics is concerned with ambient light instead of radiant light, i.e., with the structure of the light in the environment and not with the sources of light:

The only terrestrial surfaces on which light falls exclusively from the sun are planes that face the sun’s rays at a given time of the day. Other surfaces may be partly or wholly illuminated by light but not exposed to the sun. They receive diffused light from the sky and reflected light from other surfaces. A “ceiling,” for example, is illuminated wholly by reflected light. Terrestrial airspaces are thus “filled” with light; they contain a flux of interlocking reflected rays in all directions at all points. This dense reverberating network of rays is an important but neglected fact of optics, to which we will refer in elaborating what may be called ecological optics. (Gibson 1966: 12).

The main idea is, then, that the ambient illumination does not depend solely on the influence of the radiant body (e.g., the sun or a light bulb) on a given surface (e.g., the top surface of a table or the lateral surface of a building), but on the light source plus the set of reflections and refractions the light undergoes as it interacts with all the surfaces of the environment. Each point of the ambient light array is differently illuminated depending on all these factors. Such a phenomenon generates a light structure that is different for any position at which the organism is situated within the environment, and that is specificational regarding the layout of the surfaces in that environment—i.e., the structures of light at each point specify the layout of the environment. In the case of vision, the environmental information needed for perception comes from these structures: “the environment consists of opportunities for perception, of available information, of potential stimuli” (Gibson 1966: 23), and information is available thanks to the structures in the ambient light. The stimulus for perception depends on this ambient light and not on radiant light, i.e., we need ecological optics and not physical optics to give an account of vision. According to Gibson, this is a neglected fact in the psychological sciences, and the main reason for their lack of progress:

The ecology of stimulation, as a basis for the behavioral sciences and psychology, is an undeveloped discipline. These sciences have had to depend on the physics of stimulation in a narrow sense. I believe that this situation has led to serious misunderstandings (Gibson 1966: 21).

Any theory of perception based solely on Kepler’s optics (physical optics) will fail to account for the nature of perception. So, a re-description of the behavior of light in the environment, i.e., a new kind of optics, is needed for psychology and the behavioral sciences. Gibson’s re-description of the role of environmental light and environmental information in visual perception unlocks the possibility of the embodiment of vision. In both Kepler’s and Descartes’ systems, vision was not part of the study of optics because physical optics was not able to describe the process of vision under the assumptions of the model. This is one of the reasons why vision remained disembodied. Now, however, the ecological re-description of the ambient light allows for an embodiment of vision in which the interaction of organisms with the structured ambient light is the key to explaining the phenomenon and is also suitable for a mathematical model. So, the philosophical maneuver carried out by ecological optics enables the development of an embodied-embedded theory of vision.

1.2.2 The organism: Perceptual systems

Ecological information—of which ecological optics is an example—represents just one step towards embodiment in psychology and the cognitive sciences. One might accept the relevance of ecological information for perception but still think that the way in which organisms deal with it has to do with processing, computation, or some other kind of teleological inner state (see Marr 1982: 29 & ff.). For example, one might claim that ecological information is represented in discrete inner entities (representations) within the mind/brain of the perceiving organism and that perception is explained by appealing to these inner entities. In this case, the explanation would be completely disembodied (i.e., Aristotelian; appealing to “little minds within the bodies”—as Aristotle appealed to forms—to explain bodies’ behavior), even if ecological information were part of the equation. Thus, Gibson’s philosophical maneuvers towards an embodied psychology needed a further step, one that defined organisms as embodied, that is, as complete bodies interacting with other bodies and their environment, and that blocked the possibility of appealing to inner/mental states as the cause or the explanation of that interaction. He accomplishes this by considering the senses as perceptual systems:

We shall have to conceive the external senses in a new way, as active rather than passive, as systems rather than channels, and as interrelated rather than mutually exclusive. If they function to pick up information, not simply to arouse sensations, this function should be denoted by a different term. They will be here called perceptual systems (Gibson 1966: 47).


We can find two re-descriptions of the senses in this quote. First, senses are systems and not individual organs. The visual system, for example, is constituted by moving eyes situated in a moving head, situated on a moving body, and also by the brain, by the optic nerve, by the extraocular muscles, etc. Put simply, the whole body constitutes the visual system—the exploratory skills of the organism, for instance, depend on the whole body. And second, perceptual systems are active and not passive. They are not just receiving stimuli, but actively exploring the environment and picking up information. They are open and directed towards the environment. This fact allows for a different understanding of the organs implicated in the system, taking the whole body and its interactions with the environmental information as the correct level of analysis of perception: perception becomes a lawful relation between ecological information and the perceptual system while the organism is acting in its environment. No more inner-states in charge of perception. No more Aristotelianism. Under the Gibsonian paradigm, to understand perception is to understand the organism-environment relation, to understand how the whole organism picks up the ecological information available in its environment, and to understand that the embodied scale is the ecological scale.

By the ecological reformulation, Gibson achieves an embodied approach to psychology that follows the process initiated by Kepler and Descartes in optics and physics during the seventeenth century. As with his predecessors, Gibson needed to re-describe the components of the phenomena—in philosophical terminology, he needed to develop new metaphysical foundations for perception—in order to be able to offer an explanation in terms of bodies and their interactions. Just as within the Cartesian paradigm (in physics) we do not find form and matter anymore, within the Gibsonian paradigm we do not find an outer environment modelled by organisms in a series of teleological inner entities that are internally manipulated. In both cases we just find bodies interacting with other bodies and their environments, and rules and laws to describe these interactions. In the case of Gibson’s proposal, this is possible thanks to a new conception of environmental information and the perceptually skilled organism. An organism, a body, taken as a whole, picks up specific environmental information. This is what makes the ecological approach an embodied approach. A question still remains, though: What mathematical tools do we have to quantify the interaction between bodies and environments?

1.2.3 Perception-action loops & dynamic systems

The embodiment movement in cognitive science, initiated by Gibson’s ecological psychology, is a dramatic conceptual shift, just as was the embodiment movement in physics initiated by Descartes and Kepler. However, the shift would be sterile if it remained merely conceptual and were not accompanied by a substantial difference in the methodologies and tools used by these sciences. The methodological difference in early modern physics was deep: the shift from qualitative to quantitative explanations. This is clear when Isaac Newton famously states in the General Scholium of his Philosophiæ Naturalis Principia Mathematica (1687)9 that he does not frame hypotheses and that “to us it is enough, that gravity does exist, and acts according to the laws which we have explained” (p. 392). This is what has sometimes been called “saving the phenomena”. With this, Newton expresses that there is no need for postulating anything beyond the existence of a phenomenon, or for pursuing further qualitative explanations, once we have the power to explain and predict the motion and interactions of bodies. In other words, once you know the dynamics10 of the system (i.e., how the system changes over time) you have an explanation of the phenomenon.11 So, I contend that dynamics (motion, change over time) becomes the central concept for physical explanation after the embodied twist and that explanations are implemented by dynamic laws.12

9 This quote comes from Motte’s translation (1968)—first published in 1729. 10 Notice that I am using “dynamics” in the sense of change. Classically, Leibniz attributed the term “dynamics” to Newton’s theory because it was a theory based on forces (δύναμις). These two senses are different although closely related (for the definition of motion—changes in the system—in terms of forces, see Gal and Chen-Morris 2013: 187 & ff.). In any case, I am always going to use “dynamics” in the first sense. 11 Some may object that Newton does not offer “explanations”, a position endorsed by some contemporary scholars. A full discussion of this topic is beyond my scope, but I note that, for the historical Newton, the laws of motion and the law of gravity clearly offered “explanations”. In the Rules for the Study of Natural Philosophy that precede the third book of the Principia (the book in which Newton deduces universal gravitation), Newton explicitly notes that he only employs causes that are “true and sufficient to explain their phenomena” (Newton 1999, emphasis added). This is not to say that Newton thought he could explain gravity in itself, but that is a wholly different issue. 12 This topic deserves a whole paper of its own. However, for reasons of space, I am not going to address it in depth. If the brief discussion offered here is not persuasive, the reader can take it as one of my assumptions until a further elaboration is developed elsewhere.

The centrality of motion is clear both in Kepler (Gal and Chen-Morris 2013: 135-137) and in Descartes. For the latter, motion has both metaphysical and explanatory roles. Motion is the principle of individuation for bodies since, without it, the universe would be an undifferentiated lump of extension. Similarly, motion determines the shape and size of bodies and, therefore, is the central explanatory principle in Cartesian physics. The French philosopher claims that “all variation in matter, that is, all the diversity of its forms depends on motion” (Principles Part II, art. 23; quoted in Garber 1992: 307; see also Garber 1992: 303 & ff., Stein 2004: 257). However, it is in Newton’s works that the study of dynamics and motion, although with some significant differences from the Cartesian concept (DiSalle 2004: 37), achieves a completely distinctive role in physics: a new mathematical tool, calculus, is developed for the study of change (for a deeper study, see Brackenridge and Nauenberg 2004, Guicciardini 2004).

Calculus represents the mathematization of the underlying intuitions of embodiment. Namely, a specific tool was required to give a correct account of the interactions between bodies over time. For example, to explain how quickly one body changes its place—its velocity—we use calculus (velocity is the first derivative with respect to time of the space an object travels). It is the change of the position of a body with respect to another body (e.g., the Earth or a field) over some temporal interval. The mathematical description of the behavior of bodies in these terms, i.e., in terms of change over time, takes its modern form with the new tool of calculus.
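To make the point concrete, the basic kinematic definitions can be written in one line. This is simply the textbook calculus behind the claim above, in standard notation, and nothing specific to the historical argument:

$$v(t) = \frac{dx}{dt}, \qquad a(t) = \frac{dv}{dt} = \frac{d^{2}x}{dt^{2}},$$

where $x(t)$ is the position of a body relative to some other body or frame, $v(t)$ its velocity, and $a(t)$ its acceleration. Interaction laws such as Newton’s second law then constrain these rates of change directly.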

Thus, embodiment is not just a theoretical change; it also entails a methodological change.

Embodied cognitive science, initially via ecological psychology, introduces some new mathematical tools as well.13 Dynamic systems theory (DST) is one of them (Kugler, Kelso, and Turvey 1980). The central idea is that perception-action loops constitute the correct unit of analysis of the perceptual phenomenon, and these perception-action loops are best explained using calculus. Perception-action loops are interactions between organisms and their environments. For example, a human is in a perception-action loop when she is searching for her eyeglasses. As she moves through the environment, her perception changes, which changes her actions, which changes her perception, and so on. In other words, perception constrains action while action constrains perception; they are two parts of the same loop. So, there is an ongoing perception-action loop when a given organism exhibits eyeglass-finding behavior. The primary explanatory aim of ecological psychology, in particular, and of an embodied cognitive science, in general, is to make explicit the underlying mathematical laws that regulate these kinds of perception-action loops. Dynamic systems theory is one of the mathematical tools used to reach this aim, and the laws it yields typically take the form of differential equations. In this sense, the use of DST as a methodology is just the use of calculus in cognitive science.
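Schematically, and only as an illustration of the general form such laws take (the symbols here are generic placeholders, not a model from the literature), a DST-style explanation writes the behavior of the agent-environment system as

$$\dot{x} = f(x; I),$$

where $x$ is a collective variable describing the ongoing agent-environment interaction (e.g., a relative phase or a distance to a target), $I$ is an informational variable detected in that interaction, and $f$ specifies the law of change. Qualitative shifts in behavior then correspond to changes in the stable solutions (attractors) of $f$ as $I$ varies.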

13 I picked DST as an example, but other tools, close or not to DST, might be chosen as well—e.g., matrix calculus to measure the inertia tensor (see Shockley, Carello, and Turvey 2004, Turvey et al. 1992).

Beyond the terminological resemblance to Newtonian dynamics, DST plays the same role that calculus played in early modern physics: it gives cognitive science the possibility of deploying a mathematical study of the interactions between bodies. Just as Newton’s universal gravitation law explains an interaction (force) between two bodies without appealing to any inner entity of those bodies, the HKB model (see Kelso 1995) explains the interaction between two bodies engaging in rhythmic behavior—like walking, clapping, rocking in rocking chairs, etc. In both cases, the introduction of a new mathematical tool allows for the explanation of the dynamics of the whole system with no reference to substantial forms, in the case of early modern physics, or representations, in the case of cognitive science. In both cases, calculus is the methodological face of embodiment, and the intent is to replace the explanatory structures that came with the prior conceptualization of the phenomenon.
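For reference, the deterministic core of the HKB model can be stated in one equation (this is its standard textbook form; I leave aside the stochastic and symmetry-breaking extensions discussed in Kelso 1995):

$$\dot{\phi} = -a\sin\phi - 2b\sin 2\phi,$$

where $\phi$ is the relative phase between the two rhythmically moving components and $a$ and $b$ are parameters. When $b/a$ is large enough, both in-phase ($\phi = 0$) and anti-phase ($\phi = \pi$) coordination are stable; as movement frequency increases and $b/a$ shrinks, the anti-phase pattern loses stability and only in-phase coordination remains. The explanation runs entirely at the level of the coupled system, with no appeal to inner representations.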

Embodiment in cognitive science, as we said at the beginning of this section, boils down to taking bodies and their relations with other bodies and their environments (i.e., perception-action loops) as the correct unit of analysis in order to explain cognitive behaviors. This is exactly the same move Descartes led in early modern physics, but one that he blocked in the sciences of the mind. Cognitive science, centuries later, is going against both Aristotle and Descartes at the same time and is using new mathematical tools—derived from Newton’s calculus—to do so. The embodied movement in cognitive science, started by Gibson and his philosophical maneuvers, is the same one that happened during the 17th century in physics. Thus, embodiment is not a feature of the cognitive sciences alone, but a philosophical maneuver at the very base of the sciences as we understand them nowadays.

1.3 Gibson, Mechanism, and Aristotelianism

Before concluding, I must acknowledge that I am disagreeing with the analyses of several Gibsonian psychologists, such as Lombardo (1987), Costall (1995), and Reed (1996). First, for these scholars, my lumping of Gibson with, say, Newton and Kepler is inappropriate because Newton and Kepler, unlike Gibson, are mechanists. Second, their positions suggest that it is inappropriate to attribute anti-Aristotelian moves to Gibson because Gibson is an Aristotelian. I agree that Newton and Kepler are, in many ways, mechanists and that Gibson is, in many ways, an Aristotelian. However, the way in which Newton and Kepler are mechanists is not in conflict with the ways in which Gibson is an Aristotelian (i.e., in his views on the life sciences [Lombardo 1987]). Let me extend these points a bit.

Some well-known ecological psychologists (e.g., Costall 1995, Reed 1996) claim that the Gibsonian approach to psychology is openly anti-mechanistic and, thus, anti-modern. The foundation for this claim is that a mechanistic ontology is incompatible with some crucial features of mental phenomena. In other words, while mechanistic principles can account for strictly physical phenomena, they are unhelpful in accounting for the mental. This inability of mechanist ontology to provide an explanation for mental phenomena prompted scholars like Lombardo (1987) to try to ground ecological psychology in pre-modern, Aristotelian concepts (e.g., functions, teleology).

However, we have tried to show (albeit briefly) that there is nevertheless a deep similarity between the philosophical maneuvers carried out by Gibson and those carried out by early modern scientists. The similarity has gone unnoticed because the mechanization of the world picture—a historiographical thesis that is more than a half century old—has been largely misunderstood and certainly exaggerated.14 For example, some scholars cite the idea that early moderns saw the universe as a great big clock, and so conceived of animals (not to mention inanimate objects) merely as assemblages of parts, not unified wholes. But the clock metaphor is more complex. It certainly suggested that both animals and inanimate objects were composed of ontic parts, but it also suggested that those parts could not be understood independently of the wholes of which they were a part. Robert Boyle, for example, who coined the term mechanical philosophy, and repeatedly invoked the clock metaphor, wrote: “I consider... that the faculties and qualities of things [are] (for the most part) but certain relations, either to another, as between a lock and a key; or to men, as the qualities of external things referred to our bodies, and especially to the organs of sense...” (Boyle 1772: Vol. III, p. 479). The lock-and-key metaphor makes explicit what is only implicit in the clock metaphor: a key does not function as a key (it is not a key) without a lock, and a lock is not a lock without an appropriate key. The mechanistic ontology does entail that the lock/key combination is decomposable into two objects, but it does not entail that the functions of those parts can be understood independently of one another. Only the whole gives meaning to its parts. The same goes for the clock metaphor: a gear is not a ratio-wheel outside a clock; really, it is not even a gear unless it can interact appropriately with some other appropriately shaped object. This approach to explanation, we claim, is common to both Gibson and the early moderns.15 Embodiment is an explanatory shift, an appeal to relations between bodies and environments as the bases of explanation. It is shared by Gibson and early modern mechanists and, in early modernity, was a distinctly anti-Aristotelian explanatory strategy.

14 See Cohen (1994), Chapters 1 and 2.

There are other features of the clock metaphor that are not apt for Gibson: e.g., the stress on blind necessity and the underdetermination of the clock mechanism by the time-keeping function. However, I do not claim that Gibson is similar to the early moderns in all ways, only in the stress on embodiment as we defined it. On my view, the tensions between Gibson and early modern mechanists highlighted by Reed and Costall can be traced to various features of the two belief systems (e.g., the appeal to teleology or an ontology of exclusively geometrical properties) but not to their shared emphasis on understanding objects in their environments. Although Gibson holds some Aristotelian ideas regarding the life sciences (Lombardo 1987), this says relatively little about the concept of embodiment defined here. Embodiment is the rejection of explanations in terms of inner teleological states, such as substantial forms. This rejection is what Kepler, Descartes, and Newton instantiated in physics and cosmology. In the sciences of the mind, such a rejection entails the repudiation of mental representations as explanatory entities. Gibson championed this in his 1966 book. Thus, he championed an anti-Aristotelian strategy. This strategy is compatible, however, with other plainly Aristotelian commitments at different levels or regarding different issues (e.g., regarding functional states or a non-reductively-geometrical ontology).

15 Boyle’s sentiment also belies now-dated historiographical statements like: “[Modern science substituted] for our world of quality and sense perception, the world in which we live, and love, and die, another world – the world of quantity, of reified geometry, a world in which, though there is a place for everything, there is no place for man” (Koyré 1965: 23). Boyle clearly thought the world of quality and sense perception is part of the natural world and explicable in similar ways. This point, and the ways in which early moderns dealt with the sciences of life and mind, is explored by the current vanguard of early-modern scholarship (see, e.g., Distelzweig 2013, Manning 2015).

1.4 Further Consequences

As a last remark, it is worth noting one of the main consequences of embodiment in the cognitive sciences: the horizontality of explanation. Once embodiment and the new mathematical tools are applied, any phenomenon is explained in the same terms independently of its nature. The movement of the planets is explained by appealing to a law of interaction, as is the approach of a climbing bean to a climbing pole,16 the steering to a target when navigating in a slightly crowded environment (Fajen and Warren 2003), hummingbird feeding (Delafield-Butt et al. 2010), interpersonal coordination (Schmidt and Richardson 2008), sentence processing (Olmstead et al. 2009), etc. Thus, any behavior (in the broadest sense) of any entity is explained by a law of interaction that appeals to the dynamics of a system of bodies as wholes. The scientific explanation is, thus, horizontal.

16 An unpublished technical report of this phenomenon can be found at: http://www.um.es/documents/2103613/2107123/Technical+report.pdf/ed976b1e-e2f0-47b6-be33-a04ddf283baf

This horizontality was depicted by Turvey (1992) as strategic reductionism. There is no ontological reduction but a reductionism of the means of explanation used in the sciences to give an account of different phenomena. The embodied movement in cognitive science, and especially its use of calculus, enables the explanation of cognitive phenomena in physical terms. Insofar as cognitive science was the last citadel of qualitative explanations, the embodied turn allows for the same kind of explanation in every science, i.e., it allows for the horizontality of explanation. The horizontality of explanation we propose here is different from, but compatible with, van Dijk and Withagen’s horizontal worldview (2014, 2016). Their horizontal worldview is an ontological commitment contrary to the dominant vertical worldview in most sciences. Put simply, according to the vertical worldview, the world has a layered supervening ontology in which building blocks at lower layers constitute entities at higher layers (e.g., atoms constitute molecules, which constitute cells, which constitute organs, and so on). Under this kind of worldview, scientists tend to abstract from the target phenomenon to lower or higher levels to provide an explanation. On the contrary, according to van Dijk and Withagen, adopting a horizontal worldview would allow for explanations focused on the phenomenon at its own scale, where the attention is directed to the interactions at the given scale and not to higher or lower ontological levels. The horizontality based on strategic reductionism (Turvey 1992) I propose here has no specific ontological commitments. I claim that embodiment provides a way to explain phenomena at different scales by using the same tools, methodologies, and strategies. In this sense, explanation is horizontal. Namely, all phenomena, whatever scale they belong to, may be explained by appeal to laws of interaction between the entities that constitute such phenomena at that given scale.

Chapter 2: Ecological Psychology and Resonance

In the previous chapter, I paid special attention to some of Gibson’s philosophical maneuvers during the development of ecological psychology (1966, 1979).17 We have seen that these maneuvers consisted in a re-description of the environment, the organism, and their relation in terms of a new explanatory strategy for the sciences of the mind based on embodiment. We have also seen the historical inheritance of such maneuvers. However, at the time Gibson proposed ecological psychology, he was not consciously trying to follow Kepler’s or Descartes’s insights applied to psychology. On the contrary, he was reacting to a growing contemporary approach to cognition which would eventually become the mainstream in the field: computationalism (or cognitivism).

In this chapter, I address three main topics to set ecological psychology in the context of contemporary cognitive science. First, I analyze the main differences between ecological psychology and computationalism in explanatory terms. In doing so, I pay special attention to several correlates of ecological psychology that will be relevant throughout the whole dissertation: the ecological scale, the ideas of redundancy and complexity, action and interaction, invariants and coordinative structures, and dynamic systems. Second, I focus my attention on the concept of resonance in ecological psychology, which is a direct consequence of the new embodied explanatory strategy and its correlates. And third, after proposing some constraints for the development of an operational concept of resonance, I offer some reasons why the ecological approach to perception and action must be taken as radically anti-computational and, therefore, why resonance should be defined along the same lines.

17 See Chemero (2009) for a contemporary account of ecological psychology.

2.1 The Explanatory Strategy of Ecological Psychology

Generally speaking, the ecological approach to perception and action stands in opposition to computationalism and its related concepts.18 According to ecological psychologists, perception is direct, so no inner processing of information is needed and, therefore, no appeal to computational mechanisms or representations is made. In explanatory terms, all computational approaches to cognitive science share one main feature: psychological explanation consists in accounting for agents’ computational mechanism (Figure 1). Such a mechanism may be thought of as a language-like one (Fodor 1975), as a neural network (Rumelhart and McClelland 1986), or as a Bayesian brain (see Clark 2015, reviewing Friston’s work), for example, but spelling out the mechanism is always the aim of these theories.

Figure 1. Psychological explanation in computational approaches. (‘A’ stands for Agent and ‘E’ for Environment). The interaction between E and A is taken to be an internalization of outer environmental stimulation by A. Such environmental stimulation is then processed by an internal computational mechanism in A that is able to retrieve stored information to enrich what is given in stimulation.

18 It is important to note that I use the wording “computational approach” or “computationalism” in the broadest possible sense. This means that by these terms I refer both to classic computational approaches and to new computational-like ones. If it is more comfortable for the reader, “computational approach” may be understood as “information-processing approach”.

In all cases, the explanation appeals to some kind of internal system able to carry out some kind of computational (e.g., inferential, algorithmic) process that underlies and ultimately enables the psychological event to be explained. Such a computational process may vary in detail, but generally it consists of an input, which is processed serially or in parallel by the application of some set of rules, in discrete steps or in a more continuous fashion, and an output as a result of that processing. An important feature of the mechanism that carries out the computational process is that it is able to retrieve internal information stored in the agent (e.g., memories, knowledge) and to combine it with environmental stimulation to constitute the outcome of the process—in the case of perception, for example, to constitute a representation of the state of affairs in the world.
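Purely as a schematic summary of this picture (the labels are generic placeholders, not the terms of any particular model), the computational explanation can be written as a pipeline:

$$\text{stimulation} \;\xrightarrow{\ \text{encoding}\ }\; \text{input} \;\xrightarrow{\ \text{rules}\,+\,\text{stored knowledge}\ }\; \text{representation} \;\xrightarrow{\quad}\; \text{behavior}.$$

Whatever the format of the rules, the explanatory work is done by the middle step, i.e., by the internal mechanism that enriches the input.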

Unlike computational models, one of the main tenets of ecological psychology is that psychological explanation remains ecological. That is, psychological processes are explained in terms of the lawful or law-like interactions between agents and their environments—what is known as the ecological scale and is directly related to embodiment as it is understood in the previous chapter. Explanation at the ecological scale is one of the main features of ecological psychology and makes it an essentially interactive approach to psychology; an approach that is based on changing relations between agents and environments and not on intrinsic properties of the former. The main reason for this essentially interactive approach to psychology is that Gibson found in action (including agents’ active exploration of the environment) a way to deal with the complexity of environmental information and the redundancy in the abilities an organism can use to face it. There is a potentially infinite set of situations an agent can face in her environment (complexity), and there is a potentially infinite set of solutions to those situations an agent can perform (redundancy). For example, a question could be: how are we able to explain the way we deal with the environmental information we face in crowded environments so as to know which muscles must be activated, and in what specific order, to successfully navigate through these environments without bumping into the obstacles? According to Gibson, through their action, organisms generate optic flow that is informative of the environment and of their own movement, and that serves to prospectively control their behavior (Gibson 1950, Lee 1980). So, what is a highly demanding computational task to explain if we take as input a series of discrete snapshots of retinal images, build up a model of the environment, and apply some computational rules to it in order to generate the adequate behavior as output, might become simpler—at least in terms of the variables needed to build up a model—if we take the results of our actions (e.g., optic flow), and the regularities we can find in them regarding our own actions, as the starting point for the explanation of the subsequent (output) behavior. Under the ecological approach, thus, both the complexity of the environmental information and the redundancy of behavioral resources to accomplish a desired action are reduced by putting them in relation in terms of ongoing actions: they are not two different elements that must be reconciled by means of an internal mechanism, but two sides of the same coin. And that coin is defined in terms of action.

Notice that the ecological way to deal with complexity and redundancy might be simpler than the computational one in a sense, but this fact does not entail that it is easier. Gibson proposes to understand perceptual events in terms of action instead of in terms of computational solutions, and such a move requires a whole new set of concepts and tools to characterize the phenomena of interest. For this reason, ecological psychologists have developed concepts such as invariants and coordinative structures to address these complexity/redundancy issues. Invariants are structural properties of the environmental information and its transformations at the ecological scale—i.e., they are ecological information:

From a psychological point of view, invariants are those high-order patterns of stimulation that underlie perceptual constancies, or, more generally, the persistent properties of the environment that an animal is said to know. From the perspective of ecological physics, invariants come from the lawful relations between objects, places, and events in the environment (part of which is other animals) and the structure or manner of change of patterns of light, sound, skin deformation, joint configuration, and so on. (Michaels and Carello 1981: 40; see also Gibson 1979).

Invariants are high-order patterns because they are stimulation arrangements that remain constant through time and through the transformations provoked in the flux of stimulation by the actions of the agent. In this sense, invariants are ecological information because they point both to the constant features of the environment and to the actions carried out by a given organism. An invariant, for example, is the expansion of any figure in the visual field of an organism as the figure approaches her. A ball, for example, occupies a bigger and bigger part of a given organism’s visual field as it approaches her visual system. Such an expansion is invariant with regard to other properties of the ball (e.g., size, figure, color, mass) and specifies some properties of the relation between the ball itself and the organism it is approaching (e.g., the time it will take for the ball to reach the organism). Moreover, other relations between the expansion of the ball in the visual system of an organism and other elements of her visual field, as far as they are invariant as well, may specify other properties of the organism-ball relation, such as the direction from which the ball is coming or the movement of the organism with regard to the ball. It is important to stress again that such invariants point to both organism and environment; namely, they are properties of the organism-environment relation (or the organism-environment system) as such and not just properties of one of its components. In our example, the time it will take for the ball to reach the organism is a property both of the ball and of the organism: it is the time at which they both will make contact.19
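As a concrete illustration of such an invariant (anticipating the fuller treatment of tau in Chapter 4, and using standard notation rather than anything specific to this dissertation): if $\theta(t)$ is the optical angle subtended by the approaching ball, then

$$\tau(t) = \frac{\theta(t)}{\dot{\theta}(t)}$$

approximates, to a first order and under a roughly constant closing velocity, the time remaining until contact (Lee 1981, 2009). The ratio is defined entirely over the optical expansion pattern, so it carries information about the organism-ball relation without requiring independent measurements of the ball’s size, distance, or speed.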

Coordinative structures are bodily functional structures that may be taken as wholes in terms of behavior, in the best tradition of synergies within Bernstein’s approach to the control of action (see Bernstein 1967, 1996). Synergies or coordinative structures are developed to solve what is known as the coordination problem in motor control, which is a redundancy problem in itself: how can the CNS coordinate all the elements needed to perform a certain behavior during a given task? More specifically: how can the CNS possibly control each of the motor system’s degrees of freedom independently? For example, we might ask how we are able to grasp a glass from a table with one hand. In order to do so, the CNS has to be able to coordinate several components of our arm and their respective degrees of freedom (i.e., their respective ways to behave). In the case of our example, the arm is composed of 3 joints, 26 muscles, and more than 2000 motor units. Our joints have seven degrees of freedom (3 dimensions for the shoulder to move, 1 dimension for the elbow to move, and 3 dimensions for the wrist to move). Muscles have, at least, one degree of freedom each (contracted/non-contracted). And finally, each motor unit has, at least, one degree of freedom as well (active/inactive). This sums up to more than 2000 degrees of freedom for the whole system. The coordination problem is that such a number of degrees of freedom in a system might overwhelm the controller.20
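Just to make the count in the previous paragraph explicit (using the figures given there, which are rough estimates rather than precise anatomical counts):

$$\underbrace{7}_{\text{joint dimensions}} \;+\; \underbrace{26}_{\text{muscles}} \;+\; \underbrace{2000^{+}}_{\text{motor units}} \;>\; 2000 \ \text{degrees of freedom},$$

each of which would have to be specified independently if the controller were to manage the components one by one.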

19 This specific invariant, called tau (Lee 1981, 2009), will be the one most used as an example in the rest of this work—especially in Chapter 4—and its details will be further explained as needed. 20 Such overwhelming of the controller is classically illustrated by an example known as “Charles’ problem” (Meijer 2001). King Charles V was obsessed with making his massive collection of clocks chime in perfect synchrony, but he could not achieve it. He tried to make them chime in perfect synchrony by setting their chime times one by one. His inability to coordinate them illustrates the inability to exert coordinated control by directly manipulating local mechanisms in a complex system. Thus, as Bernstein claimed, coordination must precede control.

The way Bernstein found to solve the coordination problem was to take coordination as preceding control. The CNS does not control each component of the motor system and its respective degrees of freedom, but a functional/structural unit that is already coordinated and that can be controlled as a whole. Such units are synergies or coordinative structures and are already coordinated thanks to the constraints and tacit interactions between their components:

A structural unit [synergy or coordinative structure] is always organized in a task-specific way such that, if an element introduces an error into the common output, other elements change their contributions to minimize the original error, and no corrective action is required of the controller. (Latash et al. 2002: 27)

Coordination does not need to be explicitly controlled by the controller; rather, it is embodied in coordinative structures by freezing degrees of freedom through the mutual constraints the components of the motor system impose on each other (e.g., studies on synergies gained by expert shooters, see Arutyunyan et al. 1968, Scholz et al. 2000; coordinative structures in jaw perturbations, see Kelso et al. 1984). Given this, in our example, in order to control the trajectory followed by the arm-hand coordinative structure during the grasping behavior in the specific task of getting the glass from the table with one hand, the controller does not need to explicitly manage all the components (joints, muscles, and motor units) and their degrees of freedom, but only the coordinative structure that the arm constitutes as a whole to accomplish such a task. This solves the coordination problem, as coordination does not need to be controlled, and so it is a perfect move to deal with redundancy.


The final version of an ecological explanation of perception and action will take the form of a law that connects invariants and coordinative structures, in the fashion of an embodied explanatory strategy as depicted in the previous chapter. This is why, more recently, as we have already seen in the previous chapter, ecological psychologists have drawn on dynamic systems theory (DST) as their preferred explanatory tool for the agent-environment interactions: behavioral regularities and their qualitative changes are expressed in terms of dynamical equations where perceptual information, among other variables, is used as a parameter.21

Given all these transformations and new concepts, the big picture of the ecological explanatory strategy includes, crucially, two new notions Gibson had to introduce for his theory to work. The first of them is ecological information, which I already introduced in the last chapter, and the second one is resonance, which I bring into this discussion in the next section. They are the relevant features of the environment and the agent, respectively, in terms of psychological explanation (Figure 2).

Figure 2. Psychological explanation and ecological psychology. The A-E interaction is not an internalization of E-stimuli anymore. The interaction happens at the ecological scale and so it has to be explained (bidirectional arrow). The activity of A in such an interaction is not the internal computation of E-stimuli, but the resonation to ecological information.

21 More on this in Chapters 4 and 5.

To further highlight the difference between ecological and computational explanations of psychological phenomena, the outfielder problem is a classic example (Todd 1981; McBeath et al. 1995; Shaffer et al. 2004; and Fink et al. 2009): how is an outfielder in a baseball game able to catch a fly ball? How does she know where to run to in order to be in a position to catch it?

From the computational approach to cognition, the explanation of the behavior of the outfielder will involve a series of calculations applied to the trajectory of the fly ball which allow the fielder to predict the place of its landing. Such calculations will be carried out by a computational mechanism that will have 2D retinal images as input and behavior as output. As I have noted above, the details of such a mechanism may vary and be based on language-like rules, parallel processing, or Bayesian inferences, among other possibilities, but the overall computational picture is basically the same.

The ecological explanation, in contrast, appeals to the regularities in the behavior of the outfielder regarding some specific ecological information. In this case, put simply, if the actions of the outfielder consist in moving in order to cancel the lateral motion and vertical acceleration of the fly ball relative to her own movement and in changing her own place so as to maintain a homogenous expansion of the ball in her visual field, she will be in the right place to catch it before it lands. The relations between the relative motions and accelerations (vertical, lateral, and approaching/expansion) of the fly ball in the visual field of the outfielder constitute the ecological information that specifies both the landing place of the ball and the direction in which the fielder must run in order to successfully accomplish the task.22 An ecological explanation of the outfielder problem will consist of making explicit the lawful connection between such information and the behavior of the outfielder.

22 The explanation in ecological terms is, of course, more technical and complicated and involves the ecological variable tau, for example (Lee 2009). I leave these details out of the text to avoid technicalities that, for this specific example and at this specific point, do not add too much to its purpose. See Todd (1981) for a deeper study of those technicalities.
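One common way to formalize the vertical part of the strategy just described—offered here only as an illustration, since the details are deliberately left aside in the text—is in terms of the optical elevation angle $\alpha(t)$ of the ball in the outfielder’s visual field. If the fielder moves so that

$$\frac{d^{2}}{dt^{2}}\big[\tan \alpha(t)\big] = 0,$$

i.e., so that $\tan\alpha$ keeps increasing at a constant rate, the ball’s vertical optical acceleration is cancelled and the fielder ends up where the ball comes down. The regulated quantity is a relational, optical variable, not an internally computed prediction of the landing point.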

In the last five decades, the notion of ecological information has received plenty of attention, for example, in the descriptions of the behavior of light relevant to visual perception in terms of ecological optics (Michaels and Carello 1981). The case of resonance is different. It has received little attention. The reason, put bluntly, is that, insofar as psychological explanation focuses on the agent-environment interaction itself—e.g., the relation between the behavior of the outfielder and ecological information—the details of an agent’s ability to resonate are not crucial for the explanation. Nevertheless, I claim that a theory of resonance is needed to complement the ecological explanation of perception and controlled behavior: without such a theory, ecological psychology fails to offer a true alternative to computational explanations, which count among their virtues the power to generally account for the cognitive states of a system in mechanistic terms. Ecological psychology, I contend, needs an explanation for the system that enables the agent to detect the environmental information and to behave in the way she does, just as it offers an explanation for ecological information, which is the other side of the same coin. A better understanding of resonance will serve this purpose.

2.2 Resonance

Resonance is a widely observed phenomenon in nature and has been described in several fields (e.g., acoustic resonance, mechanical resonance, orbital resonance, optical resonance, electrical resonance, and so on). In simple terms, resonance occurs when a vibrating system forces another system to vibrate at a greater amplitude at some specific frequencies—especially at those related to the latter’s natural frequency.23 A canonical example of resonance might be the body of a violin or a guitar resonating to the vibration of one of its strings and amplifying its sound. By means of this example, it can be seen that resonance is closely related to two main concepts. On the one hand, it is related to amplification. Resonance, in music, is used to amplify the sound of instruments while, at the same time, being part of this very sound. A guitar, for instance, is a guitar and sounds like a guitar because of the sound of the vibrating string but also because of the resonance introduced by its body. On the other hand, resonance is related to coupling. In resonant phenomena, both the vibrating system (the string, in the example) and the resonant system (the body of the instrument, in the example) are coupled at some specific frequency. They vibrate at the same frequency or, at least, at two lawfully related frequencies. This second aspect of resonance is what allows radios to be tuned to specific radio stations by tuned circuits (see Blanchard 1941), and it is what inspired Gibson to use the concept in ecological psychology.
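For readers who want the physics behind the metaphor, the textbook case is the driven damped oscillator (standard notation; nothing here is specific to the ecological literature):

$$m\ddot{x} + c\dot{x} + kx = F_{0}\cos(\omega t), \qquad A(\omega) = \frac{F_{0}}{\sqrt{(k - m\omega^{2})^{2} + (c\omega)^{2}}},$$

where the steady-state amplitude $A(\omega)$ of the driven system peaks when the driving frequency $\omega$ approaches the natural frequency $\omega_{0} = \sqrt{k/m}$ (for light damping). Amplification and frequency-specific coupling, the two aspects highlighted above, are both visible in $A(\omega)$.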

In the very first pages of The Senses Considered as Perceptual Systems, Gibson (1966) proposes for the first time the concept of resonance as a metaphor for the role of the brain and the rest of the body in perceptual events:

Instead of supposing that the brain constructs or computes the objective information from a kaleidoscopic inflow of sensations, we may suppose the orienting of the organs of perception is governed by the brain so that the whole system of input and output resonates to the external information. (1966: 5).24

23 Other definitions might be used (see, for example: https://en.oxforddictionaries.com/definition/resonance), but this one is as basic and as descriptive as I need for the sake of explaining the usage of the concept in ecological psychology. 24 The wording used by Gibson in this quote is worth a closer look. He talks about the “system of input and output” to refer to the system that resonates. This claim might be interpreted in cognitivist terms, i.e., as a system that gathers information from the outside (input) and afterwards performs some behavior (output). This would be an error. Here, the system of input and output must be understood in ecological terms, i.e., in terms of the perception-action loop. The system of input and output refers to the active organism that resonates to environmental information. However, as the perception-action loop is a loop, the concepts of input and output must be understood in a circular fashion: the input might be perception or action, and vice versa. Perception can be the input for action, but action can count as an input as well for detecting some specific ecological information, which would be the output in this case.

In this quote we find, first, a claim against computationalism. And second, an alternative to computation based on resonance, which may be understood in terms of the coupling between the dynamics of the brain-body system and the external information. Gibson uses the concept a second time in The Ecological Approach to Visual Perception (1979) within a discussion of the perception of persistence and change, which, according to him, is at the roots of the understanding of perception itself.25 This time, when he writes about the perception of a persisting thing, he claims:

In the case of a persisting thing, I suggest, the perceptual system simply extracts the invariants from the flowing array; it resonates to the invariant structure or is attuned to it. (Gibson 1979: 249).

In this second quote, the metaphor of the radio is more obvious than in the previous one. The perceptual system is attuned to the invariant structure of environmental information and extracts the invariants from the flowing array just as a radio is attuned to a radio station by extracting the signal of that particular radio station from the whole flowing array of frequencies. There is, thus, no computation involved, just tuning (i.e., detection, extraction, coupling) to the adequate information to control behavior in a given event. Ultimately, what Gibson’s words reflect is the vital role of resonance in the explanatory strategy of ecological psychology: what allows for a lawful relation between behavioral regularities and ecological information is the capacity of the perceptual system, in particular, and the whole organism, in general, to resonate to the invariants of such information.

25 “The perceiving of persistence and change (instead of color, form, space, time, and motion) can be stated in various ways. We can say that the perceiver separates the change from the nonchanged, notices what stays the same and what does not, or sees the continuing identity of things along with the events in which they participate. The question, of course, is how he does so. What is the information for persistence and change? The answer must be of this sort: The perceiver extracts the invariants of structure from the flux of stimulation while still noticing the flux. For the visual system in particular, he tunes in on the invariant structure of the ambient optic array that underlies the changing perspective structure caused by his movements.” (Gibson 1979: 279).

Up to this point, we have seen that, in both of Gibson’s references to resonance, it is characterized as the proper activity of the organism coupled to ecological information (the invariant properties of the environmental information array, in the second quote), and the brain seems to have a chief role in it. However, despite its importance within the ecological framework, the quotes above are some of the few places where resonance as such is referred to in Gibson’s work. Moreover, neither Gibson himself nor Gibsonians give much more information regarding resonance in subsequent works—i.e., resonance remains a metaphor or is consciously ignored (e.g., Turvey et al. 1981; Michaels and Carello 1981; Reed 1997; Chemero 2009). This fact might lead to the conclusion that either resonance is not really important within ecological psychology or that it has been abandoned by ecological psychologists.

Such a position is, to some extent, defended by James E. Cutting (1982). He claims that Gibson himself, although he did not fully abandon the idea of resonance, abandoned the radio metaphor. Also, according to Cutting, some Gibsonians, such as Michael Turvey or Bob Shaw, exchanged the radio metaphor for the polar planimeter metaphor introduced by Sverker Runeson (1977), which could have been a further erosion of the concept of resonance. A polar planimeter is a device able to measure the area of any figure just by moving a tracing point along its perimeter (see Figure 3). The measurement of the area of the figure is possible thanks to the differential movement of the wheels and gears of the planimeter and does not need any previous measurement of the length or height of its sides. In this sense, the planimeter is able to measure high-order (structural) variables, like the area of a figure, without the need to measure basic lower-order dimensions like the length or the height of the figure.

Figure 3. Smart Instrument. A picture of a polar planimeter with all its parts. In order to measure an area, the Tracing Point goes along the perimeter of the figure and the different wheels and gears of the planimeter roll. When the movement of the Tracing Point ends, the Integrating Wheel shows the area of the figure.
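A textbook way to see why tracing a boundary suffices (this is the standard mathematical gloss on the planimeter, not something argued for in this chapter) is Green’s theorem: for a region $R$ with boundary curve $\partial R$ traversed once,

$$A = \iint_{R} dx\,dy = \oint_{\partial R} x\,dy,$$

and the rolling of the integrating wheel effectively accumulates the line integral on the right as the tracing point moves. The device thus registers a high-order variable (area) directly from the traced perimeter, without ever measuring lengths or heights separately.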

It is noteworthy, however, that although the radio metaphor is replaced by Turvey and Shaw, the polar planimeter works as a metaphor for the very same reason the radio did.26 Namely, the reason the polar planimeter works as a metaphor is that there is some sort of coupling between the activity of its wheels and gears and the movement of its tracing point along a figure that allows the whole system to measure the area of the latter without computing it. In this sense, the polar planimeter resonates to the perimeter of the figure as a radio resonates to the frequency of a specific radio station: not by computational processes, but by a kind of attunement to the right information. The polar planimeter captures the area of a figure because of the dynamic relation between its gears, its tracing point, and the perimeter of the figure, and the radio tunes in the radio station because its tuner is able to resonate to the correct frequency. The change from the fine-grained metaphor of the radio to the fine-grained metaphor of the polar planimeter does not affect the ongoing usage of the more general metaphor of resonance as the proper activity of an organism regarding external perceptual information.27 Thus, a better developed concept of resonance is still needed. In order to develop it, the first step must be to describe the requirements an operational definition of the notion of resonance has to meet in order to, first, be scientifically useful, and second, be compatible with the other ecological principles.

26 Actually, one of Cutting’s (1982) claims in his article is that the polar planimeter metaphor should be abandoned as well, for the same reasons Gibson seemed to abandon the radio metaphor; i.e., because both the radio and the planimeter require a person—a central, intelligent controller—to make them work.

I want to claim that resonance is what is going on inside the organism, especially in the CNS, with regard to what is going on at the ecological scale. This broad definition will work for now and grasps the character both of the ‘external information’ (ecological and constituted by the agent-environment interactions; for instance, the relative accelerations of the fly ball regarding the movements of the outfielder in the example above) and of the organismic activity, which, along with its relation to the ecological scale, is the main explanandum of a theory of resonance. Nevertheless, some constraints are needed to make it compatible with other ecological principles.

First, I have to make clear the scale at which resonance must be described. There are three possibilities: the agent-CNS interactions, the inner-CNS interactions, and the CNS-environment interactions. I will recommend the agent-CNS interactions—i.e., the CNS activity in relation to the overall activity of the agent in her environment—as the correct scale for the analysis of resonance because it captures all the relevant features of both the ecological and the intra-organismic scales. There are problems with each of the other two possibilities. On the one hand, if we pick the inner-CNS interactions, the link to the ecological scale is broken. If we focus only on what is going on inside the organism, our explanation will be as non-ecological as the ones criticized by Gibson. Actually, such an explanation would be at the usual disembodied scale of classic cognitivist (computational, information-processing) approaches. On the other hand, if we focus on the CNS-environment interactions, we will misconceive the role of the body in perception and action, so we will violate another set of tenets of ecological psychology: the active body as a central component of the explanation (see the Interlude, below). Gibson (1966) famously introduced the concept of perceptual system to capture the role of many parts of organisms and their very actions as constitutive elements of perceptual processes. The visual system, for example, is constituted by the CNS, but also by the peripheral nervous system and the eyes placed in a movable head, which at the same time is placed in a movable body, etc. All these elements are relevant to explain visual perception, and a theory of resonance based just on the CNS-environment interactions will be inadequate for grasping this complexity.

27 It is important to note that, although they are compatible, the idea of resonance exceeds the metaphor of the planimeter. The planimeter captures the ecological insight regarding the possibility of finding systems that are sensitive to high-order informational variables. In this sense, planimeters illustrate the possibility for some devices to resonate to a kind of information similar to ecological information. This is a reason in its own right to use the metaphor of the planimeter. However, as in the case of the radio metaphor, planimeters are not active and, to some extent, they work in terms of discrete events—e.g., punctual, concrete events of measurement. As we shall see, the way perceptual systems resonate to ecological information occurs in a more pro-active and continuous fashion. Organisms have their own intrinsic, endogenous dynamics that resonate to the information generated in the ongoing interaction at the organism-environment scale. This aspect of resonance is not captured by the metaphor of the planimeter.

The second constraint on developing an operative concept of resonance is that it must refer to the relation between the ecological (agent-environment) and the intra-organismic scales without appealing to computation. To describe resonance in such terms would entail the violation of what might be seen as the main tenet of ecological psychology. This tenet is that perception is direct and no neurological processing of information is needed. In more specific terms, ecological psychology has been, since its first elaboration by Gibson (1966), an anti-computational approach to psychology and cognitive science; it is a reaction against the cognitive revolution started some years before. In this sense, it seems obvious that to explain what is going on at the intra-organismic scale with regard to the ecological scale (aka resonance) in terms of computation would violate this core commitment of the ecological approach to psychology. Let’s analyze this second constraint closely.

2.3 Ecological Psychology as Anti-Computationalism

An explanation regarding why ecological psychology is an anti-computational approach to cognitive science is necessary because it might not be an obvious claim prima facie. Although anti-computationalism is foundational in ecological psychology and held by the vast majority of ecological psychologists,28 there seems to be no reason, in principle, for an ecological explanation and a computational explanation to be incompatible. Actually, they might be seen as two explanations of the same phenomenon at two different scales: the ecological explanation accounts for the changes of the agent-environment system at the agent-environment scale, while the computational explanation unveils the mechanism that makes those changes possible. To keep using the example of the outfielder, the ecological and the computational explanations might be seen as complementary: the ecological explanation unveils the lawful relations between behavior and ecological information, while the computational explanation describes the mechanism that enables the outfielder to behave and deal with such information. However, in the Gibson quote above, it is clear that he believed that the brain does not construct or compute outer information. But if computation is not, in principle, incompatible with an ecological explanation, why is it boldly rejected by Gibson and other ecological psychologists? There are, at least, two different but interrelated reasons.29

28 Both Gibson’s main books (1996, 1979) and classic texts on ecological psychology (Turvey et al. 1981; Michaels and Carello 1981; Lombardo 1987; Reed 1997) offer arguments against computation. Such arguments may be also found in the contemporary ecological psychology literature (e.g., Richardson et al. 2008; Chemero 2009; Michaels and Palatinus 2014). 29 Other reasons such as the necessity of a homunculus (Turvey et al. 1981; Turvey et al. 1982) or the appeal to loans of intelligence (Dennett 1978; Kugler and Turvey 1987) for any computational theory to work have been described in the literature as well. Both take issue with the necessity of some kind of

60

First, probably the best way to understand why ecological psychologists reject computation is to analyze the reasons why any kind of computational process is posited to begin with. According to ecological psychologists, computation is needed because cognitive science uncritically assumes the Chomskyan argument of the poverty of stimulus.30 Put simply, the defenders of the argument of the poverty of stimulus claim that the stimuli arriving to the sensory receptors of agents are unspecific, ambiguous, and generally insufficient to support any cognitive task. This fact entails that an internal processing of such stimuli must be taking place in order to enrich and disambiguate them and make them suitable for the task.31 For example, think about perception of size and distance. Although two cars are of the same size in the environment, if they are at different distances from the sensory receptors of an agent, they will be different in size in her 2D retinal image—i.e., the stimulus is unspecific regarding the state of affairs in the world. Thus, a construction of a model of the 3D environment is needed for the agent to be able to recognize the correct state of affairs, namely, that the two cars are equal in size and they are at

intelligence at the basis of computational theories of cognition that, however, remains unexplained in such theories. For example, in order to run a computational algorithm over some sensory input, the first step is to decide which algorithm is going to be applied. Such a decision must be taken by some central controller that recognizes the kind of input received (i.e., that has previous knowledge about the input) and which algorithm must be applied. In other words, a human-like intelligence—a homunculus—is needed for an account of a computational process as the ones postulated in computational theories of cognition. The users of real computers are the ones who play this role regarding them (they decided what functions are carried out but the computers), but cognitivism has no resource to account for it. 30 See Chomsky (1980) or Fodor (1981) for the argument. See Michaels and Carello (1981), for a critique from ecological psychology. 31 In his influential textbook Cognitive Psychology (1967), Neisser claims: “These patterns of light at the retina are… one-sided in their perspective, shifting radically several times each second, unique and novel every moment… bear little resemblance to either the real object that gave rise to them or to the object of experience that the perceiver will construct… Visual cognition, then, deals with the process by which a perceived, remembered, and thought-about world is brought into being from as unpromising a beginning as the retinal patterns.” (pp. 7-8).

Such a model-construction requires some kind of information-processing in terms of computational mechanisms.32
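The ambiguity at stake can be made explicit with the standard visual-angle relation; this is only an illustrative gloss on the example above, not part of the ecological argument itself. Under the usual pinhole idealization, an object of size s at distance d subtends the angle

\[ \theta = 2\arctan\!\left(\frac{s}{2d}\right), \qquad \text{so that} \qquad \theta(s, d) = \theta(2s, 2d). \]

A car of size s at distance d and a car twice as large at twice the distance project exactly the same retinal extent, so the retinal stimulus alone does not fix the pair (s, d); this is the underdetermination that the postulated internal model-construction is meant to resolve.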

Contrary to the argument of the poverty of stimulus, one of the foundational claims of ecological psychology is the idea of the richness of stimulus (Gibson 1966; Michaels and Carello 1981). Ecological information is specific and unambiguous, so, if it exists, it is sufficient to support the different cognitive tasks an agent carries out in her environment without the need for any processing, enrichment, or construction of an internal model. As Edward Reed (1997) states:

Standard theories of information processing in both neurophysiology and psychology take for granted that there is no meaningful information [i.e., unspecific, ambiguous] available to an observer except what the observer’s brain can construct out of sensory inputs. But if ecological information exists, then the observer’s job is not to create information but to find it. (p. 65; emphasis is mine).

Ecological psychologists see an incorrect assumption in the idea that stimulation is impoverished and must be enriched by computation before it can serve in cognitive tasks; they reject what they call the dogma of the poverty of stimulus. This rejection has the consequence that, if computation is posited only to solve the problem of the poverty of stimulus, and if stimulus poverty is in fact a non-problem, then computation is not needed after all. If ecological information exists, it does not need to be computed; it can simply be detected.

The second reason for the rejection of computationalism by ecological psychologists is a view many of them hold regarding cognitive systems: namely, that cognitive systems are just a specific kind of physical system:

32 David Marr (1982), for instance, famously offered a mechanism for the construction of a 3D image from a 2D one, going through an intermediate step known as the 2½D sketch.


In my opinion, the search for mental mechanisms (of either the symbolic or sub-symbolic kind) is overvalued. The challenges facing cognitive theory are considerably more profound, having to do with laws and principles formative of the functional order characterizing nature’s ecological scale—the scale at which animals and their environments are defined. I believe that the major concepts needed to address cognition will not be found in the concepts provided by formal logics, computational languages, or network architectures. Rather, the kinds of concepts needed will be developed in the context of an emerging ecological physics and must include a physical notion of information that satisfies the conditions of information about, in the sense of specificity to, and a notion of intentionality suited to the task of particularizing very general principles. (Turvey 1991, pp. 85-86).

In this context, “physical” is used in terms of the science of physics, or ecological physics, and not in terms of ontology. The vast majority of cognitive scientists agree that cognitive systems are physical in ontological terms; namely, that cognitive systems are of the same kind as bodies, trees, or sand, so they reject ontological dualism. However, what ecological psychologists mean when they claim that cognitive systems are physical systems is that cognitive systems must be described and explained by appeal to the laws of physics and that no special explanatory strategy, like the computational one, is needed (see, for example, Turvey et al. 1981; Turvey et al. 1982; Tuller et al. 1982). Ecological psychologists claim that psychological explanation must be developed at the ecological scale (see Figure 2). But also, by this claim, ecological psychologists exemplify within psychology the philosophical maneuver of embodiment that originated at the beginning of the scientific revolution and that was developed in the previous chapter: explanations must appeal to laws of interaction between entities and not to intrinsic/inner features of those entities as the realizers of the target explanandum (Raja et al. 2017). As a reminder: in Aristotelian physics, the behavior of a falling stone was explained in terms of its substantial form, namely, in terms of an intrinsic/inner feature that made the stone fall. In Newtonian physics, however, such falling was explained in terms of a law (the Law of Universal Gravitation) that related the stone to another body (say, the Earth) under some conditions. Ecological psychology favors this second general explanatory strategy for psychology and rejects an explanation that posits an intrinsic/inner feature, a computational mechanism plus representations in our case, as the realizer of the psychological process.
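For the record, the contrast can be put in one line; the formula is just the familiar Newtonian law and not part of the ecological argument:

\[ F = G\,\frac{m_{\text{stone}}\, m_{\text{Earth}}}{r^{2}}. \]

The explanandum (the stone's falling) is accounted for by a lawful relation between two bodies and the distance between them, not by any feature intrinsic to the stone alone.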

Chemero (2009) puts this explanatory preference for the ecological scale and ecological physics in a very illuminating way. Imagine, for instance, a visual event. In a classic computational interpretation, there is some causal chain that brings light reflected from some object (or surface) to the retina of the observer. From this moment, light gets into the system in the form of a stimulus and a chain of computational operations is applied to it. The last link of that chain of computational operations is a representation of the external object. The explanation of that visual event then appeals to an internal mechanism and a representation. According to Chemero, ecological psychologists reject this kind of interpretation of a visual event because they think nothing special happens in the retina, which is just another way to state the ecological standpoint. That is, there is no reason to change from a purely physical explanation of the event when we refer to the causal chain in the environment to a chain of computational operations after the light gets to the retina. An ecological psychologist would say: “Nothing magical happens in the retina! It’s just one causal chain and physical causation all the way down!” This position is reinforced by the previously discussed idea of the richness of stimulus: there is no reason to posit computational processes if there is no need for information processing.

What this example highlights is that ecological psychologists reject the assumption that inner states of the organism, and inner computational states specifically, must play a privileged role in the explanation of a visual event or any other perceptual event. In other words, they hold an ecological explanatory strategy. There is only one causal chain (the whole chain of the animal-environment system), and to explain, say, a visual event is to explain the whole chain by appeal to physical laws. To explain the whole chain requires explanation at the ecological scale, and to understand cognitive systems as purely physical systems requires anti-computationalism. It requires the rejection of any kind of inner feature as the realizer of the target phenomenon—nothing special happens inside! Thus, thinking in terms of a chain of computational events after the retina is forbidden in the ecological worldview.

Chemero’s insight can be further reinforced. In the classical interpretation of vision, perception happens at the end of the inner computational chain: perception is the representation of the outer object, and such a representation is the last link of the chain. In this sense, the object of perception is the representation itself. Of course, such a representation is related to an external object or an external event, but what matters for perception and what is perceived is, in terms of explanation, the construction and awareness of the representation. According to most ecological psychologists, however, perception is the whole chain. Perception is a state of the whole animal-environment system as a physical system. It is not something happening inside the organism in any sense. The inner end of the chain is not special in any regard because it is not the output of any serial process of computation that goes from the outside to the inside of the organism. Perception is just a physical state of affairs that happens to cut across the boundaries between organisms and their environments. This is another way in which the idea of computation, as a serial processing of discrete inputs to get discrete outputs, is incompatible with the main tenets of ecological psychology.

It is important to note, however, that these ideas regarding the status of cognitive systems and the understanding of cognitive processes such as perception do not block the possibility of a description of the role of the CNS or other inner states of the agent in those processes. In other words, we are allowed to explain resonance. The only constraint imposed by these considerations is that when we explain resonance, we have to do it keeping two caveats in mind. First, resonance is just a part of the chain, so it only makes sense in relation to the rest of the chain. And second, it is a physical chain, so whether we explain the whole or a part, we have to appeal to physical tools to do it. In the rest of this work, I will try to develop an ecological cognitive architecture in which resonance can be explained while all the constraints set out in this section are met. However, before I turn to that task, I will pause to set out some ideas about the significance of embodiment in cognitive science and the reason why resonance is relevant to it.


Interlude: The Significance of Embodiment

Embodied cognition is without doubt one of the main trends of contemporary cognitive science (Calvo and Gomila 2008, Shapiro 2014); and ecological psychology is, after all, a theory for an embodied cognitive science—a theory of embodiment, a theory of embodied cognition. Furthermore, although ecological psychology is not the only such theory, it is the most consistently embodied and the first of them all (as defended in Chapter 1). In this sense, ecological psychology not only constitutes a main example of a theory of embodiment, but can also provide some insights regarding the significance of embodiment in the cognitive sciences.

The significance of “embodied” in “embodied cognition” has been disputed by several voices (Adams and Aizawa 2001, Goldinger et al. 2016) that claim embodiment is either meaningless or scientifically irrelevant. I think such a claim is made because what the critics refer to as “embodied cognition” is not embodied cognition. To this point, I have defended that embodiment is an explanatory strategy that started with Modern science, and I have shown how Kepler and Descartes were two main figures of that move. However, I have shown as well that neither of them, but rather Gibson, was the founder of embodied cognition. Although both Kepler and Descartes participated in discussions regarding optics, vision, etc., they never proposed a theory for an embodied cognitive science, but defended the opposite approach. This fact reflects that, in order to develop an embodied cognitive science, some other features beyond the new explanatory strategy are needed. The aim of this Interlude is to discover these features.

The reason why Descartes is not the founder of embodied cognitive science is metaphysical—see the distinction between res extensa and res cogitans in Chapter 1. However, the reasons why Kepler is not the founder of embodied cognitive science are not as obvious. In the following, I will contrast Kepler’s and Gibson’s positions regarding vision to highlight the differences between them and to show the reasons why the latter, and not the former, is the founder of embodied cognitive science. Understanding the contrast between them will provide us with a better comprehension of the scope of embodiment as an explanatory strategy, but it will also provide us with the other two sine qua non conditions for an embodied cognitive science: (i) the structure of environmental information and (ii) the active role of the body. The combination of the new explanatory strategy with (i) and (ii) confers on embodiment its full significance for the sciences of the mind.

I.1 Kepler or the Instrumental Use of the Body

As has already been claimed, Johannes Kepler revolutionized the scientific field of optics with Astronomiae pars Optica (1604), but he could also have been the first proposer of an embodied approach to cognition. In a previous quote, which I reproduce again, he said, while describing the eyes:

The eyes are attached to the head, so, through the head, they are attached to the body; through the body, to the ship or the house, or to the entire region and its perceptible horizon. (Kepler 1604: 336).

In this claim, Kepler introduces body and environment in the characterization of the organ of vision, so it might be an instantiation of a proto-embodied cognitive science. If, as is usually understood, embodied cognition is based on the inclusion of body and environment in the explanation of cognitive skills, Kepler’s claim seems to be a fair description of the issue. Moreover, the resemblance between Kepler’s statement and typical statements made by defenders of embodiment is noteworthy. For instance, Gibson claimed: “… [N]atural vision depends on the eyes in the head on a body supported by the ground.” (1979: 1). Thus, it might be true, after all, that Kepler was the founder of embodied cognition. He was not.

As we have already seen in Chapter 1, the German astronomer gave an account of vision based on the appeal to visual spirits. Somehow, the images reflected on the retinas come to join with visual spirits in the mind and so perception happens—this is an example of an instance in which the psychological explanation remains disembodied. The way such a process is carried out remains mysterious to Kepler, and that is why he chooses to leave the pursuit of its solution to natural philosophers. However, it seems paradoxical that he introduced body and environment in the characterization of the organ of vision—as Gibson did afterwards—only to then point towards a fully disembodied explanation of vision itself. This apparent paradox will provide one of the keys to understanding the significance of embodiment in cognitive science.

The two apparently contradictory claims regarding vision made by Kepler are not so contradictory when one considers the context in which he made them. Kepler did not intend to explain vision. In general terms, he was in the business of explaining the optics of the instruments used to study astronomy in his time. In more concrete terms, he was, basically, in the business of justifying telescopes as reliable instruments for scientific investigation.33 To do that, the path chosen by Kepler was to point out the similarities between the lenses of telescopes and human eyes. According to Kepler, both lenses and eyes (retinas) are just screens that light

33 Imagine looking through a telescope in the 17th century! The low-quality lenses, the aberration of the images, suspicious colors, blurry forms: all of these were seen as a kind of “dark magic” through the eyes of that time (see Gal and Chen-Morris 2013 for a further review).

reaches after being reflected or radiated from different objects in the environment. In this sense, the lenses of telescopes are just as reliable as our eyes: both are equally physical instruments that aid us in perceiving the world. The eyes are just another instrument. This is the reason why body and environment are introduced in the characterization of the organ of vision. The body itself—the eyes in this particular example, but the claim can be extended to any other bodily part—is a material instrument the mind uses to perceive the visual world, just like a telescope.

By equating these two elements, telescopes get justified as instruments worth using in research on astronomy. However, and notably, such a move makes the act of perception itself a matter of the way visual spirits join the images we get from these instruments. Thus, vision is still a disembodied phenomenon in which the mind just happens to be informed by some physical instruments such as eyes or telescopes. I will call this approach the instrumental use of the body.

The main consequence of the instrumental use of the body is that the explanatory strategy of embodiment is not enough for an embodied cognitive science. Even if embodiment as an explanatory strategy is pursued, and body and environment are taken into account in the explanation—as in the case of Kepler—the proposal will still be disembodied if they are considered mere instruments. But, given that, we may ask what the complete significance of embodiment is, or what the meaning of “embodied” in “embodied cognitive science” is. In other words, the question is what the difference between Kepler and Gibson is such that, even though they hold such similar claims, the latter and not the former is the founder of embodied cognitive science.


I.2 Gibson or the Embodied Use of the Body

The place to begin answering the questions above is, again, ecological psychology. Since Gibson proposed it as, allegedly, the first embodied approach to cognition, it includes body and environment and their lawful relation as part of the psychological explanation.34 However, such an embodied approach to cognition also includes two essential sine qua non conditions for being claimed as embodied: (i) the structure of ecological information and (ii) the active body. They constitute what I call the embodied use of the body and, as they are essential conditions, whatever the significance of embodiment is, it must include them.

There are three interesting features regarding the embodied use of the body and the two conditions that constitute it. First, there is the fact that they are sine qua non conditions. This is so because they picture a way for both body and environment to play an essential role in the psychological explanation. For example, the body is not just a tool used by the brain to gather information; the body is active and, by being so, is a basic component of psychological processes. The active body adds some fundamental features to the intrinsic character of perception, for instance, that are impossible to explain if it is taken to be a mere instrument. When Gibson claims, say, that the eyes are in a movable head, attached to a body, etc., he is not describing an instrument. Rather, he is pointing out that the way our body is shapes the way our perception is. In this sense, Gibson escapes from the instrumental use of the body by setting these two conditions. They are necessary for a fully embodied approach to cognitive science since we now know, from the above discussion of Kepler, that embodiment cannot be based on the instrumental use of the body.

Second, there is the fact that the two conditions are part of the significance of embodiment precisely because they are sine qua non. By including the two conditions for the embodied use of the body,

34 Although I use ecological psychology as a paradigm, other embodied approaches to cognitive science share this tenet, e.g., the sensorimotor approach to perception (O’Regan and Noë 2001) or some kinds of enactivism (e.g., Hutto and Myin 2013, Di Paolo et al. 2017).

body and environment become part of the psychological explanation in a very specific way. Another way to put this is that the mere addition of body and environment in an unqualified way to psychological explanation (as in the case of Kepler) is not enough to count as an embodied approach to cognitive science. Body and environment must be part of the explanation, but in the specific way pictured by the two conditions suggested by Gibson, so that the instrumental use of the body is avoided.

Finally, an interesting feature of the embodied use of the body in ecological psychology is its intellectual inheritance. The ideas of the structure of perceptual information and the active body can be traced back to the philosophical and psychological traditions of Gestalt psychology and phenomenology, respectively. From Gestalt psychology, Gibson takes the idea of the primacy of structure (form, gestalt) regarding perception and perceptual information, while from phenomenology he takes the idea of the primacy of skills and action in the relation between minded organisms and their environments. The combination of these two insights is at the root of the two central concepts in ecological psychology: ecological information and resonance (see Figure 2 above).

Ecological information is what is detected in perceptual events and, crucially, is neither constituted by nor dependent upon simple sensations. It is present in the invariant transformations of the structure of environmental energy arrays with respect to the behavior of an organism (Turvey et al. 1981, Michaels and Carello 1981). This Gibsonian notion of information as a kind of physical and temporal structure—i.e., the optic array (physical structure) and its transformations (temporal structure)—is a radicalization or a re-elaboration of the Gestaltists’ rejection of both the bundle and the constancy hypotheses (Koffka 1923, Wertheimer 1923). According to Gestalt psychologists, on the one hand, what we see is not a bundle of sensations. When we see a landscape, for example, we see the landscape, not an addition of several thousand simple sensations. On the other hand, the same stimulation might be differently experienced depending on the context. For example, the same shade of blue, as a simple sensation, might be experienced differently depending on its surroundings, i.e., depending on which other shades of color surround it. In both cases, simple sensations are not the basis for what is experienced. Experience has more to do with whole structures than with the atomic parts of stimulation: we do not experience the world by adding discrete sensations into big pictures; rather, we grasp structures that are already given in our experience. This is the idea on which Gibson constructs his notion of ecological information.

On the other hand, as said in the above paragraph, ecological information only makes sense with respect to the behavior of the organism. This points to the idea that the physical and temporal structures that constitute ecological information are such only with respect to the active body of an organism, and that an organism is able to detect (to resonate to) such information only by being an active body. The idea of the need for an active body as a precondition for having experiences (e.g., for perceiving, for detecting ecological information) incorporates the phenomenological influence on Gibson. As ecological information only appears in relation to a behaving organism, only an organism with specific skills and involved in a specific kind of action detects and constitutes ecological information. There are two senses in which a body is active within the ecological framework. On the one hand, a body is always actively coordinated in a task-centered fashion. The different components of the body are pro-actively coordinated while performing a specific action—this is the idea underlying coordinative structures (see §2.1). On the other hand, a body is always active in the sense that it is always engaged in some kind of movement or task, and ecological information only appears within that active performance. For example, optic flow only appears when an organism with a visual system moves through her environment. Furthermore, the very structure of the optic flow will vary depending on the specific behavior the organism is performing (e.g., moving forward, moving backwards, or turning).
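As a purely illustrative aside, the dependence of the optic-flow pattern on the direction of self-movement can be sketched with the standard pinhole motion-field equations. The image grid, the constant depth, and the velocities below are invented for illustration, and the sketch is a geometrical gloss on the array itself, not a model of any computation performed by the perceiver:

```python
import numpy as np

# Toy sketch of how the structure of optic flow differs with self-movement,
# using the standard pinhole motion-field equations for pure translation
# (no rotation):  u = (x*Tz - Tx) / Z,  v = (y*Tz - Ty) / Z.

x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))  # image points
Z = 10.0  # assume a frontal surface at constant depth, for simplicity

def flow(Tx, Ty, Tz):
    u = (x * Tz - Tx) / Z
    v = (y * Tz - Ty) / Z
    return u, v

u_fwd, v_fwd = flow(0.0, 0.0, 1.0)     # moving forward: radial expansion
u_bwd, v_bwd = flow(0.0, 0.0, -1.0)    # moving backward: radial contraction
u_side, v_side = flow(1.0, 0.0, 0.0)   # sideways translation: uniform (lamellar) flow

# Forward flow points away from the image centre (the focus of expansion),
# backward flow points toward it, and sideways flow is identical everywhere.
print(np.all(u_fwd * x + v_fwd * y >= 0))                     # True
print(np.all(u_bwd * x + v_bwd * y <= 0))                     # True
print(np.allclose(u_side, -0.1), np.allclose(v_side, 0.0))    # True True
```

The same scene yields globally different flow structures for different behaviors, which is exactly the point of the example in the text.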

This conception of ecological information, experienced by organisms in virtue of their bodily skills and actions, can be directly traced back to Heidegger’s (1927) notion of purposive practice as the primitive relation between Dasein and the world, and also to Merleau-Ponty’s idea of the body schema (1945/2012). In Being and Time (1927), Martin Heidegger develops what he calls an “analytic of Dasein”. This obscure name stands for the study of the fundamental ways the human being (Dasein) relates to her environment. One of the main fruits of that study is that the original/fundamental relation between Dasein and the stuff in the world is usage: we relate to stuff in the world, we get its meaning, and we understand it in a very primary sense, by using it—this kind of relation is labelled “readiness-to-hand”. This, and not thinking or any conceptual enterprise, is the primary way we engage with pens, bicycles, or stairs, for example. Importantly, setting readiness-to-hand as the fundamental relation between organisms like us and our environment requires that those organisms be engaged in purposive practices. When the stuff in the environment is defined in terms of usage, an active user engaged in practices of usage is required. The notion of an organism that is engaged with its environment by being intrinsically active and by being engaged in purposive practices is at the basis of Heideggerian phenomenology, and so it was inherited by Maurice Merleau-Ponty, who further developed it in his concept of the body schema.

In his Phenomenology of Perception (1945/2012), Merleau-Ponty develops the idea of the body schema as the set of skills bodies have and the possible actions they are able to develop, which shape the way the world appears to different organisms. For example, the fact that I have hands and am able to grasp some objects makes some things in my environment “graspable”. The stuff I can grasp, given the skills I have and the actions I am engaged in, is different from what other kinds of organisms are able to grasp, if they are able to grasp at all.

Merleau-Ponty gives several examples of how body schemas shape our relation with our environment. One of them is typing. Many people can type, although they are probably not able to explain how, or to say exactly where many of the letters are on the keyboard:

[Typing] is a knowledge in our hands, which is only given by a bodily effort and cannot be translated by an objective designation. The subject knows where the letters are on the keyboard just as we know where one of our limbs is. (Merleau-Ponty 1945/2012: 145).

Typing is something we do with our bodies, but also something we know with our bodies. The underlying idea here is, again, that our body plays a constitutive role in our experience of the world. We type with our body, but we also run with our body, drive with our body, or see with our body. The body is not a mere instrument, but one of the essentials of our mind.

The intellectual inheritance of what I have labelled the two sine qua non conditions for an embodied cognitive science, which Gibson provided in his development of ecological psychology, sheds some light on the whole significance of embodiment. Embodiment is an explanatory strategy, but also a way to understand the constitutive role of body and environment in psychological processes. The absence of either of these two features entails an instrumental use of the body—the case of Kepler—as opposed to an embodied use of the body; and the main consequence of that omission is a disembodied approach to cognitive science. In the next section, I point out how some problems related to the significance of embodiment arise in contemporary cognitive science.

I.3 The Significance of Embodiment Misunderstood

Overlooking the crucial role of the structure of perceptual information and the active character of bodies in embodied theories of cognitive science has two main consequences in the contemporary cognitive sciences. First, some allegedly embodied approaches to cognitive science are more Keplerian (disembodied; instrumental use of the body) than Gibsonian (embodied; embodied use of the body). They include bodies and environments in the explanation of cognitive phenomena in a purely instrumental fashion, but they lack the two sine qua non conditions for embodiment in cognitive science and, ultimately, the explanations they provide are purely cognitivist (i.e., disembodied). Examples are those Chemero (2009), and Käufer and Chemero (2016), refer to as instantiations of “embodied cognitive science” as opposed to “radical embodied cognitive science”—or Keplerian and Gibsonian, respectively, in my wording. Wide computationalism (Wilson 2004) or predictive coding (Clark 2015), for instance, would fall into the former category. Both of them include body and environment in the psychological explanation, but in a purely instrumental fashion; namely, body and environment are aids for the real realizer of the psychological phenomenon under study, which always happens to be some kind of mental mechanism that executes some kind of computation and that is somehow implemented in the brain.

The case of predictive coding is paradigmatic in this sense. Although it is allegedly an embodied approach to cognition, the body is used just as Kepler used it. At the end of the day the “magic” happens in the brain, and the psychological phenomenon is realized by a Bayesian mechanism that aims to reduce the matching error between sensory inputs and top-down priors (e.g., action-oriented representations) in order to have some kind of experience. For instance, in perception, some guesses about what sensory input will be received are delivered in a top-down fashion within the Bayesian system. Eventually, these guesses are matched against the sensory input and the resulting matching error is corrected following some rule.35 Such a correction makes the next guess more accurate and, eventually, the system ends up with a guess that is a good enough representation of the sensory stimulation and, hence, a good representation of the environment. It is easy to see how, instead of matching retinal images and visual spirits, Bayesian systems match two other kinds of correlates, but the essence of the explanation remains Keplerian, and body and environment play a merely instrumental role in the explanation (as is suggested even by some defenders of this kind of architecture in cognitive science; see Hohwy 2016; see §3.1.2).
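To make the shape of that explanation concrete, here is a deliberately toy sketch of iterative error correction: a single "guess" updated against a fixed input. It is an assumption-laden caricature (no hierarchy, no precision weighting, a hand-picked learning rate) and not the machinery of any actual predictive-coding model; it is included only to show where the explanatory work is being done: inside the updating mechanism, with body and environment reduced to suppliers of input.

```python
import numpy as np

# Toy prediction-error scheme: a top-down "guess" about the sensory input is
# repeatedly corrected in proportion to its mismatch with the actual input.
rng = np.random.default_rng(1)

sensory_input = np.array([0.9, 0.1, 0.4])   # what actually arrives at the senses
guess = rng.normal(size=3)                  # initial top-down prediction
learning_rate = 0.2                         # illustrative stand-in for a precision weight

for step in range(50):
    prediction_error = sensory_input - guess   # mismatch signal
    guess += learning_rate * prediction_error  # correct the guess

print(np.round(guess, 3))   # the guess has converged toward the sensory input
```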

The second consequence of the misunderstanding of the role of body and environment in embodied cognitive science is that some recent criticisms of embodied cognition, like the ones proposed by Fred Adams and Kenneth Aizawa (2001) and by Stephen D. Goldinger et al. (2016), are inaccurate insofar as the target of their criticisms is the kind of cognitive science that is more Keplerian than Gibsonian. They criticize those approaches that identify themselves as embodied cognitive science but still hold an instrumental use of the body. Thus, ultimately, their criticism of the idea of embodiment in cognitive science misses its point.

According to Adams and Aizawa (2001), the idea of embodiment applied to cognitive science is just an overstatement. Such a position is reflected in the following quote:

35 It is probably not beneficial at this point to analyze Bayesian systems and predictive coding architectures more deeply, insofar as we do not need many more details to understand why they are instantiations of the instrumental use of the body. For a deeper analysis of these architectures, see Friston and Kiebel (2009), and Clark (2013, 2015).


[Just like bodies] Microscopes, telescopes, mass spectrometers, IR spectrometers, stethoscopes, and high-speed photography convert environmental energy into a form usable by our sensory apparatus. In all these cases, common sense has it that our cognitive faculties, restricted to the confines of our brains, can be aided in any manner of ways, by cleverly designed non-cognitive tools. (Adams and Aizawa 2001: 44).

I think this quote is interesting for two reasons. On the one hand, Adams and Aizawa invert Kepler’s move to justify the use of telescopes in astronomy. While Kepler justified the use of telescopes by equating them with eyes, Adams and Aizawa take it as commonsensical to understand body and environment as if they were microscopes or telescopes. They uncritically embrace the instrumental use of the body as the way to make sense of the role of the body and the environment in cognition. On the other hand, such a position is a critique of a specific kind of embodied approach to cognitive science, viz. the Keplerian ones I have just discussed above.

According to Adams and Aizawa, these approaches, in which the psychological process is ultimately realized by a mental mechanism implemented in the brain, are better understood in terms of the instrumental use of the body. Therefore, to label them “embodied cognitive science” is inaccurate. I agree with them on this point. Approaches such as wide computationalism or predictive coding are essentially disembodied—for the reasons I gave above—and to claim that they are instances of embodied cognitive science is an error. However, Adams and Aizawa’s criticism of embodiment is of no use regarding the Gibsonian approaches. Once we take body and environment to be basic constitutive parts of psychological phenomena, they cannot be taken as mere instruments anymore—i.e., as body and environment constitute psychological phenomena, they cannot be treated as telescopes or microscopes. For example, once the body is not a mere converter of environmental energy into a form usable by our brain, but a provider of a kind of experience that is not achievable without it (e.g., Merleau-Ponty’s typing example), the appeal to any kind of instrumental use is a misunderstanding of its role in cognition. When the real significance of embodiment in cognitive science is properly understood, Adams and Aizawa’s criticism does not apply anymore.

Goldinger et al. (2016) provide a different kind of criticism regarding the idea of embodied cognition and the possibility of an embodied approach to cognitive science. According to them,

In recent years, there has been rapidly growing interest in embodied cognition, a multifaceted theoretical proposition that (1) cognitive processes are influenced by the body, (2) cognition exists in the service of action, (3) cognition is situated in the environment, and (4) cognition may occur without internal representations. (Goldinger et al. 2016: 1).

Given these propositions, their claim is that embodied cognition is scientifically and experimentally meaningless. The problem is that, as Adams and Aizawa did, they attack those allegedly embodied approaches to cognitive science that are based on the instrumental use of the body—those approaches that think embodiment means, for example, that cognition is influenced by the body. And, as in the case of Adams and Aizawa, I agree that those approaches make embodiment meaningless. However, as I have defended throughout this section, those approaches are not embodied approaches, insofar as it is part of the significance of embodiment that body and environment constitute, and not only influence, cognition. Thus, due to this misunderstanding of the significance of embodiment in the cognitive sciences, Goldinger et al.’s criticism of embodied cognition is aimed at the wrong theories.


To wrap up this Interlude, I want to summarize the significance of embodiment in the cognitive sciences as twofold: embodiment is, first, an explanatory strategy and, second, a thesis on the constitutive role of body and environment in psychological phenomena. Gibson developed an instance of embodied cognitive science for the first time when he developed ecological psychology. Crucially, he needed the concepts of ecological information and resonance to do that. Namely, to stay true to embodiment as an explanatory strategy and to the embodied use of the body, Gibson re-described organisms and environments precisely in terms of ecological information and resonance (the former having received far more attention than the latter in recent decades). Because resonance is a central and neglected concept for the significance of embodiment within the Gibsonian framework, I will devote the rest of this dissertation to the task of trying to make sense of it and to providing some tools for its further theoretical and empirical investigation.


Chapter 3: A Framework for the Story of Resonance

To an important extent, the organization of a system determines the kind of things that system can do. For example, the organization of the players of a football team on the field constitutes the spirit of the team; namely, the kind of things the team is better prepared to do (e.g., defending, counter-attacking, combinatory play, and so on). Until the 1960s the dominant systems were the 2-3-5 or the 3-4-3.36 Those systems were better prepared for full attack without too many interactions between the three lines (defense, midfield, and attack). The game depended to a great extent on individual actions. During the 1960s and the 1970s, the organization of football teams experienced a revolution in, at least, two directions. On the one hand, Helenio Herrera popularized the catenaccio: a full-defense system based on a 5-3-2. On the other hand, Rinus Michels, a Dutch coach, invented what has since been called total football: a 4-3-3 organization prioritizing the interaction between the lines and the fact that any player of the team can occupy any position on the field. Because of this, total football is an organization that makes teams play a more collective and interactive game than other organizations. In other words, if a football team is organized under the principles of total football, that team will likely play a markedly collective and combinatory style of football.

36 Meaning 2 defenders, 3 midfielders, and 5 strikers, or 3 defenders, 4 midfielders, and 3 strikers, respectively. This is the usual way to refer to the organization of real football teams (emphasis is mine).


Different players will play multiple roles and the plays themselves will be based on holding the ball in long combinatory possessions (i.e., the opposite of the more individualistic football based on fast transitions and individual plays favored by other organizations).

Another example of the influence of organization on the kind of activities a system can perform is the organization of political parties. For instance, there are three big political parties in Spain: Partido Popular (PP), Partido Socialista Obrero Español (PSOE), and Podemos. The first one, PP, is a conservative party founded by a minister of the previous Spanish dictatorship, and it has a very strongly hierarchical organization. The president of the party rules over all its aspects (e.g., who the candidates for elections are, who the leaders of the party in every region are, what the political program of the party is, who the next president of the party will be, and so on). The second one, PSOE, is a social-democratic party that is ruled by a national committee elected by regional committees composed of members of the party. This organization is not as centralized as PP’s, but a single governing body still makes the decisions. The last one, Podemos, is a young party aligned with the new left (e.g., “occupy” movements, the alternative European left). It has a bottom-up organization in which citizen councils select the national administrative body of the party and participate in all the political decisions: all decisions are voted on in open polls in which any Spanish citizen can participate. This seems to make Podemos a party better prepared than PP or PSOE to meet Spanish citizens’ contemporary demands for an improvement of both Spanish democracy and social participation in political decisions—or, at least, Podemos is considered to be the best party to do that by an important part of Spanish society, precisely because of its bottom-up collective organization, which is thought to be better prepared for the kind of activities needed to achieve a democratic improvement in Spain.


These two examples illustrate how the organization of a system constitutes the kind of activities the system is able to perform. I have talked about football and politics, but it is my contention that a very similar idea applies to the organization of the CNS: in the case of football teams or political parties, their organizational principles determine the kind of activities they can perform; in the case of the CNS, depending on our commitments regarding the organizational principles we use to describe and explain its functional structure, the kind of activities we can attribute to it will differ. For this project, I need to find a set of structural and functional organizational principles for a system able to resonate to ecological information while staying true to the tenets of ecological psychology. The aims of this chapter are to describe such a framework and to provide the reasons why it is appropriate for the overall objectives of my project. To achieve these aims, first, I provide the reasons why most of the contemporary organizational and functional descriptions of the CNS are inadequate for a theory of resonance. Second, I propose that Michael Anderson’s version of neural reuse (2014) is the best account of the structural and functional organization of the CNS for a theory of resonance. To do that, I describe Anderson’s theory of neural reuse and its structural and conceptual parallelism with ecological psychology. And, finally, I develop the conception of resonance and offer an abstract model for it within the framework of neural reuse.

3.1 ACT-R, SPA, Predictive Processing and the Like

In recent decades, the models proposing structural and functional organizations of the brain/CNS as a cognitive device can be mapped over two axes: the modularity-distribution axis and the computation axis (see Figure 4). These models can be classified, on the one hand, with regard to the degree of distribution they allow for the implementation of a cognitive capacity (i.e., how many components of the system—at different scales—are taken to participate in the cognitive capacity; e.g., just a group of neurons, a network, or a set of networks). Regarding this dimension, there has been a transition from modular structures inspired by computational architectures to more biologically inspired architectures based on networks. On the other hand, models can be classified in terms of the extent to which they exhibit a markedly computational character regarding the explanation of psychological processes (e.g., the specificity of certain computational modules as realizers of specific cognitive functions, the seriality of the information-processing, or the appeal to a central processor).37

The mapping of the different models over the modularity-distribution and computation axes has evolved quite naturally over the last 30 years, from a highly computational plus highly modular region of this space to a highly distributed plus not-that-highly-computational one. Such an evolution is natural because it basically matches the overall evolution of the cognitive sciences in general.38 In this sense, the neural counterpart of cognitivism, computational neuroscience39 (see Schwartz 1990, Bower 2013), progressively made room for other more distributed and less computational paradigms at the same time that classic cognitive science progressively made room for, for example, more embodied-embedded frameworks. This parallel evolution has sometimes been self-conscious, as, for instance, when Andy Clark proposes predictive processing models as “the perfect neuro-computational partner for work on the embodied mind” (Clark 2015: 1).

37 The two proposed dimensions are not completely independent. For example, usually, the more modular a model is, the more markedly computational is its architecture. However, as computation goes beyond modularity (e.g., connectionism is a computational but radically non-modular architecture), it makes sense to analyze them separately. 38 According to Eliasmith (2013), for example, such an evolution can be described in terms of symbolism, connectionism, and dynamicism. This classification is compatible with my proposal, although he does not consider the temporal vector in which the three different paradigms are mapped. 39 It might be claimed that computational neuroscience is only a way to approach cognitive neuroscience (see Gazzaniga 1984, Gazzaniga et al. 2009, Ochsner and Kosslyn 2013), which is the real partner of cognitivism regarding the activity of the CNS. I am not denying such a possibility, but computational neuroscience has been the dominant paradigm in terms of structural and functional organization of the CNS within cognitive neuroscience in the last decades, so it encompasses all the interesting features of cognitive neuroscience regarding the scope of this dissertation. Actually, to be fair, computational neuroscience reaches beyond the structural and functional organization of the CNS and also offers models for single-neuron activities or axonal patterning, but all of them are out of the scope of this work as well.

Clark understands that novel approaches to cognitive science require a novel understanding of the CNS in terms of its structural and functional organization. I take this idea to be correct, and this is the reason why I am proposing a partner for ecological psychology at the CNS scale as the right framework for resonance. To do that, I will compare five of the most influential contemporary organizational paradigms—a.k.a. cognitive architectures—for the CNS: adaptive control of thought-rational (ACT-R), semantic pointer architecture (SPA), predictive processing, dynamic systems architectures for brain function, and neural reuse.

Figure 4. Cognitive architectures (ACT-R, SPA, predictive processing, dynamic systems, and neural reuse) mapped in a bi-dimensional space of computation (x axis) and distribution (y axis). An increment in any of the axes means an increment in levels of computation and distribution respectively. For example, neural reuse occupies a low-computation/high-distribution space while ACT-R occupies a high-computation/low-distribution space.

In the figure above, the five organizational paradigms or cognitive architectures occupy a position in a bi-dimensional space depending on their levels of computation and distribution. The importance of defining the space of cognitive architectures in terms of these two features is twofold. On the one hand, as we have already noted, the evolution of these architectures in the last decades is based on variation along these dimensions across the different models. On the other hand, and highly relevant for the current work, distribution and computation match the two main constraints for an operative definition of resonance set out in the previous chapter. Namely, as an operative definition of resonance must be compatible with the main tenets of ecological psychology, such a definition must be compatible with psychological explanation at the ecological scale and must offer a non-computational account of resonance. In other words, an appropriate cognitive architecture for the phenomenon of resonance must be highly distributed and not computational at all: ecological psychology rejects perception as computation (anti-computationalism; see §2.3) and rejects the idea of concrete internal mechanisms as realizers of psychological processes (high distribution; ecological scale, see §1.2, §2.1). Due to these constraints, the great majority of contemporary models of the structural and functional organization of the CNS cannot serve as a basis for resonance. In the case of the five examples I am using here, four of them fail to meet at least one of the constraints, either because of the scale of analysis, which is fully internal and not ecological, hence not distributed enough, or because of the appeal to computation.

3.1.1 The Antipodes: ACT-R and SPA

Among the cognitive architectures I am evaluating here, two are completely incompatible with the main tenets of ecological psychology. The Adaptive Control of Thought-Rational (ACT-R) architecture and the Semantic Pointer Architecture (SPA) violate the two constraints for a theory of resonance: they both are highly computational and, although SPA is less modular than ACT-R, both are highly centralized.


ACT-R is a cognitive architecture first developed by John Robert Anderson (1983, 2007) and based on the classic principles of cognitivism and their instantiation as a cognitive architecture in the General Problem Solver (GPS) developed by Allen Newell and Herbert A. Simon (1976). Because of this cognitivist basis, ACT-R’s main features are modularity, computational processing, and the presence of representational states. It is true that the model, as proposed by Anderson (2007), includes some elements inspired by networks, especially in the memory system, but this is not enough to make it a hybrid architecture.

The basic components of a classic ACT-R architecture are a group of seven modules bidirectionally connected to a central procedural module that controls the overall activity of the system and that is devoted to the production and selection of the different rules applied in task-oriented computational processes. The seven modules—a.k.a. buffers—are visual, aural, manual, vocal, imaginal, declarative, and goal. Each is in charge of specific functions that are accomplished by gathering rules from the central module. Some variations of the ACT-R model include other modules. For example, the ACT-R/PM model, developed by Kieras and Meyer (1997), includes a module specifically devoted to perception-action tasks (see Figure 5 for an example of a non-classical ACT-R model).


Figure 5. Schema of an ACT-R architecture. The central module (at the bottom) includes three sub-modules: pattern matching, production execution, and procedural memory. There is another memory module devoted to declarative memory. In the upper part of the model, we find the specific modules or buffers: the vision and motor modules are separately instantiated, while the rest are grouped—probably as sub-modules—in a general module named “ACT-R buffers”. (From: http://act-r.psy.cmu.edu/about/, accessed 09/22/2017 3.30 PM).

That ACT-R is not well equipped to support resonance as it is required in ecological terms should be obvious at this point. First, ACT-R is highly modular. The architecture is based on a central controller and a set of adjacent modules that realize the psychological processes. In this sense, ACT-R is not compatible with psychological explanation at the ecological scale. In ACT-R models, the psychological explanation depends on the mechanisms internal to the modules, so it is an internalist/cognitivist account of psychological phenomena (see Figure 1).

And second, ACT-R is explicitly computational. In the model, psychological processes are computational processes based on the application of symbolic rules provided by the central module to symbolic states provided by one or more of the other modules or buffers. ACT-R is a classic computational model that applies internal rules over internal discrete states in serial discrete steps by means of an internal computational mechanism. This is not surprising insofar as the classic approach to computation, Newell and Simon’s GPS, is the main inspiration for ACT-R. ACT-R is thus ill-equipped to serve as a framework for resonance.40
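To make the serial rule-application picture concrete, here is a deliberately minimal toy production cycle in the spirit of ACT-R's procedural module. The buffer names and rules are invented for illustration, it is plain Python rather than the actual ACT-R software, and conflict resolution is reduced to list order:

```python
# Toy production cycle: rules are matched against the symbolic contents of
# buffers and fired one at a time, in serial discrete steps.

buffers = {"goal": "add 2 3", "retrieval": None}

def rule_request_sum(bufs):
    # Condition: a goal to add and no retrieved fact yet.
    if bufs["goal"].startswith("add") and bufs["retrieval"] is None:
        _, a, b = bufs["goal"].split()
        bufs["retrieval"] = int(a) + int(b)   # stand-in for a declarative retrieval
        return True
    return False

def rule_report(bufs):
    # Condition: a sum has been retrieved; action: update the goal buffer.
    if bufs["retrieval"] is not None:
        bufs["goal"] = f"done {bufs['retrieval']}"
        return True
    return False

production_rules = [rule_request_sum, rule_report]

# Serial cycle: on each step the first matching rule fires (real ACT-R uses
# utility-based conflict resolution; here it is simply list order).
while not buffers["goal"].startswith("done"):
    for rule in production_rules:
        if rule(buffers):
            break

print(buffers["goal"])   # -> "done 5"
```

Even in this caricature, the explanatory work is done by rules applied to discrete internal states, which is exactly the feature that makes the architecture unsuitable for resonance.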

The second cognitive architecture that is completely incompatible with the tenets of ecological psychology, and thus not well suited to be a framework for the theory of resonance I am trying to develop, is Chris Eliasmith’s SPA (2013). SPA is based on the concept of a semantic pointer: a neural representation that carries some semantic content (pointing to more semantic information), that can be manipulated by dedicated subsystems, and that can be used to compose larger representational structures (see Figure 6).

40 The fact that ACT-R requires representational states is a further reason for its rejection as a model of the structural and functional organization of a CNS capable of showing resonant behavior, as those states are difficult to incorporate within the ecological approach to cognitive science. However, as I am not taking issue with representationalism in this work, I will not analyze this problem in depth.

The processing (e.g., compression, dereferencing, composition) of these semantic pointers is what allows for the manipulation of high-order representations and what supports high-order cognitive functions (see Figure 7).

Figure 6. Basic unit of the SPA. It is basically composed of two subsystems: interface and transformation—the action selection system, although depicted in the figure, is not strictly part of the basic unit, as we will see in Figure 7. High-order representations enter the system through the interface and are compressed by its hierarchical network to generate semantic pointers. Then, these semantic pointers are transformed in the transformation unit and eventually sent to a different basic unit of the SPA. The kinds of transformations applied to semantic pointers are selected in the action selection unit by a process of correcting error signals. (Eliasmith 2013: 248, figure 7.1).

Each of the basic units, such as the one represented in Figure 6, corresponds to a cognitive capacity (e.g., vision, motor control, etc.) and is connected to the rest of the basic units of the architecture. The connections between units, which serve as inputs for the basic cognitive capacities, allow for high-order cognitive functions. For instance, the vision unit can generate a semantic pointer. Later in the process, this semantic pointer may be sent to a different unit and combined with a different pointer generated by that unit to make a high-order representation—still a semantic pointer, but a more abstract one. Such a high-order representation may be further compressed into more and more abstract semantic pointers by other basic units that correspond to different cognitive functions. At the end of such a process, we find an instantiation of a high-order cognitive function like, for example, the complete experience of an environmental setting or the solution to a given cognitive task.
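As an illustrative aside on what "combining pointers" amounts to: in Eliasmith's presentation semantic pointers are high-dimensional vectors, and binding two of them yields another vector of the same dimensionality, standardly via circular convolution. The sketch below uses random vectors and invented names (shape, action) and glosses over the neural implementation entirely:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # dimensionality of the semantic pointers

def unit_vector(d):
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution: the standard binding operation for semantic pointers.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Approximate inverse: correlate the bound vector with one of its elements.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

# Two "basic unit" pointers, e.g., one from a vision unit and one from a motor unit.
shape, action = unit_vector(D), unit_vector(D)

# A higher-order pointer: still a vector of the same dimensionality,
# but encoding the combination of the two.
combined = bind(shape, action)

# Unbinding recovers a noisy version of the other pointer; the similarity is
# well above zero, whereas an unrelated pointer would give a value near zero.
recovered = unbind(combined, shape)
print(np.dot(recovered / np.linalg.norm(recovered), action))
```

The point of showing this is simply that every step—generation, binding, transformation—is an operation over internal vector representations, which is why the architecture counts as computational and internalist in the sense discussed below.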

Figure 7. Different basic units connected in the SPA. Thick lines connect different basic units of the architecture (e.g., sending semantic pointers between units). Thin lines correspond to the error and control systems. (Eliasmith 2013: 249, figure 7.2).

In general, SPA is more distributed and slightly less computational than ACT-R. Also, according to Chris Eliasmith, SPA is a good model for bringing perception and action together in psychological explanation and takes the role of environment and body to be important for understanding psychological processes, so it might seem compatible with explanation at the ecological scale (see Eliasmith 2013: 107 ff.). However, there are three reasons why SPA is not well suited as a framework for resonance.

First, even though Eliasmith claims SPA can integrate insights from embodied cognition because the body and the environment play a role in the psychological explanation, the truth is that SPA is a paradigmatic example of the instrumental use of the body (see §I.1). Eliasmith refers to the environment and the body—he even refers to the idea of synergies in the body—but, at the end of the day, what is relevant to perception, the control of action, and the relation between them is the processing of internal representations coded in the form of semantic pointers. For example, he claims that the bi-directionality of the processing in SPA “allows the system to learn appropriate representations (i.e., synergies).” (Eliasmith 2013: 108). However, contrary to the ecological approach (see §2.1), in SPA synergies are representations; namely, synergies are internal states of the processing system. Thus, the body does not play an active role in an explanation based on SPA; it is just an instrument controlled by internal states.

Second, even though SPA is not a classic computational system, information-processing in the form of computation is one of its basic features. According to Eliasmith, SPA is biologically inspired (2013: 33 ff.), so it is based on a network structure. The basic units of SPA (Figure 6 above) are composed of hierarchical bi-directional connectionist networks that process inputs in a parallel fashion to compress high-order representations into semantic pointers or vice versa. In this sense, SPA is still a computational system, although it is not a strongly modular, serial processing system as in the case of ACT-R, for example.

And third, SPA is distributed insofar as networks are distributed, but some remnants of modularity are obvious in the architecture. On the one hand, each basic unit is in charge of a basic cognitive function (e.g., vision, motor control). Higher-order cognitive functions are built up by combining these basic functions into more abstract semantic pointers, but this combination is also carried out by some basic unit. In this sense, although interconnected, basic units behave as modules.

Moreover, SPA has a system in charge of the selection of the different actions basic units perform—e.g., the kind of transformation applied to semantic pointers before they are sent to a different unit—and the kind of interactions performed by the interconnected units. This system serves as a central controller and does not seem to be distributed (see, for example, Figure 7).

Thus, SPA is still framed in a modular fashion. This fact, combined with its computational commitments and its use of the body as an instrument, makes SPA unfit for supporting an ecological theory of resonance.

3.1.2 The Neighbors: Dynamic Systems Architecture and Predictive Processing

There are two other cognitive architectures that are possibly compatible with the main tenets of ecological psychology, at least prima facie. These two architectures are Predictive Processing and Dynamic Systems Architecture. Both are minimally computational, if not openly anti-computational, and have some degree of distribution. Thus, they deserve a closer look to check whether they could serve as a framework for resonance.

The label “dynamic systems architecture” encompasses similar, but not identical, models of the activity of the CNS based on tools provided by dynamic systems theory (Ward 2002). The use of those tools is not a mere methodological choice, but has a deep influence on the theoretical assumptions of the architecture. By using the tools and concepts of dynamic systems theory, dynamic systems architectures usually entail two hypotheses about the nature of the CNS and its activity: (i) the CNS is taken to be a non-linear dynamic system (see Freeman 1991, Skarda and Freeman 1987) and (ii) the explanatory models aim to capture the stable patterns in the changing dynamics of the system. The first hypothesis (i) establishes a clear departure from computation as the foundational metaphor for the study of cognition. As the CNS is a non-linear chaotic dynamic system, it cannot be described by computational means; namely, depictions of the CNS as a serial (or parallel) processor of discrete internal states according to explicit rules or algorithms are not accurate. The second hypothesis (ii) is a direct consequence of this fact: if we want our models to capture the behavior of the CNS that is relevant for cognitive activities, we need to abandon computation and its correlates as tools for neuroscientific research and, instead, turn to the tools of dynamic systems theory.41

Two examples of dynamic systems architectures are Dynamic Field Theory (DFT; see Schöner et al. 2016) and the pragmatist neuroscience developed by Walter Freeman (2001). Both architectures share the idea that cognitive functions are better understood if we take them to be implemented by neural networks (i.e., neuron populations) that change their own dynamic states when interacting with other networks or when receiving an input of sensory stimulation.42 In this sense, cognitive functions are not the processing of packages of information by applying some set of rules or transformations, but the synchronization of patterns of activation of neural networks around an attractor or group of attractors in a dynamic field (see Schöner 2008, Spencer 2009). Thus, dynamic systems architectures, whether in the form of DFT or in the form of pragmatist neuroscience, are clear instances of anti-computationalism.
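Since the explanatory work in DFT is done by the attractor dynamics of continuous activation fields, a minimal one-dimensional field of the Amari type may make the idea concrete: a localized input drives the field toward a self-stabilized peak of activation rather than toward a discrete symbolic state. All parameter values and the input below are illustrative choices of mine, not taken from any published DFT model.

```python
import numpy as np

n = 101                              # field sites along some feature dimension
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]
tau, h = 10.0, -3.0                  # time constant and resting level
u = np.full(n, h)                    # field activation, initially at rest

def kernel(d):
    # Local excitation with broader inhibition ("Mexican hat" interaction)
    return 4.0 * np.exp(-d**2 / 2.0) - 1.5 * np.exp(-d**2 / 18.0)

W = kernel(x[:, None] - x[None, :])
stim = 6.0 * np.exp(-(x - 2.0)**2)   # localized sensory input centered at x = 2

for _ in range(500):                 # Euler integration with dt = 1
    f_u = 1.0 / (1.0 + np.exp(-4.0 * u))       # sigmoidal output of the field
    interaction = (W @ f_u) * dx
    u += (-u + h + stim + interaction) / tau

# The field settles into a self-sustained peak (an attractor) near the input
print("peak location:", round(float(x[np.argmax(u)]), 2),
      "| peak activation:", round(float(u.max()), 2))
```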

There are, of course, some differences between DFT and pragmatist neuroscience. For example, while Freeman was empirically focused on single-neuron synchronization most of the time—as in his research on the rabbit olfactory system—and only eventually made room for large-scale integration in neural networks, DFT is concerned with the dynamics of larger neural networks from the beginning. Also, while according to Freeman the communication between different neurons in different networks is performed by carrier waves (i.e., synchronous waves with a specific frequency and different amplitudes that run across different parts of the brain), DFT takes such communication to be a matter of coupling and constraints between neurons at different scales, which depicts a hierarchical architecture of the brain. Finally, pragmatist neuroscience and DFT differ in their commitment to representations. While Walter Freeman is against the idea of representation (Freeman and Skarda 1990), DFT supporters usually embrace them.43

41 As in the case of other paradigms regarding the structural and functional organization of the CNS, dynamic systems architectures are the manifestation at the neural scale of a wider general movement within the cognitive sciences; a movement that proposes dynamic systems as the most accurate tool to explain behavior (see, for example, van Gelder 1995, van Gelder and Port 1995, van Gelder 1998, Richardson and Chemero 2014). 42 There are other dynamic approaches to neural activity at different scales (e.g., Izhikevich (2007) proposes that the dynamic properties of the spiking of individual neurons determine their computational properties). I focus on DFT and pragmatist neuroscience because their scope is at the relevant scale for my work; namely, at the scale of structural and functional organization of the CNS.

The differences between pragmatist neuroscience and DFT, although important for inter-theoretical debates, do not differentiate them as possible frameworks for a theory of resonance. Both of them are openly anti-computational and, although Freeman’s carrier waves might introduce an idea similar to some ideas in information-processing models, both meet the anti-computationalist requirement for an ecological theory of resonance. Thus, the question now is: do pragmatist neuroscience and DFT meet the distribution requirement—that the cognitive architecture must be compatible with the ecological scale for psychological explanation—for an ecological theory of resonance? In their current form, they do not meet this requirement. They both hold an underlying commitment to some form of modularity that makes them ill-equipped to serve as a framework for resonance. In the case of pragmatist neuroscience, the commitment is explicit. For example, when explaining the way perception occurs, Freeman claims that “every sensory module must have a mechanism for organizing its patterns of meaning in space-time.” (2001: 97; emphasis is mine). Such a claim reveals that Freeman thinks of the brain in a modular fashion, and this is reinforced by his proposal that intentions reside in the limbic system (Ibid.: 90 & ff.).44

43 Schöner (2008), for example, claims that “localized peaks of activation [in neural networks] are units of representation.” (p. 109). 44 Here, I am performing a charitable reading of Freeman’s proposal. A less charitable reading would take issue with his proposal of a form of top-down hypotheses that are confirmed or rejected when they meet the sensory input (see Freeman 2001: 97, figure 17). Such a framework could put pragmatist neuroscience in the same position as predictive coding regarding resonance (see below).


The case against DFT models is a little bit subtler. DFT models are not explicitly modular, and they are probably not necessarily modular. However, a review of recent DFT models for different cognitive functions (e.g., object recognition, in Faubel and Schöner 2010; spatial memory and spatial language, in Lipinsky et al. 2006; or speech motor planning, in Brady 2009) reveals that all of them are composed of functionally dedicated networks, which entails some degree of modularity and weighs against the idea of the explanation of psychological processes at the ecological scale. Importantly, the allegedly modular idiosyncrasy of these models is likely due to a conscious decision. DFT was first developed as a model of infant reaching in the A-Not-B Error (Spencer et al. 2001). Later, Spencer and Schöner made it into a control structure (Schöner et al. 2016). It went from being a model of brain-body-environment systems to a model of brains and of the modular control of different cognitive tasks. Thus, as in the case of pragmatist neuroscience, modularity makes DFT inappropriate as a framework for a theory of resonance.

As a last note on dynamic systems architectures, it is worth reinforcing the idea that they could eventually be good candidates to be a framework for resonance, or even that they could provide some insights to improve a theory of resonance. Dynamic systems architectures use tools from dynamic systems theory to account for the dynamics of the CNS, which I think is the correct move (see Chapter 5). It is true that, at the moment, they hold on to some conceptions regarding the structural and functional organization of the brain that make them ill-equipped for resonance. But, unlike ACT-R or SPA, it is possible for dynamic systems architectures to be interpreted so as to fully meet the requirements posited for an ecological theory of resonance.

The last cognitive architecture I am going to review in this section is Predictive Processing, which is a trendy cognitive architecture based on Bayesian probabilistic generative models implemented on bi-directional hierarchical networks (see Friston and Kiebel 2009, Friston 2010, Clark 2015). Predictive processing is built on two main concepts: the free-energy principle and predictive coding. The free-energy principle states that “any self-organizing system that is at equilibrium with its environment must minimize its free energy.” (Friston 2010: 127). Free energy is a concept from statistical physics that is already in use, for example, in machine learning (e.g., Hinton and Zemel 1994). In thermodynamics, free energy measures the amount of energy available for useful work. Once in the sphere of the cognitive, “[free energy] emerges as the difference between the way the world is represented as being, and the way it actually is.” (Clark 2013: 186). The relation between the two senses of free energy has to do with the fit between the representation of the world and the world itself: the better the fit, the more of the representational system’s energetic resources are put to work, and so the less free energy remains available in the system.45

The motivation for putting forward the free-energy principle is that a cognitive system (e.g., the brain), as a biological and self-organized system, aims to maintain its internal states while facing an ever-changing environment. Put simply, according to Friston (2010; see also Friston et al. 2006), a way to achieve such an aim is to be able to reduce the surprise this system faces, and minimizing free energy accounts for that:

A system cannot know whether its sensations are surprising and could not avoid them even if it did know. This is where free energy comes in: free energy is an upper bound on surprise, which means that if agents minimize free energy, they implicitly minimize surprise. (Friston 2010: 128).

45 Andy Clark—I think completely consciously—simplifies the meaning of free energy as it is used in the field of the cognitive sciences. In a more precise definition, free energy, or the difference between the representation of the world and the actual world, might be characterized in the following terms: “Free energy can be evaluated by an agent because the energy is the surprise about the joint occurrence of sensations and their perceived causes, whereas the entropy is simply that of the agent’s own recognition density.” (Friston 2010: 129; emphasis is mine), where the recognition density is a probability distribution over the causes of the sensory input. In this sense, the discrepancy captured by free energy is a discrepancy between the causes of sensations in the actual world and the causes attributed to sensations in our representations of the world. The lower the free energy, the better the internal model (representation) of the system grasps the causes of sensations. The free-energy principle, at the end of the day, states that the job of cognitive systems is to minimize such a discrepancy, that is, to enhance such a grasping.

In the case of perception, for example, what a cognitive system must do, according to the free-energy principle, is minimize the surprise it faces in sensory stimulation. Namely, it must reduce free energy, that is, the discrepancy between the way it internally represents the world and the way the world actually is. The two ways to do so are changing its internal states or changing the pattern of stimulation itself through actions (see Figure 8).
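To make explicit why minimizing free energy implicitly minimizes surprise (the point of the Friston quotation above), it may help to write down the standard decomposition behind the principle. The notation here is mine rather than a quotation: s̃ stands for the sensory states, ϑ for their hidden environmental causes, and q(ϑ | μ) for the recognition density encoded by the internal states μ.

\[
F(\tilde{s}, \mu) \;=\; -\ln p(\tilde{s}) \;+\; D_{\mathrm{KL}}\!\left[\, q(\vartheta \mid \mu) \,\middle\|\, p(\vartheta \mid \tilde{s}) \,\right] \;\;\geq\;\; -\ln p(\tilde{s})
\]

Since the Kullback-Leibler term is never negative, free energy bounds surprise from above, and lowering it, whether by changing μ (perception) or by changing s̃ through action, brings the recognition density closer to the true posterior over the causes of sensations.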

Figure 8. This figure depicts, somewhat abstractly, the different elements of a system that works under the free-energy principle. On the one hand, changing states of the environment, x̃, are inaccessible to the agent, and are constituted by previous environmental states, the actions of the agent, and other variables that are not relevant for this explanation (external states). These states generate sensory states, s̃, that are accessible to the agent (sensations). On the other hand, states of the agent, μ, and actions, a, minimize free energy, ƒ(s̃, μ)—internal states and action or control signals. Thus, the system changes its action patterns, or its internal states, or both, to reduce the discrepancy between its internal states, μ, and sensations, s̃. In Clark’s wording, the system reduces the difference between the way the world is represented as being and the way it actually is. (From Friston 2010: 128, figure 1).


The immediate question that arises once the free-energy principle is stated is a question about what kind of system is able to minimize surprise/free energy.46 In the literature on predictive processing, such a system is a generative model that functions as a probabilistic engine based on Bayes’ theorem. This is known as the Bayesian brain hypothesis (see Knill and Pouget 2004, Friston et al. 2009) and may be described as follows:

[T]he brain is an inference machine that actively predicts and explains its sensations. Central to this hypothesis is a probabilistic model that can generate predictions, against which sensory samples are tested to update beliefs about their causes. (Friston 2010: 129).

The generative model of the Bayesian brain is implemented in a bi-directional hierarchical structure (e.g., a bi-directional hierarchical network architecture) in which every level generates top-down priors (guesses, hypotheses, predictions) that aim to predict the bottom-up input arriving from lower levels. The matching error—i.e., the surprise, the free energy—between the priors and the inputs is reduced on-line by the process known as predictive coding. Put simply, predictive coding is a process by which, at every level of the hierarchical system, a prediction error is generated and passed forward to drive the units at upper levels and help them optimize their priors/predictions. Sensations, which constitute the main input, are at the bottom of the system, so the whole hierarchy is informed by sensory data. Through predictive coding, priors are optimized at every level so as to anticipate the sensory data they need to match. This is the basic way in which the system meets the free-energy principle.
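As a toy illustration of this error-correction loop, the sketch below implements a single-level, linear version of predictive coding: an internal estimate generates a top-down prediction of the input, and the bottom-up prediction error drives gradient updates on that estimate until prediction and sensation match. The generative matrix, the learning rate, and the absence of precision-weighting and of a genuine hierarchy are all simplifying assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.5, (8, 3))        # assumed (fixed) generative mapping: causes -> sensations
true_cause = np.array([1.0, -0.5, 2.0])
sensation = W @ true_cause + rng.normal(0, 0.05, 8)   # noisy sensory input

mu = np.zeros(3)                       # internal estimate of the hidden causes (the "prior")
for _ in range(200):
    prediction = W @ mu                # top-down prediction of the sensory input
    error = sensation - prediction     # bottom-up prediction error
    mu += 0.05 * (W.T @ error)         # update beliefs so as to reduce the error

print("estimated cause:", np.round(mu, 2))
print("true cause:     ", true_cause)
print("remaining squared error:", round(float(error @ error), 4))
```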

46 I will continue using the example of perception and action, but the model can be generalized to other cognitive functions.


At this point, we have a full picture of predictive processing as a cognitive architecture. It consists of a bi-directional hierarchical inference machine (e.g., the brain) that is composed of generative engines that produce top-down predictions of bottom-up sensory data at every level of the hierarchy. Such predictions are optimized, following the free-energy principle instantiated as the minimization of surprise in sensations, by a process of reduction of the matching error between the predictions themselves and the sensory input. This process is known as predictive coding. This is the way in which the cognitive system generates and improves its representation of the world and, thus, its cognitive functions. However, predictive processing, in its current form, does not seem to be an adequate structural and organizational principle of the CNS for a theory of resonance. As in the case of dynamic systems architectures, the reason might not be obvious at first glance.

That predictive processing is not equipped to support a theory of resonance is not clear from the beginning because, prima facie, the free-energy principle is not incompatible with the main tenets of ecological psychology.47 I want to stress this point because, although I am not going to offer a deep analysis of the relation between the two topics, it is the reason why I restrict the inability of predictive processing to be the framework for resonance to its current formulation, while I leave open the possibility of eventually using the free-energy principle to explain resonance. There are two reasons to hold such a position. First, I do think the free-energy principle might be useful to ecological psychology. For example, it might be a way to fine-tune resonance to the right environmental variables. And second, there are works in ecological psychology that try to relate its main tenets to thermodynamic principles, so the inclusion of the free-energy principle would not be a rara avis at all (Swenson and Turvey 1991, Swenson 1992, Swenson 1997). Given this, it seems clear that the problem with predictive processing is not the free-energy principle, but the current way it is implemented in terms of cognitive architecture; namely, the appeal to Bayesian brains as mechanisms through which behavior may be explained in terms of the minimization of free energy.

47 And I am not alone in this position; see Bruineberg and Rietveld (2014) or Bruineberg, Kiverstein, and Rietveld (2016).

Bayesian brains are incompatible with ecological psychology,48 because of the role of distribution and, especially, computation in the Bayesian brain theory. In terms of distribution, the hierarchical architecture of predictive processing is similar to SPA, but since the predictive processing architecture has no basic units tied to specific cognitive functions, its degree of distribution is higher than SPA’s. However, predictive processing is still brain-centered and, thus, incompatible with the ecological scale for psychological explanation. Such brain-centrism is obvious in some authors, like Hohwy (2016):

[Predictive Error Minimization] PEM should make us resist conceptions of this relation on which the mind is in some fundamental way open or porous to the world, or on which it is in some strong sense embodied, extended or enactive. Instead, PEM reveals the mind to be inferentially secluded from the world, it seems to be more neurocentrically skull-bound than embodied or extended, and action itself is more an inferential process on sensory input than an enactive coupling with the body and environment. (p. 259).

Other authors—like Andy Clark (2015)—want to make predictive processing the neuro-computational partner of embodied cognition, as we have seen. However, it is difficult to see how they escape the instrumental use of the body when predictive coding is neurally implemented. Even when the reduction of surprise is carried out by a change in an action pattern, this change is centrally governed by the brain (see Figure 8 and its Action and Control Signals).

48 Some defenders of Bayesian brains strongly disagree with this claim and have tried to conflate the perspectives. See, for example, Friston et al. (2012) or Clark (2015; Chapter 6).

The apparent inability of Bayesian brains to meet the distribution constraint might be enough to make them ill-equipped to support a theory of resonance. However, it is fair to say that they are highly distributed models, so it might be granted that they somehow meet the constraint—it is possible that a purely distributed model is not achievable. Nevertheless, although Bayesian brains are acceptable in terms of distribution, they cannot be accepted in terms of computation. Bayesian brains and, therefore, predictive processing architectures are computational models. Although the free-energy principle does not necessarily entail computation, predictive coding does. The whole activity of generating predictions, comparing them with sensory data, generating prediction error, and using it to optimize predictions is a computational activity. In this sense, there is little difference between a Bayesian brain and any other bi-directional hierarchical computational network beyond the kind of rules the former applies: all are based on a form of parallel computation in which top-down information flows interact with bottom-up information flows following a set of computational rules—in the case of Bayesian brains, such rules are also probabilistic. For this reason, predictive processing architectures, like the three previous ones, cannot be the framework for the ecological theory of resonance.

3.2 Neural Reuse

None of the previous four cognitive architectures meets the requirements of distribution and anti-computation to be the framework for an ecological theory of resonance. Given this situation, a different cognitive architecture, based on a completely different structural and organizational principle, is necessary to support resonance while staying true to the main tenets of ecological psychology. I want to propose Anderson’s version of neural reuse as the principle of cognitive architecture (Anderson 2010, 2014, 2016). There are three main reasons for this choice. First, neural reuse allows for a degree of delocalization regarding cognitive functions that meets the requirements for an ecologically oriented psychology. Second, Anderson’s current interpretation of neural reuse is plausibly anti-computational. And third, proposals of neural reuse show a strong theoretical parallelism with ecological psychology although their scales of description are different. This fact allows for explanations at different scales—e.g., the ecological scale and the neural (intra-organismic) scale—using the same general strategies, as required by most ecological psychologists.

3.2.1 What Is Neural Reuse?

Neural reuse is an organizational principle for the functions of the brain. Its core tenet is that different parts of the CNS are used and reused to accomplish different functions at multiple spatial scales, but, crucially, at the scale of neural networks or neural regions. More concisely, neural networks that came first in terms of evolution (e.g., networks developed for basic perceptual tasks or for control of action) are reused in functions that came evolutionarily later (e.g., language or decision making). The phenomenon of neural reuse, according to Anderson, is achieved by the soft-assembling of different neural networks into different functional systems as required by the cognitive task:

Neural reuse theories generally accept… that, rather than developing new structures de novo, resource constraints and efficiency considerations dictate that whenever possible neural, behavioral, and environmental resources should have been reused and redeployed in support of any newly emerging cognitive capacities. (Anderson 2014: 7).


A set of experimental results at different neural scales supports the idea of neural reuse. For example, meta-analyses of more than 2,000 functional neuroimaging experiments were conducted in different studies to explore the functional diversity of different regions of the brain and their connectivity patterns (see Anderson et al. 2013, Anderson and Penner-Wilger 2013). In these studies, two different hypotheses were tested. First, Anderson et al. (2013) hypothesized that, if neural reuse is the accurate organizational principle for the functions of the brain, the results of the meta-analysis should show an important degree of functional diversity in the different regions. They used a measurement of ecological diversity to approach this study (Anderson 2016: 2), and the first hypothesis was corroborated by the results of their analysis (see Figure 9). Second, Anderson and Penner-Wilger (2013) hypothesized that, if different neural regions of the brain are connected to different functional partners depending on the functional circumstances, different sets of regions should be more likely to be active at the same time in different tasks. This second hypothesis was also corroborated, and the simultaneous activation of different sets of neural regions was found for different tasks of attention, emotion, and semantics, among others (Anderson et al. 2016: 4). Besides these experimental results, other studies at different neural scales point to the plausibility of neural reuse as the organizational principle for the functions of the brain—e.g., studies on the ubiquity of neuromodulation at different scales (Bargmann 2012; Hermans et al. 2011), or studies on mixed selectivity in the activation of single neurons (Cisek 2007, Rigotti et al. 2013).


Figure 9. These three functional fingerprints exemplify the experimental results that point to the idea of the functional diversity of neural regions. Across 11 task categories, the functional fingerprints represent the relative amount of activity for three voxels from three different regions: left anterior insula (aIns-L), left thalamus (Th-L), and left auditory cortex (AudL), respectively. All of the voxels present some degree of functional diversity, although it is bigger in aIns-L (.88) and Th-L (.76) than in AudL (.41). This result is compatible with neural reuse, although the voxel from AudL seems to be relatively specialized: functional diversity is not incompatible with some neural regions having tendencies toward specific tasks (see below). (From Anderson 2016: 3, figure 2).
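To make the notion of a functional fingerprint and its diversity more concrete, the sketch below computes a normalized Shannon-entropy index over a region's activation profile across task categories. This is only a stand-in for the ecology-inspired diversity measure Anderson actually reports, and the two profiles are invented for illustration.

```python
import numpy as np

def diversity(fingerprint):
    # Normalized Shannon entropy of the activation profile: 1 means the region
    # is equally active across all task categories, 0 means it responds to one only.
    p = np.asarray(fingerprint, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(fingerprint)))

# Invented activation profiles over 11 task categories (cf. Figure 9)
broadly_active   = [5, 4, 6, 5, 4, 5, 6, 5, 4, 5, 5]
more_specialized = [1, 1, 30, 1, 1, 1, 1, 1, 1, 1, 1]

print("diverse region:    ", round(diversity(broadly_active), 2))    # close to 1
print("specialized region:", round(diversity(more_specialized), 2))  # noticeably lower
```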

With neural reuse as an organizational principle, Anderson (2014, 2016) suggests a cognitive architecture based on plastic networks. The neural reuse architecture consists of a set of interactive neural networks (or regions), with no clear boundaries between them (see Pessoa 2016), that actively soft-assemble into new patterns of connectivity, or re-assemble into existing connectivity patterns, to accomplish different and/or new tasks (interactive differentiation and search; see Anderson 2016: 5). The way in which these networks are able to perform such an active search is not clear within Anderson’s framework, but it is not especially important at this point.49 The relevant issue regarding neural reuse architectures is whether the requirements of distribution and anti-computation are met by them. Let’s start with the distribution requirement.

As an organizational principle, neural reuse stands explicitly against componential (aka modular) accounts of the structural and functional organization of the brain. According to Anderson (2010, 2014, 2016), contemporary cognitive neuroscience is based on a strongly componential version of the computational approach to cognitive science (see Edelman 2008; Gallistel and King 2009) and so it is entangled with the idea of the brain as a group of different components or modules, each one of them devoted to performing a specific function (see Anderson 2014: xix-xx; see also Figure 10).

Figure 10. Comparison between three different functional organizations in the brain. In all cases, there are 6 structures (boxes 1 to 6) and two different functions represented by the two lines (solid and dashed). The first organization (a) is modularity. Structures 1, 2, and 3 combine to support one task and 2, 3, 4, 5, and 6 to support another task. The two combinations have residual, if not non-existent, interaction, though. This is an organization based on segregated modules, as proposed by the componential version of computation. The second organization (b) is holism, which is typical of connectionist models, for example. In this organization, all the structures participate in both tasks, but their pattern of connectivity is always the same. Finally, the third organization (c) is neural reuse. Here, most of the structures participate in both tasks (4 does not participate in the solid-line task), but what changes dramatically is their pattern of (functional) connectivity depending on the task. (From Anderson 2014: 8, fig. 1.1).

49 In Chapter 5, I will propose a plausible mechanism for active search based on concepts directly borrowed from the literature on dynamic systems (metastability) and self-organized systems (self-organized criticality).

Contrary to modular accounts of brain functions, neural reuse architectures are based on a plastic structural and functional organization of the CNS in which different neural networks combine differently to perform distinct functions depending on the constraints of the cognitive task at play (Figure 10). Such an organization allows for a highly delocalized account of the different brain functions: they are not realized by specific components (e.g., specific neural networks) of the brain, but by neural regions interacting in novel ways and finding new patterns of functional connectivity depending on the demands of each task. Given this, neural reuse entails a highly distributed cognitive architecture in at least two senses. First, it is distributed in the same sense SPA or Predictive Processing are distributed; namely, neural reuse architectures are distributed architectures insofar as they are built up in terms of networks. And second, neural reuse is distributed in a deeper sense: it is not just that many components of a network, as opposed to a single component or module, account for a cognitive function. The deeper sense of distribution in neural reuse architectures incorporates the idea that every component may be part of different networks and, thus, of different brain functions. To put it in a different way, the distribution of neural reuse architectures is such that brain functions do not depend on specific networks or modules, but on the patterns of interaction between different neural regions. In this sense, to get back to the football example at the beginning of the chapter, neural reuse is a kind of total football in which different components can play different functional roles depending on the demands of the task. This is so because the key to performing an action is not in the features of each individual component of the architecture, but in the way in which they relate to each other.


These architectures entail a higher degree of distribution inasmuch as the very idea of a specific component or basic unit dedicated to a brain or cognitive function has no place within them. Neural reuse suggests that neural regions have their own personalities (see Anderson 2014: 113 & ff.), i.e., that neural regions have tendencies to be active in specific situations. However, these tendencies cannot be understood in componential or modular terms. The functions of the CNS are defined in terms of the interaction of different neural regions given the task an organism is involved in, and not in terms of the intrinsic activities of those neural regions. This interactive nature of neural reuse architectures is what makes them compatible with psychological explanation at the ecological scale (more on this in §3.2.2), but for the purposes of this section, the takeaway is that neural reuse architectures are highly distributed architectures.
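The contrast drawn in Figure 10 can also be rendered schematically as mappings from tasks to participating structures and wiring patterns; the entries below are read loosely off the figure caption and are purely illustrative, but they show where the explanatory weight falls under neural reuse: not on which structures are used, since most recur across tasks, but on how their functional connectivity changes from task to task.

```python
# Schematic rendering of the three organizations in Figure 10 (illustrative only)
modular = {
    "task_solid":  {"structures": {1, 2, 3},          "wiring": "fixed, within-module"},
    "task_dashed": {"structures": {2, 3, 4, 5, 6},    "wiring": "fixed, within-module"},
}
holistic = {
    "task_solid":  {"structures": {1, 2, 3, 4, 5, 6}, "wiring": "identical for every task"},
    "task_dashed": {"structures": {1, 2, 3, 4, 5, 6}, "wiring": "identical for every task"},
}
neural_reuse = {
    "task_solid":  {"structures": {1, 2, 3, 5, 6},    "wiring": "task-specific pattern A"},
    "task_dashed": {"structures": {1, 2, 3, 4, 5, 6}, "wiring": "task-specific pattern B"},
}

# Under neural reuse the same structures largely recur; what changes is the wiring.
reused = neural_reuse["task_solid"]["structures"] & neural_reuse["task_dashed"]["structures"]
print("structures reused across tasks:", sorted(reused))
print("what changes across tasks:", neural_reuse["task_solid"]["wiring"],
      "->", neural_reuse["task_dashed"]["wiring"])
```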

Regarding the anti-computational requirement for an ecological theory of resonance, it is noteworthy that neural reuse as presented by Anderson since, at least, After Phrenology (2014), is explicitly anti-computational. In general terms, the brain is described as a mediator between actions of the organism and the environmental conditions of these actions:

Here I try to develop a picture of the brain as a complex causal mediator of the relationship between body and environment… What emerges in the course of these chapters is that the brain is best understood as first and foremost an action controller, responsible for managing the values of salient organism-environment relationships. (p. xxii)

There are two important considerations to make regarding this quote. First, interestingly enough, Anderson’s description of the function of the brain in psychological processes is quite similar to Gibson’s (see §2.2). What is more, this similarity is bidirectional, and it is possible that Gibson also had in mind an idea close enough to neural reuse for the functional organization and the role of the brain in perception, action, and cognition. In his Gibson biography, Edward Reed (1988) refers to and quotes some unpublished works in the Gibson archive at Cornell University. Reed writes:

Considering the capacity of the nervous system to adjust to stimulation in many different ways, Gibson hypothesized that ‘a given set of neurons is equipotential for various different functions in perception and behavior. The same neuron may be excited for different uses at different times. [Therefore] neurons, nerves, and parts of the brain have a vicarious function. A nerve cell is not the same unit in a different combination of nerve cells.’ (Ibid.)” (Reed 1988: 224).

Here, on the one hand, Gibson seems to be proposing the functional diversity of different components of the brain, such as sets of neurons; namely, he is aligning himself with one of the more basic postulates of neural reuse.50 On the other hand, and although the scale of Gibson’s proposal (i.e., neurons, nerve cells) is different from Anderson’s, both speak in favor of a functional description of neural activities. The similarity between the two hypotheses is astonishing and reinforces the parallelism between ecological psychology and neural reuse that I will try to defend in what follows (see §3.2.2).

The second consideration concerns more directly the anti-computational status of neural reuse architectures. I want to clearly state that I do not deny the possibility of finding a way to interpret Anderson’s proposals in After Phrenology as computational. Actually, it seems Anderson himself supported this idea in previous works (e.g., Anderson 2010). However, I think that, at least in his book and later works, there are unambiguous claims explicitly pointing to anti-computationalism, such as:

50 To avoid anachronisms, it might be said that he aligns himself with neural reuse-like ideas.


[I]t is worth an initial if brief reflection on an important disanalogy between the brain and a computer: whereas a computer is typically understood as a device that carries out a specific instruction set on (and in response to) inputs, brain responses to stimuli are characterized instead by specific deviations from intrinsic dynamics. (Anderson 2014: xx).

Or, a little bit later:

My current approach to this problem… is to quantify the functional properties of neural assemblies in a multidimensional manner, in terms of their tendency to respond across a range of circumstances—that is, in terms of their dispositional tendencies—rather than trying to characterize their activities in terms of computational or information-processing operations. (Anderson 2014: xxii-xxii).

These claims are unambiguous enough to take neural reuse as an explicitly anti-computational functional organization of the brain. Anderson favors an understanding of brain functions in terms of the brain’s changing dynamics and not in terms of any kind of information-processing. As in the case of Freeman’s pragmatist neuroscience, the tools and concepts to understand brain activities in neural reuse architectures (e.g., interactive differentiation, active search, or multiple selectivity) come from dynamic systems theory and complexity science, and not from computations or symbol/formal systems.

At this point, I think it is pretty clear that the neural reuse architecture is in the best position to serve as a framework for an ecological theory of resonance insofar as it meets the two requirements for doing so (distribution and anti-computationalism). For this reason, the neural reuse architecture is a better candidate than ACT-R, SPA, Predictive Processing, or dynamic systems architectures to partner with ecological psychology at the scale of the CNS. However, there are further reasons to connect the theories. I analyze them in the next section.

3.2.2 Neural Reuse and Ecological Psychology

Neural reuse architecture focuses on delocalization, interactions between neural regions, and the high sensitivity of patterns of functional connectivity to the demands of different cognitive tasks. In this sense, this architecture, based on neural reuse as an organizational principle, offers a way to explain the intra-organismic scale in the same terms (i.e., with the same language51) that ecological psychology uses to explain the ecological scale. When the focus is on interactions and soft-assembled structures enabling the agent to perform a concrete action, and not on mechanisms and concrete areas of the brain, the activity of the CNS becomes easier to explain in terms compatible with the main tenets of the Gibsonian theory.

As we have seen, the requirements of the primacy of the ecological scale in psychological explanation (distribution) and the rejection of computation are met by neural reuse, and thus there is a basic level of compatibility with ecological psychology. However, the compatibility between the two does not stop here: neural reuse and ecological psychology also show strong structural and theoretical parallels. Previously I have analyzed several correlates of ecological psychology: interactivity, the primacy of action, the interplay between complexity and redundancy, the appeal to synergies (coordinative structures), and the use of dynamic systems as an explanatory tool (see §2.1). It is my contention that, as a theory of the structural and functional organization of the CNS, the neural reuse architecture shares these features with ecological psychology. It can be claimed that neural reuse is the ecological approach to brain functioning.52

Just like ecological psychology as a framework for agent-environment interactions, neural reuse is interactive and concerned with the complexity and redundancy problems at the neural scale. First, CNS functions are described in terms of the patterns of connectivity of brain regions and not in terms of the latter’s intrinsic activities. Namely, the way the brain regions interact is the crucial feature of the cognitive architecture to explain brain functions. Second, the CNS offers many different ways to soft-assemble a functional system (redundancy) in order to accomplish any given different task (complexity): there are many neural regions and many connections between them, but only a subset of these constitute a soft-assembled functional system. The question is how specific patterns of connectivity are selected. The neural reuse architecture’s way of dealing with this complexity/redundancy problem resembles the way ecological psychology deals with it at the organism-environment scale. Patterns of connectivity between different neural regions are task-dependent soft-assemblies. In other words, they are synergies or coordinative structures: self-constrained organizations that accomplish some task-specific functional role. Importantly, these synergies are constituted by means of interactive differentiation and active search; namely, they are generated and soft-assembled by a kind of action. This is the sense in which the way the neural reuse architecture faces the complexity/redundancy problem is based on the same resources (action and synergies) that ecological psychology uses to solve the same problem. Finally, the last parallel between the two theories is that both take dynamic systems theory (DST) as the source of the correct tools and concepts to analyze interactions among the components of their respective scale.

51 This aspect is crucial regarding NSF’s grand challenges for the sciences of the mind. More on this in the Coda. 52 This idea is not a rara avis in the ecological literature; see de Wit et al. 2015.


The parallels between neural reuse and ecological psychology are not trivial, but have deep implications for our understanding of the relation between the ecological and the neural scales. As noted before, it can be claimed that neural reuse is the ecological approach to brain functioning. The particularity of this ecological approach is its scale: instead of the scale of the organism-environment system, the ecology targeted by neural reuse is that of CNS systems. The brain may then be seen as an ecosystem with its own components (e.g., the environment of a neuron is other neurons; the environment of a neural region is other neural regions; and so on), and this is why its functioning may be grasped by an ecological approach. Furthermore, the idea of the ecological approach applied at different scales points to an understanding of the CNS as a specific scale of the whole organism-environment system. There are not two systems at two different levels that are explained in different terms, but only one whole system (the organism-environment system) that can be analyzed at different scales (e.g., neural, motor, kinematic, etc.) by appeal to the same ecological paradigm. In this sense, the CNS is just a chunk of the organism-environment system, not a different entity that operates at a different level and according to different principles. The re-conceptualization of the brain in these terms allows for the full compatibility between the neural reuse architecture and ecological psychology and, also, for a specific conceptualization of resonance.

3.3 Resonance within Neural Reuse

As we have seen, choosing neural reuse as the framework for the study of the activity at the intra-organismic (neural) scale has two main consequences. First, it allows for the elaboration of a description of the different scales relevant for cognitive phenomena—the ecological and neural scales—under compatible tenets. Second, it allows for a re-definition of resonance in more precise terms. Given the structural and theoretical parallels between ecological psychology and neural reuse, and given that both the ecological and the CNS scales are proposed to be explained using DST, we can understand resonance by describing the relationship between the dynamic system at the neural scale and the dynamic system at the ecological scale. In other words, because both ecological psychology and neural reuse use DST as an explanatory tool, the dynamics of a given agent-environment interaction are captured by a dynamic system, and the respective intra-organismic interaction in the same event is captured by another dynamic system. But resonance is what is going on at the intra-organismic scale with regard to the ecological scale, so a complete explanation of resonance also requires showing the specific kind of relation between the two dynamic systems. In other words, a model to explain resonance must explain how the dynamic systems at the two scales are coupled.

My core claim regarding the coupling between the dynamic systems at the two scales is that the ecological variable we take to be the main variable of our dynamic systems model at the ecological scale (i.e., the collective variable, in DST jargon) is also the main variable of the dynamic systems model at the intra-organismic scale. So, for example, if the interaction at the ecological scale is described by using the variable for time-to-contact (τ), the moment of inertia, or the relative phase of two oscillatory components of a system (ϕ), the interaction at the intra-organismic scale must be explained by appealing to the same variable. This can be re-stated in several ways: the two dynamic systems are constrained by the same ecological variable, the dynamic system at the intra-organismic scale reverberates to the ecological variable, the ecological variable defines the coupling between the two dynamic systems, etc. The idea is a simple one, though, and it allows for the final conceptualization of the concept of resonance: to explain resonance is to account for the coupling of the dynamic systems at the ecological and intra-organismic scales in terms of the ecological variable that constrains a given agent-environment interaction.

Notice the importance of neural reuse as the framework for this idea. In principle, for example, pragmatist neuroscience or Dynamic Field Theory could fit this definition of resonance insofar as both use DST to account for the activities of the CNS. However, both in pragmatist neuroscience and in Dynamic Field Theory, the activities of the CNS are accounted for by the dynamics at the neural scale; namely, the intrinsic dynamics of the system at the neural scale are the realizers of the cognitive function. In the case of resonance, by contrast, both the neural and the ecological scales, and their interactions (coupling), are relevant for the explanation of the function.53 Such a fact cannot be grasped if the neural scale is described in terms of pragmatist neuroscience or Dynamic Field Theory, as they rule out the importance of the ecological scale for the explanation (they do not meet the distribution requirement; see §3.1.2). In order to make both the CNS and the ecological scale relevant for the explanation of the CNS function, neural reuse is the adequate framework.

The conception of resonance just proposed may be described in terms of an abstract model for the sake of clarity and precision. Suppose a system performing some (psychological) task. We can define the dynamics of such a system at two different scales:

A—E; the agent-environment (aka ecological) scale.

N; the intra-organismic (CNS) scale.

The dynamics at these two scales (i.e., their change over time) may be defined as two functions of some variable:

A—E_D = ƒ(ψ, t)

N_D = ƒ(χ, t)

where ψ is the ecological variable and χ is the variable that defines the intra-organismic dynamics. Now, when N resonates to A—E, these two functions are coupled in terms of the ecological variable, so χ may be defined as:

χ = kψ

where k stands for the coupling function. The abstract model does not detail the characteristics of such a function, so it might be a simple linear constant or a more complex function. Given that, our abstract model of resonance for some (psychological) task would be:

A—E_D = ƒ(ψ, t)

N_D = ƒ(kψ, t)

Now both functions—i.e., both dynamic systems—are coupled in terms of the ecological variable. It is worth noting that although the same variable ψ is operating at both scales, there is no trace of representationalism in this model. For ψ to be represented at the neural scale, the model should describe an external system that generates some informational variable that, eventually, is internalized by another system. In this case, however, there is only one system that is analyzed at two different scales (ecological and neural), so such an internalization of an informational variable does not occur. The same variable constrains the system and this is reflected at different scales, but none of the scales represents the variable.

53 By this statement, I am not giving up on the primacy of the ecological scale for the psychological explanation (see Chapter 5), but I am just adding the story regarding the intra-organismic scale to complement the explanation at the ecological one.
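A minimal numerical rendering of this abstract model, under the assumption that the coupling function k is a simple first-order gain, might look as follows; the particular trajectory chosen for ψ and the relaxation dynamics for the intra-organismic variable are arbitrary, illustrative choices rather than an empirical model.

```python
import numpy as np

dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)

# Ecological scale: psi as the collective variable of the agent-environment
# interaction (e.g., a slowly drifting relative phase); purely illustrative.
psi = np.pi * np.sin(0.3 * t) * np.exp(-0.05 * t)

# Intra-organismic scale: chi relaxes toward k * psi, i.e., the neural-scale
# dynamics are constrained by the same ecological variable through coupling k.
k = 0.8
chi = np.zeros_like(psi)
for i in range(1, len(t)):
    chi[i] = chi[i - 1] + dt * 5.0 * (k * psi[i - 1] - chi[i - 1])

# Resonance, on this model, shows up as strong covariation between the scales.
r = np.corrcoef(psi, chi)[0, 1]
print(f"correlation between ecological and intra-organismic variables: {r:.3f}")
```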

Of course, the model proposed is just an abstract model, so it does not refer to any concrete or empirical situation. However, some consequences follow from it. One of them is that ecological variables must be found in CNS dynamics if the model is plausible. That is, ψ should be found in the dynamics of the intra-organismic scale when it is present in the dynamics of the ecological scale. I address this consequence in the next chapter.


Chapter 4: On Plausibility54

In a famous passage of the preface to his book on Leibniz, Bertrand Russell claimed that he felt “—as many others have felt—that the Monadology was a kind of fantastic fairy tale, coherent perhaps, but wholly arbitrary.” (Russell 1990: vii). Although Russell proclaimed his admiration for Leibniz and his philosophical system, the feeling of disconnection between the Monadology and reality was always somewhat present for him. The main reason for this feeling is that, even though the Monadology was coherent in itself and with the rest of Leibniz’s works,55 the content of the proposal was absolutely detached from the knowledge and, especially, the best science of Russell’s times.

The situation described by Russell regarding his reading of the Monadology may serve as an example of a more general issue faced by some theoretical proposals both within philosophy and within some sciences. Sometimes, perfectly sound and coherent theoretical proposals may seem a fantastic fairy tale due to their detachment from the relevant scientific knowledge in the field. This detachment may occur in two different ways. On the one hand, the theoretical proposal may ignore—consciously or unconsciously—empirical findings. This fact might lead to an incompatibility between some of the theoretical claims and what is empirically plausible. An example of this kind of detachment is Hegel’s argument about the possible number of planets in the Solar system, whether he knew about the already discovered 8th planet (conscious ignorance) or did not know about it (unconscious ignorance). On the other hand, the theoretical proposal may be such that it does not fit within the theoretical and empirical coordinates of the scientific knowledge already in place. Intelligent design, for instance, suffers from this kind of detachment, as the postulation of a supernatural cause for the existence of species in their current form is not compatible with the scientific coordinates of biology. In this chapter, my aim is to show that my theory of resonance does not suffer from this second form of detachment.56

54 Some parts of this chapter already appeared as an article in Minds and Machines (see Raja 2018). 55 Russell (1900) claimed that, after a thorough study of Leibniz’s works—especially the Discours de Métaphysique—it became clear to him that the whole philosophical system of the German philosopher was based on a few elementary principles and was developed by neat logical deduction. Thus, the Monadology, even as a fairy tale, was completely coherent with the whole system (p. viii & ff.).

The idea of resonance as an ecological variable constraining the behavior of an intra-organismic system just as it constrains the organism-environment system might be a pretty picturesque one for some readers, both from the cognitivist and the ecological traditions. Thus, both an analysis of the place of the mechanism of resonance within the sciences of the mind (especially neuroscience) and a justification of its plausibility are needed. On the one hand, I will address the place of resonance within the sciences of the mind by reviewing some of the usages of the concept in different models, disciplines, and contexts: motor resonance (mirror neurons), resonance in dynamic systems theory and artificial intelligence, and adaptive resonance theory (Grossberg 2013). These usages will help me to frame resonance within experimental psychology and neuroscience, but they will also serve as instances of mechanisms of resonance—already in use in those sciences—that are closely related, if not identical, to the resonance process I am putting forward in this story. On the other hand, the plausibility of resonance in the terms I am proposing here may be addressed in two ways: regarding neural dynamics themselves and regarding the models we use to explain them. Put simply, an ecological variable that already constrains the organism-environment interaction also constrains the behavior of an intra-organismic system when the dynamics of the latter are modulated by the influence of the former. Such a modulation may be found at the neural scale when the variation in the recorded activity of single neurons or populations of neurons is strongly correlated with the variation of the ecological variable at the ecological scale. This fact would account for the biological plausibility. Also, such a modulation may be found at a more abstract scale when the models used to explain the dynamics of some intra-organismic scale include the ecological variable as one of their parameters and are successful at predicting the behavior of the target neural system. This fact would account for the explanatory plausibility. Both biological and explanatory plausibility are equally relevant for a successful theory of resonance. Resonance must be, first, possible at the neurophysiological scale. That is, the CNS must be capable of exhibiting such behavior—i.e., we have to be able to measure changes in the target behavior compatible with our depiction of resonance. And second, our models for the behavior of intra-organismic systems must be able to include the relevant ecological variable that constrains the target behavior. These models may hold a degree of abstraction such that their biological plausibility may be brought into question. However, these are the scientific tools we currently have, and as such they should offer resources for developing an account of resonance. These two plausibility requirements would be met if examples of CNS dynamics and models constrained by an ecological variable are found.

56 I do not explicitly address the first form of detachment because, first, if it is a matter of conscious ignorance of scientific knowledge, I take it to be a matter of a dismissive attitude towards science. This is not the case for the theory of resonance. And second, if it is due to unconscious ignorance of scientific knowledge, I hope it will be solved by other means (i.e., reading more, working more, getting more feedback from my peers, and so on).

4.1 The Place of Resonance in Contemporary Cognitive Science

In a broad sense, a resonant system is a system that mimics or matches the changes of another system with regard to some feature or variable. For example, a wall resonates with a loudspeaker when the vibration of the speaker is mimicked by the wall—namely, the properties of the vibration of the wall (frequency, phase, intensity, etc.) match or are strongly correlated with the properties of the vibration of the speaker. Resonance is, thus, a good metaphor for events that involve frequency matching, phase locking, strong linear and non-linear correlations, similar dynamics, etc. This is the reason why it is a concept often used in the cognitive sciences.
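A toy version of the loudspeaker-and-wall example makes this operational: a signal that tracks the driving signal's frequency and, roughly, its phase shows strong covariation with it, while a signal at another frequency does not. The synthetic signals and the 4 Hz driving frequency are arbitrary choices of mine.

```python
import numpy as np

fs = 1000                                  # samples per second
t = np.arange(0.0, 2.0, 1.0 / fs)

speaker = np.sin(2 * np.pi * 4 * t)                    # driving vibration
wall = 0.6 * np.sin(2 * np.pi * 4 * t - 0.2)           # resonating: same frequency, small phase lag
unrelated = np.sin(2 * np.pi * 11 * t + 1.0)           # non-resonating comparison signal

print("speaker-wall correlation:     ", round(np.corrcoef(speaker, wall)[0, 1], 2))
print("speaker-unrelated correlation:", round(np.corrcoef(speaker, unrelated)[0, 1], 2))
```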

Many phenomena studied by the sciences of the mind may be understood in terms of resonance. The most widespread usage of the term has to do with the activity of mirror neurons (Gallese et al. 1996, Rizzolatti et al. 1996), but, as noted above, the concept has been used to make sense of other phenomena and models of many other cognitive activities. All of them share what I want to refer to as the central feature grasped by resonance in the cognitive sciences: strong matching or strong covariance between specific features of the systems. The details of such strong matching are understood slightly differently from one use of resonance to another, but all of them may be generally seen as examples of the kind of resonant process I am defending here. In the following I will analyze these different uses of resonance in the sciences of the mind. The analysis will serve to shed some light on the ecological conception of resonance and to make its details more explicit.

4.1.1 Mirror Neurons

Since the 1990s, mirror neurons have been a component of the ontology of cognitive science. Mirror neurons are visuomotor neurons that activate both when executing and when observing an action. For example, they activate both when I grasp a glass of water and when I see someone grasping a glass of water. Furthermore, associated mirror systems have been identified in humans for sensations (Avenanti et al. 2005, Keysers and Gazzola 2009), actions (Fadiga et al. 2005), and emotions (Bastiaansen et al. 2009).


The classic interpretation for the significance of the activation of mirror neurons has to do with interpersonal understanding (Fitzgibbon et al. 2014). By the mechanism of mirror neurons, we can understand others’ movements and actions, improve motor learning, etc. Leonetti et al.

(2015), for example, claim:

[S]tudies with transcranial magnetic stimulation (TMS) have shown that observation of a hand grasping an object elicits a motor resonant response, i.e., a pattern of motor-evoked potentials (MEP) facilitation of the same muscular groups and with the same time course as in the observed grasping of that object... Thus, by encoding the kinematic aspects of an observed action, the specific subliminal activation of the primary motor cortex (M1) facilitates its repetition as can be useful, for instance, during imitation for motor learning. (p. 3014; emphasis is mine).

In this quote, Leonetti et al. use the common name for the activity of mirror neurons: motor resonance. The activation of visuomotor neurons when observing a hand grasping an object is in fact an instance of resonance because of two features. First, the pattern of the neural activity is related to the muscular groups involved in grasping an object with the hand. And second, the temporal structure of the neural activation also resembles the time structure of the neural activation when performing the action of grasping an object with the hand. These two features make the neural activation of the observer mimic the neural activation of the performer. In this sense, the neural activation of the observer resonates with the neural activation of the performer.57

57 Another way to understand this example is that the neural activation of the observer mimics her own neural activation when performing the action: the neural activation while observing resonates with the neural activation while acting. This interpretation and the one offered in the main body of the text are equivalent.


The meaning of resonance when referring to mirror neurons is fully compatible with the common usage of resonance in the cognitive sciences—i.e., strong matching/covariance. Motor resonance, as described in the literature on mirror neurons (Gangitano et al. 2004, Borroni et al. 2011, Press et al. 2011, Cavallo et al. 2013, McCabe et al. 2014), is based on a system that mimics or strongly matches the features of another system. What the resonant system matches is either its own state in a different situation (i.e., while observing instead of while acting) or a state of another system (i.e., a neural state of another system). To put it in terms of ecological information, my reaching, for instance, may be lawfully related to activity in my motor neurons, in which case my reaching specifies my motor neuron activity. Actually, this is what one should expect from a resonant system such as the one I am describing. If that is right, then the mirroring response of someone’s mirror neurons (i.e., motor resonance) can be seen as driven by my motor neurons. In this sense, just as we do not see light, but what is specified by the light, someone’s neurons do not resonate to my movements: they resonate to the neural activity that my movements specify.

This is the way in which motor resonance, as proposed in the study of the activity of mirror neurons, is compatible with an ecological account of resonance described as the coupling between the dynamics of the organism-environment system and the intra-organismic dynamics.

The lawful relation between movements and neural activity meets the requirement of the presence of an ecological informational variable that constrains the resonant dynamics.

4.1.2 Resonance, Dynamic Systems, and Artificial Intelligence

As already noted in Section 2.2, in physics, resonance refers to the phenomenon in which an oscillatory system—or just an external force—drives another oscillatory system to oscillate with greater amplitude at specific frequencies (Ogata 2005). This physical notion of resonance is widely used in dynamic systems theory to understand the features of coupled oscillators, and therefore it has sometimes been used in the cognitive sciences when the dynamic approach has been utilized to explain cognitive phenomena.
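For reference, the generic textbook form of such a physically resonant system, a driven damped oscillator, can be written as follows; this is the standard physics formulation rather than an equation taken from any of the works discussed here:

$$ m\ddot{x} + c\dot{x} + kx = F_0 \cos(\omega t), \qquad \omega_0 = \sqrt{k/m} $$

For small damping, the amplitude of the driven motion peaks when the driving frequency ω approaches the natural frequency ω0.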

One example of the use of the physical sense of resonance applied to psychology may be found in the study of sequential effects developed by Gökaydin (2015; see also Gökaydin et al.

2016). Sequential effects are one of the most pervasive effects in psychology and may be defined as the behavioral dependence on the past sequence of events. Namely, given a task performed in many trials, the behavior in any trial depends on the previous ones. Sequential effects affect behavior by modulating the response in the trial (e.g., affecting its reaction time) and show exponential decay across sequences of trials. These effects have been observed across many modalities (Squires et al. 1976, Ward 1979) and in many different tasks and aspects of behavior, such as reaction time (Bertelson, 1961) or predictive guesses (Jarvik 1951). Sequential effects have been also reported at the neural scale (Squires et al. 1976, Sommer et al. 1990, Sommer et al. 1999, Jentzsch and Sommer 2002).

In his work, Gökaydin (2015, Gökaydin et al. 2016) offers a new framework for the study of sequential effects based on coupled damped oscillators. Details aside, the motivation for the development of this new framework has to do with the inability of previous models (e.g., exponential filters, A/R filters, or exponential + A/R filters)58 to explain certain features of sequential effects: cost-benefit aspects of sequential effects, dependence on the response-stimulus interval, individual effects, the role of processing delays, etc. The framework proposed by Gökaydin exploits the resonant properties of coupled damped oscillators to account for sequential effects and to shed light on the features that remained unexplained.

58 Classically, sequential effects have been modelled using two kinds of filters: exponential filters and A/R filters. Both kinds of filters are based on the fact that sequential effects are reduced exponentially across sequences of trials: trial TX is strongly influenced by trial TX-1, less influenced by trial TX-2, even less influenced by trial TX-3, and so on. When the filter is applied to single trials, no matter whether they are a repetition or an alternation with respect to previous ones, the filter used is an exponential one. A/R filters, by contrast, are applied not to single trials but to sequences of repetitions or alternations of trials.

In a damped oscillator, “driving forces with frequency close to [its natural frequency] lead to increasing amplitude of motion with each cycle, whereas those far from [its natural frequency] do not.” (Gökaydin 2015: 152). This is physical resonance. In a system of coupled damped oscillators, the phenomenon of resonance is also present, but the number of resonant states increases. In a model for sequential effects based on coupled damped oscillators, resonant states are the key to explain the influence of previous events on each trial. Repetitions and alternations of stimulation in a sequence of trials are represented as driving forces close to and far from the natural frequencies of the damped oscillators, respectively. Repetitions resonate with the frequency of the damped oscillators and alternations do not. Such a model, at least theoretically, captures the dynamics of sequential effects just as well as previous models do, but also explains those features of the phenomenon that remained unexplained (e.g., the processing delays).
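The resonance intuition behind the model can be illustrated with a toy simulation; this is not Gökaydin's actual system of coupled oscillators, just a single damped oscillator driven at its natural frequency (a repetition-like input) versus far from it (an alternation-like input), with arbitrary parameter values.

import numpy as np
from scipy.integrate import solve_ivp

# Toy illustration only: a damped oscillator driven at its natural
# frequency builds up amplitude, whereas the same oscillator driven far
# from that frequency does not.
omega0, gamma, f0 = 2 * np.pi * 1.0, 0.5, 1.0   # natural frequency, damping, drive strength

def driven_oscillator(t, state, omega_drive):
    x, v = state
    a = -2 * gamma * v - omega0**2 * x + f0 * np.cos(omega_drive * t)
    return [v, a]

t_eval = np.linspace(0, 20, 4000)
for label, omega_drive in [("at resonance", omega0), ("off resonance", 3 * omega0)]:
    sol = solve_ivp(driven_oscillator, (0, 20), [0.0, 0.0],
                    t_eval=t_eval, args=(omega_drive,), rtol=1e-8)
    steady_amplitude = np.abs(sol.y[0][-1000:]).max()   # amplitude once transients decay
    print(f"{label}: steady-state amplitude ~ {steady_amplitude:.3f}")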

There are two main reasons why the system of coupled damped oscillators used by Gökaydin captures sequential effects. First, he uses damped oscillators because they act as exponential filters for their natural—or resonating—frequency (Gökaydin 2015: 149). In this sense, damped oscillators provide the same benefits as classic exponential filters do. They are positively affected when the driving force applied to them matches their resonating frequency. Thus, the proper dynamics of damped oscillators capture the role of the repetition of a trial that is found in sequential effects. For example, a damped oscillator will maintain a given amplitude as long as a driving force is repeatedly applied at its resonating frequency.


Similarly, a given trial will be positively affected by the fact of being a repetition of the previous trial. In this sense, repetitions are modelled as a driving force applied to a damped oscillator at its resonating frequency.

The second reason for using a system of coupled damped oscillators as a model for sequential effects is that expectations can be framed in the system. In sequential effects, repetitions affect the current trial more strongly than alternations do as far as the performer of the current trial is taken to be expecting the same stimulus she received in the previous trial.

According to Gökaydin (2015), in his model:

… [T]he velocity of the oscillator will be taken to represent expectations in the following way: an oscillator with positive velocity will be considered to be ‘expecting’ a stimulus with a positive sign, and one with negative velocity to be expecting a stimulus with negative sign. Reaction time should therefore be a function of the magnitude of the velocity at the moment a stimulus is displayed as well as whether the stimulus is that which was expected. (p. 155).

Positive and negative velocity may be taken as a conventional measure to model repetitions (positive; i.e., forces matching the resonant frequency) and alternations (negative; i.e., forces far from the resonant frequency). In this sense, the model of two damped oscillators offers a way to distinguish between repetitions and alternations in sequential effects in terms of what is expected in every trial. Moreover, as the expectations are constitutive of the dynamics of the system, reaction times, processing delays, etc., are captured by the model as well.59

59 One possibility to explicitly capture processing delays in the model is to include a simple time-delay component in the equations of the coupled oscillators; see Gökaydin (2015) for further details.

Another example of the use of physical resonance in contemporary cognitive science may be found in embodied robotics. Pioneered by Rodney Brooks in the 1980s (see Brooks 1990), embodied robotics developed a new way to understand and construct Artificial Intelligence (AI hereafter). Instead of following the symbol system hypothesis (Newell and Simon 1976), by which AI was developed on the basis of a symbolic rule-based system that explicitly controls the behavior of AI devices, Brooks (1990) proposed the physical grounding hypothesis. According to this hypothesis, intelligent behavior is an emergent feature of the interaction of AI devices and their environments:

[The physical grounding hypothesis] states that to build a system that is intelligent it is necessary to have its representations grounded in the physical world. Our experience with this approach is that once this commitment is made, the need for traditional symbolic representations soon fades entirely. The key observation is that the world is its own best model. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough. (p. 5).

Framed within the field of embodied robotics, Ralf Der and Georg Martius (2012) offer an account of autonomy (i.e., machines that discover their own behavioral options by interacting with their environment) and of learning (i.e., stabilization of those behavioral solutions) in AI.

The kind of learning Der and Martius propose—homeokinetic learning (2012: 127 & ff.)—is based on physical resonance. An AI’s internal states and the external world match in terms of resonance and so “homeokinetic learning is able to detect and amplify latent oscillatory modes of the outside physical world.” (p. 149). As in the case of the sequential effects, the internal states of the AI device can be modelled in terms of an internal oscillator coupled to an external oscillator (or external force). In this sense, stable patterns of behavior are reinforced resonant states and learning is defined as the process of resonance itself.


The notion of physical resonance in the previous examples is compatible with the ecological concept of resonance. In this sense, both Gökaydin’s model for sequential effects based on the resonating frequencies of coupled damped oscillators and embodied robotics based on the resonant properties of homeokinetic learning may be seen as specific examples or implementations of the kind of resonant process—or resonant mechanism, if you like—that is required in the ecological literature.60

4.1.3 Adaptive Resonance Theory

The examples of resonance just reviewed are specific instantiations of a process of resonance in specific domains or issues of the cognitive sciences—mirror neurons, sequential effects, and embodied robotics. However, as far as the theory of resonance I am proposing in this work is supposed to be the basis for an ecological cognitive architecture, the role of resonance as a central concept for an approach to the functioning of a cognitive system is worth addressing. I turn now to Adaptive Resonance Theory (ART hereafter; Grossberg 2013), which purports to be a cognitive architecture based on a concept of resonance.

According to Grossberg (2013), ART is “a cognitive and neural theory of how the brain autonomously learns to categorize, recognize, and predict objects in a changing world.” (p. 2).

The theory has been applied to many domains of the cognitive sciences. Listing all of them would require several pages,61 but, just to name a few, ART has been applied to attention (Carpenter and Grossberg 1987), auditory perception (Grossberg et al. 2004), categorization (Carpenter et al. 1991), emotions (Grossberg et al. 2008), learning (Carpenter et al. 1998), locomotion (Grossberg et al. 2001), memory (Grossberg and Pearson 2008), motor control (Bullock et al. 1993), speech (Grossberg and Kazerounian 2011), and visual perception (Cao and Grossberg 2005). In this sense, ART offers a mechanism that accounts for nearly all cognitive phenomena, and so it serves as the basis for a cognitive architecture built on the concept of resonance.

60 It might be argued that the two kinds of physical resonance reviewed in this section should not be identified with the ecological notion of resonance, as far as the latter is a kind of informational resonance and not just a physical one. I think such a sharp distinction does not really hold. I ask the reader to set this concern aside for the time being; I will address it in a later section (4.1.4).

61 An exhaustive list of the different applications of ART to both theoretical and empirical research in the cognitive sciences may be found in Grossberg (2013).

An architecture based on ART takes the form of a bi-directional system with both top-down and bottom-up information flows. Attention, categorization, or learning are carried out by the system by matching the two flows. For example (Figure 11), in a system with two layers, F1 and F2, a sensory input I gets into the first layer F1 and produces a pattern of activation X. This pattern X sends a signal S through an adaptive filter that encodes it in the form of T.62 In response to T, a representation Y is activated in F2. The representation Y is understood as a recognition state (category) that represents the pattern of activation X in F2 (Figure 11a). The representation Y activates top-down signals U that are encoded—in the same way as before—in the form of V. When V gets to F1, a matching process occurs between X and V which results in the selection of the pattern X* in F1 (Figure 11b). The pattern X* encompasses the features of the input I expected by the state Y—concretely, X* is referred to as the ‘attentional focus’ in ART (Grossberg 2013: 8). At this point, two scenarios are possible. On the one hand, the matching between the top-down and the bottom-up information flows, X*, may not be good enough or may simply be a mismatch. In this case, a vigilant system ρ will reset the process by suppressing Y, U, and V.63 Then, a new cycle starts with a new Y* (Figure 11c, 11d).

62 The adaptive filter encodes S in terms of previous knowledge of the system (memory, history of interactions, and so on). 63 Technically, the left triangle in the Figures 11a, 11b, 11c, and 11d, is the system A, where ρ is the vigilance parameter. I have simplified the explanation for the sake of clarity.


Figure 11. A two-layered ART architecture. (a) An input I generates a pattern of activation X in the layer F1. X sends a signal S that gets to F2 in the form of T after being encoded by an adaptive filter. T activates a representation Y in F2. (b) Y sends a signal U that gets to F1 in the form of V after being encoded by an adaptive filter. A matching process between V and X occurs and a pattern X* is selected. If successful, the matching process ends up here. (c) If not successful, a vigilant system ρ resets the system. (d) A new cycle with a new representation Y* starts. (from Grossberg 2013: 8, Figure 2).

On the other hand, if the matching between the top-down and the bottom-up information flows, X*, is good, the system gets into a resonant state:

When there is a good enough match between bottom-up and top-down signal patterns between two or more levels of processing, their positive feedback signals amplify, synchronize, and prolong their mutual activation, leading to a resonant state that focuses attention on a combination of features… that are needed to correctly classify the input pattern at the next processing level and beyond. (Grossberg 2013: 6).


Attention, learning, categorization, and all other cognitive processes are constituted by these resonant states. Resonant states remain in the system and are the key for memory consolidation or decision-making, among other processes. Mismatches, by contrast, are suppressed. This is the reason why, virtually, “all conscious states are resonant states.” (Grossberg 2013: 2).
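To make the matching-and-reset cycle just described more concrete, here is a minimal sketch of an ART-1-style categorizer for binary inputs. It is a simplified toy rather than Grossberg's full model; the function name, the parameter values, and the template-update rule are my own illustrative choices, with rho standing in for the vigilance parameter ρ.

import numpy as np

def art1(inputs, rho=0.7, alpha=0.001):
    """Toy ART-1-style categorizer for binary vectors.

    Bottom-up: choose the category whose template best matches the input.
    Top-down: check the match against the vigilance parameter rho; a good
    match leads to 'resonance' (the template is refined), a poor match
    resets the choice and the search continues, possibly recruiting a new
    category.
    """
    templates = []            # one binary template per learned category
    assignments = []
    for I in inputs:
        I = np.asarray(I, dtype=bool)
        # Bottom-up activation of every existing category.
        scores = [np.sum(I & w) / (alpha + np.sum(w)) for w in templates]
        order = np.argsort(scores)[::-1]        # best-matching category first
        chosen = None
        for j in order:
            match = np.sum(I & templates[j]) / max(np.sum(I), 1)
            if match >= rho:                    # vigilance test passed: resonance
                templates[j] = I & templates[j] # refine the template (learning)
                chosen = j
                break
            # otherwise: reset and try the next category
        if chosen is None:                      # no resonance: recruit a new category
            templates.append(I.copy())
            chosen = len(templates) - 1
        assignments.append(chosen)
    return assignments, templates

# Usage: the first two binary patterns are similar enough to resonate with
# the same category; the third recruits a new one.
patterns = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]
labels, templates = art1(patterns, rho=0.6)
print(labels)   # [0, 0, 1]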

An interesting feature of ART’s resonant states is that the very idea of resonant states is, as in the case of its ecological counterpart, an informational one. When Grossberg says that “signals amplify, synchronize, and prolong their mutual activation”, he is defining ART’s resonant states in the same terms in which physical resonance is usually characterized, but ART’s states also include a sense of informational coupling. ART’s resonant states are correlated with events that may be seen as instances of physical resonance at the neural scale—see, for instance, the synchronous oscillations at the neural scale predicted by ART (Raizada and Grossberg 2003, Grossberg and Versace 2008) or the models of the embodiment of ART in laminar thalamocortical circuits (the SMART model; see Grossberg and Versace 2008)—but, at the same time, ART’s resonant states involve the generation of informational structures (patterns of activation and representations) that are the key elements to capture different cognitive processes.

In this sense, ART’s resonance is informational in the same sense the ecological notion of resonance is: what resonates is the structure of the inputs—i.e., the information carried by the inputs—and the representations generated at other levels of the system.

As an instance of informational resonance, ART may provide several insights to understand the resonant process in place in ecological psychology. However, it is worth noting that adopting ART wholesale as a cognitive architecture would lead to an incompatibility with the tenets of the ecological theory. ART is not well suited to be an ecological cognitive architecture because of the reasons discussed in Chapter 3 regarding Predictive Processing, which is very similar to ART. Namely, as with Predictive Processing, ART does not meet the requirements of de-localization and anti-computation posited by the explanatory strategy of ecological psychology.

4.1.4 Ecological Resonance as Informational Resonance

The different instances of resonance just reviewed serve as examples of a resonant process—or resonant mechanism—utilized in the cognitive sciences and as possible ways in which resonance may be used in an ecological cognitive architecture. Through the presentation of the different examples—mirror neurons, sequential effects, embodied robotics, and ART—I have explicitly referred to two seemingly different processes: physical resonance and informational resonance. However, these two seemingly different processes are actually just two ways to understand and to talk about the same kind of process. Concretely, informational resonance is the general case while physical resonance is a sub-type of the general case. On the one hand, all cases of physical resonance carry information. For example, if string B resonates to string A, string B’s frequency carries information about string A’s frequency. In this sense, physical resonance is always a kind of informational resonance. On the other hand, a resonant event in an ART architecture, for example, does not need to be implemented in terms of physical resonance. In this sense, informational resonance is a more general case than physical resonance. Thus, informational and physical resonance hold an asymmetric relation. However, they both have to do with the main feature of cognitive systems grasped by resonance: strong matching or strong coupling.

Physical resonance refers to events of strong matching or coupling in terms of mechanical means in the broad sense of the term. As already noted, a physical resonant system is described as an oscillator that is driven to oscillate at greater amplitude at specific frequencies due to the effect of a driving force or driving oscillator. Such driving is produced by some form of physical force (e.g., vibrations, electricity, or chemical reactions). In this sense, resonance is mechanical and easily quantifiable. In the case of informational resonance, by contrast, the strong matching or coupling in the resonant system occurs in terms of non-purely mechanical variables: resonance is of high-order/informational variables. For example, in the case of systems based on ART, resonance occurs when the pattern of activation produced by sensory stimulation matches the pattern of activation of the category generated by the system—a matching between X and Y (or V) in Figure 11. Thus, in systems based on ART, resonance occurs in terms of the structure of activation of different elements of the system and its surroundings (inputs). This kind of resonance is informational because what resonates is what carries information about the surroundings of the system (the pattern of activation) and not the signal itself (light or electricity in the case of a visual input, for example), although sometimes the informational process of resonance may also be described in mechanical terms.

The concept of resonance I am developing in this story—aka ecological resonance, to differentiate it from other uses of the term in the cognitive sciences—is clearly an instance of informational resonance. When the dynamics at the intra-organismic scale resonate to the dynamics at the organism-environment scale in terms of an ecological variable, resonance is informational. The ecological variable that constrains the strong matching or coupling between both the driving (organism-environment) and the driven (intra-organismic) sub-systems that integrate the resonant system is not just a physical force, but a high-order variable that constitutes information at the ecological scale. However, as noted above, a classification that sharply distinguishes physical and informational resonance would misguide our understanding of the concept. This fact may be clarified if we think about Gökaydin’s model for sequential effects one more time.


The model for sequential effects proposed by Gökaydin (2015) has been presented in section 4.1.2 as an example of physical resonance. However, it is also an example of informational resonance in that the coupled oscillators that constitute it resonate to the history of interactions of the system. In other words, previous interactions of the system—i.e., previous sequences of trials—with its environment inform its future interactions. In this sense, every trial (interaction) is informative of the history of the system and, in virtue of that information, affects the next trial. For example, when the current trial repeats the previous one, the coupled oscillators resonate to the repetition itself: they resonate to the fact that there is a repetition. The opposite occurs when, instead of a repetition, the coupled oscillators face an alternation. Repetitions and alternations are, in this case, informational variables. However, as already noted in section 4.1.2, the simplicity of Gökaydin’s model for sequential effects makes it easy to analyze in terms of physical resonance. When repetitions and alternations are taken to be just two different driving forces—the former in phase and the latter in antiphase with regard to the natural frequency of the coupled oscillators of the model—the model becomes just a physical resonant system and can be studied as such. Thus, as physical resonance is a sub-type of informational resonance, physical and informational resonance are sometimes just two ways to understand the same phenomenon. In the case of Gökaydin’s model, the explanation in terms of physical resonance is possible thanks to non-linear methods of analysis (e.g., dynamic systems theory, in general, and oscillators, in particular).

This insight coming from Gökaydin’s model for sequential effects in psychology helps to better understand ecological resonance. As in the case of Gökaydin’s model, ecological resonance is described in terms of coupled dynamic systems. Thus, although ecological resonance is defined as a kind of informational resonance, it may be studied in terms of physical resonance.


The intra-organismic dynamics may be described in terms of dynamic systems and, particularly, in terms of oscillators (e.g., neural activation, different oscillatory rhythms at different neural scales, synchronization of neural populations, and so on), and the same occurs with the organism-environment dynamics. Therefore, the translation from informational resonance to physical resonance is as possible as it is in the case of the model for sequential effects.64

4.2 Biological Plausibility

After the review of different examples of resonance that may be candidates to be instances of ecological resonance, I turn to the biological plausibility of ecological resonance. In order to evaluate such a biological plausibility, three different kinds of findings must be considered. The first kind of findings has to do with the phenomenon of resonance itself. Resonance, in a general sense, must be possible at the neural scale. Namely, some process of the kind of physical or informational resonance should be identified in the CNS. As I noted in the previous section, ecological resonance is a kind of informational resonance that may be characterized in terms of physical resonance. Thus, the finding of informational resonance at the neural scale points to the plausibility of ecological resonance—although this finding would not be a definitive proof for ecological resonance as such because it is a more restrictive notion.

The second kind of findings that would support the plausibility of ecological resonance are those related to the sensitivity of neurons and neural regions to ecological information. As I define it, ecological resonance is the coupling between intra-organismic and organism-environment dynamics in terms of ecological information. Therefore, the intra-organismic scale (i.e., the neural scale) must be sensitive to the variables that count as ecological information. As in the case of general physical or informational resonance, sensitivity to ecological information at the neural scale is not a definitive proof for the plausibility of ecological resonance. Neural activation may covary with the values of ecological variables without the need for ecological resonance as such. However, neural sensitivity to ecological variables strengthens the plausibility of ecological resonance.

64 There is a further reason why ecological resonance may be easily translated from informational to physical resonance. The very nature of ecological information (Michaels and Carello 1981, Turvey et al. 1981) makes it easy to understand in physical terms. On the one hand, for instance, optical ecological information depends on the reflections and refractions of light on the surfaces of the environment with respect to the position of the organism—i.e., optical ecological information depends on the laws of optics. On the other hand, for example, the time remaining for an object to hit you depends on the rate of expansion of the object in your visual field—i.e., the expansion is what informs you about one feature of your interaction with the object, the time-to-contact. These two cases are paradigmatic examples of ecological information and show that ecological information itself (general optical information or time-to-contact information) is easily transformed into physical variables—reflections, refractions, expansions—and, thus, ecological resonance may be understood in terms of physical resonance.

The last kind of findings that would support the plausibility of ecological resonance is the evidence of modulation of the dynamics at the neural scale in terms of the ecological information that operates at the organism-environment scale in a given situation. In other words, a proof for the plausibility of ecological resonance would be to directly observe the coupling between the intra-organismic and the organism-environment dynamics in terms of the relevant ecological variable in a given scenario. Such an observation would constitute a direct proof of the plausibility of ecological resonance.65

4.2.1 General Resonance and Informational Sensitivity

If we take sensitivity to information as a necessary condition for ecological resonance, we need to find examples of resonance to (general) information and, more concretely, we need to find resonance to ecological information at the neural scale. In the case of resonance to general information at the neural scale, several studies have reported resonant features both in neurons and in neural regions (e.g., Hutcheon and Yarom 2000, Kumar and Mehta 2011, Lea-Carnall et al. 2016, Lea-Carnall et al. 2017). All of the resonant features reported in these studies are frequency-related ones. In this sense, it is possible to study them in terms of physical resonance by using non-linear methods. Neurons and neural regions have their intrinsic and preferred frequencies in terms of firing or synchronization, and these frequencies are related to network size and connectivity (Lea-Carnall et al. 2016) or to synaptic plasticity (Kumar and Mehta 2011), for instance.

65 The italics in plausibility aim to note that the described finding would not constitute a definitive proof for the existence of ecological resonance, but for its possible role in the explanation of perception and action. Of course, further investigation would be needed to have a corpus of evidence of the presence of ecological resonance in the relevant perceptual processes.

An interesting example of a study of resonance at the neural scale is the one offering evidence for frequency-dependent cortical plasticity in the human brain, developed by Lea-Carnall et al. (2017). The study is concerned with the primary somatosensory cortex, which exhibits a high degree of plasticity (i.e., rapid reorganization) when exposed to repetitive stimulation (e.g., Godde et al. 2003, Pleger et al. 2003, Pliz et al. 2004, Vidyasagar et al. 2014). Lea-Carnall et al. offer evidence regarding the role of resonance in this kind of plasticity using tools from psychophysics, neuroimaging, and neurocomputational modelling (2017: 8874). They report that stimulation at a resonant frequency (i.e., at a frequency equal to the intrinsic or preferred ones in the primary somatosensory cortex) directly affects synaptic plasticity in the given neural region to a larger degree than higher or lower frequencies.

The study developed by Lea-Carnall et al. (2017) is relevant in terms of ecological resonance for two reasons. First, such a clear example of resonance at the neural scale supports the plausibility of resonant dynamics in the CNS. And second, the relation between resonance and plasticity may provide a way to fine-tune ideas regarding the framework of ecological resonance. Neural reuse, one of the main components of my theory of resonance, is a specific kind of neural plasticity and involves some processes—e.g., interactive differentiation and search (Anderson 2014, 2016)—that require further elaboration. A resonant process underlying neural plasticity such as the one proposed by Lea-Carnall et al. (2017) might be a way to elaborate those processes. The very idea of resonance, thus, might serve as a guiding tool to improve the conceptualization of neural reuse.66

As noted at the beginning of the section, the other set of findings favoring the plausibility of ecological resonance encompasses instances of neural sensitivity to ecological information. If ecological resonance is plausible, different elements of the CNS must be sensitive to ecological information. The most famous variable of ecological information is tau (τ), the ecological variable that specifies the time-to-contact between an approaching object and the visual system (Lee 2009). In relation to this variable, different studies on pigeons have reported activation of neurons in the nucleus rotundus in response to looming stimulation (Wang and Frost 1992, Hatsopoulos et al. 1995, Sun and Frost 1998).67 The interesting fact about these studies is that those neurons respond specifically to tau or to a transformation of tau—e.g., rho (ρ), which is the inverse of tau, or eta (η), which can be computed using tau (see Figure 12).
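For reference, tau can be written in terms of the optical angle θ subtended by the approaching object (the same angle mentioned in the caption of Figure 12); roughly, and under a constant closing velocity, this ratio approximates the time remaining until contact:

$$ \tau(t) = \frac{\theta(t)}{\dot{\theta}(t)} $$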

The fact that neurons or groups of neurons respond to a variable of ecological information such as tau entails that it is possible for ecological information to affect events at the intra-organismic scale. However, it is not enough to prove the plausibility of ecological resonance. The criterion for ecological resonance is stricter than mere covariance. In the case of the previous example, what we find is specific activation given some stimulation. What we need for ecological resonance, though, is online modulation of the intra-organismic scale in terms of the ecological information in place at the organism-environment scale—i.e., an informational coupling between the neural and the ecological scales. Thus, as in the case of the influence of resonance on the plasticity of the primary somatosensory cortex, the finding of neural activation provoked by specific ecological variables serves as a basic background for the plausibility of ecological resonance, but not as direct support of the phenomenon.

66 More on the use of insights from the theory of resonance developed here in the fine-tuning of neural reuse in Chapter 5.

67 The nucleus rotundus is the equivalent of the pulvinar nuclei in the mammalian thalamus, which are related to early vision and the control of visual attention and saccades (Robinson and Petersen 1985, Petersen et al. 1987, Chalupa 1991, Berman and Wurtz 2011).

Figure 12. Example of the activation of nucleus rotundus neurons given the three variables τ, ρ, η. The activation is slightly different between τ and ρ, and more explicitly diverse in the case of η. As the three variables are based on the optical angle occupied by an approaching object in the retina (θ), their differences have to do with the different ways in which that angle is mathematically evaluated. Tau (τ) is invariant regarding both size and velocity, while ρ and η are differently sensitive to them. This is the reason why the activation time (X axis) of ρ and η changes with respect to τ as the size of the object (Y axis) changes. The same occurs with velocity (see figure 2b in Sun and Frost 1998: 298). (from Sun and Frost 1998: 298, figure 2a).

4.2.2 Ecological Resonance and the Neural Scale

There are examples of neural activity that reflect more precisely the idea of ecological resonance.

For instance, van der Weel and van der Meer (2009) show that tau constrains the neural dynamics when it is constraining the organism-environment dynamics. In their study, van der Weel and van der Meer analyze the theta rhythm oscillatory behavior of babies’ visual cortex—which is related to cognitive and anticipatory attentional processes (Orekhova et al. 1999)—during a looming danger situation. They found that such oscillatory behavior is tau-coupled. That is, the change in the rhythm’s temporal structure is linearly correlated with (modulated by) the value of tau at the ecological scale. In Figure 13 (C), they show how the value of tau of the waveform of the neural activity of babies’ visual cortex (black dots) matches the value of tau of the loom over time (blue dots)—starting roughly at 0.10 seconds. This fact shows, I contend, a process of resonance in which the dynamic systems at the intra-organismic and the ecological scales are coupled in terms of the ecological variable for a given task. Such a coupling is quantified in linear terms in Figure 13 (C), which fits the abstract model I presented in Chapter 3, where the coupling between the two dynamic systems was described as a linear or non-linear relation between the main variables at each scale (χ = kψ).
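The logic of the tau-coupling analysis behind Figure 13 can be sketched as follows. This is my own schematic reconstruction with synthetic signals, not the authors' data or code; it only shows how one would compute tau for the loom and for a neural waveform and then test the linear relation between them (the χ = kψ coupling of Chapter 3).

import numpy as np
from scipy.stats import linregress

# Schematic tau-coupling analysis on synthetic signals (toy geometry and
# toy 'neural' waveform; the numbers are illustrative only).
dt = 0.001
t = np.arange(0.5, 0.95, dt)                # approach window, in seconds
t_contact = 1.0                             # hypothetical time of contact

theta = 0.05 / (t_contact - t)              # optical angle of the looming object
swf = 1.5 * theta + 0.01                    # toy 'source waveform' tracking the loom

def tau(x, dt):
    return x / np.gradient(x, dt)           # tau of a quantity: X divided by its rate of change

tau_loom, tau_swf = tau(theta, dt), tau(swf, dt)
fit = linregress(tau_loom, tau_swf)         # test tau_swf ~ k * tau_loom
print(f"coupling constant k ~ {fit.slope:.2f}, r^2 = {fit.rvalue**2:.3f}")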

Figure 13. This figure shows the tau-coupling analysis between the two relevant scales: the ecological scale (variable Y) and the neural scale (variable X). Parts A and B show the change over time of the neural activity (source waveform [SWF]) and the visual angle, respectively, along with their rates of change during a looming situation. Part C plots the tau value of SWF and the corresponding tau value of the loom over time. Finally, part D represents the tau value of SWF against the tau value of looming to check whether they are coupled. The figure shows a strong coupling between them, r² = 0.959. (Image from van der Weel and van der Meer 2009: 189, figure 4).

The finding reported by van der Weel and van der Meer is a clear example of ecological resonance. The dynamics of the visual cortex captured by its theta rhythm are coupled to the agent-environment dynamics in terms of the relevant ecological information τ. Thus, this finding, along with all the previous partial ones (i.e., resonance and informational sensitivity at the neural scale), shows that ecological resonance, as described in this work, is plausible in the biological sense.

4.3 Explanatory Plausibility

The second kind of plausibility I want to evaluate is explanatory plausibility. Put simply, ecological resonance is explanatorily plausible if we are able to include ecological information in the models we use to explain the intra-organismic scales (i.e., the neural scale, the neuro-muscular scale, and so on) given some behavioral task. This kind of plausibility is different from biological plausibility, as explanatory models may be more or less abstract and, thus, may or may not accurately reflect biological features of the system. In this sense, a model may be so abstract that the biological details of the target system are not captured (i.e., the model may be medium-independent, for example) while its explanatory or predictive powers remain intact. Such a model would be worth developing nonetheless, as explanation and prediction are valuable in themselves. This is the reason why explanatory plausibility is worth considering separately.

I will consider three studies that exemplify the explanatory plausibility of resonance.

First, Port et al. (2001; see also Merchant et al. 2004) proposed a model to account for the dynamics of the motor cortex during an interception-of-a-moving-target behavior in monkeys. In the model, which accounts for both single-neuron and population-scale dynamics, tau (τ) is one of the main parameters. Interestingly, the dynamics of populations of neurons fit the model with tau as one of its parameters in percentages ranging from 60% to 90%, showing that

[A] substantial fraction of neurons in the motor cortex modulated their activity in accordance with τ, although this effect was not seen as frequently as those of hand position or velocity (Port et al. 2001: 312).

Two consequences of this report must be highlighted. First, the ecological variable is present in the model of brain activity. This is compatible with the idea of ecological resonance and points to its explanatory usefulness. A further interesting feature of this specific study is that, although τ is a perceptual variable, it appears in motor control activity. This fact is fully compatible with the idea of the perception-action loop and the intimate relation between perception and action as proposed by ecological psychology (Turvey et al. 1981; see also Chapter 2). And second, tau is not as important in the model as other variables. This might be a feature of the model itself or a deeper feature of models of the neural scale that include ecological information. Anyway, as there are many ongoing processes at the neural scale during any behavior, such a situation is not surprising and, crucially, it does not undermine the explanatory plausibility of ecological resonance. The requirement for explanatory plausibility is the presence of ecological information in models of intra-organismic dynamics, not that ecological information must be the chief parameter in those models.

In another example, some studies on musical perception of rhythm, pulse, and tonality

(Large 2008; Large et al. 2016) show that the dynamics at the organism-environment scale match the dynamics at the intra-organismic scale guided by the same variable. Put simply, the perception of rhythm, for instance, depends on the influence of the metric structure of the music over the CNS’s intrinsic oscillatory dynamics. In other words, perception of rhythm depends on the intra-organismic dynamics being modulated by the ecological interaction—they exhibit the same rhythm, which is the ecological variable in this case. Large (2008) proposed an abstract dynamical model to account for this phenomenon. It is worth noting that in these studies the appeal to resonance, in general, and non-linear resonance, in particular, is explicit.

Non-linear resonance is a concrete kind of resonance that provides an explanation for several features of the perception of rhythmical patterns (Large 2008: 203-205). The three main features that may be observed in non-linear resonant systems are spontaneous oscillation, entrainment, and high-order resonance. Spontaneous oscillation allows for the intrinsic oscillation of a system in specific rhythmical sequences. Entrainment refers to the ability of the system to be coupled to an external rhythm. And, finally, high-order resonance refers to the arising of oscillations at frequencies not present in the stimulation that are due to the coupling itself. High-order resonance allows for a coupling in terms of multi-frequency networks, as needed when the stimulation is as complex as rhythmical or musical patterns.68
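As an illustration of entrainment, the second of these features, here is a minimal sketch of a phase oscillator being pulled into step with an external rhythm. The equation and the parameter values are a generic toy, not Large's model.

import numpy as np
from scipy.integrate import solve_ivp

# Toy entrainment demo: a phase oscillator with natural frequency omega is
# weakly coupled to an external rhythm with frequency Omega; if the coupling
# K is strong enough relative to the frequency mismatch, the oscillator
# phase-locks to the rhythm.
omega, Omega, K = 2 * np.pi * 1.1, 2 * np.pi * 1.0, 1.5   # rad/s, rad/s, coupling

def phase(t, phi):
    return [omega + K * np.sin(Omega * t - phi[0])]

sol = solve_ivp(phase, (0, 60), [0.0], t_eval=np.linspace(0, 60, 6000), rtol=1e-8)
relative_phase = (Omega * sol.t - sol.y[0]) % (2 * np.pi)
print("drift of the relative phase over the last 10 s:",
      round(relative_phase[-1] - relative_phase[-1000], 3))   # close to 0 when entrained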

These insights developed by Large regarding the perception of rhythm and tonality are fully compatible with ecological resonance. First, the idea of a non-linear coupling between the dynamics at the intra-organismic and the organism-environment scales is already present in the model of resonance I proposed in Chapter 3. Thus, the idea of non-linear resonance is part of the idea of ecological resonance. Second, the use of high-order variables (e.g., rhythm or pulse) to account for the perceptual event in place is expected in ecological resonance: these high-order variables are the ecological information that constrains the resonant system at different scales.

68 Further details regarding non-linear resonance are consciously omitted at this point. However, non-linear resonance is a very suggestive phenomenon that will surely deserve close attention in the future. For example, it might be useful to understand processes of off-line cognition, as “non-linear resonance predicts that metrical accent may arise even when no corresponding frequency is present in the stimulus.” (Large 2008: 205). Non-linear resonance, thus, might be a way to understand those cognitive events not fully dependent on the presence of stimulation. I will briefly come back to other insights offered by non-linear resonance in the Coda.


And third, the fact that Large uses the idea of resonance and the high-order variables to model the phenomena of interest reinforces the explanatory plausibility of ecological resonance.

In a last example, a study (de Rugy et al. 2002) shows that a model in which the intra-organismic activity is constrained by an ecological variable—a transformation of tau—correctly describes participants’ foot-pointing dynamics when walking. In this last example, the model is not concerned with neural activity as such, so it seems clear that it cannot be claimed that the biological requirement is met (see Figure 14). However, the example is interesting because it shows how a classic inverse-pendulum model of gait, modified to include assumptions similar to what the theory of resonance proposes, is successful in explaining human behavior. The model itself aims to describe neither the ecological nor the neural scale, but is targeted at an intermediate scale of behavioral kinematics.

Figure 14. The model used by de Rugy et al. is based on a model of bipedal locomotion developed by Taga (1998). It incorporates a neural rhythm generator (a) and a musculoskeletal system (b). The behavior emerges from the modulation between these two systems that happens during action (c). What de Rugy et al. add is a transformation of tau to that modulation, to see if it helps the prediction of foot-pointing dynamics during walking. The enhanced model actually predicts foot-pointing dynamics better than Taga’s original. (Image from de Rugy et al. 2002: 143, figure 2).

All these examples account for the plausibility of resonance at the explanatory level: they fit the abstract model I proposed in Chapter 3 and show that we are able to develop concrete models that grasp the coupling between the two scales relevant to account for cognitive phenomena in terms of resonance. Further questions are (i) how these models can be developed in terms of the current research in ecological psychology and neural reuse, (ii) how these models must be understood given the current state of affairs in ecological psychology and neural reuse, (iii) which tools are available to carry out the development of these models and their interpretation, and (iv) how the development of resonance in this work can inform further research in ecological psychology and neural reuse. I address these issues in the next chapter.


Chapter 5: Methods for the New Theory of Resonance69

Ecological psychology has witnessed a very productive empirical development since its first formulation. Gibson was hosting postdoctoral fellows and other scholars at ‘the airport’ (the common name for his lab at Cornell) for several years, and some of them continued and further elaborated the ecological approach to perception and action (e.g., Bob Shaw, Dave Lee, or Michael Turvey). Perhaps against some of the ideas of Gibson himself,70 a group of psychologists in the Center for the Ecological Study of Perception and Action (CESPA) at the University of Connecticut began what has, to all intents and purposes, been taken to be the orthodoxy in ecological psychology. Such an orthodox approach is largely based on the combination of the theoretical underpinnings of ecological psychology with methods borrowed from complexity science, especially from Dynamic Systems Theory.

The combination of the ecological approach to perception and action with tools from Dynamic Systems Theory is currently the most common form of ecological research in psychology (see Chemero 2009), and some people refer to it as ecological dynamics. In this part of the story I explore the integration of the prospective research on resonance with ecological dynamics. To do so, first, I offer a brief introduction to the dynamic systems approach to cognitive science and a review of the most complete model for the study of perception and action within the ecological tradition to date: Warren’s behavioral dynamics (2006), which is in itself an instance of ecological dynamics. I will also propose a way to include resonance within Warren’s approach.

69 This chapter is an extension, along with an intense re-elaboration, of some ideas that already appeared as an article in Minds and Machines (see Raja 2018).

70 It is not clear whether Gibson himself was willing to undertake the mathematization of the ecological approach, and it is still a matter of discussion whether ecological psychology should be further grounded in physics—as the CESPA crew proposed (see Turvey 1992)—or in biology (see Reed 1996).

Second, I evaluate the implications of including resonance in such a methodological framework. Concretely, I explore two dimensions of ecological dynamics, in general, and behavioral dynamics, in particular: multi-scale analysis and fractal analysis. On the one hand, by multi-scale analysis I refer to the search for the influence of relevant ecological variables at many different scales (e.g., ecological, behavioral, CNS, populations of neurons, or single neurons, among others). Such a multi-scale approach has already been defended in psychology and the behavioral sciences (Ibañez-Gijón et al. 2016), but also in many other disciplines such as economics (Ouyang et al. 2015) or engineering (Lamarque et al. 2012). The approach is not essentially different from the classic single-scale approaches. The classic single-scale dynamic systems approach accounts for the changes of a system through time by means of differential equations that describe and quantify those changes. The only novel feature that comes with multi-scale approaches is the specific treatment of the coupling between dynamic models at different scales. On the other hand, by fractal analysis I refer to the tool that opens the possibility of addressing the relation between relevant scales in terms of fractals—i.e., how structural properties are constant at both higher and lower scales. Fractal structures are found in many different physical and biological systems, and fractal analysis is a mathematical tool that has been successfully applied in many disciplines (Moreau et al. 2009, Bizzarri et al. 2011), including psychology and the behavioral sciences (Van Orden et al. 2003). These two kinds of analysis allow, first, for a straightforward inclusion of resonance at the intra-organismic scale within ecological research and, second, for the generation of several testable hypotheses regarding its role in the activity of cognitive systems.

Finally, I take a step back and analyze the way tools and concepts from ecological dynamics and the dynamic systems approach to the cognitive sciences are informative of some of the correlates of neural reuse as a principle for the functional organization of the brain.

Concretely, I examine the possibility for coordination dynamics and one of its main features, metastability (Kelso and Tognoli 2007), to be the link between Warren’s behavioral dynamics (2006), my own theory of resonance, and Anderson’s neural reuse (2014). Beyond being such a link, neural reuse’s correlates, such as active search—the active selectivity of different neural configurations to respond to different functional requirements while maintaining the dispositional tendencies of each brain region—might find foundation in metastability as well. In particular, active search is a speculative proposal that accounts for the fact that brain regions are able to synchronize with other brain regions to constitute wider functional systems while, at the same time, each region maintains its own functional preferences. It is my contention that dynamic systems that exhibit features such as metastability capture the kind of behavior of brain regions, such as active search, in which we find different patterns of functional stability and instability at different scales. For example, when a brain region participates in a transient group of synchronized brain regions, it is part of a momentary short-term stability at a larger scale that will become unstable—i.e., that will disappear—as soon as the functional constraint is over. To capture the dynamics of this kind of behavior is to give an account of the dynamics of active search as Anderson describes it. If my contention is accurate, then what we get from my new theory of resonance is not only that neural reuse enables a specific understanding of the phenomenon of resonance at the neural scale, but also that such an understanding of resonance can play a beneficial role in further developing and comprehending some details of neural reuse.

5.1 Behavioral Dynamics and Resonance

At the beginning of this story, especially in the first two chapters and the Interlude, I offered a description of the main tenets of the ecological approach to perception and action. As we have seen, starting as an explanatory strategy that entails the redefinition of the role of body and environment in psychological events, one of the main aspects of the ecological approach is that psychological explanation appeals to laws of interaction between organisms and environments. Given this, a central question within the approach has to do with the kind of tools ecological psychologists can use to discover, develop, and understand those laws of interaction. The most prevalent solution to this question—ecological dynamics—was found by some ecological psychologists in the late 1970s and early 1980s when they proposed the use of tools from Dynamic Systems Theory (see Kugler, Kelso, and Turvey 1980). Using Dynamic Systems Theory to model perception and action turns out to match the requirements of the ecological approach as far as the dynamics of systems at different scales can be captured by sets of differential equations. In this sense, these equations may be taken as dynamic laws for the interactions of a system or a scale of a system (e.g., the organism-environment system, the muscular-skeletal system, or the neural system).

In this section, first, I review one of the most complete instances of ecological dynamics in the current ecological research, Warren’s behavioral dynamics (2006), and then I describe the way resonance may be included in the model. However, before I get to it, I briefly introduce

Dynamic Systems Theory and the main aspects of its application to research within the cognitive sciences.


5.1.1 Dynamic Systems Theory

The broadest and most basic definition of a dynamic system is that it is a system that changes over time. Given this definition, it is not difficult to see that virtually everything is a dynamic system, as virtually any system or event we can observe or think about undergoes quantitative and/or qualitative changes over time. Examples are population growth, radioactive decay, lasers, heart cell synchronization, neural networks, or ecosystems (see Hilborn 1994, Strogatz 1994, Kaplan and Glass 1995, Kelso 1995). All of these are systems or events that, given some initial conditions, change their structure or their activity over time, both in the quantitative sense (e.g., the rate of growth of a population tends to increase exponentially if there is no counterbalance) and in the qualitative sense (e.g., the topological structure of a neural network changes over time depending on its own activity).

A specific set of dynamic systems are complex systems. Besides changing over time as any other dynamic system, complex systems are composed by many units constantly interacting with each other at different scales and “exhibit emergent properties due to the interaction of their subsystems when certain unspecific environmental conditions are met” (Fuchs 2013: 3). One of the more important properties of complex systems is that they (normally) are nonlinear dynamic systems, where ‘nonlinear’ means that the overall state of the system cannot be described just in terms of the states of its components.71 An example of this kind of system is cognitive systems or, to use the ecological jargon, organism-environment systems. The changes and interactions

71 The difference between a linear system and a nonlinear system is that, in the case of the former, the resulting behavior of the system can be described as an addition of the behaviors of its components. On the contrary, due to the high degree of interaction and coupling between the components of a nonlinear system, its behavior cannot be reduced to or explained as an addition of the behavior of each of its components. In a mathematical sense, nonlinear systems are represented by equations whose terms include products, powers, or functions like sin or cos. Geometrically, nonlinear systems form curves (i.e., not straight lines) when represented. Examples of nonlinear systems are non-harmonic oscillators, strange attractors, biological oscillators, neural networks, economic events, etc.

occurring in organism-environment systems—i.e., the behavior of an organism in its environment and the way such a behavior changes—depend upon an intricate set of interactions that, in some cases, occur at many different scales (e.g., the organism-environment scale itself, but also the neural scale, the muscular-skeletal scale, and so on). Given this, organism-environment systems are complex systems and, therefore, nonlinear dynamic systems. The insight developed by some ecological psychologists some decades ago is that, as with other dynamic systems, organism-environment systems can be studied by using the tools provided by Dynamic Systems Theory.

Dynamic Systems Theory (DST hereafter) is a way to study the change over time of systems or events. The beginnings of DST may be traced back to the 17th century, when Isaac Newton discovered calculus and, in particular, differential equations.72 Such a discovery, in combination with his laws of motion and universal gravitation, allowed him to solve the problem of the motion (i.e., change of position over time) of the Earth around the Sun—also called the two-body problem. Differential equations provided a new way to solve the problem of change over time, but they had an intrinsic problem: when systems were more complex (e.g., the three-body problem), analytic solutions to the equations were very difficult if not impossible to find.

However, by the end of the 19th Century, Poincaré proposed a new approach to the solution of problems based on differential equations: a qualitative/geometrical solution instead of a quantitative/analytical solution. Poincaré switched the focus of research on change over time from looking for solutions in terms of explicit formulas and numerical results to geometrical representations of the possible states of a system through time. Such a geometrical approach

72 Some readers would argue that the one who discovered calculus was Leibniz and not Newton. The agreement among scholars, however, seems to be that Newton discovered calculus first, although Leibniz was the first to publish it. In any case, for what is important to us in this chapter, what matters about the discovery of calculus is its usage in the study of motion and, thus, Newton is the most relevant figure in this case.

proved to be highly informative of the qualitative changes of systems over time and is the basis of contemporary DST.73

DST offers models to capture the change over time of a system given some variables and some control parameters. Variables refer to features of the system that change over time and capture its quantitative and qualitative evolution. Control parameters refer to other features of the system that determine the change of the variables (e.g., initial conditions). A typical DST model is a differential equation or a set of differential equations—composed of variables and control parameters—that is called the system.

The key to the geometrical method for addressing this kind of system is that the system can be represented in a geometrical space that captures the qualitative transitions it can undergo over time given some initial conditions. For example, the change of a given variable can be geometrically represented with respect to its own values (the phase space) or with respect to time—for an example, see Figure 15. These representations reflect the fixed points of the system; namely, the constant solutions of the system as it evolves through time. Fixed points may be attractors, i.e., points towards which the system tends to evolve, or repellers, i.e., points from which the system tends to escape. For example, two pendulums hung close together on the same wall tend to synchronize their oscillations.74 This phenomenon can be studied in terms of DST. The two pendulums may be described as a dynamic system whose qualitative evolution through time may be captured by the change of one variable, their relative phase of oscillation, given a control parameter, the strength of their coupling based on their reciprocal mechanical influence through the wall. Given this, the synchronized state (i.e., relative phase equal to zero) will be an attractor for

73 See Strogatz (1994) for a brief but more complete history of DST. 74 It is well known that this general phenomenon—i.e., pendulums mounted next to each other on the same support often become synchronized—was first observed and described by Christiaan Huygens in 1665 (see Strogatz 2003).

the system, while some other non-synchronized states (i.e., relative phase different from zero) will be repellers.

Figure 15. (a) Vector field of the equation ẋ = sin x. The vector field is built over the phase space (x, ẋ), and the trajectory of the equation is indicated by adding solid points, empty points, and arrows on the x axis. Solid points are attractors (π) and empty points are repellers (2π). Arrows on the x axis indicate the direction of change of the function given some initial conditions. For example, if the initial condition is x = π/4, the system will tend to π. (b) Representation of the change of x over time (t) given the same equation, ẋ = sin x, and an initial condition of x = π/4. Again, x tends to π. (c) Representation of the change of x over time (t) given the same equation, ẋ = sin x, and different initial conditions. Again, x always tends to approach π and to move away from 2π. (Images from Strogatz 1994: 17-18, figures 2.1.1, 2.1.2, and 2.1.3).
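To make the qualitative claims of Figure 15 concrete, the following short sketch (my own illustration in Python, not part of Strogatz's figures) numerically integrates ẋ = sin x from a few initial conditions; the step size, duration, and initial conditions are arbitrary illustrative choices. Trajectories started between 0 and 2π should converge toward π and move away from the repellers at 0 and 2π.

    import numpy as np

    def integrate(x0, dt=0.01, steps=2000):
        """Euler integration of the one-dimensional system dx/dt = sin(x)."""
        x = x0
        for _ in range(steps):
            x += dt * np.sin(x)
        return x

    if __name__ == "__main__":
        for x0 in (np.pi / 4, np.pi / 2, 3 * np.pi / 2, 2 * np.pi - 0.1):
            print(f"x0 = {x0:.3f}  ->  x(final) = {integrate(x0):.3f}  (pi = {np.pi:.3f})")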

Starting in the 1980s, DST has been applied to many different studies of perception and action, and to the field of the cognitive sciences in general. Despite the peculiarities of every model, the general use of DST in the sciences of the mind may be captured in a simple, abstract form.

Following Randall Beer (1995a), organism and environment can be defined in terms of two continuous-time coupled dynamic systems, each one described in terms of a differential equation. As they are coupled systems, the changes in the environment are a function of the states of both the environment and the organism, and the changes in the organism are a function of the states of both the organism and the environment (see Figure 16).

Figure 16. DST in the cognitive sciences. Organism and environment are described as two coupled dynamic systems. The dynamics of the environment, ẊE, are a function, ε, of its own states, XE, and of the states of the organism, XO, whose variables enter ẊE as parameters via a coupling function σ. The dynamics of the organism, ẊO, are a function, α, of its own states, XO, and of the states of the environment, XE, whose variables enter ẊO as parameters via a coupling function μ. (Figure based on Beer 1995a: 181, equation 5).

When DST is applied to the sciences of the mind in such a way, the qualitative changes in the organism-environment system are captured by the system of equations that describe the coupled dynamics of organism and environment. In terms of perception and action, for example, a model of this form describes some behavior of an organism in its environment.75
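As a toy illustration of this abstract scheme (my own sketch, not Beer's simulation), the code below couples two continuous-time systems with exactly this structure: the change of each system depends on its own state and, through a coupling term, on the state of the other. The particular functions and coupling gains are arbitrary choices made only to have something concrete to integrate.

    import numpy as np

    # Toy version of the scheme in Figure 16: two coupled dynamic systems.
    # x_e is the state of the environment, x_o the state of the organism.
    # The concrete right-hand sides below are arbitrary illustrative choices.

    def d_env(x_e, x_o, k_sigma=0.5):
        """Change of the environment: its own state plus a coupling to the organism."""
        return -0.2 * x_e + k_sigma * np.tanh(x_o)

    def d_org(x_o, x_e, k_mu=0.8):
        """Change of the organism: its own state plus a coupling to the environment."""
        return -0.5 * x_o + k_mu * np.sin(x_e)

    def simulate(x_e0=1.0, x_o0=0.0, dt=0.01, steps=5000):
        x_e, x_o = x_e0, x_o0
        for _ in range(steps):
            # Simultaneous update: both derivatives are evaluated on the old states.
            x_e, x_o = x_e + dt * d_env(x_e, x_o), x_o + dt * d_org(x_o, x_e)
        return x_e, x_o

    if __name__ == "__main__":
        print("final (environment, organism) states:", simulate())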

5.1.2 Behavioral Dynamics

As we have seen, ecological dynamics combines ecological psychology with DST models to account for perception and action. The ongoing interactions at different scales of the organism-environment system are described by dynamic models that capture their qualitative changes over

75 DST is a broad theory, and there are many topics I have not addressed in this brief introduction: many other concepts, other kinds of geometrical representation, other systems, etc. To offer a complete account of DST is beyond the scope of this work, so I have just highlighted the points that allow us to understand the role of DST in the explanations provided by ecological psychologists.

time. However, there is still an open question: How are the different scales integrated in a coherent research framework? In other words, in what specific way may the dynamics of the organism-environment system, the organism, the environment, the neural system, etc., be related to each other? To date, the most complete and exhaustive answer to this question is the one offered by behavioral dynamics.76

Behavioral dynamics, also known as information-based control, is a neo-Gibsonian approach to perception and action, mostly concerned with the prospective control of action based on the information generated in the interaction between organisms and their environments—i.e., ecological information. Behavioral dynamics has been developed by Bill Warren over the last two decades and its main tenets are summarized in his article “The Dynamics of Perception and Action” (2006). The basic assumptions of the approach are rooted in the central tenets of ecological psychology developed in the present work: the need for an explanation at the organism-environment scale, and the central roles of body and environment in psychological events that are grasped by the Gibsonian idea of the mutuality between perception and action.

According to Warren (2006), given these two assumptions, “stable, adaptive behavior emerges from the dynamics of the interaction between a structured environment and an agent with simple control laws, under physical and informational constraints.” (p. 358).77

This abstract formulation of the chief principle of behavioral dynamics finds its concrete implementation in the use of DST tools. In behavioral dynamics, organism and environment are taken to be two dynamic systems that are coupled and constrain each other—i.e., a standard

76 However, despite being the most complete approach in ecological dynamics, behavioral dynamics is not exempt from challenges and criticisms. See, for example, Fajen (2007). 77 Notice that this claim is fully compatible with the explanatory strategy of embodiment (see Chapter 1), with the ideas of de-localization and anti-computation (see Chapter 2), and with the fact that the environment has to be structured to provide information (informational constraint) and the organism has to be active (physical constraint) in order to act upon the environment and to be sensitive to such information (see Interlude).

application of DST to the cognitive sciences (see Figure 16 above). On the one hand, the organism mechanically constrains the environment by the forces exerted on it (e.g., movements).

On the other hand, the environment informationally constrains the organism through the structures of her sensory fields (e.g., the structure of light in the visual field or the structure of sound in the auditory field). The coupling of these two dynamic systems may be studied, according to Warren, from a broader scope: the dynamics of the organism-environment system as a unit. The dynamic coupling between organisms and environments gives rise to specific organism-environment dynamics that can be explored at this broader scale. This scale, named behavioral dynamics, is the one that provides the name for the framework.78

According to Warren (2006), DST models can be used to capture dynamics at two different scales (see Figure 17). First, environment and organism are two coupled dynamic systems at the same scale. The environment is taken as a dynamic system e and its changes, ė, are a function of its previous states and the forces produced by the organism, F. The organism is another dynamic system o and its changes, ȯ, are a function of its previous states and the information, i, provided by the environment. Thus, the coupling between these two dynamic systems is facilitated by F and i. Second, there is a broader scale that captures the organism-environment dynamics as a unitary event. This scale is named behavioral dynamics and captures the behavior (x) and behavioral changes (ẋ and ẍ) of an organism during its ongoing interaction with its environment. Behavioral dynamics emerge from the dynamics at the lower scale and, at the same time, capture that lower scale in an abstract fashion.

78 Because of this fact, the use of “behavioral dynamics” throughout this section might be confusing. “Behavioral dynamics” refers both to the name of Warren’s approach to the dynamics of perception and action and to the specific dynamics at the organism-environment scale. I have chosen to maintain this naming because it is the way Warren himself uses the concept. However, to avoid possible confusion, I will make explicit which of the two meanings of “behavioral dynamics”—approach or scale—I am using every time the concept appears in the text.


Figure 17. Schema of the dynamics of perception and action. The most important aspects of the approach are the way the environment changes under the constraint of the organism and vice versa. Put simply, the way the environment changes, ė, depends on the environment itself, e, and the forces exerted on it by the agent, F. On the other hand, the way the organism changes, ȯ, depends on the organism itself, o, and the informational constraints posed by the environment, i. From these interactions, a regularity, x, emerges at the scale of the organism-environment system, and its change ẋ and/or ẍ is described as behavioral dynamics. (Based on Warren 2006: 367, figure 4; and Richardson et al. 2008: 175, figure 9.7b).

The framework within which behavioral dynamics generates models allows for an account of perception and action from two standpoints. On the one hand, perception and action may be seen as constrained by a broader scale that incorporates their common dynamics. On the other hand, the broader scale of behavioral dynamics can also be seen as constrained by the dynamics of perception and action at the lower scale. As Warren puts it:

… [A] formal description of behavior requires identifying a system of differential equations whose vector field corresponds to the observed pattern of behavior [at the behavioral dynamics scale]… [A]n explanation of adaptive behavior further requires showing how these behavioral dynamics arise from interactions among the system’s components, that is, how a stable solution is codetermined by physical and informational constraints. (2006: 368; emphasis is mine).

That is to say, to explain behavior, we need to capture the dynamics at the scale of the organism-environment system taken as a unit (behavioral dynamics), but also to describe the way the organism controls its own action and stabilizes its behavior in terms of its coupling with the environment (control laws).

An example of a model at the scale of behavioral dynamics is Fajen and Warren’s steering model for navigation in sparsely crowded environments (2003). This model predicts the trajectories followed by agents given just the constraints posed by obstacles and goals (or targets), capturing the change in the steering of agents with respect to these components of the environment. Conceptually, Fajen and Warren’s model operates both at the scale of behavioral dynamics and at the scale of control laws. An agent’s steering is, in itself, always a steering with respect to some layout of the environment given some reference axis. However, different aspects of such steering will be grasped at each of the two possible scales depending on the variables used to capture their dynamics. For the analysis of the model, I will proceed from the scale of behavioral dynamics to the scale of control laws.

In Fajen and Warren’s model (2003), the goal is understood as an attractor for the steering of an agent (see Figure 18). Given an arbitrary reference axis, the direction of the agent, ϕ, and the position of the goal, ψg, give rise to the angle βg. To guide her own steering towards the goal, the agent must close that angle βg. Such a closing is defined in terms of the dynamic equation of a damped mass-spring system, which defines the two control parameters b (damping) and k (stiffness). Obstacles, in turn, are defined in the same terms, but in this case the agent

must open the angle βo to avoid hitting them. In this sense, the steering of the agent (ϕ) is attracted by the goal and repelled by the obstacles.

Figure 18. Graphical depiction of the relation between an agent and a goal in Fajen and Warren’s steering model (2003). Given an arbitrary reference axis, the steering of an agent, ϕ, and the position of the goal, ψg, form an angle βg. The goal acts as an attractor for the steering of the agent and such attraction is modelled using the equation of a damped mass-spring and its two typical parameters for damping (b) and stiffness (k). (From Warren 2006: 374, figure 7).

When the dynamics of navigation in a sparsely crowded environment are defined as above, they can be captured with the following equation:

ϕ¨ = – b ϕ˙ – kg βg (e^(–C1 dg) + C2) + ∑ ko βo e^(–C3 |βo|) e^(–C4 do)    (Equation 1)

Where ϕ¨ captures the angular acceleration of the agent’s steering, – b ϕ˙ is the damping term, and – kg βg and + ko βo are the attraction and repulsion terms, respectively. The exponential terms modulate the increase or decay of the attraction to the goal and the repulsion from the obstacles as a function of the distance to the goal (dg) and to the obstacles (do) and of the parameters CN: C1 and C3 set the rate at which the attraction to the goal and the repulsion from an obstacle decay (with goal distance and obstacle angle, respectively), C4 sets the decay of repulsion with obstacle distance, and C2 sets a minimum value that maintains, for example, the attraction to distant goals. On the one hand, the damping term, – b ϕ˙, is included in the model to capture the resistance to turning and, thus, to prevent oscillations in the steering of the agent. Usually, when navigating, agents do not oscillate around the new heading direction after a change in steering, but smoothly settle into the new direction. This is the feature of steering behavior captured by the damping term. On the other hand, the attraction and repulsion terms, – kg βg and + ko βo, are modulated by the spring stiffness parameter k, which captures the strength of the angular acceleration; namely, the strength of attraction towards the goal or of repulsion from the obstacles. The stiffness of a spring is its resistance to deformation—i.e., its rigidity, its resistance to being compressed or stretched. Thus, the higher the value of the stiffness (k), the stronger the attraction or the repulsion, respectively.79
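To give a feel for how Equation 1 produces trajectories, here is a small Python sketch of my own that integrates a one-goal, one-obstacle version of the model. It is a schematic reconstruction rather than Fajen and Warren's own code: the parameter values are only indicative, and I measure the angles as heading minus target direction (ϕ − ψ), so the goal term carries a minus sign and the obstacle term a plus sign exactly as in Equation 1; with the opposite sign convention for β the two terms simply swap signs.

    import numpy as np

    def wrap(a):
        """Wrap an angle to (-pi, pi]."""
        return (a + np.pi) % (2 * np.pi) - np.pi

    def steering_accel(phi, phi_dot, pos, goal, obstacles,
                       b=3.25, k_g=7.5, c1=0.4, c2=0.4,
                       k_o=198.0, c3=6.5, c4=0.8):
        """Angular acceleration of the heading in the spirit of Equation 1.
        Angles beta are taken as (heading - target direction); parameters are
        illustrative values, not authoritative ones."""
        psi_g = np.arctan2(goal[1] - pos[1], goal[0] - pos[0])
        d_g = np.hypot(goal[0] - pos[0], goal[1] - pos[1])
        acc = -b * phi_dot - k_g * wrap(phi - psi_g) * (np.exp(-c1 * d_g) + c2)
        for ox, oy in obstacles:
            psi_o = np.arctan2(oy - pos[1], ox - pos[0])
            d_o = np.hypot(ox - pos[0], oy - pos[1])
            beta_o = wrap(phi - psi_o)
            acc += k_o * beta_o * np.exp(-c3 * abs(beta_o)) * np.exp(-c4 * d_o)
        return acc

    def simulate(goal=(0.0, 8.0), obstacles=((0.3, 4.0),), speed=1.0, dt=0.01, steps=1200):
        pos = np.array([0.0, 0.0])
        phi, phi_dot = np.pi / 2, 0.0      # start heading straight along the y axis
        path = [pos.copy()]
        for _ in range(steps):
            phi_dot += dt * steering_accel(phi, phi_dot, pos, goal, obstacles)
            phi += dt * phi_dot
            pos = pos + dt * speed * np.array([np.cos(phi), np.sin(phi)])
            path.append(pos.copy())
        return np.array(path)

    if __name__ == "__main__":
        goal = (0.0, 8.0)
        path = simulate(goal=goal)
        d_goal = np.hypot(path[:, 0] - goal[0], path[:, 1] - goal[1])
        print("closest approach to the goal:", round(float(d_goal.min()), 3))

Plotting the resulting path would show a detour around the obstacle followed by an approach to the goal, qualitatively in the spirit of the kind of trajectory shown in Figure 19.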

Once navigation in a sparsely crowded environment is described in terms of Fajen and Warren’s model and is formalized in Equation 1, the regularities (trajectories) at the organism-environment scale can be predicted. In other words, the regularities at the scale of behavioral dynamics can be captured. In Figure 19 we can observe the predicted trajectory offered by Fajen and Warren’s model applied to a navigation task in a specific sparsely crowded environment. It shows how the regularities of some observed behavior (grey line) can be captured at the scale of behavioral dynamics by the model (black line). The robustness of the model has been supported by several studies on perception and action (Fajen and Warren 2001, Fajen and Warren 2003, Warren and Fajen 2004, Bruggeman and Warren 2005, Cohen et al. 2005, Fajen and Warren

79 As agents, obstacles, and goals are abstractly connected by damped mass-springs in Fajen and Warren’s model, the role of the stiffness term (k) is twofold. On the one hand, the higher the value of kg, the more difficult it is to stretch the damped mass-spring between ϕ and ψg; namely, the more difficult it is to increase the angle βg and, thus, the stronger the attraction to the goal. On the other hand, the higher the value of ko, the more difficult it is to compress the damped mass-spring between ϕ and ψo; namely, the more difficult it is to reduce the angle βo and, thus, the stronger the repulsion from the obstacle.


2005, Lobo et al., Under Review). However, it still needs to be shown how the analysis at the scale of behavioral dynamics is integrated with the analysis in terms of control laws.

Figure 19. Predicted (black line) and one observed (grey line) trajectories for the navigation of a sparsely crowded environment using an optic-to-haptic sensory substitution device. Initial position is (X, Y) = (0, 100), obstacles are black squares, and the goal is the grey circle. The matching between the predicted and the observed trajectories is statistically significant and such an effect is reinforced when more observed trajectories are included in the model. (From Lobo et al. (Under Review), figure 4).

As Warren (2006) proposes, behavioral dynamics both capture and emerge from the organism-environment interactions constrained by ecological information and the mechanical forces exerted by the organism. The relation between these two kinds of constraints is determined by control laws of the form ȯ = ψ (o, i), where ȯ is the change of the state of an organism while dynamically coupled with its environment and is defined as a function ψ of its previous states, o, and ecological information, i (see Figure 17 above). Warren and Fajen (2004) state that:


A control law is generally considered to be a mapping from task-specific informational variable(s) to action variable(s) that describe observed behavior… If regularities in behavior can be identified at this level of abstraction, it suggests that there are systematic dependencies of action on information. (p. 309).

In this sense, control laws describe the way organisms in a dynamic coupling with their environment deal with ecological information and their own states to determine how they control the forces they exert on the environment (i.e., actions). For example, suppose I am running and I want to stop before I hit an object in front of me. Following the nomenclature of Figure 17 (above), my own action, ȯ, must change so as to go from one active ‘running’ state, o1, to another active ‘stopping’ state, o2. To do so, what is needed is a control law that relates my current state o1 with the ecological information regarding time-to-contact with the object I do not want to hit (the variable τ, as we have noted in previous chapters; see Lee 2009). A specific control law for my stopping behavior will integrate τ and my state o1 to accomplish the desired task (i.e., to reach the state o2). In other words, a control law of my coupling with the environment will reflect the regularities in the sensorimotor dynamics that must be controlled and that both give rise to and are constrained by the regularities in the behavioral dynamics.
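To see what such a control law might look like when written down, here is a deliberately toy sketch (mine, merely schematic and not a model defended in this work) of a stopping strategy in the spirit of Lee's τ: the agent chooses the deceleration that holds τ̇ at −0.5, which amounts to a = v / (2τ). All numerical values are arbitrary.

    def simulate_stop(d0=20.0, v0=6.0, dt=0.01, a_max=8.0, v_stop=0.05):
        """Toy stopping law based on tau: choose the deceleration that keeps
        tau_dot at -0.5, i.e., a = v / (2 * tau). Numbers are illustrative."""
        d, v = d0, v0
        while v > v_stop and d > 0.0:
            tau = d / v                       # informational variable: time to contact
            a = min(v / (2.0 * tau), a_max)   # control law of the form o_dot = psi(o, i)
            v = max(v - dt * a, 0.0)
            d -= dt * v
        return d, v

    if __name__ == "__main__":
        d_final, v_final = simulate_stop()
        print(f"final distance to the object: {d_final:.3f} m at speed {v_final:.3f} m/s")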

The final form of control laws and the kind of ecological information integrated in them are, of course, empirical questions. There are different proposals regarding both issues and it is beyond the scope of this work to analyze all of them.80 However, in order to fully understand the relation between control laws and behavioral dynamics, we will benefit from reviewing a

80 Early examples of control laws are the ‘formulae’ for controlling locomotion proposed by Gibson (1958). Currently, there are different examples of ecological information for diverse perceptual systems— e.g., optic flow (Warren 1998, Warren et al. 2001, Lee 2009) or dynamic touch (Amazeen and Turvey 1996, Turvey et al. 1999, Carello and Turvey 2017)—and several proposals regarding the final form of control laws—e.g., kinematic form, kinetic form, or dynamic form (for a review, see Warren and Fajen 2004, Warren 2006).

concrete example. Again, Fajen and Warren’s steering model (2003) is a good framework for this task. Given Fajen and Warren’s model for navigation through a sparsely crowded environment at the behavioral scale, a control law is proposed (see Warren and Fajen 2004: 332-333):

ϕ˙ = – kg βg (e^(–C1 dg) + C2) + ∑ ko βo e^(–C3 |βo|) e^(–C4 do)    (Equation 2)

Where ϕ˙ is the change in the steering of the organism given its relative position regarding the goal, – kg βg, and the obstacles, + ko βo. The first noticeable feature of this control law for steering change is that it is the first-order (ϕ˙)81 counterpart of the second-order variable (ϕ¨; see Equation 1) that describes the regularities at the scale of behavioral dynamics. The implications of this are twofold. On the one hand, it gives a concrete mathematical sense to the idea of the mutuality between behavioral dynamics and control laws—i.e., that behavioral dynamics both emerge from and capture control laws. The control law is incorporated in the model for behavioral dynamics (Equation 1) by directly including its variables in the equation. On the other hand, in terms of ecological information, the difference between behavioral dynamics and control laws is not so much a qualitative difference as a difference in how the same informational constraints operate at different scales. The informational constraint is the same at both scales and is captured by the angles βg and βo, which are defined as the difference between the steering of the agent and the positions of the goal (βg = ψg – ϕ) or the obstacles (βo = ψo – ϕ), respectively (see Figure 18 above).82
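Read as code, the relation between the two equations is simply that the first-order law of Equation 2 supplies the informational terms that Equation 1 embeds in a damped second-order dynamic. The sketch below (my own schematic, with the same illustrative parameters as before) makes that explicit.

    import math

    def wrap(a):
        """Wrap an angle to (-pi, pi]."""
        return (a + math.pi) % (2 * math.pi) - math.pi

    def goal_term(phi, psi_g, d_g, k_g=7.5, c1=0.4, c2=0.4):
        """Attraction toward the goal (angles taken as phi - psi; toy parameters)."""
        return -k_g * wrap(phi - psi_g) * (math.exp(-c1 * d_g) + c2)

    def obstacle_term(phi, psi_o, d_o, k_o=198.0, c3=6.5, c4=0.8):
        """Repulsion from one obstacle (same sign convention, toy parameters)."""
        beta_o = wrap(phi - psi_o)
        return k_o * beta_o * math.exp(-c3 * abs(beta_o)) * math.exp(-c4 * d_o)

    def control_law(phi, psi_g, d_g, obstacles):
        """Equation 2: a first-order steering rate, phi_dot = psi(o, i)."""
        return goal_term(phi, psi_g, d_g) + sum(obstacle_term(phi, p, d) for p, d in obstacles)

    def behavioral_dynamics(phi, phi_dot, psi_g, d_g, obstacles, b=3.25):
        """Equation 1: the same informational terms wrapped in a damped second-order dynamic."""
        return -b * phi_dot + control_law(phi, psi_g, d_g, obstacles)

    if __name__ == "__main__":
        # Heading 0.2 rad off a goal 5 m away, with one obstacle 0.37 rad to the left at 3 m.
        print(control_law(0.2, 0.0, 5.0, [(0.37, 3.0)]))
        print(behavioral_dynamics(0.2, 0.0, 0.0, 5.0, [(0.37, 3.0)]))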

81 In fact, being a first-order equation makes it a control law of the form ȯ = ψ (o, i), as required in Figure 17. 82 Actually, using the outcome of the control law (Equation 2) in the model for behavioral dynamics (Equation 1)—given the adequate mathematical transformations—leads to the same predictions (see Warren and Fajen 2004).


Given the multi-scale constraints of ecological information, the second noticeable feature of the control law for steering behavior (Equation 2) is that the information needed to control the steering of the agent is specified by the angles βg and βo. Both angles are, of course, themselves abstractions. However, it is possible to define βg in terms of variables “specified by optic flow and the proprioceptive locomotor axis [egocentric direction], whereas the direction of a goal… is given by its visual direction” (Warren and Fajen 2004: 332; see also Warren et al. 2001, Li and Warren 2002, Wilkie and Wann 2003). Then, the ecological information that constrains this specific task can be formulated as:

βg = (ϕloco – ψg) + wv(ϕflow – ψg) (Equation 3)

Where βg is the difference between the steering of the agent and the location of the goal, ϕloco – ψg determines the egocentric direction with respect to the goal, and ϕflow – ψg determines the visual angle between the goal and the steering of the agent in the optic flow. The structure of this flow is co-determined by the layout of the environment, w, and the velocity of the agent, v. In this sense, ecological information is defined, precisely, at the ecological scale; namely, in terms of organism-environment relations.

This information is integrated in the control law (Equation 2) as a variable—modulated by the constant kg—for the control of the dynamics of the damped mass-spring that, according to the model (see Figure 18 above), unites the steering of the agent and the location of the goal. In this way, the regularities of the agent’s sensorimotor dynamics while coupled with her environment are captured by the control law.

In summary, Warren’s behavioral dynamics approach (aka information-based control models; Warren 2006) requires the description of the dynamics at the organism-environment scale (behavioral dynamics) and the control laws at the scale of the organism in order to offer a complete explanation of behavior. As we have seen with the example of Fajen and Warren’s steering model (2003), this approach offers a very complete and detailed explanation of perception and action at these different scales. However, as might be inferred from the previous discussion, the viability of the approach rests on one aspect of the relation between information and action: organisms must be sensitive to ecological information and use it in terms of their own intrinsic dynamics. Behavioral dynamics, though, remains silent regarding the way organisms are able to carry out such a process. Luckily, by shedding some light on this issue, resonance may help to further complete Warren’s approach.

5.1.3 Resonance in Behavioral Dynamics

The framework of behavioral dynamics establishes the importance of the coupling between organisms and environments in determining the dynamics of perception and action. Concretely, the control of the action of an organism needs to take into account its own states and the kind of ecological information it is able to perceive. The relation between these two variables, which both enables and is captured by the regularities at the scale of behavioral dynamics, is formalized by control laws. However, the existence of control laws assumes that organisms somehow integrate ecological information and their own internal dynamics to generate the forces that constitute their actions. Although the regularities of the relation between perception and action enabled by such an integration are captured by control laws, Warren’s approach remains silent regarding the way the integration itself is carried out by intra-organismic systems (e.g., in the case of humans, the electrical dynamics of the nervous system itself).83 In this section I address this

83 Notice that, although the integration between ecological information and organismic states occurs at the intra-organismic scale, the control of action itself remains distributed through the organism-environment system as required by behavioral dynamics. The integration of ecological information and organismic

issue by showing how the theory of resonance can account for the integration of ecological information and the dynamic states of organisms.

As I have described in previous chapters, resonance (or ecological resonance) is the process by which the dynamics at the intra-organismic scale are constrained by the same variable of ecological information that constrains the dynamics at the organism-environment scale. In a more specific sense, resonance is the process by which ecological information generated at the organism-environment scale constrains the dynamics of the CNS. The basic way in which resonance occurs is by the coupling (i.e., synchronization, phase locking, modulation, linear/nonlinear correlation) of neural dynamics and organism-environment dynamics thanks to mechanisms of resonance such as those exemplified in Chapter 4. But how can the process of resonance be included in behavioral dynamics? The most natural way to do this is to understand the CNS as a dynamic system that is coupled to the other two dynamic systems—organism and environment—already coupled in the model (see Figure 20). The relation of coupling will be a special one as the CNS is nested in the organism, but in general terms it is still a regular dynamic coupling in which ecological information, i, constrains the dynamics at the neural scale, ṅ, given a function of resonance of the form ṅ = ρ (n, i). The neural dynamics that result from this constraint constitute a proper part ∂o of the overall dynamics of the organism (ȯ).

states is not the control of the action. As Warren (2006) puts it: “[C]ontrol is distributed over the agent-environment system. I interpret this statement to imply that biology capitalizes on the regularities of the entire system as a means of ordering behavior.” (p. 358). In this sense, the integration of ecological information and organismic states is needed for the different constraints at different scales to be possible—i.e., in order to have an informational constraint on the organism-environment coupling, the organism has to be sensitive to information—but it does not guide or control behavior by itself, as it depends on regularities that only appear at other scales (e.g., information in the optic flow, which depends on the layout of the environment and the velocity of the organism—and not on its internal states—, constrains the control of action, being a property of the organism-environment system).


Figure 20. Behavioral Dynamics and Resonance. Resonance is added to the framework of behavioral dynamics in the form of a dynamic system that captures the dynamics of the CNS and that is nested within the dynamic system of the organism. As it does with the dynamics of the organism when it is coupled with its environment, ecological information, i, also constrains the neural dynamics given a function ṅ = ρ (n, i). Neural dynamics constitute a proper part ∂o of the overall dynamics of the organism ȯ (yellow arrow), which constrains the dynamics of the environment, ė, by exerting forces F on it given the function F = β(ȯ).

The just noted special character of the coupling between organism and neural dynamics (yellow arrow) is due to the fact that the neural system is nested in the organism; namely, the neural system is an intra-organismic system. Such a nesting makes the coupling between organism and neural dynamics not only a relation of mutual constraint but also a relation of constitution—neural dynamics are a part (∂o) of the overall dynamics of the organism (ȯ). Most of the time such a distinction between constraint and constitution will not entail practical differences in terms of modelling or experimental research.84 However, as we shall see, the constitutive aspect

84 Actually, the overall dynamics of the organism, ȯ, may also be seen as nested within the overall dynamics at the behavioral dynamics scale, ẋ/ẍ, and, in principle, this fact entails no deep effects in terms of the development of the behavioral dynamics approach.

of neural dynamics regarding the organism offers a possibility to analyze their coupling in a very specific way in terms of fractal and scalar relations (more on this in section 5.2).
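Before turning to neural reuse, it may help to see a minimal toy version of what a resonance function of the form ṅ = ρ (n, i) could look like. In the sketch below (my own, purely schematic), the 'neural' variable is a damped oscillator driven by an informational signal i(t); its amplitude grows when the driving frequency approaches the oscillator's natural frequency, which is the minimal sense of resonance used in this work. Natural frequency, damping, and driving frequencies are arbitrary choices.

    import numpy as np

    def driven_amplitude(omega_drive, omega_n=10.0, zeta=0.1, dt=0.001, t_end=30.0):
        """Steady-state amplitude of n'' + 2*zeta*omega_n*n' + omega_n**2 * n = i(t),
        with i(t) = cos(omega_drive * t): a crude stand-in for n_dot = rho(n, i)."""
        n, n_dot, peak = 0.0, 0.0, 0.0
        for k in range(int(t_end / dt)):
            t = k * dt
            i_t = np.cos(omega_drive * t)                       # informational driving signal
            n_ddot = i_t - 2.0 * zeta * omega_n * n_dot - omega_n ** 2 * n
            n_dot += dt * n_ddot
            n += dt * n_dot
            if t > t_end / 2:                                   # skip the initial transient
                peak = max(peak, abs(n))
        return peak

    if __name__ == "__main__":
        for w in (2.0, 6.0, 10.0, 14.0, 20.0):
            print(f"driving at {w:4.1f} rad/s -> amplitude {driven_amplitude(w):.4f}")
        # The amplitude should peak near the natural frequency of 10 rad/s.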

The inclusion of resonance within the behavioral dynamics approach, as shown in Figure 20, may be further detailed. To do so, it is useful to exemplify the new approach by taking into account the other main aspect of the story of resonance I have offered; that is, taking neural reuse (Anderson 2014) as the principle for the functional organization of the CNS.

As we have seen in Chapter 3, neural reuse’s main tenet is that brain regions, although having their own propensities to participate in specific tasks (e.g., visual perception, decision making, or motor control), are used and re-used in many different cognitive events. Thus, neural reuse predicts that, for each cognitive task, we will find groups of dynamically coordinated brain regions participating in it (see Figure 10 in Chapter 3). These groups of brain regions are named TALoNS (Transiently Assembled Local Neural Subsystems; see Anderson 2014: 94, 113-117), and, in terms of behavioral dynamics, they may be understood as the neural states captured by neural dynamics (ṅ). Namely, the dynamics of the CNS, ṅ, are defined by specific transiently stable assemblies of brain regions (TALoNS) and the change from one of them to another given the specific constraints posed by the dynamics of perception and action in concrete situations.

Understood this way, TALoNS are constrained by ecological information in terms of a resonance function ṅ = ρ (n, i) and participate, in terms of constraint and constitution via the function ∂o = μ (ṅ), in the overall organismic dynamics (captured by a control law) that give rise to the forces exerted on the environment by a given organism. Two main consequences follow from this conceptualization of TALoNS.

The first consequence is that TALoNS, like control laws before them, must be explained in terms of sensorimotor regularities framed within the dynamics of perception and action. On the one hand, TALoNS are constrained by ecological information. In this sense, they are part of perceptual dynamics. On the other hand, TALoNS are part of the overall dynamics of the organism that constrain its actions in the environment. Namely, they are also part of action dynamics. Thus, TALoNS participate in the overall perception and action dynamics. Importantly, a single TALoNS participates in these dynamics. In other words, there is no need to define one TALoNS in terms of perception and a subsequent one in terms of action: one and the same TALoNS participates in the whole sensorimotor regularity. This fact is, moreover, compatible with TALoNS being groups of coupled brain regions that have their own activation propensities. Under this understanding of TALoNS, it is fair to expect that regions of the visual and motor cortex, for example, are jointly activated given some specific perception-action dynamics.

The second consequence is that this understanding of TALoNS establishes a structured way of researching the relations between the scales of behavioral dynamics, control laws, and neural dynamics. As behavioral dynamics capture the regularities of control laws, and as the same ecological information that constrains these two scales also constrains neural dynamics, putting the three scales together is a matter of finding dynamic neural patterns that correspond to TALoNS and modelling their coupling with the dynamic systems at higher scales.85 The concrete details of the relation between specific dynamic neural patterns and TALoNS, and the way to measure them, are an empirical question, although the research presented in Chapter 4 might provide some insights (e.g., Large 2008, van der Weel and van der Meer 2009). Moreover, it is fair to expect that the details of the integration of ecological information in neural dynamics, described by the function ṅ = ρ (n, i), will be carried out by some form of resonant mechanism (again, the

85 Notice that researchers could also proceed in the opposite direction: discovering different TALoNS could guide the research on specific ecological information or control laws at other scales. More strongly, an eventual cognitive ontology based on TALoNS might inform research at the organism-environment scale in a deeper sense, for example by revealing differences in the dynamics of tasks that seem to be the same at the higher scale.

ones described in Chapter 4 are clear candidates for such a mechanism; e.g., Gökaydin’s mechanism of coupled resonant oscillators, Large’s nonlinear resonance in nonlinear oscillators).86
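One way such a research strategy could begin to be operationalized, assuming one has simultaneously recorded time series for an ecological informational variable (say, τ sampled during a task) and for some neural signal (say, band-limited EEG power), is to quantify their coupling with a measure such as spectral coherence. The sketch below uses synthetic data in place of real recordings and scipy.signal.coherence for the estimate; actual analyses would of course require preprocessing and statistical testing well beyond this toy example.

    import numpy as np
    from scipy.signal import coherence

    rng = np.random.default_rng(0)
    fs = 250.0                                    # sampling rate in Hz (illustrative)
    t = np.arange(0, 60, 1 / fs)                  # one minute of synthetic data

    # Stand-in for an ecological informational variable: a slow oscillation plus noise.
    info = np.sin(2 * np.pi * 0.8 * t) + 0.3 * rng.standard_normal(t.size)

    # Stand-in for a neural signal partly driven by the informational variable.
    neural = 0.6 * info + 0.8 * rng.standard_normal(t.size)

    f, cxy = coherence(info, neural, fs=fs, nperseg=1024)
    near_drive = (f > 0.5) & (f < 1.2)            # frequencies around the 0.8 Hz component
    print(f"mean coherence near 0.8 Hz: {cxy[near_drive].mean():.2f}")
    print(f"mean coherence 20-40 Hz:    {cxy[(f > 20) & (f < 40)].mean():.2f}")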

5.1.4 Two Models for Resonance?

By now, the theory of resonance has been set out. Moreover, the way the theory fulfills its main aims has been explicated. On the one hand, it has been shown how an operative notion of resonance can be integrated within the ecological approach. The inclusion of resonance within the framework of behavioral dynamics offers both a conceptual setting and the prospects of a research plan for the development of a story about the role of the brain in psychological events while staying true to the main tenets of the Gibsonian proposal. On the other hand, a mechanism for the coupling of dynamics at different scales based on mechanical/informational resonance has been described. Moreover, possible and plausible concrete instantiations of such a mechanism of resonance have been reviewed in Chapter 4.

However, the reader might hold the suspicion that two different models for resonance have been offered in this work. One model was offered at the end of Chapter 3, while a seemingly different model has been offered in the previous section. Are they different models? If so, are they compatible? A closer look at both models will show that they differ only in their level of abstraction.

86 Such a resonant mechanism should not be understood as a neural mechanism in any classic sense. It is a multi-scale mechanism that connects constraints between different systems at different scales. For example, we can talk about a resonant mechanism that makes one tuning fork resonate to another, but we do not think that either of the two tuning forks embodies the mechanism in any strong sense. Rather, we think the mechanism is some kind of mechanical or informational process that connects both tuning forks in terms of resonance.


The first model of resonance—offered in Chapter 3—shows the main aspects of the theory (see Figure 21). Neural dynamics (ND) are nested within organism-environment dynamics (O-ED) and they are coupled in terms of the informational variables ψ and χ, where χ = kψ.

Figure 21. Abstract Model of Resonance. It shows the nesting of neural dynamics (ND) within organism-environment dynamics (O-ED). Also, it shows that resonance is the process by which ND are constrained by the same information that constrains O-ED, which is why χ = kψ—i.e., χ is a linear or nonlinear transformation of ψ in terms of the factor k.

The second model includes resonance within the framework of behavioral dynamics and nests neural dynamics in a multi-scale system in which we find two other scales—control laws and behavioral dynamics (see Figure 20 above).

The two models might seem different in kind at first, but they are actually the same model at different levels of abstraction. The abstract model depicted in Figure 21 is the most general. It provides an account of resonance that captures its basic features. Meanwhile, the model based on behavioral dynamics depicted in Figure 20 is just the concrete application of the general abstract model to the special case of an already established approach within ecological research. The two higher scales of the model based on behavioral dynamics (control laws and behavioral dynamics) reflect the organism-environment dynamics described in the abstract model, and neural dynamics are the lower scale in both models. Also, in both of them, ecological information constrains all the scales of the system. What, then, are the benefits of providing the two models? There are two main payoffs. On the one hand, the abstract model remains medium-independent. Even if the model based on behavioral dynamics turns out not to be successful, the abstract model can still be accepted and, eventually, concretized in a different framework. On the other hand, the model based on behavioral dynamics operationalizes the theory of resonance to the extent of providing concrete guidelines for research. Also, it provides concrete testable hypotheses (e.g., the relation of TALoNS with control laws and behavioral dynamics).

5.2 Multi-Scale Approach and Fractal Analysis

As is clear throughout this chapter, one of the more prominent features of the theory of resonance is that it promotes a multi-scale account of psychological events. This is not an uncommon feature in the sciences, both between and within fields. Suppose, for example, we want to gain knowledge about a tiger. We will probably need to review the outcomes of different sciences that inform us about tigers at different scales (e.g., biology, zoology, ecology, geography, economics, and so on). The same might be said regarding an account of the function of an organ or of a medical condition, for instance.

A similar flavor of multi-scaling may be witnessed within specific sciences. Within physics, for example, an account of the different features of a material requires the joint work of physicists at different scales of description (e.g., quantum mechanics, solid-state physics, and crystallography). Normally, the more complex the phenomenon, the more scales that must be addressed to achieve a complete explanation. In this sense, it is fair to expect a multi-scale account of very complex events such as perception and action.


The complexity of psychological events makes the multi-scale approach to them common in the cognitive sciences. Experimental psychology and the neurosciences, for instance, are allegedly trying to provide an account of the same kind of phenomena at (at least) two different scales. In a more concrete sense, within ecological dynamics itself—or within the dynamic approach to perception and action, more generally—the idea of having a multi-scale account of a phenomenon of interest is also a fairly common one. For this reason, it is worth mentioning that, in proposing a theory of resonance that combines different scales to account for psychological events, I am not proposing a revolutionary methodology. Moreover, as already noted above, despite being a multi-scale approach based on DST, its technical details are not dramatically different from standard DST.87 Different variants of a similar kind of multi-scale methodology have been both proposed and applied to research within the field of perception and action (e.g., Van Orden et al. 2003, Riley and Van Orden 2005, Kelso and Tognoli 2007, Van Orden et al. 2012, Tognoli and Kelso 2014, Ibáñez-Gijón et al. 2016, Teques et al. 2017). These applications range from different combinations of behavioral and neural scales to different levels of coordinative structures (i.e., synergies). An interesting consequence of the common usage of a multi-scale approach within ecological dynamics—and within research in ecological psychology, more generally—is that there are no rigid constraints regarding the selection of the scale of interest. I have focused my interest on the relation between the ecological scale and one intra-organismic scale, i.e., the CNS

(neural dynamics). However, it is possible to address perception and action from other scales: the neuro-muscular-skeletal scale, the scale of kinematics, or the scale of kinetics, for example. Although I have defined resonance in a very narrow sense as the constraint of neural dynamics

87 Any kind of comprehensive account of such details is completely out of the scope of this work. However, the NSF’s Tutorial in Contemporary Non-Linear Methods for the Behavioral Sciences (edited by Michael Riley and Guy Van Orden) is a good introduction to the field. See: https://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.jsp.

by ecological information, it is in principle possible to track the same kind of constraint at some intermediate scale. For example, an ecological variable could play a role in neural dynamics but also in kinematic relations between different parts of the body, and such a constraint might also be modelled in terms of resonance.88

Beyond the commonality of multi-scale approaches in ecological dynamics, my own account of resonance holds a particularity that is worth noting and that allows for the use of other tools from complexity science to complete the analysis of psychological events. The ontological and temporal relation between the scales involved in the explanation of resonance is a relation of nesting. The two general scales relevant for explanation (organism-environment and intra-organismic) are both physically and temporally nested—concretely, the intra-organismic scale is nested in the organism-environment scale (see Figure 20 and Figure 21, above). In other words, the intra-organismic scale accounts for a fast-dynamics sub-system of the system that exists at the ecological scale. For this reason, the scales are taken to be different descriptions of the same complex system: they are always intrinsically coupled and may be understood as two different physico-temporal scales of the same phenomenon. Such a particular relation allows for a fractal analysis of the whole complex system.

In general terms, fractal analysis (Mandelbrot 1982, Feder 1988, Schroeder 1991, Peitgen et al. 1992, Iannaccone and Khokha 1994) allows for the detection of structural similarities between the different physical and temporal scales of a complex system—also known as fractals or scale-free patterns. Fractals are structural regularities found in a complex system no matter the physical or temporal scale at which you observe it. For example, it does not matter

88 I am not addressing these other scales in this work, but I take the selection of the right set of scales as a framework for investigation to be a task for the scientist. Whether or not my theory of resonance is capable of accounting for two scales different from the ecological and neural ones might be a matter of debate—i.e., whether we can safely talk about integration of ecological information outside of the CNS—but I think there are no theoretical reasons to deny the possibility a priori.

whether you look at complete behaviors or neural spikes, or whether you sample the system in intervals of milliseconds or hours. In the particular case of cognitive systems, these scales may be the organism-environment system, the muscular-skeletal system, or the neural system, for example, but also scales that are defined in terms of time: full actions, the movements that compose actions, specific brain dynamics, etc. In this context, resonance seems to be amenable to fractal analysis: resonance refers to the relation between a complex system (the CNS) that is itself a part of—is nested in—a bigger complex system (the organism-environment system), so looking at the nested complex system is, as already noted, looking at the nesting complex system at a specific scale. Importantly, this nesting relation does not only hold in the physical sense, but also in the temporal sense: the dynamics of the nested complex system (CNS) unfold at faster time scales than those of the nesting complex system (organism-environment).

This rationale supporting the fractal analysis of resonance motivated some researchers in ecological psychology and complex systems (see, for example, Van Orden et al. 2003, Liebovitch and Shehadeh 2005, or Aks 2005) to propose fractal analysis to trace relations between different scales—usually temporal ones—of the behavior of organism-environment systems. There are two main contributions of fractal analysis to this kind of research. On the one hand, fractal analysis provides a new set of statistical methods for the analysis of psychological events, for example, the study of probability density functions in terms of power laws (see

Liebovitch and Shehadeh 2005). On the other hand, the use of fractal analysis within the approach of ecological dynamics entails a further development of some conceptual cornerstones of the theory. This conceptual development is mostly based on the idea that psychological systems exhibit self-organized criticality (Bak 1990, Turcotte 1999, Bak 1996, Jensen 1998, Juarrero 1999, Van Orden et al. 2003).


Self-organized criticality refers to the fact that complex systems—cognitive systems among them—endogenously organize themselves around critical states. A critical state is understood as a poised state that involves all the scales of the system and that is highly context sensitive. Van Orden et al. (2003), in their application of the idea of critical states to cognitive systems, say that a critical state “is a global state that is acutely context sensitive. Criticality refers to a precise balance among constraints; that is what it means to be in a critical state.” (p. 332). In other words, a critical state involves a specific balance of constraints at all scales of a system and, thanks to its high context sensitivity, such a state is able to rapidly change from one behavioral pattern to another just because of a subtle change in the contextual conditions. Originally, self-organized criticality was proposed by Bak et al. (1987, 1988; see also

Bak 1990, 1996) as the mechanism for the fractal structure of nature both at the physical level and at the temporal one. Thus, the fact that complex systems organize themselves around these critical states in which constraints at multiple scales are intimately balanced is what makes nature exhibit such physical and temporal patterns.

Alicia Juarrero (1999) formulated a specific role for self-organized criticality in terms of cognitive systems: intentions in cognitive systems are self-organized critical states. This was labeled Juarrero’s Conjecture by Van Orden et al. (2003: 333) and entails that different scales of cognitive systems are related in a fractal sense. If intentions are critical states, all the physical and temporal scales of the system (e.g., behavioral (full action), kinematic (specific movements), neural, and so on) must hold a fractal relation. For example, if the constraints of the dynamics of a higher scale (say, the organism-environment scale) are balanced and actually serve as the constraints of the dynamics of a smaller scale (say, the neural scale)—e.g., as stipulated in the theory of resonance proposed here—the whole system must show a fractal structure.


In a more concrete sense, according to Van Orden et al. (2003), the idea of self-organized criticality and Juarrero’s Conjecture predict that what in a psychology or a neuroscience experiment is usually taken as unsystematic random variability (error, noise) is actually a source of information, because such variability will exhibit a fractal structure. Thus, a fractal analysis will find 1/ƒ noise (pink noise) in its temporal distribution. For example, suppose an experiment in which a participant is asked to grasp a glass of water with her right hand. Every time the participant does that, she will perform basically the same behavior, the same movement. However, there will always be some variability from reach to reach—i.e., the movement will be slightly different in each trial. If we analyze these variations as a temporal distribution, we will find a fractal structure (1/ƒ or pink noise) in the distribution. This is the mark that a higher scale (e.g., the behavioral/intentional scale; the intention to grasp the glass) is constraining a lower one (e.g., the muscular or the neural scale). Following this logic, if neural dynamics are temporally constrained by some ecological interaction, as established by my account of resonance, their temporal variability will exhibit a fractal pattern (1/ƒ or a pink noise pattern). For example, a fractal relation is expected to hold between specific kinds of behavior and specific TALoNS that are constrained by the same ecological informational variable. Therefore, an analysis of the variability in neural activation regarding specific TALoNS—or specific brain dynamics constrained by ecological information—taken in the form of a time series, must show a 1/ƒ or pink noise pattern. Self-organized criticality, then, provides specific testable hypotheses regarding resonance.
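As a rough sketch of how this prediction could be checked in practice, the code below (schematic, with synthetic series standing in for real trial-to-trial or neural variability) estimates the slope of the power spectrum on log-log axes: a slope near −1 is the signature of 1/ƒ (pink) noise, while a slope near 0 indicates white noise. Real analyses typically rely on more careful estimators, such as detrended fluctuation analysis, so this should be read as a first pass only.

    import numpy as np

    def spectral_slope(series, fs=1.0):
        """Fit a line to log10(power) versus log10(frequency); a slope near -1 suggests 1/f noise."""
        series = np.asarray(series, dtype=float) - np.mean(series)
        freqs = np.fft.rfftfreq(series.size, d=1.0 / fs)[1:]      # drop the DC bin
        power = np.abs(np.fft.rfft(series))[1:] ** 2
        slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
        return slope

    def synthetic_pink_noise(n=4096, seed=0):
        """Approximate 1/f noise obtained by shaping white noise in the frequency domain."""
        rng = np.random.default_rng(seed)
        spectrum = np.fft.rfft(rng.standard_normal(n))
        freqs = np.fft.rfftfreq(n)
        spectrum[1:] /= np.sqrt(freqs[1:])        # amplitude ~ 1/sqrt(f), so power ~ 1/f
        spectrum[0] = 0.0
        return np.fft.irfft(spectrum, n)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        print("white noise slope:", round(spectral_slope(rng.standard_normal(4096)), 2))
        print("pink noise slope: ", round(spectral_slope(synthetic_pink_noise()), 2))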

It is important to note one implication of fractal analysis applied in the cognitive context, in general, and in terms of resonance, in particular. The 1/ƒ noise predicted by the fractal theory is a mark of a scalar relation; namely, a relation in which higher scales constrain lower ones. For example, in the case of Juarrero’s Conjecture, intentions are critical states that constrain the whole behavior of the system. So, the variability at the scale of, say, particular movements may be seen as constrained by the intention. Given this, it is fair to think that a fractal analysis of this kind offers directionality for the coupling between two relevant scales and, more specifically, for the coupling between different scales that takes place in resonance: the higher scale (organism-environment scale) always constrains the lower one (neural scale) and not the other way around.

For instance, if we take it that a specific kind of behavior exhibits a kind of critical state for the organism-environment system, such a critical state will constrain the dynamics at the neural scale. This fact entails the primacy of the ecological scale in any psychological and cognitive investigation: the ecological scale will always be the source of the collective variables that describe a particular critical state and, subsequently, will be used to account for different scales of the system. In this way, my proposal avoids any interpretation based on internalism (i.e., localization) and, at the same time, is compatible with the primacy of the ecological scale in psychological explanation that, as we have already noted, is one of the main tenets of ecological psychology—i.e., the change in the explanatory strategy regarding psychological events (see Chapters 1 and 2).

5.3 Neural Reuse Revisited

The story of resonance I have developed here has offered four outcomes so far. First, I have provided an operational definition of resonance as the coupling between neural dynamics and organism-environment dynamics in terms of ecological information. Second, I have described a mechanism for resonance—a mechanism of synchronization based on resonance, i.e., the increase in the amplitude of an oscillator when it is driven by a force near its natural frequency—and some examples of its possible implementation at the neural scale. Third, I have shown how to integrate resonance into current research in ecological psychology by adding it to the framework of behavioral dynamics. And fourth, I have provided a principle for the functional organization of the brain that offers an adequate environment for resonant events, neural reuse.

Beyond these four outcomes, it is possible to further study the relation between resonance and neural reuse. On the one hand, it is possible to understand how the transient patterns of stable connectivity between regions (TALoNS) may emerge in terms of resonance. And, on the other hand, it is possible to put forward some ways in which the dynamic approach to perception and action that I am favoring here may help to concretize some theoretical concepts of neural reuse such as active search. I turn to these two issues now.

If we want to understand the way TALoNS emerge, we must pay attention both to features of resonance and to features of neural reuse. Given the characteristics of the mechanism of resonance, the coupling between the neural scale and the organism-environment scale must take some oscillatory form. Resonance (mechanical/informational) is described in terms of amplitude changes in oscillations due to some external force. This fact may be translated to the scale of the CNS as a change in the oscillatory neural dynamics constrained by ecological information.89 Thus, neural dynamics must be understood in terms of oscillations.
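For reference, and only as an illustration of the mechanical sense of resonance invoked here, the steady-state amplitude of a damped oscillator driven by a periodic force of amplitude F₀ and frequency ω takes the standard textbook form

A(ω) = (F₀/m) / √((ω₀² − ω²)² + γ²ω²),

which peaks as the driving frequency ω approaches the oscillator’s natural frequency ω₀ (with m the mass and γ the damping coefficient). On the informational reading of resonance proposed in this work, the mechanical driving force is replaced by ecological information, but the qualitative picture of selective amplification near a preferred frequency carries over.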

In terms of neural reuse, TALoNS emerge as the temporary coordination of different neural regions. In this sense, it is not enough that a given neural region resonates to ecological information; rather, a network of regions—that, in principle, might hold different oscillatory dynamics—must temporarily coordinate their dynamics. As we have already seen in Chapter 3, neural reuse allows for a dynamic explanation of brain activity, so a characterization of the brain in terms of oscillatory dynamics is not a problem.90 However, an account of how such oscillatory dynamics may give rise to TALoNS is still needed.

89 In Chapter 4 I have already analyzed the plausibility of such a fact—e.g., van der Weel and van der Meer’s (2009) experiment on the presence of τ (tau) in the dynamics of babies’ visual cortex.

My proposal in this section is that if the functioning of the brain is understood in terms of coordination dynamics (see Kelso 1995, Kelso and Tognoli 2007), we will have the conceptual and empirical tools to address the emergence of TALoNS. Coordination dynamics is a framework for the functioning91 of the brain that “proposes that dynamic coupling between the parts of the brain and between the brain and the world are used to express perception, cognition, consciousness and behavior.” (Tognoli and Kelso 2009: 32). In this sense, coordination dynamics is compatible both with the idea of resonance—as part of behavioral dynamics—and the idea of TALoNS. Actually, the definition provided by Tognoli and Kelso could serve as a definition of TALoNS as well.

The basics of coordination dynamics regarding the functioning of the brain are found in the coordination of neural dynamics at different scales in terms of the HKB model (Haken et al. 1985). The HKB model was first proposed as a model for phase transitions in human hand movements, but it was rapidly generalized to capture phase transitions in many other kinds of systems (e.g., Jirsa et al. 1998, Kelso et al. 1998, Carson et al. 2000, Mechsner et al. 2001, Temprado et al. 2002, Aramaki et al. 2005, Pellecchia et al. 2005). In general, the HKB model is able to predict the change of the relative phase (ϕ) between two oscillators over time; namely, whether the behavior of the two oscillators is stable over time in different regimes (e.g., the in-phase regime, the anti-phase regime, and so on). These oscillators may be human hands, human legs, neurons, or metronomes, for example, and the HKB model is able to capture the qualitative changes in their coordination over time.
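For reference, the relative-phase equation of the HKB model (Haken et al. 1985) is standardly written as

dϕ/dt = −a sin ϕ − 2b sin 2ϕ,   derived from the potential V(ϕ) = −a cos ϕ − b cos 2ϕ,

where ϕ is the relative phase between the two oscillators and V(ϕ) is the potential whose gradient drives the change in ϕ. The ratio b/a acts as a control parameter: when it falls below a critical value (as happens, for example, when movement frequency increases), the anti-phase attractor (ϕ = π) disappears and only the in-phase pattern (ϕ = 0) remains stable—the qualitative phase transition the model was designed to capture.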

90 To recall a claim by Michael Anderson in this regard: “[I]t is worth an initial if brief reflection on an important disanalogy between the brain and a computer: whereas a computer is typically understood as a device that carries out a specific instruction set on (and in response to) inputs, brain responses to stimuli are characterized instead by specific deviations from intrinsic dynamics.” (2014: xx).
91 Notice that this is not a framework for the functional organization of the brain (neural reuse) but for its functioning. In this sense, neural reuse and coordination dynamics are not competing frameworks but complementary ones.

In coordination dynamics, the brain is taken to be composed of non-linearly coupled non-linear oscillators (Tognoli and Kelso 2009: 33) whose coupling is captured by a specific form of the HKB model; thus, the temporal dynamics of the synchronized states of the different brain regions are modelled in terms of their relative phase of oscillation. Such a depiction of the functioning of the brain meets the first desideratum of this section: to have brain function compatible with the mechanism of resonance. Ecological information can constrain brain dynamics in terms of resonance precisely because of their oscillatory nature. Moreover, the idea of having a collective variable for neural dynamics (relative phase, ϕ) opens the possibility of concretizing their informational constraint by an ecological variable (e.g., τ) in terms of the proposed model of resonance (see Section 3.3 or Figure 21). Namely, if we take the dynamics of the organism-environment scale to be constrained by a specific informational variable, τ, and we understand the dynamics of the synchronization between brain regions (neural scale) in terms of their relative phase, ϕ, the model of resonance proposed both in Section 3.3 and in Figure 21 is able to capture the relation between the two scales as ϕ = kτ, where the parameter k reflects the linear or non-linear relation between the variables, ϕ and τ, that capture the dynamics at the different scales.
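As a purely illustrative toy example (the variable names and numbers below are mine, not an analysis taken from Section 3.3), this is how the proposed relation could be checked against simultaneous recordings: generate a τ series for a constant-velocity approach, assume a noisy neural relative-phase series that tracks kτ, and recover k by simple regression.

import numpy as np

# Toy illustration (hypothetical numbers): if the neural relative phase phi
# tracks the ecological variable tau as phi = k * tau, then k can be recovered
# from simultaneous recordings by simple regression.
rng = np.random.default_rng(2)

# tau for a constant-velocity approach: tau(t) = (z0 - v*t) / v = time to contact.
t = np.arange(0.0, 2.0, 0.01)
z0, v = 3.0, 1.2                          # initial distance (m) and speed (m/s)
tau = (z0 - v * t) / v

k_true = 0.8                              # hypothetical coupling parameter
phi = k_true * tau + rng.normal(scale=0.05, size=t.size)   # noisy "neural" phase

k_est = np.polyfit(tau, phi, 1)[0]        # slope of phi against tau
print("recovered k:", round(k_est, 2))    # should be close to 0.8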

Based on the patterns of synchronization between brain regions, proponents of coordination dynamics propose three different states (or schemes) of coordination: uncoupled, phase-coupled, and metastable (Tognoli and Kelso 2014: 36; see Figure 22). Uncoupled brain regions are usually so because they do not interact with each other. Uncoupling is a feature found between regions but not within them—i.e., the intrinsic dynamics of brain regions always hold some form of coupling, as they hold local interactions. In contrast, phase-coupled brain regions are usually so because, despite having their own intrinsic dynamics, their level of interaction is strong enough to overcome their own “personalities” and make them engage in coordinated activity. These two schemes of coordination have been thoroughly studied in recent years (e.g., Singer and Gray 1995, Bressler and Kelso 2001, Fries 2005, Singer 2005, Bressler and Tognoli 2006, Uhlhaas et al. 2009, Wang 2010), but in terms of biological or cognitive systems, they are idealizations. It is rare, if not impossible, to find stable patterns of coordination lasting through time. On the contrary, what is normally found in biological and cognitive systems—like the brain—are alternating moments of different patterns of stability and instability. This phenomenon is captured by the third scheme of coordination mentioned above, the metastable one.

Figure 22. Three schemes of coordination (phase-coupled, metastable, and uncoupled) at four different scales (model, behavior, brain microscale, and brain macroscale). All charts (A-L) show the relative phase (ϕ; y-axis) between two oscillators through time (x-axis) given the three different schemes of coordination and the four scales of observation. In general terms, the more horizontal the lines are, the more coordination is achieved. Thus, we can see how (green) lines are mostly horizontal all the time in the phase-coupled charts (A, D, G, J) and (purple) lines are mostly diagonal—non-coupling—all the time in the uncoupled charts (C, F, I, L). However, the (colored) lines in the metastable charts (B, E, H, K) combine moments of phase-coupling and uncoupling through time without ever settling into either of them. (From Tognoli and Kelso 2014: 37, figure 1.)

Coordination that exhibits metastability or metastable regimes (see Kelso and Engstrøm 2006, Kelso and Tognoli 2007, Kelso 2012, Tognoli and Kelso 2014) combines moments of synchronization (or integration), named dwells, and moments of non-synchronization (or segregation), named escapes. According to Tognoli and Kelso (2014):

Integrative tendencies are strongest during moments of quasi-synchrony or dwells: participating neural ensembles support a collective behavior. Segregative tendencies are observed as a kind of escape behavior: neural ensembles diverge and are removed from the collective effort. (p. 36).

Metastability allows brain regions to be non-linearly coupled (or quasi-synchronized) with other brain regions that hold different intrinsic dynamics without the need for a strong phase-coupling.92 Moreover, the moments of coupling give rise to moments of non-coupling without the need for a different kind of mechanism. The transitions are part of the dynamics of the system. Due to this feature, Kelso and Tognoli (2007) take metastability to be a useful concept to understand the seemingly dual nature of brain regions:

Metastable coordination dynamics reconciles the well-known tendencies of specialized brain regions to express their autonomy, with the tendencies for those regions to work together as a synergy. (p. 40).

If we extrapolate this feature of metastable regimes to neural reuse, we find that metastability may be the way to understand how, despite having their own “personalities”, brain regions form temporary alliances to solve cognitive tasks. In other words, metastability can reconcile the emergence of TALoNS with the autonomous activity of each brain region.

92 Notice how this fact may be interpreted in terms of non-linear resonance (Large 2008). Speculatively, such quasi-synchronization might be understood in terms of harmonics, sub-harmonics, etc.
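A minimal sketch may help fix intuitions about dwells and escapes. The snippet below (my own illustration, with arbitrary parameter values) integrates the extended HKB equation with a symmetry-breaking term δω; when δω slightly exceeds what the coupling can compensate, the fixed points of the relative phase disappear, yet ϕ still lingers near the “ghost” of the old attractor (a dwell) before wrapping around by 2π (an escape).

import numpy as np

# Illustrative sketch (not from the thesis): extended HKB with symmetry breaking,
#   dphi/dt = delta_omega - a*sin(phi) - 2*b*sin(2*phi).
# With a = 1, b = 0.25 the coupling can offset a detuning of up to ~1.3;
# delta_omega = 1.35 lies just beyond that, so the relative phase dwells near
# the ghost of the former attractor and then escapes by 2*pi, over and over.
a, b, delta_omega = 1.0, 0.25, 1.35   # hypothetical parameter values
dt, T = 0.001, 60.0
phi = 0.0
trace = []
for _ in range(int(T / dt)):
    dphi = delta_omega - a * np.sin(phi) - 2 * b * np.sin(2 * phi)
    phi += dphi * dt                  # simple Euler step
    trace.append(phi)

# Long plateaus in phi(t) are dwells; rapid 2*pi jumps are escapes.
print("number of escapes in 60 s:", int(trace[-1] // (2 * np.pi)))

Plotting the resulting trace would resemble the staircase-like metastable charts in Figure 22: long plateaus interrupted by sudden jumps.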

If the brain is organized under the principles of neural reuse and functions in terms of resonance and coordination dynamics, the second desideratum of this section is met. Namely, the idea of the emergence of TALoNS constrained by ecological information finds a perfect environment in which to be developed. It is possible, for example, to expect that a brain region dedicated to vision resonates to a given ecological variable (e.g., τ). By resonating to τ, its dynamics can be integrated in a temporary metastable regime of synchrony with other brain regions and constitute a given TALoNS with them. Such a TALoNS may unite different regions that are engaged in other activities (e.g., motor, attention, etc.), all of them non-linearly constrained by the same ecological variable. The TALoNS, thus, will constitute a transient sensorimotor neural event that will eventually dissolve due to the very dynamics of the metastable system.93

There is a further issue of neural reuse on which metastability may provide some interesting insights. According to Anderson (2014), TALoNS are discovered and selected by a mechanism of active search (i.e., the active selection of different neural configurations—patterns of functional connectivity—in order to respond to different functional requirements, even though each brain region has some specific functional tendencies). However, active search is mostly speculative:

93 It is worth noting that such an event would also be compatible with the idea of self-organized criticality. Understanding TALoNS in terms of metastability allows for interpreting them as self-organized critical states that appear and disappear over time. I will not pursue the analysis of the details of the relation between self-organized criticality and metastability regarding cognitive systems in this work, but it is certainly an interesting issue for future work.


[M]echanisms of interactive differentiation underlying functional development must also include a process of active search: the rapid testing of multiple neural partnerships to identify functionally adequate options… Obviously, this is highly speculative, and establishing the existence of some such process will take significant research effort, but it is far from clear to me how a more passive process could account for both the variety of things that can be learned and the apparent ability (at least in some cases) to rapidly shift the functionally relevant neural bases for a given skill. (Anderson 2014: 58-59).

Metastability, combined with resonance, could provide a basis for active search. In a very basic sense, the intrinsic dynamics of a metastable system account for the emergence and disappearance of TALoNS due to the coordination of different brain regions that are sensitive to different contextual situations—i.e., brain regions that resonate to environmental information.

So, the search for the right TALoNS in a given situation might be as informationally controlled as the behavior of the system at different scales.94 However, the question about how new TALoNS are discovered remains open.

Metastable systems that include noise—i.e., virtually all biological and cognitive systems—are able to guide such a discovery of TALoNS. According to Tognoli and Kelso, “noise… is not essential for the emergence of metastable behavior, though it can allow the system to discover (and in fact stabilize) new states.” (2014: 37). Thus, the inherent noise of a metastable system might be part of the mechanism of discovery of TALoNS by active search.
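Continuing the earlier toy sketch (again, an illustration with arbitrary values rather than a model from the literature), adding a noise term to the same relative-phase equation shows the effect Tognoli and Kelso describe: even when the detuning is small enough for a stable dwell to exist, random fluctuations occasionally kick the system over the barrier into a new dwell, so the phase keeps discovering coordination states that the deterministic dynamics alone would never reach.

import numpy as np

# Stochastic variant of the extended-HKB sketch (Euler-Maruyama, illustrative values):
#   dphi = (delta_omega - a*sin(phi) - 2*b*sin(2*phi)) * dt + noise * sqrt(dt) * xi
# Here delta_omega = 1.1 is BELOW the ~1.3 the coupling can offset, so without
# noise the phase would settle into a single dwell and stay there forever.
rng = np.random.default_rng(3)
a, b, delta_omega, noise = 1.0, 0.25, 1.1, 0.5
dt, steps = 0.001, 60000
phi = 0.0
for _ in range(steps):
    drift = delta_omega - a * np.sin(phi) - 2 * b * np.sin(2 * phi)
    phi += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()

# Any completed 2*pi advances are escapes that only noise could have produced.
print("noise-driven escapes in 60 s:", int(phi // (2 * np.pi)))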

94 Remember that Warren’s behavioral dynamics is also known as information-based control (Warren 2006). This means that the behavior of an organism in its environment is not controlled by any central system, but by the very ongoing interaction at the organism-environment scale and by the information generated there. Thus, what I am suggesting here is that such information-based control extends to all the scales of the system, including neural dynamics.


Summing up, if we take the brain to function as a metastable system, we get a way to fully integrate the different aspects of the theory of resonance: the main tenets of ecological psychology, neural reuse and its correlates (TALoNS and active search), behavioral dynamics, and resonance itself. A complete story about the role of the brain in psychological events from the ecological perspective has been given. Now, I turn to the future.


Coda: The Future

At the beginning of this work I promised to tell you a story. A story that was also a theory. A story of embodied cognition that was also a story of neuroscience. A story about perception, action, and other psychological events. A story about the role of the brain in those events. A story of resonance. To this point, the main parts of the story are told. Inspired by its usage as a metaphor in ecological psychology, an operational notion of resonance has been offered. Such a notion of resonance provides a way to understand the way ecological information can be integrated at the intra-organismic scale; concretely, at the scale of neural dynamics. However, the notion has not been developed in the void. The story has led us from the very significance of the idea of embodiment to the need for an operative way to understand resonance.

The story started with a clear and succinct definition of embodiment as an explanatory strategy that has its origins in the constitution of Modern science in the 17th century (Chapter 1). Embodiment was then characterized as a shift in explanation: from internal (teleological) entities within bodies to laws of interaction between them. For example, to explain the fall of a stone, the Aristotelian-Scholastic strategy of appealing to its substantial form was replaced by Newton’s law of universal gravitation, which captures an interaction between two bodies (e.g., the stone and the Earth). Since the 17th century, embodiment understood in this way has become a generalized strategy in the physical sciences and, to a lesser degree, in the life sciences—e.g., natural selection also assumes an interaction between organisms and environments. However, the first manifestation of embodiment in the sciences of the mind did not appear until the second half of the 20th century.

As defended in Chapter 2, it was not until the work of J. J. Gibson (1966, 1979) that we find an embodied theory for psychology. Gibson developed ecological psychology, in which he embraced the explanatory strategy of embodiment and applied it to perception and action. The explanation for perception and action, he claimed, lies at the ecological scale; namely, at the scale of the interactions between organisms and environments (Gibson 1979). Such a shift in the explanatory strategy of the sciences of the mind required other conceptual changes that, consequently, also required a new framework and new methods of research. These conceptual changes entail, first, a commitment to anti-computationalism and the de-localization of cognitive systems (Chapter 2), and second, a commitment to the structure of environmental information and the active role of the body in sensorimotor events (Interlude). Thus, resonance must be defined within the constraints of these requirements.

After the Interlude on the significance of embodiment, the story of resonance arrived at a place in which the context was set and, so, it faced the need for a positive account of the different tools—both conceptual and methodological—the cognitive sciences provide to make resonance an operative notion that can be used for further research. The positive account started by analyzing the kind of neural environment needed to develop a plausible concept of resonance in the field of the neurosciences while staying true to the main tenets of ecological psychology. To do so, different cognitive architectures were tested in terms of anti-computationalism and de-localization: ACT-R, Semantic Pointer Architecture, Dynamic Systems Architecture, Predictive Processing, and Neural Reuse (Chapter 3). The outcome of that analysis was that Anderson’s neural reuse (2014) is the principle for the functional organization of the brain that best fits a theory of resonance compatible with the main tenets of ecological psychology. By selecting neural reuse as a framework—and given the previous contextual constraints—a concrete, operative notion of resonance finally emerged: resonance is the process by which neural dynamics are coupled to organism-environment dynamics in terms of ecological information.

The next chapter in the story of resonance was devoted to defending the possibility and plausibility of the new operative notion of resonance (Chapter 4). The mechanism of resonance (mechanical and informational at the same time) was made explicit, and different usages of the concept of resonance within the cognitive sciences—e.g., mirror neurons, resonant mechanisms, Adaptive Resonance Theory, etc.—were reviewed. The plausibility of resonance as it is defined in this story has been addressed both in the biological and in the explanatory sense by showing examples of ecological information constraining neural dynamics in tasks where the same information also constrains organism-environment dynamics.

In the last part of the story just told (Chapter 5), the operative notion of resonance has been integrated into Warren’s behavioral dynamics (2006). One of the aims of such an integration was to show the compatibility of the theory of resonance developed here with contemporary research in ecological psychology in terms of Dynamic Systems Theory. However, the choice of behavioral dynamics was not arbitrary: behavioral dynamics is not just one research program among others, but the most detailed and complete research program in ecological psychology.

Furthermore, the integration of resonance and behavioral dynamics entails a multi-scale approach to perception and action that is amenable to using fractal analysis to investigate the relation between different scales (e.g., neural, muscular-skeletal, organism-environment). Several empirical hypotheses regarding the relation of these different scales, ecological psychology, TALoNS, etc., have been advanced as well. Finally, at the very end of the story (Chapter 5), coordination dynamics and its related concepts, especially metastability, have been added to the picture to better assemble neural reuse, behavioral dynamics, and resonance. Such an inclusion provided further conceptual possibilities and new perspectives regarding prospective empirical work.

And this is the end of this story. At least, it is the end of the story that will be told in this work. The story of resonance, however, will continue in the future. In this Coda, I offer a preview of several possible directions for the story in the next few years. First, I analyze the relation between resonance and Radical Embodied Cognitive Neuroscience as a general paradigm for an embodied neuroscience. And second, I review several possible lines of conceptual and empirical research that will extend the adventures initiated in this story.

C.1 Resonance and Radical Embodied Cognitive Neuroscience

The last two decades have witnessed a move downwards to the brain in the different approaches to embodied cognitive science. Such a downwards move does not entail the abandonment of body and environment as part of the explanation of psychological events. It is better understood as an acknowledgement of the same rationale that guides the story of resonance: providing a complete account of psychological life requires an account of the role of the CNS in that life.

Throughout this work I have reviewed some of the instantiations of the downwards move in different paradigms of embodied cognitive science, such as predictive processing (Friston 2010, Clark 2015) or coordination dynamics (Tognoli and Kelso 2009, 2014). However, these are not the only stories about the CNS one can find while examining embodied approaches to cognition. Some enactivists, for instance, have proposed neurophenomenology as a way to integrate brain activity and first-person reports of experience (Varela 1996, Lutz and Thompson 2003; see Hardcastle and Raja 2018 for a review), and have embraced coordination dynamics—and even the metaphor of resonance!—as a framework to account for the role of the brain in perception, action, and cognition (Thompson and Varela 2001, Di Paolo et al. 2017).

Among the different versions of the downwards move in embodied cognitive science, radical embodied cognitive neuroscience (RECN hereafter) is the most ambitious and complete.95 RECN is an approach to neuroscientific inquiry, inspired by radical embodied cognitive science (Chemero 2009), that tries to integrate the main insights of embodied cognitive science with research in neuroscience (Kiverstein and Miller 2015). Put simply, RECN aims to reconcile the research program outlined by Chemero (2009), based on ecological psychology and dynamic systems theory, with research in neuroscience. As Favela (2014) points out, RECN tries to face the “grand challenges” in brain mapping recently identified by the National Science Foundation (p. 1): the need for a common theoretical approach across multiple scales of inquiry and the reduction of big data to small data.

My suggestion is that the theory of resonance I have proposed here may serve as a general framework to fulfill the aims of RECN and, thus, may serve as a guide to meet the two challenges identified by the National Science Foundation. On the one hand, when resonance is depicted in the way I have depicted it here, the language regarding the ecological (organism-environment) and the CNS (intra-organismic) scales is symmetrical: the same variables are integrated in coupled dynamic systems at different scales. Moreover, as argued in Chapter 3, ecological psychology and neural reuse allow researchers to explain different scales of cognitive phenomena using the same language. Both paradigms share basic principles (e.g., interactivity, focus on action, the complexity/redundancy dilemma, coordinative structures, DST, etc.), so the descriptions at both scales are easy to match. Even more, behavioral and coordination dynamics (Chapter 5) provide a very specific way, based on dynamic systems theory, to accomplish such a matching.

95 RECN is so because it aims to give a complete framework for neuroscience. It is not clear that neurophenomenology, for example, shares such an aim.

On the other hand, and related to the point above, a description based on dynamic systems theory and other tools from the complexity sciences (e.g., fractal analysis) reduces the amount of data processing needed to unveil the behavior of cognitive systems at different scales. First, reducing the data needed to analyze the behavior of complex systems is at the very roots of dynamic systems theory: favoring the geometrical/qualitative approach over the analytic/quantitative one is in itself a technique of big data analysis. And second, the possibility of studying a complex system at different scales provides a way to manage big sets of data in a distributed, manageable way both at higher and at lower scales—e.g., coordination dynamics enables the unveiling of shared dynamics of different regions without the need for many specific details of each region and the neurons that compose them. Moreover, the fact that ecological variables work as collective variables for complex systems at different scales reduces the degrees of freedom of our explanations. In other words, the possible explanations are constrained in terms of the ecological information. For example, as already noted in Chapter 5, the search for coupled brain regions may be guided by the specifics of the ecological information that is constraining their coupling. Again, van der Weel and van der Meer’s (2009) study on the presence of τ (tau) in babies’ visual cortex is a clear example of this guidance: as τ is a very well-known variable, it is relatively easy to look for the way it may be constraining the coupling of different systems.96

96 Compare this fact with the search for specific brain dynamics for an action without any informational guidance or any other kind of guidance.


Summing up, the theory of resonance defended throughout this work seems to be in a very good position to meet the grand challenges posed by the National Science Foundation and, therefore, to serve as a framework for a RECN that offers a sound, plausible, and coherent story about the role of the CNS in psychological events while staying true to the main tenets of ecological psychology, in particular, and radical embodied cognitive science, in general.

C.2 Resonance and The Future

The project of advancing RECN under the guidance of the story of resonance just developed may be seen as a general project for future research. Given that, it seems fair to spend some final words on concrete aims for such a general project. In this section, I propose three of these aims. First, to advance the integration of the components of the theory of resonance: the process of resonance, neural reuse, behavioral dynamics, and coordination dynamics. Second, the development of simulations of resonant systems based on the insights provided by the use of minimal cognitive agents in the context of evolutionary robotics. And third, the role of resonance in the problem of the ‘scaling up’ of embodied cognitive science to account for ‘real’ cognition (e.g., remembering, thinking, and so on).97

C.2.1 Further Work in the Theory of Resonance

I have delivered a general approach to the activity of the CNS in cognitive systems based on resonance. To do so, I analyzed the relations between resonance, neural reuse, behavioral dynamics, and coordination dynamics, and how their combination provides a complete account of the role of the CNS in psychological events. As I see it, I have offered the basic structure of such an account, but further work must be done in terms of the integration of its different components. For example, coordination dynamics—which already provides a possible mechanism for active search; see Chapter 5—might be the tool to find the different patterns of synchronization between brain regions that may be the TALoNS described by neural reuse (see Figure 23).

97 It is worth noting that this list is not—and is not intended to be—exhaustive. First, because, as I see it, “future” does not mean “near future” in this section. Hopefully, there is time enough to investigate other aspects of the theory and other paths of development. And second, because the story of resonance told in this work is broad enough, both in terms of time and in terms of topics, that there are many other issues that might deserve further attention in the future.

Figure 23. Coordination dynamics and TALoNS. In this figure we see the dynamics of coordination of eight neural ensembles during an episode of waking brain EEG activity. (A) A scalp topography is projected on two axes (x, y) and the center frequency of each ensemble is projected on another axis (z). (B) Phase trajectories of each ensemble showing elements of metastability—see the similarity of these trajectories with the ones in Figure 22B, E, H, and K. (C) Ensemble oscillations in different frequency bands. Two organized groups are simultaneously present: one in the alpha band (1, 2, and 3) and another one in the gamma band (7 and 8). Finding these ensembles across different brain regions could be a way to find TALoNS. (From Tognoli and Kelso 2014: 42, figure 4.)

TALoNS are temporary associations of different brain regions in specific psychological events. The data shown in Figure 23 are compatible with the idea of TALoNS and are an example of how coordination dynamics could be a method to identify them.

Another example of further integration between the different components of the story of resonance may be described in terms of behavioral events, TALoNS, and ecological information; namely, a further integration between behavioral dynamics and neural reuse. The identification of TALoNS could lead to a better description of behavioral tasks and, thus, to a better characterization of ecological information, and vice versa: ecological information may be used as a guide for finding new TALoNS. In both cases, it is important to get to a better understanding of the relation (i.e., the integration) between the scales at which both TALoNS and ecological psychology emerge.

C.2.2 Resonance and Minimal Cognitive Agents

In the last 25 years, Randall Beer and his collaborators have developed an approach to evolutionary robotics based on models known as minimal cognitive agents (Beer 1995a, 2000, 2003, 2008, 2014). This approach studies the evolution of the behavior of simple simulated agents in terms of the dynamics of the interaction between their brain, their body, and the environment (see Figure 24a). In this sense, Beer’s approach is completely compatible with the main tenets of ecological psychology and, thanks to the use of dynamic systems theory, with the theory of resonance proposed here.

Figure 24. (A) Abstract schema of Beer’s approach to the study of minimal cognitive agents. Compatible with ecological psychology and behavioral dynamics, Beer’s approach is one of the chief examples of a dynamic approach to the cognitive sciences. (B) Example of a simple minimal cognitive agent. This minimal cognitive agent is composed of an evolved neural network—which does not appear in the figure—and a simple body that allows it only to move horizontally. It also has six sensors (grey lines) that it uses to discriminate between diamonds (to avoid) and circles (to catch). (From Beer 2003: 211 & 213, figures 1 & 2A.)

Usually, simulated minimal cognitive agents are composed of a simple neural network (Beer 1995b) and a simple body that allows them to perform a very limited set of behaviors—e.g., catching or avoiding two different kinds of objects by discriminating them and moving horizontally (see Figure 24b). The environment is also a simple, sparsely crowded one. After running the simulation of the behavioral evolution of a given minimal cognitive agent, the agent is studied in terms of the relations between the dynamics at the three scales (brain-body-environment) using the tools of dynamic systems theory.
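For concreteness, here is a minimal sketch of the kind of continuous-time recurrent neural network (CTRNN) that typically serves as the nervous system of these agents, following the general form in Beer (1995b); the network size and all parameter values below are arbitrary placeholders, not an evolved controller.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Minimal CTRNN step following the standard form used by Beer (1995b):
#   tau_i * dy_i/dt = -y_i + sum_j w_ji * sigmoid(y_j + theta_j) + I_i
# All parameter values below are arbitrary placeholders, not an evolved agent.
rng = np.random.default_rng(0)
n = 5                                 # number of neurons (hypothetical)
tau = rng.uniform(0.5, 2.0, n)        # time constants
theta = rng.uniform(-1.0, 1.0, n)     # biases
W = rng.uniform(-2.0, 2.0, (n, n))    # recurrent weights (W[i, j] = w_ji)
y = np.zeros(n)                       # neuron states

def step(y, external_input, dt=0.01):
    """One Euler integration step of the CTRNN."""
    dydt = (-y + W @ sigmoid(y + theta) + external_input) / tau
    return y + dt * dydt

# Drive the network with a constant sensory input and let it settle.
I = np.array([0.5, 0.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    y = step(y, I)
print("final neuron states:", np.round(y, 3))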

My theory of resonance could benefit from the application of Beer’s research framework in terms of simulation. The study of complex systems is difficult in itself and even more difficult when they have to be studied in terms of their relations. There are a lot of events going on both in the brain and at the ecological scale, and we do not always have the conceptual and empirical means to conduct controlled studies of them in the real world. So, simulations allow for reducing the complexity of systems while still serving as a good tool for making predictions and for understanding the out-in-the-world system. In this sense, a system based on resonance, as it is a very complex one and involves relations between many different scales, is a good candidate to be modelled in terms of a simulation.

A ‘resonant’ minimal cognitive agent may be based on a typical Beer-style minimal cognitive agent, but some of its characteristics should be reconsidered. First, the neural network that constitutes its CNS must be compatible with neural reuse; i.e., it cannot be fully distributed or completely modular. Second, the dynamics of the neural network must be metastable and show temporary synchronization between regions (TALoNS). Third, the minimal cognitive agent must be proactive and must learn to deal with its environment through exploration and interaction. And finally, the behavior of the agent must be clearly based on ecological information generated at the agent-environment scale. All these changes to Beer’s model are plausible and may constitute a very interesting research project in their own right.

C.2.3 Resonance and Real Cognition

A further way in which this story of resonance could be applied is in terms of the scaling up of embodied cognitive science to ‘real’ cognition. One of the most common criticisms of embodied approaches to cognition is that, although they seem to provide interesting explanations of perceptual events and simple behaviors, they fail to account for higher-order cognitive tasks such as remembering, thinking, planning future actions, etc. This problem has been expressed as the inability of radical embodied approaches (e.g., ecological psychology, the dynamic systems approach, or some forms of enactivism) to account for representation-hungry tasks (Clark and Toribio 1994).

My proposal here is that the new theory of resonance provides some conceptual and empirical tools to approach the problem of scaling up—indeed, such a proposal is already in progress (see Sanches de Oliveira et al., in preparation). Concretely, the notion of nonlinear resonance, briefly introduced in Chapter 4, might help us to reframe processes of remembering (memory) or planning (anticipation). According to Large (2008), there are three main features of nonlinear oscillation (like the ones observed in neural systems): spontaneous oscillation, entrainment, and high-order (or nonlinear) resonance (p. 202 ff.). Each of these features may account for different characteristics of what we take to be ‘real’ cognition. First, real cognition is often spontaneous. At least sometimes, it seems to happen by itself; for example, when we imagine an event or when we try to remember the mailing address of a friend. Such spontaneity is also found in nonlinear oscillations. Second, real cognition seems to engage with the future; for example, when we anticipate the next position of an object in our visual field in order to capture it, or when we plan our path to get to a place. According to Large, entrainment allows neural oscillations to exhibit an “anticipation tendency” with respect to stimulation (2008: 203).

This could be the basic form of the future orientation of real cognition. And finally, real cognition is active in the absence of stimulation. This has also been called offline cognition and is taken to be the basis of remembering, for example. Large claims that, in cases of nonlinear oscillation:

… oscillations arise at frequencies that are not present in the stimulus… Nonlinear resonance predicts that metrical accent may arise even when no corresponding frequency is present in the stimulus. (2008: 205).

This fact could support the idea of cognitive processes working offline, i.e., when stimulation is not present anymore.
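To give these three features a concrete, if highly simplified, face, the sketch below (my own toy illustration, not a model taken from Large 2008) integrates a Stuart-Landau oscillator: a canonical nonlinear oscillator that oscillates spontaneously without input, can entrain to a nearby periodic stimulus while it is present, and keeps oscillating at its own frequency after the stimulus is switched off—a toy analogue of activity that outlives stimulation.

import numpy as np

# Illustrative sketch (assumptions, not Large's model): Stuart-Landau (Hopf) oscillator,
#   dz/dt = (alpha + i*omega0) * z - |z|^2 * z + F(t).
# With alpha > 0 it oscillates by itself; a periodic forcing near its natural
# frequency can entrain it; once the forcing stops, the oscillation persists.
alpha, omega0 = 1.0, 2 * np.pi * 1.0   # growth rate and natural frequency (1 Hz)
dt, T = 0.001, 20.0
steps = int(T / dt)
z = 0.01 + 0j                          # tiny initial state; oscillation grows on its own

def stimulus(t):
    # Periodic forcing at 1.05 Hz, switched off after t = 10 s (hypothetical values).
    return 0.5 * np.exp(1j * 2 * np.pi * 1.05 * t) if t < 10.0 else 0.0

amps = []
for k in range(steps):
    t = k * dt
    dz = (alpha + 1j * omega0) * z - (abs(z) ** 2) * z + stimulus(t)
    z += dz * dt
    amps.append(abs(z))

# The amplitude stays finite after the stimulus stops: the oscillation persists "offline".
print("amplitude at t=9.9s:", round(amps[int(9.9 / dt)], 2),
      "| at t=19.9s:", round(amps[int(19.9 / dt)], 2))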

Of course, the way in which these basic features of nonlinear resonant systems can provide a full-fledged account of real cognition is an empirical question and, therefore, a matter of future research. However, as in many other cases throughout this story, what we see is that resonance gives us some new insights, some new concepts, and some new tools to face the difficult task of understanding what is going on in the brain while we live our day-to-day psychological lives.


References

Adams, F., and Aizawa, K. (2001). The Bounds of Cognition. Philosophical Psychology 14(1): 43-64.

Aks, D. J. (2005). 1/f Dynamic in Complex Visual Search: Evidence for Self-Organized Criticality in Human Perception. In M. A. Riley and G. C. Van Orden, Tutorials in Contemporary Nonlinear Methods for the Behavioral Sciences (319-352). http://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.jsp

Amazeen, E. L., and Turvey, M. T. (1996). Weight Perception and the Haptic Size-Weight Illusion Are Functions of the Inertia Tensor. Journal of Experimental Psychology: Human Perception and Performance 22(1): 213-232.

Anderson, J. R. (1983). The Architecture of Cognition. Cambridge, MA: Harvard University Press.

Anderson, J. R. (2007). How Can the Human Mind Occur in the Physical Universe? Oxford, UK: Oxford University Press.

Anderson, M. L. (2010). Neural Reuse: a Fundamental Organization Principle of the Brain. Behavioral and Brain Sciences 33: 245-313.

Anderson, M. L. (2014). After Phrenology: Neural Reuse and the Interactive Brain. Cambridge, MA: MIT Press.

Anderson, M. L. (2016). Précis of After Phrenology: Neural Reuse and the Interactive Brain. Behavioral and Brain Sciences. doi:10.1017/S0140525X15000631, e120.

Anderson, M. L., Kinnison, J. and Pessoa, L. (2013). Describing Functional Diversity of Brain Regions and Brain Networks. NeuroImage 73: 50-58.

Anderson, M. L., and Penner-Wilger, M. (2013). Neural Reuse in the Evolution and Development of the Brain: Evidence for Developmental Homology? Developmental Psychobiology 55(1): 42-51.

Aramaki, Y., Honda, M., Okada, T., and Sadato, N. (2006). Neural Correlates of the Spontaneous Phase Transition during Bimanual Coordination. Cerebral Cortex 16: 1338- 1348.

Arbib, M. A. (2003). The Handbook of Brain Theory and Neural Networks. Cambridge, MA: MIT Press.

Arutyunyan, G. H., Gurfinkel, V. S., and Mirskii, M. L. (1968). Investigation of Aiming at a Target. Biophysics 13: 536-538.


Avenanti, A., Bueti, D., Galati, G., and Aglioti, S. M. (2005). Transcranial Magnetic Stimulation Highlights the Sensorimotor Side of Empathy for Pain. Nature Neuroscience 8(7): 955– 960.

Bak, P. (1990). Self-Organized Criticality. Physica A 163: 403-409.

Bak, P. (1996). How Nature Works: The Science of Self-Organized Criticality. New York: Copernicus.

Bak, P., Tang, C., and Wiesenfeld, K. (1987). Self-Organized Criticality. An Explanation of 1/f Noise. Physical Review Letters 59: 381.

Bak, P., Tang, C., and Wiesenfeld, K. (1988). Self-Organized Criticality. Physical Review A 38: 364.

Bargmann, C. I. (2012). Beyond the Connectome: How Neuromodulators Shape Neural Circuits. Bioessays 34(6): 458-65.

Bastiaansen, J. A. C. J., Thious, M., and Keysers, C. (2009). Evidence for Mirror Systems in Emotion. Philosophical Transactions of the Royal Society of London, Series B 364: 2391–2404.

Bedau, M. (1997). Weak Emergence. In J. Tomberlin (ed.), Philosophical Perspectives, Vol. 11: Mind, Causation, and World (pp. 375-399). Cambridge, MA: Blackwell Publishers.

Beer, R. D. (1995a). A Dynamical Systems Perspective on Agent-Environment Interaction. Artificial Intelligence 72: 173-215.

Beer, R. D. (1995b). On the Dynamics of Small Continuous-Time Recurrent Neural Networks. Adaptive Behavior 3(4):471-511.

Beer, R. D. (2000). Dynamical Approaches to Cognitive Science. TRENDS in Cognitive Sciences 4(3): 91-99.

Beer, R. D. (2003). The Dynamics of Active Categorical Perception in an Evolved Model Agent. Adaptive Behavior 11(4): 209-243.

Beer, R. D. (2008). The Dynamics of Brain-Body-Environment Systems: A Status Report. In P. Calvo and T. Gomila (Eds.), Handbook of Cognitive Science: An Embodied Approach (pp. 99-120). San Diego, CA: Elsevier.

Beer, R. D. (2014). Dynamical Systems and Embedded Cognition. In K. Frankish and W. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence (pp. 128-148). Cambridge, UK: Cambridge University Press.

Berman, R., and Wurtz, R. (2011). Signals Conveyed in the Pulvinar Pathway from Superior Colliculus to Cortical Area Mt. The Journal of Neuroscience 31(2): 373–384.


Bernstein, N. A. (1967). The Co-Ordination and Regulation of Movements. Oxford, UK: Pergamon Press. (Original work published in Russian 1957; it is a volume edited by Bernstein himself.)

Bernstein, N. A. (1996). On Dexterity and its Development. In M. L. Latash and M. T. Turvey (Eds.), Dexterity and its Development (pp. 3-244). Manwah, NJ: Lawrence Erlbaum. (Original Russian manuscript written in 1945-1946 and published in 1991.)

Bertelson, P. (1961). Sequential Redundancy and Speed in a Serial Two-Choice Responding Task. Quarterly Journal of Experimental Psychology 13(2): 90–102.

Bizzarri, M., Giuliani, A., Cucina, A., D’Anselmi, F., Soto, A. M., and Sonnenschein, C. (2011). Fractal Analysis in a Systems Biology Approach to Cancer. Seminars in Cancer Biology 21: 175-182.

Blanchard, J. (1941). The History of Electrical Resonance. The Bell System Technical Journal 20(4): 415-433.

Borroni, P., Gorini. A., Riva, G., et al. (2011). Mirroring Avatars: Dissociation of Action and Intention in Human Motor Resonance. European Journal of Neuroscience 34: 662–669.

Bower, J. M. (2013). 20 Years of Computational Neuroscience. Berlin, Germany: Springer.

Boyle, R. (1772). The works of the honourable Robert Boyle. Thomas Birch (ed.), 2nd edition, 6 vols.

Brackenridge, J. B., and Nauenberg, M. (2004). Curvature in Newton’s dynamics. In I. B. Cohen and G. E. Smith (Eds.), The Cambridge companion to Newton (pp. 85-137), Cambridge, UK: Cambridge University Press.

Brady, M. (2009). Speech as a Problem of Motor Control in Robotics. In N. Taatgen, and H. van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 2558–2563), Austin, TX.

Bressler, S. L., and Kelso, J. A. S. (2001). Cortical Coordination Dynamics and Cognition. TRENDS in Cognitive Sciences 5(1): 26-36.

Bressler, S. L., and Tognoli, E. (2006). Operational Principles of Neurocognitive Networks. International Journal of Psychophysiology 60: 139–148.

Brooks, R. A. (1990). Elephants Don’t Play Chess. Robotics and Autonomous Systems 6: 3-15.

Bruggeman, H., and Warren, W. H. (2005). Integrating Target Interception and Obstacle Avoidance. Journal of Vision 5: 311.

Bruineberg, J., and Rietveld, E. (2014). Self-Organization, Free Energy Minimization, and Optimal Grip on a Field of Affordances. Frontiers in Human Neuroscience 8: art. 599.


Bruineberg, J., Kiverstein, J., and Rietveld, E. (2016). The Anticipating Brain Is Not a Scientist: the Free-Energy Principle from an Ecological-Enactive Perspective. Synthese. doi: 10.1007/s11229-016-1239-1.

Bullock, D., Grossberg, S., and Guenther, F. H. (1993). A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm. Journal of Cognitive Neuroscience 5: 408-435.

Calvo, P., and Gomila, T. (2008). Handbook of Cognitive Science: An Embodied Approach. San Diego, CA: Elsevier.

Calvo, P., and Symmons, J. (2014). The Architecture of Cognition. Cambridge, MA: MIT Press.

Cao, Y., and Grossberg, S. (2005). A Laminar Cortical Model of Stereopsis and 3D Surface Perception: Closure and Da Vinci Stereopsis. Spatial Vision 18: 515-578.

Carello, C., and Turvey, M. T. (2017). Useful Dimensions of Haptic Perception: 50 Years after The Senses Considered as Perceptual Systems. Ecological Psychology 29(2): 95-121.

Carpenter, G. A., and Grossberg, S. (1987). A Massively Parallel Architecture for a Self- Organizing Neural Pattern Recognition Machine. Computer Vision, Graphics, and Image Processing 37: 54-115.

Carpenter, G. A., Grossberg, S., and Rosen, D. B. (1991). Fuzzy ART: Fast Stable Learning and Categorization of Analog Patterns by an Adaptive Resonance System. Neural Networks 4: 759-771.

Carpenter, G. A., Milenova, B. L., and Noeske, B. W. (1998). Distributed ARTMAP: A Neural Network for Fast Distributed Supervised Learning. Neural Networks 11: 793-813.

Carson, R. G., Riek, S., Smethurst, C. J., Lison-Parraga, J. F., and Byblow, W. D. (2000). Neuromuscular-skeletal Constraints upon the Dynamics of Unimanual and Bimanual Coordination. Experimental Brain Research 131 (2): 196-214.

Caruana, F., Uithol, S., Cantalupo, G., Sartori, I., Russo, G. L., and Avanzini, P. (2014). How Action Selection Can Be Embodied: Intracranial Gamma Band Recording Shows Response Competition during the Eriksen Flankers Test. Frontiers in Human Neuroscience 8: art. 668.

Cavallo, A., Bucchioni, G., Castiello, U., and Becchio, C. (2013). Goal or Movement? Action Representation within the Primary Motor Cortex. European Journal of Neuroscience 38: 3507–3512.

Chalmers, D. (2006). Strong and Weak Emergence. In P. Clayton and P. Davies (eds.), The Re- Emergence of Emergence (244-256). Oxford, UK: Oxford University Press.

Chalupa, L. (1991). Visual Function of the Pulvinar. In A. G. Leventhal (ed.), The Neural Basis of Visual Function (140-159). Boca Raton, Florida: CRC Press.


Chomsky, N. (1980). Rules and Representations. Oxford, UK: Basil Blackwell.

Clark, A. (1997). Being there: Putting brain, body, and world together again. Cambridge, MA: Bradford Books.

Clark, A. (2013). Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science. Behavioral and Brain Sciences 36: 181-253.

Clark, A. (2015). Surfing Uncertainty. London, UK: Oxford University Press.

Clark, A., and Toribio, J. (1994). Doing Without Representing. Synthese 101: 401-431.

Clayton, P., and Davies, P. (2006). The Re-Emergence of Emergence. Oxford, UK: Oxford University Press.

Chemero, A. (2009). Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.

Cisek, P. (2007). Cortical Mechanisms of Action Selection: The Affordance Competition Hypothesis. Philosophical Transactions of the Royal Society B: Biological Sciences 362(1485): 1585-99.

Cohen, H. F. (1994). The scientific revolution: A historiographical inquiry. Chicago: University of Chicago Press.

Cohen, S. M. (2009). Substances. In G. Anagnostopoulos (Ed.), A companion to Aristotle (pp. 197-212), Oxford, UK: Blackwell Publishing.

Cohen, J. A., Bruggeman, H., and Warren, W. H. (2005). Switching Behavior in Moving Obstacle Avoidance. Journal of Vision 5: 312.

Costall, A. (1995). Socializing affordances. Theory & Psychology 5: 467-481.

Cutting, J. E. (1982). Two Ecological Perspectives: Gibson vs. Shaw and Turvey. The American Journal of Psychology 95(2): 199-222.

de Rugy, A., Taga, G., Montagne, G., Buekers, M. J., and Laurent, M. (2002). Perception-Action Coupling Model for Human Locomotor Pointing. Biological Cybernetics 87: 141-150.

de Wit, M. M., van der Kamp, J., and Withagen, R. (2015). Gibsonian Neuroscience. Theory & Psychology. doi: 10.1177/0959354315623109.

Delafield-Butt, J. T., Galler, A., Schögler, B., and Lee, D. N. (2010). A perception-action strategy for hummingbirds. Perception 39: 1172-1174.

Dennett, D. C. (1978). Brainstorms: Philosophical Essays on Mind and Psychology. Montgomery, VT: Bradford Books.

Der, R., and Martius, G. (2012). The Playful Machine: Theoretical Foundation and Practical Organization of Self-Organizing Robots. Berlin, Germany: Springer-Verlag.


Di Paolo, E. A., Barandiaran, X. E., Beaton, M., and Buhrmann, T. (2014). Learning to Perceive in the Sensorimotor Approach: Piaget’s Theory of Equilibration Interpreted Dynamically. Frontiers in Human Neuroscience 8: art. 551.

Di Paolo, E. A., Buhrmann, T., and Barandiaran, X. E. (2017). Sensorimotor Life: An Enactive Proposal. Oxford, UK: Oxford University Press.

Dijksterhuis, E. J. (1961). The mechanization of the world picture. New York: Oxford University Press.

DiSalle, R. (2004). Newton’s philosophical analysis of space and time. In I. B. Cohen and G. E. Smith (Eds.), The Cambridge companion to Newton (pp. 35-56), Cambridge, UK: Cambridge University Press.

Distelzweig, P. (2013). Descartes' teleomechanics in medical context: Approaches to integrating mechanism and teleology in Hieronymus Fabricius ab Aquapendente, William Harvey and René Descartes. PhD Thesis. University of Pittsburgh.

Dotov, D. G. (2014). Putting Reins of the Brain: How the Body and the Environment Use It. Frontiers in Human Neuroscience 8: art. 795.

Edelman, S. (2008). Computing the Mind: How the Mind Really Works. Oxford, UK: Oxford University Press.

Eliasmith, C. (2013). How to Build a Brain. New York: Oxford University Press.

Ellis, W. D. (1938). A Source Book of Gestalt Psychology. London, UK: Routledge.

Fadiga, L., Craighero, L., and Olivier, E. (2005). Human Motor Cortex Excitability during the Perception of Others’ Action. Current Opinion in Neurobiology 15: 213–218.

Fajen, B. R. (2007). Affordance-Based Control of Visually Guided Action. Ecological Psychology 19(4): 383-410.

Fajen, B. R., and Warren, W. H. (2001). Behavioral Dynamics of On-Line Route Selection in Complex Scenes. Abstracts of the Psychonomic Society 6: 92.

Fajen, B. R., and Warren, W. H. (2003). Behavioral Dynamics of Steering, Obstacle Avoidance, and Route Selection. Journal of Experimental Psychology: Human Perception and Performance 29: 343-362.

Fajen, B. R., and Warren, W. H. (2004). Visual Guidance of Intercepting a Moving Target on Foot. Perception 33: 689–715.

Fajen, B. R., and Warren, W. H. (2005). Behavioral Dynamics of Intercepting a Moving Target. Experimental Brain Research 180: 303-319.


Faubel, C., and Schöner, G. (2010). Learning Objects on the Fly: Object Recognition for the Here and Now. International Joint Conference on Neural Networks. Orlando, FL: IEEE Press.

Favela, L. H. (2014). Radical Embodied Cognitive Neuroscience: addressing “Grand Challenges” of the Mind Sciences. Frontiers in Human Neuroscience 8: art.796.

Feder, J. (1988). Fractals. New York: Springer.

Fink, P., Foo, P., and Warren, W. (2009). Catching Fly Balls in Virtual Reality: A Critical Test of the Outfielder Problem. Journal of Vision 9(13): 14-14.

Fitzgibbon, B. M., Fitzgeral, P. B., and Enticott, P. G. (2014). An Examination of the Influence of Visuomotor Associations on Interpersonal Motor Resonance. Neuropsychologia 56: 439-446.

Fodor, J. A. (1975). The Language of Thought. Cambridge, MA: Harvard University Press.

Fodor, J. A. (1981). Representations: Philosophical Essays on the Foundations of Cognitive Science. Cambridge, MA: The MIT Press.

Fodor, J. A. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: The MIT Press.

Foucault, M. (1966). The Order of Things: An Archaeology of the Human Sciences. New York: Routledge.

Foucault, M. (1969). The Archaeology of Knowledge. (A. M. Sheridan Smith, Trans.). New York: Routledge.

Freeman, W. (1991). The Physiology of Perception. Scientific American 264(2): 78-85.

Freeman, W. (2001). How Brains make up their Minds. New York: Columbia University Press.

Freeman, W., and Skarda, C. A. (1990). Representations: Who Needs Them? In J. L. McGaugh, N. M. Weinberger, and G. Lynch (Eds.), Proceedings of the Third Conference, Brain Organization, and Memory: Cells, Systems and Circuits (pp. 375-380). New York: Guilford Press.

Fries, P. (2005). A Mechanism for Cognitive Dynamics: Neuronal Communication through Neuronal Coherence. Trends in Cognitive Science 9: 474–480.

Friston, K. (2010). The Free-Energy Principle: a Unified Brain Theory? Nature Reviews Neuroscience 11(2): 127-138.

Friston, K., and Kiebel, S. (2009). Predictive Coding under the Free-Energy Principle. Philosophical Transactions of the Royal Society B 364: 1211-1221.

Friston, K., Daunizeau, J., and Kiebel, S. (2009). Active Inference or Reinforcement Learning? PLoS ONE 4: e6421.


Friston K., Kilner, J., and Harrison, L. (2006). A Free Energy Principle for the Brain. Journal of Physiology – Paris 100: 70-87.

Friston, K., Shiner, T., FitzGerald, T., Galea, J. M., Adams, R., Brown, H., Dolan, R. J., Moran, R., Stephan, K. E., and Bestmann, S. (2012). Dopamine, Affordance, and Active Inference. PLoS Computational Biology 8(1): e1002327. doi:10.1371/journal.pcbi.1002327.

Fuchs, A. (2013). Nonlinear Dynamics in Complex Systems: Theory and Applications for the Life-, Neuro-, and Natural Sciences. Berlin, Germany: Springer.

Gal, O., and Chen-Morris, R. (2013). Baroque Science. Chicago, IL: The University of Chicago Press.

Gallese, V., Fadiga, L., Fogassi, L., and Rizzolatti, G. (1996). Action Recognition in the Premotor Cortex. Brain 119: 593–609.

Gallistel, C. R., and King, A. P. (2009). Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. Oxford, UK: Wiley-Blackwell.

Gangitano, M., Mottaghy, F. M., and Pascual-Leone, A. (2004). Modulation of Premotor Mirror Neuron Activity during Observation of Unpredictable Grasping Movements. European Journal of Neuroscience 20: 2193–2202.

Garber, D. (1992). Descartes’ Physics. In J. Cottingham (ed.), The Cambridge Companion to Descartes (286-334), Cambridge, UK: Cambridge University Press.

Gazzaniga, M. S. (1984). Handbook of Cognitive Neuroscience. New York: Plenum Press.

Gazzaniga, M. S., Ivry, R. B., Mangun, G. R. (2009). Cognitive Neuroscience: The Biology of the Mind. New York: W.W. Norton.

Gibson, J. J. (1950). The Perception of the Visual World. Boston, MA: Houghton Miffin.

Gibson, J. J. (1958). Visually Controlled Locomotion and Visual Orientation in Animals. The British Journal of Psychology 49: 182-194.

Gibson, J. J. (1966). The Senses considered as Perceptual Systems. Boston, MA: Houghton Miffin.

Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston, MA: Houghton Miffin.

Godde, B., Ehrhardt, J., and Braun, C. (2003). Behavioral Significance of Input-Dependent Plasticity of Human Somatosensory Cortex. Neuroreport 14: 543-546.


Gökaydin, D. (2015). The Structure of Sequential Effects. Dissertation, The University of Adelaide. (Consulted 02/01/2018: https://digital.library.adelaide.edu.au/dspace/bitstream/2440/98135/2/02whole.pdf).

Gökaydin, D., Navarro, D. J., Ma-Wyatt, A., Perfors, A. (2016). The Structure of Sequential Effects. Journal of Experimental Psychology: General 145(1): 110-123.

Goldinger, S. D., Papesh, M. H., Barnhart, A. S., Hansen, W. A., and Hout, M. C. (2016). The Poverty of Embodied Cognition. Psychonomic Bulletin and Review 23: 959-978.

Gould, S. J. (1989). Wonderful Life: The Burgess Shale and the Nature of History. New York: W. W. Norton & Company.

Grossberg, S. (2013). Adaptive Resonance Theory: How a Brain Learns to Consciously Attend, Learn, and Recognize a Changing World. Neural Networks 37: 1-47.

Grossberg, S., Bullock, D., and Dranias, M. (2008). Neural Dynamics Underlying Impaired Autonomic and Conditioned Responses Following Amygdala and Orbitofrontal Lesions. Behavioral Neuroscience 122: 1100-1125.

Grossberg, S., Govindarajan, K. K., Wyse, L. L., Cohen, M. A. (2004). ARTSTREAM: A Neural Network Model of Auditory Scene Analysis and Source Segregation. Neural Networks 17: 511-536.

Grossberg, S., and Kazerounian, S. (2011). Laminar Cortical Dynamics of Conscious Speech Perception: A Neural Model of Phonemic Restoration Using Subsequent Context in Noise. Journal of the Acoustical Society of America 130: 440-460.

Grossberg, S., Mingolla, E., and Viswanathan, L. (2001). Neural Dynamics of Motion Integration and Segmentation within and Across Apertures. Vision Research 41: 2521-2553.

Grossberg, S., and Pearson, L. (2008). Laminar Cortical Dynamics of Cognitive and Motor Working Memory, Sequence Learning and Performance: Toward a Unified Theory of how the Cerebral Cortex Works. Psychological Review 115: 677-732.

Grossberg, S., and Versace, M. (2008). Spikes, Synchrony, and Attentive Learning by Laminar Thalamocortical Circuits. Brain Research 1218: 278-312.

Guicciardini, N. (2004). Analysis and synthesis in Newton’s mathematical works. In I. B. Cohen and G. E. Smith (Eds.), The Cambridge companion to Newton (pp. 308-328), Cambridge, UK: Cambridge University Press.

Haken, H., Kelso, J.A.S., and Bunz, H. (1985). A Theoretical Model of Phase Transitions in Human Hand Movements. Biological Cybernetics 51: 347-356.

Hardcastle, V. G., and Raja, V. (2018). The Neural Correlates of Consciousness. In R. Gennaro (Ed.), The Routledge Handbook of Consciousness (pp. 235-247). New York: Routledge.

Hatsopoulos, N., Gabbiani, F., and Laurent, G. (1995). Elementary Computation of Object Approach by a Wide-Field Visual Neuron. Science 270: 1000–1003.

Heidegger, M. (1927). Being and Time (J. Macquarrie and E. Robinson, Trans.). New York: HarperCollins Publishers. (This translation first published in 1962 and reissued in 2008.)

Henry, D. M. (2009). Generation of animals. In G. Anagnostopoulos (Ed.), A companion to Aristotle (pp. 368-384), Oxford, UK: Blackwell Publishing.

Hermans, E. J., van Marle, H. J., Ossewaarde, L., Henckens, M. J., Qin, S., van Kesteren, M. T., Schoots, V. C., Cousijn, H., Rijpkema, M., Oostenveld, R., and Fernández, G. (2011). Stress-Related Noradrenergic Activity Prompts Large-Scale Neural Network Reconfiguration. Science 334(6059): 1151-1153.

Hilborn, R. C. (1994). Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers. Oxford, UK: Oxford University Press.

Hinton, G. E., and Zemel, R. S. (1993). Autoencoders, Minimum Description Length and Helmholtz Free Energy. In J. Cowan, G. Tesauro, and J. Alspector (Eds.), Advances in neural information processing systems 6 (pp. 3-10). Burlington, MA: Morgan Kaufmann.

Hohwy, J. (2016). The Self-Evidencing Brain. Noûs 50(2): 259-285.

Holden, J. G. (2005). Gauging the Fractal Dimension of Cognitive Performance. In M. A. Riley and G. C. Van Orden (Eds.), Tutorials in Contemporary Nonlinear Methods for the Behavioral Sciences (pp. 267-318). http://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.jsp

Hutcheon, B., and Yarom, Y. (2000). Resonance, Oscillation and the Intrinsic Frequency Preferences of Neurons. Trends in Neurosciences 23: 216-222.

Hutto, D., and Myin, E. (2013). Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press.

Hutto, D. D., Kirchhoff, M. D., Myin, E. (2014). Extensive Enactivism: Why Keep It All In? Frontiers in Human Neuroscience 8: art. 706.

Iannaccone, P. M., and Khokha, M. (1994). Fractal Geometry in Biological Systems: An Analytical Approach. Boca Raton, Florida: CRC Press.

Ibáñez-Gijón, J., Buekers, M., Morice, A., Rao, G., Mascret, N., Laurin, J., and Montagne, G. (2016). A Scale-Based Approach to Interdisciplinary Research and Expertise in Sports. Journal of Sports Sciences, DOI: 10.1080/02640414.2016.1164330.

Izhikevich, E. M. (2007). Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. Cambridge, MA: The MIT Press.

James, W. (1907). Pragmatism: A New Name for Some Old Ways of Thinking. New York: Longmans, Green, and Co.

Jarvik, M. E. (1951). Probability Learning and a Negative Recency Effect in the Serial Anticipation of Alternative Symbols. Journal of Experimental Psychology 41(4): 291-297.

Jentzsch, I., and Sommer, W. (2002). Functional Localization and Mechanisms of Sequential Effects in Serial Reaction Time Tasks. Perception and Psychophysics 64(7): 1169–1188.

Jirsa, V. K., Fuchs, A., and Kelso, J. A. S. (1998). Connecting Cortical and Behavioral Dynamics: Bimanual Coordination. Neural Computation 10: 2019-2045.

Juarrero, A. (1999). Dynamics in Action: Intentional Behavior as a Complex System. Cambridge, MA: The MIT Press.

Kaplan, D., and Glass, L. (1995). Understanding Nonlinear Dynamics. New York: Springer.

Käufer, S., and Chemero, A. (2015). Phenomenology: An Introduction. Malden, MA: Polity Press.

Kelso, J. A. S. (1995). Dynamic Patterns: The Self-Organization of Brain and Behavior. Cambridge, MA: The MIT Press.

Kelso, J.A.S. (2012). Multistability and Metastability: Understanding Dynamic Coordination in the Brain. Philosophical Transactions of the Royal Society of London B (Biological Sciences) 367: 906–918.

Kelso, J. A. S., and Engstrøm, D. A. (2006). The Complementary Nature. Cambridge, MA: The MIT Press.

Kelso, J. A. S., Fuchs, A., Lancaster, R., Holroyd, T., Cheyne, D., and Weinberg, H. (1998). Dynamic Cortical Activity in the Human Brain Reveals Motor Equivalence. Nature 392: 814-818.

Kelso, J. A. S., and Tognoli, E. (2007). Toward a Complementary Neuroscience: Metastable Coordination Dynamics of the Brain. In L. I. Perlovsky and R. Kozma (eds.), Neurodynamics of Cognition and Consciousness (pp. 39-59). Berlin, Germany: Springer.

Kelso, J. A. S., Tuller, B., Vatikiotis-Bateson, E., and Fowler, C. A. (1984). Functionally Specific Articulatory Cooperation Following Jaw Perturbation During Speech: Evidence for Coordinative Structures. Journal of Experimental Psychology: Human Perception and Performance 10(6): 812-832.

Kepler, J. (1604). Optics: Paralipomena to Witelo and the optical part of astronomy. William H. Donahue, trans. (2000). Santa Fe, NM: Green Lion Press.

Keysers, C., and Gazzola, V. (2009). Expanding the Mirror: Vicarious Activity for Actions, Emotions, and Sensations. Current Opinion in Neurobiology 19: 1–6.

Kieras, D. E., and Meyer, D. E. (1997). An Overview of the EPIC Architecture for Cognition and Performance with Application to Human-Computer Interaction. Human-Computer Interaction 12: 391-438.

Kiverstein, J., and Miller, M. (2015). The Embodied Brain: Towards a Radical Embodied Cognitive Neuroscience. Frontiers in Human Neuroscience 9: art. 237.

Knill, D. C., and Pouget, A. (2004). The Bayesian Brain: the Role of Uncertainty in Neural Coding and Computation. Trends in Neurosciences 27: 712-719.

Koffka, K. (1923). Perception: An Introduction to the Gestalt-Theorie. Psychological Bulletin 19: 531–585.

Koyré, A. (1965). Newtonian studies. London, UK: Chapman & Hall.

Kugler, P. N., Kelso, J. A. S., and Turvey, M. T. (1980). On the concept of coordinative structures as dissipative structures I: Theoretical lines of convergence. In G. E. Stelmach and J. Requin (Eds.), Tutorials in motor behavior (pp. 3-37), Amsterdam, Netherlands: North Holland.

Kugler, P. N., and Turvey, M. T. (1987). Information, Natural Law, and the Self-assembly of Rhythmic Movement. Hillsdale, NJ: Lawrence Erlbaum Associates.

Kuhn, T. S. (1970). The Structure of Scientific Revolutions. Chicago, IL: The University of Chicago Press.

Kumar, A., and Mehta, M. R. (2011). Frequency-Dependent Changes in NMDAR-Dependent Synaptic Plasticity. Frontiers in Computational Neuroscience 5: 38.

Lamarque, C.-H., Ture Savadkooh, A., Etcheverria, E., and Dimitrijevic, Z. (2012). Multi-Scale Dynamics of Two Coupled Nonsmooth Systems. International Journal of Bifurcation and Chaos 22(12): 1250295.

Latash, M. L., Scholz, J. P., and Schöner, G. (2002). Motor Control Strategies Revealed in the Structure of Motor Variability. Exercise and Sport Sciences Reviews 30(1): 26-31.

Large, E. W., and Kolen, J. F. (1994). Resonance and the Perception of Musical Meter. Connection Science 6(1): 177-208.

Large, E. W. (2008). Resonating to Musical Rhythm: Theory and Experiment. In S. Grondin (ed.), The Psychology of Time. West Yorkshire, UK: Emerald.

Large, E. W., Kim, J. C., Flaig, N. K., Bharucha, J. J., and Krumhansl, C. L. (2016). A Neurodynamic Account of Musical Tonality. Music Perception 33(3): 319-331.

Lea-Carnall, C. A., Montemurro, M. A., Trujillo-Barreto, N. J., Parkes, L. M., and El-Deredy, W. (2016). Cortical Resonance Frequencies Emerge from Network Size and Connectivity. PLoS Computational Biology 12: e1004740.

Lea-Carnall, C. A., Trujillo-Barreto, N. J., Montemurro, M. A., El-Deredy, W., and Parkes, L. M. (2017). Evidence for Frequency-Dependent Cortical Plasticity in the Human Brain. PNAS (Proceedings of the National Academy of Sciences) 114(33): 8871-8876.

Lee, D. N. (1980). The Optic Flow Field: The Foundation of Vision. Philosophical Transactions of the Royal Society of London B 209: 169-179.

Lee, D. N. (2009). General Tau Theory: Evolution to date. Special Issue: Landmarks in Perception. Perception 38: 837-858.

Lennox, J. G. (2009). Form, essence, and explanation in Aristotle’s biology. In G. Anagnostopoulos (Ed.), A companion to Aristotle (pp. 348-367), Oxford, UK: Blackwell Publishing.

Leonetti, A., Puglisi, G., Siugzdaite, R., Ferrari, C., Cerri, G., and Borroni, P. (2015). What You See Is What You Get: Motor Resonance in Peripheral Vision. Experimental Brain Research 233: 3013-3022.

Lewis, F. A. (2009). Form and matter. In G. Anagnostopoulos (Ed.), A companion to Aristotle (pp. 162-185), Oxford, UK: Blackwell Publishing.

Lewontin, R. (2000). The Triple Helix. Cambridge, MA: Harvard University Press.

Li, L., and Warren, W. H. (2002). Retinal Flow Is Sufficient for Steering during Observer Rotation. Psychological Science 13(5): 485-491.

Liebovitch, L. S., and Shehadeh, L. A. (2005). Introduction to Fractals. In M. A. Riley and G. C. Van Orden (Eds.), Tutorials in Contemporary Nonlinear Methods for the Behavioral Sciences (pp. 178-266). http://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.jsp

Lindberg, D. C. (1978). The science of optics. In David C. Lindberg (Ed.), Science in the Middle Ages (pp. 338-368), Chicago, IL: The University of Chicago Press.

Lipinski, J., Spencer, J. P., Samuelson, L. K., and Schöner, G. (2006). Spam-Ling: A Dynamical Model of Spatial Working Memory and Spatial Language. In Proceedings of the 28th Annual Conference of the Cognitive Science Society (pp. 768–773). Vancouver, Canada: Cognitive Science Society.

Lobo, L., Nordbeck, P. C., Raja, V., Chemero, A., Riley, M. A., Jacobs, D. M., and Travieso, D. (Under Review). Route Selection and Obstacle Avoidance with a Short-Range Haptic Sensory Substitution Device.

Lombardo, T. J. (1987). The Reciprocity of Perceiver and Environment. Hillsdale, NJ: Lawrence Erlbaum Associates.

Lutz, A., and Thompson, E. (2003). Neurophenomenology: Integrating Subjective Experience and Brain Dynamics in the Neuroscience of Consciousness. Journal of Consciousness Studies 10: 31–52.

Mandelbrot, B. B. (1982). The Fractal Geometry of Nature. New York: W. H. Freeman and Company.

Manning, G. (2015). Descartes’s metaphysical biology. HOPOS: The Journal of the International Society for the History of Philosophy of Science 5: 209–239.

Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. New York: Freeman.

Matthen, M. (2009). Teleology in living things. In G. Anagnostopoulos (Ed.), A companion to Aristotle (pp. 335-347), Oxford, UK: Blackwell Publishing.

McBeath, M. K., Shaffer, D. M., and Kaiser, M. K. (1995). How Baseball Outfielders Determine where to Run to Catch Fly Balls. Science 268(5210): 569-73.

McCabe, S. I., Villalta, J. I., Saunier, G., et al. (2014). The Relative Influence of Goal and Kinematics on Corticospinal Excitability Depends on the Information Provided to the Observer. Cerebral Cortex. doi:10.1093/cercor/bhu029.

Mechsner, F., Kerzel, D., Knoblich, G., and Prinz, W. (2001). Perceptual Basis of Bimanual Coordination. Nature 414: 69-73.

Meijer, O. G. (2001). Making Things Happen: An Introduction to the History of Movement Science. In M. L. Latash, and V. M. Zatsiorsky (Eds.), Classics in Movement Science (pp. 1-57). Champaign, IL: Human Kinetics.

Merchant, H., Battaglia-Mayer, A., and Georgopoulos, A. P. (2004). Neural Responses during Interception of Real and Apparent Circularly Moving Stimuli in Motor Cortex and Area 7a. Cerebral Cortex 14: 314-331.

Merleau-Ponty, M. (1945). Phenomenology of Perception (D. Landes, Trans.). Oxford, UK: Routledge. (This translation published in 2012).

Michaels, C., and Carello, C. (1981). Direct Perception. Englewood Cliffs, NJ: Prentice-Hall.

Michaels, C., and Palatinus, Z. (2014). A Ten Commandments for Ecological Psychology. In L. Shapiro (Ed.), The Routledge Handbook of Embodied Cognition (pp. 19-28). New York, NY: Routledge.

Milkowski, M. (2013). Explaining the Computational Mind. Cambridge, MA: MIT Press.

Moreau, A. L. D., Lorite, G. S., Rodrigues, C. M., Souza, A. A., and Cotta, A. (2009). Fractal Analysis of Xylella Fastidiosa Biofilm Formation. Journal of Applied Physics 106(2): 10.1063/1.13173172.

Myin, E. (2016). Perception as Something We Do. Journal of Consciousness Studies 23(5–6): 80–104.

Neisser, U. (1967). Cognitive Psychology. Englewood Cliffs, NJ: Prentice-Hall.

Newell, A., and Simon, H. A. (1976). Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM 19(3): 113-126.

Newton, I. (1687). Philosophiæ Naturalis Principia Mathematica. (Andrew Motte, Trans., 1962). Berkeley, CA: The University of California Press.

Newton, I. (1999). The Principia: Mathematical principles of natural philosophy. I. B. Cohen and A. Whitman (trans.). Berkeley, CA: University of California Press.

Ochsner, K. N., and Kosslyn, S. (Eds.) (2013). The Oxford Handbook of Cognitive Neuroscience, Volume 1: Core Topics. Oxford, UK: Oxford University Press.

Ogata, K. (2005). System Dynamics, 4th Edition. Upper Saddle River, NJ: Pearson Prentice Hall.

Olmstead, A., Viswanathan, N., Aicher, K., and Fowler, C. A. (2009). Sentence comprehension affects the dynamics of bimanual coordination: Implications for embodied cognition. Quarterly Journal of Experimental Psychology 62: 2409–2417.

O’Regan, J. K., and Noë, A. (2001). A Sensorimotor Account of Vision and Visual Consciousness. Behavioral and Brain Sciences 24: 939-1031.

Orekhova, E. V., Stroganova, T. A., and Posikera, I. N. (1999). Theta Synchronization during Sustained Anticipatory Attention in Infants over the Second Half of the First Year of Life. International Journal of Psychophysiology 32: 151–172.

Ouyang, F.-Y., Zheng, B., and Jiang, X.-F. (2015). Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems. PLoS ONE 10(10): e0139420.

Oyama, S., Griffiths, P. E., and Gray, R. D. (2001). Introduction: What is developmental systems theory? In S. Oyama, P. E. Griffiths, and R. D. Gray (Eds.), Cycles of contingency: Developmental systems and evolution (pp. 1-11). Cambridge, MA: The MIT Press.

Peitgen, H.-O., Jürgens, H., and Saupe, D. (1992). Fractals for the Classroom. Part One: Introduction to Fractals and Chaos. New York: Springer.

Pellecchia, G., Shockley, K., and Turvey, M. T. (2005). Concurrent Cognitive Task Modulates Coordination Dynamics. Cognitive Science 29: 531-557.

Pessoa, L. (2016). Beyond Disjoint Brain Networks: Overlapping Networks for Cognition and Emotion. Behavioral and Brain Sciences. doi:10.1017/S0140525X15000631, e120.

Petersen, S., Robinson, D., and Morris, J. (1987). Contributions of the Pulvinar to Visual Spatial Attention. Neuropsychologia 25(1): 97–105.

Pilz, K., Veit, R., Braun, C., and Godde, B. (2004). Effects of Co-Activation on Cortical Organization and Discrimination Performance. Neuroreport 15: 2669-2672.

Pleger, B., Foerster, A. F., Ragert, P., Dinse, H. R., Schwenkreis, P., Malin, J. P., Nicolas, V., and Tegenthoff, M. (2003). Functional Imaging of Perceptual Learning in Human Primary and Secondary Somatosensory Cortex. Neuron 40(3): 643-653.

Port, N. L., Kruse, W., Lee, D., and Georgopoulos, A. P. (2001). Motor Cortical Activity during Interception of Moving Targets. Journal of Cognitive Neuroscience 13: 306–318.

Press, C., Cook, J., Blakemore, S.-J., and Kilner, J. (2011). Dynamic Modulation of Human Motor Activity when Observing Actions. Journal of Neuroscience 31: 2792–2800.

Raizada, R., and Grossberg, S. (2003). Towards a Theory of the Laminar Architecture of Cerebral Cortex: Computational Clues from the Visual System. Cerebral Cortex 13: 100-113.

Raja, V. (2018). A Theory of Resonance: Towards an Ecological Cognitive Architecture. Minds and Machines 28(1): 29-51.

Raja, V. (in Preparation). The Significance of Embodiment.

Raja, V., Biener, Z., and Chemero, A. (2017). From Kepler to Gibson. Ecological Psychology 29(2): 1-15.

Reed, E. S. (1988). James J. Gibson and the Psychology of Perception. New Haven, CT: Yale University Press.

Reed, E. S. (1996). Encountering the World: Toward an Ecological Psychology. New York, NY: Oxford University Press.

Richardson, M. J., and Chemero, A. (2014). Complex Dynamical Systems and Embodiment. In L. Shapiro (Ed.). The Routledge Handbook of Embodied Cognition (pp. 39-50). New York: Routledge.

Richardson, M. J., Shockley, K., Fajen, B. R., Riley, M. A., and Turvey, M. T. (2008). Ecological Psychology: Six Principles for an Embodied-Embedded Approach to Behavior. In P. Calvo, and T. Gomila (Eds.), Handbook of Cognitive Science: An Embodied Approach (pp. 161-188). San Diego, CA: Elsevier.

Rigotti, M., Barak, O., Warden, M. R., Wang, X. J., Daw, N. D., Miller, E. K., and Fusi, S. (2013). The Importance of Mixed Selectivity in Complex Cognitive Tasks. Nature 497(7451): 585-90.

Riley, M. A., and Van Orden, G. C. (Eds.) (2005). Tutorials in Contemporary Nonlinear Methods for the Behavioral Sciences. http://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.jsp

Rizzolatti, G., Fadiga, L., Fogassi, L., and Gallese, V. (1996). Premotor Cortex and the Recognition of Motor Actions. Cognitive Brain Research 3: 131–141.

Robinson, D., and Petersen, S. (1985). Responses of Pulvinar Neurons to Real and Self-Induced Stimulus Movement. Brain Research 338(2): 392–394.

Rumelhart, D. E., McClelland, J. L., and the PDP Research Group (1986). Parallel Distributed Processing (Vol. 1 & 2). Cambridge, MA: MIT Press.

Runeson, S. (1977). On the Possibility of “Smart” Perceptual Mechanisms. Scandinavian Journal of Psychology 18: 172-179.

Russell, B. (1900). A Critical Exposition of the Philosophy of Leibniz with an Appendix of Leading Passages. Cambridge, UK: Cambridge University Press.

Sanches de Oliveira, G., Raja, V., and Chemero, A. (in Preparation). Radical Embodiment and “Real” Cognition.

Schmidt, R., and Richardson, M. (2008). Dynamics of interpersonal coordination. In A. Fuchs and V. K. Jirsa (Eds.), Coordination: Neural, behavioral, and social dynamics (pp. 282-308), Heidelberg, Germany: Springer.

Scholz, J. P., Schöner, G., and Latash, M. L. (2000). Identifying the Control Structure of Multijoint Coordination During Pistol Shooting. Experimental Brain Research 135: 382-404.

Schöner, G. (2008). Dynamical Systems Approaches to Cognition. In R. Sun (Ed.), The Cambridge Handbook of Computational Psychology (pp. 101-126). Cambridge, UK: Cambridge University Press.

Schöner, G., Spencer, J. P., and the DFT Research Group (2016). Dynamic Thinking: a Primer on Dynamic Field Theory. Oxford, UK: Oxford University Press.

Schroeder, M. (1991). Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise. Mineola, NY: Dover Publications.

Schwartz, E. (1990). Computational Neuroscience. Cambridge, MA: The MIT Press.

Shaffer, D. M., Krauchunas, S. M., Eddy, M., and McBeath, M. K. (2004). How Dogs Navigate to Catch Frisbees. Psychological Science 15(7): 437-441.

Shapiro, L. (Ed.) (2014). The Routledge Handbook of Embodied Cognition. New York: Routledge.

Shockley, K., Carello, C., and Turvey, M. T. (2004). Metamers in the haptic perception of heaviness and moveableness. Perception & Psychophysics 66: 731-742.

Singer, W. (2005). The Brain—An Orchestra without a Conductor. Max Planck Research 3: 15-18.

Singer, W., and Gray, C. M. (1995). Visual Feature Integration and the Temporal Correlation Hypothesis. Annual Review of Neuroscience 18: 555–586.

Skarda, C. A., and Freeman, W. J. (1987). How Brains Make Chaos in order to Make Sense of the World. Behavioral and Brain Sciences 10: 161-195.

Sommer, W., Leuthold, H., and Soetens, E. (1999). Covert Signs of Expectancy in Serial Reaction Time Tasks Revealed by Event-Related Potentials. Perception and Psychophysics 61(2): 342–353.

Sommer, W., Matt, J., and Leuthold, H. (1990). Consciousness of Attention and Expectancy as Reflected in Event-Related Potentials and Reaction Times. Journal of Experimental Psychology: Learning, Memory, and Cognition 16(5): 902–915.

Spencer, J. P. (2009). Toward a Unified Theory of Development: Connectionism and Dynamic Systems Theory Re-Considered. Oxford, UK: Oxford University Press.

Spencer, J. P., Smith, L. B., and Thelen, E. (2001). Tests of a Dynamic Systems Account of the A-Not-B Error: The Influence of Prior Experience on the Spatial Memory Abilities of Two-Year-Olds. Child Development 72(5): 1327-1346.

Squires, K. C., and Donchin, E. (1976). Beyond Averaging: The Use of Discriminant Functions to Recognize Event Related Potentials Elicited by Single Auditory Stimuli. Electroencephalography and Clinical Neurophysiology 41(5): 449–459.

Squires, K. C., Wickens, C., Squires, N. K., and Donchin, E. (1976). The Effect of Stimulus Sequence on the Waveform of the Cortical Event-Related Potential. Science 193(4258): 1142-1146.

Stein, H. (2004). Newton’s metaphysics. In I. B. Cohen and G. E. Smith (Eds.), The Cambridge companion to Newton (pp. 256-307), Cambridge, UK: Cambridge University Press.

Sterratt, D., Graham, B., Gillies, A., and Willshaw, D. (2011). Principles of Computational Modelling in Neuroscience. Cambridge, UK: Cambridge University Press.

Strogatz, S. H. (1994). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Reading, MA: Perseus Books.

Strogatz, S. H. (2003). Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life. New York: Hyperion.

Sun, H., and Frost, B. J. (1998). Computation of Different Optical Variables of Looming Objects in Pigeon Nucleus Rotundus Neurons. Nature Neuroscience 1(4): 296-303.

Swenson, R. (1992). Autocatakinetics, Yes—Autopoiesis, No: Steps towards a Unified Theory of Evolutionary Ordering. International Journal of General Systems 21(2): 207-228.

Swenson, R. (1997). Autocatakinetics, Evolution, and the Law of Maximum Entropy Production: A Principled Foundation toward the Study of Human Ecology. Advances in Human Ecology 6: 1-46.

Swenson, R., and Turvey, M. T. (1991). Thermodynamic Reasons for Perception-Action Cycles. Ecological Psychology 3(4): 317-348.

Temprado, J. J., Monno, A., Zanone, P. G., Kelso, J. A. S. (2002). Attentional Demands Reflect Learning-induced Alterations of Bimanual Coordination Dynamics. European Journal of Neuroscience 16: 1390-1394.

Thagard, P. (2005). Mind: Introduction to Cognitive Science. Cambridge, MA: The MIT Press.

Thompson, E., and Varela, F. (2001). Radical Embodiment: Neural Dynamics and Consciousness. Trends in Cognitive Sciences 5(10): 418-425.

Todd, J. T. (1981). Visual Information about Moving Objects. Journal of Experimental Psychology: Human Perception and Performance 7(4): 795-810.

Tognoli, E., and Kelso, J. A. S. (2009). Brain Coordination Dynamics: True and False Faces of Phase Synchrony and Metastability. Progress in Neurobiology 87: 31–40.

Tognoli, E., and Kelso, J. A. S. (2014). The Metastable Brain. Neuron 81: 35-48.

Tuller, B., Fitch, H. L., and Turvey, M. T. (1982). The Bernstein perspective: II. The concept of muscle linkage or coordinative structure. In J. A. S. Kelso (Ed.), Understanding Human Motor Control. Champaign, IL: Human Kinetics.

Turvey, M. T. (1992). Ecological Foundations of Cognition: Invariants of Perception and Action. In H. L. Pick, Jr., P. W. van den Broek, and D. C. Knill (Eds.), Cognition: Conceptual and Methodological Issues (pp. 85-117). Washington, DC: American Psychological Association.

Turvey, M. T., Burton, G., Pagano, C. C., Solomon, H. Y., and Runeson, S. (1992). Role of the inertia tensor in perceiving object orientation by dynamic touch. Journal of Experimental Psychology: Human Perception and Performance 18: 714-727.

Turvey, M. T., Fitch, H. L., and Tuller, B. (1982). The Bernstein perspective: I. The problems of degrees of freedom and context-conditioned variability. In J. A. S. Kelso (Ed.), Understanding Human Motor Control. Champaign, IL: Human Kinetics.

Turvey, M. T., Shaw, R. E., and Mace, W. (1978). Issues in the Theory of Action: Degrees of Freedom, Coordinative Structures and Coalitions. In J. Requin (Ed.), Attention and Performance VII (pp. 557-595). Hillsdale, NJ: Erlbaum.

Turvey, M., Shaw, R., Reed, E. S., and Mace, W. (1981). Ecological Laws for Perceiving and Acting: A reply to Fodor and Pylyshyn. Cognition 10: 237–304.

Turvey, M. T., Shockley, K., and Carello, C. (1999). Affordance, Proper Function, and the Physical Basis of Perceived Heaviness. Cognition 73: B17–B26.

Uhlhaas, P. J., Pipa, G., Lima, B., Melloni, L., Neuenschwander, S., Nikolić, D., and Singer, W. (2009). Neural Synchrony in Cortical Networks: History, Concept and Current Status. Frontiers in Integrative Neuroscience 3: 17.

van der Weel, F. R., and van der Meer, A. L. H. (2009). Seeing It Coming: Infants’ Brain Responses to Looming Danger. Naturwissenschaften 96: 1385-1391.

van Dijk, L., and Withagen, R. (2014). The Horizontal Worldview: A Wittgensteinian Attitude towards Scientific Psychology. Theory & Psychology 24: 3-18.

van Dijk, L., and Withagen, R. (2016). Temporalizing Agency: Moving beyond On- and Offline Cognition. Theory & Psychology 26: 5-26.

van Gelder, T. (1995). What Might Cognition Be, if not Computation? Journal of Philosophy 92(7): 345-381.

van Gelder, T. (1998). The Dynamical Hypothesis in Cognitive Science. Behavioral and Brain Sciences 21(5): 615-665.

van Gelder, T., and Port, R. (1995). It’s About Time: An Overview of the Dynamical Approach to Cognition. In R. F. Port and T. van Gelder (Eds.), Mind as Motion: Explorations in the Dynamics of Cognition. Cambridge, MA: The MIT Press.

Van Orden, G. C., and Holden, J. G. (2002). Intentional Contents and Self-Control. Ecological Psychology 14: 87–109.

Van Orden, G. C., Holden, J. G., and Turvey, M. T. (2003). Self-Organization of Cognitive Performance. Journal of Experimental Psychology: General 132(3): 331-350.

Van Orden, G. C., Hollis, G., and Wallot, S. (2012). The Blue-Collar Brain. Frontiers in Psychology 3: art. 207.

Varela, F. (1996). Neurophenomenology: A Methodological Remedy to the Hard Problem. Journal of Consciousness Studies 3: 330–350.

Vidyasagar, R., Folger, S. E., and Parkes, L. M. (2014). Re-Wiring the Brain: Increased Functional Connectivity within Primary Somatosensory Cortex following Synchronous Co-Activation. Neuroimage 92: 19-26.

Wang, X. J. (2010). Neurophysiological and Computational Principles of Cortical Rhythms in Cognition. Physiological Reviews 90: 1195–1268.

Wang, Y., and Frost, B. J. (1992). Time to Collision Is Signalled by Neurons in the Nucleus Rotundus of Pigeons. Nature 356: 236–238.

Ward, L. M. (1979). Stimulus Information and Sequential Dependencies in Magnitude Estimation and Cross-Modality Matching. Journal of Experimental Psychology: Human Perception and Performance 5(3): 444.

Ward, L. M. (2002). Dynamical Cognitive Science. Cambridge, MA: MIT Press.

Warren, W. H. (1998). Visually Controlled Locomotion: 40 years Later. Ecological Psychology 10(3-4): 177-219.

Warren, W. H. (2006). The Dynamics of Perception and Action. Psychological Review 113(2): 358-389.

Warren, W. H., and Fajen, B. R. (2004). From Optic flow to Laws of Control. In V. F. Hendricks, and J. Symons (Eds.), Optic Flow and Beyond (pp. 307-338). Berlin, Germany: Springer.

Warren, W. H., Kay, B. A., Zosh, W. D., Duchon, A. P., and Sahuc, S. (2001). Optic Flow Is Used to Control Human Walking. Nature Neuroscience 4(2): 213-216.

Wertheimer, M. (1923). Laws of Organization in Perceptual Forms (W. Ellis., Trans.). In W. Ellis (Ed.), A Source Book of Gestalt Psychology (pp. 71–88). London, UK: Routledge and Kegan Paul. (This edition and, therefore, this translation published in 1938).

Wilkie, R., and Wann, J. (2003). Controlling Steering and Judging Heading: Retinal Flow, Visual Direction, and Extraretinal Information. Journal of Experimental Psychology: Human Perception and Performance 29(2): 363-378.

Wilson, R. (2004). Boundaries of the Mind. Cambridge, UK: Cambridge University Press.
