Ruhr-Universität Bochum

Fakultät für Philosophie und Erziehungswissenschaft

THE PROBLEM OF PSYCHONEURAL ISOMORPHISM BETWEEN VISUAL OBJECTS AND THEIR NEURAL CORRELATES

Inaugural dissertation for the attainment of the academic degree of Doctor of Philosophy

submitted by ALFREDO VERNAZZANI from NAPLES (ITALY)

First examiner: Prof. Dr. Tobias Schlicht
Second examiner: Prof. Marcin Miłkowski

Dean: Prof. Dr. Corinna Mieth

Oral examination: Bochum, 16 May 2018

Statutory Declaration (Eidesstattliche Erklärung)

First and last name: Alfredo Vernazzani
Date of birth: 30.07.1984
Place of birth: Naples (Italy)

I hereby declare in lieu of oath

- that I have written the submitted dissertation independently and without improper outside assistance, that I have used no literature other than that cited in it, and that I have marked all passages taken over verbatim or in close paraphrase, as well as all graphics, tables, and analysis programs used. I further declare that the submitted electronic version of the dissertation is identical to the written version, and that this thesis has not previously been submitted and evaluated, in this or a similar form, as a doctoral dissertation elsewhere.

[Signature] Place, date

DR. ALFREDO VERNAZZANI
Curriculum Vitae — March 2019

EMPLOYMENT AND APPOINTMENTS

September 2019 - May 2020: Visiting Scholar, HARVARD UNIVERSITY, GRADUATE SCHOOL OF EDUCATION (accepted). Sponsor: Prof. Catherine Elgin.
April 2019 - May 2020: Post-Doc Researcher (wissenschaftlicher Mitarbeiter), RUHR-UNIVERSITÄT BOCHUM.
October 2018 - March 2019: Research Fellow, RUHR-UNIVERSITÄT BOCHUM.
April - July 2018: Lehrbeauftragter (Lecturer), RUHR-UNIVERSITÄT BOCHUM.
October 2016 - March 2017: Lehrbeauftragter (Lecturer), RHEINISCHE FRIEDRICH-WILHELMS-UNIVERSITÄT BONN.

RESEARCH AREAS

AOS: Philosophy of Mind & Perception (Contents of Visual Experience, Structure of Visual Objects, Neural Correlates of Consciousness, Visual Aesthetics); Philosophy of Psychology & of Cognitive Science (Mechanistic Explanation, Intertheoretic Integration, Scientific Models).
AOC: Metaphysics; History of Philosophy; Aesthetics; Philosophy of AI.

EDUCATION

10.2016 - May 16, 2018: PHD IN PHILOSOPHY, RUHR-UNIVERSITÄT BOCHUM. Dissertation: «The Problem of Psychoneural Isomorphism Between Visual Objects and their Neural Correlates». Mark: Summa cum laude. Primary Supervisor: Prof. Tobias Schlicht (Bochum). Secondary Supervisor: Prof. Marcin Miłkowski (Polish Academy of Sciences).
October 2015 - March 2016: VISITING PHD STUDENT AT THE UNIVERSITY OF CAMBRIDGE. Faculty of Philosophy, University of Cambridge, UK. Sponsor: Prof. Tim Crane.

2013-2016 PHD STUDENT AT THE UNIVERSITÄT BONN (left for Bochum) Supervisor: Prof. Andreas Bartels.

2012 MA IN PHILOSOPHY: HUMBOLDT-UNIVERSITÄT ZU BERLIN Mark: Sehr gut

2009 BA IN PHILOSOPHY: UNIVERSITÀ DEGLI STUDI ‘FEDERICO II’ Mark: 110/110 cum laude

FURTHER EDUCATION

2017, March 6-10: Spring School in Social Cognition, Emotion, and Joint Action. Ruhr-Universität Bochum, Germany.
2017, March 3-4: Master Class: Three lectures by Prof. Jakob Hohwy on Predictive Coding and the Mind. Ruhr-Universität Bochum, Germany.

2016, September-October: Neo-Aristotelian Approaches to the Metaphysics of the Mind Summer School. The Harry Wilks Study Center at the Villa Vergiliana, Bacoli, Italy (University of Oxford).

2013, August: Summer School in Phenomenology and Philosophy of Mind. Center for Subjectivity Research, University of Copenhagen.

PUBLICATIONS

• Articles and Book Chapters († = invited; * = non-peer reviewed)
- “Do We See Facts?” Under review (Revisions).
Forth. “Psychoneural Isomorphism: From Metaphysics to Robustness.” † In F. Calzavarini & M. Viola (eds.), New Challenges in the Philosophy of Neuroscience. Springer.
2017 “The Structure of Sensorimotor Explanation.” «Synthese». DOI: 10.1007/s11229-017-1664-9.
2016 “Fenomenologia naturalizzata nello studio dell’esperienza cosciente.” «Rivista di filosofia» 107(1): 27-48. DOI: 10.1413/82722.
2016 “Psychoneural Isomorphism and Content-NCCs.” † * «Gestalt Theory» 38(2-3): 177-190.
2015 “Manipulating the Contents of Consciousness.” In Noelle, D. C., Dale, R., Warlaumont, A. S., Yoshimi, J., Matlock, T., Jennings, C. D., & Maglio, P. P. (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (pp. 2487-2492). Austin, TX: Cognitive Science Society.
2014 “Sensorimotor Laws, Mechanisms, and Representations.” In Bello, P., Guarini, M., McShane, M., & Scassellati, B. (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 3038-3042). Austin, TX: Cognitive Science Society.
2011 “Die europäische Identität / L’identità europea” [ITA/DE]. † * In Gregor Vogt-Spira, Anke Fischer, & Luigi Galimberti-Faussone (eds.), Die Zukunft Europas. Il futuro dell’Europa (pp. 122-130). Stuttgart: Franz Steiner Verlag.

• Editor
Forth. Guest editor for a Special Issue of «Synthese» on «The structure of perceptual objects», with Dr. Blazej Skrzypulec and Prof. Tobias Schlicht. Accepted, in preparation.

• Reviews
2013 Review of Marco Sgarbi (ed.): The Kant-Weymann Controversy: Two Polemical Writings on Optimism. Verona: Aemme Edizioni (2010). «Rivista di filosofia» 104(1).

• Non-academic
2014 “Il mestiere di pensare.” Review of Diego Marconi: Il mestiere di pensare. Torino: Einaudi (2014). «Uncommons.it», http://www.uncommons.it/village/il-mestiere-di-pensare-535

• In preparation
“Philosophy of Perception as Model-Building”
“Seeing Things Aesthetically”
“Visual Arts and the Aesthetic Depth of Seeing”
“Visual Habits” (with Flavia Felletti)
“Embodied Cognition: A Guide for the Perplexed”

CONFERENCES

• Keynote speaker
2018, June 30 - July 2: “Visualizing Vision: Pictures and Visual Perception.” Philosophers’ Rally, University of Łódź, Łódź, Poland.

• Invited Speaker
2019, January 17: “Visual Arts and the Aesthetic Depth of Seeing.” London Mind Group, Senate House, University of London, London, UK.
2018, April 6: “The Structure of Sensorimotor Explanation.” Neural Mechanisms Online, Webinar on Neural Mechanisms, IUSS Pavia. With responses from Mazviita Chirimuuta (Pittsburgh) and Dan Burnston (Tulane University).
2017, November 22: “Embodied Cognition: A Guide for the Perplexed.” CEPERC-ILCB, Aix-Marseille Université, Aix-en-Provence.
2017, May 19-20: “Do We See Facts?” Second Bochum-Rutgers Workshop in Philosophy, Ruhr-Universität Bochum, Bochum, Germany.
2016, November 24-26: “Perception and Speculative Materialism: Meillassoux on Primary and Secondary Qualities.” The New Faces of Realism, Bergische Universität Wuppertal, Germany.
2013, March 27-28: “Das Isomorphismusproblem zwischen personalem und subpersonalem Niveau in der visuellen Wahrnehmung.” Doktorandentagung Wissenschaftsphilosophie, Leibniz-Universität Hannover, Germany.

• Refereed Talks

2019, July 19-21: “What Is Embodiment?” 93rd Joint Session of the Aristotelian Society and Mind Association, Durham University, Durham, UK.
2019, May 22-24: “The Aesthetic Depth of Seeing.” AISC Midterm Conference, IMT School of Advanced Studies, Lucca, Italy.
2018, October 5-6: “A Mechanistic Framework for the Neural Correlates of Conscious Visual Object Perception.” Neural Mechanisms Online, Webconference.
2018, June 25-27: “The Structure of Sensorimotor Explanation.” AISC Midterm Conference, Genoa, Italy.
2017, September 24-27: “Is There Scientific Evidence That We See Facts?” XXIV. Kongress der deutschen Gesellschaft für Philosophie, Humboldt-Universität, Berlin, Germany.
2017, September 21-23: “Philosophy of Perception as Model-Building.” Fourth Philosophy of Language and Mind Conference, Ruhr-Universität Bochum, Germany.

2017, August 21-26: “The Structure of Sensorimotor Explanation.” 9th European Congress of Analytic Philosophy, Ludwig-Maximilians-Universität, Munich, Germany.
2017, August 21-26: “Trope Representationalism and Mental Mechanisms.” 9th European Congress of Analytic Philosophy, Ludwig-Maximilians-Universität, Munich, Germany.
2016, June 23-25: “Visual Perception, Trope Similarity, and Intentional Mechanisms.” Mechanistic Integration and Unification in Cognitive Science, Polish Academy of Sciences, Warsaw, Poland.
2015, August 3-8: “The Neural Correlates of Conscious Content from a Mechanistic Standpoint.” 15th Congress of Logic, Methodology and Philosophy of Science, Helsinki, Finland.
2015, June 9-13: “Looking for the Mechanisms of the Contents of Consciousness.” Toward a Science of Consciousness, Helsinki, Finland.
2015, May 21-23: “Psychoneural Isomorphism and Intentional Mechanisms.” 19th Scientific GTA Convention ‘Body, Mind, Expression,’ University of Parma, Italy.
2015, February 26-28: “Mechanisms and the Intentional Content of Experience.” Rudolf-Carnap-Lecture 2015: John Campbell, Ruhr-Universität Bochum, Germany.
2014, September 4-6: “Representations and Sensorimotor Explanation.” SOPhiA2014, University of Salzburg, Austria.
2014, August 28 - September 2: “Explaining Consciousness without Gaps.” 8th European Congress of Analytic Philosophy, University of Bucharest, Romania.
2014, July 11-13: “Vividness and the Levels of Consciousness.” 88th Joint Session of the Aristotelian Society and Mind Association, Fitzwilliam College, University of Cambridge, UK.
2014, April 24-26: “Mechanisms and First-Person Accounts of Consciousness.” 12th Annual Meeting of the Nordic Society for Phenomenology, Helsinki, Finland.

• Selected Non-Refereed Talks
2018, July 23: “Do We See Facts?” First Bochum Graduate Workshop in Philosophy of Mind and of Cognitive Science, Ruhr-Universität Bochum, Germany.
2017, October 20: “Philosophy of Perception as Model-Building.” Ockham Society, University of Oxford, UK.
2017, May 22-23: “Model Building in the Philosophy of Perception.” Naturalistic Approaches to Content and Consciousness, Polish Academy of Sciences.
2016, February 16: “Do We See Facts?” New Directions in the Study of the Mind, Peterhouse, University of Cambridge, UK.


• Refereed Poster Presentations
2017, June 27-29: “Phenomenological Models of Perceptual Content.” The Human Mind Conference, The Møller Centre, University of Cambridge, UK.
2016, August 10-13: “Do We Perceive Facts?” 24th Annual Meeting of the European Society of Philosophy and Psychology, St. Andrews, Scotland.
2015, July 22-25: “Manipulating the Contents of Consciousness.” CogSci 2015: 37th Annual Meeting of the Cognitive Science Society, Pasadena Convention Center, Pasadena, California, USA.
2014, July 23-26: “Sensorimotor Laws, Mechanisms, and Representations.” CogSci 2014: 36th Annual Meeting of the Cognitive Science Society, Québec City Convention Center, Québec City, Canada.

• Comments (invited)
2019, May 31: Comment on Abel Wajnerman Paz, “The Global Neuronal Workspace as an efficient broadcasting network.” Neural Mechanisms Online. Webinar on Neural Mechanisms, IUSS Pavia.
2018, December 14: Comment on Matteo Grasso’s “IIT vs Russellian Monism.” Neural Mechanisms Online. Webinar on Neural Mechanisms, IUSS Pavia.
2018, June 1: Comment on Lena Kästner’s “On the Mechanistic Triad.” Neural Mechanisms Online, Webconference. Webinar on Neural Mechanisms, IUSS Pavia.

• Selected Attended Workshops and Conferences
2018, June 20: Workshop “True Enough” with Prof. Catherine Elgin. Organized by Prof. Catrin Misselhorn. Universität Stuttgart.
2016, May 20-21: “Mental Representation – Naturalistic Approaches.” Organized by Prof. Nicholas Shea (King’s College, London). Institute of Philosophy, School of Advanced Studies.
2015, September 21-23: “Mental Representations: The Foundations of Cognitive Science?” Organized by Prof. Tobias Schlicht (Ruhr-Universität Bochum). Ruhr-Universität Bochum, Germany.
2014, July 23: “Workshop: Cognitive Science and the Arts.” Organized by B. Tversky, P. Healey, D. K. Kirsh. CogSci 2014, Québec City, Canada.

SCHOLARSHIPS, HONORS, AND GRANTS

• Honors
2011, July 6-7: INVITATION TO MEET THE ITALIAN AND THE GERMAN FEDERAL PRESIDENT. Villa Vigoni, Deutsch-Italienisches Zentrum für europäische Exzellenz, Loveno di Menaggio, Italy. Personally invited, together with 28 young researchers from Italy and Germany, to meet the German Federal President Christian Wulff and the Italian President Giorgio Napolitano. The meeting was preceded by a workshop on European issues: ‘Il futuro dell’Europa’ / ‘Die Zukunft Europas’.


• Scholarships
11/2016 – 09/2018 FELLOWSHIP (25,300€) – Ruhr-Universität Bochum.
11/2014 – 10/2016 PHD SCHOLARSHIP (26,400€) – Barbara-Wengeler Stiftung.

10/2010 – 12/2011 DAAD SCHOLARSHIP (12,000€) – DAAD: One-year scholarship for the MA at the Humboldt-Universität zu Berlin.

2/2009 – 3/2009 GOETHE-INSTITUT SCHOLARSHIP (free two-month intensive language course) – Goethe-Institut, Freiburg i.B.

• Conference Subsidies and Minor Scholarships
2019 Travel bursary for the invited talk at the London Mind Group – London Mind Group.
2018 Travel and accommodation for the Philosophers’ Rally – University of Łódź.
2017 Travel and accommodation for the talk “Embodied Cognition: A Guide for the Perplexed” – Aix-Marseille Université, Aix-en-Provence.
2016 Registration costs, conference dinner, and hotel accommodation covered for the conference “The New Faces of Realism” – Bergische Universität Wuppertal.
2016 Conference subsidy for “Mechanistic Integration and Unification in Cognitive Science” – Rheinische Friedrich-Wilhelms-Universität Bonn.
2016 Bursary for the conference “Mental Representation – Naturalistic Approaches” – Arts and Humanities Research Council (AHRC).
2015 Conference subsidy for CLMPS 2015 – Barbara-Wengeler Stiftung.
2015 Conference subsidy for CogSci 2015 – Barbara-Wengeler Stiftung.
2015 Conference subsidy for Toward a Science of Consciousness – Rheinische Friedrich-Wilhelms-Universität Bonn.
2015 Conference subsidy for the 19th Scientific Gestalt Convention – Society for Gestalt Theory and Its Applications.
2015 Conference subsidy for the Carnap-Lecture 2015 – Ruhr-Universität Bochum.
2014 Conference subsidy for the 88th Joint Session – Aristotelian Society.
2011 Full costs (travel, accommodation, meals) for “Il futuro dell’Europa” – Villa Vigoni, Deutsch-Italienisches Zentrum für europäische Exzellenz.

TEACHING EXPERIENCE (AS PRIMARY INSTRUCTOR)
† = Taught in German.

• Ruhr-Universität Bochum

WS 19/20: John McDowell’s «Mind and World». Postgraduate course (MA).
WS 19/20: Analytische Bewusstseinstheorien. † [“Analytic Theories of Consciousness”] Graduate course (BA).
SS 2019: Philosophy of Attention: Sebastian Watzl’s «Structuring Mind». Postgraduate course (MA).
SS 2019: Eine Einführung in die Wahrnehmungsphilosophie. † [“An Introduction to the Philosophy of Perception”] Graduate course (BA).
SS 2018: Philosophische Probleme wissenschaftlicher Modelle. † [“Philosophical Problems of Scientific Models”] Graduate course (BA).
SS 2018: Teleosemantics. Taught with Prof. Tobias Schlicht. Postgraduate course (MA).

• Rheinische Friedrich-Wilhelms-Universität Bonn
2017, January 31: Wahrnehmungsgehalt. † [“Perceptual Content”] Ringvorlesung “Einführung in die Philosophie” (BA, MA).
WS 16/17: Philosophie der Wahrnehmung: Eine Einführung. † [“Philosophy of Perception: An Introduction”] Postgraduate course (MA).

ACADEMIC MEMBERSHIPS

• Societies
Since 2014: Nordic Society for Phenomenology (NSoP)
Since 2014: Aristotelian Society
Since 2010: European Society for Early Modern Philosophy (ESEMP)
2014 – 2016: Cognitive Science Society (CogSci)
Since 2015: Society for the Philosophy of Information (SPI)
Since 2015: Société de philosophie analytique (SoPhA)
2015 – 2016: The Cambridge Moral Sciences Club
Since 2015: Society for the Metaphysics of Science
Since 2015: Society for Philosophy of Science in Practice (SPSP)
Since 2016: European Society for Philosophy and Psychology (ESPP)
Since 2019: Associazione Italiana di Scienze Cognitive (AISC)

• Research groups
Since 2019: External member of «Aesthetics and Ethics» (AERG)

REVIEWER

• Conferences and Workshops
2019 «Carnap Lecture» with Frances Egan & Robert Matthews (Ruhr-Universität Bochum); «Philosophical Perspectives on Medical Knowledge» (University of Genoa); «Fiction, Imagination, and Epistemology» (Ruhr-Universität Bochum); EuroCogSci 2019 (Ruhr-Universität Bochum).
2018 «Carnap Lecture» with Thomas Metzinger (Ruhr-Universität Bochum); Scientific Committee «New Challenges in Philosophy of Neuroscience» (University of Pavia-Milano).
2017 «Carnap Lecture» with Patricia Churchland (Ruhr-Universität Bochum); 4th Philosophy of Language and Mind Conference (Ruhr-Universität Bochum); Workshop «Evolving Enactivism» (Ruhr-Universität Bochum).

• Journals

Phenomenology and the Cognitive Sciences (2017); Philosophical Psychology (3x) (2016-2017); Philosophical Explorations (2018); Synthese (6x) (2018); Open Philosophy (2019).

• Publishers

Oxford University Press (2018); Springer (2019).

• Other
Project Reviewer for the Research School Plus (Ruhr-Universität Bochum)

CONFERENCES AND WORKSHOPS ORGANIZED
2019, September 2-4: Scientific Manager for EuroCogSci 2019. Main organizer: Prof. Albert Newen. Ruhr-Universität Bochum.
2019, June 13-14: Co-organizer of the Second Bochum Early Career Workshop in Philosophy of Mind and of Cognitive Science. Keynote speakers: Prof. Onur Güntürkün (RUB), Dr. Rebekka Hufendiek. Ruhr-Universität Bochum.
2018, July 23-24: Founder and Co-organizer of the 1st Bochum Graduate Workshop in Philosophy of Mind and of Cognitive Science. Keynote speakers: Prof. Jim Pryor (NYU), Dr. Eva Schmidt (UZH). Ruhr-Universität Bochum.
2015, July 21-25: Student Volunteer in the Organization of CogSci 2015. Pasadena Convention Center, Pasadena, California, USA.

LANGUAGES

• Languages spoken
Italian (mother tongue)
French (speaking: basics, reading: good)
English (C2, fluent)
Spanish (reading: good)
German (C2, fluent)
Latin (good)
Ancient Greek (basics)

• Language courses
2011, June 6 – July 7: Intensive Language Course: French. Alliance Française, Paris.
2008-2009, September – March: Intensive Language Course: German. Goethe-Institut.
2007, March 5 – April 8: Intensive Language Course: German. Goethe-Institut, Berlin.

ACADEMIC TRAINING
2016, March 18: Training on Lecturing Performance – Trainer: Stewart Theobald. University of Cambridge, UK.


2016, January: Interview Technique Workshop – Trainer: Alan Fawcitt. University of Cambridge, UK.
2015, October: Certificate on Teaching Skills – Instructor: Dr. Arif Ahmed. University of Cambridge, UK.

OTHER NON-ACADEMIC ACTIVITIES

• Prize
25/5/1998: Absolute first prize (certificate 100/100) as a member of the orchestra of the S.M.S. ‘Ruggiero’ of Caserta (classical guitarist). Musical Association ‘Eureka’: 5th National Competition of Musical Performance «Città di Melito».

• Non-Academic Memberships
Member of ANPI Deutschland (since 2014)
President of ANPI Deutschland (2016-2017)
[ANPI: National Association of Italian Partisans; Italian antifascist organization]

• Public Outreach
17/4/2015: Interviewed on the importance of anti-fascism today and the present role of A.N.P.I., during the celebration of the 70th Anniversary of the Italian Liberation. Interview by Agnese Franceschini (RadioColonia). http://www.funkhauseuropa.de/sendungen/radio_colonia/il_tema/partisanen100.html
9/6/2014: Interviewed on anti-fascism and the role of A.N.P.I. in Germany during the festival ‘Birlikte-Zusammenstehen’ (ten years after the bombing attack). Interview by Agnese Franceschini (RadioColonia). http://www.funkhauseuropa.de/sendungen/radio_colonia/il_tema/keupstrasse190.html
11/10/2013: “Alfredo fa ricerca in Germania: la locomotiva d’Europa raccontata da un campano.” Interview by Giulio Pitroso (GenerazioneZero Sicilia). http://www.generazionezero.org/blog/2013/10/11/alfredo-fa-ricerca-in-germania-la-locomotiva-deuropa-raccontata-da-un-campano/
10/2/2012: “Ich möchte wieder stolz sein auf das Bild Italiens in der Welt.” Interview by Patricia Liberatore (Konrad-Adenauer-Stiftung). Konrad-Adenauer-Stiftung Aquädukt. http://kas-aquaedukt.de/ich-mochte-wieder-stolz-sein-auf-das-bild-italiens-in-der-welt/

• Events Organized as a Member of ANPI Deutschland
12/11/2016: Organization of the theatre piece «Canto dei deportati», written by Maria Filograsso, performed by Maria Filograsso and Giulio Bufo. A.N.P.I. Deutschland — Interkulturelles Zentrum “offene Welt.”
31/5/2016: Organization of the celebrations for the 2nd of June (‘Festa della Repubblica’ – Republic Day) and the 70th anniversary of women’s suffrage in Italy. In collaboration with the Italian Cultural Institute (IIC) and the Italian Consulate in Cologne. Invited speaker: Dr. Michela Ponzani, Archivio Storico del Senato (Senate of the Republic). Chairing: Alfredo Vernazzani. A.N.P.I. Deutschland - Italienisches Kulturinstitut Köln.

17/4/2015: Organization of the celebrations for the 70th Anniversary of the Liberation of Italy from Nazi and Fascist occupation. In collaboration with the Italian Cultural Institute (IIC) and the Italian Consulate in Cologne. Invited speakers: Tullio Montagna, Member of the national board of A.N.P.I.; Prof. Filippo Focardi, Historian, University of Padova; Prof. Rudolf Lill, Historian, Universität zu Köln. Introduction: MP Laura Garavini, PD (Partito Democratico); Dr. Lucio Izzo, Chairman of the IIC; Dr. Emilio Lolli, Italian Consul General of Cologne. Chairing: Alfredo Vernazzani; Dr. Lucia Beccarelli. A.N.P.I. Deutschland - Italienisches Kulturinstitut Köln.
3/2015: Organization of an educational project with the high school Liceo Linguistico “Italo Svevo” in Cologne. The project consisted of reading and discussing with the students exemplary literary texts on the Resistenza and the nature of fascism. Alfredo Vernazzani: lecturing on Elio Vittorini’s «Uomini e no». Francesca Polistina: lecturing on Beppe Fenoglio’s «Il partigiano Johnny». Gabriele Rasi: lecturing on Italo Calvino’s «Il sentiero dei nidi di ragno». A.N.P.I. Deutschland - Liceo Linguistico Italo Svevo, Köln, Germany.


THE PROBLEM OF PSYCHONEURAL ISOMORPHISM

BETWEEN VISUAL OBJECTS AND THEIR NEURAL CORRELATES

Contents

List of Figures v
Preface vi
Acknowledgments viii

PART I THE PROBLEM

I. THE PROBLEM OF PSYCHONEURAL ISOMORPHISM 1
1. Filling-in, Cartesian Materialism, and Isomorphism 2
1.1 What Is Filling-in? 2
1.2 Dennett on Filling-in 5
1.3 Rejecting Analytic Isomorphism 8
2. Naturalizing Phenomenology and Isomorphism 15
2.1 Naturalizing Phenomenology 15
2.2 Mapping the Neural Correlates of Consciousness 19
2.3 From Matching to Isomorphism? 20
3. The Scope and Aims of this Work 23
3.1 The Relevance of Psychoneural Isomorphism 24
3.2 Focusing on Visual Objects 24

II. OUTLINING A RESEARCH STRATEGY 26
1. The Character of Isomorphism 26
1.1 Defining Isomorphism 27
1.2 What is Meant with “Psycho-Neural”? 29
1.3 PI and the Metaphysics of the Mind-Body Problem 30
2. A Short History of Psychoneural Isomorphism 34
2.1 Fechner, Mach, and Müller 34
2.2 Gestalt Isomorphism 38
2.3 From Second-Order Isomorphism to the Present Day 43
3. How To Study Psychoneural Isomorphism 47
3.1 Outlining A Research Strategy 47
3.2 Outline of the Next Chapters 49

PART II THE PHENOMENOLOGICAL DOMAIN

III. STATES OF SEEING 53
1. States of Seeing 53
1.1 States of Seeing and Visual Perception 53
1.2 Unity of Consciousness and the Visual Field 57
1.3 The Representational Character of Seeing 59
2. Content and Phenomenology 64
2.1.1 What Does “Consciousness” Mean? 64
2.1.2 Phenomenal and Access Consciousness 64
2.1.3 State Consciousness, Creature Consciousness, Background Consciousness 66
2.2 Accessibility and Phenomenal Overflow 68
2.3 Representational Content and Consciousness 69
2.3.1 Intentionalism 69
2.3.2 Varieties of Intentionalism 70


3. The Role of Consciousness in this Work 74
3.1 Consciousness and PI 74
3.2 An Overview of the Phenomenological Domain 76

IV. FACTS, SENSORY INDIVIDUALS, AND SENSORY REFERENCE 78
1. Seeing and the Ontology of Visual Objects 79
1.1 States of Seeing and Visual Properties 79
1.2 Facts 82
2. Factualism and Fish’s Argument 83
2.1 Factualism 83
2.2 William Fish’s Argument for Factualism 85
3. Sensory Individuals and Sensory Reference 87
3.1 Binding and Places 87
3.2 Sensory Individuals as Material Objects 88
3.2.1 Superimposed Objects 88
3.2.2 Dynamic Feature-Object Binding 89
3.3 Sensory Reference 90
4. Tracking and Seeing Facts? 92
4.1 Two Ontological Criteria 92
4.2 Material Objects as Particulars 93
4.2.1 Tracking Thin Particulars? 93
4.2.2 Tracking Thick Particulars? 95
4.3 Fish’s Argument Revisited 97
5. Visual Objects as Constellations of Properties 98
5.1 Material and Visual Objects 98
5.2 Tracking and Binding 99
5.3 Perceptual Content 101

PART III THE NEURAL DOMAIN

V. A MECHANISTIC STANDPOINT ON CONTENT-NCC RESEARCH 105
1. The Contents of Visual Perception 106
1.1 The Content View and Visual Accuracy Phenomena 106
1.2 Content and Consciousness 109
2. Goals and Aims of Content-NCC Research 110
3. The Standard Definition of Content-NCC 112
3.1 Chalmers’ Definition 112
3.2 Problems with the Standard Definition 115
4. A Mechanistic Approach to Content-NCCs 122
4.1 Mechanisms and Mechanistic Explanation 122
4.2 Decomposing Content-NCCs 126
4.2.1 Intentional Mechanisms 126
4.2.2 Selection Mechanisms 129
4.2.3 Proper-NCC 132
4.3 Manipulating the Contents of Consciousness 134
4.3.1 Prerequisite vs. Consequent Neural Activity 134
4.3.2 Manipulation and Mechanisms 137
4.4 A New View of Content-NCC Research 138
5. Schemas, Integration, and Content Ontology 139
5.1 Schema, Sketches and Strategies 140
5.2 Interfield Integration in Consciousness Studies 142
5.3 The Ontology of Visual Content 143


VI. THE STRUCTURE OF SENSORIMOTOR EXPLANATION 144
1. An Outline of the Sensorimotor Theory 145
2. Dynamic System Theory and the Dynamical Hypothesis 149
3. The Explanatory Structure of the Standard SMT 152
3.1 A Nomothetic Explanation 152
3.2 The Mere Description Worry and the Role of Representations 156
4. Towards a Mechanistic SMT 158
4.1 Mechanizing the SMT 159
4.2 The SMT as a Complement to the Orthodoxy 164

PART IV THE ROAD TO STRUCTURE

VII. THE CONFIGURATION AND ONTOLOGY OF VISUAL OBJECTS 171
1. Objects, Universals, and Tropes 172
1.1 Two Constraints on Visual Objects 172
1.2 Against Type-Three Nominalism 173
1.3 Universals and Tropes 178
1.3.1 Universals 179
1.3.2 Tropes 180
2. The Ontology of Visual Objects and Properties 182
2.1 Configuration and Visual Objects 182
2.1.1 The Configuration Constraint 182
2.1.2 Facts and Bundles 186
2.1.2.1 Facts, Facts, and Facts 187
2.1.2.2 Configuration and Bundles 189
2.1.3 Interim Conclusion 190
2.2 The Particularity Constraint 191
2.2.1 Keeping Particularity Within Representationalism 191
2.2.1.1 Schellenberg’s Argument 193
2.2.1.2 Rescuing Particularity within Representationalism 194
2.2.2 Universals, Tropes, and the Particularity of Perception 197
2.2.3 Interim Conclusion 198
3. Tagging Things in the World 198
3.1 Three Trope Theories 198
3.1.1 Standard Trope Theory 199
3.1.2 Resemblance Class Trope Nominalism 200
3.1.3 Natural Class Trope Nominalism 201
3.2 Tropes and Perceptual Tagging 202
3.2.1 The Solution 202
3.2.2 Advantages of Tagged Tropes 205

VIII. TOWARDS PSYCHONEURAL ISOMORPHISM? 208
1. Setting the Stage 208
1.1 Varieties of Isomorphism 209
1.2 Visual Accuracy Phenomena 209
1.3 Intentional Mechanisms 210
1.4 The Broad Picture 211
2. Modeling Visual Objects 213
2.1 What Are Models? 213
2.2 Models of Visual Objects 217
2.2.1 Philosophical Models of Visual Objects 217
2.2.1.1 Justifying Philosophical Models 217
2.2.1.2 Families of Philosophical Models 222
2.2.2 Scientific Models of Visual Objects 224
2.2.3 Models of Visual Objects as Phenomenological Models 227
2.3 Model Pluralism about Visual Objects 230
3. Connecting the Two Domains 232
3.1 The Matching Content Doctrine 233
3.2 Jean Petitot’s Neurogeometry of Vision 236
3.2.1 Neurogeometry and Psychoneural Isomorphism 237
3.2.2 The Limits of Neurogeometry 242
3.2.2.1 Intertheoretic Integration 242
3.2.2.2 The Explanatory Structure of Petitot’s Model 245
3.3 Connecting Morphological Explanations with Mechanisms 246

CONCLUSION 250

BIBLIOGRAPHY 251


List of Figures and Tables

Fig. 1 “How to detect the Blind Spot.” Ch. 1, p. 3
Fig. 2 “Neon Color Spreading and Craik-O’Brien-Cornsweet Effect.” Ch. 1, p. 4
Fig. 3 “Kanizsa Triangle.” Ch. 1, p. 4
Fig. 4 “Perception of Surface Colors.” Ch. 1, p. 10
Fig. 5 “Petitot’s Scheme of Emergent Phenomenal Space.” Ch. 1, p. 22
Fig. 6 “The Two Domains of Psychoneural Isomorphism.” Ch. 2, p. 30
Fig. 7 “The Subsystems of Visual Perception.” Ch. 3, p. 56
Fig. 8 “Sets and Subsets of the Mind and the Brain.” Ch. 3, p. 75
Fig. 9 “Gabor patches.” Ch. 4, p. 89
Fig. 10 “Material and visual objects.” Ch. 4, p. 99
Fig. 11 “Chalmers’ content-NCCs.” Ch. 5, p. 115
Fig. 12 “Decomposition of a Visual Object.” Ch. 5, p. 127
Fig. 13 “Intentional Mechanisms Underlying a Visual Object.” Ch. 5, p. 128
Fig. 14 “Target Activity and Neural Confounds.” Ch. 5, p. 136
Fig. 15 “Content-NCCs: A Schema.” Ch. 5, p. 141
Fig. 16 “Felleman & Van Essen’s hierarchy of visual areas.” Ch. 5, p. 142
Fig. 17 “Buhrmann et al.’s Minimal Agent Model.” Ch. 6, p. 161
Fig. 18 “A Simple Visual Object.” Ch. 7, p. 187
Fig. 19 “Configuration and Emergent Properties.” Ch. 7, p. 205
Fig. 20 “Dennett’s Parrot-Tagging.” Ch. 7, p. 205
Fig. 21 “The Two Domains Revisited.” Ch. 8, p. 211
Fig. 22 “Model, Model Description, and Target.” Ch. 8, p. 215
Fig. 23 “Families of MoPs.” Ch. 8, p. 223
Fig. 24 “The Threefold Modeling Relation Applied.” Ch. 8, p. 224
Fig. 25 “A Tree Hierarchical Structure.” Ch. 8, p. 225
Fig. 26 “A Tree Representation of Two Visual Objects.” Ch. 8, p. 226
Fig. 27 “A Tree Representation of a Natural Scene.” Ch. 8, p. 226
Fig. 28 “Multistable Material Objects.” Ch. 8, p. 227

Tab. 1 “Dynamical Hypothesis and SMT’s theses.” Ch. 6, p. 151
Tab. 2 “Three Types of Nominalism.” Ch. 7, p. 175
Tab. 3 “The Varieties of Class Nominalism.” Ch. 7, p. 176
Tab. 4 “Conceptual and Geometrical Eidetics.” Ch. 8, p. 238



PREFACE

This work offers a systematic analysis of the concept of psychoneural isomorphism. Roughly, the concept says that between something “psychological” and something “neural” there is an isomorphism, i.e. an invertible function that completely maps the relational structure of one domain onto its image. The concept was put forward by Gestalt psychologists in the late 1920s, and its fiercest advocate was Wolfgang Köhler, who coined the term “psychophysical isomorphism.” Part of the motivation that led some psychologists to endorse the concept was its potential heuristic value in the search for the brain correlates of psychological phenomena. Today the concept is sometimes mentioned in debates about the neural correlates of our perceptual experience (cfr. Ch. 1). As I will show in this work, the concept has so far eluded a systematic characterization, such that it is unclear what is isomorphic to what, what kind of relational structures the “psychological” and the neural domains are, what kind of thesis psychoneural isomorphism is, and what role the concept may play in current debates in the philosophy of mind and cognitive science.

~

Some elucidations about the notation. This work is divided into eight Chapters. References to Chapters within this work are always capitalized and abbreviated, e.g. Ch. 1, Ch. 2, etc. References to chapters of other books are always written in full, e.g. Burge 2010, chapter 8. Each Chapter is divided into several sections. Reference to a section within the same Chapter is marked with the section number, preceded by the sign §, e.g. §1, §2, §3, etc. When reference is made to a specific section of another Chapter, the Chapter is given first and then the section, e.g. Ch. 2, §2. Each section is divided into a number of paragraphs, and sometimes sub-paragraphs. In this case, reference within the Chapter is noted e.g. §2.2 or §3.1 for paragraphs, and §2.2.2 or §3.1.2 for subparagraphs, whereas reference to a particular paragraph or subparagraph of another Chapter is marked as in the following examples: Ch. 3, §2.2 or Ch. 3, §2.2.3. When referring to multiple sections, the abbreviation §§ is adopted, e.g. Ch. 3, §§2-3.

Bibliographic references are given within the text in short form, and in full in the Bibliography at the end of this work. I have adopted the following convention (broadly following the Chicago Manual of Style, 16th Edition): a short reference gives the surname of the author and the publication date, e.g. Burge 2010, Clark 2000. To disambiguate between works of the same author published in the same year, a letter is added at the end of the publication year, proceeding in alphabetical order starting with the first work cited, e.g. Palmer 1999a, 1999b.

References to multiple works by the same author mention the surname of the author only once, with the publication years separated by commas, e.g. Chalmers 1996, 2004; references to multiple authors are separated by a semicolon, e.g. Chalmers 1996; Clark 1997. Page references are separated from the publication year by a comma, e.g. Bechtel 2008, p. 18. Full references are given in the footnotes only where I refer to works not listed in the Bibliography.

Parts of this work have already been published in journals and conference proceedings. The following list mentions the papers that have already been published and the Chapters to which they refer. Sometimes the following papers correspond almost entirely to whole Chapters (especially 4 and 6). In other cases, the published material has been re-used and adapted to fit in a Chapter.

[Chapters 1 and 8] “Fenomenologia naturalizzata nello studio dell’esperienza cosciente” Rivista di filosofia 107/1 (2016), pp. 27-48.

[Chapters 2 and 5] “Psychoneural Isomorphism and Content-NCCs” Gestalt Theory 38/2-3 (2016), pp. 177-190.

[Chapter 4] “Do We See Facts?” (under review)

[Chapter 5] “Manipulating the Contents of Consciousness” In Noelle, D. C., Dale, R., Warlaumont, A. S., Yoshimi, J., Matlock, T., Jennings, C. D., & Maglio, P. P. (Eds.). Proceedings of the 37th Annual Meeting of the Cognitive Science Society (pp. 2487-2492). Austin, TX: Cognitive Science Society, 2015.

[Chapter 6] “Sensorimotor Laws, Mechanisms, and Representations.” In Bello P., Guarini M., McShane M. & Scassellati B. (Eds.) Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 3038-3042). Austin TX: Cognitive Science Society, 2014.

[Chapter 6] “The Structure of Sensorimotor Explanation” Synthese (2017)

[Chapter 8] “Philosophy as a Simulation of Nature: Modeling Perceptual Content” under review


Acknowledgments

I have greatly benefited from discussions with various people during the last years of work on psychoneural isomorphism. I would like to thank the Barbara-Wengeler Stiftung for its generous financial support in the two years 2014-2016, and also the Ruhr-Universität Bochum, which supported me financially from late 2016 until the thesis submission in 2017. Proceeding in chronological order, I would like to thank Dr. Alexander Staudacher, one of my former teachers when I was a graduate student at the Humboldt-Universität zu Berlin. He has provided useful comments on my work over the years and supported my applications for a Barbara-Wengeler-Stiftung scholarship and for my research stay at the University of Cambridge. Many thanks also to Prof. Andreas Bartels from the Universität Bonn, who was my first supervisor before I moved to the Ruhr-Universität Bochum, where I eventually completed the PhD thesis. A big thank you to my supervisors, Prof. Marcin Miłkowski and Prof. Tobias Schlicht, who supported this work in various ways and provided many insightful and intelligent comments that dramatically improved its quality. I have also greatly benefited from comments from other people working at the Ruhr-Universität Bochum, in particular: Prof. Albert Newen, Krys Dolega, Judith Martens, Sabrina Coninx, Dr. Beate Krickel, Prof. Markus Werning, Elmarie Venter, and Luke Roelofs. I also greatly benefited from a research stay at the University of Cambridge, sponsored by Prof. Tim Crane, who kindly invited me to attend his weekly seminars on philosophy of mind and perception. The research stay in Cambridge eventually culminated in Ch. 4—one of the most important parts of this work. I presented a first draft of this Chapter in a talk I gave at Peterhouse, as part of the weekly meetings of Tim Crane’s project New Directions in the Study of the Mind. I greatly benefited from comments and discussions with Prof. Tim Crane, Prof. Craig French, Prof. Bence Nanay, Dr. Henry Taylor, and Alex Moran. Thanks also to Dr. Joseph Neisser for his comments on my paper “Do We See Facts?” and for a stimulating discussion on the problem of isomorphism in Helsinki, 2015.

Many parts of this work have been presented at several conferences. I would like to collectively thank the audiences and organizers of at least the following conferences: 12th Annual Meeting of the Nordic Society for Phenomenology, Helsinki 2014; 88th Joint Session of the Aristotelian Society and Mind Association, Cambridge 2014; 36th Meeting of the CogSci, Québec City 2014; Rudolf-Carnap Lecture, Bochum 2015; 19th Scientific GTA Convention, Parma 2015; 37th Meeting of the CogSci, Pasadena 2015; Mechanistic Integration and Unification in Cognitive Science, Warsaw 2016; The Human Mind Conference, Cambridge 2017. Thanks also to the Arts and Humanities Research Council (AHRC) for a generous bursary to attend the conference ‘Mental Representations—Naturalistic Approaches’ organized by Prof. Nicholas Shea in London, 2016; and thanks also to the Aristotelian Society for granting me a scholarship to attend the 88th Joint Session. Special thanks for their comments on or support of my work to Prof. Fiorenza Toccafondi, Prof. Vittorio Gallese, and Prof. Achille Varzi, who made me better aware of the importance of the problem of the metaphysics of properties.

Finally, I would like to thank my parents, who supported me in an extremely difficult time of my life that unfortunately occurred during the early stages of writing this PhD thesis. Many thanks to my friends, who either encouraged me, or were the accidental victims of my obsession with the core problems of this thesis, or with whom I discussed the mathematical aspects of my thesis. In particular: Francesco Altiero, Chiara Rita Napolitano, Mariapia Dell’Omo, Antonio Bellotta, Giulio Capriglione, Laura Pennisi, Davide Manna, Daniela Longobardi, Elena Benicchi, Gabriele Rasi, Simona Wanda Conzales, Alessandro Bramucci.


PART I

PSYCHONEURAL ISOMORPHISM

THE PROBLEM AND RESEARCH STRATEGY

1

THE PROBLEM OF PSYCHONEURAL ISOMORPHISM

When it comes to the task of theorizing about the relationship between what we experience and the underlying biological substrate or “neural correlate,” philosophers and scientists have sometimes mentioned the concept of “psychoneural isomorphism.” “Isomorphism” is a mathematical concept: it is a function or map that completely preserves the structure of a domain or object onto another domain or object (cfr. Ch. 2, §1). A “psychoneural” isomorphism is an isomorphism that holds between something “psychological” and something “neural.” Among the researchers who have mentioned or discussed this concept—from now on, “PI”—we can enumerate: Bridgeman (1983), Lehar (1999, 2003), Noë & Thompson (2004), O’Regan (1992, 2011), Palmer (1999a), Pessoa et al. (1998), Petitot (2008), Revonsuo (2000), and Thompson (2007). Yet, with the exception of Lehar (2003) (cfr. Ch. 2, §2.3), none of these researchers has provided a systematic analysis of PI. In this work, I set out to fill this gap and to shed light on the role of PI within contemporary research in the philosophy of perception and the cognitive sciences.
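
To fix ideas before the fuller treatment in Ch. 2, §1, the following is a minimal sketch of the standard definition of an isomorphism between relational structures; reading the psychological and the neural domains as such structures $\mathcal{A}$ and $\mathcal{B}$ is here only an illustrative assumption, not yet a result established by this work.

% Minimal sketch: isomorphism between two relational structures of the same signature.
Let $\mathcal{A} = \langle A, R_1, \ldots, R_n \rangle$ and $\mathcal{B} = \langle B, S_1, \ldots, S_n \rangle$ be relational structures of the same signature. A map $f \colon A \to B$ is an isomorphism iff (i) $f$ is a bijection (hence invertible), and (ii) for every $m$-ary relation $R_i$ and all $a_1, \ldots, a_m \in A$:
$$ R_i(a_1, \ldots, a_m) \iff S_i\bigl(f(a_1), \ldots, f(a_m)\bigr). $$

In this notation, a psychoneural isomorphism would be such an $f$ with $\mathcal{A}$ built over psychological items and $\mathcal{B}$ over neural items; which relations $R_i$ and $S_i$ are the relevant ones is precisely what remains to be clarified in the following chapters.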

In this Chapter, my objective is to justify the present investigation and clarify the nature of my contribution to our understanding of the relationship between perceptual content and its biological substrate. To this end, I critically outline two recent debates where the concept of PI would (allegedly) play an important role. The first debate (§1) touches on the issue of filling-in or “perceptual completion.” The second one (§2) touches on the issue of the naturalization of Phenomenology, and the problem of mapping phenomenal states onto the underlying neuronal states. The first two Sections help us determine the proper philosophical context of our problem, and hence also elucidate the contribution of this work to the current debates (§3).

A caveat is in order. In the next pages, I will often mention concepts such as “contents” (or “perceptual content”), “neural correlate,” and “consciousness” without providing any clear definition. This is not an arbitrary choice. As it will turn out, much of the present work consists precisely in a clarification of these concepts (on “content,” cfr. Ch. 3, 4, and 7; on “neural correlate,” cfr. Ch. 5). For now, the following rough definitions are given. With “perceptual content” I refer to the conditions of accuracy of perceptual experience, i.e. the conditions under which a perceptual state is an accurate representation of a represented object. A “neural correlate” should be understood as a particular area of the central nervous system that is somehow related to a given psychological phenomenon. Thus, the neural correlates of visual perceptual contents are the brain regions whose activity is somehow related to the subject’s visual perceptual accuracy. Finally, with “consciousness,” I refer to a subject’s experience: the state a subject is in when she is normally awake, in contrast with being unconscious (on “consciousness,” cfr. Ch. 3).

1. Filling-in, Cartesian Materialism, and Isomorphism

After the publication of Dennett’s Consciousness Explained (1991), there was a sudden spurt of interest among philosophers in the family of phenomena known as “filling-in” or “perceptual completion” (Pessoa et al. 1998). In these phenomena, the brain seems to fabricate our perceived reality, creating the illusion of a continuous and richly detailed experience. Within the debate on filling-in, the concept of PI has sometimes been mentioned, especially in relation to the problem of explaining filling-in phenomena. In a nutshell, the relation between PI and filling-in can be summarized as follows. What we consciously experience is not a mere passive registration of the physical stimuli, and it is well known that the structure of the received physical stimuli does not correspond to what we see (as shown by filling-in phenomena). Since the perceptual content seems to be a complete representation of the visual scene, some researchers may suggest that visual perceptual content must correspond—perhaps up to the point of an isomorphism—to some kind of neural activity that fills in the gaps in the physical information. Thus, according to some researchers, PI would be a necessary requirement of any successful explanation of percepts.

In order to lay bare the role of PI in the debate, I focus on Pessoa et al.’s (1998) study on filling-in. I first provide an overview and taxonomy of filling-in phenomena (§1.1). Later (§1.2), I introduce Dennett’s standpoint, since it plays a critical role in shaping the contribution of Pessoa and his collaborators. I finally (§1.3) dwell on Pessoa et al.’s account of filling-in and the role of PI, developing some critical remarks.

1.1 What is Filling-in?

The term “filling-in,” or “perceptual completion,” refers to several distinct perceptual phenomena that consist in the perception of features—such as colors and shapes, in the visual perceptual modality—although such features are not physically instantiated in the environment (Komatsu 2006, p. 220) (cfr. also Pessoa & De Weerd 2003). Cases of filling-in can be found in many perceptual modalities, but scientists have studied visual filling-in phenomena in some depth (Pessoa et al. 1998; Weil & Rees 2011). An example of non-visual filling-in is the familiar “proofreader effect”: to put it roughly, we sometimes do not notice mistakes and typos in a text because our minds tend to make “automatic corrections” in order to facilitate reading comprehension. Another interesting case of non-visual filling-in is the completion of acoustic information, especially parts of speech in contexts where noise would severely hamper communication (Warren 1970). What is remarkable about these phenomena is that they nicely illustrate the active role of the brain in constructing our perceptual experience of the world. In the next pages, I will dwell exclusively on visual filling-in phenomena.

There are several taxonomies of visual filling-in (Komatsu 2006; Myin & De Nul 2009; Pessoa et al. 1998; Weil & Rees 2011). Komatsu (2006, p. 221) categorizes these phenomena into three groups: (A) missing visual information; (B) cases of image stabilization on the retina or continuous identical visual input; (C) illusions. I will review them in this order.

(A). A paradigmatic example of this group is the case of the blind spot. In each human eye there is an area completely devoid of photoreceptors, owing to the optic nerve that connects the eye with the optic chiasm. This region lies very close to the foveal area and extends ca. 6° in length and ca. 4.5° in breadth (Churchland & Ramachandran 1993, p. 28). Since this region is devoid of photoreceptors, one would expect perceivers to experience a “blind spot” within their visual field, a “phenomenological blind spot.” Under normal conditions, we certainly do not notice anything suspicious, since we seemingly enjoy a rich and continuous visual phenomenology. That something interesting is going on, unbeknownst to us, is shown by a simple experiment that can reveal our visual blind spots, as shown by Fig. 1:

Fig. 1: How to detect the blind spot

If the observer looks at the black circle with the right eye closed from a distance of ca. 10-16 cm, after a few seconds she will notice the disappearance of the cross from the visual field. Something similar happens with scotomas. A scotoma is an area of the visual field that is blind due to some lesion or insult to the corresponding brain region. There are several different cases of scotoma; yet, if the damaged area is not too extended, some other brain areas may take over its functions and “complete” the visual field (Pessoa et al. 1998, p. 731; Ramachandran & Gregory 1991).

(B). An excellent example of filling-in due to stabilization is the “Troxler effect,” named after the Swiss physician Ignaz Troxler, who discovered it in 1804 (cfr. also Hamburger et al. 2006, pp. 1129-1138; Komatsu 2006, p. 221). When a subject stares at one specific object, without moving her eyes, she will notice after a few seconds that items in the periphery of her visual field disappear. The effect is soon dispelled as the subject moves her eyes, and items in the fringe of the visual field will “pop out” again. Hamburger et al. (2006) have also found evidence that, when subjects are shown figures with two or three colors in which one of the colors occupies a smaller surface, the Troxler effect leads observers to see some particular color overflowing its region and covering other colors. In this way, Hamburger et al. (2006) could bring evidence that some colors are filled-in more often—gray for example (ibid., third experiment)—whereas black is the strongest “inducer” (68.8% of the cases) and is never filled-in (ibid., pp. 1135-1136; cfr. also von der Heydt, Friedman & Zhou 2003).

(C). The third group of filling-in phenomena comprises visual illusions. Popular examples are the Neon Color Spreading—where bright colors seem to spread over a white background (Bressan et al. 1997; Pessoa et al. 1998, pp. 730-731; Todorović 1987) (Fig. 2, left)—, the Craik-O’Brien-Cornsweet effect, and cases of modal and amodal completion. The Craik-O’Brien-Cornsweet effect is illustrated by a figure that looks to be divided into two vertical gray regions, where the left-hand side looks darker than the right-hand side (Fig. 2, right). In reality, the two sides of the figure have exactly the same luminance, except for an abrupt vertical discontinuity in the middle. Vision scientists take this phenomenon to show that our visual systems are more sensitive to abrupt discontinuities than to gradual ones (Komatsu 2006, p. 221).

Fig. 2: Neon color spreading (left), and the Craik-O’Brien-Cornsweet effect (right). The two sides of the figure have the same local luminance.

The most interesting instances of this group of filling-in phenomena, however, are cases of modal and amodal completion, and of boundary and featural completion, which are nicely illustrated by Kanizsa figures such as Fig. 3.

Fig. 3: A Kanizsa triangle.


Modal completion is shown by the fact that «the completed parts display the same type of attributes or “modes” […] as the rest of the figure» (Pessoa et al., 1998, p. 728); amodal completion refers «to the completion of an object that is not entirely visible because it is covered or occluded by something else» (ibid.). The observer apparently sees a white triangle in the foreground (modal completion), and another triangle in the background (amodal completion). Also, Fig. 3 illustrates a case of boundary completion (the boundaries of the triangle in the foreground), and featural completion (the illusory whiteness of the same triangle).

Much more could be added about filling-in phenomena and what they reveal about the visual system (for detailed analyses, cfr. Pessoa & De Weerd, 2003). The real pivot point for the debate is the question of whether the brain actually “completes” the missing information or simply “ignores” the absence of information in the afferent physical stimulus. Does the brain fill in the blind spot? Is the brain “painting” surfaces with colors from other regions of the visual field? Is the brain producing the visual impression of a white triangle in the foreground of a Kanizsa figure? It is in conjunction with these questions that filling-in phenomena are related to PI.

1.2 Dennett on Filling-in

In the eleventh chapter of Consciousness Explained, Dennett discusses the problem of filling-in as part of his attack against Cartesian Materialism, i.e. the position according to which there would be a central “stage” in the brain—a Cartesian theater—where representations converge to become conscious. Dennett’s contribution plays a critical role in shaping the subsequent debate on the problem of perceptual completion. As we will see in the next paragraph (§1.3), Pessoa et al. (1998) accept much of Dennett’s account, and diverge from it only in a few (albeit important) details. It is therefore first necessary to briefly outline Dennett’s standpoint on filling-in. Dennett advances several considerations, but it is possible to articulate them in two steps: first, he casts doubt on the authority of first-person reports; second, he focuses on the relationship between consciousness and the brain.

Concerning the first step, much of Dennett’s work is a sophisticated attempt to demystify the presumption that we have some sort of privileged access to our own conscious experience. The subject’s phenomenological reports or “phenomenological descriptions”—i.e. utterances concerning the subject’s own conscious experiences—should be analyzed from a critical standpoint. Dennett calls this stance “heterophenomenology” (1982; 1991, p. 72, p. 98; 2005, pp. 35-46). Heterophenomenology demands that we handle phenomenological data just like data from any other field of scientific inquiry, which require suitable interpretation. An analogy in this context might help. Heterophenomenology can be compared to the task of philologists who collate different versions of the same text in order to reconstruct a critical edition. It is only by comparing several sources, including, but not limited to, the subject’s reports, that researchers

can fathom out what is really going on in the subject’s mind. In this respect, the heterophenomenological stance denies the subject absolute authority about what is actually going on in her mind, whilst at the same time conferring «total, dictatorial authority over the account of how it seems to you, about what it is like to be you» (1991, p. 96; first emphasis added). Concerning the Kanizsa triangle, for instance, Dennett easily admits that for a perceiver it might seem as if there is a white triangle in the foreground, but this does not mean that a nonexistent white triangle is presented in the subject’s ghostly phenomenological field. In other words, phenomenological descriptions are an insufficient source to rely on, and little, if anything, can be inferred from them without also taking into account additional sources. So, for instance, cases of filling-in do not provide conclusive evidence that something in the brain must actively complete the “missing” information. This leads us to the second step.

Dennett articulates this step in three distinct considerations about the relation between perceptual content and underlying brain processes. First, he contends that the brain does not need to produce a “final version” of the perceptual content. Second, he urges us to carefully distinguish between vehicle and content of perception. Third, he maintains that it is an empirical question to understand what happens in the brain in conjunction with our perceptual experience.

Turning to the first point, Dennett’s main lesson in his book (1991) is that philosophers and scientists should balk at the idea that the brain must produce a “rich” and detailed version of the conscious content (cfr. Ch. 3, §2.2) and that, once this version has finally been assembled, it could be manifested in consciousness thanks to some particular brain center. Dennett calls this idea “Cartesian Materialism,” and the supposed seat of consciousness in the brain the “Cartesian Theater.” Instead, he proposes an alternative model, according to which the brain is described as a collection of highly competitive agents or “homunculi,” each of which produces its own “draft” or version of the perceptual content. According to Dennett, this Multiple Drafts or “Pandemonium” theory of consciousness would easily account for phenomena like the blind spot or cases of brain lesions in which the functions of the damaged areas are taken over by other brain areas. Consider the blind spot. According to Dennett, since we are “naturally designed” with blind spots on our retinas, there is no reason to postulate agents in the brain responsible for incoming information from the retinas’ blind spots. The absence of representation, as he reminds us, is not equivalent to the representation of absence. Perceivers simply do not notice the blind spots, just as in cases of anosognosia, i.e. a condition in which a subject cannot detect a disability or impairment (1991, p. 355).

The second and third claims are closely interwoven. Dennett draws a distinction between representational contents and their underlying vehicles (1978; 1991, chapter 4). The distinction can be cashed out in terms of what is represented, the content of a state, and the carrier of such content, the vehicle. For example, a painting might represent a landscape, the content being a particular configuration of the oil colors, the figures and lines, which can be more or less accurate with regard to the actual physical landscape being depicted (cfr. Ch. 3, §1.3). The vehicle of such a painting might be canvas and oil colors, the pencil’s graphite, and so on. Analogously, a digital picture realized by means of software may represent something, perhaps the very same landscape as the painting. Dennett maintains that from the representational content alone very little can be inferred about the underlying vehicle (1991, p. 68). Take the case of the digital picture: from the digital image composed of thousands of pixels very little can be inferred about how the computer generates it. If the representational content does not reveal much about the underlying vehicle, it follows that phenomenological descriptions, no matter how accurate, should be used very carefully in trying to infer or postulate the underlying agents in the brain that would be responsible for that content: «introspection provides us—the subject as well as the “outside” experimenter—only with the content of representation, not with features of the representational medium itself» (1991, p. 354). From this, he derives his third claim, that it is largely an empirical question to figure out what processes or brain structures lie behind the perceptual content (1991, pp. 353-354).[1]

[1] Dennett’s stance is very close to that expressed by Valentino Braitenberg: «But it is much more difficult to start from the outside and to try to guess internal structure just from the observation of behavior. It is actually impossible in theory to determine exactly what the hidden mechanism is without opening the box, since there are always many different mechanisms with identical behavior» (1984, p. 20).

Let us pause to consolidate. On the one hand, we should not bestow an absolute authority on our phenomenological descriptions. Rather, phenomenological descriptions are but one important, yet far from infallible, resource we can rely on to direct our research about how the brain generates the perceptual content. From the fact that we apparently see a richly detailed visual scene it does not follow that the brain is literally reproducing all the details of such a scene (cfr. Ch. 3, §2.2). On the other hand, Dennett suggests that there is no such thing as a single locus in the brain that realizes (or constitutes) our conscious experience. How the brain realizes our conscious experience is a matter of empirical research, but Dennett clearly favors a multiple drafts model over the simplistic myth of a single locus, which he calls the “Cartesian theater.” Since there is no single locus where representations converge only to be presented, as if on an evanescent psychological scene, for the amusement of an internal observer, there is also no need to suppose that the brain literally “completes” the representations.

1 Dennett’s stance is very close to that expressed by Valentino Braitenberg: «But it is much more difficult to start from the outside and to try to guess internal structure just from the observation of behavior. It is actually impossible in theory to determine exactly what the hidden mechanism is without opening the box, since there are always many different mechanisms with identical behavior» (1984, p. 20).

What I have just described is but a sketch of Dennett’s framework, within which his considerations about filling-in ought to be embedded. It is striking to observe that Dennett holds an ambiguous stance towards filling-in phenomena. On the one hand, he explicitly suggests that the very idea of “filling-in” is but a leftover of Cartesian materialism: «This idea of filling in is common in the thinking of even sophisticated theorists, and it is a dead giveaway of vestigial Cartesian materialism» (1991, p. 344). The reasons that lead Dennett to this first conclusion are rooted in the framework sketched out above: there is no need to think of perceptual content in terms of richly detailed representations that must be completed in order to appear on the stage of a Cartesian theater. Talk about “filling-in” would just be a metaphor that might easily lead us into thinking that there is something that needs to be completed, when in fact something is being ignored. Dennett’s positive interpretation of filling-in is that the brain “finds out” or “judges” that certain features are present, without the brain having to fill in any internally generated representation. On the other hand, Dennett is also of the opinion, as we have seen, that it is a matter of empirical investigation to find out whether the brain actually “completes” the missing information. This ambiguity becomes apparent in the following passages:

…it might turn out that somewhere in the brain there is a roughly continuous representation of colored regions […] This is an empirical possibility. We could devise experiments to confirm or disconfirm it. (1991, p. 353).

Now, is it possible that the brain takes one of its high-resolution foveal views […]? […] I suppose it is possible in principle, but the brain almost certainly does not go to the trouble of doing that filling in! (ibid., pp. 354-355; first emphasis added) (cfr. also Dennett 1992, pp. 42-43).

Hence, Dennett claims that talk about filling-in is but a relic of vestigial Cartesian materialism, for there is no such thing as filling-in; but at the same time, he urges caution in drawing conclusions about how the neural system behaves in cases of filling-in, for the brain might actually complete the representational contents. Ultimately, only experiments, and not armchair reflection, will tell us whether there actually is a neural filling-in. This ambiguous stance toward filling-in shows up again in Pessoa et al.’s (1998) considerations about PI and filling-in.

1.3 Rejecting Analytic Isomorphism

We have seen that filling-in phenomena show that the structure of (visual) appearances does not correspond to the physical stimuli: we do not receive information from the blind spot; bands appear to be of different shades of gray in the Craik-O’Brien-Cornsweet effect, although they actually have the same luminance; and we seem to see a white triangle in the foreground in Fig. 3. This leads us to the question: Does the brain actively represent this information or ignore it? Consider the case of the blind spot: Does the brain actively fill in the absent information from the blind region of the retina? If a positive answer is given, we are then led to a further question: Must the neural activity be isomorphic to the structure of the percept? Some vision scientists respond in the affirmative.

Whatever happens at the neural level, what is at stake in the debate about filling-in is nothing less than the «proper form of explanation in cognitive neuroscience» (Pessoa et al. 1998, p. 726; emphasis added). Suppose one assumes that there is a brain region that forms the immediate substrate of perceptual content, and that such a region must reflect the phenomenological discontinuities. One would then suppose that the proper explanation of a phenomenon such as the Craik-O’Brien-Cornsweet effect necessarily involves a neural discontinuity corresponding to the phenomenological difference in brightness: «the brain takes the local edge information and uses it to fill in the two adjacent regions so that the region with the luminance peak (left) becomes brighter than the region with the luminance trough (right)» (ibid., p. 726) (see Fig. 2). Pessoa et al. call such a thesis «analytic isomorphism». Analytic isomorphism is but one form of PI, and it results from the conjunction of the following theses:

(T1): Perceptual contents must have (a) neural correlate(s).
(T2): Perceptual contents are realized in a specific brain region.
(T3): Perceptual contents have the same structure as the underlying neural correlates.

Let’s call T1 the “neural correlate thesis;” T2 the “bridge locus thesis;” and T3 the “PI thesis.” T1-3 are conceptually distinct. Most scientists today accept some version of T1, but T1 does not entail T2, and whether it entails T3 remains an open question that will be examined in this work. Consider first T2. Some researchers believe that there must be one place in the brain that forms the immediate or direct substrate of perceptual experience. This doctrine goes under the name of “bridge locus.” A concise definition of this concept has been put forward by Teller & Pugh: «there exists a set of neurons with visual system input, whose activities form the immediate substrate of visual perception. We single out this one particular neural stage, with a name: the bridge locus» (1983, p. 581; cited in Teller 1984, p. 1235). The very idea of a bridge locus can be articulated in different ways. Teller and Pugh, for example, suggest that this locus might be a neural “stage,” assuming a hierarchical model of neural activity. Such a stage could be a level of computation, viz. of information processing. Another way to interpret the bridge locus thesis is to state that there is one anatomical place in the brain whose function is to realize or constitute the perceptual content. The bridge locus thesis (T2) entails the neural correlate thesis (T1): if the perceptual content is realized in a specific brain region, then the perceptual content has a neural correlate.
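For ease of reference, the logical relations among these theses, as just stated and as they will be used again below, can be summarized in compact propositional form. This is a mere notational restatement of the claims in this Section, where AI abbreviates analytic isomorphism:

\[
\mathrm{AI} := T_1 \wedge T_2 \wedge T_3, \qquad T_2 \rightarrow T_1, \qquad \neg T_2 \rightarrow \neg\mathrm{AI},
\]

while T1 does not entail T2, and whether T1 entails T3 is left open. The second implication is what will license the inference, discussed below, from the rejection of T2 to the rejection of analytic isomorphism as a whole.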

Consider now the “PI thesis.” According to some researchers (e.g. Fry 1948), a proper explanation of filling-in phenomena requires that, in addition to T1, there must be an «identity of shapes of spatial distributions of percepts and the underlying neural activities» (Todorović 1987, p. 548), i.e. there must be an isomorphism. In other words, PI would be a necessary explanatory requirement. In order to bring this claim into clearer view, we can focus on a specific case study: filling-in of colors. Filling-in of colors occurs in each of the three groups of filling-in phenomena examined in §1.1. We visually perceive uniform colors in the part of the visual field corresponding to the blind spot or to lesion-induced scotomas (group A). It also occurs with stabilized stimuli, as in the artificial stabilization of the retinal image. Artificial stabilization is achieved by mounting a small projector with a suction cap on the eye (e.g. Yarbus 1957, 1967, chapter 1; Tatler et al. 2010). In this case, the stabilized stimulus gradually fades away and assumes the color of the surrounding region (group B). Finally, another case of filling-in of colors occurs in the neon-color-spreading illusion, where we have the visual impression of a bluish patch of color spreading in the middle of the figure (see Fig. 2 left) (group C). Some researchers believe that perception is based on an image-like representation held in a two-dimensional array of neurons in which «color signals spread in all directions except across borders formed by contour activity» (Von der Heydt, Friedman, Zhou 2003, p. 107; Cohen & Grossberg 1984). These theories assume pointwise representations of visual information, where the activity of each element of the neural array represents either color or contour for one location of the visual field (Von der Heydt, Friedman, Zhou 2003, p. 107; Weil & Rees 2011, p. 41). In other words, these theories postulate a structural correspondence (an isomorphism) between the percept and the neural array. Isomorphic theories of color filling-in assume that color is represented by the activity of cells whose receptive fields point at the surface, but which receive additional activation through horizontal connections. This is illustrated in Fig. 4:

Fig. 4: A (left), perception of the surface color results from the activity of the cells whose receptive fields point at the surface. A is disproven by filling-in phenomena such as the blind spot. B (right) is a representation of isomorphic color filling-in theories. Color features in this case are neurally represented via horizontal connections (from Von der Heydt, Friedman, Zhou 2003, pp. 108-109).

The PI thesis has courted controversy. Some vision scientists think that only T1-2 are necessary philosophical assumptions about the nature of the neural correlates of perceptual content, but deny that T3 is required. Ratliff & Sirovich (1978) take this stance:

The neural activity which underlies appearance must reach a final stage eventually. It may well be that marked neural activity adjacent to the edges […] is, at some level of the visual system, that final stage and is itself the sought-for end process. Logically nothing more is required. Nevertheless, we cannot by any reasoning eliminate a priori some higher-order stage or filling-in process […] But parsimony demands that any such additional stage or process be considered only if neurophysiological evidence for it should appear. (p. 847)

Ratliff & Sirovich deny that any explanation of filling-in phenomena requires a neural-perceptual isomorphism, joining the ranks of researchers who espouse non-isomorphistic explanations of filling-in phenomena like the Craik-O’Brien-Cornsweet effect (Bridgeman 1983; Foster 1983; Laming 1983; for a review, cfr. Todorović 1987, p. 548). More recently, Von der Heydt, Friedman & Zhou (2003) have considered an alternative, non-isomorphistic explanation of color filling-in, which requires a tentative association of features by low-level mechanisms (the symbolic filling-in theory).

To deny T3 means to state that a proper explanation of a perceived phenomenological discontinuity does not require that it be “mirrored” by a discontinuity of the underlying neural activity. Ratliff and Sirovich’s standpoint is similar to Dennett’s criticism of filling-in. A shared critique is that explanations of the phenomenological discontinuities do not need to postulate an equivalent discontinuity in neural activity. Also, they both believe that any such correspondence cannot be ruled out a priori, but requires experimental validation. Dennett, as we know (§1.2), goes one step further, calling into question the very notion of a “final stage” that would form the immediate correlate of perceptual content. (I will return to this point below). As Todorović rightly observes, the point at stake touches on the issue of the nature of the relationship between neural activity and perceptual content. How should we understand this relationship? Todorović laments the lack of interest in this crucial issue («This relationship is crucial in physiological explanations of perceptual phenomena, but is seldom itself the focus of attention», p. 548), although the problem has a long and venerable history (cfr. Köhler 1929; Mach 1865; Müller 1896; Teller 1984; on the history of psychoneural isomorphism, see Ch. 2, §2). Yet, Todorović himself discusses the problem only briefly in his paper (pp. 548-549), siding with the isomorphistic approaches, and espousing both T2 and T3: «the logical consequence of the isomorphistic approach is that a neural activity distribution not isomorphic with the percept cannot be its ultimate neural foundation» (1987, p. 549; my emphasis). In this passage, the “neural foundation” is clearly understood along the lines of the bridge locus thesis (T2), as the neural location that makes a given content conscious. However T1-3 are interpreted, analytic isomorphism can be defined as the thesis according to which there is a single locus in the brain that is alone responsible for, and is isomorphic with, the perceptual content.

Todorović’s acceptance of what Pessoa and his collaborators call “analytic isomorphism” is clearly meant as an explanatory principle. This reading seems to find further support in the following passage:

If the question is, what is it about the neural substrate of vision that makes us see as we do, the only acceptable kind of answer is, we see X because elements of the substrate Y have the property Z or are in the state S (Teller 1990, p. 12; quoted in Pessoa et al. 1998, p. 728).

The patterns that hold between neural activities and visual phenomena would be codified by what Teller (1984) calls “linking propositions” (cfr. also Ch. 2, §2.3). Linking propositions, as the name suggests, are propositions linking statements about phenomenological states with statements about neural states2. Pessoa et al. (1998) clearly follow Todorović (1987, p. 548) in interpreting T3 as a particular instance of a specific family of linking propositions, the analogy family:

Φ looks like Ψ → Φ explains Ψ

The Greek letter Φ stands in for neural phenomena, whereas Ψ stands in for psychological phenomena. Pessoa et al. (1998, p. 728) contend that the arrow linking the two statements should not be read as the logical connective of material implication. Instead, it should be read as a heuristic:

…[It] is meant to guide the search for the major causal factors involved in a given perceptual phenomenon. Thus the term “explains” on the right-hand side is really too strong—the idea is that Φ is the major causal factor in the production of Ψ: “if psychophysical and physiological data can be manipulated in such a way that they can be plotted on meaningfully similar axes […] then the physiological phenomenon is a major causal factor in producing the psychophysical phenomenon” (Pessoa et al. 1998, p. 728; the quotation is drawn from Teller 1984, p. 1240).

As I have explained, T1 finds widespread consensus among researchers. Theses T2 and T3 are more problematic, and they do not entail one another, for certainly we can hold T3 without thereby espousing T2, and (perhaps) assume T2 without espousing T3. Taken together with T1, they form the concept of “analytic isomorphism.” But what exactly is the role of analytic isomorphism? As we have seen, on the one hand, Pessoa et al. (1998) take it to be a specific doctrine that shapes the structure of explanation of perceptual phenomena, in line with Todorović and Teller. On the other hand, subsuming analytic isomorphism under the family of linking propositions, they explicitly deny that such an isomorphism amounts to an “explanation,” and contend that it would instead be a useful heuristic principle for identifying the relevant causal factors. I will return to this ambiguity in a moment; but before doing that, I will first present Pessoa et al.’s rejection of analytic isomorphism.

2 Notice that in this Chapter I am not espousing any particular ontological standpoint concerning the ontological status of the underlying neural correlates: terms like states, events, or processes should therefore be taken with a grain of salt. I will return to this issue in Ch. 5.

In the face of analytic isomorphism, Pessoa et al. (1998) set out to show the following. Firstly, that analytic isomorphism is yet another manifestation of Dennett’s Cartesian theater, and as such, it must be rejected. Secondly, and contra Dennett, that there is plenty of evidence to show that filling-in is a real phenomenon. Of the three theses that form analytic isomorphism, T2 bears a striking resemblance to Dennett’s Cartesian theater: the thesis according to which consciousness “occurs” in one specific locus of the brain. Pessoa et al. (1998, p. 742) argue that T2 is unwarranted for at least three reasons:

(a) Because brain regions are not independent stages or modules, but interact reciprocally (Zeki & Shipp 1988). Moreover, there is ample scientific evidence showing that visual processing is highly interactive and context-dependent (Van Essen & DeYoe 1994).
(b) Because cells in visual areas are not responsive to a single kind of feature, but to many features (e.g. Martin 1988; Schiller 1996). Even at a larger scale than single cells, some more recent studies suggest that a strict compartmentalization paradigm is perhaps too simplistic (Grill-Spector & Malach 2004, p. 653; Malach 1994; for a review on the issue of cortical specialization, cfr. Kanwisher 2010). A good example is the controversy over the correct location of a color computation center. In the absence of a clear understanding of human color vision processing, Grill-Spector & Malach (2004, p. 654) suggest speaking of a «color-processing stream»—rather than a color-processing area—that begins in the retina and passes through V1, V2, and other areas until it reaches the V4/V8 complex.
(c) Dennett & Kinsbourne (1992) have shown how postulating a centralized state hinders, rather than facilitates, our understanding of temporal perception.

Since analytic isomorphism consists of the conjunction of T1-3 (T1 ∧ T2 ∧ T3), it logically follows that rejecting T2 means rejecting analytic isomorphism altogether. It is this particular aspect of Pessoa et al.’s account that is manifestly inherited from Dennett, together with his skepticism about the role of phenomenological descriptions. However, in contrast with Dennett, who regards filling-in phenomena with suspicion, as leftovers of Cartesian materialism, they contend that there is ample evidence supporting the existence of filling-in mechanisms in the brain (Pessoa et al. 1998, pp. 737-741).

We need not review the scientific literature discussing evidence for neural filling-in (cfr. Churchland & Ramachandran 1993; Matsumoto & Komatsu 2005; Pessoa & De Weerd 2003; Tong & Engel 2001; Weil & Rees 2011). It suffices here to draw attention to the fact that Pessoa et al. contend that, although there is evidence for neural filling-in, such phenomena do not entail any commitment to analytic isomorphism. However, as I said, the PI thesis (T3) is independent of T2, and hence it is still possible to defend some form of psychoneural isomorphism independently of analytic isomorphism. Although Pessoa et al. are aware of this, they neither discuss the role of T3 any further, nor explain how we should construe it. Indeed, the very existence of such an isomorphism is left as an open empirical question:

Whether there are either spatial/topographic or topological/functional neural-perceptual isomorphisms in any given case is an empirical question for cognitive neuroscience to decide (1998, p. 742).

Before turning to some general problems, I would like to draw the reader’s attention to the following ambiguities. Firstly, as we have seen, Dennett does not want to prejudge the question of filling-in, and asserts that whether the brain completes the missing information is a matter of empirical research. At the same time, he contends that the brain almost certainly does not fill in information, as thinking in this way would mean espousing Cartesian materialism. Secondly, Pessoa et al. (1998) hold that isomorphism belongs to the family of linking propositions, and that the problem at stake is really that of the structure of explanation in cognitive neuroscience. Yet, at the same time, they are inclined to think that isomorphism is but a heuristic principle for identifying the relevant causal factors that underlie perceptual phenomena. Whether there is an isomorphism, as distinct from analytic isomorphism, so they claim, is an empirical question.

It seems that Dennett’s standpoint about filling-in phenomena hinges on the problem of what philosophers of science call stabilization. Roughly, the notion of stabilization refers to (a) the processes and methods whereby scientists empirically identify a given phenomenon, and (b) gradually come to agree that the phenomenon is a stable and robust feature of the world, rather than an artifact produced by an instrument, a methodology, or, in our case, some erroneous theoretical assumptions (on the notion of stabilization, cfr. Feest 2011, p. 59). Within the present context, Dennett contends that phenomenological descriptions alone are insufficient to determine the robustness of the filling-in phenomenon (this is sense (b) of stabilization). To a rough approximation, Dennett’s stance can be reformulated as the need for multiple determination in the identification of a given phenomenon (cfr. Culp 1994; Hacking 1981, 1983; Wimsatt 2007, pp. 37-74; for a dissenting voice, cfr. Hudson 2014). Concerning instead the problem of the relation between percept and underlying neural states, Dennett’s ambiguous stance results from his lack of any clear account of what it means to explain mental and perceptual phenomena. In other words, Dennett does not answer the following question: What is the proper form of explanation of visual phenomena?

The same ambiguity, as we have seen, can be found in Pessoa et al. (1998). Although they rightly point out that the concept of a PI has a long history in the philosophy of psychology and neuroscience, they do not ultimately clarify the role of a PI in vision science. The reason, again, is that they have neither thrown light on the relationship between perceptual content and underlying neural correlates, nor have they provided any clarification of what it means to explain in cognitive neuroscience. In the end, no clear conclusion is reached about PI, as they have not clarified what is or might be its role in the context of explanation in vision science.

In conclusion, I have shown that the problem of PI emerges in relation to the question of the proper form of explanation of psychological (perceptual) phenomena. As such, PI is an aspect of the search for the neural correlates of perceptual content. It is possible to hold PI while at the same time denying analytic isomorphism. But if it is possible to hold PI without analytic isomorphism, what exactly is the role of PI, and what does it amount to? By now, we are left with a number of open questions that will be addressed in the next Chapters. For example:

− What exactly are the objects that stand in isomorphic relations?
− What is the relation between PI and explanation of perceptual phenomena?
− An isomorphism is a function or map that completely preserves the structure of one object onto another object, but what kind of structure is at stake in the present context? (See the toy illustration after this list.)
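To fix intuitions about the mathematical notion invoked in the last question, here is a minimal toy example of my own (it is not drawn from the vision-science literature discussed above, and it anticipates nothing beyond the standard set-theoretic notion rigorously defined in Ch. 2, §1.1):

\[
A = \{a_1, a_2, a_3\},\ R_A = \{(a_1, a_2), (a_2, a_3)\}; \qquad B = \{b_1, b_2, b_3\},\ R_B = \{(b_1, b_2), (b_2, b_3)\}.
\]

The map $f(a_i) = b_i$ is a bijection such that $(x, y) \in R_A$ if and only if $(f(x), f(y)) \in R_B$; hence $f$ is an isomorphism: every element and every relation of the first structure is exactly mirrored in the second. The open question flagged above is what, if anything, plays the roles of $A$, $B$, $R_A$, and $R_B$ when the relata are percepts and their neural correlates.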

It is not possible to fully understand PI without answering these questions.

2. Naturalizing Phenomenology and Isomorphism

In the previous Section, I have shown that the problem of PI emerges in relation to the search for the neural correlates of perceptual content. In this Section, I broach the issue of the naturalization of phenomenological descriptions and the problem of mapping perceptual states onto the cognitive system. In this debate, as in the previous one, the concept of PI seems to play a central, albeit obscure, role. Many researchers maintain that rigorous phenomenological descriptions might guide the search for the neural correlates (e.g. Flanagan 1992; Horst 2005; Petitot et al. 1999; Thompson 2007; Varela et al. 1991; Varela & Shear 1999; Vernazzani 2016a). The advantages of such phenomenological descriptions in the search for the neural correlates are, however, dubious in the absence of a clear understanding of how we should map the phenomenal states onto the neural ones. Roy et al. (1999) have proposed a solution, in the preface to the book Naturalizing Phenomenology, that invokes the concept of PI.

I first describe the project of a naturalized Phenomenology in §2.1. I then move on to consider the problem of “matching” in the search for the neural correlates of consciousness (§2.2), and finally discuss a few instances of the concept of PI in the recent literature on the neural correlates of consciousness (§2.3).

2.1 Naturalizing Phenomenology

Some proponents of phenomenological approaches to the study of the mind contend that adequate phenomenological descriptions are a necessary complement to third-personal investigations of the mind. Phenomenological descriptions are not equivalent to naïve phenomenal reports uttered by untrained perceivers. These researchers think that in order to obtain accurate and helpful descriptions we need to rely on rigorous methods, which I will call “phenomenological methods” (Vernazzani 2016a, p. 28). Phenomenological methods are extremely heterogeneous, as they encompass Buddhist meditation techniques, introspectionist methods, and Husserlian Phenomenology (Varela & Shear 1999). Yet, even the most accurate phenomenological descriptions, if they are to guide our search for the underlying neural correlates, must be “naturalized.” Following Petitot et al. (1999), the problem of the naturalization of phenomenological methods will here be discussed only in relation to Husserlian Phenomenology. To prevent any possible confusion, I refer to the Husserlian method with a capital letter, Phenomenology, to distinguish it from other uses of the term.

There seemed to be two main motivations at the very heart of the phenomenological approaches (Vernazzani 2016a). The first motivation was the heuristic role of phenomenological descriptions. In the absence of any third-personal method that could show the incontrovertible presence of conscious experience in a subject without relying on phenomenological reports, and without the means to exactly predict the texture and structure of a subject’s experience by merely observing the brain’s activity, rigorous phenomenological descriptions could provide a useful guide to neuroscience research (e.g. Gallagher 1997; Varela 1997). A nice illustration of this problem is provided by a famous case study discussed by Owen et al. (2006, 2007). The researchers studied a patient, a girl in a vegetative state following a car accident, in order to understand whether she was consciously aware of external stimuli. In the course of two experimental sessions, the experimenter first read aloud a few sentences, and then asked the patient to imagine performing some motor acts, like playing tennis, or moving through her own house following a specific path. The patient’s neural activity was recorded by means of fMRI, and later compared with statistical parametric maps of some control subjects’ neural activity. The comparison showed that the patient’s neural activity was indistinguishable from that of the control subjects. This could suggest that the subject in a vegetative state was, in some way, conscious of the external stimuli. However, as Naccache (2006) pointed out, from this outcome nothing can be inferred about the patient’s conscious activity3. At the present stage (Dehaene & Naccache 2001), the search for the neural correlates of consciousness must still rely on verbal or behavioral phenomenological reports.

3 It is noteworthy that recent developments in the neurosciences have now devised new methods to study conscious activity from a third-personal viewpoint. For example, Jack Gallant and collaborators are now able to decode simple semantic contents from brain activity (e.g. Nishimoto et al. 2011; Huth et al. 2016). For more recent developments on the neural correlates of different levels of consciousness, cfr. Boly et al. (2013). These new studies do not undermine the importance of verbal or behavioral reports, but they certainly weaken Naccache’s statement that nothing can be inferred from brain activity alone.

Relatedly, it is sometimes argued that phenomenological descriptions might be helpful in theory construction as well as in theory confirmation (Roy et al. 1999, p. 12). The construction of theories about the neural activity that allegedly explains our conscious experience must take descriptions of such an experience into account. In this way, we will be able to develop theories of specific mental phenomena according to reciprocal constraints (Varela 1997). A similar idea was at the heart of Owen Flanagan’s “natural method.” The method consists in balancing different perspectives in the construction of theories about conscious phenomena:

Tactically, what I have in mind is this. Start by treating three different lines of analysis with equal respect. Give phenomenology [i.e. conscious experience] its due. Listen carefully to what individuals have to say about how things seem. Also, let the psychologists and cognitive scientists have their say. Listen carefully to their descriptions about how mental life works and what jobs consciousness has, if any, in its overall economy. Finally, listen carefully to what the neuroscientists say about how conscious mental events of different sorts are realized, and examine the fit between their stories and the phenomenological and psychological stories.

The object of the natural method is to see whether and to what extent the three stories can be rendered coherent, meshed, and brought into reflective equilibrium. […] As theory develops, analyses at each level are subject to refinement, revision, or rejection. (Flanagan 1992, p. 11).

Enter the problem of naturalization. Roy et al. (1999, p. 13) claim that we cannot make use of phenomenological descriptions in the absence of an explanatory link between the phenomenological level and the neural level. However, in spite of their programmatic declarations, Roy et al. do not discuss in their contribution the problem of what it would mean to scientifically explain consciousness, and no reference is made to the rich philosophical literature on scientific explanation. Instead, they believe that the central problem is that of naturalizing the phenomenological methods, and Phenomenology in particular. By “naturalizing” they mean: «integrated into an explanatory framework where every acceptable property is made continuous with the properties admitted by the natural sciences» (ibid., pp. 1-2). Roy et al. then go on to list some possible ways to achieve the naturalization of phenomenological descriptions: by reducing the phenomenological descriptions along the lines of an eliminativist standpoint (Churchland 1986); by adopting an “as if” strategy, according to which phenomenological descriptions refer to merely fictive entities postulated for pragmatic reasons; by enlarging our concept of nature to include also the “mental”; and finally by mutual constraining in theory construction. Of all these strategies, Roy et al. declare their preference for the last one.

Mutual constraining can be achieved in three different ways (Roy et al. 1999, pp. 66-68). The first is by means of linking propositions (cfr. §1.3), the second is by means of isomorphism, and the third by means of generative passages. The first was already discussed in the previous Section. Concerning PI, Roy et al. do not discuss in any detail what it would mean to say that phenomenological descriptions might be isomorphic with the underlying neural correlates. In fact, they quickly dismiss PI with the following words:

But this isomorphic option makes the implicit assumption of keeping disciplinary boundaries: the job of [P]henomenology is to provide descriptions relevant to first-person phenomena. The job of natural science is to provide explanatory accounts in the third person. Both approaches are joined by a shared logical and epistemic accountability. But is this really possible or even productive? Is this not another form of psycho-neural identity? (Roy et al. 1999, p. 68).

With these words, the “isomorphic” way to naturalization is quickly dismissed. The passage is somewhat obscure, as remarks pertaining to different philosophical areas are confusedly brought together. On the one hand, Roy et al. seem to criticize PI on the ground that it would preserve disciplinary boundaries between Phenomenology and the sciences. Why exactly this would be a problem is unclear. Talk about integrating phenomenological perspectives with scientific theories seems to be a typical instance of the problem of interfield integration (e.g. Vernazzani 2016a)—i.e. the problem of understanding how different scientific fields interact—but why disciplinary boundaries would be a problem in this context is far from clear. On the other hand, Roy et al. cast doubt on two further, distinct issues: the very coherence and usefulness of PI, and its alleged ontological implications. Concerning the former point, asking whether PI is «possible» is just another way of asking what PI means, what its relata are, etc. (see the questions at the end of §1.3). Since Roy et al. do not address these questions, it is unclear what motivates their rejection of PI. Concerning the latter point, Roy et al. seem to think that PI entails some form of identity theory, i.e. the philosophical position according to which the mind is the brain. This contention is echoed in a later contribution by Antti Revonsuo who, discussing the problem of the mapping relation between consciousness and its neural correlates, states:

…there must be isomorphism between one specific level of organization in the brain and phenomenal consciousness, simply because these boil down to one and the same thing. (2000, p. 67)

In other words, Revonsuo thinks that, if consciousness is identical to some level of organization in the brain, PI must be true. However, in his contribution Revonsuo clarifies neither what the structure of «phenomenal consciousness» is, nor what the structure of the «level of organization in the brain» is. Furthermore, neither Revonsuo nor Roy et al. spell out what kind of identity theory would be implied by PI. Without a detailed characterization of these issues, PI remains a vague and confused concept. (I will return to the relationship between PI and the metaphysics of the mind in Ch. 2, §1.3).

After rejecting PI, Roy et al. turn to the third way to achieve mutual constraining: generative passages. Generative passages are described as the «passages» that allow the mutual constraints to be «operationally generative» (Roy et al. 1999, p. 68). What this exactly means is unclear, but they seem to suggest that both sides—the phenomenological and the neural one—could be abstractly described mathematically so as to belong to «both sides at the same time» (ibid.). Put in this way, generative passages closely resemble PI: there can only be an isomorphism if we mathematically describe the structures of the two domains, and the two domains turn out to have the same structure (cfr. also Bayne 2004). Whilst the link between the concepts of generative passages and PI remains obscure in Roy et al. (1999), the authors overtly prefer the former as the way to naturalize Phenomenology. What is clear, however, is that for these authors a central problem is that of finding a way to map phenomenological descriptions onto the underlying neural correlates. It is to this issue that I now turn.

2.2 Mapping the Neural Correlates of Consciousness

The idea of mapping perceptual contents onto their neural correlates is clearly expressed by David Chalmers in his classic contribution on the nature of the neural correlates (2000). In this paper, Chalmers argues that the perceptual content must correspond to, or “match” (the term is due to Noë & Thompson 2004), the neural correlate. Chalmers motivates this contention via a reference to some important studies in the neuroscience of consciousness.

In several works, Crick & Koch (1995, 1998) have argued that the neural activity of the primary visual cortex V1 cannot be the direct correlate of conscious visual perception, in virtue of a mismatch between conscious perception and the properties of the neurons’ receptive fields in V1. The mismatch at stake can be shown by means of a phenomenon like the Land effect, i.e. a case of partial color constancy, where the perceived color at one particular location is influenced by the wavelength of the light entering the eye from the surrounding region (Land & McCann 1971). Interestingly, studies on anesthetized monkeys have shown that neurons in region V4, but not in V1, exhibit the Land effect (Schein & Desimone 1990; Zeki 1983), thus suggesting that V1 cannot be the correlate of this effect, in virtue of a “mismatch.”

Another case that illustrates the matching relation is provided by the experiments devised by Gur & Snodderly (1997). In humans, alternating two isoluminant colors at a frequency above 10 Hz causes the perception of a single fused color. Yet, in spite of this perceived fusion, color-opponent cells in V1 of two alert macaque monkeys follow high-frequency flicker above heterochromatic fusion frequencies (Crick & Koch 1998, p. 102).

Besides these specific experiments, it is believed that a matching relation lies at the very heart of neuroscience research trying to uncover the functional specialization of some brain areas. One of the most popular cases that nicely illustrate the functional specialization of some brain areas is the syndrome of achromatopsia. Achromatopsia is a condition in which subjects lose the ability to see colors, although they still retain the ability to perceive visually and consciously. Experiments on subjects affected by achromatopsia provide indirect evidence for the role of areas V4 and V4α in color perception (Sacks et al. 1988; Zeki 1990)4. However, indirect evidence can be deceiving, for it does not conclusively establish the direct functional role of a brain area for a specific function. Direct evidence correlating the activity of V4 with color perception has been provided by Zeki et al. (1991), who, using color and gray stimulation, detected a significant change of activity only in the region of the lingual and fusiform gyri: the areas we call V4 (see also McKeefry & Zeki, 1997).

These experiments suggest that this “matching” is a relation of co-occurrence between a psychological or phenomenological effect and a corresponding neural occurrence. This co-occurrence relation raises several questions. Firstly, whether this matching relation justifies talk about an isomorphism. I tackle this issue in the next Section (§2.3). Secondly, we should spell out the nature of the “vertical” relation holding between perceptual contents and the neural correlates. Crick (1996, p. 485) maintains that the notion of “correlation” embedded within the very concept of “neural correlate of consciousness” would enable us to sidestep a number of metaphysical issues about the consciousness-brain relation. This might be a useful strategy, if we want to put aside some technicalities and focus instead on more pressing practical issues. Chalmers shares Crick’s standpoint, and states that the search for correlation «can be to a large extent theoretically neutral» (2000, p. 37). The first issue deserves a few more comments here; the second issue will be addressed in a later Chapter (Ch. 5, §§3-4).

2.3 From Matching to Isomorphism?

The correspondence, matching, or “co-occurrence” of perceptual content with neural activity is hardly a proof of an isomorphism. Some researchers, however, have taken a stance on whether there is a PI in this sense. We can identify two camps. The first camp is championed by Noë & Thompson (2004) and by Thompson (2007). These philosophers deny that there is an isomorphism between perceptual content and underlying neural correlates. Defenders of the second camp, like Petitot (2008), hold the opposite view, according to which there actually is a PI, which plays a central role within the project of the Naturalization of Phenomenology.

Noë & Thompson (2004) argue in their contribution that there is no matching relation (§2.2) between perceptual content and the underlying neural activity. The neural activity that is supposed to match the perceptual content would be the receptive fields of single neurons. In short, a neuron’s receptive field is the region of the visual field (more generally, of the sensory space) in which the presence of a stimulus will alter the neuron’s firing rate. The “matching” relation is then clearly interpreted as a form of isomorphism. Hence, they deny that there is any PI. Their argument is based on three different considerations (2004, p. 14). First, they contend that perceptual content exhibits structural coherence. This would mean that perceptual content has a peculiar structure described by figure-ground relations and other Gestalt phenomena (e.g. Bozzi 1989; Köhler 1929). Yet, a neuron’s receptive field certainly does not exhibit such structural coherence. Second, perceptual content is intrinsically experiential, or, to borrow Nagel’s (1974) phrase, there is something it is like to experience a perceptual content (cfr. Ch. 3, §2). But, so Noë & Thompson argue, there is nothing it is like to experience a receptive field content. Third, perceptual content would be active and attentional. Perceptual content would be produced for the purpose of action and the exploration of the environment. The process of exploration would be crucially shaped by the role of attention. This would be clearly shown by examples of occluded objects. Take the example of a car seen from behind a fence. The car does not appear “complete,” i.e. the subject does not see the car in its entirety because it is occluded by another object. Yet, there is a sense in which we see the “whole” car, whose presence is merely attentional (e.g. Kanizsa & Gerbino 1982). Again, a neuron’s receptive field does not exhibit any of these properties.

4 Similarly, Lashley argued that visual mechanisms do not extend beyond the striate cortex because lesions in the prestriate region of the monkey «has not been found to produce any disturbances in sensory or perceptual organization» (Lashley 1948, quoted in Mishkin, Ungerleider, Macko 1983, p. 199). As we now know (Grill-Spector & Malach 2004), Lashley was wrong: the visual system extends far beyond the striate cortex.

The lesson drawn from these considerations is that there is no matching or isomorphic relation between perceptual, experienced content and its underlying neural correlates. On the basis of these arguments, Thompson (2007, p. 350) suggests that the very idea that «neural systems described neurophysiologically could match conscious states and their content» is simply a category mistake. The three arguments presented above, from structural coherence, from experience, and from the perceptual content’s active and attentional character, would show that neural states and perceptual contents are different in kind. Thompson furthermore adds (ibidem) three more features that are common among conscious states but not neural states: they are intentional (or «world-presenting», i.e. they are directed at something in the world, as when perceptual states present some particular item or feature of the world, cfr. Ch. 3, §1.3), they are holistic (they are constituted by interrelated perceptions, intentions, emotions, and actions), and they are intransitively self-aware (they have a non-reflective subjective character). Still, in order to preserve some form of relation between perceptual content and neural activity, Noë & Thompson (2004, p. 15) argue that there is some form of content «agreement» (cfr. Thompson 2007, pp. 357-358). What this “content agreement” would be, however, is unfortunately left hanging in the air.

There are several highly controversial claims in Noë & Thompson (2004) (cfr. Ch. 8, §3.1). One in particular is the claim that the matching or isomorphic relation would hold between perceptual content and single neurons’ receptive fields. This criticism is taken up by Jean Petitot (especially in his 2008; cfr. also his 1992-1993; 1994; 1999; and Vernazzani 2016a) (cfr. also Ch. 8, §3). The French mathematician and philosopher argued that, from the fact that receptive fields of single cells cannot be isomorphic with perceptual content, it does not follow that there is no PI. On the contrary, he maintains that it can be mathematically shown that there is an isomorphism holding between perceptual content and a macroscopic level of neural activity, i.e. if we take into consideration the activity of populations of neurons, rather than single cells. Petitot’s is a fascinating project that cannot be described in detail here. The principal claim is that accurate phenomenological descriptions produced by the conceptual tools of Husserlian Phenomenology represent the conceptual counterparts of geometrical descriptions that articulate in mathematical terms the neurophysiology of the functional architectures (2008, p. 396). Using sophisticated mathematical models, Petitot argues that «the agreement between the emergent geometric (morphological) macro-level M […] and the phenomenal experience E […] is extremely strong, much stronger than a mere correlation. It is even the strongest possible form of content matching since, in the limit, it is an isomorphism» (p. 367). Here, M represents the geometrical morphology of complex populations of neurons that would emerge from the micro-level neural physics of single neurons N. M is then shown to be isomorphic with the “phenomenal space” E, see Fig. 5:

Fig. 5: The relations between phenomenal space, micro neural physics, and global emerging geometry according to Petitot (2008, p. 370).

According to Petitot, the emergence of M upon N is not ontological: «The emergent morphologies are geometrical idealities and consequently possess no ontological content of their own» (p. 372). What makes the problem of perceptual content so mysterious—the question of how the neural activities of single cells give rise to our conscious perceptual experience—would be the misguided attempt to deduce E from N. This wrong inference would also be the mistake made by Noë & Thompson, who were not able to identify any isomorphism between E and N. Petitot believes that the structure of E can be nomologically deduced («peut être nomologiquement déduite») from the global geometry describing the dynamics of macro-populations of neurons. The identified isomorphism between E and M would hold in particular between the “pure immanent intuitions” of E and the mathematical idealization of M. From this, Petitot concludes that, although there is no “hard problem” (Chalmers 1996) of explaining the emergence of conscious perceptual content from M, there is a hard problem of explaining the relation between the geometrical space of M and the phenomenal space of E (pp. 370-371).
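Schematically, and purely as a compact restatement of the relations just described and depicted in Fig. 5 (the notation below is mine, not Petitot’s):

\[
N \ \xrightarrow{\ \text{emergence (non-ontological)}\ }\ M, \qquad M \cong E,
\]

where the structure of E is held to be nomologically deducible from M, though not directly from N.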


Petitot finally concludes that the isomorphism between M and E—a form of PI—warrants a specific metaphysical conclusion about the very nature of conscious perceptual experience: the double-aspect theory. The double-aspect theory is a form of identity theory according to which, although the mind is the brain, we have different kinds of access to it. In the words of Metzinger: «Scientifically describing [M] and phenomenally experiencing [E] are just two different ways of accessing one and the same underlying reality» (2000, p. 4). I will later show (Ch. 2, §1.3) that this metaphysical conclusion is unwarranted. Moreover, the double-aspect theory, on this formulation, seems also to express an epistemological thesis that does not simply follow from an identity statement.

Several issues are left unaddressed by Petitot. Firstly, although he has offered a mathematization of Husserlian Phenomenology, it is still unclear what kind of content is being mathematically described. Are they phenomenological descriptions? Or rather mathematical models of perceptual phenomena? Are the visual phenomena themselves isomorphic with the macro-neural activity? This is not very clear, and unfortunately, the issue is further complicated by its metaphysical interpretation. Why would PI lend support to a double-aspect theory? Two things can be isomorphic, and yet still be two different items (cfr. Ch. 2, §1.1). Furthermore, why would an isomorphism help us bridge the explanatory gap between consciousness and the brain? Also, although the morphodynamical approach is supposed to help us explain consciousness, Petitot does not clarify what it means to scientifically explain perceptual content. In light of these considerations, we must still regard the problem of PI as open.

3. The Scope and Aims of this Work

In this Chapter, I have shown that, although the concept of PI comes up in several debates, it remains an open problem. What is clear is that PI hinges on a problem that is of central interest to contemporary philosophy of mind and cognitive science, as well as to cognitive scientists, namely the search for the neural correlates of our perceptual experience. Given the centrality of this issue, it will be appropriate to keep this work within manageable limits. In other words, I will narrow down the scope of this analysis of PI to a specific issue. If the concept of PI shows any interesting implications, we might be justified in extending its application to other domains of research. Narrowing down the scope of this work is necessary but insufficient to tackle our issue. Another necessary step is that of articulating a research strategy that may guide us through the many issues that surround our concept.

I will discuss my research strategy in the next Chapter (Ch. 2). In the remainder of this Chapter, I first highlight the areas of philosophical investigation that may benefit from this work (§3.1); then, in compliance with the aforementioned recommendations, I circumscribe the scope of this work to visual objects (§3.2).


3.1 The Relevance of Psychoneural Isomorphism

As I have said, the concept of PI is closely connected with the problem of the relation between conscious perceptual experience and its neural correlates. In this sense, in an ideal philosophical geography, we can place PI at the crossroads of the following questions: What are the neural correlates of consciousness? How are they related to perceptual content? How can we map perceptual content onto the neural correlates?

We have also seen that none of the previously examined accounts has clarified in any way what is meant by “psychoneural isomorphism,” merely mentioning the concept, or quickly dismissing it without any discussion. Pessoa et al. (1998) have mentioned the concept of PI, distinguishing it from «analytic isomorphism» but without going into any detail. What is isomorphic to what, and what the relevance of this concept is for contemporary research, still requires clarification. Within the debate about the naturalization of Phenomenology, the concept is sometimes mentioned with contrasting evaluations. Whereas Roy et al. (1999) quickly dismiss it on unclear grounds, and Noë & Thompson (2004) have provided some reasons to reject an isomorphism between perceptual content and single cells’ receptive fields, Petitot (2008) and Hohwy & Frith (2004) have rightly observed that these studies do not rule out PI as an option in research on the neural correlates of consciousness. Petitot goes even one step further, claiming that morphodynamical models of neural activity can be shown to be isomorphic with perceptual content, under some level of description, and bestows a central role on PI in his mathematization of Phenomenology. Yet, again, it is neither clear what the structure of the phenomenological domain would be, nor what their neural correlates would be.

An analysis of PI thus fills a gap in the existing literature. We can identify several domains of philosophical research that might benefit from the present work. Firstly, this work bears on the nature of the neural correlates of consciousness (Ch. 5). As we have seen, none of the researchers discussed above has made any attempt to shed light on this issue. Secondly, the work bears also on the structure of the perceptual content. Since PI is a relation of structural identity, we will have to explain what the structure of perceptual content consists in. This will require a detour into the current controversy about the contents of consciousness (Ch. 3, 4, 7). Finally, since the concept of PI emerges in conjunction with the problem of understanding the role of phenomenological descriptions in guiding the search for the neural correlates, this work will also provide some insights into the problem of the structural relations that hold between perceptual content and its neural correlates.

3.2 Focusing on Visual Objects

In the last twenty years, we have witnessed an exponential increase of publications in philosophy of mind and perception. Whilst until a few decades ago it was still imaginable to study “the mind” and its faculties—from will to moods, from perceptual states to thoughts and beliefs (e.g. Armstrong 1968)—within one single philosophical book, today the degree of complexity and the amount of publications make it impossible to embrace such a broad variety of phenomena within one single work. Therefore, in order to keep this work within manageable limits, I will exclusively focus on visual perceptual content, and more specifically on visual objects. This choice requires a short explanation.

Firstly, the decision to restrict this work to visual objects is motivated by methodological reasons. Much of the current philosophy of perception focuses on visual perception, and vision science is a pioneering field of scientific research, integrating perspectives from diverse disciplines, such as perceptual psychology, neuropsychology, and neurophysiology, among others. Relying on these resources will be particularly helpful, as any study of PI must take into account two different domains (cfr. Ch. 2, §1): an “experienced” one of conscious visual perception, and a neural one pertaining to the neural correlates.

Secondly, despite the attention given to seeing over other sense modalities, philosophers of perception have relatively neglected the status of visual objects, i.e. the perceptual unities manifest in states of seeing, such as «ordinary specimens of dry-goods» (Austin 1962, p. 8) like books, cars, lamps, chairs, persons, cats, etc., but also perceptual ephemera like shadows and rainbows (Casati 2015). Some philosophers of perception are content to state that we see objects besides properties (e.g. Siegel 2010a), without commenting much on what such objects are. As I will later explain (Ch. 4), determining the structure of visual objects has paramount implications for several issues in philosophy of mind and perception. Thus, my choice to focus on visual objects is meant to fill in (sic!) a gap in the existing philosophical literature on perception.

Thirdly, and finally, given the previous motivations, if we can fruitfully analyze the concept of PI in the case of relatively well-understood phenomena such as visual objects and their underlying neural correlates, we will have made a case for PI. In other words, this work serves as a “test” for PI, assessing whether it can play any useful role that might eventually justify its application to other sense modalities and mental phenomena, given the right sort of context and information.

2

OUTLINING A RESEARCH STRATEGY

In the previous Chapter, I have shown that the problem of PI is an aspect of the quest for the neural correlates of the contents of consciousness, and that in this work I will only focus on visual objects. The purpose of this Chapter is to clarify the concept of PI and set the agenda for the next Chapters.

So far, I have only provided a rough definition of our concept at the outset of the previous Chapter: a function or map that completely preserves sets and relations. But if we want to fully understand what “psychoneural isomorphism” is, we must first introduce a rigorous definition of “isomorphism” that might also help us to determine the correct application of our concept. I define the concept of isomorphism in the first Section (§1). By means of this definition, I will be able to determine the necessary requirements that any investigation must satisfy in order to justify an appropriate use of the concept of “isomorphism”; I call these criteria the “Character of Isomorphism.” I also discuss the place of PI within the mind-body problem in contemporary philosophy of mind (§1.3), and show that PI is independent of the metaphysical relation that holds between mind and brain. This raises the following question: What, then, is the correct interpretation of PI? The answer will be found at the end of a historical reconstruction of our concept (§2). This historical sketch plays two roles: it will give historical depth to the present work, and it will bring to the fore the heuristic role of PI. In §3, I briefly summarize the results of §§1-2 and Ch. 1, thereby setting the goals of this work and outlining the agenda for the next Chapters.

1. The Character of Isomorphism

What is an isomorphism? In the Oxford Dictionary of English, under the entry “isomorphic,” we find: «corresponding or similar in form and relations»1. This ambiguously suggests two different meanings: “correspondence” and “similarity.” Merriam-Webster provides three definitions of isomorphism: 1. «similarity in organisms of different ancestry resulting from convergence»; 2. «similarity of crystalline form between chemical compounds»; 3. a «homomorphism that is one-to-one»2. Again, the first two definitions suggest a relation of “similarity,” whereas the third one suggests a relation of structural identity.

1 C. Soanes & A. Stevenson (Eds.) (2006, 2nd ed.). Oxford Dictionary of English. Lavis, TN: Oxford University Press.
2 Merriam-Webster, online edition, entry “Isomorphism”: http://www.merriam-webster.com/dictionary/isomorphism (retrieved 7.10.2016).

Etymologically, the word has a clear meaning: “identity” (iso) of “structure” or “form” (morphism). An isomorphism thus denotes some kind of structural identity. Besides more informal usages, “isomorphism” is a mathematical concept that is addressed by the third definition reported in the Merriam Webster. I will now elaborate on this definition.

1.1 Defining Isomorphism

An isomorphism is a bijective—i.e. both surjective and injective—morphism, a function that preserves sets and relations among elements (Weisstein 2009, p. 2027). A “morphism” or homomorphism in mathematics is a map between two objects or domains that partially preserves their structures. Let us assume two arbitrary domains A and B that are relational structures. A relational structure is a set A together with a family «Ri» of relations on A. Two relational structures A and B are said to be similar if they have the same type. (Here, I follow the convention of using boldface—e.g. A—to refer to the relational structure, and italics—e.g. A—to refer to the carrier set or domain.) A homomorphism can be defined as follows:

Let A and B be similar relational structures, with relations «Ri» and «Si» respectively. A homomorphism from A to B is any function m from A into B satisfying the following condition, for each i: If ⟨a1, …, an⟩ ∊ Ri, then ⟨m(a1), …, m(an)⟩ ∊ Si. (Dunn & Hardegree 2001, p. 15)

The relational structure B can be defined as a homomorphic image of A if there exists a homomorphism from A to B (onto B). The relation of structural similarity admits different degrees. To put things in less technical terms, we can say that a homomorphic image B can be more or less similar to A.

The concept of isomorphism is but a specific version of homomorphism. Thus, every isomorphism is also a homomorphism. An isomorphism, however, is a relation of structural identity between two objects or domains. As stated at the outset of this paragraph, an isomorphism is a bijective homomorphism: a one-to-one correspondence relation between elements of the two domains. A formal definition of “isomorphism” can now be given:

A homomorphism h from A to B is said to be an isomorphism from A to B (between A and B) iff it satisfies the following conditions: (1) h is one-one; (2) h is onto. (Dunn & Hardegree 2001, p. 17)3

3 Dunn & Hardegree define isomorphism by means of the simple material implication “if.” I have slightly modified their definition, adding a biconditional. I thank Christian Straßer for pointing this out to me.

We can clarify the definition with some examples. Consider the sequence of natural numbers ℕ0 = {0, 1, 2, 3, …}. This sequence is isomorphic to the sequence of annual time segments from 0 to infinity, i.e. there is a function from the set of annual time segments to ℕ0 that is homomorphic, one-one, and onto. Yet another example: two regular dice with six faces can also be isomorphic; it can be shown that there is a function that completely maps the structure of one die onto the other.

Steven Lehar (2003, pp. 383-385) distinguishes between different kinds of isomorphism: structural and functional. The former is a «literal isomorphism in the physical structure». This can be nicely illustrated with the example of the two dice. Both dice have six faces, and the numbers have the same arrangement, with number 1 being on the side opposite number 6, etc. The latter concept, functional isomorphism, refers to a system B that behaves as if it were physically isomorphic to A (Putnam 1973). Take again the dice example. Suppose John and Mary play dice. The chances of getting a result from 1 to 6 are the same with both dice (assuming that one of the two is not loaded!). However, suppose now that for some reason one of the two dice disappears, and that John has software installed on his computer that simulates the behavior of a die. All John has to do is press “Enter” on his keyboard and the program will yield a number from 1 to 6 with the same probability as an ordinary physical die. The computer simulation is a functional isomorph of the physical die.

These two examples highlight some important features of isomorphisms. The first feature is that an isomorphism does not require the numerical identity of the two objects, but only of their structures (cfr. §1.3). Two dice or two French card sets can be isomorphic and yet not be the same die or card set, i.e. two distinct things can be isomorphic. All that is required is that their relational structures A and B be identical. An upshot of this feature is that two domains or objects can be isomorphic and still possess different features, insofar as the relational structure is unaltered. Consider again the two dice: one die can be blue whilst the other is red, and there would still be an isomorphism. It is also conceptually possible to think of an isomorphism from A onto A (onto itself). In this case, we speak of an “automorphism.” It is, however, also possible to have a mere homomorphism from A to A. In this case, we talk about an “endomorphism” (cfr. §1.3).

Another important feature is that, once we recognize two domains as isomorphic, we can potentially exploit this relation to infer something about one of the two domains. Let there be two isomorphic domains A and B. Suppose also that, for whatever reason, the image B is observable, whereas the domain A and its relational structure A are not accessible for direct observation. In this case, a researcher can still get some insights into the structure of A by studying the structure of its image B. Things might of course be complicated by the need for axioms that govern the transformation rules, for example if B is a topological structural isomorphism of A (a topological isomorphism is also called a “homeomorphism”); hence, in principle, one also needs to know the transformation rules in order to infer something about A from B.
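
To make this inferential use of an isomorphism more concrete, here is a minimal sketch in Python (my own illustration, not part of the dissertation's formal apparatus). It assumes two small finite domains with a single binary relation and a known bijection h; the function name pull_back_relation and the toy domains are hypothetical.

```python
# A toy illustration of how an isomorphism can be exploited inferentially:
# the domain A is treated as unobservable, its image B as observable, and the
# known bijection h (the "transformation rule") lets us recover A's relation.

def pull_back_relation(R_B, h):
    """Recover the relation on the hidden domain A from the observed relation R_B on B,
    given an isomorphism h: A -> B represented as a dict."""
    h_inverse = {b: a for a, b in h.items()}  # invertible because h is bijective
    return {(h_inverse[x], h_inverse[y]) for (x, y) in R_B}

# Hypothetical example: hidden domain A = {'a', 'b', 'c'}, observed image B = {1, 2, 3}.
h = {'a': 1, 'b': 2, 'c': 3}                  # the (assumed known) isomorphism
R_B = {(1, 2), (2, 3), (1, 3)}                # observed ordering relation on B

print(pull_back_relation(R_B, h))             # {('a', 'b'), ('b', 'c'), ('a', 'c')}
```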

For terminological clarity: in mathematics an isomorphism is a special case of homomorphism, whereas the converse is not necessarily true. In the remainder, whenever I use the concept of “homomorphism,” it will always be in the sense of something less than complete structural equivalence. In conclusion, given the definition of isomorphism, we can now easily identify the requirements that must be met in order to properly speak about an isomorphism in general; these are:

(1) We must identify two domains, A and B.
(2) We must show that A and B contain elements and that they are relational structures A and B, and what kind of relational structure they are.
(3) We must identify a function f that completely maps the structure of A onto B.

The foregoing points (1)-(3) also chart the problems that will be addressed in the next Chapters (§3.2). We can call these joint requirements the Character of Isomorphism, as they directly bear on the appropriateness of the use of the concept of “isomorphism.” Notice that the “Character of Isomorphism” is quite independent of the specific domains examined. Hence A and B may be, for example, my left and right hands, two buildings or chairs, two algebraic systems, and so on. If the two domains satisfy all the given prerequisites, we are entitled to talk about an “isomorphism.”
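
For readers who find a computational rendering helpful, the following Python sketch (again my own illustration, with hypothetical names such as is_isomorphism and the dice structures) checks the three requirements of the Character of Isomorphism for small finite domains, each given as a carrier set plus one binary relation.

```python
# A minimal sketch of the "Character of Isomorphism" for finite structures:
# requirement (1) fixes two domains, (2) gives each a relation, (3) asks for a
# bijective, relation-preserving map between them.

def is_isomorphism(A, R_A, B, R_B, h):
    """Return True iff h (a dict) is an isomorphism from (A, R_A) onto (B, R_B)."""
    # h must be a total function from A into B...
    if set(h) != A or not set(h.values()) <= B:
        return False
    # ...that is one-one and onto (bijective)...
    if len(set(h.values())) != len(A) or set(h.values()) != B:
        return False
    # ...and that maps the relation R_A exactly onto R_B (structural identity).
    return {(h[x], h[y]) for (x, y) in R_A} == R_B

# Requirement (1): two domains -- here two ordinary dice with faces 1..6.
die = set(range(1, 7))
# Requirement (2): their relational structure -- "lies on the face opposite to".
opposite = {(x, 7 - x) for x in die}

# Requirement (3): a candidate map. The identity map works (an automorphism)...
print(is_isomorphism(die, opposite, die, opposite, {x: x for x in die}))   # True
# ...whereas a map collapsing every face onto 1 does not.
print(is_isomorphism(die, opposite, die, opposite, {x: 1 for x in die}))   # False
```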

1.2 What is meant by “psycho-neural”?

Let us now turn to the two isomorphic domains. As I said, the choice of the domains is somewhat arbitrary. In principle, we can just pick whatever we wish as domains and try to show whether—under the right sort of description—they are isomorphic. So I can state, for example, that my copy of the Norton edition of Melville’s Moby Dick is isomorphic to another copy still on sale in a New York bookshop, or that Duchamp’s Fountain (1917) is isomorphic to every other porcelain urinal produced by the same company. In our case, the adjective “psychoneural” provides some additional information: it suggests that there is an isomorphic relation between a psychological domain Ψ’s relational structure Ψ and a neural domain ϕ’s relational structure ϕ. (Notice that I will continue to adopt the convention of using boldface and italics, cfr. §1.1.) A terminological caveat: some researchers talk about a “psychological isomorphism” (e.g. Madden 1957). Madden’s “psychological isomorphism,” however, is a mere stylistic variant of our PI. In this work, I will continue to refer to “psychoneural isomorphism,” as it makes the reference to the two domains explicit.

Madden (1957) and Pribram (1984) mention different possible domains of PI. Pribram says that an isomorphism could hold (a) between the brain and experience, (b) between the brain and the environment, or (c) as a three-fold relation among them all. Similarly, Madden contends that an isomorphism can hold (a) between stimuli and sensory responses, (b) between receptor events and afferent neural processes, and (c) between neural events and phenomenal events, where “phenomenal” should be understood as “conscious.” The latter is but a form of what Fechner (1860) called innere Psychophysik (internal psychophysics): the relation between the neural and our experience (Erleben). This is also the kind of isomorphism envisaged by Wolfgang Köhler (cfr. §2.2).

As I showed in Ch. 1, the concept of PI is related to research on the neural correlates of the contents of consciousness. Hence, the proper subject area of this work will roughly be located within Fechner’s “internal psychophysics.” I will have nothing to say about a putative isomorphism between, say, the retinal image and primary visual cortex—i.e. whether the retinotopic map of V1 is isomorphic to the retina, or merely homomorphic—or between the environment and our visual representation of it. Let us call the first domain, that of conscious contents, the “phenomenological domain,” and the second the “neural domain” (cfr. Sekuler, who called them «perception» and «brain activity», 1966, p. 230). A graphic representation of PI is given in Fig. 6:

Fig. 6: PI holds between the phenomenological domain Ψ and a neural domain ϕ.

With the “Character of Isomorphism” I have identified the problems that must be solved in order to justify any talk about PI. These problems will be addressed in the next Chapters. Before I move on to the next section, we must first discuss the relation between PI and the metaphysics of the mind-body problem.

1.3 PI and the Metaphysics of the Mind-Body Problem

Having identified the two domains, which will be properly spelled out in the later Chapters, we can raise the following question: What is the relation between PI and the metaphysics of the mind-body problem? The centrality of this issue is such that it will help us to bring into sharper focus the role of PI within contemporary research. There are two possible ways to understand the relation of PI to the metaphysics of the mind-body problem. The first one is to examine the connection between PI and every single metaphysical option. This is the “long” way. A better and more effective way is to analyze a particularly instructive case and draw some lessons that can be generalized to other metaphysical options. This is the option I prefer.

My starting point is the following claim put forward by Antti Revonsuo (cfr. Ch. 1, §2.3):

…there must be isomorphism between one specific level of organization in the brain and phenomenal consciousness, simply because these boil down to one and the same thing. (2000, p. 67; emphases in the original).

What makes this quotation particularly interesting is that it postulates a relation of entailment between PI and some version of the identity theory, i.e. the position according to which the mind is the brain. No reference is here made to any kind of structure—nor, indeed, is any made in the rest of his chapter—thus Revonsuo’s claim plainly fails to meet requirement (2) of the Character of Isomorphism; but for my purposes we can gloss over this specific issue, since my interest now is to explore the relation between PI and the identity theory. Also, no reference or elucidation is to be found about what kind of identity theory is assumed in this context: type-identity—types of mental states are identical to types of brain states—or token-identity—token mental states are identical to token brain states. Again, we can skip this issue for now. In order to facilitate the analysis, we can break down Revonsuo’s claim into the following propositions:

P1: Phenomenal consciousness is identical with some level of organization in the brain. (Identity thesis).
P2: Phenomenal consciousness is isomorphic to some level of organization in the brain.

Phenomenal consciousness and a given level of organization in the brain are our two domains. The identity thesis formulated in P1 is a typical instance of metaphysical necessity involving theoretical identity statements such as «Water is H2O». According to Revonsuo, it is P1 that grounds proposition P2. This confers a modal character on the truth of P2 by suggesting that, necessarily («must»), if P1 is true then P2 is true as well:

◻(P1 → P2)

In the debate about the nature of modality, philosophers have explored different relations between the varieties of necessity. Some for example contend that mathematical and logical necessity, physical necessity, and other varieties all depend on metaphysical necessity (monism). Others argue that we should recognize a plurality of necessities without reducing them to one fundamental concept (pluralism) (the classic on metaphysical necessity is Kripke 1980; cfr. also Fine 1994, 2002; and Cameron 2010 for an overview). As I said, P1 is a metaphysical thesis,

whereas P2 asserts the existence of a mathematical function (§1.1). The point I want to stress is that it would be misleading to think that, in order to examine the correctness of Revonsuo’s claim, we should first try to settle the issue of the relationship between different kinds of modality, and more specifically, between metaphysical identity statements and mathematical concepts. Rather, my strategy is the following: I will examine the relation between metaphysical identity and P2, and show that not every function from a domain onto itself is an automorphism. If this is true, it follows that even identity statements do not exempt us from specifying the exact isomorphic function between a domain and itself. For expository reasons, we can abstract away from the specificity of the consciousness-brain relation. The claim’s structure is general enough to be paraphrased as follows: “If two domains A and B are identical (A=B), then they are necessarily isomorphic.”

What kind of identity is at stake in our case? A preliminary distinction can be drawn between qualitative and numerical identity (Noonan & Curtis 2014). Two entities are qualitatively identical in some respect if they share some property. For example, both Paul Nash’s «Totes Meer» (1940-41) and Max Ernst’s «L’Ange du Foyer» (1937) share the properties of being surrealist paintings, being realized in oil on canvas, etc. Extreme cases can be given of virtually indistinguishable entities. Talk about qualitative identity is of course complicated by our assumptions about the nature of properties (cfr. Allen 2016; Ch. 7). In the present context, however, we are assuming a stronger form of numerical identity. Numerical identity «requires absolute, or total, qualitative identity, and can only hold between a thing and itself» (Noonan & Curtis 2014).

Let us consider A and a function from A to itself. The question is: Is every function from A to itself an isomorphism just in virtue of the identity of domain and image? Intuitively, we would respond in the affirmative: for A is obviously qualitatively, and therefore structurally, identical to itself. However, this would be a mistake. The mistake results from confusing the identity relation A=A (and therefore qualitative identity) with an isomorphism. An isomorphism is a map or function (requirement 3 of the “Character of Isomorphism”) from a domain onto another domain, or from a domain onto itself4. It is perfectly legitimate to talk about a mapping from A to itself—i.e. from A to A—that is not an isomorphism. Indeed, as I have explained in §1.1, a homomorphic function from A to A (to itself) is called an “endomorphism.” Analogously, we can determine an isomorphic function from A onto A (onto itself); such a function is customarily called an “automorphism” (Cohn 1981, p. 49). Accordingly, what Revonsuo is claiming is that the consciousness-brain identity necessarily implies the truth of an automorphism (an isomorphism from A onto itself). We can now give P2 a more precise formulation:

4 Notice that although here I refer to functions, an isomorphism can also hold between algebraic systems, vector spaces, or categories in general.

P2*: There is a homomorphic function h from M (phenomenal consciousness) to B (some level of neural organization), where M=B, and h is one-one and onto, i.e. bijective.

Although every automorphism is, by definition, also an endomorphism—just as every isomorphism is by definition a homomorphism—the converse is not necessarily true: an endomorphism or homomorphism is not necessarily an automorphism or an isomorphism. To show this, it suffices to analyze an example. Consider a vector space V; an endomorphism from V to V is a linear map:

L: V → V

Now, an automorphism is an invertible endomorphism. However, if we assume dim V > 0, the endomorphism L: V → V, v ↦ 0 is not invertible, hence it is not an automorphism5. It follows that an endomorphism is not necessarily an automorphism. The same considerations apply in the case of P1 and P2 of Revonsuo’s formulation.
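
The point can also be checked numerically. The following sketch (my own, assuming V = R² and using matrix representations) shows that the zero map is a linear endomorphism but, having zero determinant, is not invertible and hence not an automorphism.

```python
# A small numerical check of the example above, under the assumption that V = R^2
# (any dim V > 0 would do). The zero map v -> 0 is linear, i.e. an endomorphism,
# yet it is not invertible, hence not an automorphism.
import numpy as np

zero_map = np.zeros((2, 2))         # matrix of L: V -> V, v -> 0
identity_map = np.eye(2)            # matrix of the identity automorphism

# A linear map on a finite-dimensional space is invertible iff its determinant is nonzero.
print(np.linalg.det(zero_map))      # 0.0 -> not invertible, not an automorphism
print(np.linalg.det(identity_map))  # 1.0 -> invertible, an automorphism
```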

It can be argued that cases of identity of the two domains, A=A, guarantee that there must be at least one isomorphism, i.e. a homomorphic and bijective function. However, the lesson that we can draw from these considerations is that even an identity thesis does not exempt us from specifying under which level of description, and between which structures, an isomorphism holds. The example of the vector space shows just this: although in that case there is only one vector space, not every function from V to itself is an automorphism. This is quite different from claims about qualitative identity. Failing to realize that an isomorphism is a function or map—as required by requirement 3 of the Character of Isomorphism—is just to misunderstand the very concept of isomorphism.

The foregoing discussion is particularly instructive, for it shows that PI is not a trivial thesis whose truth can be decided by simply reducing it to the metaphysics of the mind6. Indeed, neither the identity of the isomorphic domains nor their distinctness prejudges the question of whether they might be isomorphic. Consider the case of any non-identity thesis, such as Cartesian or property dualism. Within a paradigmatic Cartesian scenario, we would describe the mind as a substance distinct from the brain or body. We would therefore have two non-numerically identical domains, A and B. Under the assumption that we can describe their relational structures, nothing decides, a priori, whether A and B are homomorphic or isomorphic. In §1.1 I discussed a few examples of distinct, yet isomorphic items. Two dice can be isomorphic and yet be numerically distinct, just as the sequence of natural numbers is isomorphic to the annual time segments. Talk about PI is orthogonal to the broader question of the metaphysics of consciousness, i.e. the problem of PI is distinct from the mind-body problem.

5 Thanks to Dr. Vincenzo DeMaio and Francesco Altiero for this particular example.
6 I am assuming here that the central ontological and metaphysical question concerning the nature of the mind centers on the reducibility of the mind to the body, or a part of it (e.g. the brain). This is of course not tantamount to saying that there are no other metaphysical problems.

Such a conclusion raises the following problem: if PI is independent of the mind-body problem—the question of whether the mind is the brain or not—what is the purpose of talking about PI in connection with perceptual content and its neural correlates? My answer, in short, will be that the relevance of PI is motivated only to the extent that the concept plays a heuristic role. I will show this by means of a short historical reconstruction of our concept.

2. A Short History of Psychoneural Isomorphism

The word “isomorphism” first appeared at the beginning of the 19th century. The very idea of an isomorphism was originally intimately connected with the work of Mitscherlich in crystallography and chemistry: isomorphs being «substances having the same crystal form but different compositions» (Melhado 1980; for an historical overview, cfr. Salvia 2013). In psychology, the concept of PI is inseparable from Gestalt psychology. The first occurrence of PI can be found in Wolfgang Köhler’s Gestalt Psychology (1929), but the fatherhood of our concept should probably be ascribed to Wertheimer (§2.2).

It is worth bearing in mind that the purpose of this work is systematic, not historical. Accordingly, this historical overview will deliberately be sketchy and incomplete. For the most part, I will rely on secondary literature on the history of psychology and of our concept (for the history of psychology, cfr.: Greenwood 2015; Legrenzi 2012; Smith 2013; Thomson 1972; Toccafondi 2000; on the history of PI, cfr: Lehar 1999; Luccio 2010; Luchins & Luchins 2015; Scheerer 1994).

2.1 Fechner, Mach, and Müller

The first printed occurrence of PI can be found in Köhler (1929). However, Köhler explicitly made reference to antecedent research that foreshadowed the concept of PI (1929, p. 58). In particular, he mentioned Hering’s assumption of parallelism and Müller’s psychophysical axioms. Furthermore, both Scheerer (1994) and Luccio (2010) in their historical accounts of PI mention Ernst Mach and Gustav Fechner as forerunners of PI. Indeed, the 19th-century psychophysicists played a crucial role in defining the philosophical background that was later to give rise to the concept of PI. In this paragraph, I will focus on three key figures: Fechner, Mach, and Müller.

Gustav Fechner is largely credited as one of the founding fathers of psychophysics. Trained as a scientist, he was later brought to studies on the relationship between the “psychological” and the “physical” by genuine philosophical and spiritual interests (Heidelberger 2003). In the course of

almost fifty years of work, Fechner articulated different views on the relation between the “physical” and the “psychological,” but they were all formulated within a form of psychophysical parallelism. Already in his dissertation Praemissae ad theoriam organismi generalem of 1823, Fechner stated:

Parallelismus strictus existit inter animam et corpus, ita ut ex uno, rite cognito, alterum construi possit. [A strict parallelism exists between soul and body, such that from the one, rightly known, the other can be constructed.] (quoted from Heidelberger 2000, p. 53)

In his works, Fechner never used the phrase “psychophysical parallelism,” whose authorship is still a matter of controversy among historians (on this topic, and its relevance in the development of the mind-body problem in the analytic tradition, cfr. Heidelberger 2002). He adopted instead the term “Identitätsansicht” (identity perspective), a term that betrays the influence of Schelling’s thought—and therefore, indirectly, Spinoza’s (cfr. Luccio 2010)—which he discovered through the lectures of Lorenz Oken, a follower of Schelling. The quoted passage does not offer much, but it provides some precious clues for the present historical Section. There, Fechner explicitly suggests that from a proper understanding («rite cognito») of the soul («anima») or of the body («corpus») one can construct the other. In other words, by attaining the right sort of knowledge about (in modern jargon) the mind or the body, one could in principle deduce the other. This thought constitutes the very heart of Fechner’s theory: the idea that one could map subjective sensations against objectively measured sensory stimuli (Smith 2013, p. 83). The philosophical foundation of this thought was a commitment to a form of parallelism according to which the soul and the body are but different perspectives or aspects of one and the same substance:

Es sind im Grunde nur dieselben Processe, die von der einen Seite als leiblich organische, von der anderen als geistige, psychische aufgefaßt werden können. Als leibliche Processe stellen sie sich Jemandem dar, der außerhalb dieser Processe selbst stehend, dieselben ansieht, oder aus Gesehenem unter Form des äußerlich Wahrnehmbaren erschließt, wie der Anatom, Physiolog, Physiker [es] thut. [At bottom, these are only the same processes, which can be grasped from the one side as bodily-organic and from the other as mental, psychical. As bodily processes they present themselves to someone who, standing outside these processes, looks at them, or infers them from what is seen in the form of the externally perceivable, as the anatomist, the physiologist, the physicist does.] (Fechner 1851; quoted from Heidelberger 2000).

The exact formulation of this parallelism changed in the course of Fechner’s productive career, finally reaching an “objective idealism” (objektiver Idealismus), according to which everything is ultimately spiritual in nature. Besides further developing the metaphysical aspects of his theory, Fechner also sought a mathematical formulation that could bridge soul and body, and therefore provide a way to express more precisely the intimate connection between the two “aspects” (Seiten) of the same substance. This culminated in the formulation of what is today known as the Weber-Fechner law, according to which the magnitude of a sensation grows as the logarithm of the physical stimulus, S = k log R + C (in his Elemente der Psychophysik, 1860).
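
The logarithmic form of the law can be illustrated with a short computation (my own sketch; the constants k and C below are arbitrary placeholders, not values Fechner proposed).

```python
# An illustrative computation of the Weber-Fechner law S = k log R + C as stated above.
# The constants are arbitrary; the point is only the logarithmic compression:
# doubling the stimulus R adds a constant increment to the sensation S.
import math

def sensation(R, k=1.0, C=0.0):
    """Sensation magnitude under the Weber-Fechner law (arbitrary units)."""
    return k * math.log(R) + C

for R in (1, 2, 4, 8, 16):
    print(R, round(sensation(R), 3))
```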


Fechner exerted a considerable influence on his contemporaries. With few exceptions—namely Helmholtz and his pupils, who adhered to a form of mind-body dualism—most psychologists in Germany and beyond adopted Fechner’s parallelism as a heuristic method. Particularly receptive to Fechner’s legacy was Ernst Mach. As Greenwood (2015, pp. 326-327) points out, Mach was, together with Christian von Ehrenfels, one of the anticipators of Gestalt psychology. In his writings, Mach never adopted the concept of PI, but he advanced a “principle of equivalence” (Princip der Entsprechung) (Mach 1865) that may be considered an antecedent of PI. The principle is very simple: it merely states that for every psychological event there must be a corresponding physical event, and that identical psychological events must correspond to identical physical events.

A few years later, Mach further developed his views. In the fourth chapter of his book Die Analyse der Empfindungen (1886), he asserted that the «guiding principle for the study of sensations» (leitender Grundsatz für die Untersuchung der Empfindungen) was the «principle of complete parallelism of the psychical and the physical» (Princip des vollständigen Parallelismus des Psychischen und Physischen). This principle was but a novel formulation of his “principle of equivalence.” In the revised 1900 edition, we find the following passage:

Nach unserer Grundanschauung, welche eine Kluft zwischen beiden Gebieten (des Psychischen und Physischen) gar nicht anerkennt, ist dieses Princip fast selbstverständlich, kann aber auch ohne Hilfe dieser Grundanschauung als heuristisches Princip aufgestellt werden, wie ich dies vor Jahren getan habe. Das hier verwendete Princip geht über die allgemeine Voraussetzung, daß jedem Psychischen ein Physisches entspricht und umgekehrt in seiner Specialisierung hinaus. Letztere allgemeine Annahme, die in vielen Fällen als richtig nachgewiesen ist, wird in allen Fällen als wahrscheinlich richtig festgehalten werden können, und bildet zudem die notwendige Voraussetzung der exakten Forschung. [According to our fundamental view, which recognizes no gulf at all between the two domains (the psychical and the physical), this principle is almost self-evident, but it can also be set up as a heuristic principle without the help of this fundamental view, as I did years ago. In its specialization, the principle employed here goes beyond the general presupposition that to everything psychical there corresponds something physical, and vice versa. The latter general assumption, which has been shown to be correct in many cases, can be maintained as probably correct in all cases, and moreover forms the necessary presupposition of exact research.] (Mach 1886/1922, p. 50; quoted from Heidelberger 2000; cfr. Luccio 2010, p. 224).

There are two things that I wish to highlight. Firstly, Mach understands the principle as a heuristic (Scheerer 1994, p. 320). Secondly, the quotation does not make any explicit reference to isomorphism, nor are we allowed to talk about structures. In this sense, the 1865 formulation of the principle of equivalence cannot be interpreted as an isomorphism, since it fails to meet the second requirement of the Character of Isomorphism. However, in a later edition, published in 1906, Mach added the following sentence: «Ich suche nach Formähnlichkeit, Formverwandschaft zwischen dem Psychischen und dem entsprechenden Physischen, oder umgekehrt» [I look for similarity of form, kinship of form, between the psychical and the corresponding physical, or vice versa] (quoted in Heidelberger 2000; my emphasis). What these similarities of form are is unclear, and we can only tentatively compare this claim to some kind of morphism. Perhaps the similarity of “forms” should be understood in terms of functions that preserve the structural relations among elements.

Although, significantly, Köhler (1929) does not discuss Mach’s principle of equivalence in the chapter dedicated to isomorphism—Mach is merely mentioned twice in the whole book and never in relation to PI—he explicitly mentioned and discussed Georg E. Müller’s psychophysical axioms and Ewald Hering’s principle of parallelism. Interestingly, both Müller and Hering were connected to Fechner. Hering, professor of physiology in Prague, had studied in Leipzig with Fechner (Thomson 1972, p. 75); Müller charted a different path, studying philosophy and history in Leipzig before moving to Berlin in order to complete his studies with a dissertation on the possibility of a scientific philosophy (ibid., p. 80). During a long recovery from a serious illness, Müller became interested in psychophysics and started a regular correspondence with Fechner. In spite of his earlier studies, Müller became a prolific researcher with an extraordinary reputation. It is not irrelevant, in the present context, to recall that Müller was also the teacher and mentor of Friedrich Schumann, who later became a collaborator of Carl Stumpf and one of Max Wertheimer’s teachers in Berlin. Schumann later moved to Frankfurt, where he hired Wolfgang Köhler and Kurt Koffka as assistants. A fervent follower of Fechner, Müller plays an important role in this historical overview for his famous five psychophysical axioms (1896). Of the five axioms, only the first three are relevant here:

I. The ground of every state of consciousness (Zustand des Bewußtseins) is a material psychophysical process. The states of consciousness occur (Vorhandensein) in conjunction with such psychophysical processes, and every psychophysical process corresponds to a state of consciousness. (Müller 1896, p. 1).
II. To every equality (Gleichheit), similarity (Ähnlichkeit), and difference (Verschiedenheit) of the composition (Beschaffenheit) of a sensation (Empfindung) corresponds an equality, similarity, and difference of the constitution of a psychophysical process, and vice versa (umgekehrt). More specifically, degrees of variations across these dimensions have a psychophysical correspondence, and vice versa. (ibid., pp. 2-3).
III. If the variations (Änderungen) or differences (Unterschiede) of the sensations have the same direction (Richtung), so will the underlying psychophysical processes. And if a sensation is variable in n-directions («in n-facher Richtung variabel»), so must also be the psychophysical process, and vice versa. (ibid., p. 2).

The scope of the neural or psychophysical processes covered by the axioms is not always clear. Köhler (cfr. §2.2) interpreted Müller as saying that the psychophysical processes also include retinal processes, thus extending far beyond the direct correlates of visual perception.

Müller was well aware that such axioms were similar to principles already discussed by other researchers, among them Lotze, Fechner (1860), Mach (1865), and Hering (Müller 1896, p. 5). The axioms were meant to replace the vague notion of “psychophysical parallelism”: «Denn der Ausdruck “psychophysischer Parallelismus” ist viel zu unbestimmt […]» [For the expression “psychophysical parallelism” is far too indeterminate] (ibid., p. 4). Also, he conferred on them a heuristic character in the search for the neural processes underlying the corresponding states of consciousness, i.e. the neural correlates of consciousness (cfr. also Scheerer 1994, p. 185). These three axioms, together with Hering’s doctrine of psychophysical correspondence—according to which psychophysical parallelism was the «conditio sine qua non of all psychophysical research» (quoted in Scheerer 1994, p. 185)—laid the ground for Köhler’s psychoneural isomorphism (Luccio 2010, p. 223).

2.2 Gestalt Isomorphism

The concept of PI was first explicitly put forward by Wolfgang Köhler (1929), and was subsequently further developed by other prominent members of the Gestalt school. Yet Gestalt psychologists did not agree on a single definition of PI. On the contrary, even among the most prominent members of the Gestalt school we find little agreement about fundamental concepts. The «inaccuracy of some definitions, the scarcely scrupulous use of some terms, or the ambiguity of some fundamental concepts» (Kanizsa 1994, p. 149) was the main source of the confusion and misunderstandings that later surrounded Gestalt psychology, and eventually led to its marginalization within the scientific community. I will primarily discuss Wolfgang Köhler’s contribution, as he was the key figure in the development of PI, but I will occasionally make reference to other Gestaltists as well.

In the second chapter of his book, “Psychology as a Young Science,” Köhler develops his attack against Behaviorism and stresses the importance of first-person descriptions of the phenomenological field in the study of the mind, anticipating by decades the claims advanced by defenders of phenomenological methods (cfr. Ch. 1, §2.1). In that chapter, Köhler pointed out that little is known of what happens in the «terra incognita» between sensory stimulation and overt behavior. This generates the problem of how to investigate the internal states and processes of an organism:

To the degree to which the interior of the living system is not yet accessible to observation, it will be our task to invent hypotheses about the events which here take place. For much is bound to happen between stimulation and response. (Köhler 1929, p. 51).

Köhler was well aware of the intrinsic limitations of early 20th century brain investigation methodologies. On a later page, he stated that, since our present views about the functions of the brain are «about as speculative as our own guesses», it will be «advisable to make full use of the chance which inference from direct experience offers to the psychologist» (ibid., p. 57). In other words, Köhler sought a way to infer the physiological processes from conscious experience. This was possible, in his view, because at least under normal conditions objective experience «depends upon physical events». The nature of such dependence is unclear; Luccio (2010) suggests that Gestalt psychology, and in particular Köhler’s approach, seems rooted in the monistic tradition that via 19th century psychophysics links these psychologists to Goethe,

Schelling, and even earlier to Spinoza and Maimonides. What is clear is that Köhler thought that researchers could exploit this dependence relation to infer something about the physiological processes:

…since experience depends upon physiological events in the brain, such experience ought to contain hints as to the nature of these processes. In other words, we argue that if objective experience allows us to draw a picture of the physical world, it must also allow us to draw a picture of the physiological world to which it is much more closely related. (ibid., p. 57).

Köhler then introduces the need for a clear «principle» that may govern the transition from direct experience to physiological processes. Such a principle, he says, should be one of «equality of structure» (ibid., p. 59). Both Hering and Müller worked under a similar assumption, but, in contrast with the proposed principle, they were guilty of referring to the mere «logical order» of experiences, rather than to the experiences themselves (ibid., p. 60). Furthermore, whereas Müller thought the axioms would hold between experiences and even retinal processes, Köhler thought that there was a closer, direct physiological correlate of visual experience, and that this constituted the other domain. These remarks further narrow down the scope of the proposed principle, whose nature is clarified by means of the following example:

… I have before me three white dots on a black surface, one in the middle of the field and the others in symmetrical positions on both sides of the former. This is also an order; but, instead of being of the merely logical kind, it is concrete and belongs to the very facts of experience. This order, too, we assume to depend upon physiological events in the brain. And our principle refers to the relation between concrete experienced order and the underlying physiological processes. When applied to the present example, the principle claims, first, that these processes are distributed in a certain order, and secondly, that this distribution is just as symmetrical in functional terms as the group of dots is in visual terms. In the same example, one dot is seen between the two others; and this relation is just as much a part of the experience as the white of the dots is. Our principle says that something in the underlying processes must correspond to what we call “between” in vision. […] the experience “between” goes with a functional “between” in the dynamic interrelations of accompanying brain events (ibid., p. 61).

Köhler’s principle receives the following definition: «Experienced order in space is always structurally identical with a functional order in the distribution of the underlying brain processes»; he then gives it the name of «psychophysical isomorphism» (ibid., pp. 61-62). Such a principle, it is then said, «covers practically the whole field of psychology» (ibid., p. 63), and is given a central place in the development of Gestalt psychology.


A whole chapter is dedicated to our principle in his The Place of Values in a World of Facts (1938), where Köhler reiterates his contention that «vision and its cortical correlate are isomorphic», and remarks that perceptual organization does not agree with the facts of physical space, but cortical organization «seems to agree with perception, rather than with physics» (quoted from Luchins & Luchins 2015, p. 74). This means that whereas perceptual content does not straightforwardly map onto the physical environment—as shown for example by cases of filling-in (Ch. 1, §1)—it should map, up to isomorphism, onto the cortical organization. However, Köhler also added a few important remarks about PI.

Firstly, and with some ambiguity, Köhler described PI as a “postulate”: «Isomorphism is a postulate» (1938, p. 224; quoted from Luccio 2010, p. 228). Now, as Luccio correctly observes, the word “postulate” does not appear to be an appropriate choice, as a “postulate” is roughly equivalent to an “axiom,” i.e. a proposition that is simply assumed within a given deductive system, rather than deduced as theorems are. However, this was most certainly not what Köhler had in mind, since he further described PI as useful in forming «hypotheses» that need empirical validation (ibid.). If this reading is correct, then PI is properly understood as a heuristic principle. This interpretation finds further support in the passages discussed above from his 1929 book, as well as in a later paper where he defined isomorphism not as an a priori postulate, but as «an hypothesis which has to undergo one empirical test after the other» (Köhler 1960; quoted from Scheerer 1994, p. 188). At the same time, however, in the latter work Köhler also seemed to endorse PI for the sake of a monistic metaphysics:

For instance, if the comparison were to show that, say, in perception, brain processes with a certain functional structure give rise to psychological facts with a different structure, such a discrepancy would prove that the mental world reacts to those brain processes as a realm with properties of its own—and this would mean dualism. (Köhler 1960, quoted from Luccio 2010, p. 241)

…monism “would become sensible precisely to the extent that isomorphism can be shown to constitute scientific truth” (Köhler 1960, quoted from Scheerer 1994, p. 189)

In other words, seen from a diachronic standpoint, Köhler initially introduced PI as a heuristic principle to be exploited in the search for the neural correlates of psychological processes. Later, whilst still retaining this interpretation as a firm acquisition, he embedded PI within a monistic metaphysics. It is sufficient, for now, to remember that—as shown in §1.3—PI is quite independent of the metaphysics of the mind-brain relation, and that talk about PI is no more justified in the case of a monistic metaphysics than it is in a dualistic one (cfr. also §3.1).

Secondly, Köhler thought that PI applied only to systematic properties (Systemeigenschaften), excluding the material properties (Materialeigenschaften) of both domains. This observation is important, for it helps us rule out some potential misinterpretations of PI. The material properties of the phenomenal domain are, as Scheerer explains (1994, p. 189), the “qualitative aspects” of sensorial experience; whereas the material properties of the neural domain would be chemical reactions—in the case of color experience—and forces in the cortex that could be described by means of dynamic models (Köhler 1929, chapter 4). According to neural field theory, which Köhler developed in his earlier Die physischen Gestalten in Ruhe und im stationären Zustand (1920), there would be force fields in the brain that tend to seek equilibrium and remain in equilibrium until some external force disturbs them (Greenwood 2015, p. 333). The few attempts at establishing the truth of neural field theory, or of “brain-field patterns,” in research that Köhler conducted in the USA—mainly at Princeton, Dartmouth, and MIT—were all unsuccessful. Nonetheless, Köhler’s thought that only structural properties were mirrored at the neural level meant that the isomorphic neural counterparts of direct experience need not share all the properties of the latter. This is summarized in a statement that is rightly often quoted in the Gestalt literature: «The cortical correlate of blue is not blue» (Köhler 1938, quoted from Scheerer 1994, p. 189). However, the exact nature of the neural domain remained unclear to Köhler due to the obvious limitations of early 20th-century neuroscience.

Whilst Köhler developed PI and gave it the name we know today, its direct antecedent was probably Max Wertheimer, to whom Köhler fully acknowledges his debt in several publications (1920, 1929). Wertheimer first had the idea that a piecemeal and summative approach—as envisaged by associationism in psychology—was not able to capture the nature of perceptual experience, and that the neural correlates should be understood as fields of activity among cells (Luchins & Luchins 2015, p. 76). The gist of the concept of PI was formulated by Wertheimer after a series of experiments that led to the discovery of the so-called “phi phenomenon” (cfr. also Ch. 4). Wertheimer’s concept of “isomorphism” must mainly be reconstructed from anecdotal evidence rather than from longer written elaborations. Indeed, Wertheimer mainly worked under his own understanding of PI, which he took to hold not between phenomenal experience and underlying neural states, but between the perceptual or “phenomenal” field and the “geographical” field. In other words, according to Wertheimer, PI would guarantee a relation of structural continuity between what we perceive and what there is in front of us (Luchins & Luchins 2015).

Finally, another figure worth mentioning in this brief reconstruction of PI within Gestalt psychology is Rudolf Arnheim. Arnheim’s understanding of PI deviates significantly from the trail blazed by Köhler, and comes closer to an integration of Köhler’s account with Wertheimer’s perspective. A brief analysis of our concept is offered in his study on the “Gestalt Theory of Expression” (1949). The term “expression” refers primarily to «external manifestations of the human personality», but, more extensively, also to a variety of aspects such as the way a person dresses, handles language or a pen, or the occupation he prefers, to mention just a few examples (ibid., pp. 51-52). After discussing a few theories—in particular, Lipps’ and

Darwin’s theories about emotion and the perception of expression—Arnheim sketches out a Gestalt theory of expression based on the «principle of isomorphism» (ibid., p. 58ff). Arnheim defines our principle as follows: «processes that take place in different media may be nevertheless similar in their structural organization» (ibid., p. 58; my emphasis). As is now known (§1.1), similarity of structure falls short of isomorphism, which demands structural identity; hence it may be argued that Arnheim’s understanding of our concept is only loosely related to PI. But this is not the only divergence. Arnheim asks himself how we could explain, within Gestalt theory, an expression. For instance, a subject A could perform a “gentle” gesture, which «is experienced as such by an observer B» (ibid., p. 59). Such an explanation is achieved by means of the principle of isomorphism; in other words, in this context, the concept plays an explanatory role. The explanation of how a subject B could see and experience a gentle gesture by A as such is delivered by extending the isomorphic levels. Firstly, Arnheim distinguishes between five different isomorphic levels within the observed person; these are:

I. State of mind [psychological]
II. Neural Correlate of I [electrochemical]
III. Muscular forces [mechanical]
IV. Kinesthetic correlate of III [psychological]
V. Shape and movement of the body [geometrical]

To simplify, subject A’s state of mind—the «tenderness of A’s feeling» (ibid., p. 59)—would correspond to an underlying isomorphic neural correlate. However, the physiological electrochemical activity of II would also structurally correspond to III, and so on, in a cascade of isomorphic activities. Whereas I-V describe A’s action, the next three steps describe B’s perception of A’s gesture:

VI. Retinal projection of V [geometrical]
VII. Cortical projection of VI [electrochemical]
VIII. Perceptual correlate of VII [psychological]

Again, to put it very simply, the retinal projection is described as isomorphic to I-V, and in turn VI is isomorphic to VII-VIII. This leads Arnheim to the conclusion that subjects perceive expressions in virtue of a number of dynamical processes that result in the organization of perceptual stimuli (ibid., p. 62).

There are a number of issues that make Arnheim’s concept of PI obscure and unhelpful. Firstly, as observed, his definition of “isomorphism” is very informal and not thoroughly articulated. Secondly, there is no clarification of the relevant structures: what is the structure of the “tenderness feeling” felt by A in performing the gesture? Thirdly, it is unclear under what descriptions levels as diverse as the ones mentioned could be interpreted as isomorphic. Fourthly, and finally, even though an isomorphism is here invoked to serve as an explanatory principle, it is unclear what kind of explanatory structure is assumed. Nonetheless, Arnheim’s concept is interesting within the context of this brief historical overview, as it helps us to bring into sharper focus the alleged connection between PI and the problem of explanation.

2.3 From Second-Order Isomorphism to the Present Day

The conceptual landscape after the 1960s is much more fragmented, and it is difficult to bring the different streams of research into a single coherent whole. Again, I will focus only on a few key protagonists, mainly among psychologists.

Parallel to the advances of the Gestalt psychologists, critics targeted the concept of Gestalt PI from different standpoints. These researchers did not share the same background assumptions about scientific psychology and how best to study the human mind; yet their rejection of PI is symptomatic of a general stance about how perceptual reports, and inferences from the structure of visual objects, ought to be used in our search for the physiological correlates of the human mind. Thus, for example, Skinner (1963) argued that the Gestalt concept of PI is nothing but a commitment to the “picture-in-the-head” theory, a dead-end research program that seeks to find neural replicas of perceptual contents in the subject’s brain. Yet, in a Dennettian style, Skinner claimed that, even if we were able to find such isomorphic pictures in the head, «we should have to start all over again and ask how the organism sees a picture in its occipital cortex, and we should now have much less of the brain available in which to seek an answer» (1963, p. 954). Skinner was, of course, writing from a thoroughly behaviorist standpoint that regarded with suspicion the study of the internal processing of the human mind.

When Roger Shepard criticized Köhler’s isomorphism, again as a form of “picture-in-the-head,” he did so from a completely different standpoint than Skinner’s. Indeed, in his groundbreaking work on mental rotation, written in collaboration with his student Metzler, the experiments required the subjects to imagine rotating the presented items (Shepard & Metzler 1971, p. 701). Shepard famously proposed what he called a “second-order isomorphism:”

[The] isomorphism should be sought—not in the first order relation between (a) an individual object, and (b) its corresponding internal representation—but in the second order relation between (a) the relation among alternative external objects, and (b) the relations among their corresponding internal representations. Thus, although the internal representation for a square need not itself be a square, it should (whatever it is) at least have a closer functional relation to the internal representation for a rectangle than to that, say, for a green flash or the taste of persimmon. (Shepard & Chipman 1970, p. 2).


In other words, Shepard operated a shift in perspective: away from the Gestalt isomorphism between perceptual content and the physiological level, and towards a second-order relation between the relations among distal objects and the relations among their internal representations in the brain. Needless to say, whatever this isomorphism is, it is very different from the psychoneural isomorphism that supposedly holds between visual objects and their underlying neural correlates. Both Skinner and Shepard, as we have seen, thought of Gestalt PI in terms of a picture-in-the-head theory. Taking stock of these critiques, Mary Henle remarked that they do not really address the problem of PI and stated: «…the question of isomorphism remains. It is a heuristic not to be ignored. It involves finding cortical processes that will account for the specific functional properties of psychological facts.» (1984, p. 325, emphasis added).

Shepard’s critique of Gestalt PI missed the point, but the problem of formulating more rigorously the relation between the phenomenal and the neural level remained open. The visual physiologist Brindley reintroduced the issue of the relation between phenomenal terms and physiological terms (Teller 1984, p. 1234). Recognizing that they belong to different realms of discourse, and that in vision science the “subject’s reports” are often an essential part of the experiments, he felt the need for psychophysical linking hypotheses. He was, however, able to individuate only one such hypothesis:

…whenever two stimuli cause physically indistinguishable signals to be sent from the sense organs to the brain, the sensations produced by those stimuli, as reported by the subject in words, symbols or actions, must also be indistinguishable. (quoted from ibid., p. 1234).

As we see, Brindley’s hypothesis—which he thought might well be a truism—follows the thread of the 19th-century psychophysicists and the later Gestaltists in trying to pin down a strict formula describing the psychophysical relation. The development of the theory of linking propositions (cfr. Ch. 1, §1.3) is an explicit resumption of this endeavor. Introduced by Teller and Pugh, and refined by Teller (1984), a linking proposition is «a claim that a particular mapping occurs, or a particular mapping principle applies, between perceptual and physiological states» (Teller & Pugh 1983, p. 581; quoted in Teller 1984, p. 1235). There are several linking propositions, but as mentioned in Ch. 1, only the “analogy family” is relevant here, since it addresses the issue of the similarity relation. The passage, already quoted in the previous Chapter, is worth repeating here in order to highlight yet another aspect:

…if psychophysical and physiological data can be manipulated in such a way that they can be plotted on meaningfully similar axes, such that the two graphs have similar shapes, then the physiological phenomenon is a major causal factor in producing that psychophysical phenomenon. (Teller 1984, p. 1240).


On the basis of this passage, the similarity relation actually seems to hold not between phenomena, e.g. a specific visual phenomenon and its neural correlates, but between their graphs. In other words, once the data are put into formal-graphical form, a similarity of shape is revealed. But what kind of similarity is at stake here? Prima facie, the similarity appears to be a visual similarity between similarly shaped graphs, rather than a mathematically specified similarity. No formal definition is introduced here, and Teller admits that «additional analytical work is badly needed» (ibid., p. 1241). The issue, to be explored later in this work (Ch. 8), is: Is there an isomorphism between graphical representations of visual objects and the underlying neural correlates? Or, more precisely, are graphical representations of visual objects isomorphic with graphical representations of the underlying neural correlates? What clearly emerges from the linking proposition theory is that Teller thought it related both to the issue of the scientific explanation of perceptual content and to the heuristic of finding the underlying causes of perceptual content (ibid., p. 1240; cfr. also Ch. 1, §1.3).

Lehar's work (2003) on Gestalt isomorphism in visual perception is perhaps the most advanced and sophisticated attempt to date to provide a systematic analysis of our problem. Lehar's analysis starts off with the observation that modern neuroscience seems unable to account for conscious experience. After reviewing a number of competing philosophical options about the nature of perceptual experience, contrasting direct with indirect realism, he suggests a solution to the problem of explaining how the brain engenders our conscious perceptual experience. Lehar's solution is that of quantifying «the structural features of the subjective experience» (2003, p. 382). Lehar likens this idea to Chalmers' principle of structural coherence (1996, pp. 222-225), according to which «various structural features of consciousness correspond directly to structural features that are represented in awareness» (ibid., p. 223). Chalmers' principle is meant to show that our conscious experience is not a chaotic blob, but a coherent whole with a specific structure. What this structure would be is not specified by Chalmers. But Lehar interprets Chalmers' principle as a restatement of the Gestalt concept of isomorphism: «to reflect the central fact that consciousness and physiology do not float free of one another but cohere in an intimate way» (2003, p. 382). From this he infers that «The connecting link between mind and brain therefore is information in information theoretic-terms» (ibidem). Lehar's isomorphism is a form of functional isomorphism, and it is in this light that he interprets Köhler's concept of isomorphism. This functional PI between perceptual experience and the underlying physiological correlates would then be grounded in a rejection of dualism:

…the principle [of structural coherence] is actually solidly grounded epistemologically because the alternative is untenable. If we accept the fact that physical states of the brain correlate directly with conscious experience, then the claim that conscious experience contains more explicit information than does the physiological state on which it was based amounts to a kind of dualism that would necessarily involve some kind of nonphysical "mind stuff" to encode the excess of information observed in experience that is not encoded by the physical state. (2003, pp. 382-383).

Hence, again, PI is used to argue for some kind of monistic mind-brain metaphysics, which Lehar thinks constitutes the current orthodoxy: «The modern view is that mind and brain are different aspects of the same physical mechanism» (ibid., p. 376). Here, again, it is worth noticing that an isomorphism does not support any particular metaphysical standpoint about the nature of conscious experience or the mind. As already observed, two distinct things may be isomorphic, just like the two dice. So, for example, our conscious perceptual experience may be isomorphic to some physiological processes without being identical with them, or even while being made of some Cartesian "mind stuff."

Lehar's paper is an instructive and lucid attempt to dispel the fog of mystery that surrounds our concept, but it fails to shed light on a number of issues. In particular, it is not clear what the perceptual content is. Lehar seems to think of it as composed of sense-data (ibid., pp. 377-382), although he also explicitly talks about representational content; for example, he states that «no aspect of the external world can possibly appear in consciousness except by being represented explicitly in the brain» (ibid., p. 377, emphasis added). From a philosophical standpoint, it is not clear whether we should interpret the sense-data as having representational content, or as being themselves representations. Furthermore, it is not clear what the relation between perceptual content and consciousness is (cfr. Ch. 3, §2.3), nor whether the isomorphism holds between tokens or types of contents. Once again, however, PI is understood as some kind of heuristic, a concept that may help us bridge the divide between conscious experience and the objective sciences of the mind: «…it should be possible by direct phenomenological observation to determine the dimensions of conscious experience, and thereby to infer the dimensions of the information encoded neurophysiologically in the brain» (ibid., p. 376).

In this section I have not aimed at historical comprehensiveness; my purpose was to provide an overview of the historical development of our concept, especially among psychologists. What clearly emerges from this reconstruction is that PI seems to have played three distinct roles: as a metaphysical principle, as an explanatory principle, and as a heuristic principle. I have ruled out the first, as PI is compatible with different metaphysical options. Regarding its explanatory role, the only way to assess its validity is by embedding it within a sound explanatory framework. Hence the question: What kind of explanatory framework sustains research on the neural correlates of visual objects? Is PI a fundamental component of psychological explanation? But most importantly, PI is meant to play a heuristic role.

3. How To Study Psychoneural Isomorphism

Having reached the end of this Chapter, it is now time to put together the different results and outline a strategy for studying the concept of PI. In the next section (§3.1), I will outline such a strategy. Later (§3.2), I will briefly introduce the next Chapters, presenting an overview that might help the reader navigate the next Parts of this work.

3.1 Outlining a Research Strategy

In the first Chapter, I have shown that the proper conceptual location of the problem of PI is the quest for neural correlates of the contents of consciousness (Ch. 1, §§1-2). An examination of the use of the concept of PI within the recent literature also illustrates the contribution of this work to the current philosophical debate (Ch. 1, §3.1). To keep the work within manageable limits, I have narrowed down the focus of this study to visual objects and their neural correlates (Ch. 1, §3.2).

In this Chapter, I have first clarified what is meant by "isomorphism," identifying three requirements that must be met in order to justify the correct use of the concept. I have called these requirements the "Character of Isomorphism" (§1.1). I have then fixed the two domains of our research as the "phenomenological" and the "neural" domain (§1.2). Later (§1.3), I have discussed the relation of PI to the metaphysics of the mind-body problem, arguing that the question of PI is orthogonal to the metaphysics of the mind. This opened the problem of what the significance of PI is, i.e. what role this concept is meant to play within contemporary research. I have tried to provide an answer to that question in §2 with a historical reconstruction of PI.

Whether two domains (or a function from a domain onto itself) are isomorphic or not is not a question that can be decided in the abstract. This means that in order to further explore PI we must take a concrete example and develop it up to the point where we can meaningfully conclude whether it is a PI or not. The chosen example is that of consciously perceived visual objects. Following the indications provided by the "Character of Isomorphism," we can now outline a concrete research strategy:

1. The first step is that of identifying the domains. I have called these domains Ψ—the phenomenological domain—and ϕ—the neural domain. The identification of the domains is easy, and even, to some extent, arbitrary. In our case, the subject of this work being the neural correlates of the contents of visual consciousness, the choice of the domains is fixed by the very nature of our subject. Still, this leaves us with the task of clarifying what these two domains are.

2. We must show that Ψ and ϕ contain elements and that they are relational structures. Once we have identified the elements, we will also need to show what kind of relational structures the two domains are. Focusing on visual objects, we will have to articulate at least a general account of what kind of structure they have, and of what philosophical theory of objects is able to capture this structure. At the same time, we will need to show what elements populate the neural domain, and in what sense they can be said to be "structured."

3. Finally, we will have to identify a function f that completely maps the relational structure Ψ onto the relational structure ϕ (a schematic rendering of this requirement is given below).

Each step presupposes a clarification of the preceding ones; we cannot properly analyze step 2 without having first analyzed step 1, nor step 3 without first analyzing steps 1 and 2.
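The requirement stated in step 3 can be rendered schematically. The following is only a minimal sketch of the standard model-theoretic notion of an isomorphism between relational structures, offered as an illustration; the symbols R, R′, and x1, …, xn are mine and not part of the notation of this work. A function f from the elements of Ψ onto the elements of ϕ is an isomorphism just in case it is one-to-one and onto and it preserves the relevant relations, that is:

R(x1, …, xn) holds in Ψ if and only if R′(f(x1), …, f(xn)) holds in ϕ,

for every relation R of Ψ, its counterpart R′ in ϕ, and all elements x1, …, xn of Ψ. Whether the phenomenological and the neural domain admit of such a mapping is precisely what steps 1-3 are meant to settle.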

Notwithstanding the foregoing, this work would be incomplete without a clarification of what PI is for. One could show that there is indeed a PI, but that it is a fruitless and empty concept. In light of the historical reconstruction and the discussion of the two debates in Ch. 1, §§1-2, we now know that PI is intimately connected with two further notions: that of explanation, and that of a heuristic. Pessoa et al. (1998), as we have seen (Ch. 1, §1.3), described isomorphism—in its various forms, as analytic isomorphism and as a subspecies of linking propositions—as an aspect of the problem of explaining filling-in phenomena, and therefore, more generally, of the problem of explanation in psychology. This aspect also emerged clearly in our discussion of the explanatory framework required for phenomenological methods (Ch. 1, §2.1), and in the discussion of Arnheim's eight isomorphic levels as postulates introduced for the explanation of the perception of expressions.

The other aspect, the heuristic value of isomorphism, emerged instead from the historical reconstruction outlined in §2. The need for some principle that could guide the search for the relevant explanatory units in the brain was foreshadowed by Fechner, Mach, and Müller in the 19th century, and was later taken up by Gestalt psychology. As shown, Köhler assigned a central role to PI as a conceptual bridge between the phenomenal and neural domains. Other Gestalt psychologists, like Solomon Asch, also described the role of isomorphism as that of a heuristic principle. In the entry "Gestalt Theory" for the International Encyclopedia of the Social Sciences, Asch, under the heading "Nativism," said that «there is as yet little understanding of the physiological foundations that gestalt theory sought for psychology, and the postulate of isomorphism remains a heuristic principle» (1968, p. 173, emphasis added). Interestingly, earlier in the same entry he states that: «The postulate of isomorphism is intended as a heuristic guide to investigation. In this manner Köhler sought a unified explanation for facts in neurophysiology and psychology among certain facts of physics» (1968, p. 161). The heuristic and explanatory aspects of PI, therefore, are not mutually exclusive. On the contrary, PI was meant as a way of identifying the processes or parts of the brain that are explanatorily relevant to specific structural aspects of the phenomenal domain, i.e., in our case, of visual objects.

This sets a clear goal for our work: for there to be a PI, our analysis must satisfy the three requirements of the Character of Isomorphism; but to justify the role of PI within contemporary research, we need to show that PI plays a heuristic role in the contemporary search for the neural correlates of the contents of consciousness. The rationale of PI stands and falls with this role. We can capture this feature, which we may call the "Interpretation of PI," in the following thesis:

Talk about psychoneural isomorphism is only justified in so far as it plays a heuristic role in the search for the neural correlates of the contents of consciousness.

An assessment of this thesis will only be possible at the end of this investigation, once steps 1-3 of the Character of Isomorphism have been analyzed.

3.2 Outline of the Next Chapters

Now that my strategy has been outlined, it remains to be seen how it will be carried out across this work. Here, I provide a short guide to the next Chapters.

The definition of the two domains is—as I said—little more than a matter of stipulation. What exactly is meant by the "phenomenological" and the "neural" domain will be the object of an extended analysis. Part II of this work comprises Chapters 3 and 4. The central task of Part II is that of clarifying the nature of the phenomenological domain and its elements. Chapter 3 has two objectives. The first is to introduce and define the concept of a "state of seeing" within a broad intentionalist or representationalist framework, according to which states of seeing possess a content, i.e. conditions of accuracy. It is the content of states of seeing that constitutes the elements of the phenomenological domain. The second objective is to clarify the role of consciousness within this work. As I will argue, my aim is not to explain consciousness, but to exploit consciousness in order to identify the relevant contents. In Chapter 4 I focus on visual objects. My central task will be to clarify the nature of the ontological elements that populate the phenomenological domain. After presenting two different views about the nature of visual objects—as bundles of properties and as facts—I will argue against the latter view, providing the basis for a theory of visual objects that will be developed in a later Chapter.

Part III focuses on the neural domain and its elements, and it comprises Chapters 5 and 6. In Chapter 5, I clarify the concept of a "neural correlate of the contents of consciousness," taking a stance against Chalmers' (2000) approach. I will introduce a mechanistic-manipulationist stance on content-NCCs. My main contention will be that content-NCCs are better understood as a cluster of distinct mechanisms subserving different roles: intentional mechanisms, selection mechanisms, and the proper NCCs. This view will be substantiated by a vast scientific literature. I conclude the Chapter by showing further advantages of my view over Chalmers' mainstream definition. In Chapter 6 I will consider a potential challenge to my interpretation. According to the sensorimotor theory, visual perception can be explained by means of sensorimotor laws that govern the exercise of sensorimotor contingencies. I will advance a novel argument against the sensorimotor theory, and provide some considerations in favor of a mechanization of the sensorimotor theory. So construed, the sensorimotor theory can be shown to be compatible with my intentional-mechanisms approach.

Part IV is more heterogeneous and comprises Chapters 7 and 8. The objective of Part IV is to clarify the structure of the phenomenological domain and to examine whether there is indeed room for a psychoneural isomorphism. Chapter 7 returns to the problem of the ontology of visual objects. Here, I will defend the claim that visual objects are better understood as spatial-mereological trope bundles. The first part of the Chapter provides an inference to the best explanation for the conclusion that the properties given in states of seeing are tropes. The second part is a defense of natural class trope nominalism, circumscribed to visual tropes. Chapter 8 finally deals with the central question of this work: how we should understand PI and whether it plays any heuristic role. In particular, I will return to Jean Petitot's morphological model of neural activity corresponding to perceptual content, and show how to reconcile morphological explanations with mechanisms. It will be shown that PI does not play a heuristically useful role, but that it can serve as a check for correct explanations.

In the Conclusion, I will briefly sum up the main claims of this work and outline prospects for future research.

PART II

THE PHENOMENOLOGICAL DOMAIN

STATES OF SEEING AND VISUAL OBJECTS

3

STATES OF SEEING

In the first Part of this work, I have introduced the central issue and outlined a research strategy. In this second Part, I focus on the Phenomenological Domain, Ψ. As I have clarified (Ch. 1, §3), I narrow down the Phenomenological Domain to visual objects, which belong to the sphere of conscious experience (Ch. 2, §1). Accordingly, this Part has two main goals. The first goal is to shed light on the "Phenomenological Domain;" the second is to specify what elements populate it. I pursue these two goals in this and the next Chapter. In this Chapter, I elucidate the notion of "states of seeing." This provides the general framework in which I embed the problem of visual objects. I tackle the issue of the nature of visual objects in the next Chapter. By the end of this Part, I will have defined, in compliance with the "Character of Isomorphism," what the Phenomenological Domain is and what its elements are. I will return to the issue of the structure of visual objects in Chapter 7.

This Chapter has the following structure. In the first Section (§1), I introduce the notion of a "state of seeing" and distinguish it from the broader and more complex capacity of visual perception. I cast states of seeing in terms of a representational theory of visual perception. This creates the problem of understanding the relation between the content of states of seeing and consciousness. I outline the issue in terms of intentionalism in the second Section (§2). Finally, in the third Section (§3), I sum up the results achieved so far and spell out the role of consciousness in this work.

1. States of Seeing

1.1 States of Seeing and Visual Perception

In this work, the Phenomenological Domain will be restricted to what I call states of seeing. For merely stylistic reasons, I will sometimes use cognate words, such as “seeing” and “see.” States of seeing are mental states that belong to—but do not exhaust—the complex capacity of visual perception, or simply “vision.” The cognitive machinery that is responsible for visual perception in general will be called the visual system. As we will see, the visual system can be decomposed into three sub-systems with different functions, and states of seeing are carried out by one of these sub-systems.

The relevant category of mental phenomena under scrutiny is that of perceptual phenomena. To get a grasp on the notion of perception, it might be useful to start with McDowell's notion of "openness" to reality, in his words:

This image of openness to reality is at our disposal because of how we place the reality that makes its impression on a subject in experience. (McDowell 1994, p. 26)

We can flesh out the notion of the impression that reality makes on a perceiver in terms of information. In this sense, the impact of reality on the perceiving organism can be cast in terms of informational states about the items that populate the environment. The information retrieved via one of the accredited sense modalities—in our case, the visual system—is processed by the cognitive machinery in a way that does not necessarily require conscious experience. Thus, the word "perception" has a broader scope than mere "conscious perception." Visual perception is not necessarily, or not entirely, conscious. My usage of the word "perception" is similar to Dretske's:

The word “perception” is often used more inclusively in cognitive studies. One perceives x if one gets information about x via an accredited sensory system whether or not this information is embodied in a conscious experience. (Dretske 2010, p. 54)

Part of the task of the visual system is that of providing suitable descriptions (or "representations," cfr. §1.3) of the external environment that might be exploited for purposes of action and behavioral control. This is the descriptive function of the visual system, and the part of the visual system that carries out such a function is the descriptive subsystem. Besides the descriptive subsystem, studies show that another subcomponent of the visual system is the deictic subsystem, which governs our sensory-motor capacities and guides action as a response to visual stimuli (e.g. Goodale 2001; Matthen 2005). (I will briefly return to the role of this subsystem in Ch. 6, where I will discuss the sensorimotor theory of vision and visual perception). Finally, there is also a third, sensory subsystem, whose function is perhaps more primitive and phylogenetically more ancient than that of the descriptive subsystem. The function of this subsystem is to govern basic sensory reactions to external stimuli, such as tracking or index mechanisms (cfr. Ch. 4, §3)1. States of seeing belong to the descriptive subsystem.

1 As I understand it, the sensory capacity is a sub-capacity of visual perception. Some philosophers, e.g. Burge (2010), sharply distinguish between "perception" and "sensation" on the ground that only the former, but not the latter, delivers representations of the environment, and therefore states that are assessable for veridicality or accuracy (cfr. §1.3). An example of a sensory capacity in primitive organisms is the Schwabe organ, present in all genera within the Lepidopleurida, an anatomical synapomorphy of the clade. In a study conducted on Polyplacophora–primitive molluscs without cephalization–the Schwabe organ is located within the pallial cavity. Speculations about its function range from chemosensitivity to preventing sediment overloading (Sigwart et al. 2014). Another example is Paramecium caudatum's sensitivity to temperature change to induce thermotaxis (Tawada & Miyamoto 1973). These functions require neither complex representational capacities nor an evolved cognitive system. In my jargon, Burge's "perception" is akin to my "states of seeing" and "visual states." Both are descriptive mental states, i.e. states that have veridicality conditions, except that the former are conscious whilst the latter are not. The reason why I depart from Burge's terminology is that I want to stress the continuity between sensory and descriptive states, and therefore between the sensory subsystem and the descriptive subsystem. It is in virtue of this continuity that I base my argument against factualism in Ch. 4.

The purpose of the descriptive subsystem is to provide accurate representations of what obtains in the perceiver's environment. What "obtains" is simply what is present or, more informally, what is the case. We can clarify this by means of an example. Right now, I sit at my desk and write on my laptop. What is the case or what obtains before my eyes are the objects that populate this tract of the world at this very moment. "Obtain" thus contrasts with the notion of merely possible presence: some items might have been present here in this tract of the environment. Perception tells us only what is given or obtains in the environment at a given time (cfr. Ch. 7, §2.2). I will call "material objects" (Matthen 2005, p. 281; cfr. Ch. 4, §§2-3) the items in the environment, whether they are persons, familiar items such as chairs and books, non-human animals, or something else. Perception can only capture items of a certain magnitude that fall within the reach of the sensory subsystem. In normal conditions, the exercise of the perceptual faculties of the descriptive subsystem informs the subject about mind-independent items (Strawson 1979, p. 97). In this work I will assume a form of realism about material objects—in contrast, for example, with Phenomenalism, i.e. the claim that there are no mind-independent objects (Staudacher 2011, p. 20)—and I will assume that these items stand in a causal relation with the perceiver via the sensory subsystem (Grice 1961; Snowdon 1981; cfr. Ch. 4).

Descriptive states are the exercise of descriptive capacities. Some, but not all, descriptive states are conscious. We can distinguish between non-conscious descriptive states and conscious descriptive states. I call the latter “states of seeing” or “seeing” (SoS). More precisely, we can put forward the following definition:

SoS’=df A mental state that has conscious visual descriptive character.

In order to demarcate the descriptive character of states of seeing from the descriptive character of non-conscious descriptive visual states, I will refer to the presentational character of states of seeing, thus:

SoS = df A mental state that has visual presentational character.

This is the benchmark definition of states of seeing that I will employ in this work. The "presentational character" of states of seeing is simply an abbreviated way to refer to their conscious descriptive character. In the next pages, for stylistic reasons, I will sometimes refer to the presentational character of states of seeing with expressions like "to make manifest" and "to manifest."2

To sum up some of the key distinctions discussed in this paragraph, we can visualize them in the following chart (Fig. 7).

[Fig. 7: diagram not reproduced. The chart depicts Visual Perception decomposed into the Deictic, Descriptive, and Sensory Subsystems; the Descriptive Subsystem yields Descriptive Visual States, which divide into (non-conscious) descriptive visual states and States of Seeing.]

Fig. 7: The different subsystems of visual perception. The arrows denote the interaction between the subsystems.

I have called "visual perception" the capacity of human beings to visually perceive the environment. Visual perception is a complex capacity that can be decomposed into at least three subsystems: the sensory, the descriptive, and the deictic. The descriptive visual subsystem is responsible for descriptive visual perceptual states. Descriptive visual states and states of seeing are the outputs of the descriptive subsystem. Every descriptive visual state is a representation of the external environment, but not all descriptive visual states are conscious: some are unconscious, while others are conscious. I call the latter "states of seeing." States of seeing are therefore by definition conscious. I refer to the descriptive conscious character of states of seeing as their "presentational character" to demarcate them from merely descriptive visual states. Both conscious and unconscious states will be called "mental states." Following a widespread consensus, and in order not to prejudice the question of their ontological status, I use the term "state" as an ontologically neutral concept that covers processes, events, or states. Only later in this work (Ch. 5) will I return to the ontology of states of seeing, where I will suggest that they are best understood as processes or events.

2 The notion of presentational character is somewhat similar to the phenomenal sense of look-verbs (e.g. Pautz 2011, p. 116). Traditionally, the semantic analysis of look verbs distinguishes between three senses: the comparative, the epistemic, and the phenomenal sense (the locus classicus here is Jackson 1977a, pp. 30-49; cfr. also Chisholm 1957, chapter 4). The latter concept purports to capture what is simply visually given at any time to a subject, as in the sentence "This pineapple looks yellow" and, in general, sentences of the form "X looks F to S." Some philosophers have made reference to the phenomenal sense to support some form of sense-datum or representational theory of perception (Jackson 1977a). However, the semantic analysis of look verbs is not unproblematic. Martin (2010) casts doubt on the very existence of the phenomenal sense of look. Pautz (2010, p. 255) says that it is uncertain whether the phenomenal sense of look captures perceptual states or rather doxastic states, insofar as they describe the visual evidence exploited in belief formation. For these reasons, I do not embrace the semantic approach to look verbs.

We now have two tasks ahead: to clarify the nature of the "descriptive character" of states of seeing (§1.3), and to elucidate the relation between description and consciousness (§2). Before doing that, I first locate states of seeing among the other mental states that compose our conscious and unconscious mental lives. As I will now show, states of seeing are a part of our conscious mental lives.

1.2 Unity of Consciousness and the Visual Field

In the foregoing paragraph, I have made the claim that some states are conscious, whilst others are unconscious. This is obviously true not only of visual perception, but of other sense modalities and cognitive states as well. At any given time, a subject S has many mental states. The sum of all these mental states is the overall state M of S's mind at that given time. M is the mereological sum of all of S's mental states at t. M can also be described as an abstract set made of all of S's mental states, such that:

M = {m1, m2, m3, …mn}

That is to say that M is the set composed of all of a subject's mental states. Some of these mental states, as we have seen, are conscious. We can thus draw from M a proper subset C of all conscious mental states, such that:

C ⊂ M; C = {c1, c2, c3, …cn}

I will call the totality of all conscious mental states at a time t the "phenomenal unity of consciousness" (cfr. Bayne 2010; Bayne & Chalmers 2003).3 The various elements that compose C will be conscious mental states of different characters. For example, some of these states will be auditory experiences, some others will be emotional states, and yet others will be cognitive states. Together, they determine "what it's like" (cfr. §2.1) to be S at a specific time t.

3 I borrow the phrase "phenomenal unity of consciousness" from Bayne (2010), but I give it a slightly different meaning. According to Bayne, we can distinguish between a representational unity and a phenomenal unity of consciousness, accepting that consciousness is not identical with the representational content. This, however, has the drawback of creating a new set of "spooky" phenomenal contents that stand in some relation with the representational contents. I return to the relation between representational content and consciousness in §2.

The "phenomenal unity of consciousness" is the mereological sum of all S's conscious mental states at a given time, without making any particular assumption about the nature of our conscious experience (cfr. §2). The single conscious mental states c1, c2, etc. are proper parts of a subject's conscious experience at a given time. There is no agreement about the definition of the notion of "proper part" in mereology. One intuitive way to spell it out is as follows:

PPxy=df Pxy ∧ ¬Pyx

That is to say: x is a proper part of y means that x is a part of y and y is not a part of x. Notice, however, that some philosophers find it more plausible to strengthen the claim and argue that the very notion of a proper part implies that every proper part must be supplemented by another, disjoint part in order to constitute a whole (Casati & Varzi 1999, p. 39). In addition, one could refine the definition with modal operators. We do not need to delve deeper into these mereological issues. For our purposes, it suffices to point out that the single conscious mental states of a subject S are proper parts, i.e. non-exhaustive parts, of the mereological sum of all conscious mental states.
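The strengthened reading just mentioned can be rendered schematically along the lines of a weak supplementation principle. The following is only an illustrative sketch in the notation used above, not Casati and Varzi's own wording; O stands for mereological overlap:

Oxy =df ∃z (Pzx ∧ Pzy)

PPxy → ∃z (Pzy ∧ ¬Ozx)

That is, if x is a proper part of y, then y has some further part that does not overlap x.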

The mereological sum C (as well as M) can be studied from two distinct perspectives. The first perspective is a diachronic one. Consider a span of time that ranges from t1 to t2, where t1 and t2 are any two distinct moments in time. However we fix t1 and t2, the diachronic unity of consciousness will be the mereological sum of all conscious mental states of a subject S from t1 to t2 (Rashbrook 2012). Any account of the diachronic unity of consciousness must explain how such a unity holds over time. The second perspective is a synchronic one. This perspective consists in focusing on a subject who is "frozen" at a given time t. In this case, C will be the mereological sum of all of the subject's conscious mental states at the time t. To account for the synchronic unity of consciousness means to show how different conscious mental states—among them states of seeing, acoustic states, and so on—hold together synchronically. The problem of psychoneural isomorphism can be studied under both perspectives. From the diachronic perspective, PI will be a function that maps the unfolding of conscious mental states and their relations onto the underlying dynamics of brain states and their relations. From the synchronic perspective, one will have to find out how C maps onto a neural domain ϕ (cfr. Ch. 2, §1.2). In this study, I will exclusively focus on the synchronic perspective, and thus I will frequently make reference to an ideally static perceiver.
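The two perspectives can also be given a compact, purely illustrative rendering; the time-indexed notation C(t) is mine and simply restates the definitions above, with set-theoretic notation standing in for the mereological sum:

C(t) = {c1, c2, …, cn}   (the synchronic unity of consciousness of S at time t)

C(t1, t2) = the sum of all C(t) for t1 ≤ t ≤ t2   (the diachronic unity of consciousness of S over the span from t1 to t2)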

Let us now return to states of seeing. States of seeing, as I have said, are conscious mental states. We can therefore extract them from the mereological sum C of all conscious mental states and form a subset of C containing all and only states of seeing. The mereological sum of all states of seeing of a subject S at a given time t will be the visual field, V (Clark 1996, 2000); we thus have:

V ⊂ C ⊂ M; V = {v1, v2, v3, …vn}

(Notice that from the above notation it follows that V is also a proper subset of M.) The subject's visual field is composed of all the subject's states of seeing v1, v2, and so on at a given time t. V only represents a subject's visual field within the synchronic perspective on the unity of consciousness. Finally, within V, I will focus only on those states of seeing that represent visual objects (cfr. Ch. 4). This subset of V is our Phenomenological Domain, or ψ, such that:

ψ ⊂ V

In the remainder of this work, assuming a synchronic perspective, I will frequently employ some more specific examples to throw light on the most abstract passages. We can thus introduce an ideal state of seeing v0 as the state of seeing, for example, this red apple on my desk, or this book.
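Putting the successive restrictions of this Section together, the nesting of domains can be summed up in a single chain; this is a mere restatement of the notation introduced above, with the ideal state v0 just mentioned:

v0 ∈ ψ ⊂ V ⊂ C ⊂ M

Reading from left to right: an ideal state of seeing of a visual object belongs to the Phenomenological Domain, which is a subset of the visual field, which is in turn a proper subset of the totality of conscious mental states C and of the overall mental state M.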

1.3 The Representational Character of Seeing

I have introduced states of seeing and specified where they are placed with respect to other mental states. States of seeing have a presentational character: they make manifest or present things in the world to the perceiver. The nature of the items manifest in states of seeing is controversial. The contemporary landscape can be divided into two camps: representationalists and naïve realists or relationists (Campbell 2002, pp. 116-120).4 As I hinted above, the descriptive or presentational character of a mental state should be understood in representational terms. Hence, I am explicitly espousing a representational theory of states of seeing. In this work, I will simply assume a representationalist framework. I make this assumption for the following reasons: firstly, because representationalism is the current orthodoxy in the philosophy of perception, and it is not my purpose to challenge it in this work (for two arguments in favor of representationalism in the philosophy of mind, cfr. Burge 2005, Pautz 2010); secondly, because it has been shown to cohere well with our scientific theories of the mind, where the notion of representation is ubiquitous (e.g. Bechtel 2001a; Miłkowski 2013, chapter 4); thirdly, because proponents of naïve realism have not yet clarified the role of the cognitive machinery in visual perception, which will play a prominent role in the subsequent Chapters. (A notable exception from the relationist camp may be the sensorimotor theory, which will be the object of study in Ch. 6). I will now spell out the nature of representationalism by means of a contrast with naïve realism.

4 "Relationist" is sometimes also used as a label for views that do not fall squarely within this dichotomy. The sensorimotor theory of visual consciousness is a case in point (O'Regan & Noë 2001; see Ch. 6). The sensorimotor theorist denies that perceptual experiences are representational, although without rejecting representations altogether, or even representational content (e.g. Noë 2002, p. 67; 2004, p. 22). Perhaps it would be better to cast the distinction in terms of common factor views and disjunctive views (Pautz 2010, pp. 255-265). The former hold that genuine perceptual states, illusions, and hallucinations have nondisjunctive properties in common, whereas the disjunctive views hold that these states exhibit disjunctive properties (cfr. Hinton 1967). However, this would enlarge the common factor views to encompass not only representationalism, but also the sense-data theory and Peacocke's "sensationalist view" (Peacocke 2008). The latter philosophical options will not be further discussed in this work.

Naïve realists contend that veridical perceptions involve a primitive, unanalyzable metaphysical relation to external items. The nature of this relation can be spelled out in different ways. For example, if it is interpreted as a causal relation, then naïve realism may be compatible with representationalism. Indeed, some philosophers take perceptual experiences to be both contentful (i.e. representational) and relational (e.g. McDowell 2013). The naïve realist standpoint that I want to briefly sketch out here, however, stands in opposition to representationalism in that it discards the key notion of content as explanatorily unnecessary (e.g. Travis 2004), or as obscuring the nature of veridical perception (e.g. Brewer 2011). This radical form of relationism takes perceptual states (or my "states of seeing") to be at least partially constituted by worldly items; these items would «shape the contours of the subject's conscious experience» (Martin 2004, p. 64), where the concept of "shape" should be read in an ontological, constitutive sense (Fish 2009, p. 6). As Campbell explicitly suggests: «[w]e have to think of the external object, in cases of veridical perception, as a constituent of the experience» (2002, p. 118). So, for example, when S sees a red apple, the properties the subject is acquainted with are intrinsic properties of the object itself: the color and shape of the apple are constituents of the state of seeing (Campbell 2002, p. 116; 2010, p. 206). A subject's conscious experience is then read off from the presentational character of a conscious mental state: what is presented to the subject determines what it is like to enjoy that particular experience (Brewer 2011, p. 92; Martin 1998, p. 174; cfr. §2). Accepting naïve realism therefore means denying that states of seeing have a representational character (Travis 2004, p. 93)5.

Unlike naïve realists, representationalists maintain that a subject's perceptual states have a representational character. On some versions of representationalism, states of seeing may have a relational character too, i.e. a state of seeing may constitutively stand in a causal relation with an external, mind-independent item, besides having a representational character (cfr. Ch. 7, §2.2). But representationalists deny the core idea of any naïve realist theory, i.e. that external, mind-independent items are constituents of states of seeing. In the remainder of this work, I will accept that states of seeing involve a genuine causal relation with worldly items (cfr. Ch. 4, §3), but I remain agnostic about whether this causal relation is compatible with some form of naïve realism or of anti-individualism (e.g. Burge 2010)6.

5 To what degree naïve realism is incompatible with representationalism and intentionality is still a matter of controversy. For example, Searle overtly defines himself as a naïve realist (1983, p. 57), and later says that visual states are not "representations" but "presentations" of reality (2015, p. 68). Searle's terminology differs from mine, however. Whereas I prefer to talk about "representational" states, Searle opts for "presentational" states, although his theory, just like mine, is canvassed within the framework of intentionality. Although I employ the representational terminology, I only offer a minimal account of the representational character of states of seeing. A fully articulated theory would require us to take a stance on the controversial problem of intentional inexistence, which concerns the relation between the representational or intentional state and the item(s) it is about (cfr. Crane 2013). This task, however, exceeds the scope of this work.

The claim that states of seeing have representational (or descriptive) character is tantamount to saying that they have contents. The term "content" can be used in different senses. In one sense that I call "trivial," the content of a state of seeing is identical with its presentational character, where this character trivially specifies what the subject sees at a given moment. Arguably, this is the sense that Macpherson (2011) has in mind when she states that «there is always at least a minimal sense in which perceptual states are representational» (p. 130). In this trivial sense, the content of a state of seeing v0 is simply this red apple or that particular copy of the book. (This claim should not be confused with the naïve realist's contention that the object itself is a constituent of the perceptual state). It is relatively easy to establish that perceptual states have contents in this trivial sense. To a rough approximation, the trivial sense is simply identical with the claim according to which we see items and their properties (Siegel 2010a, p. 45; cfr. also Byrne 2001 for a somewhat similar argument). However, as it stands, the trivial sense can be broadly accepted by virtually every philosopher of perception; after all, every philosopher of perception is committed to the view that states of seeing constitutively present properties to the subject7. Sense-data theorists, for instance, maintain that colors are sense-data, or properties of sense-data (e.g. Moore 1953, pp. 40ff). Similarly, Campbell (2002, p. 116) defines the properties that are «revealed» (p. 118) in states of seeing as intrinsic properties of the objects.

In contrast with the "trivial" concept of content, I shall here use the term "content" in a technical sense. I provide a more exact clarification of this notion below, but very roughly, we can say that the content of a mental state is its conditions of accuracy8. The technical sense of "content" is drawn from the theory of intentionality. The concept of intentionality is the capacity of the mind to refer to or be about something else (Crane 2001, chapter 1; Searle 1983). A mental state that exhibits intentionality is said to have intentional character. For example, thoughts are always thoughts about something: we think about a particular object that we have seen, about a friend who lives far away, and so on. Similarly, memories are always memories of something, like the memory of our last holiday, or of our schooldays. The purview of intentionality is disputed among philosophers (Byrne 2001, p. 205). Some philosophers contend that all mental states are intentional, including bodily sensations like itches, physical pain, emotions, moods, and feelings. Representationalism about perceptual experience is the claim that perceptual states (in our case, descriptive visual states, both conscious states of seeing and non-conscious descriptive visual states) have an intentional character: they are mental states that refer to something else (e.g. Byrne 2001, 2011; Chalmers 2010a, 2010b; Crane 2001, 2003; Dretske 1995; Pautz 2010, 2011; Searle 1983, 2015; Siegel 2010a; Tye 1995, 2000)9. Whenever we see, we see something; states of seeing and visual descriptive states are about something.

6 The view that I advocate here is fairly innocent: most philosophers are committed to some version of the causal theory of perception, although controversy ensues both about the exact causal patterns that hold between worldly items and states of seeing, and about whether the causal theory is a conceptual truth or rather an empirical truth (e.g. Snowdon 1981).

7 In defending Resemblance Nominalism, Rodriguez-Pereyra introduces an innocuous use of the term "property" that does not postulate universals or tropes. Take for example a set of red roses: «whatever it is that makes all red particulars red need not be an entity, like a universal or trope» (2002b, p. 17). For now, I will use the term "property" in this neutral sense. I will show the relevance of the problem of properties for the present investigation in Ch. 4, §1, and return to it at length in Ch. 7.

8 In the literature, the following expressions are used interchangeably: conditions of accuracy, conditions of veridicality, or conditions of satisfaction (Searle 1983, p. 48; 2015, p. 57). I prefer the term "accuracy" for merely stylistic reasons. I suspect that these concepts should actually be sharply distinguished. For reasons of space, I will not further elaborate on their distinctness, as in this work I am exclusively concerned with the "internal psychophysics" rather than with the relation of perceptual content to the environment (cfr. Ch. 2, §1.2).

Following Twardowski, Crane (2001, p. 29) distinguishes between the object and the content of an intentional state. The object of a representational state is what the state refers to. So, for example, the state of seeing a red apple refers to a mind-independent object, the apple itself; a thought about my friend in Japan refers to a particular individual, and so on. It is not always clear what the object of a representational state is, or might be. In cases where the direction of fit is toward the environment, it is fairly easy to specify the individual item(s) a representational state refers to. States of seeing present or make manifest items in the world. My state of seeing the book with a red cover is directed to or about that red book on my desk. Yet, it is not easy to specify the objects of states directed to non-existent items. Thoughts directed towards non-existent objects actually abound in our lives. We can think about fictional characters, like Medea from Euripides's tragedy. Descriptive perceptual experience, too, might be directed toward non-existent objects. A popular example among philosophers of perception is Macbeth's hallucinating a dagger. Hallucinatory experiences fail to refer in that they are about objects that do not exist. Representationalism about perceptual experience should therefore articulate an explanation of hallucinatory experiences, i.e. of how some states might be about something that does not exist. For my purposes, it will suffice to focus on states of seeing, thus I will set the problem of hallucinations to one side10. Also, in order to avoid confusion between different meanings of the word "object," I will replace Crane's notion of object with "material object" (cfr. Ch. 4, §3). Material objects are always mind-independent items.

9 The terms "intentionalism" and "representationalism" are sometimes used synonymously, whilst other philosophers call "representationalism" the view according to which conscious experience is identical to representational content (or "strong representationalism," cfr. §2.2). In this work, the term "representationalism" refers to the claim that visual perceptual states have descriptive character, i.e. are assessable for accuracy. The term "intentionalism" will be used to refer to the claim that the phenomenal character depends on content.

10 Recall that I take states of seeing to be genuine perceptual states, in contrast with hallucinations. The problem of hallucination will only play a marginal role in this work, though a few remarks on it will be found in several Chapters. Of course, in line with the assumed representationalism, I take hallucinations to have a (mis)representational character (cfr. Miłkowski forth.). Naïve realists, and philosophers who reject representationalism altogether, have different options to cope with hallucinatory experiences, but the standard move is to take such experiences to be misjudgments of what is the case (e.g. Fish 2009, pp. 80-115).

The second concept is that of “content.” As I have anticipated, the content of a representational state is its conditions of accuracy, i.e. the conditions under which the intentional state accurately represents the object it is about. Consider the case of photographs or portraits. A portrait might be more or less accurate with regard to the portrayed subject. Thus, accuracy comes in degrees. Where there is no object, such as in the case of hallucinations, the state of seeing will be inaccurate or misrepresent the object. How exactly accuracy might be further specified depends on what format the representational content takes. Concerning perceptual states, according to some philosophers the content of a state of seeing is a proposition (e.g. Byrne 2001; Chalmers 2010a; McDowell 1994; Schellenberg 2010, 2016; Searle 1983; Thompson 2009; Tye 1995, 2000), whereas others might opt for a non-propositional account of perceptual content, such as a scenario content (Peacocke 1992, pp. 61-98) or a property-complex view (Pautz 2007, pp. 498-499). I will leave open the problem of the nature of perceptual content for the time being, and return to it in Chapters 4 and 7.

The content of a mental state refers to a specific object from a given perspective or point of view. Suppose for example that Peter sees his cat Tibbles sitting on the mat11. Tibbles will be the material object of the state of seeing. But of course, Peter does not see the object, Tibbles, as such; instead, he sees Tibbles from a particular viewpoint. So, perhaps Peter stands in front of Tibbles and thus represents only some of Tibbles' properties (Ch. 4). The concept of a perspective or aspectual shape (Crane 2001, p. 18) captures the intuitive idea that a state of seeing presents a material object as being some way from a particular viewpoint.

A further element must be described in order to have a clear picture of the basic structure of intentionality: the mode. The "mode" of an intentional state refers to the modality in which a particular object is represented. For instance, in the case of states of seeing, the mode will be "visual perception." The same object, say Tibbles, might be represented by intentional states with different modalities. For example, Peter may remember his cat Tibbles or think about him. In every case, there is a material object, a real-world item, a cat, and a content that fixes the accuracy conditions of the representational state. Mode, content, and (material) object are the three defining features of intentional states. Each intentional state is then ascribed to a particular subject, such that S has an intentional state v0 that is directed at an object, say a red apple, represented through the mode "visual perception," and having some degree of accuracy.

11 The example is inspired by Peter Geach, Reference and Generality, Ithaca, NY: Cornell University Press, 1980, p. 215.

The purpose of this paragraph was to shed light on the concept of states of seeing. There are still a number of open issues that must be addressed: the relation between representational content and consciousness, what exactly is presented to the subject in states of seeing, and how the visual system may generate conscious content. I broach the first issue in the next Section, and the second one in Chs. 4 and 7. The third issue will be introduced and discussed in Ch. 5. Later (Ch. 8, §2), I will argue that philosophers may provide phenomenological models of perceptual content, i.e. merely descriptive, non-explanatory models.

2. Content and Phenomenology

States of seeing are conscious mental states, and the presentational character of such states is what differentiates them from unconscious descriptive visual states. In the previous Chapters, I have often employed the concept of consciousness assuming some pre-theoretical understanding of the concept—after all, we all have some intuitive idea of what we talk about when we talk about consciousness. However, if the problem of PI is inextricably related to the problem of the neural correlates of conscious visual content, the concept of consciousness must be set on a consistent and intelligible footing (besides the specific works cited below, on consciousness cfr. also Bayne et al. 2009; Rose 2006; Seager 1999; Velmans & Schneider 2007; Zelazo et al. 2007).

In §2.1, I introduce the concepts of phenomenal and access consciousness. Some philosophers, most notably Block (2007), have argued that the two concepts denote not just two aspects of the same phenomenon, but two different, although closely related, phenomena that may, under some conditions, come apart. The thesis is known as the "overflow" argument, and endorsing or rejecting it has significant consequences for any account of the neural correlates of conscious content. The overflow thesis will be briefly discussed in §2.2, and then again in Ch. 5. Finally, in §2.3 I will introduce intentionalism, i.e. the claim according to which our conscious experience somehow depends on representational content. The aim of this paragraph is to shed light on the notion of consciousness and, most importantly, to bring into clearer view the role of consciousness in this work.

2.1 What Does "Consciousness" Mean?

2.1.1 Phenomenal and Access Consciousness

Over more than fifty years, philosophers have introduced many concepts in the attempt to clarify the nature of our conscious experience (cfr. Rose 2006; Van Gulick 2009). The single most widely discussed notion of "consciousness" is that of phenomenal consciousness. Phenomenal consciousness, or simply p-consciousness, refers to the particular qualitative character of conscious states in contrast with unconscious states (e.g. Burge 1997, p. 427, 2006; Searle 2004, p. 134), or as Chalmers puts it, «the way it feels» (1996, p. 11). This "way of feeling" is nicely captured by Thomas Nagel's (1974) famous "what-is-it-like-to-be" (cfr. also Farrell 1950). For instance, Ned Block says that «what makes a state phenomenally conscious is that there is something 'it is like' to be in that state» (1995, p. 377); whilst Chalmers glosses the concept as follows: «what it means for a state to be phenomenal is for it to feel a certain way […] in general a phenomenal feature of mind is characterized by what it's like for a subject to have that feature» (1996, p. 12). As I hinted above, philosophers have introduced a great many concepts in this debate: a few more examples include Shoemaker (1994, p. 22), who talks about «qualitative character», and the «subjective character» endorsed by Metzinger (1995, p. 9) and Schlicht (2011). To avoid confusion, I treat the following terms as synonyms of p-consciousness: "phenomenality," "conscious," "experience," and cognate constructions. The term "phenomenology," although sometimes used in the sense of "phenomenal consciousness"—e.g. the phenomenology of a state of seeing is what it is like to enjoy that state of seeing—will also be used in the sense of the field of study whose objective is the analysis of conscious experience.

Another popular concept of consciousness is that of access consciousness, or a-consciousness (Block 1995). Ned Block introduced this concept in a classic study:

A state is access-conscious (A-conscious) if, in virtue of one’s having the state, a representation of its content is (1) inferentially promiscuous […], that is, poised for use as a premise in reasoning, (2) poised for rational control of action, and (3) poised for rational control of speech. (Block 1995, p. 231)

Put crudely, the idea is that a representational content is access-conscious if that content is available to other cognitive modules (Block 2007). A somewhat similar concept is that of psychological consciousness (Chalmers 1996, pp. 25-31). Chalmers' psychological consciousness is a catch-all concept that refers to a number of ways in which consciousness is operationalized; these include introspection, reportability, self-consciousness, attention, and more (ibid., pp. 26-27). The «most general brand of psychological consciousness», according to Chalmers, is awareness: «a state wherein we have access to some information, and can use that information in the control of behavior» (ibid., p. 28).

Block identifies three differences between p- and a-consciousness (1995, p. 170). The first difference is that p-consciousness is phenomenal, whereas a-consciousness is representational. A-consciousness presupposes a representational state, i.e. a mental state that refers to or is about something. Phenomenal consciousness is, on the contrary, a purely qualitative concept12. Another way to spell out this point is to say that a-consciousness is transitive, i.e. it is always consciousness of something, whilst p-consciousness is intransitive, i.e. it is not consciousness of something. The second difference is that a-consciousness is a broadly functional notion, whereas p-consciousness is not. The third difference is an asymmetry between the two. P-consciousness divides into kinds: the feel of pain is a conscious kind of which this particular pain experience is a token, i.e. every token pain will be an instance of a particular type of p-consciousness. By contrast, a particular a-conscious state need not be accessible at some other time. Whether these differences are merely conceptual distinctions drawn with respect to the same phenomenon, or rather signs of a deeper ontological divide, will be discussed in §2.2.

12 It must be noted that Block slightly altered his concepts over the years, and the distinction between the two is not so clear-cut (cfr. Schlicht 2012).

2.1.2 State Consciousness, Creature Consciousness, Background Consciousness

Most of my examples so far have involved specific mental states being conscious, for instance, the state of seeing a red apple. In these cases, the adjective "conscious" is predicated of a token mental state. It is also possible to enlarge our perspective and apply the adjective "conscious" to organisms. Most of us take it as uncontroversial that some non-human animals are conscious, whilst other entities are clearly non-conscious13. For instance, stones, corkscrews, and CDs are not conscious, whereas human beings and, in all likelihood, cats, chimps, and other non-human animals are conscious (Allen & Bekoff 2007). The assertion that the latter entities are conscious does not imply that they are necessarily always conscious14. All that is required is to say that human beings and other non-human animals have the capacity of being conscious. This notion of consciousness is sometimes called creature consciousness (Rosenthal 1986). If we accept the distinction between p- and a-consciousness, we can further distinguish between creature p-consciousness and creature a-consciousness. The former refers to the capacity of an organism to have p-consciousness, the latter to the capacity of an organism to have a-consciousness. If, as some researchers contend, there is a real distinction between p- and a-consciousness, we may conceive of organisms that possess creature p-consciousness in the absence of creature a-consciousness, and vice versa (cfr. §2.2).

13 Perhaps this is an overstatement. Panpsychists contend that consciousness is literally everywhere: every causal process in the universe, from the complex operations of the brain to micro-physical interactions, has a phenomenal aspect (Seager 2007). Panpsychists usually argue that whilst consciousness or proto-consciousness is literally everywhere, a complex and rich conscious life is only possible within highly complex systems, such as the human brain. If panpsychism is true, it follows that there are no neural correlates of consciousness, as every causal or functional process has an intrinsic conscious or proto-conscious side. This, however, does not rule out the neural correlates of conscious content that are the object of study in this work. 14 Whether, for example, human beings are conscious in every moment of their lives is a matter of controversy. One obvious objection would be to argue that patients in a vegetative state, in a coma, or simply asleep are not conscious. However, we currently have no clear evidence that patients in a vegetative state or coma lack consciousness altogether, cfr. Ch. 1, §2.1; concerning sleep, dreams are usually conceived as conscious states.

Most cognitive scientists assume that the exercise of our capacity for consciousness consists in some specific kind of brain activity. The search for the biological and computational correlates of conscious experience is the aim of the research program on the neural correlates of consciousness, or NCC (Bayne 2007; Chalmers 2000; Crick & Koch 1990; Hohwy 2007, 2009; cfr. Ch. 5). In the scientific literature, Rosenthal's concept of creature consciousness is often identified with the concept of state consciousness. State consciousness refers to an organism's overall mental state of being conscious. For instance, right now I am in an overall conscious state, independently of the specific conscious mental states I enjoy. As I am using the terms here, however, state consciousness is not equivalent to creature consciousness. The difference is roughly this: creature consciousness refers to the entity's capacity of being conscious (p- or a-conscious), whilst state consciousness refers to the organism's overall condition of being conscious at a particular moment in time. We can clarify this by means of an example. Suppose that the (blind) neuroscientist Mary has a tragic accident and falls into a potentially reversible coma. While in the coma, we would still say that Mary is creature conscious, i.e. she is a being that has the capacity of being conscious, although this capacity is not actually exercised. Yet, Mary is arguably not in a state of consciousness, as coma may be an entirely unconscious condition. Research on the content-NCCs assumes that the subjects are already in a state of consciousness (cfr. Ch. 5).

Being in a state of consciousness, at least in humans, implies that the subject is always in a specific background state of consciousness (Chalmers 2000, pp. 18-19). The notion of a background state of consciousness is still controversial, much like the notion of "levels of consciousness" (e.g. Bayne 2007; Laureys 2005; Overgaard & Overgaard 2010). It is difficult to explain the notion of a background state without resorting to some examples. Typical forms of background states include: normal conscious wakefulness, mind-wandering, different stages of sleep like N-REM 1 or REM sleep (e.g. Flanagan 2000; Mancia 2006), or forms of detachment like derealization or depersonalization, whereby the subject feels, respectively, reality as "dream-like" (whence the name "oneiroid states"), or her own mental states as "alien" (e.g. Liotti 2008; Simeon & Abugel 2006; cfr. also Hobson 2007). Alterations of the background state of consciousness might be induced by endogenous or exogenous factors. Endogenous factors include circadian rhythms of wakefulness and sleep; exogenous factors include traumatic events or the ingestion of psychotropic substances. There is little doubt that these background states make for a different overall phenomenal experience. Being conscious and drunk is different from being in a state of normal conscious wakefulness, or from dreaming. What exactly accounts for these differences is far from clear15. In this work, I follow the implicit assumption of much of contemporary philosophy of mind, and assume that states of seeing are those of a subject in normal conscious wakefulness.

15 I have done some spadework towards an account of background states in terms of a restructuring of the overall accessibility of conscious contents (Vernazzani ms). In short, the proposal is that, for instance, in a state of derealization the subject will have a particular access-profile to her own mental contents, whereas the access-profile will be different in a state of normal conscious wakefulness; i.e. some mental content might be accessible to a number of higher cognitive functions in a state of normal conscious wakefulness, whereas the very same content might only be partially accessible in an altered state of consciousness, or even accessible to different cognitive functions.

Let us now return to mental states. As we have seen, some mental states are conscious, whilst others are not. Enjoying a conscious mental state entails being in a state of consciousness, having a particular background state, and being creature conscious. The identification of the neural machinery that makes a particular content conscious goes under the name of the search for the neural correlates of conscious content, or "content-NCC" (Bayne 2007; Chalmers 2000; Crick & Koch 1990; Hohwy 2007, 2009). In the case of states of seeing, the driving research question underlying this program is: In virtue of what do we enjoy states of seeing? The question may admit different theoretical and experimental answers, depending also on our assumptions about the divide between p- and a-consciousness (cfr. Ch. 5). If the two phenomena are ontologically and empirically dissociable, one could argue that a state of seeing might be p-conscious without being a-conscious, and therefore that two different kinds of mental machinery are involved in making states of seeing both p- and a-conscious. Our philosophical assumptions about the nature of consciousness thus bear on the way we model the underlying mechanisms responsible for a subject's conscious experience. I will now turn to the debate about the relation between p- and a-consciousness.

2.2 Accessibility and Phenomenal Overflow

In the last decade, Block has marshaled the argument that p- and a-consciousness are not merely conceptually distinct, but rather two distinct phenomena that may be dissociated under some particular circumstances (Block 2007, 2008, 2011). In his (2008) contribution, Block illustrates this point by means of a simple case. A number of experiments have now shown that an area at the bottom of the temporal lobe is strongly correlated with the experience of faces (e.g. Kanwisher 2010; Kanwisher et al. 1997) (cfr. also Ch. 5, §4.2.1). The area is now widely known as the FFA, or fusiform face area. Many experimental settings showing the correlation of the FFA with face experience have involved binocular rivalry: cases of dichoptic presentation, where subjects report experiencing either the left-eye content or the right-eye content—say, a house or a face (e.g. Blake et al. 2014; cfr. also Ch. 5, §4.2.2). Now, when subjects report seeing a face, a much stronger activation is observed in the FFA, leading to the suggestion that this area may be specialized for face experience.

No one thinks that activation of the FFA alone is sufficient for producing a face experience. A number of other processes and supporting factors are required for a subject to experience a face (cfr. Ch. 5, §3.1, §4). Yet, the question is: given that all these additional processes occur, what is the FFA responsible for? Being phenomenally conscious of a face, or having access to the experience of a face? According to Block, there might be some evidence that the FFA—and this is merely one example among others—is a neural correlate of phenomenal consciousness specifically. Here is how the story goes. Some patients, in particular those affected by visuo-spatial extinction, are perfectly able to see a single object in their visual field. However, if two objects are presented, the patients claim that they can only identify the one on the right side of their visual field, and do not see the object on the left side (Aimola Davies 2004). Suppose that a subject is presented with an ordinary object on the right side, and a face on the left side. Now, since patients suffering from visuo-spatial extinction report not being able to see the items presented on the left—in this case, a face—we would naturally expect to detect no activity in the FFA. Yet, as Rees has shown in a series of experiments (Rees et al. 2000) on a patient known as 'GK', the FFA seems to light up even when a face is presented in the invisible half of GK's visual field.

As Block acknowledges (2008, p. 291), this bizarre result may have multiple explanations. For example, it may be the case that the FFA is not alone responsible for face experiences (a possibility I briefly discuss in Ch. 5, §4.2.1, ft. 16). However, the solution Block prefers is that, although the FFA is the «core neural base» for face experience, it is so only in the phenomenal sense of consciousness, and not in the access sense. In other words, what happens in cases like that of patient GK is that a brain lesion has damaged the machinery responsible for a-consciousness, rather than that responsible for p-consciousness. On this interpretation, therefore, GK does have p-consciousness of a face, but he cannot report it because that information does not reach the machinery of the higher cognitive areas16. Hence, phenomenal and access consciousness would have two distinct neural correlates and would be two distinct phenomena.

2.3 Representational Content and Consciousness

2.3.1 Intentionalism

States of seeing have a presentational character, i.e. they are descriptive or representational states and they are conscious. This leads us to the following question: What is the relation between content and consciousness? Most representationalists have claimed that consciousness is somehow related to content. This relation can be one of supervenience (nomological or not), or assume some stronger form, like identity. I call the position according to which the phenomenal character of a subject's perceptual state depends completely or partly on its content intentionalism. This position can be articulated in several ways, for example by claiming that phenomenology is just identical with a particular kind of content, or that it supervenes on content when some conditions are met. Notice that representationalism—as I have defined it—does not entail intentionalism. One can be a representationalist about mental states in general, whilst claiming at the same time that phenomenology does not exclusively depend on content17.

16 In Ch. 5, §4.2.1-2, I will briefly return to this example, and show that my account offers a straightforward interpretation that does not require a distinction between a- and p-consciousness.

Most contemporary analytic philosophers endorse a form of separatism. Separatism is the claim that consciousness and intentionality are two distinct phenomena. Separatism is the current orthodoxy among philosophers of mind and finds wide support in the cognitive sciences: a mental representation does not need to be conscious; it is only in virtue of something else, an additional ingredient, that that representation becomes conscious. Separatism is opposed to inseparatism, the claim that consciousness and intentionality cannot be separated, but this option has largely slipped from prominence in contemporary debates18.

2.3.2 Varieties of Intentionalism

Schlicht (2011) contrasts two groups of theories of consciousness: physicalist and functionalist theories (cfr. also Rose 2006). The former seek to provide thoroughly empirical solutions to the problem of consciousness, such as the 40Hz oscillations hypothesis (e.g. Crick & Koch 1990; Engel & Singer 2001) or recurrent processing (Lamme 2006) (cfr. also Ch. 5). The latter seek to characterize conscious states in terms of representations with specific functional profiles. As it stands, I think that the dichotomy is only apparent, as most theories of the latter group can be seen as providing only abstract models (cfr. Ch. 5, §1) of the relation between consciousness and representations, and such models may be logically compatible with one or more empirical hypotheses of the physicalist camp19. Consequently, I will here briefly sketch out the major intentionalist theories of consciousness. According to some intentionalist theories, consciousness is a property of some mental states, whereas according to others, consciousness is a specific kind of content. Another way to map the conceptual geography of intentionalism is the distinction between same-order and higher-order theories. But for expository reasons, I settle for the widely accepted distinction between weak and strong intentionalism.

17 A case in point is Noë's (2002) claim that although perceptual states have representational content, their phenomenology is constituted by the active exercise of our sensorimotor capacities (p. 67) (cfr. Ch. 6). Other philosophers posit irreducible qualitative properties, sometimes called "qualia," that either totally (e.g. Block 2010) or in part (e.g. Shoemaker 1990) determine the phenomenal character of a state of seeing. 18 A further distinction can be drawn between intentionalism and prioritism (cfr. Pautz 2013). The former is the view that consciousness is grounded in intentionality, whereas the latter is the view that intentionality is grounded in consciousness. To the extent that both accept representationalism, they diverge only about how to account for p-consciousness. Since in this work I mainly focus on the content of states of seeing, I take my views to be broadly compatible with both intentionalism and prioritism. I will however assume intentionalism, as it is the mainstream version of representationalism. 19 In addition, Schlicht's (2011) list can be supplemented with the group of cognitive (McGovern & Baars 2007) and computational theories of consciousness (Sun & Franklin 2007).

Weak intentionalism is sometimes defined as the claim that «phenomenal experiences (of a given class) always have representational content» (Chalmers 2010b, p. 344). As Chalmers observes, this claim is rarely denied, and as it stands it is broadly compatible with most standpoints about perceptual experience. Crane (2001, pp. 83-84) gives several definitions of weak intentionalism, among them: «all mental states are intentional, but some have non-intentional conscious properties or qualia». Qualia are supposed to be intrinsic properties of experience, properties that in many (but not all) philosophical accounts are deemed irreducible to representational properties. Crane's definition assumes that qualia are higher-order properties, or properties of properties. Exactly in virtue of what some representational properties have qualia depends on a number of further assumptions about the ontology of p-consciousness. Another definition of weak intentionalism put forward by Crane is that not every phenomenal difference is mirrored by a representational difference (p. 84). Weak intentionalism, as I understand it, differs slightly from both Chalmers' and Crane's use. Weak intentionalism will here be defined as the claim that although consciousness relates to representational content, it is not reducible to it; i.e. weak intentionalism is non-reductive intentionalism.

One way to spell out weak intentionalism is by means of the notion of supervenience (Kim 1993). P-consciousness supervenes on representational content if there can be no variation in p-consciousness without a variation in content. The supervenience relation may be one of nomological supervenience (Chalmers 1996). On this view, p-consciousness supervenes on content in virtue of some law of nature that fixes the relation between p-consciousness and content. Weak intentionalism can be formulated in many ways. One way, following Byrne, has it that:

For any two possible experiences e and e*, if they differ in phenomenal character, then they differ in content. (Byrne 2001, p. 217).

Fish calls this the "Mirroring Thesis" (2010, p. 67) (Seager & Bourget 2007 express a similar view called the "exhaustion thesis"). The mirroring thesis is a biconditional: every phenomenological variation necessitates a variation in content, and every variation in content necessitates a variation in phenomenology. It is therefore possible to read off the content of a state of seeing from its phenomenal character and, vice versa, to read off the phenomenal character from the state's content. We can also cash out the mirroring thesis in a mathematical form and say that, given the two domains of a state's contents and of its phenomenal characters, there is a bijective function such that every element of content is associated with one and only one element of phenomenology, and vice versa. If structure is included, the mirroring thesis can be described as a form of morphism—or better, an isomorphism—between the two domains.
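For readability, the biconditional and its set-theoretic gloss can be written out explicitly. The following is only a sketch; the function symbols cont and phen and the sets C and P are introduced here purely for illustration and carry no theoretical weight beyond the preceding paragraph:

\[
\forall e, e^{*}: \quad \mathrm{cont}(e) = \mathrm{cont}(e^{*}) \iff \mathrm{phen}(e) = \mathrm{phen}(e^{*}).
\]

Writing \(C\) for the set of contents and \(P\) for the set of phenomenal characters of possible experiences, the thesis then induces a mapping

\[
f : C \to P, \qquad f(\mathrm{cont}(e)) = \mathrm{phen}(e),
\]

which is well defined and bijective precisely because of the biconditional above.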


Not every weak intentionalist accepts the mirroring thesis. Some philosophers argue that whilst p-consciousness is at least partially dependent on content, not every phenomenological variation is mirrored by a variation in content. Chalmers, for example, states that «the most plausible potential cases of phenomenally distinct visual experiences with the same representational content involve differences in attention» (2010a, p. 348), although he does not delve deeper into this proposal. Searle is another defender of this variety of weak intentionalism. On his account, a subject may have two phenomenologically identical experiences that have different conditions of accuracy because each experience is «self-referential» (1983, p. 50). Searle's example bears on the problem of the particularity of perceptual experience (cfr. Ch. 4, §1; Ch. 7, §2.1), i.e. the problem of explaining how phenomenology seemingly informs us about particulars (cfr. Schellenberg 2016). Searle's example goes as follows. Suppose two identical twins have type-identical visual experiences of two different but type-identical cars at the same time in type-identical background conditions. According to Searle, the twins will enjoy the same, type-identical phenomenology, but the conditions of accuracy of their states of seeing would be different, since they are of two numerically distinct cars. Hence, he concludes: «Same phenomenology; different contents» (Searle 1983, p. 50). Both Chalmers and Searle are intentionalists, i.e. they believe that there is a close relation between content and phenomenal character, but they both deny that the latter is uniquely determined by content20.

Let us now turn to strong intentionalism. I call "strong intentionalism" every view that somehow reduces p-consciousness to content (cfr. Seager & Bourget 2007). There are two main groups of theories: same-order theorists maintain that states of seeing are conscious in virtue of their possessing a particular kind of representational content or functional profile; higher-order theorists, on the contrary, maintain that states of seeing are conscious thanks to some higher-order state that is directed at the first-order state. I will consider them in this order.

Perhaps the best-known examples of first-order theories are Michael Tye's PANIC theory (1995, 2000) and Dretske's teleological representationalism (1995, pp. 65-95). I will focus exclusively on Tye's account. According to Chalmers and Byrne, consciousness is a property that some mental states have: «[…] the phenomenal character of an experience is a property of the experience» (Byrne 2001, p. 201). Tye rejects this view: «[…] the phenomenal character itself is not a quality of your experience to which you have direct access» (2000, p. 47), and «Phenomenal content […] is not a feature of any of the representations occurring within the sensory modules» (1995, p. 137). Tye contends that a mental representation is conscious only when it meets some specific criteria, namely, if mental representations are Poised, Abstract, Non-Conceptual, Intentional Contents. "Intentional content" requires no further clarification: it simply refers to the fact that such states exhibit intentionality or aboutness. That states must be "poised" means simply that their contents must «[…] attach to the […] maplike output representations of the relevant sensory modules and stand ready and in position to make a direct impact on the belief/desire system» (ibid., p. 138). The content is furthermore "abstract," meaning that no particular concrete object need «enter into these contents», since experiences caused by different material objects can look and feel phenomenally exactly alike. (Again, here Tye's remarks bear on the issue of the particularity of perception, cfr. Ch. 7, §2.1.) Finally, the content must be "non-conceptual," meaning that the features (cfr. Ch. 4, §1) entering into the contents of states of seeing need not «be ones for which their subjects possess matching concepts» (ibid., p. 139)21.

20 Searle develops his intentional account of perceptual experience by adding further ingredients to the content. More specifically, he thinks that the Network of intentional states as well as the Background of non-representational mental capacities affect perception and therefore its conditions of accuracy (Searle 1983, p. 54ff). Another putative case of phenomenology not uniquely determined by content is the inverted spectrum scenario, where the representation of colors does not match their subjective experience (e.g. Fish 2010, pp. 70-71).

In contrast with same-order theories, higher-order theories claim that the content of some mental states becomes conscious thanks to a further or higher-order state that is directed at that content (Carruthers 2007). An initial distinction can be made between higher-order thought theories and the inner-sense theory, or higher-order perception theory. According to the latter, the content of a state of seeing is conscious thanks to a higher-order perceptual state or "inner perception" that scans or is directed at that content (e.g. Lycan 1996). In contrast with the inner-sense theory, higher-order thought theories come in at least two varieties: actualist higher-order thought theories and dispositionalist higher-order thought theories. According to the former, the contents of states of seeing are made available to the conceptual system, which classifies them and forms judgments. The higher-order awareness is conceptual or propositional in nature, so that, roughly, my state of seeing a red apple is conscious in virtue of a higher-order state that makes me come to believe that I am undergoing an experience of a red apple. For some actualist higher-order theorists, a merely occurrent higher-order belief is sufficient for phenomenal consciousness (e.g. Rosenthal 1997), whereas for other theorists the belief must be justified and hence count as knowledge (e.g. Gennaro 1996; at least, if knowledge is a justified belief). The dispositionalist higher-order thought theory assumes instead that mental states have a dual analog non-conceptual content, both first-order and higher-order, such that a content, in addition to the analog non-conceptual content "red apple" (first-order), also has the analog non-conceptual content "seems a red apple" or "experience of a red apple" (higher-order). In other words, such contents present themselves to us «via their higher-order analog contents» and at the same time they present «properties of the world or of our own bodies» (Carruthers 2007, p. 283).

21 One of the most debated issues in the philosophy of mind is the problem of conceptual content. Roughly, the question is whether the contents of mental representations are constituted by concepts or not. The issue was famously raised by Evans (1982, §§5.1-2, §7.4), who claimed that the operations of the information-gathering system are less sophisticated than those connected with the formation of belief states, which are tied to the notion of reason (ibid., p. 124). McDowell (1994) famously argued, contra Evans, that perception is indeed conceptual, in order to include it within the "space of reasons" (cfr. also Heck 2000). I will have very little to say on this issue, partly because much of my account of visual objects is largely compatible with some form of conceptualism (cfr. Ch. 4, 7; also Pylyshyn 2007), and partly because any work on this issue must preliminarily clarify the nature of concepts. Positions on this issue differ significantly (e.g. Newen & Bartels 2007): some philosophers take concepts to be linguistic or language-like entities (arguably Textor 2009), demonstrative concepts (McDowell 1994), or the exercise of sensorimotor skills (Noë 2004, chapter 6; 2012, chapter 3).

This brief overview of the varieties of intentionalism is not meant to be exhaustive. I have omitted several options, such as Brown's HOROR theory (2014) or Van Gulick's HOGS theory (2004, 2006). I have also glossed over issues such as the divide between phenomenal externalism and phenomenal internalism, i.e. the claims according to which consciousness is either a relation between the perceiver and the environment, or an internally generated state (Veldeman 2009). The take-home lesson of this Section is that, if we accept separatism—as most current philosophers do—many options remain on the table as to how to spell out the relation between content and consciousness. Since I wish to remain neutral on the issue of phenomenal consciousness in this work, the theory that I will develop in the next Chapters will largely be compatible with multiple options, though I will make appropriate remarks whenever my account of intentional mechanisms (Ch. 5) seems to conflict with some extant philosophical theory of consciousness.

3. The Role of Consciousness in this Work

In the first Section, I have clarified the notion of a state of seeing and its relation to visual perception and other conscious states. In the second Section, I have elaborated on the relation between content and consciousness. In this final Section, I will first explain the role of consciousness in this work (§3.1), and then offer an overview of the Phenomenological Domain (§3.2).

3.1 Consciousness and PI

In this work I will not espouse a particular theory of consciousness, nor do I set out to explain consciousness. Instead, consciousness is used here to specify the relevant contents, whose neural correlates form the Neural Domain of PI. To appreciate this, let us recall that in Ch. 2, §1.2 I claimed that PI should be understood as part and parcel of the program of innere Psychophysik, i.e. it is a function between the "psychological" and the "neural." Furthermore, I have specified that the former domain is restricted to the domain of conscious contents, and in particular to the contents of states of seeing visual objects (cfr. also Ch. 1).


Now, states of seeing visual objects are individuated in virtue of their being conscious. In contrast with other descriptive or representational states, states of seeing have a presentational character, i.e. they are conscious. There is something it is like for the subject to see this or that object. On the (uncontroversial) assumption that states of seeing have some corresponding neural correlates, restricting the Phenomenological Domain to a particular sub-set of mental states has the effect of restricting the scope of the Neural Domain. I will illustrate my point with the aid of Fig. 8.

[Fig. 8: set diagram showing the nested domains M, C, V, Ψ and their neural counterparts N, Cn, Vn, connected by the function i; see the caption below.]

Fig. 8: Relationship between different domains. M is the set of all mental states, whilst N is its corresponding neural correlate. C, V and Ψ are, respectively, the set of all conscious states at a time t, the conscious visual field, and the state of seeing a visual object. Each of these domains has a corresponding neural counterpart.

Let me start from M, the set of all mental states. Arguably, M has a corresponding set N of all neural states correlated with M. As we have seen, M is an extremely heterogeneous set that encompasses very different kinds of states, including non-conscious and unconscious mental states, such as the mental states postulated by psychodynamics. From M, we identify the proper subset C of all conscious mental states. Correspondingly, we can extract from N a proper subset Cn of all neural states somehow correlated with C. Notice that just as C is a proper subset of M, so Cn is a proper subset of N. The visual field V is a proper subset of C; correspondingly, we can identify a set of neural states Vn that is a proper subset of Cn. From V we finally extract the set of states of seeing visual objects, that is, our Phenomenological Domain Ψ. The Phenomenological Domain has, again, a corresponding image—in the mathematical sense—that is a proper subset of Vn, namely the Neural Domain ϕ.
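The chain of inclusions just described can be summarized compactly. This is only a restatement of the text above in set-theoretic notation, not an additional claim:

\[
\Psi \subset V \subset C \subset M, \qquad \phi \subset V_{n} \subset C_{n} \subset N,
\]

where each set in the left-hand chain is paired with its neural counterpart at the same position in the right-hand chain.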


As I have said, the thesis under examination in this work is whether it is heuristically useful to identify a homomorphic bijective function between ϕ and Ψ, i.e. a psychoneural isomorphism i. Importantly, my set-theoretic vocabulary is neutral regarding the nature of both the neural and the mental domains. So, for example, to say that N is the set of the neural or physical states correlated with M is silent about the nature of such neural or physical states. This also implies that different neural states need not be locally connected or even be of the same kind.

To reiterate, the Phenomenological Domain consists of the contents of states of seeing directed at visual objects, where "content" is understood in a technical sense, as the conditions of accuracy of the state of seeing. This means that I will not discuss the relation between different states of seeing visual objects. Instead, I will analyze the structure of visual objects and explore whether such a structure is isomorphic in any relevant sense with the underlying Neural Domain. The Phenomenological Domain is the carrier set of the elements that form the relational structure. Following the convention introduced in Ch. 2, §1, the lightface Ψ refers to the Phenomenological Domain understood as a carrier set, whereas the boldface Ψ refers to the relational structure of the Phenomenological Domain. In order to elucidate the nature of this relational structure, I broach in the next Chapter the issue of what elements constitute visual objects.

3.2 An Overview of the Phenomenological Domain

Before I conclude this Section, it will be helpful to pause and consolidate what has been said so far. As I have explained in Ch. 2, §1, in order to talk about PI, we need to meet the requirements defined by the "Character of Isomorphism." The Character of Isomorphism dictates that we need to identify two domains, show that they contain elements that stand in some relations, and finally show that there is a function that completely maps the relations of one domain onto those of the other. In this Chapter, I have clarified the nature of the Phenomenological Domain. I will return to the Neural Domain in Part III (Ch. 5-6).
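For definiteness, these three requirements can be compressed into the standard definition of an isomorphism between relational structures. The following is only a schematic restatement; the relation symbols and the boldface structure symbols are placeholders introduced for illustration:

\[
\text{Given } \boldsymbol{\Psi} = \langle \Psi, R_{1}, \dots, R_{k} \rangle \text{ and } \boldsymbol{\Phi} = \langle \phi, S_{1}, \dots, S_{k} \rangle,
\]
a function \(i : \Psi \to \phi\) is an isomorphism iff \(i\) is a bijection and, for every relation index \(j\) (with \(n\) the arity of \(R_{j}\)) and all elements \(x_{1}, \dots, x_{n} \in \Psi\),
\[
R_{j}(x_{1}, \dots, x_{n}) \iff S_{j}\big(i(x_{1}), \dots, i(x_{n})\big).
\]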

The Phenomenological Domain under examination in this work belongs to the broader domain of the Phenomenal Unity of Consciousness. Here, I will focus only on states of seeing. States of seeing have been defined as follows:

SoS = df A mental state that has visual presentational character.

where "presentational character" means that the state has a conscious content. States of seeing are part of the descriptive subsystem of visual perception, which in turn is closely linked to the sensory and the deictic subsystems. The "content" of a state of seeing is defined as its conditions of accuracy, i.e. the conditions under which the state is an accurate representation of the external object. The content of a state of seeing is our Phenomenological Domain Ψ, and, as I have argued, consciousness merely has the role of helping us detect the relevant content. This urges us to tackle the following issue: What kinds of elements, then, compose the contents of states of seeing?

4

FACTS, SENSORY INDIVIDUALS, AND SENSORY REFERENCE

By now, we know what states of seeing are, but we must still specify what elements compose this domain. In compliance with the "Character of Isomorphism," we must also specify the relational structure of these elements. As I said in Ch. 1, §3, in this work, I will exclusively focus on visual objects. This leads us to the question: What are visual objects? In the literature, there are two mutually inconsistent accounts of visual objects. I will call them factualism and the bundle view (§1.1). According to the former, what we see are facts—i.e. actual states of affairs. According to the latter, visual objects are bundles of compresent properties.

In this Chapter, I argue for the bundle view. My argument has a simple form: I construct an exclusive disjunction—either visual objects are facts or they are bundles of properties, but not both—and argue that they cannot be facts; hence they are bundles of properties. Arguing for the negation of the first disjunct will, however, require some work. In contrast with leading philosophical works, such as those of Armstrong, McDowell, Tye, and Textor, I will not settle the issue of the ontology of visual objects on merely phenomenological grounds, or by means of conceptual or semantic analysis. The argument instead requires us to take into consideration current scientific accounts of the problem of sensory reference and object tracking. This will involve making a quick detour through the sensory subsystem (cfr. Ch. 3, §1.1).

I take as my foil Fish's (2009) argument for factualism: «[…] I claim that we perceive facts (metaphysically understood), such as the fact of a's being F» (p. 22). Fish's argument for factualism (FT) rests on two claims: a phenomenological claim and a scientific claim. The former is that we see visual properties as properties of some object. The latter purports to give scientific support to FT through an interpretation of some experiments on object perception (Blaser et al. 2000; Cohen 2004; Matthen 2005). More specifically, these studies show that visual sensory individuals are objects, rather than places. This feature makes Fish's claim particularly interesting, as it would provide scientific support for factualism.

The Chapter has the following structure. In §1 I introduce the problem of the ontology of visual objects and the concept of "fact," and define the notion of "factualism." In §2 I briefly discuss the work of several factualist philosophers, and finally identify my target in William Fish's argument for factualism. What makes Fish's argument so interesting is that it apparently justifies factualism on the ground that it is supported by experimental evidence on object perception. I will reconstruct the scientific background of Fish's argument in §3. This will provide the frame within which I articulate my rebuttal in §4. Finally, I will discuss some consequences of my conclusion in §5.

1. Seeing and the Ontology of Visual Objects

1.1 States of Seeing and Visual Properties

According to some philosophers, we see actual states of affairs or facts (e.g. Fish 2009; McDowell 1994; Johnston 2006). Facts are entities composed of two kinds of constituents that stand in a non-mereological relation: particulars and properties or relations (§1.2). I will call factualism the thesis according to which we see facts. On this view, we either visually represent facts (representationalism), or we are directly visually acquainted with facts (naïve realism). Factualism has several deep implications for any philosophical theory of perception. Firstly, since facts have a non-mereological composition, it implies that the objects we see do not have an internal mereological structure (though mereological relations may be construed as external relations between different facts; cfr. Armstrong 1997). Secondly, if we take facts as the «basic units» of visual perception (Fish 2009, p. 52), such units will then be individuated by two kinds of elements: properties (or relations) and particulars. Finally, although facts are worldly items, their structure «mimic[s]» (Johnston 2006, p. 290) the structure of judgments like "a's being F" (§1.2); hence espousing factualism entails that visual objects have a sentence-like structure (Armstrong 1997, p. 96; Textor 2009; cfr. §5.3).

Since factualism is the thesis according to which we see facts, I will elaborate on the concept of "seeing" in this paragraph, and on the concept of "fact" in the next one. I have already elucidated the notion of a state of seeing (Ch. 3). Two caveats are in order. The first is that I only focus on cases of genuine or veridical perception, in contrast with hallucinations. I leave it to defenders of factualism to explain whether we might hallucinate facts or not. The second caveat is that, although in this work I explicitly espouse representationalism and intentionalism about consciousness, much of what I am going to say in this Chapter is compatible with both naïve realism and representationalism (cfr. Ch. 3, §1.3).

The concept of "seeing" or "state of seeing" has already been defined (Ch. 3, §1): a mental state that has a conscious visual presentational character. A mental state has presentational character when it conveys or presents something consciously to the subject. However, in the previous Chapter, I did not specify what exactly is presented to the subject, and hence what determines the conditions of accuracy of perceptual experience. I will call that which fixes the conditions of accuracy of a state of seeing the "phenomenologically manifest." Determining what is phenomenologically manifest is one of the central tasks of any phenomenology, understood as the study of what is phenomenologically manifest.

That states of seeing make manifest to the perceiver a cluster of visual properties or "features"—as vision scientists usually call them (e.g. Wolfe 1998)—is uncontroversial. Typical examples of these properties include colors and forms: we see things as having colors and shapes. In seeing this car in front of me, I see that it has a color, say red, a particular shape, and so on. Visual properties are constitutive of states of seeing: nothing can be seen in the absence of visual properties (Siegel 2010a, p. 45). This can be formulated as the following "Phenomenal Principle" (PP):

PP: States of seeing make manifest a cluster of visual properties as instantiated1.

(I assume that states of seeing are states of a perceiver; I will, however, omit reference to the subject in PP and OP; see below.) No philosopher or psychologist would take PP as an exhaustive description of states of seeing. From an everyday perspective, our visual experience can be broken up into a number of objects (Rosch et al. 1976). Ask any observer what she is seeing, and she will mention chairs, cars, persons, etc. Following the psychological literature, I call these items "visual objects" (cfr. Ch. 1, §3.2). Visual objects can be defined as coherent unities of visual properties (Feldman 2003), or «elements in the visual scene organized by Gestalt factors into a coherent unit» (Kimchi et al. 2016, p. 35). Typical examples of visual objects include what Austin called «moderate-sized specimens of dry goods» (1962, p. 8), like books or chairs, but also perceptual ephemera such as waves, rainbows, and shadows (e.g. Casati 2015; Cohen 2004). It is uncontroversial that we see visual objects in the sense specified. We can thus put forward the following "Object Principle" (OP):

OP: States of seeing make manifest visual objects as instantiated.

What is controversial about OP is the ontological and metaphysical status of visual objects. A phenomenological theory worth its salt should clarify what visual objects are. The development of such a theory must meet some requirements. One requirement is the particularity of visual objects (e.g. Aristotle 1933, pp. 5-7; Burge 2010, p. 84; Schellenberg 2010, 2016; Soteriou 2000) (cfr. also Ch. 7, §2). This requirement captures the idea that states of seeing make manifest unrepeatable entities within more or less clear spatio-temporal coordinates. If I turn my head to the desk, I will see this copy of Stendhal's The Charterhouse of Parma, having this particular shade of red and this particular form (cfr. also Moore 1953, p. 30). The particularity of perception comes in two variants. On the one hand, relational particularity states that particular items in the world—whatever they are—trigger our sensory and perceptual systems: it is the book on my desk that is causing my state of seeing it. On the other hand, phenomenological particularity states that we seemingly have a conscious perception as of particular items in the environment. (The distinction is due to Schellenberg 2016.) A second requirement is that visual objects have a spatial-mereological structure (e.g. Mulligan 1999; Pinna & Deiana 2015). This character emerges clearly in the scientific literature, where visual objects are often said to be "complex wholes" (Treisman 1986) or «coherent, unified wholes» (Di Lollo 2012, p. 317). Neither of these requirements is strong enough to uniquely identify the ontology of visual objects, as different and mutually inconsistent theories of objects can meet both of them.

1 Siegel (2010a, p. 71) puts forward a 'Property View', but she explicitly links it with the accuracy conditions of experience. In contrast with Siegel, and although I assume a representationalist framework in this work, I do not think that the content view can be derived only from the fact that we see properties.

In the remainder of this Chapter, I will use the terms "substance" and "object" as ontologically neutral between different theories. If we accept that there are properties—universals or tropes (cfr. Ch. 7)2—there are two groups of theories of objects: substance-attribute theories and bundle theories. According to the former, an object is an entity made of elements that belong to two different ontological categories: a substratum or "property bearer" and one or more properties (e.g. Armstrong 1997; Martin 1980). According to bundle theories, an object is fully analyzed by its properties structured in a compresence relation. Such properties might be universals (Russell 1940), i.e. entities which are capable of being instantiated at multiple places at the same time, or tropes (e.g. Campbell 1990; Ehring 2011, pp. 98-135; Maurin 2002; Robb 2005; Simons 1994; Stout 1921; Williams 1953), i.e. abstract particulars logically incapable of multiple instantiation (cfr. Ch. 7, §1.3)3. Facts are a paradigmatic example of the former view, whilst friends of tropes usually adopt a bundle view.

The psychological literature presents us with a striking lack of agreement on the question of what a visual object is. A first glance at the debate (for an excellent overview, see Skrzypulec 2015) will identify two camps that basically mirror the foregoing distinction between substance-attribute theories and bundle theories. Psychological models of vision focused on perceptual organization (e.g. Palmer 1999a, pp. 255-309) often define visual objects as bundles of features. Consider the following examples: Blaser et al. say that visual objects are «composed of constellation of visual features» (2000, p. 196); Pinna & Deiana (2015, p. 280) define a visual object as a «structured holder […] an organized set of multiple properties, some of which are explicit, some other implicit, some become explicit or, on the contrary, implicit or invisible after a while». These models usually focus on stationary phenomena (Skrzypulec 2015, p. 29), whereas models of visual objects that seek to explain their persistence and tracking within a dynamic context often embrace a substance-attribute ontology. This is, for example, the case of studies on tracking mechanisms—such as Pylyshyn's (2007) FINST theory (see also §3.3). In these studies, reference to a substratum is justified in order to account for the tracking of visual objects in spite of featural change.

2 It is perhaps worth emphasizing that, in the philosophical literature, the term "property" is often taken as a synonym of "universal." However, in this work I prefer a more neutral connotation: properties can either be universals or tropes. 3 Bundle theorists can spell out the relation between objects and properties in different ways. An object might just be identical with the bundle or supervene on it. Eliminativists argue that there are no objects, but only bundles of properties. All these options suggest that an object is somehow completely described by its properties.

The two theories of visual objects provide a different articulation of the relation between PP and OP. Defenders of the bundle view will say that visual objects are structured wholes of properties. Defenders of substance-attribute theories will say that visual objects are composed of both a substratum and visual properties.

1.2 Facts

In the philosophical literature, "fact" is a term of art (Betti 2015; Mulligan & Correia 2013; Olson 1987). Following Betti (2015), we can single out two concepts of facts: propositional facts and compositional facts. Propositional facts are true propositions (e.g. Frege 1918). It is a matter of debate whether, in the case of seeing, propositional facts express a genuine perceptual state or rather a form of knowledge (e.g. Williamson 2000, pp. 33-41). Some philosophers maintain, however, that we see facts in this sense. A case in point would be Dretske, who argued that there is a sense of seeing that requires a factive complement. He called this "seeing that" or "epistemic seeing" (Dretske 1969, 1979, 1993, 2010). In his words: «Facts are what we express in making true statements about things» (1993, p. 264). Compositional facts are not propositions but building blocks of reality: they are actual states of affairs (e.g. Armstrong 1997; Reicher 2009). In what follows, I will exclusively focus on compositional facts.

Understood in this sense, facts are sometimes contrasted with states of affairs, which are sometimes conceived as merely possible entities (Meixner 2009, p. 57). We can classify facts into two types (Armstrong 1997, pp. 28-29): they can either be particulars exemplifying properties—such as "a's being F"—or two particulars exemplifying a relation—as in "a's having R to b" (Mulligan et al. 1984). In the remainder of this Chapter, I will mainly focus on facts of the former type (cfr. Ch. 7, §2.2). The particulars of facts play many roles: they individuate objects if the properties are universals; they combine multiple properties into one object; they provide a basis for the existence of universals (at least within Armstrong's ontology); and they preserve the object's unity through property change. Sometimes, the particulars are called "substances" (Garcia 2014) or "objects" (Fish 2009). Since, as I specified in the foregoing, I take the terms "object" and "substance" as neutral between different theories, I will prefer the term "particular" in the case of facts.

Facts understood in this way are a sui generis kind of entity: they form a non-mereological unity over and above their simple constituents (Armstrong 1989, p. 88). The properties and relations that constitute a fact are usually understood as universals, at least according to the standard notion of facts (Armstrong 1997). This leads us to the problem of understanding how properties and particulars are glued together in a fact. Armstrong rejects a relational interpretation of the property-particular tie, on the ground that this would generate an infinite regress. If a property is tied to the object by an instantiation relation R, then the instantiation relation itself would need to be instantiated by another relation R2, and so on: in the philosophical literature, this is known as Bradley's regress. Armstrong contends that the regress can be stopped if there is some non-relational tie that holds between particulars and properties; he calls this non-relational relation "exemplification." There is, in other words, an intimate (and «mysterious», Devitt 1997, p. 98) connection between particulars and their properties. Unsurprisingly, it is precisely this odd and apparently preposterous notion of a non-relational relation, and the problem of accounting for the unity of facts, that has courted the most controversy (Betti 2015; Vallicella 2000). A fact is a unity made up of two kinds of constituents. It is only by means of a process of intellectual abstraction (Armstrong 1997, p. 29) that we might obtain a particular without its properties—what is called a "thin particular"—whereas particulars clothed with their properties are called "thick," i.e. facts (Armstrong 1989, p. 88; 1997, pp. 123-126; cfr. §4).

2. Factualism and Fish's Argument

2.1 Factualism

A more precise formulation of factualism (FT) can now be given:

FT: States of seeing make manifest complex entities, i.e. facts, whose ontological constituents are particulars and properties.

Factualism articulates the relation between PP and OP in the following way: visual objects, i.e. the coherent clusters of visual properties, are facts. Since facts are composed of two kinds of entities, accepting factualism entails that the domain of the phenomenologically manifest embraces both visual properties and particulars. We can clarify this with an example: if FT is true, seeing a book being red means seeing a unity composed of a particular (the book) and a property (being red).

At first glance, FT offers a plausible answer to the problem of visual objects. Facts meet the two phenomenological requirements (§1.1). Armstrong contends that facts are particulars—what he pompously calls the "victory of particularity" (1978, pp. 115-116; 1997, pp. 126ff)—and accordingly facts can account for the particularity of perception, both relational and phenomenological (cfr. Ch. 7, §2). Moreover, although facts have a non-mereological structure, and their properties cannot stand in spatial relations, mereological relations can still obtain as external relations between facts (Armstrong 1997, pp. 119-123). This means that even simple visual objects like a red apple would actually be composed of sets of multiple facts tied together by means of some spatial-mereotopological relation. Furthermore, in virtue of the non-relational tie (cfr. §1.2), none of these facts' properties could be "related" to their particulars. Factualism is by no means the only alternative, and a trope bundle theory, to mention but one example, coheres equally well with the two phenomenological requirements. Tropes are unrepeatable qualities which, unlike universals, are logically incapable of being in multiple places at the same time. In this sense, tropes can account for the particularity requirement. Moreover, bundles of tropes are often described as mereological sums (e.g. Ehring 2011, p. 98ff). This shows that we are not forced to accept factualism on the basis of the phenomenological requirements alone. Given that there are multiple accounts of visual objects, friends of factualism must ground their standpoint with a suitable argument.

Many philosophers side with factualism, but very few, if any, have actually argued for this position. David Armstrong, for example, states that «[…] I think that they [facts] are the true objects of perception» (2009, p. 40), but he hardly provides any reason to think that it must be so, besides laconically commenting that perception is an «information-gathering apparatus and it tells us about the current state of our environment and our body» (ibid., pp. 40-41). Of course, Armstrong's words must be embedded within his broader project of constructing a fact ontology; after all, according to him, we live in a world of states of affairs. But notice that there is no obvious reason for thinking that, even if we actually live in a world of states of affairs, we must be able to see facts. Tye and Loux have suggested rather flimsy arguments for factualism. Michael Tye states that he has been «transfixed by the intense blue of the Pacific Ocean» (1992, p. 160), whereas Michael Loux says that when focusing on «the colour of the Taj Mahal, I am not only thinking of pinkness in general, but of that unique pinkness, the pinkness that only Taj Mahal has» (2002, p. 86). William Fish interprets these passages as suggesting that when we see objects, we see both the objects and their specific property instances (2009, p. 22). Tye's and Loux's claims apparently have no other motivation than mere phenomenological or introspective observation, and while this may appear plausible on a first reading, it rests on dubious evidential support (see below).

Arguably, another defender of FT is McDowell:

That things are thus and so is the conceptual content of an experience, but if the subject is not misled, that very same thing, that things are thus and so, is also a perceptible fact, an aspect of the perceptible world (1994, p. 26).

The correct interpretation of McDowell’s commitment to facts is under dispute. The source of the controversy is McDowell’s identity conception of truth. According to the identity conception, a proposition is true iff it is identical with a fact (McDowell 2001). This contrasts with correspondence theories of truth, according to which facts are extralinguistic items that make propositions true, i.e. they are truthmakers. It has been argued (Dodd 1995) that McDowell in his work conflates different versions of the identity theory of truth: a robust theory—according to which facts are conceived as compositional facts—and a modest theory—according to which facts are conceived as Fregean Thoughts, composed of senses rather than of objects and properties, and therefore not part of the world. Suhm et al. (2000), starting from Dodd’s remarks, have criticized McDowell’s conception as ontologically unstable between different notions of facts, rather than between different identity theories of truth. I will not dwell on an exegesis of McDowell’s stance. It suffices to observe that if McDowell’s understanding of facts is

closer to the Fregean reading, then it does not bear on our present concern, since my purpose is to determine the ontology of visual objects. Things, however, are different if we interpret the «perceptible fact» in the Russellian or Tractarian sense: as a compositional fact. But if this is the correct reading, then nothing in McDowell’s passage, or in the rest of the work, supports the claim that facts are perceptible, unless one interprets it as espousing the same thought that underlies the truthmaker argument for facts (Armstrong 1997).

According to the truthmaker argument, facts are introduced into our ontological inventory in order to serve as truthmakers of our propositions, i.e. the things in virtue of which a proposition is true. Within the identity conception of truth, however, there are no truthmakers, as the truth-bearers, the propositions or Fregean Thoughts, are simply identical with the thing itself. This suggests that, if McDowell interprets facts in the compositional sense, he may simply be translating compositional facts into his identity conception. Notice, however, that the truthmaker argument for facts is hardly conclusive evidence of their existence, as shown by Betti (2014, 2015). Strictly speaking, there is no reason why facts should be the unique truthmakers of our propositions, nor even a guarantee that there must be truthmakers at all, given that one could opt for some version of the correspondence theory of truth. Finally, let me observe that even accepting the truthmaker argument for facts does not obviously (or trivially) force us to think that visual objects are facts, for the only way to determine the ontological status of visual objects is to observe more closely the nature of perception and the capacities of our perceptual system.

2.2 William Fish’s Argument for Factualism

Perhaps the most interesting argument for factualism has been put forward by William Fish in his 2009 book: «[…] the basic units that feature in presentational character are not properties and objects simpliciter, but rather object-properties couples» (2009, p. 52). He then calls such couples ‘facts’: «[…] I prefer the more metaphysical term ‘facts’» (ibid.). (Here, the term ‘object’ stands in for ‘particular’.) Fish’s argument for factualism consists of two steps. The first step is phenomenological. Fish mentions Firth’s observation that «[…] the qualities of which we are conscious in perception are […] presented to us […] as the qualities of physical objects» (1965, p. 222; quoted in Fish, 2009, p. 51; cfr. also Shoemaker, 1990, p. 109). Call this the phenomenological claim. In addition, he maintains that experimental studies on object perception support factualism. More specifically, he makes reference to studies on multiple object tracking (Blaser et al., 2000), and to Matthen’s (2005) comments on dynamic feature-object integration: «Mohan Matthen has also argued that certain empirical results are adequately explained only on the assumption that we do not see properties or qualities simpliciter, but rather see objects bearing properties» (Fish 2009, p. 51), and «[…] these empirical considerations are adduced as further support of the idea that the basic constituents of presentational character are not objects or properties per se. Instead, they provide additional

reasons to think that to see a property is to see it as inhering in some object or other» (p. 52). Call this the scientific claim. Fish’s argument can be schematized as follows:

(S1) Phenomenological claim: States of seeing make manifest visual properties attributed to objects.

(S2) Scientific claim: Experimental evidence suggests that we see objects having properties.

(C) FT: States of seeing make manifest facts to the perceiver.

(Recall that Fish uses the term “object” whereas I prefer the term “particular”.) The argument assumes that we see properties (PP), and that we see visual objects (OP). I have not regimented the steps within a rigorous logical structure. It is possible to rearrange the steps, perhaps adding further premises, to construct a formal argument. My purpose, however, is not to attack the logical construction of Fish’s argument. Instead, I will address the argument from a different standpoint. In the remainder of this section, I will briefly discuss S1, showing that Fish’s use of the term “object” in the sense of “facts’ particulars” is merely stipulative. In the next sections (§§3-4), I will show that Fish’s scientific claim is false.

Concerning S1, Fish’s claim is not different from the apparent justification for factualism put forward by Tye and Loux. However, upon closer inspection, it can easily be shown that S1 merely expresses our problem, for it does not determine a specific theory of visual objects, but merely states that there is a relation between PP and OP. In other words, the term “object” in S1 is far too vague to support factualism or any other theory of visual objects (Ayers 2004, p. 255). There are at least two possible readings of “object,” as I have explained in §1.1. On one reading, an object is just a bundle of features. On the alternative reading, an object is a particular in the fact’s sense (§4). Fish seems to take the second reading as obvious. However, visual phenomenology can hardly be taken as an unmistakable source of evidence, as an extensive literature shows (e.g. Dennett, 1991; Schwitzgebel 2011). Moreover, philosophers have found both readings plausible on purely phenomenological grounds. For instance, Mark Textor maintains that: «Seeing x [a visual object] is constituted by seeing features, states or changes of x and additional factors» (2009, p. 141). In other words, there are multiple ways to carve up and describe the phenomenologically manifest. Fish’s use of the term “object” in S1 is therefore merely stipulative.

The fact that there are different ways to describe the phenomenologically manifest brings to the forefront the worry that the different phenomenological readings of S1 might just be a clash between irreducible intuitions. Fish’s argument, however, gives us an interesting perspective on our issue, since the scientific claim provides a non-phenomenological way to examine the ontology of visual objects.


3. Sensory Individuals and Sensory Reference

3.1 Binding and Places

The experimental evidence that, according to Fish, supports FT is drawn from the literature on the problem of feature-object binding. The problem is that of understanding how different visual properties are attached to the same individual (Clark, 2000; Treisman, 1996; cfr. also Jackson’s “many properties problem,” 1977, pp. 64ff). The assumption is precisely that we see visual objects in the sense defined above (§1.1), independently of the more specific question of their ontological character. In other words, visual objects are the explanandum of the feature-object binding problem.

Every philosopher interested in perception should provide an account of the (re-)presentational binding problem, i.e. explain how visual properties are clustered around visual objects. This is quite a different thing from providing a neural solution to the problem, i.e. explaining how the brain actually implements the binding of different features into one single individual (O’Callaghan, 2008; Plate, 2007; Revonsuo, 1999; Tacca, 2010, pp. 57-66; Treisman, 1996). Here I will focus on the theoretical solutions, and not on their actual neural implementation. To bring the problem into sharper focus, consider the following:

(1) S sees something red.
(2) S sees something triangular.
(3) S sees something both red and triangular (a red triangle).

Clark (2000) observes that (3) does not follow from (1) and (2). And the problem becomes even more pressing if we introduce into this scenario yet another object that is, say, blue and circular. How could S then tell that the circle is blue and the triangle is red? To explain how (1) and (2) are bound together to form S’s state of seeing a red triangle, Clark proposes a theory of sensory individuals (the term is due to Cohen, 2004). On this theory, solving the binding problem requires that the extracted distal features be ‘attached’ to one and the same entity, i.e. a sensory individual.
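The logical point can be made explicit with a minimal first-order sketch (my own rendering; Clark does not formalize the point in this way):

% Illustrative notation only: Red and Triangular stand for the two seen features.
\[
\exists x\,\mathrm{Red}(x) \;\wedge\; \exists y\,\mathrm{Triangular}(y) \;\nvDash\; \exists x\,\big(\mathrm{Red}(x) \wedge \mathrm{Triangular}(x)\big)
\]

A scene containing a red circle and a blue triangle satisfies the two premises while falsifying the conclusion; nothing in (1) and (2) alone guarantees that redness and triangularity are co-instantiated by the same individual.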

What are sensory individuals? According to Clark (2000, pp. 164ff), sensory features are predicated of specific places (cfr. also Strawson, 1959). S is seeing redness and triangularity here, and blueness and circularity there. To borrow Evans’ terminology, location in space would provide the «fundamental ground of difference» (1982, p. 107) that attributes the features to the same individual (Clark 2004, pp. 136-144). Binding sensory features to places seems an attractive solution, for it apparently also accounts well for the spatial location of visual objects. Yet Clark’s feature-placing theory is at odds with the experimental evidence (Cohen 2004; Matthen 2004, 2005; Pylyshyn 2007; Siegel 2002; for an overview, see also Nanay 2013, pp. 50-52). As we have seen, in his argument Fish makes reference to two fatal issues for the feature-placing hypothesis: the theory cannot account for the binding of co-located objects (Blaser et al. 2000), and it cannot account for dynamic feature-object binding (Matthen 2005). These two issues ultimately led philosophers and psychologists to think that sensory individuals must be objects, rather than places (Cohen 2004, p. 480).

3.2 Sensory Individuals as Material Objects

I will now briefly discuss the two challenges for the feature-placing theory. I begin with superimposed objects (§3.2.1), and then turn to dynamic feature-object integration (§3.2.2). Fish mentions both experiments as part of his scientific claim.

3.2.1 Superimposed Objects

A problem for Clark’s feature-placing theory is that it is unable to account for feature-object binding when two objects have the same location. This is shown in a series of important experiments by Blaser, Pylyshyn and Holcombe (2000) on multiple object tracking (see also Cohen 2004; Matthen 2005, pp. 278ff; Pylyshyn 2004).

Blaser and collaborators have investigated the visual system’s ability to track distinct visual objects along the same spatio-temporal trajectories. In the experiments, subjects observed two superimposed circular striped “Gabor” patches transparently layered on one another, without noticeable separation in depth (2000, p. 196; see Fig. 9). The Gabors underwent different changes, for example spinning clockwise and then counter-clockwise, or changing saturation, from gray and black stripes to red and black stripes. The featural changes occurred without any change in location, thus testing whether object perception essentially involves the location of features. Blaser and colleagues found that the observers reported that the Gabors were perceptually segregated, in a way similar to figure-ground segmentation. The attended Gabor stood out in the foreground, whereas the distractor Gabor receded into the background. Moreover, the experimenters found that featural attention enhanced processing of the Gabor’s features as a whole. From this, Matthen infers that the observers «were attending to features by attending to the objects to which these features were attributed, and not by attending to the features directly» (2005, p. 281).

Since subjects were able to discriminate two superimposed but distinct Gabor patches, some philosophers conclude that sensory individuals cannot be places (Matthen 2005). The binding of features seems to be object- rather than place-centered.


Fig. 9: Two Gabor patches. In Blaser et al.’s (2000) experiment, the two Gabors were superimposed.

3.2.2 Dynamic Feature-Object Binding

Apparently, the feature-placing hypothesis is also unable to account for dynamic feature-object binding. Matthen (2005, p. 282) states that the perception of change or of motion demands an identity that underlies change, and locations cannot provide such an identity (Siegel 2002). He illustrates this point by means of the φ-phenomenon.

The φ-phenomenon is a paradigmatic example of illusory movement (Dennett 1991, pp. 114ff; Goodman 1978; Wertheimer 1912). In this experiment, a subject observes a screen upon which an image—say, a white dot—is shown on the left side. A second image—say, an identical white dot—is then shown on the opposite side of the screen. Across experiments, the researchers vary the interstimulus interval between the offset of the first dot and the onset of the second. The suitable interstimulus interval depends on the spatial separation of the two items, but the phenomenon is typically tested between 50 and 200 msec. (Arstila 2016). With an interval of c. 50 msec., subjects will likely see two flashing dots. However, if the interstimulus interval is increased to c. 150 msec., subjects will likely see illusory motion, with one white dot appearing to move from left to right. Kolers & Von Grünau (1976) devised an interesting variation of this experiment by changing the color of the second dot. If the interstimulus interval is c. 150 msec., subjects will see one single dot, moving from left to right and changing color halfway, say, from white to red.

Matthen contends that this phenomenon provides evidence against Clark’s feature-placing theory. Our visual systems are sensitive to motion, but motion cannot be attributed to places. Suppose that features are place-indexed, like “red and circular here,” where “here” is the sensory individual: what would it mean to say that “here” moves? Regions of space do not move, and therefore cannot be sensory individuals. Matthen takes this to show that vision is committed «to an ontology of material objects» (2005, p. 281)4. He defines a material object as a «spatio-temporally confined and continuous entity that can move while taking its features with it» (2005, p. 281). Vision thus «attributes features to material objects» (Matthen, 2005, p. 280). Furthermore, he maintains that if material objects can undergo qualitative change—like changing color—and yet be tracked, then sensory individuals must be something like substances as Aristotle defines them:

It seems most distinctive of substance that what is numerically one and the same is able to receive contraries. In no other case could one bring forward anything, numerically one, which is able to receive contraries. For example, a color which is numerically one and the same will not be black and white […]. A substance, however, numerically one and the same, is able to receive contraries. (Categoriae 5, 4a10) (Aristotle 1963, p. 11; Matthen 2005, p. 281).

4 Calling such items ‘material’ objects may cause confusion, since we also see shadows, rainbows, etc. (cfr. §1.1). However, Matthen’s definition of material objects is broad enough to include perceptual ephemera (cfr. O’Callaghan, 2008, p. 816).

This suggestion is not meant to settle the ontology and metaphysics of visual objects. Matthen simply wants to show that properties are attributed to objects, rather than to places. Notice, however, that it is far from clear whether the visual system really attributes or predicates properties of objects. I will explore this issue in the next sections (§4.2.2, §5.1-2).

3.3 Sensory Reference

The experiments selected by Fish and their interpretations reflect a particular conception of visual experience that finds many adherents among philosophers (e.g. Cohen 2004; Matthen 2005; Pylyshyn 2007). According to this conception, vision is a two-layered process. At a basic level, we find mechanisms that track material objects. At later stages, we find mechanisms responsible for (re-)presenting the object’s features and thus making visual objects consciously manifest (§1.1). It is this picture that arguably fuels Fish’s scientific claim: first, some mechanism identifies an object in the world, and then some mechanisms attribute properties to it (§2). The result is that we see «object-properties couples», i.e. facts. Given its relevance in the present context, I will describe this “two-layers” conception in more detail by focusing on its paradigmatic incarnation: Pylyshyn’s FINST theory.

Pylyshyn (2007) has developed a theory of tracking mechanisms that he calls “FINST”—from “FINgers of INSTantiation”—or “visual indexes” (2007, p. 13). The gist of this theory is that we possess a limited number of FINST mechanisms (four or five), whose function is to track material objects in the environment. (Pylyshyn calls the sensory individuals FINGs—from FINSTed THINGs (p. 56)—but they are equivalent to Matthen’s material objects.) On this account, material objects are said to “grab” a FINST in virtue of some property that allows a causal connection (p. 68). Pylyshyn argues that the FINST mechanisms are pre-representational and pre-conceptual: they merely register or detect the presence of an object, without representing any of its properties (pp. 74-75, p. 94). Pylyshyn’s theory is, however, sometimes unclear about how material objects grab a FINST. On the one hand, he seems to contend that sensory reference is fixed by means of physical properties or spectral properties of the light striking the retinas (Burge 2010, p. 89). Such properties, however, would not be encoded (represented) (Pylyshyn 2003, p. 219). On the other hand, he contends that sensory reference is fixed not by means of these physical properties, but by the bearer of the properties itself (2007, p. 96). What is clear is that tracking mechanisms have precisely the role of fixing sensory reference in a way that resembles the role of demonstratives in language (p. 95). For example, in the proposition “This is red” the predicate “red” is attributed to the demonstrative “this.” Vision would have an analogous structure: first, index mechanisms fix sensory reference to an undefined material object x, and then higher-order mechanisms attach visual properties to it. As Pylyshyn says: «[p]roperties are predicated of things» (2003, p. 201).
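Schematically, and in my own notation rather than Pylyshyn’s, the two-layers picture can be rendered as follows:

% Illustrative gloss only: the arrow marks index assignment, the implication marks later feature encoding.
\[
\underbrace{\mathrm{FINST}_i \rightarrow x}_{\text{layer 1: pre-conceptual index fixes reference to } x}
\qquad\Longrightarrow\qquad
\underbrace{\mathrm{Red}(x) \wedge \mathrm{Round}(x) \wedge \ldots}_{\text{layer 2: encoded features predicated of } x}
\]

On this rendering, the open variable x marks the referent secured by the index; only at the second layer are visual properties represented and attached to it.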

It should be noted that this ‘two-layers’ conception of vision and visual processing is not the standard account in vision science. One reason to cast doubt on it is that there is ample evidence that the processing of features begins very early, already in the retina. Features such as color or motion are extracted and processed in a hierarchy of topographic maps that preserve, to an extent, the spatial arrangement of the proximal stimulus (e.g. Op de Beeck et al. 2008; Silver & Kastner 2009; Somers & Sheremata 2013; Wandell et al. 2007). This of course does not amount to a rejection of Pylyshyn’s theory, but it may call into question the idea that features cannot fix sensory reference (§3.2; §4.2, §5.2).

Let us return to our central issue, for we have now acquired some valuable insights that can help us determine the ontology of visual objects. More specifically, we know that:

a. A material object grabs a tracking mechanism.
b. A material object unifies different properties as belonging to the same individual.
c. A material object remains constant through featural change.

Whatever material objects are, they must fulfill these three roles. From an ontological point of view, however, it is clear that any theory of objects will articulate an account of roles b and c. As we have seen, a central line of argument for defenders of the “material object” view of sensory individuals is based on their role in fixing sensory reference (a). Recall that Fish’s interpretation of the experimental evidence suggests that our visual system first identifies an object and then attributes properties to it. I will later (§4.2, §5.1-2) argue that there are good reasons to cast doubt on this ‘property-attribution’ model. For expository reasons, I will refer to the two-layers conception, mainly because it seems to fuel Fish’s intuition. I will thereby show that, whether or not we accept this conception, we cannot find any scientific support for factualism. In order to articulate my argument, we first need to make explicit some criteria of ontological commitment.

4. Tracking and Seeing Facts?


4.1 Two Ontological Criteria

Our ontological inventories are always ad hoc with regard to a specific set of problems (Betti, 2015, p. 62): we introduce a new kind of entity into our discourse because we need it to elucidate or explain a particular problem. Of course, this does not mean that any entity can be freely added without following any criterion. Betti (ibid., pp. 62-63) puts forward two elegant criteria. The first criterion is simple: if the problem we want to solve is not genuine—perhaps it is generated by wrong assumptions—we do not need to solve it, and hence we do not need to introduce any new entity. If the problem is genuine, so goes the second criterion, we should first try to solve it with the tools we already have in our ontological inventory. Only if our ontology does not sufficiently account for the problem may we grant the new entity a place in our ontological inventory.

Let us start with the first ontological criterion. Our question is: Is the problem that facts are meant to solve a genuine one or not? I see at least two ways to show that the problem might not be a genuine one. The first way is to show that sensory individuals cannot be material objects, whilst the second way is to show that we can explain object perception without sensory individuals. If we opt for the first way, we are then led back to the question of what a sensory individual is. As we know, the choice is between objects and places (§3.1), but as we have seen (§3.2), the experimental evidence favors an object theory of sensory individuals. Certainly, this does not exclude that there might be a third option according to which sensory individuals are neither objects nor places. However, no third option is discussed in the literature, and it is therefore not possible to evaluate any alternative proposal with regard to our issue. In the absence of an alternative theory of sensory individuals, it is therefore more plausible to accept that sensory individuals are objects.

Some researchers may nonetheless argue that the foregoing experiments do not refute the feature-placing hypothesis. One way to salvage this view is to observe that, although Blaser et al. show that the overlapping 2D retinal projections of superimposed objects cannot fix sensory reference, this need not be a problem for the visual system, as it does not rule out that 3D places in the world may be the sensory referent. (I thank a reviewer for this suggestion.) I will not develop this proposal further here (cfr. also §5.2), mainly because it is not sufficient to solve our problem: as long as we think that the visual system identifies a “something” of which properties are predicated—such as “a’s being F”—factualism remains a viable option.


The second way to show that the problem is not a genuine one is more radical: perhaps we do not need sensory individuals to fix sensory reference and bind features. We may, for example, reject the binding problem altogether (e.g. Garson, 2001). Di Lollo (2012) has argued that recent advancements in neuroanatomy and neurophysiology refute the idea of modular specificity and independence that generated the binding problem in the 1980s. He suggests that basic features are the units of information against which high-level perceptual hypotheses are compared (ibid., p. 318). Di Lollo’s remarks seem to tell against neural solutions to the binding problem: perhaps we do not need any special neural mechanism that binds different features. However, as I said (§3.1), I remain silent on the actual neural implementation of binding. Furthermore, the core of Clark’s elegant theory of sensory individuals is that these are items in the world that carry properties. This means that even if we reject the neural binding problem, we still need to show how items in the environment are detected and tracked.

In light of Betti’s first criterion, we can conclude that we are facing a genuine problem: how properties are bound to perceptual items, and how such items are individuated and tracked. It is the latter problem that will play a central role in my argument (§4.2): Does the visual system identify items of which properties are predicated? I will examine this question in light of Betti’s second criterion.

4.2 Material Objects as Particulars

As we have seen, Fish maintains that the experimental results can be explained on the assumption that we see object-properties couples (2009, p. 51). This suggests that he interprets Matthen’s material objects as facts’ particulars. According to Fish (ibid., p. 52), such particulars would be manifest in states of seeing together with their visual properties. Facts’ particulars can either be “thin” or “thick” (§1.2). I explore both options. I call the former option “Fact Thesis 1” (FT1) and the latter option “Fact Thesis 2” (FT2):

FT1: Material objects are thin particulars.

FT2: Material objects are thick particulars.

Next, I will examine FT1, and then (§4.2.2) turn to FT2.

4.2.1 Tracking Thin Particulars?

Since thin particulars are abstractions, they can only be given in thought. But if they are objects of thought, it is unclear how they might play a role in perception. Perhaps, one might claim that tracking and individuation mechanisms possess some basic conceptual capacities. Such mechanisms would then be able to identify a fact and track the thin particular. This strategy does not seem very promising. On Pylyshyn’s account of FINSTs, tracking mechanisms do not possess any conceptual capacity. Hence, FINST mechanisms cannot identify facts and track thin particulars. But even if we do not accept Pylyshyn’s account, the idea that tracking mechanisms have some rudimentary conceptual capacity is obscure5. Suppose some mechanism were able to identify a fact: why should it conceptually subtract the properties from it?

Suppose, for the argument’s sake, that the previous issue can be solved, and concede that tracking mechanisms can somehow identify thin particulars. If this were the case, it would be utterly unclear in virtue of what tracking mechanisms could detect a particular among many others. Let me explain. Fish (2009, pp. 54-58) states that our visual field embraces many distinct facts. But if there are many facts within the visual field, and tracking mechanisms can only identify and track a few items (four or five, §3.3), how can they individuate the relevant particulars? Remember that a thin particular does not instantiate any property, so it cannot be distinguished from other particulars (§1.2). (For a somewhat similar point, cfr. Campbell, 1990, pp. 7ff.) We can make this point clear by means of an analogy with the well-known phenomenon of vision in a Ganzfeld (Avant, 1965). As we know, exposure to a structureless, qualitatively homogeneous visual field results in a ‘mist of light’ or ‘empty field’ experience that defies ordinary visual experience. Nothing is seen within a Ganzfeld. The problem with thin particulars can be understood as the converse of the Ganzfeld experience. Whereas in a Ganzfeld subjects do not see anything due to exposure to a qualitatively homogeneous field, thin particulars cannot be detected because they lack any property that might discriminate them from the background and from other thin particulars.

These considerations are broadly consistent with the idea that «to recognize or otherwise analyze a visible object in the world, we must first distinguish it as a primitive individual thing, separate from the other clutter in the visual field» (Pylyshyn 2003, p. 210). Thin particulars cannot be detected because they lack a character that differentiates them from other particulars and allows tracking mechanisms to individuate them. It is worth bearing in mind that this is not an argument against the existence of thin or bare particulars. There might be metaphysical reasons to postulate thin or bare particulars. But even if thin or bare particulars exist, they cannot fix sensory reference.

At this juncture, friends of factualism may raise an objection: facts are complex entities made up of particulars and properties. To think that sensory mechanisms track thin particulars is a misunderstanding of factualism and fact ontology. This brings us to FT2.

5 There is a long-standing controversy over conceptualism about perceptual experience (e.g. Heck, 2000; Wright 2015). The issue here is not whether perceptual content is conceptual or requires conceptual skills. Perhaps, content really is conceptual ‘all the way down’ (McDowell 1994). The point is whether some causal mechanisms do possess the conceptual capacity to metaphysically decompose a complex entity, a fact, into its constituents and track thin particulars.

4.2.2 Tracking Thick Particulars?

A thick particular is a fact: a particular instantiating one or more properties. Obviously, FT2 assumes that facts exist—their existence being assumed on metaphysical grounds—but even if we concede that there are facts, it is certainly far from obvious that we see facts! Even if we accept that objects are facts, in order to defend factualism (§2) it must be argued not only that we detect particulars and properties, but that the former, too, are manifested in states of seeing. Only in this case would we be able to see “[particular]-properties” couples. Factualism thus entails an extension of the ontological inventory required to describe what is “phenomenologically manifest” in a state of seeing. In this paragraph, I argue that perceptual psychology gives us no reason to expand the ontological inventory to include “particulars.”

Earlier (§3.3), I discussed Pylyshyn’s view that vision has a structure somewhat similar to that of language. The ‘two-layers’ conception has it that visual demonstratives cannot be features, since the latter are encoded only at later stages of visual processing. Pylyshyn takes the evidence from dynamic feature-object integration (§3.2.2) as showing that we can track an object in spite of featural change, and argues that visual properties therefore cannot be the referring element of tracking mechanisms (Bahrami, 2003; Pylyshyn, 2007, p. 68; Scholl et al., 1999)6. He therefore suggests that tracking mechanisms are grabbed by some physical properties of the material objects. These properties causally connect a material object with a tracking mechanism. What kind of physical properties can grab a tracking mechanism is a matter of empirical investigation (Pylyshyn, 2003, p. 211).

There are two possible interpretations of FT2. The first interpretation follows Pylyshyn’s account and admits a distinction between physical and visual properties. The second interpretation admits that visual properties may fix sensory reference. We thus obtain:

FT2*: Material objects (particulars) instantiate both physical and visual properties7.

FT2**: Material objects (particulars) instantiate visual properties.

6 An anonymous reviewer has pointed out to me that Pylyshyn is actually much more cautious in denying that features can be visual demonstratives. In one passage, Pylyshyn actually states that the speed of objects’ motion or the rate at which they change direction seems to play a role in fixing sensory reference (2007, p. 68, ft. 2). However, he also adds that these properties too «do not appear to be encoded» (i.e. represented). 7 This formulation, which is consistent with Pylyshyn’s theory, includes a conjunction of both physical properties and features. Both must obtain in order for us to see the object. Remember that, on this theory, if no physical properties are given, index mechanisms cannot fix sensory reference, and hence nothing can be detected. If physical properties are given, but no features, then the object can be detected, but it cannot be seen, as features are constitutive of states of seeing (§1.1).

Consider FT2* first. Since features do not fix sensory reference, something else must do so. Two options are available. The first option is that physical properties are tracked. Material objects are identified by the visual system in virtue of some physical property, although such properties are neither represented nor accessible in states of seeing. The second option is that the particular itself is being tracked. In some passages, Pylyshyn adumbrates this second option, for example: «[…] if the FINST was captured by a property P, what the FINST refers to need not be P, but the bearer of P (the [material object] that has property P)» (2007, p. 96); and: «I take the view that objects are indexed directly, rather than via their properties or their locations» (2003, p. 202)8. This is unclear. The claim can be interpreted as saying that, when a material object grabs (thanks to some physical property) a tracking mechanism, such a mechanism refers to the bearer of the property itself. The bearer might be the ‘thin’ particular (but why not a bundle of properties?), but in this case we are confronted again with the challenges already discussed about FT1 (§4.2.1). Indeed, it is unclear how tracking mechanisms could pick out a ‘property bearer’, were it not for some property. Of course, this claim is compatible with the visual system being able to track a cluster of properties, rather than a single property. But this suggests that such properties are all we need to pick out and track items in the world!

Now, suppose that Pylyshyn’s theory is false, and that although features alone do not fix reference, it is always a conjunction of both physical and visual properties that allows object detection and tracking. On this reading, some properties may become conscious and appear in states of seeing, whilst other properties are neither represented nor can they become conscious. Pinna & Deiana’s (2015) definition of visual objects squares well with this suggestion: a visual object is a «structured holder […] an organized set of multiple properties, some of which are explicit, some other implicit, some become explicit or, on the contrary, implicit or invisible after a while» (p. 280). This is an intriguing option, but it does not support factualism in any way. Again, property instances are all we need to detect and track objects.

Consider now FT2**. As I said, most vision scientists believe that the processing of features starts already in the retina (§3.3). Perhaps, features can causally affect the visual system in such a way as to fix sensory reference. If this were correct, we would need some alternative interpretation of the experiments on multiple object tracking or dynamic feature-object integration (§3.2.1-2). I will outline an alternative strategy in §5.2. For now, what matters is that, again, properties are all we need to fix sensory reference.

8 These passages are ambiguous, but Pylyshyn also stresses that «there must be some properties that cause index assignment and that make it possible to keep track of certain objects visually— they may just constitute a very heterogeneous set and may differ from case to case» (2003, p. 213), thus suggesting the first interpretation (physical properties are tracked).


Under all interpretations, Betti’s second ontological criterion suggests that we already have all the indispensable ontological tools to explain sensory reference. There is no need—nor does the problem of sensory reference give us any reason—to introduce an ontologically distinct ‘particular’ to which properties are attributed. All we need are property instances. A property instance is simply a particularized property sample: it is “that shade of red” or “that triangular form.” As I am using it, the term “property” is ontologically neutral: it may be a universal instantiated by some particular (§1.2), or it may be a trope. It is here that the scientific evidence stops. Perceptual psychology does not tell us how to particularize a property. All it tells us is that property instances are all we need to fix sensory reference. Once sensory reference is fixed, thanks to a qualitative discontinuity in the environment, the visual system starts extracting properties from the target. Indeed, the literature on object perception suggests that the visual system first represents property clusters, and then categorizes these clusters as a face, an object, a building, etc. (e.g. DiCarlo et al. 2012; Grill-Spector, 2003; Op de Beeck et al. 2008). How object recognition works, and whether it presupposes or supports the ‘rich content view’ (e.g. Newen, 2016), is an issue that cannot be addressed here. The main lesson is that the problem of sensory reference does not entail that vision is a process of property attribution to a “particular.”

From these observations we can draw several implications. I will outline some of them in §5. In the next paragraph, I briefly return to Fish’s argument.

4.3 Fish’s Argument Revisited

Fish’s argument for factualism consists of two claims (§2). The first one (S1) is the phenomenological claim: we see visual properties as properties of objects. The problem with the phenomenological claim is that phenomenology does not provide robust evidence for any particular conclusion about the nature of visual objects (§2). On the very same phenomenological grounds, other philosophers may argue for a bundle view of visual objects.

The scientific claim (S2) is based on an interpretation of experiments on object tracking and the feature-binding problem (§3). Fish thinks that these experiments support the idea that we see particulars bearing properties. I have shown (§4.1-2) that perceptual psychology does not support factualism. All we need to fix sensory reference are property instances. Fish’s factualism is thus unwarranted: neither S1 nor S2 provides evidence for factualism.

Before I conclude, in the next section (§5) I will clarify the distinction between visual and material objects (§5.1), outline an alternative explanation of the experiments discussed in §3, and briefly draw some implications about the nature of perceptual content.


5. Visual Objects, Tracking, Binding, and Content

The arguments developed in §4 touch on issues like object perception and the nature of the phenomenologically manifest. I group the implications of my conclusions into the following subsections. In §5.1, I discuss the distinction between material and visual objects. In §5.2, I briefly suggest an alternative solution to the problem of tracking and binding. Finally, in §5.3, I sketch some implications for theories of perceptual content.

5.1 Material and Visual Objects

In §1.1, I said that factualism is one possible answer to the puzzle of visual objects. An alternative view is to say that visual objects are bundles of properties. Given this disjunction, and applying our conclusion, we can see that my considerations support the bundle view. There is no scientific support for factualism. But perceptual psychology supports the idea that property instances are all that enters into a perceptual relation with the subject.

At this point, the reader may feel confused about the distinction between material and visual objects, for I have apparently jumped from discussing the tracking of material objects to discussing visual objects. Earlier (§3), I introduced the concept of a material object, defined as a «spatio-temporally confined and continuous entity that can move while taking its features with it» (Matthen, 2005, p. 281). Material objects are things in the world. Visual objects, on the contrary, are the coherent units that are made manifest in states of seeing. On my characterization, visual objects are bundles of properties. From this, we cannot conclude anything about the ontological status of material objects. We should not expect our perceptual capacities to reveal the metaphysical status of things in the world, nor is the metaphysical and ontological status of material objects a problem for perceptual psychologists or philosophers of perception. For all we (perceptually) know, a material object may be a bundle of properties, or a fact, of which we only see some properties. The relation between visual and material objects is graphically displayed in Fig. 10.

The picture shows a material object (the apple) and the visual system (the brain). The visual system is in contact with some of the material object’s property instances, regardless of whether they are features or physical properties. (Of course, we should assume that many of an object’s properties are not, even in principle, perceptually accessible.) The visual object is constituted in part or entirely by the extracted visual properties. This claim also finds support in the scientific literature on visual objects: Blaser et al. say that visual objects are «composed of constellation of visual features» (2000, p. 196). Here we should use some caution. My claim should not be confused with the stronger—and likely false—claim that all we need to explain object perception are properties. Other factors may play a role, like memories and semantic categorization, which have been shown to influence visual processing even at an early stage (e.g. Grill-Spector & Kanwisher, 2005).

Fig. 10: Material and visual objects. The perceiver tracks some properties of the material object, and a few of them are extracted in order to form a descriptive state.

Finally, the exact ontological distinction between material and visual objects depends on our assumptions about the nature of perception. Naïve realists like Fish usually hold that we stand in a genuine, direct acquaintance relation to items in the world. In this case, the perceiver is in direct contact with the material object’s properties themselves. Visual objects would then be bundles composed of these properties. Intentionalists (e.g. Siegel 2010b) argue that states of seeing have conditions of accuracy. In this case, the perceiver may be representing these properties. Thus, visual objects would be bundles of represented properties. In both cases, we can draw a distinction between visual and material objects.

5.2 Tracking and Binding

The foregoing considerations have some implications for the role of sensory individuals (§3.1). I first consider their role in object tracking, and then turn to their role in feature-object binding.

As we have seen (§3.2), Matthen, following Pylyshyn, thinks that since features cannot fix sensory reference, something like a ‘substance’ must play this role. If ‘substance’ is interpreted as “particular” in the fact’s sense, we are then confronted with the challenges discussed earlier (§4). It is not my purpose in this work to articulate an alternative account of sensory reference and of what kinds of things may fix it. But my anti-factualist conclusion is compatible with all three options outlined in §4.2, namely, that sensory reference may be fixed by physical properties, by features, or by both. In all cases, a visual object will be a property bundle, and in no case do we have evidence for Fish’s scientific claim (§2). A plausible alternative could be to say that sensory reference is fixed not by a single property, but by a property cluster (physical, visual, or both)9. The properties may thus collectively fix sensory reference. Consider a simple case of seeing two distinct objects, item 1 and item 2. We thus have:

Item 1: $\{P^{1}_{1} \vee P^{1}_{2} \vee P^{1}_{3} \ldots P^{1}_{n}\}$

Item 2: $\{P^{2}_{1} \vee P^{2}_{2} \vee P^{2}_{3} \ldots P^{2}_{n}\}$

On this formulation, a disjunctive cluster of properties fixes sensory reference. Not all the properties may be required; perhaps a collection of heterogeneous properties suffices. Whether this cluster is an object will depend on a number of further assumptions, both regarding the ontology of visual objects and regarding the kinds of elements that our perceptual system can identify.

The second role of sensory individuals is in solving the feature-object binding problem (§3.1). Clark’s claim, as we have seen, is that places are the sensory individuals to which properties are attributed. It is dubious, however, whether Clark’s rendition of sensory individuals is committed to, or entails a commitment to, factualism, for it is far from clear whether places can be facts’ particulars. On Armstrong’s ontology (1997), space is a conjunction of facts. A discussion of this issue would lead us too far away from the present concern. Still, Clark’s theory, too, presupposes a property-attribution model to solve the binding problem. This claim seems to conflict with our conclusion.

One way to preserve sensory individuals’ role in unifying different properties could be to say that, once sensory reference is fixed, the visual system starts extracting features from the property cluster. Suppose a subject sees a red triangle and a blue circle. In order to solve the feature-binding problem, the visual system may simply fix reference to two distinct sets of property instances (the individuals)—$\{P^{1}_{1} \vee P^{1}_{2} \vee P^{1}_{3} \ldots P^{1}_{n}\}$ for item 1, and $\{P^{2}_{1} \vee P^{2}_{2} \vee P^{2}_{3} \ldots P^{2}_{n}\}$ for item 2—and then further extract properties from the respective items. In this example, from item 1 it will extract the property instances of being red and triangular, and from item 2 the property instances of being blue and circular. In this way, we preserve the role of sensory individuals in the feature-object binding problem without committing to the further claim that properties are attributed to objects. This is a mere suggestion that deserves to be articulated further in subsequent work.
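The proposal can be summarized schematically (my own notation, offered only as an illustration): sensory reference is fixed by each cluster, and the bound visual object is the set of features subsequently extracted from that same cluster:

% Illustrative sketch only: the squiggly arrow marks feature extraction from a reference-fixing cluster.
\[
\{P^{1}_{1} \vee P^{1}_{2} \vee \ldots \vee P^{1}_{n}\} \;\leadsto\; \{\text{red}_1,\ \text{triangular}_1\}
\qquad
\{P^{2}_{1} \vee P^{2}_{2} \vee \ldots \vee P^{2}_{n}\} \;\leadsto\; \{\text{blue}_2,\ \text{circular}_2\}
\]

On this sketch, binding is secured by the fact that the extracted property instances are drawn from the same reference-fixing cluster, with no further ‘particular’ to which they are attributed.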

Finally, let me stress that these considerations about sensory reference are compatible with different neural solutions to the binding problem. The brain might actually bind features by means of neural synchrony (e.g. Singer, 1999), or by means of some feature-integration mechanism (e.g. Treisman & Gelade, 1980; Chan & Hayward, 2009).

9 I thank an anonymous reviewer for suggesting this option to me.

5.3 Perceptual Content

Finally, I will briefly consider a few implications for philosophical theories of perceptual content. My claim that visual objects are bundles of properties is not new. On different grounds, and without relying on experimental results, Textor contends that «[s]eeing x is constituted by seeing features, states or changes of x and additional factors. (…). I see x in virtue of seeing its features, states or changes» (2009, p. 141) (cfr. also Stout, 1921). These additional factors may include, for example, the role of memories and semantic categorization (§5.1).

My argument bears on philosophical theories of perceptual content, understood as conditions of accuracy. Many intentionalists about perceptual experience hold that perceptual content is propositional (see Crane, 2009 for a dissenting voice). The propositional character of perceptual content depends not only on our assumptions about the intentional character of states of seeing, but also on our assumptions about the nature of propositions. Some philosophers hold that propositions are functions from possible worlds to truth-values (e.g. Byrne, 2001; Stalnaker, 1976). In this case, the propositional character of states of seeing is decided by their having veridicality conditions. However, some philosophers hold that propositions are structured contents (e.g. Thompson, 2009). Several theories are available to friends of structured contents. For example, a content is Russellian if it involves the attribution of properties to objects (e.g. Chalmers, 2004). Another option is to argue that perceptual contents are better cast in terms of Fregean contents involving modes of presentation of objects and properties, rather than ‘naked’ properties (Schellenberg, 2010, p. 34).

In these cases, a state of seeing is individuated by means of two kinds of entities: properties (and relations), and a ‘something’ (a particular) of which the properties are predicated. My arguments do not cohere well with the structured account of propositional content. As we have seen, states of seeing do not attribute properties to objects; instead, visual objects are exhausted by their properties. My contention is broadly consistent, for example, with a ‘Property Complex Theory’ of visual objects (e.g. Pautz, 2007, pp. 498-499). This theory has it that perceptual content is a property complex structured in different parts and standing in relations R1, R2, etc. On this view, a visual object is roughly a spatially and mereotopologically structured bundle of properties. These complexes are very different from structured propositions. It follows that perceptual content does not have a propositional shape. Perhaps a better analogy would be to describe such contents as maps (Burge, 2010, p. 540) that make manifest to the perceiver a scenario filled with spatially arranged properties, some of which at least coalesce into visual objects. For reasons of space, it is not possible to fully articulate this alternative proposal, which I leave for future research.


Conclusion

In this Chapter, I have attacked factualism, the claim according to which we see actual states of affairs. Basing my considerations on experimental evidence, I have shown that perceptual psychology does not support factualism, and that we should understand visual objects solely in terms of properties. Notice that this conclusion is perfectly consistent with the claim that visual objects are not exclusively explained by processes of property extraction: memory, background knowledge, and many other factors contribute to the shaping of our visual perceptual experience. With this final step, I have shown what items populate the phenomenological domain, and therefore what items supposedly form the relational structure of this domain. In Ch. 7 I will put forward additional arguments in favor of a trope bundle theory of visual objects.


PART III

THE NEURAL DOMAIN

INTENTIONAL MECHANISMS AND THE STRUCTURE OF SENSORIMOTOR EXPLANATION

5

A MECHANISTIC STANDPOINT ON CONTENT-NCC RESEARCH

In the Second Part, I have clarified the nature of the Phenomenological Domain, and specified the basic elements that constitute visual objects. In the Third Part, I will examine the nature of the Neural Domain ϕ. Again, this Part is divided into two Chapters. In this Chapter, I tackle the issue of the neural correlates of conscious content, or “content-NCCs.” I set out to show, contra Chalmers, that there is no unique mechanism corresponding to what is sometimes called the “content-NCC,” but that at least three different kinds of mechanisms fix the visual content of consciousness. Chapter 6 has a more apologetic character. Not every philosopher might agree with the view defended here about content-NCCs. In particular, proponents of the sensorimotor theory, as well as of dynamical systems theory, are sometimes credited with being critical of mechanisms more generally. I will examine a particular instance of these theories, the sensorimotor theory, and show that it is compatible with the approach developed in this Chapter.

Recently, we have witnessed a growing interest in the problem of the neural correlates of consciousness (or “NCC” for short) (Baars 1995; Crick 1994; Crick & Koch 1990, 1998; Fink 2016; Koch 2004; Miller 2015a; Rees et al. 2002; Singer 2015; Tononi & Koch 2008). Most of these contributions cast doubt on the mainstream definition of “NCC” put forward by Chalmers (2000), and urged the need for new approaches to the issue (e.g. Bayne 2007; Bayne & Hohwy 2013; Fink 2016; Miller 2014, 2015b; Neisser 2012; Noë & Thompson 2004). Whereas most philosophers of mind, so far, have focused on whether consciousness can be metaphysically explained (e.g. Chalmers 1996), Neisser (2012) has argued that what we really need is a more robust philosophy of science for NCC research, rather than philosophy of mind. From this standpoint, one still open issue is to articulate the explanatory structure of NCC research. Some philosophers have already put forward considerations in this direction, suggesting the adoption of a mechanistic framework of explanation (Bayne & Hohwy 2013; Hohwy 2009; Neisser 2012; Opie & O’Brien 2015; Revonsuo 2015; Vernazzani 2015).

In this Chapter, I build on this suggestion and elaborate on the problem of the neural correlates of the contents of consciousness. I will continue to focus on my proper object of (phenomenological) interest, i.e. visual objects, and leave open to what extent my considerations apply to other kinds of content, like the content of the other distal senses, feelings and moods, etc. I advance several claims. First, I claim that scientists try to uncover the brain structures that subserve the perceptual skills we employ, by showing us visual accuracy phenomena. Second, I show that the notion of “content-NCC” is opaque, as it may not correspond to a single mechanism. As I am going to show, there are probably many distinct mechanisms working in a specific arrangement that, jointly, produce our experience of a single visual object. I thereby show that content-NCC research is inherently mechanistic, and try to uncover the putative architecture of the mechanisms underlying conscious visual content.

The Chapter unfolds as follows. I will first (§1) briefly summarize some key concepts about content and consciousness that I have presented in Ch. 3. This will be helpful in bringing into sharper focus the proper topic of this Chapter. Next (§2), I will briefly introduce the scientific problem of explaining consciousness, and highlight the goals of neuroscientific research on consciousness. Then (§3), I will introduce Chalmers’ mainstream definition of “content-NCC” and provide some reasons to be skeptical of it. An answer to the worries raised by Chalmers’ definition will be sketched out in the following Section (§4), where I articulate a new approach to content-NCCs in light of a mechanistic-manipulationist framework of explanation. In particular, I will argue that there are no such things as “content-NCCs” in the brain, but rather the complex and orchestrated activity of at least three different kinds of mechanisms. Finally (§5), I will outline the advantages of my approach over Chalmers’ definition, zooming in on the following areas: the heuristic strategies of mechanistic research, the problem of interfield integration in consciousness studies, and the problem of the ontology of visual content. We will later see (Ch. 8, §3) how the heuristic strategies for the discovery of mechanisms help us reshape the problem of the neural correlates of consciousness.

1. The Contents of Visual Perception

1.1 The Content View and Visual Accuracy Phenomena

As I said in Ch. 3, in this work I focus on a specific kind of mental state that I call a state of seeing. A state of seeing is a conscious mental state that has visual representational character. For conciseness, I will frequently use the verbal form “to see” in the next pages. I first briefly return to the notion of content, and later (§1.2) turn to consciousness.

To say that a mental state has a representational character is to say that it has “content.” The word “content” is a term of art that refers to something assessable for accuracy or veridicality. (It is far from clear whether accuracy is just a form of veridicality or not; I do not take a stance on the issue, and continue to talk about veridicality and accuracy interchangeably.) The view that visual perceptual experience has content is sometimes called the “Content View” (e.g. Brewer 2006, 2011; Byrne 2001; Siegel 2010; Schellenberg 2011). The Content View is the current orthodoxy in philosophy of perception (Locatelli & Wilson 2017) and the sciences of the mind. Philosophers like Byrne (2001), Burge (2005), Chalmers (2010), Evans (1982, p. 226), Harman (1990, p. 34), Pautz (2010), Peacocke (1992), Searle (1983), and Sellars (1956, §§16-18) all

assume or defend some version of the Content View¹. The core tenet of the orthodoxy is thus the claim that states of seeing are assessable for accuracy.

It is analytic that if seeing can be more or less accurate, it is so only in relation to what seeing is about. This is just to say that states of seeing, on the Content View, are intentional states, states that exhibit aboutness or directedness towards some object (Searle 1983)². Intentional states belong to a specific subject S, and are individuated by content and modes (perception, belief, thought, etc.) (Crane 2001, p. 32) that specify the kind of relation S stands in to the content of that intentional state. Narrowing down our attention to perception, different senses or perceptual modalities, like vision, taste, etc., are different modes. In a state of seeing, the mode is seeing.

There are mainly three motivations that warrant talk about content in the philosophy of perception (Crane 2011): aspect, absence, and accuracy. The notion of “aspect” is linked with the notion of having a perspective. The observer is always situated in a certain spatial relation towards the target or object of her state of seeing (Crane 2001, pp. 6-8). For instance, two states of seeing might both have Tibble the cat as their object, but represent her in different ways, for instance when the observer sees the cat from behind or from another vantage point. The notion of absence refers to the fact that the intentional object need not exist (e.g. Anscombe 1965/2002, p. 63; Crane 2001, p. 22; 2013). An obvious example here are hallucinations, states of seeing that have no object. Finally, contents may differ from how the objects they are about actually are, i.e. a mental state might be inaccurate. A classical example is that of the straight stick that, put in a glass of water, looks bent, as well as many other examples of illusions. In the next pages, I will use the notions of “content” and “accuracy conditions” interchangeably (following for example Pautz 2011; Peacocke 1992; Searle 1983, p. 39; Siegel 2010a, p. 30).

What kinds of entities fix a mental state’s conditions of accuracy? Or, in other words, what kinds of entities can we see? Casati (2015) enumerates, for example, properties, events, boundaries, and property bundles. Other philosophers maintain that we see both objects and properties (e.g. Nanay 2013; Siegel 2010a, pp. 45-49; Lycan 2003, p. 71), or actual states of affairs, i.e. facts (e.g. McDowell 1994; Nanay 2013; cfr. Textor 2009 for a critical view) (cfr. Ch. 4). As I have shown in Ch. 4, visual objects are better understood as property bundles. However, in this Chapter I remain neutral on the issue, which I will further develop in Ch. 7. Let us assume, like virtually every philosopher and psychologist, that we see properties, like

colors, forms, etc. (Ch. 4, §1.1). Two questions arise in conjunction with this claim. First, we should determine the ontological status of properties (e.g. Nanay 2012): they may be universals—perhaps instantiated in a fact—or tropes (cfr. Ch. 7). Second, we should determine what kinds of properties we can see: merely basic visual properties, or also state properties—e.g. the property of being a pine tree, of being human, etc.—and others? (cfr. Siegel 2010b)³. As we will see, this will also bear on the search for content-NCCs.

1 The Content View has recently been challenged by naïve realist theories (e.g. Brewer 2006, 2011; Fish 2009; Travis 2004; for an overview, see Locatelli & Wilson 2017) or relationism (Campbell 2002). In this study, I will simply assume the Content View. I leave it to naïve realists to assess whether and to what extent they can accept my account.
2 The intentionality of seeing is not equivalent to its representational character. On some interpretations, a mental state may exhibit intentionality, i.e. be about something else, even though it is not representational. However, a representational state is a state that has intentionality (cfr. Neander 2017).

The visual properties that feature in our states of seeing always come in clusters, coherent visible units that we call “visual objects” (Feldman 2003; Kimchi et al. 2016; Palmer 1977; Treisman 1986; Ch. 4, §1.1). Regardless of how we construe visual objects and their properties, one thing seems clear: from a phenomenological viewpoint, there is a recurrent feature of our states of seeing, in that they make us “open” to physical reality, or reveal it (McDowell 1994, p. 26; Ch. 3, §1). The capacity of our visual system to make us open towards objects in the world is a regular feature of our minds, made possible by our distinctive perceptual capacities, which range from color perception to the perception of speed and texture, the perception of shape, Gestalt factors, and many others that jointly form the mereotopological manifolds that we call visual objects (e.g. Palmer 1999; Pinna & Deiana 2015; Pomerantz et al. 1977; Tversky et al. 2008; cfr. Ch. 7, esp. §2.1). It is precisely the agenda of content-NCC research to uncover the underlying brain and computational structures that make the phenomenological appearance of these regularities possible.

From the perspective of vision scientists, the regularities we observe, i.e. visual objects, are explananda. I adopt a realist standpoint here, according to which scientific explanations target phenomena (Bogen & Woodward 1988; Woodward 1989, 2010, 2011). As Bogen & Woodward (1988, p. 321) state, there is hardly a single ontological category to which all phenomena belong. Phenomena may be properties, objects, states of affairs, events, or processes. This is why I opt for ontological neutrality here, although I briefly elaborate on the ontology of visual objects in §4.4, and later in Ch. 7. Since our explananda are visual objects, which are conditions of accuracy, I will call them visual accuracy phenomena. Visual accuracy phenomena are clusters of closely interwoven phenomena that determine, at a time t, the conditions of accuracy of a state of seeing⁴. It is sometimes difficult to single out a specific phenomenon for

explanatory purposes from an intricate set of distinct phenomena. Most often, scientists require a substantial amount of research to untangle different phenomena, or to discern phenomena from artifacts (Bechtel & Richardson 2010).

3 I gloss over many other issues, like the propositional and conceptual character of perceptual content, as they are tangential to the issue of the ontological kinds that are visible by seeing, i.e. the claim that we see properties is compatible with different options, like conceptual content and its propositional character.
4 I do not claim that all visual accuracy phenomena are visual objects, but only that visual objects are an important class of visual accuracy phenomena. Also, the reader should bear in mind that in this work I adopt a synchronic, rather than a diachronic, perspective (Ch. 3, §1.2). Arguably, visual accuracy phenomena are diachronic, i.e. they are temporally extended (cfr. also Ch. 6, §4.2). When I look at a visual object, like the pencil on my desk, my state of seeing extends in time, as long as I keep looking at it. I will not dwell on how to account for the persistence in time of visual objects, or on how to differentiate static visual objects, i.e. visual objects as of motionless items, from changing visual objects, i.e. visual objects that move and change some of their features.

Researchers interested in content-NCCs, however, are not simply interested in explaining perceptual content. They want to understand how the brain (or some beyond-the-brain extended system, e.g. Clark 2008) generates conscious content. I will now briefly turn to the relation between consciousness and content.

1.2 Content and Consciousness

States of seeing are conscious mental states, which leads us to the concept of consciousness and its relation to content (Ch. 3, §2.3). There are many concepts of consciousness, but the single most discussed one is that of phenomenal consciousness, which refers to the peculiar qualitative character of some mental states in contrast with unconscious states (e.g. Burge 1997, p. 427, 2006; Searle 2004, p. 134). Philosophers often use Nagel’s (1974) phrase “what it is like to be” to refer to the «way it feels» (Chalmers 1996, p. 11) to have a conscious mental state (Block 1995, p. 377). The terminology is not univocal, and some philosophers have coined other expressions for phenomenal consciousness, like «qualitative character» (Shoemaker 1994, p. 22), or «subjective character» (Metzinger 1995, p. 9; Schlicht 2011).

Phenomenal consciousness is usually contrasted with the concept of access consciousness introduced by Block (1995). Put crudely, a representational content is access conscious if that content is available to other cognitive modules (Block 2007)⁵. Access consciousness has a distinctive functional character, in contrast with the qualitative one of phenomenal consciousness (Block 1995, p. 170). The relation between phenomenal and access consciousness is a matter of controversy. What is unclear is whether the subjective or qualitative character of some mental states can be reduced to access consciousness, i.e. whether it can be functionalized. If not, then one might wonder whether phenomenal and access consciousness have, at least in principle, two distinct neural correlates (Block 2005, 2007). If Block is right, then it is possible, under some circumstances, that a content may be accessible, but not experienced qualitatively, or, vice versa, a content may be experienced, albeit cognitively

inaccessible (cfr. also Cohen & Dennett 2011; Phillips 2015). We will later return to this issue (§3.3).

5 Over the years, Block has further refined and changed the concept of access consciousness. For example, in his earlier publication (1995) he said that a content is accessible if it is poised for the rational control of action, whereas in his more recent writings (e.g. 2007) he does not make reference to rational control. The differences will not be important for the present account.

Most defenders of the Content View accept some form of intentionalism, the thesis that every conscious state has content, and that the qualitative character of a mental state is somehow determined entirely or in part by its representational character. Some philosophers (e.g. Chalmers 2011) argue that it is possible to have the same content but distinct phenomenal experiences. Other philosophers argue for a bijection relation between content and phenomenal states, i.e. variations in content correspond to phenomenological variations, and variations in phenomenology correspond to content variations (Byrne 2001). The relation between phenomenal consciousness and content may be one of mere supervenience (Byrne 2001), or a more robust relation. For instance, consciousness may just be a particular kind of content that meets some functional requirements, rather than being a property of some mental state (e.g. Tye 1995, 2000); or it may be the product of a higher-order mental state directed at some content (e.g. Rosenthal 1986; Lycan 1996).

For now, we can simply assume the following. Let there be a set R of a subject’s contents at a time t (remember that I am focusing only on states of seeing, hence on visual content). Within this set, we isolate a proper subset of intentional contents Rc that are phenomenally conscious (I remain neutral for now about whether phenomenal consciousness can be reduced or not), such that Rc ⊂ R. It is this proper subset that fixes the subject’s visual accuracy conditions at a time t. When looking for content-NCCs, neuroscientists are looking for the specific brain structures that are responsible for visual accuracy phenomena, Rc.
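Put compactly, and only as a minimal restatement in my own notation (writing $R$ and $R_c$ for the sets just introduced, and adding nothing beyond the assumption above):

\[
R_c \;\subset\; R,
\]

and it is $R_c$, rather than $R$ as a whole, that fixes the subject’s visual accuracy conditions at $t$; content-NCC research accordingly targets the brain structures responsible for the members of $R_c$.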

2. Goals and Aims of Content-NCC Research

Most philosophers, so far, have studied the problem of consciousness from a metaphysical perspective. The key question has not focused on the explanatory structure of the scientific research on consciousness, but rather on an ontological question: Can consciousness be reduced to the “physical”? Framed in this way, the problem of consciousness is one aspect of the broader ontological enterprise of determining the fundamental ontological kinds of our world. In the case of conscious experience, it seems that every account must face the challenge of the explanatory gap (Levine 1983), which constitutes the hard problem of consciousness (Chalmers 1996). If consciousness cannot be reduced to the physical, it is either because it is not a real phenomenon (eliminativism), or because it itself belongs to the building blocks of reality. This in turn might entail some form of panpsychism, or emergentism (Chalmers 2010c). The metaphysical issue of consciousness is a perfectly legitimate philosophical question that will hardly disappear even with the best scientific theories of consciousness. The hard problem will resist, and will probably continue to fuel philosophical thought for quite a long time.


The metaphysical problem has also generated an aura of mystery around the problem of consciousness, which is often characterized as an elusive or ineffable phenomenon. This has led philosophers to pay far less attention to the problem of what it means to explain consciousness scientifically. This much-neglected problem, however, can reveal some important clues about the nature of consciousness. My claim is not that by shifting perspective we will dissolve the hard problem, but that we can make progress in our scientific understanding of consciousness, and couch the hard problem within an intelligible structure. In other words, the question is not really whether we can in principle reduce consciousness to the “physical,” but rather whether we can articulate a robust philosophy of science that accounts for the explanatory structure of NCC research, and thus helps us chart the geography of the problem, identifying which areas seem tractable only with the conceptual arsenal of philosophers and which areas can benefit from a closer cooperation with scientists.

The scientific problem of consciousness belongs to the domain of cognitive neuroscience⁶. As Craver puts it, neuroscience is mainly driven by two goals (2007, pp. 1-2). The first goal is explanation. Neuroscientists try to explain phenomena like the release of neurotransmitters in the synaptic cleft, how neurons store information, or how an organism can exercise the capacity of spatial navigation. The second goal is to control the brain. Under this goal we find, for example, the attempt to diagnose and treat neural diseases. The neuroscience of consciousness is no exception to these goals (Hohwy & Bayne 2015, pp. 161-163; Vernazzani 2015).

Focusing on content-NCC research, the first goal is to explain how the brain makes us conscious of a seemingly richly detailed visual field. We want to explain how the brain generates our conscious visual content, and in our case, visual objects. The second goal is to intervene on content-NCCs. There are many reasons for intervening on content-NCCs. An obvious reason is for diagnostic and therapeutic purposes. But there are also other reasons. Koch (2004, p. 100), for example, states that manipulating NCCs can help us jump from a mere neural-consciousness correlation to an account of how the brain causes conscious experience. Another reason is that manipulation of brain areas may help us locate the content-NCCs (e.g. Koubeissi et al. 2014; Parvizi et al. 2012). As we will later see (§4.3), recently developed NIBS (non-invasive brain stimulation) techniques have been successfully applied in content-NCC research in the attempt to disentangle neural prerequisites and neural consequences of conscious experience (e.g. Aru et al. 2012; de Graaf et al. 2011). To these two goals, Hohwy & Bayne (2015, pp. 162-163) add a third one,

prediction. Understanding content-NCCs may help us formulate reliable predictions about, for example, what a subject sees from the observation of neural activity (impressive advancements in this area have been achieved by the Gallant laboratory, cfr. Nishimoto et al. 2011). The three goals are complementary. Good explanations warrant reliable predictions (Douglas 2009), and manipulating the content-NCCs can both play a heuristic role, dissecting different kinds of neural correlates, and give us a better understanding of how content-NCC mechanisms work.

6 It may be objected that not all philosophers share this internalist standpoint (e.g. Noë & Thompson 2004). Some philosophers urge that consciousness might not be located in the brain at all, but perhaps be distributed over other bodily parts, or even external items. Virtually all philosophers, however, agree that the brain is necessary for consciousness. What is at stake in the debate is just how much consciousness depends on the brain. Since it is trivially accepted that the brain is necessary for consciousness, I will safely assume that consciousness is first and foremost a problem for neuroscientists. I will later (§4) show how a commitment to externalist views may alter the prospects for the search for content-NCCs.

3. The Standard Definition of Content-NCC

In this Section, I introduce (§3.1), and later (§3.2) criticize, the standard definition of content-NCC put forward by Chalmers (2000). As Fink (2016) rightly points out, this definition is the mainstream account accepted by most neuroscientists and philosophers working on NCC research (e.g. Block 2005; Hohwy 2007, 2009; Koch et al. 2016; Tononi & Koch 2008) (for an overview of theories and methods in the scientific study of consciousness, cfr. Klink et al. 2015). Therefore, criticizing it and its theoretical assumptions is a step towards a reconfiguration of the problem of the NCC.

3.1 Chalmers’ Definition

In his seminal contribution, Chalmers (2000) provided the first rigorous definition of a “neural correlate of consciousness.” More specifically, and in light of the distinction between state, background, and contents of consciousness (cfr. Ch. 3, §2.1.2), Chalmers distinguished between neural systems responsible for state consciousness, or “NCCs,” and specific neural systems that are responsible for the contents of consciousness, or “content-NCCs.” NCCs and content-NCCs jointly form the “total-NCC” (or “full-NCC,” Koch et al. 2016), the sum of all brain systems responsible for an organism’s consciousness in its entirety. Let us focus on content-NCCs. Chalmers provided the following definition:

An NCC (for content) is a minimal neural representational system N such that representation of a content in N is sufficient, under conditions C, for representation of that content in consciousness. (Chalmers 2000, p. 31)

Before we turn to the problems of this definition, let us clarify some of its core features. I single out the following concepts as particularly relevant: correlation, conditions C, and the minimal sufficiency requirement. I will then turn to a special relation that would hold between the neural representational system and content, the matching relation.

The very idea of a “correlation” is so deeply rooted in NCC research as to be inscribed in the very notion of Neural Correlates of Consciousness. Needless to say, correlation is not equivalent to causation. For example, buying a TIAA life insurance policy may be correlated with better life expectancy, but it does not mean that owning a TIAA life insurance policy is causing a

longer life (Cartwright 1976). There might be a common factor that explains both why people purchase a TIAA life insurance policy and why they live longer (perhaps because such people are wealthier). The term “correlation” was put forward by Crick (1996, p. 485), who meant it as a means to sidestep some thorny philosophical problems about the relation of consciousness to neural activity. In the same year, Chalmers declared that «neurobiological approaches to consciousness […] can […] tell us something about the brain processes that are correlated with consciousness. But none of these accounts explain the correlation […]» (1996, p. 115); and in his 2000 paper, he reiterated that the search for correlations «can be to a large extent theoretically neutral» (2000, p. 37). The correlation thesis has been criticized by McCauley & Bechtel (2001), who argue that talk of mere correlation does not do justice to the explanatory practice in the sciences, which proceeds by making heuristic identity assumptions. Other scientists, like Koch (2004, p. 100), adopt an instrumental stance towards correlation, the aim of NCC science being that of moving from correlation to causation. As I will later argue (§4.3), while the notion of correlation helpfully leaves open the exact metaphysical pattern holding between content and the underlying neural system, it does not helpfully distinguish between systems responsible for a particular phenomenon and systems merely closely correlated with it⁷.

Let us now consider the “conditions C.” Neural systems do not exist in a physical (biological) vacuum: they are embedded in a wider biological system. Some of the features of such a system make its proper working possible. I will call these features “supporting factors.” There are different kinds of supporting factors. Koch (2004, pp. 88-89) (cfr. also Rees 2002) distinguishes between enabling and specific factors. He defines enabling factors, or NCCe, as «tonic conditions and systems that are needed for any form of consciousness to occur at all» (Koch 2004, p. 88). Examples of NCCe include, but are not limited to: the activity of cholinergic neurons, activity of the intralaminar nuclei within the thalamus (ILN) (e.g. Bogen 1995, 2007), and in general other subcortical structures, like the basal forebrain, portions of the basal ganglia, and the claustrum (Blumenfeld 2016). Koch’s specific factors would be the proper NCCs, the mechanisms that generate consciousness or conscious content, i.e. creature-NCCs and content-NCCs. Block (2007) draws a somewhat similar distinction between “causal” and “constitutive” factors. The latter are equivalent to Koch’s specific factors, whilst the former are the set of specific causal conditions that must obtain in order to enable the activation of the constitutive factors. Note that Block’s causal factors are distinct from the prerequisite neural activity that is required to activate a specific mechanism (e.g. Aru et al. 2012; cfr. §4.1). Among the causal

factors, Block singles out what he calls “supply factors,” such as appropriate oxygen and glucose levels, the role of glial cells in supporting neurons, etc. (Koch et al. 2016, p. 308).

7 Talk of “correlates” is not universally accepted in the science of consciousness. Some researchers prefer to talk about neural bases or substrates of consciousness (e.g. Aru et al. 2012, 2015; Block 2007; de Graaf et al. 2011; Hohwy & Bayne 2015; Miller 2015b; Revonsuo 2000, 2015). As I will later explain, the focus of scientific interest is on mechanisms that are constitutively relevant for a particular phenomenon, whereas other processes will be merely correlated with it.

It is obvious that without supporting factors no neural system—and perhaps no prosthetic system as well, cfr. §3.2—can work. I will therefore follow Chalmers in adding a generic “condition C” as the set of all conditions under which a mechanism might constitute the phenomenon. As I have anticipated, I will argue that “content-NCCs” are in reality multiple mechanisms having different functions. There is, of course, no reason to suppose that the conditions C will be identical for all these kinds of mechanisms. Arguably, such mechanisms will share some minimal supporting factors, such as the blood supply, but will also require more specific conditions.

Let us now turn to the notion of minimal sufficiency. The motivation for introducing the minimal sufficiency requirement is clear enough. If we target mere sufficiency, then the whole brain would trivially be sufficient for conscious experience. What we want is a criterion that non-trivially pins down a minimal brain system that is responsible for a particular phenomenon, i.e. some content or consciousness. As Hohwy & Bayne (2015, pp. 157-159) and Miller (2015b) point out, however, the notion of minimal sufficiency is itself far from clear. To illustrate this point, Hohwy & Bayne put forward the following example. Suppose there is a neural population N that is the minimally sufficient correlate of a mental state M, that M only occurs when neurons 1 to 10 of N fire, and that any one of the 10 neurons may fail to fire and M would still occur as long as the other nine fire. It follows that none of the individual parts of N is indispensable for M (ibid., p. 158). The notion of sufficiency is similarly problematic. With “sufficiency” we understand the idea that a system N generates consciousness all by itself; yet other neural activity may be required in order to let N generate a given state M. Such neural activity may belong to the conditions C, the supporting factors, but how could we tell whether some neural activity is merely supporting, or rather constituting, M?
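The logical structure of Hohwy & Bayne’s example can be made explicit. The following is only an illustrative sketch in my own notation; the threshold of nine out of ten firing neurons is theirs, the formalization is not:

\[
M \;\Longleftrightarrow\; \bigl|\{\, i \in \{1,\dots,10\} : F_i \,\}\bigr| \;\ge\; 9,
\]

where $F_i$ abbreviates “neuron $i$ of $N$ fires.” For each individual neuron $k$, $M$ can occur while $F_k$ fails, so no single part of $N$ is indispensable for $M$, even though the population $N$ as a whole remains minimally sufficient: minimal sufficiency does not distribute over a system’s parts.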

I have already adumbrated several issues with Chalmers’ definition. Before I raise more serious objections, and present my proposal about how to overcome them, we need to dwell on the notion of a matching relation. In the case of content-NCCs, Chalmers posits some kind of matching relation between the content of N—the neural representational system—and the content of consciousness. Noë & Thompson (2004, p. 3) call this the Matching Content Doctrine (MCD). The MCD consists of the following theses:

i. N is the minimal neural representational system whose activation is sufficient for the occurrence of E;
ii. There is a match between the contents of E and N.

We can graphically display the MCD by means of Fig. 11:


Fig. 11: Chalmers’ content-NCCs. The neural representational system N represents content₁ (n) in N, which is supposed to match (i) conscious content₂ (c) in C.

It is unclear what this matching relation amounts to (perhaps a form of isomorphism? Cfr. Ch. 8, §3.1) (Vernazzani 2016b). What is clear is that, for Chalmers, the notion of minimal sufficiency for content-NCCs is closely tied to the MCD. If a neural system N counts as a content-NCC—excluding the confounding factors—there must be a matching relation between conscious and neural content.

I will now discuss some serious objections against Chalmers’ definition that, together with Fink’s (2016) criticism, will make a case for abandoning the mainstream definition and for articulating a new account of content-NCC research.

3.2 Problems with the Standard Definition

I cluster the criticisms against Chalmers’ definition of content-NCC into two groups. The first group of objections is drawn from Fink (2016). Fink does not distinguish between content-NCC and state-NCC, and it is unclear how his notion is supposed to do justice to the so-called “state-based approach” in the science of consciousness, i.e. the attempt to identify the NCCs for creature consciousness. State-based approaches are usually devised in the form of contrastive analyses between healthy participants and patients with severely diminished consciousness, perhaps due to coma, vegetative state, etc. (Koch et al. 2016, p. 308). The second group of objections is mainly mine, and casts doubt more radically on Chalmers’ understanding of content-NCCs.

Given that Fink does not distinguish between content-NCCs and NCCs, he raises several objections against Chalmers’ definition of the NCC or “state-NCC.” In particular, his objections target the following aspects: (i) Chalmers’ definition targets the capacity for an experience, not the neural correlate of an occurrent experience; (ii) it does not mirror the actual usage of “NCC” in the science of consciousness; (iii) it ties the definition of NCC too closely to the neural system, without allowing for the possibility of an artificial NCC; (iv) it does not take into account neural plasticity; (v) it does not offer a useful operationalization in light of experiments on NCCs. Criticisms (ii) and (v) will be discussed more thoroughly later, respectively below in this Section and in §4.3. I concur with Fink that Chalmers’ definition does not mirror the actual usage of “NCC” in the science of consciousness, and I agree that it does not offer a useful account in light of experiments and manipulations on content-NCCs (§4.3). However, my considerations will be developed largely independently of Fink’s own standpoint, and steer decisively away from his account in that I espouse a mechanistic approach to content-NCC research (§4).

Concerning criticisms (iii) and (iv), Fink’s considerations are as follows. If we take some neural activation as necessary for experience, we would rule out a priori the very possibility of: (1) artificial experiencers, «i.e. non-biotic conscious machines or programs» (2016, p. 2; Gamez 2008); (2) preserving consciousness in silicon brain prostheses; (3) neural plasticity, the possibility that some brain regions might take over the function of an impaired structure (e.g. Wittenberg 2010). In this work, it is not possible to further explore to what extent artificial structures, or perhaps completely artificial and non-biological agents, are possible⁸. It is important to stress, however, that Fink’s objection is not so much against the fact that NCCs are neural or biological structures—something that is obvious given that neuroscientists seek to find out the brain structures responsible for conscious experience in animal-biological agents—but against the claim that they logically need to be so. I retain this aspect of Fink’s criticism: we should allow NCCs to be (at least conceptually) possible also in non-biological artificial agents. Similarly, we should allow for the possibility of different brain structures being somehow involved in consciousness, in accordance with the plasticity thesis.

However, I am far less convinced by Fink’s suggestion (i) that—since Chalmers’ definition targets neural “systems” that merely subserve the capacity for an experience—we should reformulate the definition of NCCs in terms of neural events or processes. Fink here overlooks one important aspect of NCC research—and perhaps of much of neuroscience research more generally—namely the fact that neuroscientists target mechanisms (§4.1). The search for brain correlates of conscious experience, or of conscious content, is instrumental to finding out in virtue of what kind of neural activity, or computation, the brain is able to engender our conscious experience. This is part and parcel of the explanatory goal of neuroscience that we have discussed above (§2). Indeed, Fink, just like Chalmers (see below), misses this important feature of content-NCC research, and in this sense they are both guilty of (ii): producing a definition of NCCs that does not capture the actual practice of neuroscientists working on consciousness. In this sense, it violates the criterion of descriptive adequacy (Ladyman & Ross 2007; Machamer et al. 2000). As we will see shortly, a mechanistic framework of explanation can overcome Fink’s problems by shifting the perspective to mechanisms as explanatory units of visual accuracy phenomena.

8 As a side remark, I would like to point out that «artificial experiencers» is not synonymous with «non-biotic conscious machines or programs» (Fink 2016, p. 2). A system, or a component of a system, may be artificial and biological. For example, we can imagine in a not-so-distant future a team of neuroscientists growing a population of artificially produced biological neurons to serve as prostheses in case of brain insult. Hence, the criticism may be better reformulated as follows: we should not rule out a priori the possibility that an entirely non-biological agent may be conscious in virtue of the activity of some system S. This way of formulating the criticism avoids construing S as a neural system. Notice, however, that this proposal differs substantially from Fink’s own rendition of NCC2.0, since he rejects altogether the claim that NCCs are “systems” (cfr. the text above).

Let us now come to the second group of criticisms against Chalmers’ definition of content-NCC. I single out the following problems: (i) it assumes a somewhat unclear relation between content and consciousness; (ii) it is problematic on its own terms, since it assumes that, alongside state-NCCs, there will be an indefinite number of content-NCCs; (iii) talk of minimal sufficiency seems to betray the acceptance of some form of covering-law model of explanation, which does not seem to fit the explanatory structure of content-NCC research. This latter point dovetails nicely with the problem hinted at earlier with regard to Fink’s criticism, namely that Chalmers’ definition does not capture scientists’ research goals in NCC research.

Consider the first issue (i). Chalmers’ definition assumes a specific interpretation of the relation between consciousness and content. More specifically, it assumes, somewhat unclearly, that a specific content becomes conscious in virtue of being represented in a minimally sufficient neural system N, under some conditions C. But is the content produced in that very same neural system N? Or is it re-represented in N, as if appearing on an inner Cartesian stage (Dennett 1991; Ch. 1, §1.2)? Chalmers remains silent on this issue. But there is more. Not all philosophical theories of consciousness will be compatible with this conceptualization of content-NCCs. For example, suppose that content only becomes conscious if it meets some functional requirements, for example if the content is poised or accessible for other mental states (e.g. Tye 1995, 2000) (cfr. §1.2; Ch. 3, §2). On this understanding of the content-consciousness relation, one may suggest that a re-representation of content in an NCC is unnecessary, and that all that is required is that the content be made available to other cognitive functions. Relatedly, the very definition of content-NCC seems to exhibit an odd content redundancy, as if there were two distinct contents (Fig. 11). One is the neural content, and the other is the conscious content. Chalmers’ terminology strongly suggests that there are two kinds of contents, connected by a relation of supervenience or correlation (cfr. below). Once the neural content gains access to a content-NCC, the very same content is represented in consciousness. This seems to suggest that there will be two vehicles for content: one is the neural representational system N (the content-NCC), and the other will be consciousness. Indeed, Chalmers assumes the existence of two distinct but parallel planes, a physical or neural plane and a consciousness plane. This also generates the problem of explaining in what relation they stand. As we will later see in relation to problem (iii), we touch here on the core issue of Chalmers’ account: the lack of a clear explanatory link between the contents of consciousness and the underlying neural representational systems.


Consider now problem (ii): Chalmers’ account is unclear about how many content-NCCs there should be in the cognitive system. Remember that, for Chalmers, there are two kinds of NCCs: creature-NCCs and content-NCCs. Assuming that there will be a creature-NCC, a system responsible for making a subject conscious, one could argue that there must be as many content-NCCs as there are kinds of contents that may be conscious⁹. For example, there might be content-NCCs for visual content—or perhaps for many distinct features of visual content (§4.2.1)—for acoustic content, taste content, etc. A somewhat similar suggestion has been advanced by O’Brien & Opie: there would be «several distinct phenomenal consciousnesses, at least one for each of the senses, running in parallel» (1998, p. 387). Content-NCCs, so understood, may be a subcomponent of a content-mechanism, or a non-overlapping but connected system. This interpretation seems coherent with what Searle (2000) called the “building-block” approach (cfr. also Bayne 2007) (cfr. §4.2.2). On this approach, a subject’s overall consciousness emerges as the sum of many distinct phenomenal experiences, and each of these experiences is produced by a specific consciousness mechanism. The clearest statement of this view is the theory of micro-consciousness put forward by Bartels & Zeki (1998) (cfr. also Zeki 2007). The opposite view is sometimes called the “unified field view.” This view has it that consciousness should be understood as a single space—perhaps a global workspace (Dehaene & Naccache 2001)—or as a space of integrated information (e.g. Tononi 2007). The unified field approach casts doubt on the very notion of content-NCC. If consciousness is a single phenomenal field, what are content-NCCs really doing? A suggestion may be that content-NCCs are not really making a content conscious, but rather selecting a content for consciousness (Hohwy 2009; cfr. §4.2.2). This interpretation seems particularly attractive on conceptual grounds (e.g. Bayne 2007; Hohwy 2009; Hohwy & Bayne 2015), on the basis of empirical findings, for example on sensory extinction (e.g. Rees 2001; Rees et al. 2002; Driver & Mattingley 1998; Vuilleumier & Rafal 2000)—a failure to report a contralesional stimulus only in the presence of a competing ipsilesional stimulus, when both stimuli are briefly and simultaneously presented—and in light of experimental strategies, in particular work on binocular rivalry (e.g. Blake et al. 2014; Tong et al. 2006). In this sense, as I am going to show in the next Section (§4.4), Chalmers assumes the existence of a single locus of control that makes visual content conscious (Bechtel & Richardson 2010, p. 59), but recent advancements in NCC research suggest a more complex interaction between different loci of control. Taken together, these considerations will eventually lead us to reject the very notion of “content-NCC.”

9 Chalmers seems to assume that there will be one single creature-NCC. I am not so sure about that, and I believe that, paying closer attention to both scientists’ research strategies and the conceptual foundations of consciousness science, it will turn out that consciousness is actually a complex of distinct and closely related phenomena, generated by distinct mechanisms or at least by the joint activity of distinct mechanisms. I will not tackle the issue of the creature-NCC in this work, and my assumption that there is a “something,” perhaps a single mechanism, that makes us conscious is mainly motivated by expository reasons.

The final aspect that emerges from the foregoing discussion is that neither Chalmers nor Fink connects the issue of content-NCCs to an explanatory framework. As I have stressed in §2, the quest for the NCCs is mainly driven by an explanatory goal. Neisser (2012) has pointed out that Chalmers’ commitment to the minimal sufficiency thesis betrays an adherence to a covering-law model of explanation (Hempel & Oppenheim 1948; Salmon 1989). In his words: «[t]he language of minimal sufficiency implies that neurobiological research aims at covering laws of the sort familiar from the tradition of logical empiricism» (Neisser 2012, p. 687). According to the covering-law model—a variant thereof is the deductive-nomological (DN) model (Hempel & Oppenheim 1948, cfr. also Ch. 6, §3.1)—explaining a phenomenon means subsuming it under a logical structure in which at least one law of nature must figure among the premises. On the DN model, an explanation is just a deductive argument where the explanandum is the conclusion and the premises must feature at least one law of nature plus some statements of antecedent conditions. Certainly, Hempel & Oppenheim (1948) were well aware that their model was an idealization that serves as a normative ideal, rather than an actual description of how scientists work. Shortly after its introduction into the debate, philosophers articulated more and more objections against the covering-law model of explanation (e.g. Craver 2007; Salmon 1984). It is this explanatory model that seems to be assumed by Chalmers, where the explanatory burden is carried by some mysterious law of consciousness, or law of nature, bridging NCCs with consciousness, as I am going to show. Since we do not know in virtue of what kind of law of nature we might deduce consciousness from some statements of antecedent conditions, we cannot explain consciousness.

Chalmers’ understanding of content-NCCs, as well as of NCCs more generally, is that neuroscientists can, at best, uncover the antecedent conditions that must be fulfilled for a conscious experience E to occur. However, as we have seen, he does not think that these conditions will suffice to explain the correlation (Chalmers 1996, p. 115). The link between the phenomenon and the explanans in the covering-law model is nomological, not causal. The same assumption is in play in Chalmers’ understanding of NCCs. The main problem is that, for Chalmers, there is a relation of nomological supervenience between NCCs and the explanandum. The idea is basically this: there is a set of mental properties that supervenes on a set of neural functional properties. More precisely, supervenience is a purely modal relation of dependent variation between two sets of properties (McLaughlin 1995, 2011)¹⁰. Whenever there is a variation in a set of properties A, there is a corresponding variation in the more fundamental set of properties B. Supervenience is meant to fix a relation of covariance, of dependence, and of non-reducibility of the higher set of properties to the more fundamental one (e.g. Kim 1993a; Savellos & Yalçin 1995). It is precisely the irreducibility of the supervening set of properties that has fueled the philosophical interest in supervenience after

the critiques of Putnam (1967) and Davidson (1970) against reductionism in the philosophy of mind (Nagel 1961). According to the standard reductive story, a theory T₁ can be reduced to a more fundamental theory T₂ iff all the laws of T₁ can be deduced from the laws of T₂ in conjunction with bridge laws that connect the heterogeneous vocabularies of the two theories. Against this picture, Putnam argued that bridge laws would not help us reduce the mental to the physical because of the multiple realizability of the former, whereas Davidson’s criticism centered on the absence of a law-like relation between the physical and the mental. Supervenience seems to be a promising move if one wants to preserve the non-reducibility of the mental while keeping the mind dependent on the brain’s activity. There are different forms of supervenience, depending on the modal operators or the range of world-binding quantifiers. We do not need to sort out the different versions of supervenience; it suffices here to mention one specific form of the supervenience relation that Kim calls the “Correlation Thesis”:

10 I construe the supervenience relation as holding between (sets of) properties for simplicity’s sake, but supervenience can also be construed as holding between other kinds of relata.

For each psychological event M there is a physical event P such that, [as a matter of law], an event of type M occurs to an organism at a time just in case an event of type P occurs to it at the same time (Kim 1993c, p. 178, brackets added)

It is easy to fill in this quotation with our vocabulary. The physical event P is nothing else but a neural content of N (n), and the psychological event M is the corresponding conscious content (c) in C (cfr. Fig. 11). Chalmers’ definition of content-NCC thus seems committed to a supervenience thesis that aims at isolating a specific subset of neural activity N that is alone minimally sufficient to correlate, via supervenience, with the corresponding content of consciousness. It is possible to further develop the supervenience framework without making any causal commitment between the two domains of the visual contents of consciousness (visual accuracy conditions) and the underlying neural system. All that is required is to fix some relation of systematic co-occurrence such that, in a statistically significant number of cases, an event P will co-occur with an event M. Per se, the supervenience relation is silent about the exact pattern that holds between the properties. And it is precisely this aspect that makes the correlative relation «theoretically neutral» (Chalmers 2000, p. 37). My contention is that in the present case supervenience is either philosophically uninteresting, or it offers an inadequate description of the explanatory practices of neuroscientists.
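For illustration, the Correlation Thesis can be restated in the notation of Fig. 11. This is only a minimal sketch of my own; in particular, reading “as a matter of law” as a necessity operator is a gloss, not Kim’s or Chalmers’ official formalization:

\[
\forall c\, \exists n\;\; \Box\bigl(\,\text{$c$ is represented in consciousness ($C$) at $t$} \;\leftrightarrow\; \text{$n$ is represented in $N$ at $t$}\,\bigr),
\]

where $c$ ranges over conscious (visual) contents and $n$ over neural contents. Dropping the box yields the “mere supervenience” reading discussed next; keeping it yields the nomological reading.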

There are two possible interpretations of the supervenience relation. We might call the first interpretation “mere supervenience.” We get this interpretation if we suspend the bracketed phrase “as a matter of law” in Kim’s quotation above. According to this reading, we merely state that the visual accuracy conditions supervene (strongly or weakly, etc.) on some minimally sufficient neural state. The problem with this sort of supervenience is that, as Kim observed, it «merely affirms a dependence relation of an unspecified sort and does nothing more to explain the nature of psychophysical covariance. [...] supervenience itself is not an explanatory relation» (Kim 1993b, p. 167). This is the critical issue: supervenience is not an

explanatory relation. A mere supervenience interpretation of the relation between neural states and contents of consciousness simply does not capture the explanatory undertaking of the science of consciousness. In fact, stating that there is some form of co-variation between contents of consciousness and the underlying neural states only expresses a philosophical triviality. What we want from a definition of content-NCC is something more robust than mere supervenient correlation. Furthermore, as we will later see (§4.3), talk of mere correlation creates the problem of conceptually—rather than merely methodologically—untangling “the” proper NCC from its confounds: prerequisite and consequent neural activity. As I am going to show in the next Section, scientists aim at identifying the mechanisms that are constitutively relevant for the target phenomenon.

Let us now turn to the second interpretation of supervenience, which we can call “nomological supervenience.” We get this interpretation if we remove the brackets in Kim’s quotation above: it is «a matter of law» that property M supervenes on P. Here, the supervenience relation is fixed by means of some yet-unknown law of consciousness. This confers on the correlation relation the appearance of an explanation, of a sort similar to a DN model. The putative explanatory structure of content-NCC research becomes clear: we should expect a law, and statements of antecedent conditions, like the neural content being represented in N, the conditions C, etc. I think that this is precisely what Neisser has in mind when he attacks the notion of minimal sufficiency. Indeed, there is some textual and theoretical evidence that some philosophers have interpreted the correlation of NCCs precisely in this sense. Neisser correctly quotes Metzinger, who defines an NCC as a «[...] minimal set of basic physical properties [...] that the system needs in order to exhibit the target properties by nomological necessity» (Metzinger 2000, p. 285). Analogous remarks can be found in Chalmers (1996), where he maintains that we could introduce a set of supervenience laws that show how phenomenal properties correlate with physical properties. Such supervenience laws would not «interfere» with physical laws, but would rather form another closed set of laws (1996, p. 127).

There are at least two motivations for being skeptical of the nomological interpretation. A first motivation is a generic commitment to some form of naturalism. There are different forms of naturalism in contemporary philosophy, and I won’t try to spell them out given the complexity of the problem (cfr. also Ch. 8, §2). Suffice it to say that I espouse a minimal form of naturalism according to which we should take scientific knowledge as a key source of our understanding of the world (e.g. Bechtel 2008a; Ladyman & Ross 2007). Hence, philosophy should account for how phenomena are scientifically investigated and explained. This stance is also expressed as a criterion of descriptive adequacy (Craver 2007, pp. 19-20; Machamer et al. 2000, pp. 20-25), according to which philosophical theories about science and scientific explanations should adequately describe how scientists work. From this stance, it follows that a DN-like explanation of content-NCCs plainly violates the criterion of descriptive adequacy. Scientists do not explain content-NCCs by means of “laws of consciousness” (whatever these might be), nor do they try to deduce conscious experience from statements of antecedent conditions (Bechtel & Abrahamsen 2005; Craver 2005, 2007). As I will argue in the next Section (§4), content-NCC research is inherently mechanistic. The second reason for being skeptical of nomological supervenience is motivated by the well-known critiques directed against the DN model (e.g. Craver 2007; Salmon 1984, 1989), such as the problem of screening off explanatorily irrelevant premises from the deductive argument (for a review, cfr. Salmon 1989), or the problem of identifying the exceptionless laws that are required to make the DN model work. There has been considerable debate around the problem of laws in the special sciences. The problem with these kinds of laws is that they apparently admit many exceptions, making them unhelpful in the construction of DN explanations. Mitchell (2000) maintains, for example, that although biological laws are less stable than physical laws, they can nonetheless provide causal knowledge and be exploited to predict, explain, and guide interventions. Woodward (2001, 2003), on the contrary, lays emphasis on the notion of invariance under interventions. Finally, Cummins (2000) has characterized the alleged cases of “psychological” laws as mere effects that are themselves explananda. In short, the main crux of this approach is that it would rely on a highly controversial—and still, to a large extent, mysterious—notion of a “law of consciousness.”

4. A Mechanistic Approach to Content-NCCs

The problems outlined in the previous Section can be overcome by adopting a mechanistic-manipulationist framework of explanation. Espousing this approach will also pave the way to other advantages for the science of consciousness (§5). I will first (§4.1) define the notion of mechanism and clarify the nature of mechanistic explanation. Next (§4.2), I will argue that the notion of “content-NCC” somewhat obscurely refers to different mechanisms with different functions: intentional mechanisms, selection mechanisms, and the “proper NCC.” Later (§4.3), I adumbrate a manipulationist standpoint on content-NCC research. Finally (§4.4), I briefly summarize my results, highlighting in what sense my account is an improvement over Chalmers’ definition.

4.1 Mechanisms and Mechanistic Explanation

Conceptually, there are at least two distinct problems in relation to content-NCCs. The first problem is that of explaining visual accuracy phenomena (§1.1). The second problem is to explain how (some) visual accuracy phenomena become conscious (§1.2). In both cases, our goal is to explain how the cognitive system engenders visual accuracy phenomena and consciousness thereof. As an extensive literature shows, explanation in the life sciences—as well as in cognitive science—is broadly mechanistic (e.g. Bechtel 2008a; Bechtel & Abrahamsen 2005; Bechtel & Richardson 2010; Craver 2007; Craver & Darden 2013; Kauffman 1971; Machamer et al. 2000; Miłkowski 2013; Piccinini 2007; Wimsatt 1972). Mechanistic explanations are often contrasted

with deductive-nomological (DN) explanations. As we have seen (§3.2), according to the DN model scientific explanations are deductive arguments, where the explanandum features as the conclusion, and among the premises there must be some statements of antecedent conditions and at least one law of nature (Hempel & Oppenheim 1948). Another contrast is with functional explanations (e.g. Cummins 1983), where the explanandum phenomenon is functionally decomposed into a number of sub-functions (e.g. Craver 2007, pp. 107-162).

Mechanistic explanations are sometimes called “how” explanations, i.e. they explain why a particular explanandum phenomenon occurred by showing how the underlying mechanism that constitutes it works. On this framework, the explanatory emphasis is not on deductive arguments, but on the causal relations holding between the mechanism’s parts and operations. According to many philosophers working on mechanisms—especially those who accept an ontic view of explanation (e.g. Craver 2007, 2014)—to explain in this way means, borrowing a term due to Salmon (1984), to situate a phenomenon within the “causal structure of the world.” Quite obviously, since explanation is achieved by elucidating the structure of the mechanisms underlying a phenomenon, it is important to give a clear definition of “mechanism” before moving on. There are different notions of mechanism in the literature (e.g. Bechtel & Abrahamsen 2005; Glennan 1996, 2002; Garson 2013; Illari & Williamson 2012; Machamer et al. 2000). Although there are important differences among these definitions, all agree that mechanisms are systems with a specific organization that allows their internal parts to jointly produce the phenomenon. The following definition will work for our purposes:

Mechanism: A hierarchical system of component parts {c₁, c₂, …, cₙ} and their operations {o₁, o₂, …, oₙ}, structured in such a way as to constitute a system-level activity {A} that is the explanandum phenomenon¹¹.
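Schematically, and only as an illustrative notation of my own (not one drawn from the mechanistic literature cited here), the definition can be compressed as follows:

\[
\mathcal{M} \;=\; \bigl\langle \{c_1, \dots, c_n\},\; \{o_1, \dots, o_n\},\; \mathrm{Org} \bigr\rangle \quad \text{such that } \mathcal{M} \text{ constitutes } A,
\]

where Org stands for the spatial, temporal, and causal organization of the parts and their operations, and A is the system-level activity that serves as the explanandum phenomenon.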

Mechanisms are for a specific phenomenon, a feature that has sometimes been called “Glennan’s law” (Bechtel 2008a, pp. 13-14; Craver 2013; Darden 2006, p. 273; Glennan 1996; Kauffman 1971)¹² (interestingly, Dretske expressed a similar thought in his 1995, p. 5). A complete mechanistic explanation is achieved when all and only the relevant component parts, operations, and their structure are uncovered, showing how the explanandum results from the

11 The definition is clearly tailored to capture the notion of mechanistic constitution. One could easily adjust the definition to make room for etiological explanations, but I will not further discuss the notion here. 12 This feature of mechanisms is sometimes characterized differently. For example, Machamer et al. (2000) stress that mechanisms are sought to explain how a phenomenon is produced, Bechtel & Richardson (2010) how a task is carried out, and Glennan (1996) how a mechanism behaves. At a finer grained level of analysis, these distinctions have significant consequences in characterizing the metaphysics of mechanisms, but for my purposes these remain merely terminological choices. I stick to “phenomena” in order to emphasize the realist commitment I laid out earlier. 123 Chapters 5: A Mechanistic Standpoint on Content-NCC Research joint activity of parts and operations (Craver 2007, p. 111; 2014, p. 40). The “relevancy” of a part, operation, or of the structural arrangement between them is determined essentially by our explanatory and descriptive goals. In other words, there is no general recipe for determining in every case what parts and operations will be: it is essential that they must play an active role within the mechanism, operating or being operated on (Bechtel & Wright 2009). On this aspect, philosophers sometimes speak of the perspectival nature of what is picked out to explain a phenomenon (Craver 2013, Darden 2006, pp. 273-274; Kauffman 1971)13.

A mechanistic explanation can be achieved in two ways: either by showing the causal chain of events that led to the explanandum, or by showing how the activity of the mechanism constitutes the explanandum. In the former case, we talk about etiological explanations, in the latter case about constitutive mechanistic explanations (Craver 2007, pp. 107ff; Kaiser & Krickel 2017). It is constitutive explanations that are of particular interest here. The notion of constitution refers to the «causal behavior of [the mechanism’s] constituents» (Salmon 1984, p. 270). A constitutive explanation thus seeks to determine in virtue of what underlying constituents a capacity can be explained. Prototypical examples of constitutive explanations are the heart pumping blood (Bechtel & Abrahamsen 2005, p. 425; Craver & Darden 2013, pp. 98-117), the decomposition of the visual system (Bechtel 2008a, pp. 89-128; cfr. §4.2.2), and arguably content-NCC explanations (Miller 2014, 2015b).

Mechanistic explanation can be characterized as a piecemeal approximation towards a complete description of what is relevant about a mechanism. The final goal is to move from an incomplete sketch of how a mechanism may work—a representation containing black boxes that stand for component parts or operations of the mechanism—to an exhaustive how-actually description of all relevant parts, operations, and their structure (Bechtel 2008a, p. 18; Wright 2012; Wright & Bechtel 2007, pp. 49-54). In order to do this, the first step is to circumscribe the explanandum phenomenon, and then try to identify its locus of control (Bechtel 2002; Bechtel & Richardson 2010, pp. 63-92; Craver & Darden 2013, chapter 4). Once the locus of control has been individuated, the next step is to decompose it and show what its component parts, operations, and organization are. The decomposition strategy espoused by mechanicists is fairly similar to other decompositional strategies discussed in the literature, like articulation of parts explanation (Kauffman 1971), functional analysis (Cummins 1983, 2000), and reverse engineering (Dennett 1994) (cfr. also Craver 2007, p. 109). Bechtel distinguishes between phenomenal and mechanistic decomposition. The former consists in sorting out the varieties of the explanandum, differentiating similar but distinct phenomena. For each explanandum, a locus of control is sought. The identification of component parts in a mechanism is, by itself, not sufficient to achieve a complete explanation. Operations must be localized as well: this is to say that operations must be connected with the relevant component parts of the mechanism. This in turn yields insights into the system’s organization (Bechtel & Richardson 2010, p. 246)14. Finally, once a complete decomposition has been achieved, we should reassemble the mechanism to see whether it produces the explanandum phenomenon (Bechtel & Richardson 2010).

13 For example, in the mechanism of neurotransmitter release described by Craver (2007, pp. 22-24), Ca2+ is a component part of the mechanism, whereas the operations are the intracellular reactions triggered by Ca2+. They are parts and operations precisely because they play a role in producing the phenomenon, in this case, the release of neurotransmitters in the synaptic cleft.

Is content-NCC research mechanistic? A quick glance at the literature reveals the widespread use of the term “mechanism” in NCC research. Consider the following passages: Aru & Bachmann’s (2015) paper bears the title “Still wanted — the mechanisms of consciousness;” Koch defines content-NCCs as «the smallest set of brain mechanisms […] sufficient for some conscious feeling» (2004, pp. xv-xvi); Bachmann & Hudetz talk about «brain mechanisms and processes that have been proposed as necessary for consciousness» (2014, p. 3); Tononi & Koch define the NCCs as «minimal neuronal mechanisms that are jointly sufficient for any one specific conscious percept» (2008, p. 246) (all emphases are mine). Of course, mere terminological congruence is not by itself a reason to believe that content-NCC research is mechanistic. More relevant than terminology is the decompositional strategy of consciousness studies, which is exemplified by the words of Francis Crick: «while the whole may not be the simple sum of the separate parts, its behavior can, at least in principle, be understood from the nature and behavior of its parts plus the knowledge of how all these parts interact» (1994, p. 11). As I will show, this strategy is perfectly consistent with a mechanistic approach.

We have seen (§1.1) that human agents15 exercise the remarkable cognitive skill of identifying material objects in the world (Ch. 4), objects whose properties are extracted and made manifest in states of seeing. In virtue of what does an organism possess this capacity? A first answer may be to accept something like Chalmers’ definition: a visual accuracy phenomenon is conscious in virtue of a specialized mechanism that just makes contents conscious. However, a closer inspection of this concept reveals that things are more complex. Firstly, it seems that—given at least the possibility (and the empirical evidence thereof) of unconscious content—one phenomenon that deserves explanation is content itself. Secondly, given that not all contents are conscious, and that sometimes a particular content is selected over other options, it seems that there must be some kind of mechanism selecting contents to become conscious. Thirdly, there is the proper question of explaining what exactly makes a content conscious. In the next subsections, I set out to show that all these different phenomena are at play behind the somewhat obscure concept of “content-NCC.” These phenomena all contribute to fixing the representational content of states of seeing. I will thereby show that current scientific research on content-NCCs actually reflects these different functions, and that the very notion of “content-NCC” is just a somewhat obscure label for research on different mechanisms that subserve different functions. In other words, scientific practice does not aim at finding Chalmers’ content-NCCs.

14 It is sometimes said that mechanisms afford a third way of thinking about explanation, between ruthlessly reductive accounts (Bickle 2003) and emergentism (Bechtel 2008a, & Richardson 2010, pp. xliv-xlvii; cfr. also Hensel 2013). In what sense mechanistic explanations can be said to occupy such a middle ground depends also on how we articulate the notion of “reduction.” I gloss over the problem of reduction in this work.
15 I here focus on humans, but of course, many of the experiments devised to uncover the structure of the visual system have been performed on monkeys.

4.2 Decomposing Content-NCCs

The scientific search for neural correlates of the contents of consciousness, it will now be shown, is not tailored to look for Chalmers’ “content-NCCs.” A closer look at the experimental practice and theoretical assumptions reveals that there are at least three different kinds of mechanisms that are the object of content-NCC science. I call them: intentional mechanisms (§4.2.1), selection mechanisms (§4.2.2), and the proper-NCC (§4.2.3). I justify the distinction between these different kinds of mechanisms in the next subsections. My account stands in contrast with Chalmers’ assumption that there must be (a) “content-NCC(s).” All these mechanisms are jointly responsible for producing a content and making it conscious.

4.2.1 Intentional Mechanisms

In order to enjoy a state of seeing a visual object, the cognitive system must first generate content. In other words, there must be some mechanisms responsible for fixing the visual accuracy phenomena, in our case, visual objects. Strictly speaking, the problem of explaining how the cognitive system generates content is not an aspect of the quest for explaining consciousness, but an aspect of the problem of naturalizing intentionality. The explanation of visual accuracy phenomena is a paradigmatic example of successful mechanistic decomposition. I call the mechanisms responsible for content intentional mechanisms (IM) (Vernazzani 2015).

Explanation in psychology proceeds by means of functional analysis or decomposition (e.g. Cummins 1983, 2000; Dennett 1978; Fodor 1968; Piccinini & Craver 2011; Putnam 1960). There are different forms of functional analysis, but the general idea is that a complex capacity can be studied through decomposition into smaller and more tractable sub-capacities. Analogously, we can start by observing a psychological phenomenon, or set of phenomena, and then decompose it into a number of sub-phenomena. This is precisely the strategy developed by Cummins (1983, 2000), and the one that Piccinini & Craver (2011) call “task analysis.” The decomposition of the explanandum capacity can obey different functional criteria. Consider the case of visual objects. Visual objects are composed of many distinct properties which can be studied independently, and which together form a coherent whole, a visual accuracy phenomenon. Take a visual object, Ψ. (Notice that I follow the convention, introduced in Ch. 2, §1.1-2, of addressing the relational structure of the Phenomenological Domain Ψ with Ψ; as is by now clear, such relational structures are the visual objects, cfr. Ch. 4, Ch. 7. Visual objects are our explananda.) Ψ can be decomposed into a number of sub-phenomena (Fig. 12).

Fig. 12: A visual object Ψ can be decomposed into a number of visual accuracy phenomena, p1, p2, p3.

There may be different ways to achieve the decomposition. One way, for example, is to pursue a functional decomposition: each property of the object will play a distinct functional role. Other invisible aspects of the visual object may also play a functional role. For example, there may be some active binding mechanism (e.g. Tacca 2010; cfr. Ch. 4, §3.1) connecting the features to the visual object. Another approach may suggest decomposing the object in virtue of the distinct qualitative character of its properties, for example distinguishing colors from forms, texture, etc. According to Craver (2007), as well as Piccinini & Craver (2011), task analysis by itself does not produce explanatory but merely phenomenological models. Phenomenological models (cfr. Ch. 6, §3.2; Ch. 8, §2.2.3) are models that do not explain phenomena, but that can nonetheless be used for a variety of purposes, like identifying regularities, summarizing data, etc. (e.g. Bogen 2005; Craver 2014, pp. 38-39). I will later put forward the claim that a form of model pluralism about visual objects can open different epistemic spaces (Ch. 8, §2) to study distinct features of their targets. Yet, such models do not provide explanations. Piccinini & Craver argue that the only way in which task analysis can provide explanations is by integrating it with a mechanistic approach to explanation. I do not claim that all task analysis must invariably be supplemented by a mechanistic approach and that, therefore, the only way to achieve explanation of a psychological capacity is by means of mechanisms. However, I agree with Bechtel (2001b, 2002, 2008a) in considering the decomposition of the visual system as a paradigmatic case of successful mechanistic explanation. There is indeed considerable evidence that the visual system—at least, the descriptive visual system (cfr. Ch. 3, §1)—can be decomposed so as to identify a number of operations that are then related to specific brain structures (Singh & Hoffman 1997) (Fig. 13).


Fig. 13: The phenomenal decomposition (functional or categorical) identifies different visual accuracy phenomena (p1, p2, p3) that compose a visual object Ψ. These phenomena are then recognized as the functions of different intentional mechanisms (IM1, IM2, IM3).

The intentional mechanisms responsible for a particular feature of visual objects can be detected in different ways, for example through studies of brain insults and their effects (e.g. Zeki et al. 1991), or by means of fMRI measurements of variation in the brain’s activity under different tasks (e.g. Grill-Spector & Malach 2004). In general, in order to determine the causal relevance of a particular brain region to a specific function, the neural activity «must be manipulated either experimentally (e.g. using transcranial magnetic stimulation (TMS)) or as a consequence of neurological disease causing brain damage» (Rees 2016, p. 67) (cfr. also §4.3). For example, awareness of motion is impaired if feedback signals from V5/MT to V1 are disrupted by means of TMS (e.g. Pascual-Leone & Walsh 2001). A similar case is that of achromatopsia, which is engendered by lesions on areas V4 and V4α (Sacks et al. 1988; Zeki 1990) (cfr. Ch. 1, §2.2). Evidence about the functional specialization of the cortex is well established, at least since the seminal study conducted by Fellman & Van Essen (1991). More recent developments include findings of specialized regions responsible for higher-order aspects of visual object perception. The most famous case is perhaps that of the fusiform face area (FFA), apparently involved, as the name suggests, in face processing (Kanwisher 2010; Kanwisher et al. 1997). Other cortical regions seem to have a similar specialization: for example, the parahippocampal place area is apparently involved in the perception of buildings and scenes (e.g. Aguirre et al. 1998), while other specialized areas seem to respond to tools (e.g. Martin et al. 1996), animals (ibid.), the human body (e.g. Downing et al. 2001), and even chairs (e.g. Ishai et al. 1999) (cfr. Malach et al. 2002)16.

16 The exact function of these areas is far from clear. Philosophically, this issue intersects with the problem, mentioned in §1.1, of the richness of perceptual experience. Some may argue that these regions work exactly as feature/property detectors. This would support the rich content view: we would not only see “basic” visual features, such as color, forms, etc., but also higher-level properties, such as “being a building,” “being a face,” or “being a corkscrew.” Alternative interpretations of the experimental results are also possible. For example, one may interpret them as neural correlates of a conceptual recognition. I will briefly return to this issue later (Ch. 8, §2.3) when I describe visual objects as structural representations. A further problem touches on the putative specialization of these areas. It has been shown, for example, that FFA fires even in the absence of face perception (e.g. Rees et al. 2000). This may call into question the hypothesis of a region specialized for face detection. Alternatively, the region may simply be involved in many different functions, being thus a component part of multiple mechanisms.

The fact that cortical regions have different functions, processing different features of visual objects that «must be constructed by the visual system» (Singh & Hoffman 1997, p. 98), does not mean that these units operate independently. There is a good deal of evidence against the “locality assumption,” according to which distinct cortical mechanisms process information in an informationally encapsulated, i.e. rigidly modular, way (Fodor 1983). In general, specialized cortical areas are highly interactive, and it is a matter of empirical investigation to show to what extent some cognitive functions work interdependently (Farah 1994). A nice illustration of the interdependence of different specialized cortical mechanisms is the case of neon color spreading (cfr. Ch. 1, §1.1, fig. 2). In neon color spreading, the impression of a visual object—a large, bluish circle in the middle of the four represented circles—is given by the concerted activity of color and shape mechanisms.

I call these mechanisms “intentional mechanisms” (IM) because their function seems to be that of fixing the description, i.e. the accuracy condition, of states of seeing. A definition of IM can now be given:

IM: A hierarchical system of component parts {c1, c2, …cn} and their operations {o1, o2, …on} structured in such a way as to jointly constitute—under conditions C—a visual accuracy phenomenon {Ψ} that is the explanandum phenomenon.

An IM can of course be further decomposed into a number of smaller intentional mechanisms to reflect the fact that a visual object is a construction made out of different accuracy phenomena. In this case, we would change the explanandum phenomenon from Ψ (the relational structure, i.e. a visual object) to p1, p2, …pn, which include both the properties and other configuration factors that may, together with the properties, constitute a visual object (cfr. Ch. 7, §2.1). As we will later see, it is IMs that will play a prominent role in the matching relation between content and neural activity (Ch. 8, §3.1).
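As a purely hypothetical illustration of the decomposition pictured in Figs. 12-13—and not an empirical claim—the mapping from sub-phenomena p1, p2, p3 to intentional mechanisms IM1, IM2, IM3 can be sketched as follows; the feature labels (color, shape, motion) and the bracketed area glosses are assumed for the sake of the example only.

```python
# Illustrative only: a visual object Ψ decomposed into sub-phenomena,
# each recognized as the function of a distinct intentional mechanism (IM).
visual_object = {           # Ψ as a cluster of visual accuracy phenomena
    "p1": "color",
    "p2": "shape",
    "p3": "motion",
}

intentional_mechanisms = {  # hypothetical localization of each sub-phenomenon
    "color": "IM1 (e.g. color-processing circuitry such as V4)",
    "shape": "IM2 (e.g. shape/contour-processing circuitry)",
    "motion": "IM3 (e.g. motion-processing circuitry such as MT/V5)",
}

for p, feature in visual_object.items():
    print(f"{p} ({feature}) is the function of {intentional_mechanisms[feature]}")
```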

4.2.2 Selection Mechanisms

Intentional mechanisms belong to the descriptive visual system (Ch. 3, §1.1), and their function is to fix the accuracy condition of descriptive visual states. But as we have seen in Ch. 3, not all descriptive visual states are conscious. States of seeing are a subset of conscious descriptive visual states. In virtue of what does a descriptive visual state become conscious? It is precisely the purpose of content-NCC research to puzzle out a solution to this question. I argue that much of content-NCC research, and in particular studies on binocular rivalry, targets selection mechanisms (Hohwy 2009), i.e. mechanisms that select a proper subset of the representational states to become conscious. Furthermore, I show that this solution seems consistent with a predictive coding approach.

The paradigmatic approach to content-NCC research, as conceived for example by Koch (2004), is based on a contrastive analysis between different perceptual contents. These studies have mostly focused on binocular rivalry (e.g. Blake & Logothetis 2002; Blake et al. 2014; Haynes et al. 2005; Rees 2001; Tong et al. 1998, 2006), attentional paradigms (e.g. Klink et al. 2015, pp. 35-36), sensory extinction (e.g. Rees 2001, et al. 2002; Driver & Mattingly 1998; Vuilleumier & Rafal 2000), various forms of neglect (e.g. Buxbaum et al. 2004; Li & Malhotra 2015), and masking (e.g. Rees & Frith 2007). Binocular rivalry experiments have been a staple of content-NCC research for decades now. The experimental setting is simple and elegant. When dissimilar images are presented to the eyes of a subject (dichoptic presentation), they compete for perceptual dominance: only one image is visible in turn to the subject, whilst the other one is suppressed. Usually, the dominant image is presented to the subject for a few seconds, and then the formerly suppressed image acquires dominance and becomes present to the subject (for a classic review of binocular rivalry, cfr. Blake & Logothetis 2002). The dominance of one image over the other can be manipulated. In a study conducted by Zaretskaya et al. (2010), for instance, the use of TMS over the right intraparietal sulcus has been shown to prolong the periods of perceptual stability. Other studies on binocular rivalry have also shown that the LGN—a region of the thalamus that serves as an anatomical bridge connecting the optic nerve, and thus the retinas, with the striate cortex—does not actually play a passive role, merely passing along the information detected by the eyes (Haynes et al. 2005; Wunderlich et al. 2005; for a brief review, cfr. Leopold & Maier 2006). Instead, these studies show the presence of activity fluctuations in the LGN during spontaneous perceptual changes in binocular rivalry. This shows that subcortical structures are also directly involved in conscious perception. The reason why studies on binocular rivalry have been so popular in the last decades is that, in the absence of a stimulus variation, one may have the chance to distinguish «neural correlates of consciousness» from «neural correlates attributable to stimulus characteristics» (i.e. intentional mechanisms) (Rees 2001, p. 151). It is thus assumed that the putative content-NCCs are distinct from the feature-specific intentional mechanisms.

As Hohwy (2009), following Searle (2000, 2004, chapter 5), and Bayne (2007) remark, studies on content-NCCs are methodologically (Hohwy) and conceptually (Bayne) problematic. The reason is that studies on content-NCCs presuppose that the subject is already conscious. It is therefore natural to doubt whether contrastive studies may reveal something like a “content-NCC.” As we have seen, Searle contrasts a building-block and a unified-field approach (2000, pp. 272-274). The former considers consciousness as the product of distinct mechanisms, each constituting part of the machinery of a specific content of consciousness. A paradigmatic example of this strategy is Zeki’s theory of microconsciousness (e.g. Bartels & Zeki 1998; Zeki 2007). The unified-field approach, on the contrary, assumes the existence of a single background state of consciousness. On this view, there should be a single generator of consciousness, which may be something like a global workspace (Dehaene & Naccache 2001), or an integrated information network (e.g. Tononi 2007). As Hohwy remarks, given that subjects are already conscious in studies on binocular rivalry, it seems problematic to consider these research strategies as targeting anything like a “content-NCC” in the sense defined by Chalmers. Instead, they may be targeting a group of selection mechanisms, i.e. mechanisms that select, among many contents, the one (or ones) that will become conscious. Suppose that the brain is producing multiple contents at any time, i.e. multiple representations of visual objects. As we have seen (Ch. 3, §1.1), not all descriptive visual states are conscious; only states of seeing, a subset of descriptive visual states, are. There will arguably be many distinct non-conscious representations of visual objects in the brain17. However, only one (or few, if we consider that the visual scene is populated by many distinct visual objects) will make it to consciousness (e.g. Block 2005, p. 47; Dehaene & Changeaux 2004). The role of selection mechanisms is that of selecting the contents that will become conscious, much like recruiters identify and select people for a particular position (going to the front as soldiers, becoming sellers or advertisers, etc.). The actual selection, of course, need not be an arbitrary process (nor is it plausible to assume so). The selection may be the result of a complex computational process in which multiple factors play a role18.

A definition of selection mechanisms (SM) can now be given:

SM: A hierarchical system of component parts {c1, c2, …cn} and their operations {o1, o2, …on} structured in such a way as to jointly constitute—under conditions C—the phenomenon of selecting {S} a cluster of visual accuracy phenomena for consciousness.

I refer to a “cluster of visual accuracy phenomena” to allow for the possibility of: (1) considering visual objects as composed of distinct visual accuracy phenomena that are disjointly selected; and (2) considering SMs as selecting multiple visual objects that populate the visual scene at a time t. It is far from clear how many SMs there will be in the brain. On merely conceptual grounds, it seems at least plausible to posit the existence of multiple SMs. Perhaps there will be as many SMs as there are modes of content (cfr. §1.1; Ch. 3, §1.3). So, for example, it might be the case that there will be SMs specialized for acoustic content, as well as others specialized for taste content, etc. So far, this is a merely theoretical option, and further research may corroborate or disprove it.

17 In light of this, we can briefly return to Block’s discussion of the patient GK (cfr. Ch. 3, §2.2). As we have seen, Block interprets the fact that GK’s FFA lights up even when the face is presented in the blind area of his visual field as a sign that GK is phenomenally, but not access, conscious of a face stimulus. Although I will not explicitly argue against this interpretation, my account naturally suggests an alternative explanation. Assuming, but not conceding, that FFA is indeed the sole mechanism responsible for face perception, it may be the case that GK’s impairment is a failure of the selection mechanisms to recruit information from a particular side of the visual field. Although the IM for face perception works, its content is not selected for consciousness.
18 It may for example be the case that some configurations of visual objects are preferred due to background higher-cognitive states, for example to fit expectancy, or to conform to a particular knowledge of the selected object, or due to contextual cues. Attention may play a role in the process. In many cases, then, ambiguous stimuli would be “disambiguated” by the background knowledge of the perceiver, and only in a small subset of cases—in particular, when neither contextual cues nor any background knowledge guide visual object presentation—will there be a constant switching between different contents (Vernazzani & Marchi, manuscript).
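The following toy sketch is purely illustrative and makes no empirical commitment: it merely renders the idea, suggested in note 18, that factors such as expectancy, contextual fit, and attention could bias which of several competing, already-fixed contents a selection mechanism picks out for consciousness. The candidate contents, factors, weights, and scoring rule are all hypothetical.

```python
# Illustrative sketch only: a selection mechanism (SM) choosing among
# competing candidate contents on the basis of weighted biasing factors.
candidate_contents = {
    "face_interpretation":  {"expectancy": 0.7, "contextual_fit": 0.6, "attention": 0.8},
    "house_interpretation": {"expectancy": 0.3, "contextual_fit": 0.4, "attention": 0.2},
}

def selection_mechanism(candidates):
    """Return the content with the highest (hypothetical) weighted score."""
    def score(factors):
        return (0.4 * factors["expectancy"]
                + 0.3 * factors["contextual_fit"]
                + 0.3 * factors["attention"])
    return max(candidates, key=lambda name: score(candidates[name]))

print(selection_mechanism(candidate_contents))  # face_interpretation
```

Nothing in the sketch settles how the actual computation is realized; it only illustrates that selection can be a non-arbitrary, multi-factor process.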

4.2.3 The Proper-NCC

With “proper-NCC” I refer to the constitutive mechanism of consciousness. Two things are worth stressing. First, I retain the notion of “NCC” for terminological continuity with the research literature, although I steer away from the account sketched out in that literature, as shown in this Section. Second, although I talk about a single “NCC,” I intend this merely as a terminological choice. In fact, one could argue that the proper NCC is actually a complex of many distinct mechanisms that jointly produce what we define as consciousness. The rationale for introducing the proper NCC (or “pNCC”) is that, as we have seen in the previous subsection, investigating the contents of consciousness is only possible within an already conscious subject.

In short, the pNCCs may be mechanisms responsible for making us “conscious” in general, independently of the specific state we are in. They may be creature-NCCs, or a cluster of more specific pNCCs. A vast literature is now slowly uncovering multiple NCCs for such states of being conscious. As I said, in the case of “content-NCC” research, it is assumed that experimental subjects are already conscious, and that «the level of consciousness is assumed to be constant» (Boly et al. 2013, p. 5). I have introduced the notion of level of consciousness earlier (Ch. 3, §2.1.2) (cfr. Laureys 2005; Overgaard & Overgaard 2010). Such levels of consciousness are better captured by examples: different stages of sleep (N-REM I-IV and REM sleep), coma (e.g. Gosseries et al. 2014; Owen et al. 2006), vegetative state (e.g. Boly 2011), minimally conscious state (e.g. Giacino et al. 2002). To a great extent, this literature rests on contrastive studies, comparing recorded activity of healthy subjects with that of subjects in altered states of consciousness. Adopting the terminology that I introduced earlier (cfr. Ch. 3, §2.1.2), I distinguish state-consciousness, the condition of being conscious at a given time (possibly at different levels of consciousness), from creature-consciousness, the capacity of an organism to be conscious. The neural mechanisms responsible for making us conscious, i.e. creature conscious, may be—and probably are—distinct from the mechanisms that subserve different levels of consciousness. Different theories have been advanced regarding the former: information integration theories (e.g. Tononi 2008), global workspace theory (e.g. Baars et al. 2012), and recurrent or reentrant theories (e.g. Lamme 2006), among others.


Orthogonal to this question, we find the issue of the distinction between a- and p-consciousness. If we give credence to Block (2005, 2007, 2008, 2011), it may be the case that there are at least two distinct pNCCs, one responsible for phenomenal consciousness, and another one responsible for access consciousness (cfr. §1.2; Ch. 3, §2). Block has illustrated this distinction in multiple ways over the years. To stick with his 2005 formulation, consider the case of movement perception already illustrated above (§4.2.1): Transcranial Magnetic Stimulation applied to MT/V5 and V1 disrupts movement perception. As Block points out, it seems implausible that activation of MT/V5 alone would suffice for conscious perception of movement. Following Lamme (2004), it is likely that this activation has to be part of a feedback loop or “recurrent processing.” Block takes this to be a case of phenomenally conscious content: the joint activation of MT/V5 and of V1 via a feedback loop produces the phenomenal experience of movement. Supposing—and not granting—for a second that Lamme’s theory is right, and that the pNCC for consciousness is a system responsible for feedback loops, for Block this would only illustrate the neural machinery of phenomenal consciousness. However, even in a system working correctly, where the activation of MT/V5 is followed by a feedback loop of the kind specified, the system may fail to integrate this kind of content within a system responsible for access consciousness. Such a system may have the form of a neural global workspace (e.g. Dehaene & Naccache 2001; Dehaene & Changeaux 2004), for example. According to this model, different contents compete for gaining access to the global workspace. As Block points out, since a content may be processed by the machinery responsible for phenomenal consciousness but fail to be captured by the machinery responsible for access consciousness, we may have evidence that the NCCs are actually of two different kinds: NCCs for access and NCCs for phenomenal consciousness.

Block’s “two-NCCs” hypothesis might easily be wedded to my approach. For example, it may be the case that whereas intentional mechanisms are responsible for fixing the visual accuracy conditions of states of seeing, selection mechanisms will ultimately identify and “recruit” the right contents, which will later be delivered either to the phenomenal-NCC or to the access-NCC. It then stands as an open question whether there are two distinct kinds of selection mechanisms, and why they mostly seem to project content to both systems, rather than only one, at least in normal conditions (patients affected by forms of neglect or blindsight may be interesting precisely because they offer a window to disentangle these different phenomena and related mechanisms; cfr. Block 2007, p. 491).

Perhaps one may ultimately argue that the distinction between two kinds of NCCs is unsound, and that accessibility and phenomenality are either two concepts that pick out one and the same phenomenon, or two distinct concepts that pick out really distinct phenomena with overlapping, non-dissociable neural bases. For example, it may be argued that consciousness cannot be—even in principle—separated from “function” or any related notion of accessibility, and that making this distinction simply invites theoretical and experimental confusion (e.g. Cohen & Dennett 2011).

I will not settle the question of which model is the correct way to approach the proper neural correlates of consciousness, or pNCC. Strictly speaking, this problem belongs to the question of the neural correlates of consciousness, rather than to the problem of the neural correlates of conscious content. It seems at least likely that the problem of consciousness may actually be decomposed into a number of distinct phenomena that are constituted by very different mechanisms. Another available option is that the concept of “consciousness,” as it is standardly used, does not actually reflect the current practice of scientific research, and therefore finds no correspondence in the scientific enterprise. However we construe the problem of the neural bases of consciousness, it seems likely that so-called «state-based research» (Hohwy 2009) will have to proceed in tandem with the content-based approach sketched out earlier.

4.3 Manipulating the Contents of Consciousness

Earlier, we have seen that manipulating the brain—and therefore cognitive functions—is one of the major goals of the search for content-NCCs (§2). Moreover, I have already hinted at the fact that manipulation of particular brain regions is helpful in determining the function of a specialized area. Manipulation, or intervention, of course, need not be artificial. As Woodward (2003) makes clear, we should also count as forms of intervention brain lesions due, for example, to accidents, or the deterioration of cognitive functions due to endogenous factors. In this Section, I will briefly discuss the prospects of a manipulationist account of content-NCCs. First (§4.3.1), I will present the correlation-constitution distinction, and the problem of disentangling prerequisite and consequent neural activity from the various mechanisms (IMs, SMs, pNCC). I agree with Klein (2016) in regarding brain regions as difference-makers. Next (§4.3.2), I will argue that my mechanistic-manipulationist account is well equipped to meet the foregoing challenge and disentangle different kinds of mechanisms.

4.3.1 Prerequisite vs. Consequent Neural Activity, and the Correlation Problem

Some neuroscientists (Aru et al. 2012, 2015; Bachmann 2009; de Graaf & Sack 2014, 2015; de Graaf et al. 2011; Kanai & Tsuchiya 2012; Melloni & Singer 2010; Miller 2015b; Ruhnau et al. 2014), as well as philosophers (Hohwy & Bayne 2015), have recently highlighted a methodological problem in the search for NCCs: the identification of the right NCCs among the neural confounds. Although formulated within Chalmers’ understanding of content-NCC, the problem can easily be transposed to my account as well. In short, the problem can be formulated as follows. Suppose that the right conditions C obtain and that the mechanism we are looking for—say, a SM or an IM—is actually activated in such a way as to constitute the explanandum phenomenon. Together with our target mechanism, many other distinct kinds of neural activity will be observed by means of our detection techniques. Granted that we may somehow exclude non-neural supportive factors, there will arguably be mechanisms, or neural activity in general, that precede the activation of our target mechanism, as well as consequent neural activity, and the target may be difficult to disentangle from these confounds. The problem is illustrated in Fig. 14.

In other words, the target neural activity or mechanism will always be embedded within a four-dimensional system consisting of: prerequisite-NCCs, mechanisms or neural activity temporally antecedent to our target mechanism, i.e. neural processes (or maybe mechanisms) whose activity is required for the activation or the proper working of the constitutive mechanism of the target phenomenon19; consequent-NCCs, mechanisms or neural activity that follow the activation of our target mechanism20; and parallel-NCCs, mechanisms or neural activity that co-occur with the activation of our target mechanism but do not constitute our target phenomenon (to my knowledge, the problem of parallel-NCCs has not yet been discussed in the literature)21. All these processes are correlated with our explanandum phenomenon, whether it is a particular visual accuracy phenomenon, the selection of content, or “consciousness,” following my tripartite account.

19 In this sense, they may be regarded as constituting part of the “conditions C,” the supporting factors. An example of prerequisite activity may be connectivity between the reticular formation and the precuneus (Silva et al. 2010).
20 Consider an example. Neurons in the medial temporal lobe (MTL) respond with high fidelity to the subject’s visibility of the stimulus, such that «conscious versus non-conscious trials can be distinguished solely based on the[se] neurons’ firing rate» (Bachmann 2009, p. 739; cfr. also Quiroga et al. 2008). However, damage or even resection of these neurons does not lead to suppression of conscious perceptual experience (Postle 2009). This suggests that the firing of neurons in MTL is probably a neural consequence of a content-NCC’s activity. Several researchers, namely Aru et al. (2012, 2015), Seth (2009), and de Graaf & Sack (2015), point out that expecting a neural consequence is simply a logical upshot of assigning a functional role to conscious experience. Two things must be said in relation to this remark. First, neural consequents are, of course, to be expected not only in the case of consciousness. For example, there will certainly be some neural consequent activities in response to IMs, etc. Second, and more importantly, whether some mechanism underlying consciousness—probably, one of the pNCCs (cfr. §4.2.3)—will have a neural consequent cannot lead to the simplistic conclusion that the phenomenon (consciousness) will affect in some way the neural activity of the consequent. As far as I can see, the only way to adjudicate the question depends on how we articulate the phenomenon-mechanism relation, i.e. how we spell out the notions of mechanistic constitution and of constitutive mechanistic phenomena. Although I later mention this issue (cfr. Ch. 8), I remain largely neutral on it in this work.
21 There is no terminological consensus. Miller (2001, 2007, 2014, 2015b) speaks of constituents and correlates, and specifies that the former, but not the latter, are the proper explanatory targets of consciousness science. Hohwy & Bayne (2015) prefer the distinction between “upstream” and “downstream” neural activity, whereas de Graaf & Sack (2015) speak of prerequisite and consequent neural activity. I have adopted the latter terminology, as it strikes me as particularly clear. Later in this subsection (§4.3.2) I will distinguish between prerequisite, parallel, and consequent neural activity on the one hand, and constitutive mechanisms on the other.

The introduction of the concept of “minimal sufficiency” (cfr. §3.1-2) was precisely meant by Chalmers to screen off redundant neural activity. Prerequisite, consequent, and parallel neural activity will not be minimally sufficient for the target phenomenon (Bayne & Hohwy 2013, p. 25; Hohwy & Bayne 2015). But as we have seen, Chalmers does not elaborate on the explanatory structure of our problem, and misses the crucial point that it cannot be merely by “fiat” that some “systems” (as he calls them) are related to the explanandum; otherwise we would simply be exposed to the following natural question: Why would consciousness, or any other explanandum phenomenon, be related to the activity of a particular system or mechanism? (We will later see another version of this question, the “mere description worry,” in Ch. 6, §3.2.) Furthermore, as Neisser (2012) points out, Chalmers’ account seems very close to a DN understanding of the whole issue. Within the mechanistic framework that I have been sketching out in this Chapter, however, there is a straightforward answer as to why some systems or neural processes will not be related to the explanandum phenomenon: because either they are not constitutive parts or operations of a mechanism responsible for the explanandum—in short, they are not constitutively relevant for it—or, if they are mechanisms, they are not mechanisms for the sought target phenomenon. The notion of “constitutive relevance,” whether applied to parts and operations or to a whole mechanism, does precisely the job required to sort out different kinds of neural activity. One way to determine whether a mechanism, or some part or operation, is constitutively relevant to the explanandum is by means of manipulations or interventions.

Fig. 14: Target mechanisms are always embedded within a larger system. Their activation is usually preceded by prerequisite-NCCs and followed by consequent-NCCs. In addition, parallel (p) activities are to be expected, either as a byproduct of the target’s activation or simply as unrelated activity of other sorts.

4.3.2 Manipulation and Mechanisms

Most mechanicists espouse a manipulationist approach to causation (Bechtel 2008a; Craver 2007) (on manipulationism or interventionism, cfr. Woodward 1997, 2000, 2003, & Hitchcock 2003a, 2003b). Woodward thinks that it is «heuristically useful to think of explanatory and causal relationships as relationships that are potentially exploitable for purposes of manipulation and control» (2003, p. 25). It is important to bear in mind that the term “manipulation” should not be understood in a restrictive, purely artificial sense. Lesions of brain regions are also a form of manipulation that alters the normal functions of the brain, even when they are not intentionally produced by an agent, e.g. for diagnostic purposes. Another important feature of manipulationism is that a manipulation need not always be achievable. There may be different reasons for this: lack of the appropriate technology, unrepeatable historical events, etc. Quite obviously, we cannot prevent Carole Lombard from starring in Lubitsch’s «To Be or Not To Be», which was released in 1942. But were we ideally able to (perhaps with the aid of a specially modified DeLorean, to remain within a filmic context), we could have altered the causal structure of this historical event, and we might now be watching Lubitsch’s film starring Miriam Hopkins instead. The simple lesson is that, if we are able to intervene in a given system, we might alter its causal structure. More precisely, a variable X is causally relevant to a variable Y iff—when all other variables V that may exert an influence on X and Y are kept at some fixed value—there is an ideal intervention on X that changes the value of Y, or its probability distribution (cfr. also Craver 2007, pp. 93-106). Following Woodward, the notion of “variable” is neutral about the nature of the relata.
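To make the criterion vivid, here is a minimal, purely illustrative sketch: a toy structural model (with hypothetical equations and variable names) in which an ideal intervention that sets X to a different value, while the background variable V is held fixed, changes Y—which is exactly what the manipulationist criterion requires for X to count as causally relevant to Y.

```python
# Illustrative sketch only: Woodward-style causal relevance in a toy model.
import random

def toy_system(x, v):
    """Hypothetical structural equation: Y depends on X and on a background variable V."""
    return 2.0 * x + v + random.gauss(0.0, 0.01)  # small noise term

def x_is_causally_relevant_to_y(v_fixed=1.0):
    y_baseline = toy_system(x=0.0, v=v_fixed)    # Y with X at its baseline value, V held fixed
    y_intervened = toy_system(x=1.0, v=v_fixed)  # ideal intervention: set X to a new value, V unchanged
    return abs(y_intervened - y_baseline) > 0.1  # did the intervention make a difference to Y?

print(x_is_causally_relevant_to_y())  # True: intervening on X changes Y while V is held fixed
```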

The adoption of a manipulationist approach (a) is descriptively more accurate in relation to the problem of content-NCCs (as well as, more generally, in the neurosciences, cfr. Craver 2007, pp. 63-106); and (b) offers an “in principle” way—as opposed to the actual and more specific ways employed in scientific practice—of disentangling different kinds of mechanisms or activities. I will not articulate a full account of manipulation regarding IMs, SMs, and pNCCs; this far exceeds the scope of the present work, and I will therefore return to it in a separate study. I will, however, briefly elaborate on points (a) and (b), as they help me further mark the distance of my account from that of Chalmers.

Consider (a) first. Intervening on putative content-NCCs is useful for different purposes. One such purpose is to disentangle the different mechanisms we have examined so far—i.e. intentional, selection, and pNCC mechanisms—from neural confounds such as prerequisite and consequent neural activity (cfr. my Vernazzani 2015). Hence, intervention techniques can help us locate the right mechanisms (e.g. Koubeissi et al. 2014; Parivizi et al. 2012). In their recent review paper, de Graaf & Sack (2014) highlight the role of non-invasive brain stimulation techniques for these purposes. New techniques such as NIBS (Non-Invasive Brain Stimulation) now play an increasingly important role in the search for brain mechanisms. Among NIBS techniques we find transcranial magnetic stimulation (TMS) and transcranial electric stimulation (TES), which includes transcranial direct current stimulation (tDCS) as well as transcranial alternating current stimulation (tACS). As Friston (2011) observes, NIBS are not an alternative way to study the brain, but rather provide a valuable complement to current neuroimaging techniques. Whilst a regional BOLD response in fMRI cannot tell us whether neural processing is «imperative for the task at hand» (de Graaf & Sack 2014, p. 6), manipulating a specific brain mechanism thanks to NIBS might elicit an alteration that reveals the function of the putative mechanism (together with other interventionist assumptions, like keeping other factors constant, etc., cfr. below). The use of NIBS techniques in NCC research is flourishing. A TMS pulse on the occipital lobe can, for example, generate a phosphene (e.g. Kammer 1999). Application of a TMS pulse on the motion area MT/V5 (Fellman & Van Essen 1991) elicits moving phosphenes (de Graaf & Sack 2014, p. 6). Another example of the application of TMS is the induction of virtual lesions in the parietal cortex in experiments on bistable vision (e.g. Carmel et al. 2010). More generally, all the previous instances of IMs, SMs, and pNCCs are but cases of manipulation. In other words, we identify IMs precisely because we rely on manipulations—active manipulations done by scientists in order to investigate the brain, or neuropsychological case studies—that reveal selective impairments in cognitive functions.

Regarding (b), many mechanistic philosophers, as I said, rely on manipulationism to identify the roles of parts, operations, or whole mechanisms (Bechtel 2008a, p. 38; Craver & Darden 2013, pp. 119-143). Manipulations can therefore be performed on a mechanism’s parts, operations, their structure, or on its behavior as a whole. Manipulations should be accompanied by suitable detection techniques that may reveal the specific alteration caused by the intervention. As I said, I cannot fully articulate a manipulationist account of the mechanisms under discussion in this work, but the present remarks show that my approach offers a theoretical solution to the problem of disentangling the different neural processes that are unrelated to the target mechanism or activity.

4.4 A New View of Content-NCC Research

Let me briefly summarize the results achieved so far. Earlier (§3), I argued that Chalmers’ definition of content-NCC does not capture the nature of the scientific work on consciousness. Among the drawbacks of his definition, the worst problem is that he does not embed the definition of content-NCC within a proper, descriptively adequate explanatory framework. A closer inspection of scientific work on content-NCCs reveals not only that much of the experimental work does not actually target what he calls “content-NCC,” but that, strictly speaking, there is no such thing as a “neural correlate of conscious content.” Instead, there are a number of distinct mechanisms, which include intentional mechanisms that fix the accuracy condition of descriptive visual states, selection mechanisms that pick out the content that will be “broadcast” to consciousness, and the proper-NCCs, the machinery responsible for

consciousness. Moreover, I have characterized such mechanisms as constitutively relevant for the function they constitute.

One implication for current philosophical research on the physical bases of consciousness is that, on my approach, we can (a) locate the “hard problem” within a coherent explanatory framework; and (b) provide the foundational work for a manipulationist theory of consciousness (rather than only of contents). Concerning (a), my point is simple: scientific advancements in consciousness science have been made, even though we have not solved the hard problem yet. Articulating a better explanatory framework for consciousness science sheds light on the structure of scientific research, and clarifies both how scientists are working and what paths do not seem promising in light of current research. Since not all mechanisms in the brain will be related to consciousness, but arguably only a fraction of the brain’s activity will be (e.g. Hobson 2007), we may further restrict the “locus” where the hard problem arises. Concerning (b), the point is, again, quite simple: the identification of the consciousness-relevant structures may one day lead to manipulating consciousness—like bringing a patient in a vegetative state back to consciousness, for example—in ways that are helpful both for diagnostic and therapeutic purposes and for purely scientific research purposes. Certainly, being able to manipulate consciousness—a project, it is worth emphasizing, that is not new: one can simply think of anesthesia as an excellent example of the manipulation of consciousness—does not solve the hard problem of consciousness. However, and again, it proves that progress in the study of consciousness is possible, in spite of all its difficulties.

The account introduced here will substantially change the problem of the matching relation between the content individuated by states of seeing and the underlying mechanisms. I will return to this issue in Ch. 8, §3. Before I conclude this Chapter, I want to briefly address some further advantages of my mechanistic approach to NCC research.

5. Schemas, Integration, and Content Ontology

So far, I have shown that what Chalmers calls “content-NCC” is actually an orchestrated set of distinct mechanisms that jointly enable us to see items in the world. I have also highlighted the fact that my definitions enable us to make some steps towards an explanation of how we can consciously perceive the world around us. In this final Section, I will introduce and briefly discuss some advantages of my approach over Chalmers’ understanding of content-NCC. In §5.1, I will present a schema of content-NCC mechanisms, comprising intentional, selection, and pNCC mechanisms, and show how scientific research—contrary to some pessimistic voices (e.g. McGinn 2012)—is slowly making progress in our explanation of consciousness. In §5.2, I will show that my account offers better prospects for integrating different fields of research in consciousness studies. In §5.3, I will finally outline some remarks on the ontology of visual content.

5.1 Schemas, Sketches, and Strategies

So far, I have shown that what goes under the name of “content-NCC” is actually not a single mechanism, but a complex structured process that involves different mechanisms. Given the current status of scientific research on consciousness, it is premature to draw stable conclusions about the abstract schema that illustrates how content-NCCs work. It is also not possible to venture conjectures about the detailed workings of these mechanisms, given that current research has yet to move on to this next stage. However, a clear advantage of my view over Chalmers’ definition is that we can now make use of the heuristic strategies developed by proponents of the mechanistic framework. Indeed, an interesting aspect of mechanistic explanation is that it lays emphasis on the heuristics and the logic of discovery, an aspect of scientific practice that was downplayed by the logical empiricists (Bechtel & Abrahamsen 2005)22. I will briefly discuss the notions of schemas and sketches, and then turn briefly to some strategies for the discovery of mechanisms.

My account of content-NCC mechanisms can be construed as a tentative mechanism schema, «a truncated abstract description of a mechanism that can be filled with more specific descriptions of component entities and activities» (Darden 2006, p. 281). An often-discussed example of a mechanism schema is Watson’s “central dogma of molecular biology:” DNA → RNA → protein (ibid.). As it stands, Watson’s schematic diagram is only a very abstract description of protein synthesis, since it lacks all details about how the mechanism works. The schema is, however, useful in guiding research and providing a first description of how the target mechanism(s) is supposed to produce, maintain, or otherwise underlie the explanandum (Craver & Darden 2013, pp. 67ff). The process of mechanism discovery starts with a preliminary description of the explanandum (conscious visual content), and then proceeds by specifying the features that an adequate description of the underlying mechanisms should satisfy. Creating an abstract mechanism schema enables researchers to conceptualize the search for content-NCC mechanisms (IMs, SMs, pNCCs) and elaborate a number of mechanism sketches (see below) that are consistent with the schema. The schema for content-NCCs that emerges from the foregoing examination of the empirical literature is represented in Fig. 15.

22 Famously, Popper (1994, pp. 6-7) claimed that the act of conceiving new ideas was the target of empirical investigation—more precisely, psychology’s object of inquiry—but not of philosophy of science.

Once scientists have constructed such a schema, another useful strategy is to rely on modular subassembly, i.e. to start from the hypothesis that a mechanism may consist of multiple modules or types of modules23 (Craver & Darden 2013, pp. 74-77; Darden 2006, pp. 286-287). It is perhaps premature to venture conjectures about the modular subassembly of SMs, but we can advance some considerations regarding IMs. A textbook example of modular subassembly of IMs is Fellman & Van Essen’s (1991) map of the hierarchy of visual areas in the macaque (Fig. 16).

Fig. 15: A mechanism schema for content-NCC research: intentional mechanisms → selection mechanisms → pNCCs. Some visual accuracy conditions are passed on to the selection mechanisms, which in turn make them available to the pNCCs. The pNCCs may be diversified into phenomenal-pNCCs and access-pNCCs.

Fig. 16 does not illustrate the actual hierarchical structure of the human visual cortex, but it has nonetheless been adopted as a blueprint or sketch of a mechanism that can be applied to human subjects as well. A sketch is an incomplete representation of a mechanism, or a how-possibly representation of the structure {S} of a mechanism, specifying a certain amount of information regarding component parts and operations. Unknown parts and operations may be substituted by filler terms, or black boxes, that it is up to future research to uncover, moving from an incomplete diagram to a complete one.
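As a purely hypothetical illustration—not a claim about how the brain actually works—the schema of Fig. 15 can be rendered as a pipeline in which the stages still to be discovered appear as explicit black boxes, i.e. filler terms; the function names and messages below are placeholders.

```python
# Illustrative mechanism sketch: one how-possibly stage plus black boxes (filler terms).
def intentional_mechanisms(stimulus):
    """How-possibly stage: fix a visual accuracy phenomenon (content)."""
    return {"content": f"visual object extracted from {stimulus}"}

def selection_mechanisms(contents):
    """Filler term: how contents are selected for consciousness is left unspecified."""
    raise NotImplementedError("black box: selection process still to be discovered")

def proper_ncc(selected_content):
    """Filler term: what makes the selected content conscious is left unspecified."""
    raise NotImplementedError("black box: constitutive mechanism of consciousness unknown")

def content_ncc_schema(stimulus):
    """The schema of Fig. 15 as a pipeline: IMs -> SMs -> pNCCs."""
    return proper_ncc(selection_mechanisms(intentional_mechanisms(stimulus)))

try:
    content_ncc_schema("retinal input")
except NotImplementedError as gap:
    print(f"sketch, not a how-actually explanation: {gap}")
```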

Hypotheses about the component parts and operations of a mechanism—say, an IM or a SM—can profit from another heuristic strategy of mechanism discovery, which relies on forward and backward chaining (Darden 2006, pp. 287-288). Forward chaining is a strategy that enables researchers to infer something about what kind of operations or component parts may be found at later stages. Some operation-enabling properties (Craver & Darden 2013, p. 78) may be identified in an IM that narrow down the spectrum of possible operations or parts of a SM. This strategy simply recognizes that not every kind of operation or part will interact with any kind of operation or part of a mechanism. The converse strategy is backward chaining. In this case, operation signatures can be used to infer what kind of operations or parts may have previously interacted with the target mechanism. Again, one might infer some property of IMs by relying on knowledge of some properties of SMs.
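A minimal, purely illustrative sketch of forward chaining under assumed signatures: given what an upstream mechanism delivers, only the downstream candidate operations that could operate on that kind of output remain worth pursuing. The formats and operation names are hypothetical.

```python
# Illustrative only: forward chaining as compatibility filtering.
upstream_output = {"kind": "visual_accuracy_phenomenon", "format": "feature_bundle"}

candidate_sm_operations = [
    {"name": "op_a", "accepts": "feature_bundle"},
    {"name": "op_b", "accepts": "motor_command"},
    {"name": "op_c", "accepts": "feature_bundle"},
]

compatible = [op["name"] for op in candidate_sm_operations
              if op["accepts"] == upstream_output["format"]]
print(compatible)  # ['op_a', 'op_c'] — the only candidates consistent with the upstream stage
```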

23 Talk of modules is usually associated with Fodor’s (1983) notion of modularity, which I call “strong modularity.” Bechtel & Abrahamsen (2002) talk instead of “weak modularity” in the case of mechanisms. The link between mechanisms and modules deserves closer attention than I can pay to it here. It suffices to say that talk of modules, i.e. more or less specialized components or mechanisms within a larger system, does not necessarily entail Fodor’s strong modularity.

Fig. 16: Felleman & Van Essen’s (1991, p. 30) hierarchy of visual areas in the macaque can be used as a blueprint or sketch in the search for IMs in humans.

In this Sub-section, I have outlined the gist of the strategies for schema construction in content-NCC research. I do not aim at completeness, but merely to show that my approach paves the way to more fruitful ways of investigating the problem of the neural correlates of conscious content, in comparison with Chalmers’ definition.

5.2 Interfield Integration for Consciousness Studies

A further advantage of my account over Chalmers’ standard definition is that it provides a better account of interfield integration. Indeed, Chalmers is silent on this issue, which happens to be of crucial importance in consciousness studies. As I have described in Ch. 1 (§2), some researchers believe that a fruitful study of consciousness must rely on multiple sources, including phenomenological reports or phenomenological methods. The problem of the naturalization of (Husserlian) Phenomenology that I have described earlier belongs precisely to this strategy (cfr. also Ch. 8, §3.2-3). Indeed, consciousness studies is a highly interdisciplinary field of research, which encompasses the neurosciences, philosophy, psychology, anthropology, psychiatry, and many other disciplines. It is therefore most natural to couch the subject matter within a workable and fruitful framework of interfield integration.

The notion of a “field” (in contrast with that of a “theory” in intertheoretic accounts) has been defined in an influential paper by Darden & Maull (1977):

[…] a field is an area of science consisting of the following elements: a central problem, a domain consisting of items taken to be facts related to that problem, general explanatory factors and goals providing expectations as to how the problem is to be solved, techniques and methods, and sometimes but not always, concepts, laws, and theories which are related to the problem and which attempt to realize the explanatory goals. A special vocabulary is often associated with the characteristic elements of a field. (p. 128)

Craver (2007, pp. 246-271) (cfr. also Craver & Darden 2013, pp. 161-185; Vernazzani 2016a) has shown how Darden & Maull’s account of interfield integration via interfield theories can be supplemented and improved with a mechanistic account. Instead of trying to iron out the conceptual heterogeneities of distinct fields within a reductive intertheoretic framework via interlevel homomorphisms (or isomorphisms) (cfr. Ch. 8, §3.2.2), the mechanistic approach provides a much richer account that does justice to the interdisciplinary practice of the sciences. Essentially, mechanistic integration is achieved via constraints (Miłkowski 2016ab; from a non-mechanistic perspective, cfr. Danks 2014). Multiple approaches, operating at different intra- or interlevel scales, identify a number of constraints on various aspects of mechanism schema construction, without the need to reduce them to any fundamental or “primary” (Nagel) science.

Again, it is not possible to fully articulate the problem of interfield integration in relation to consciousness studies here, as the topic deserves a much deeper study of its own. (Though I will briefly return to this issue later, in Ch. 8, §3.) My point is simply that approaching the problem of content-NCCs from a mechanistic standpoint enables us to fully appreciate the contribution made by multiple fields, instead of aiming at a thoroughly reductive interfield account.

5.3 The Ontology of Visual Content

As a final advantage, I will briefly consider the problem of the ontology of perceptual content. Most philosophers working on the debate about the contents of perceptual experience remain agnostic about the exact nature of the properties that are manifested to us in states of seeing. Siegel (2010a), for example, does not take a stance regarding the ontology of visual properties. Others, like Byrne (2009), argue instead that such properties must actually be events, since they have a temporal profile: they have a starting and an ending time. For example, I start seeing something red, and then I stop seeing something red, perhaps because I have turned my head elsewhere,

etc. It is of course possible to argue for different stances, depending on our views about the nature of properties, and our arguments about the nature of perceptual experience.

My approach adds a further constraint that might be helpful in determining the ontology of visual content. Since the accuracy conditions of perceptual experience are fixed by intentional mechanisms, we may rely on reflections regarding the nature of constitutive mechanistic phenomena (e.g. Kaiser & Krickel 2017) to determine the ontology of visual accuracy phenomena. If, as Kaiser & Krickel seem to argue, such phenomena are actually events, we may bring further support to Byrne’s contention that such properties may actually be events. Notice that the claim that some properties may be events is incompatible with a universalist stance (cfr. Ch. 7, §1.3.1), i.e. the claim that properties are universals, but it is compatible with a tropist approach to properties (cfr. Ch. 7, §1.3.2). The claim that some tropes may be events is, after all, not new (cfr. Bennett 2002). I will return to the problem of the ontology of visual objects in a later Chapter (7).

Conclusion

In this Chapter, I have investigated the nature of content-NCCs, and claimed that the scientific search for content-NCCs is inherently mechanistic. I have then shown the advantages of my approach over Chalmers’ mainstream definition. In particular, my approach seems to naturally reflect the structure of scientific research on content-NCCs; it nicely captures the manipulationist approach adopted by many scientists, both in neuropsychological studies and in more recent and sophisticated techniques like NIBS; it enables us to focus on mechanism schemas and sketches that can help guide research on content-NCCs; it provides a better perspective on interfield integration; and finally it may lead to some insights about the ontology of visual accuracy phenomena. In the next Chapter, I will turn to the sensorimotor theory of visual consciousness, and show that, although it is a nice instance of a dynamical theory, it is perfectly compatible with my mechanistic approach, in that it seems it must accept the account of intentional mechanisms sketched out earlier (§4.2.1). As we will see later (Ch. 8), it is precisely between intentional mechanisms and perceptual content that a psychoneural isomorphism should hold.

6

THE EXPLANATORY STRUCTURE OF THE SENSORIMOTOR THEORY

Noë & Thompson (2005) distinguish two groups of theories of visual perception: orthodox and heterodox theories. The orthodoxy states that visual perception is a process whereby the brain builds up detailed representations of the environment on the basis of the sensory inputs delivered by receptors (2005, p. 2). The most obvious example of this orthodoxy is Marr’s (1982) computational approach, according to which visual perception is achieved via information processing across three distinct stages of representation, from the primal sketch to 3D representations (cfr. also Frisby & Stone 2007). In contrast with the orthodoxy, heterodox approaches usually reject or downplay the role of representations in visual perception. Paradigmatic examples of such theories are Gibson’s (1979) ecological optics, Ballard’s (1991) animate vision, and O’Regan & Noë’s (2001ab) sensorimotor theory (SMT). Defenders of the sensorimotor theory maintain that visual perception is constituted by the active exercise of our sensorimotor skills, which obeys a set of specific sensorimotor laws (cfr. Noë 2002; 2004; 2009ab; 2012; O’Regan 2011; 2014; O’Regan & Noë 2001abc; Bishop & Martin 2014).

The SMT has stirred much debate in recent years (e.g. Block 2005; Clark 2006; Flament-Fultot 2016; Hutto 2005; O’Regan & Block 2012). There have been contrasting reactions to some of the SMT’s claims. It is not clear whether the SMT marks a significant «departure from traditional computational functionalism» (the orthodoxy) or should rather be understood as an enrichment of it (Buhrmann et al. 2013, p. 1). Furthermore, although there have been some recent proposals to extend and apply this approach to robotics (e.g. Hoffman 2014; Maye & Engel 2011), the theory suffers from a severe limitation, due to the lack of any formal or operational definition of its key concepts (Buhrmann et al. 2013; Hoffman 2014). Finally, it is not clear which role the theory ascribes to representations. Although defenders of the SMT seem to admit representations (as I will show in the next pages), they also downplay their role in a way that makes it unclear why and to what extent the visual system relies on representations.

In this study, I argue that we can throw light on the contribution of the SMT in relation to the orthodox theories by elucidating its explanatory structure. While many aspects of the SMT have been the object of intense discussion, to my knowledge, there still is no analysis of the kind of explanation of visual perception provided by the SMT. I will begin with an outline of the most important features of the SMT (§1). A central thrust in my line of argument is to exploit the similarity between the SMT and the dynamical hypothesis (§2). Indeed, just like proponents of the SMT, some defenders of dynamicism have proposed a radically different way of studying a cognitive system that does not rely on internal information-processing or representations (e.g. Chemero & Silberstein 2008; Van Gelder 1995, 1998; Van Gelder & Port 1995). This claim has fuelled controversy about the explanatory status of dynamical models (e.g. Bechtel 1998; Gervais 2015; Kaplan & Bechtel 2011; Kaplan & Craver 2011; Ross 2015). According to the mainstream view, dynamical models afford covering law or nomothetic explanations that depart from the inherently mechanistic framework that underlies computational or connectionist approaches to cognition (Zednik 2011). I will argue that the standard formulation of the SMT conforms to the framework of nomothetic explanation (§3). This, however, creates the problem of determining the role of representations within the cognitive system, and exposes the SMT to the “mere description worry.” A closer inspection of the SMT will show that the theory can be upgraded to a mechanistic framework, and that in this way it can eschew some of the problems of the standard formulation (§4).

1. An Outline of the Sensorimotor Theory.

The central notion of the sensorimotor theory (SMT) is that of “sensorimotor contingency,” which O’Regan & Noë define as: «[…] the sensory changes produced by various motor actions […]» (2001a, p. 941). Of course, vision scientists have long been aware of the importance of sensorimotor regularities in shaping our conscious perceptual experience (e.g. Cliff 1991; MacKay 1962). What is distinctive of the SMT, however, is the stronger claim that our perceptual experience is constituted by the active and lawful exercise of our sensorimotor skills: «[…] to see something is to interact with it in a way governed by the dynamic patterns of sensorimotor contingency characteristic of vision […]» (Hurley & Noë 2003, p. 146). We can capture this claim in the following thesis:

T1: Seeing is constituted by the active exercise of our sensorimotor skills1.

When an object falls within the reach of the senses, it triggers a sensorimotor reaction in the organism. The exercise of our sensorimotor skills obeys a set of “sensorimotor laws” or “regularities.” It follows that the main task of proponents of the SMT is that of finding the laws that govern the sensorimotor contingencies: «[…] we must direct our investigations not on some ineffable inner event, but rather to the temporally extended activity itself, to the laws that govern this activity» (O’Regan & Noë 2001c, p. 80; my emphasis). According to the original formulation (O’Regan & Noë 2001ab), the sensorimotor contingencies can be grouped into two categories. The first category is that of sensorimotor contingencies determined by the visual system, whereas the second category is specific to the visual attributes or “features” of the perceived items, such as colors and shapes. The former category of sensorimotor contingencies

1 O’Regan (2011) disagrees with Noë on the link between action and seeing. Whereas Noë (2004, 2009b) insists that seeing (and perception more generally) is action, O’Regan merely states that seeing «[…] requires having previously acted, and it requires having the future potential for action», but «action right now is not necessary for vision» (2011, p. 67). I express T1 in a form that fits better within Noë’s framework; however, this will not substantially affect my considerations.

is «independent of any characterization or interpretation of objects» (O’Regan & Noë 2001a, p. 943), and is the fundamental level of visual sensation. The sensorimotor contingencies determined by visual attributes are specific to visual properties at a perceptual level and are related to the nature of the objects themselves (O’Regan & Noë 2001c, p. 88). The two categories are governed by distinct sets of sensorimotor laws that modulate the corresponding motor outputs that constitute visual perception:

T2: The exercise of sensorimotor skills obeys a set of sensorimotor laws2.

On this formulation, the theory rejects the fundamental claim of orthodox theories of vision, according to which visual perception would be fully explained by computations or representations in the neural system3. Noë for example concedes that the perceptual states exhibit intentionality or aboutness (2012, p. 25), but he denies that perception is constituted by representations4. He clearly rejects the idea of explaining the feeling of perceptual presence by means of the notion of representation (2012, p. 30), and maintains that: «It doesn’t seem to us, when we see, that we represent environmental detail in our heads all at once in the way that detail can be present, all at once, in a picture» (2007, p. 242; cfr. also 2012, p. 31). This can be read as making a claim about perceptual experience: the personal level is not a representation of the external environment. However, in several other passages, both O’Regan and Noë do seem to admit the existence of representations in the cognitive system:

The claim is not that there are no representations in vision. That is a strong claim that most cognitive scientists would reject. The claim rather is that the role of representations in perceptual theory needs to be reconsidered (Noë 2004, p. 22)

2 It has been pointed out to me by a reviewer that in many formulations the point is not that the exercise of sensorimotor skills obeys sensorimotor laws, but that it makes use of an understanding of the sensory consequences of movement given the sensorimotor laws (cfr. for example Noë 2004, pp. 77-79; 2009a, p. 478). Whether this is just equivalent to obeying sensorimotor laws is a separate issue that I leave for another study.

3 Some philosophers might equate representations with computations, but the two concepts are not equivalent (for a useful discussion, see Miłkowski 2013, especially chapters 2 and 4). Following Noë and O’Regan, I will assume that there are representations in the cognitive system; hence, I will not defend the notion of representation from the attacks of the “radically embodied” research program (e.g. Chemero 2009).

4 Noë admits that perceptual experience exhibits intentionality (e.g. Crane 2009), but only as a genuine relation towards objects that obtain in the world (2012, p. 25, pp. 70-73). Also, he claims that perceptual content is conceptual (a form of sensorimotor understanding) and is always propositional (Noë 2004, pp. 246-247, ft. 4). I thank an anonymous reviewer for pointing this out to me.

[…] I have nothing against representations per se. Information from the outside world must be processed in the brain, and thus it must somehow be represented (O’Regan 2011, p. 64)5.

(cfr. also Noë 2002, p. 67; O’Regan 2011, p. 62, ft. 1; O’Regan & Noë 2001b, p. 1017; the notion of representation is also used frequently in Philipona et al. 2003). Although they seem to admit representations, sensorimotor theorists are suspicious of the very notion of representation (e.g. O’Regan 2011, p. 64). Indeed, what they object to in the orthodoxy are two specific claims. First, the claim that all we need in order to explain perceptual presence is a process whereby the brain constructs representations of the external world. Second, they charge the orthodoxy with the unnecessary postulation of static, photographic, or picture-like representations (Noë calls this the “snapshot conception,” 2004, pp. 35ff; O’Regan talks about “postcard representations,” 2011, p. 41; cfr. also §4.2). As they specify, to think that vision is some form of richly detailed, static, pictorial representation of the environment is a form of Cartesian materialism (Dennett 1991), i.e. a way of conceiving the mind as a stage where completed representations can be shown to an internal spectator. The problem with Cartesian materialism, in its various forms, is that it simply pushes the mystery of conscious visual perception to a specific brain region where the “magic” happens (for analogous considerations, cfr. Pessoa et al. 1998). The SMT’s non-representationalism is formulated in the next thesis:

T3: Perception is not constituted by static pictorial representations of the environment6.

The SMT, as we will see (§§3.1, 4.2), lays emphasis on the explanatory role of sensorimotor laws rather than on the generation of static internal representations, and it rejects static representations in virtue of the active character of perception.

5 It is perhaps worth emphasizing this point, since it has caused much confusion in the literature. Noë and O’Regan contend that perception should not be characterized as something that merely happens in the brain: «What perception is, however, is not a process in the brain, but a kind of skillful activity on the part of the animal as a whole» (Noë 2004, p. 2). But this does not mean that the brain does not represent states of the environment. O’Regan clarifies his position on this issue, and specifies that he still finds the concept of representation useful, and distances himself from “enactivists” who reject representations altogether (2011, p. 62, ft. 1). Noë is perhaps more cautious on this point, since he simply claims that we should reconsider the role of representations in vision. Furthermore, although he adopts the term “enactive” (at least in his 2004, p. 2), his definition of the concept does not entail a rejection of representations (in contrast, it seems, to Varela et al. 1991).

6 The attack against the snapshot conception is also closely related to the controversial issue of the richness of perceptual content (cfr. Block 2007; Cohen, Dennett, & Kanwisher 2016; Haun et al. 2017; Phillips 2015). The snapshot conception may suggest that perceptual content is richly detailed, just like a photographic representation of a specific tract of the environment captures many of its details (cfr. Noë 2004, p. 50; O’Regan 2011, pp. 50-61).

The notion of “activity” plays a central role in the SMT: the perceiver is an agent in a dynamical context (Noë 2006). In various publications, both Noë and O’Regan urge that the perceiver should be understood as an agent:

It is thus only in the context of an animal’s embodied existence, situated in an environment, dynamically interacting with objects and situations, that the function of the brain can be understood (Noë 2009b, p. 65).

…seeing involves actively interacting with the world. (O’Regan 2011, p. 41).

Over the years, Noë has proposed different names for the theory: in collaboration with Susan Hurley (Hurley & Noë 2003) the theory was called the “dynamic sensorimotor approach,” whereas more recently he calls it “actionism” (Noë 2012, p. 23). We can encapsulate this claim in the following thesis:

T4: The perceiver is an agent that is part of a dynamical system.

In order for an agent to be consciously visually aware of an object, two conditions must be met. The first condition is that the agent must actively exercise her knowledge of the sensorimotor laws or, to put it in other terms, visual perception only occurs «when the organism masters what we call the governing laws of sensorimotor contingency» (O’Regan & Noë 2001a, p. 939; 2001c, p. 82). O’Regan and Noë urge that the knowledge involved in the exercise of our motor skills is not a form of intellectual or propositional knowledge, but is instead a practical knowledge, a form of know-how (Noë 2004, p. 11, pp. 117-122; 2005a; 2012, pp. 147-151; Silverman 2017). The second condition is that there must be an item in the environment that triggers the perceiver’s sensorimotor reactions (Noë 2005b; 2012, p. 25). In this sense, the sensorimotor theory is a form of disjunctivism (McDowell 1982), the claim according to which genuine perceptual states are different in kind from merely hallucinatory states. This second condition also lays bare the theory’s vehicle externalism. In Noë’s words: «According to active externalism, the environment can drive and so partially constitute cognitive processes. […]. The mind reaches […] beyond the limits of the skull» (2004, p. 221; also 2009b, pp. 67-95). The SMT’s commitment to externalism dovetails nicely with the theory’s non-representationalism: the job of the brain is not that of generating static internal representations (T3), but rather «[…] that of facilitating a dynamic pattern of interaction among brain, body, and world. Experience is enacted by conscious beings with the help of the world» (2009b, p. 47). This leads us to the following thesis:

T5: Cognitive processes are partly constituted by the environment.

There is much more to be said, of course, but the five theses introduced so far present the theoretical nucleus of the SMT. It is noteworthy that defenders of the SMT usually bring forward their theses by showing the implausibility of the alternative options. Thus, for example,

T1 is never explicitly argued for, but its strength follows from a number of other assumptions, in particular the rejection of picture-like representations.

The SMT purports not only to provide a phenomenologically adequate description of our perceptual experience, but also «to offer an explanation of visual consciousness» (Noë 2004, p. 226) with a robust empirical support. In this Chapter, I will focus exclusively on the explanatory capacity of the SMT. I maintain that the SMT is a form of nomothetic dynamical theory of visual perception, and that this explanatory structure exposes the theory to some challenges. In order to substantiate my proposition, I will now turn to the nature of dynamical system theory.

2. Dynamical System Theory and the Dynamical Hypothesis.

Dynamical system theory (DST) is based on the notion of “dynamical system:” a mathematical description of how things change with time (Hotton & Yoshimi 2011). Such systems take the form of models that are expressed by means of differential equations:

[A] typical dynamical model is expressed as a set of differential or difference equations that describe how the system’s state changes over time. Here, the explanatory focus is on the structure of the space of possible trajectories and the internal and external forces that shape the particular trajectory that unfolds over time, rather than on the physical nature of the underlying mechanisms that instantiate this dynamics (Beer 2000, p. 96)

Hotton & Yoshimi (2011) define a dynamical system as a function of the form ϕ: S × T → S (where S is the set of states of the system, and T the set of times) that satisfies the following properties:

• There is a time t0 ∈ T such that, for all states s0 ∈ S, ϕ(s0, t0) = s0.

• For all states s0 ∈ S and all times t1, t2 ∈ T, ϕ(s0, t1 + t2) = ϕ(ϕ(s0, t1), t2).

The first property says simply that there is a time t0, the present moment, at which every state s1, s2…sn is mapped to itself. The second property says that future states are uniquely determined by the present state (ibid., p. 446). In this sense, dynamical systems are not opposed to more traditional connectionist and computational approaches, and they do not imply a rejection of representations (cfr. §4). As an example of a dynamical model, we can describe Thelen et al.’s (2001) explanation of the A-not-B error. In this famous experiment, 8- to 10-month-old infants are placed in front of two containers A and B. A small toy is hidden in container A, and the infants correctly and repeatedly reach for the toy until they are habituated to its presence. Then, an experimenter hides the toy in container B in plain view. Yet, in spite of the fact that they have seen the toy being hidden in container B, infants will reach for container A. Piaget’s classical explanation was that it is not until they are 12 months old that infants are able to construct reliable mental representations of the perceived objects. Before that, their actions are primarily guided by motor routines. Thelen et al. (2001) called the standard explanations into question: «The A-not-B error is not about what infants have and don’t have as enduring concepts, traits, or deficits, but what they are doing and have done» (ibid., p. 4). They designed a dynamic field model (ibid., pp. 16-20) that traces the evolution of activation levels in the dynamic field as a function of different types of inputs: environmental inputs, task-specific inputs, and memory inputs. However, the model was «neutral as to an anatomical instantiation in the central nervous system; it is a model of the behavioral dynamics» (ibid., p. 28), and it only «captures an integrated behavioral outcome» (ibid., p. 31).
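To make the two defining properties more concrete, consider a minimal illustration of my own (not drawn from Hotton & Yoshimi): let S = ℝ, T = ℝ, and ϕ(s, t) = s · e^(kt) for some constant k. Setting t0 = 0 gives ϕ(s0, 0) = s0 · e^0 = s0, so the first property is satisfied; and ϕ(s0, t1 + t2) = s0 · e^(k(t1 + t2)) = (s0 · e^(kt1)) · e^(kt2) = ϕ(ϕ(s0, t1), t2), so the second property is satisfied as well. Every initial state thus fixes a unique trajectory through the state space, and it is this trajectory that the dynamical modeler describes.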

Some proponents of DST in cognitive science (e.g. Beer 2000; Port & Van Gelder 1995; Van Gelder 1995, 1998), however, argue that the best way to study the human mind is not in terms of information-processing, but with the mathematical tools of dynamical modeling (Van Gelder 1998). This is known as the dynamical hypothesis in cognitive science. Defenders of the dynamical hypothesis emphasize that DST provides a new research paradigm, in contrast to the traditional computationalist and connectionist approaches in cognitive science. The hallmark of the dynamical hypothesis is the rejection of the computer metaphor of the mind, and therefore of computational and connectionist approaches to the study of cognition. Cognition, it is claimed, is not a process of computing over static, discrete mental representations (Chemero 2000, p. 634; Van Gelder 1998, p. 622). Using the mathematical tools of DST, we may be able to describe or explain (cfr. §3) the behavior of the target system. In what is widely regarded as the “manifesto” of the dynamical hypothesis, Van Gelder & Port put forward the following claims:

The cognitive system is not a computer, it is a dynamical system. It is not the brain, inner and encapsulated; rather, it is the whole system comprised of nervous system, body, and environment. [DH4] The cognitive system is not a discrete sequential manipulator of static representational structures [DH3]; rather, it is a structure of mutually and simultaneously influencing change. Its processes do not take place in the arbitrary, discrete time of computer steps; rather, it unfolds in the real time of ongoing change [DH1][…] The cognitive system does not interact with other aspects of the world by passing messages or commands; rather, it continuously coevolves with them [DH2]… (Van Gelder & Port 1995, pp. 2-3; my emphases).

The novelty of dynamicism, according to proponents of the dynamical hypothesis, is reflected in its explanatory structure. Whereas dynamical systems would afford covering-law or nomothetic explanations of non-decomposable systems, defenders of the “orthodoxy” would espouse mechanistic explanations (Zednik 2011). I will elaborate on the concepts of covering law and mechanistic explanation in the next sections (§§3-4); for now, it suffices to say that the covering law model achieves scientific explanation of an explanandum by subsuming it under one or more laws of nature plus antecedent conditions. In a dynamical system, the role of laws

is taken up by differential equations that are meant to support counterfactuals (Bechtel 1998, p. 311; Clark 1997, pp. 117-120; Walmsley 2008): they tell us what would happen to the system, if things (parameters and variables) had been different. By means of the equations we can «predict and explain subsequent states of the system» (Bechtel & Abrahamsen 2002, p. 267). We can call this the fifth thesis of the dynamical hypothesis:

DH5: A dynamical system obeys a set of specific dynamical laws.

Notice that the dynamical hypothesis is a specific philosophical interpretation of DST. Hence, theses DH1-5 are characteristic only of the dynamical hypothesis, rather than of DST in general. The reader will have recognized the similarities between DH1-5 and the theses of the sensorimotor theory (T1-5) (Tab. 1).

Tab. 1: The dynamical hypothesis and SMT’s theses.

Since the SMT and the dynamical hypothesis share the same theoretical commitments, I claim that the SMT is a version of the dynamical hypothesis. This may cause some confusion, so it needs to be spelled out. The dynamical hypothesis is not a scientific or mathematical approach; it is a philosophical theory or, better, a philosophical interpretation of DST. The original formulation of the SMT due to O’Regan & Noë (2001ab) does not provide mathematical definitions for its key concepts, as we have seen, but its assumptions and terminology suggest not only that the theory is consistent with the theoretical approach of DST, but also that it can be upgraded into a full-blown DST model (§3.1). This much suffices for my argument. Both the standard formulation of the SMT and proponents of the dynamical hypothesis maintain that their targets are explained by means of covering law explanations. I will exploit the similarity between the two to show that the problems that beset the dynamical hypothesis about explanation apply also to the standard formulation of the SMT.

3. The Explanatory Structure of the Standard SMT.

With “standard formulation of the SMT” I simply refer to the SMT in its current form, in contrast with the mechanistic version that I put forward in §4. I will first show that the SMT conforms to a covering-law or nomothetic model of explanation (§3.1), and then I will show that this exposes the SMT to the mere description worry (§3.2).

3.1 A Nomothetic Explanation.

As we have seen, O’Regan and Noë argue that the sensorimotor contingencies obey a set of sensorimotor laws (T2). The concept of a sensorimotor law has not been explicitly defined in the original paper; however, it is possible to form a better idea about the nature of these laws by discussing the two examples advanced in O’Regan & Noë (2001a): eye rotation for the category of sensorimotor contingencies related to the visual system, and visual shape for the category of sensorimotor contingencies related to specific features.

When we rotate our eyes, the stimulations on the retinas are altered in a lawful way determined by the size of the eye movement, the shape of the retina, and the nature of ocular optics (O’Regan & Noë 2001a, p. 941). As the eye moves, whether by voluntary control or simply by saccadic movements, the retinal projection of a straight line is distorted in such a way as to describe a greater or smaller arc. The alteration of the stimulus on the retina depends not only on the eye rotation, but also on the structure of the retina:

When the line is looked at directly, the cortical representation of the straight line is fat in the middle and tapers off to the ends. But when the eye moves off the line, the cortical representation peters out into a meager, banana-like shape […]» (2001a, p. 941, my emphases)7.

Alterations of the stimulus and the consequent sensorimotor response would be constrained by different structural laws that are specific to the visual apparatus (cfr. also Noë 2004, pp. 107ff). No law is explicitly mentioned in the text, but we can arguably advance some suggestions about what these sensorimotor laws would be: simple mechanical laws governing eye rotation and optical regularities.

The example of visual shape is an instance of the second category of sensorimotor contingencies that are specific to perceptual features. In a dense passage, O’Regan & Noë argue that shape perception would be «the set of all potential distortion that the shape undergoes» (O’Regan & Noë 2001a, p. 942) when we move in relation to the object or when it is the object itself which moves in relation to us. From these movements, the brain would abstract a set of

7 The passage clearly mentions representations, thus supporting my interpretation regarding the presence of internal representations, i.e. at the subpersonal level (cfr. §1).

laws that code shape perception. Shape perception would depend on the laws abstracted from the variations produced by body movements. To illustrate their point, the authors discuss the case of perceptual restoration by surgical intervention on patients born with congenital cataract. Helmholtz, for example (as reported in O’Regan & Noë 2001a, p. 942), cites the case of a patient who, after visual restoration, feels surprise when first observing that a coin seems to change shape when rotated. According to O’Regan & Noë, the “surprise” felt by the patient is due to her new capacity, enabled by the surgical intervention, to abstract the specific laws that govern shape distortion.

In the words of O’Regan (2011, p. 127), the quality of a perceptual state is given by the laws that govern the specific sensorimotor skills. It follows that, in order to explain how an agent perceives, we must find the relevant sensorimotor laws. Before I further elaborate on this suggestion, I will briefly discuss Buhrmann et al.’s (2013) dynamical model of the SMT.

Buhrmann et al. (2013) remark that many of the SMT’s concepts are unclear, and that this may lead to «practical uncertainty at the time of designing an experiment or modeling the behavior of a robot» (p. 2). O’Regan & Noë (2001ab) have developed a philosophical theory, which is consistent with the theses of the dynamical hypothesis, but they did not provide mathematical formulations of their concepts, without which no experiment or scientific investigation can be set up. Buhrmann and his colleagues have thus filled this gap. The result of this operation, it is claimed, also provides several theoretical insights, for example bringing into clearer view the similarities and differences between the sensorimotor theory and ecological psychology (pp. 11-14)8.

The first step is to define the organism and its environment as a coupled dynamical system that can be described by a set of differential equations. The environment is described by a function E that assigns changes in the values of an environment state e to each agent’s body position or configuration p in the world, taking also into account its own independent dynamics:

(1) ė = E(e, p)

The position vector p describes the body configuration of the agent in relation to its environment. A set of sensors S transforms the environmental states e into sensory states s that modulate the agent’s internal state a.

(2) ṡ = S(e, a)

8 My choice to focus on Buhrmann et al.’s model is not arbitrary. Indeed, there have been other attempts to formulate the SMT in scientific terms. For example, Seth (2014) has proposed a predictive processing theory of sensorimotor contingencies that brings the SMT closer to the orthodox approaches (cfr. Flament-Fultot 2016). In this sense, Seth’s work supports my thesis that the SMT should be interpreted as continuous with the orthodox approaches. By focusing on Buhrmann et al. I set out to show that even a dynamical formulation of the theory must be consistent with the orthodoxy.

The sensor states s also depend on internal factors a. The efferent movement-producing signals m are functions of the internal state, and activate effectors in the agent’s body B that in turn bring about changes in body configuration p (here I follow closely Buhrmann et al.’s description, 2013, pp. 3-4).

(3) ȧ = A(a, s)

(4) ṁ = M(a)

(5) ṗ = B(m, e)

Equation (1) describes the agent-environment coupling, (2) the agent’s sensory dynamics, (3) and (4) the internal dynamics, and (5) the body dynamics. Concluding their study, Buhrmann and colleagues say that these four kinds of sensorimotor structures present various relevant regularities, captured by the equations, and that «[t]hese regularities […] are the “laws” or “rules” of [sensorimotor contingencies] that form the basis of» the SMT (p. 14). Not only does the dynamical model capture Noë’s suggestion that «[b]rain, body, and world form a process of dynamic interaction» (2009b, p. 95); in addition, Buhrmann et al. have also refined the SMT, extending the categories of sensorimotor contingencies to four.

The first kind of sensorimotor contingency is the sensorimotor environment, which refers to the set of all possible sensory dependencies on motor states (s, m) for a particular type of agent and environment, considered independently of the agent’s internal dynamics (Buhrmann et al. 2013, p. 4). Think for example of how rotations of the head lead to lawful changes in the optic flow on the retina, such as expanding when one moves forward, or contracting while moving backwards (cfr. the example of eye rotation discussed earlier in this Sub-section). The second kind of sensorimotor contingency is the sensorimotor habitat: the set of all sensorimotor trajectories traveled by a closed-loop agent for a range of values, taking into account the evolution of the internal states a. The regularities of the sensorimotor environment constrain, but do not determine, the regularities or laws of the sensorimotor habitat. The first two categories of sensorimotor contingencies are independent of the agent’s functional context. The third category of sensorimotor contingency is related to regular patterns that play a crucial role in task performance. Buhrmann et al. (2013, p. 5) call these stable patterns of task-related activities sensorimotor coordination. These contingencies are «determined by a dynamical analysis of the agent within the context of a given task performance» (ibid.), and often play an important role for task performance in the area of autonomous robotics (e.g. Beer 2003). Finally, the last category is that of sensorimotor strategies: the organization of the sensorimotor coordination patterns regularly used by agents because they have been evaluated as preferable for achieving a particular goal (ibid.).
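To see how equations (1)-(5) hang together as a single closed loop, the following sketch simulates a toy instantiation of the coupled system with simple Euler integration. It is merely illustrative: the concrete functions, parameter values, initial conditions, and integration scheme are my own placeholders, not anything drawn from Buhrmann et al. (2013); only the overall closed-loop structure mirrors the equations above.

import math

def E(e, p):   # (1) environment dynamics, de/dt = E(e, p)
    return -0.1 * e + math.sin(p)

def S(e, a):   # (2) sensory dynamics, ds/dt = S(e, a)
    return e - 0.5 * a

def A(a, s):   # (3) internal dynamics, da/dt = A(a, s)
    return -a + s

def M(a):      # (4) motor dynamics, dm/dt = M(a)
    return 0.5 * a

def B(m, e):   # (5) body dynamics, dp/dt = B(m, e)
    return m - 0.05 * e

e, s, a, m, p = 1.0, 0.0, 0.0, 0.0, 0.5   # nonzero initial conditions so the loop evolves
dt = 0.01                                  # Euler integration step
for _ in range(1000):
    de, ds, da, dm, dp = E(e, p), S(e, a), A(a, s), M(a), B(m, e)
    e, s, a, m, p = e + dt * de, s + dt * ds, a + dt * da, m + dt * dm, p + dt * dp

print(round(p, 3))   # what matters explanatorily is the whole trajectory, not one state

The point of the sketch is simply that, once the five functions are specified, agent and environment unfold together as a single trajectory in state space, which is precisely what the dynamical modeler describes.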

Time to take stock. The examples discussed so far point to a covering-law model of explanation similar to the well-known deductive-nomological model (DN) (Hempel & Oppenheim 1948; Salmon 1989). According to the DN model, explanations are deductive arguments in which the explanandum phenomenon figures as the conclusion (in our case, the agent’s visual perception of the object). Among the premises, there must be at least a law of nature plus some antecedent

conditions. Consider a simple example. A DN explanation of the fall of a body is achieved by specifying some antecedent conditions (such as the height from which it falls, and the body’s mass and structure) plus the law of gravity. Given these premises, we can deduce the explanandum phenomenon, i.e. the fall of the body. Thus, the DN model bestows a central explanatory role on laws of nature. Proponents of the SMT do indeed stress the role of laws, and the necessity of finding these laws in order to understand how the activity of perceptual experience unfolds. As we have seen (§1), this is clearly expressed in the words of O’Regan and Noë: «[…] we must direct our investigations not on some ineffable inner event, but rather to the temporally extended activity itself, to the laws that govern this activity» (O’Regan & Noë 2001c, p. 80; my emphasis). The example of the covering law explanation of the fall of a body is echoed in the words of O’Regan: «Like the law of gravity that describes how objects fall, these laws [the sensorimotor laws] describe how changes made by our body provoke changes in the information coming into our sensors» (2011, p. 157). In this and the subsequent passage, O’Regan talks about the brain as deducing features of the outside environment:

Suffice it here to say that it is possible to deduce things about the structure of outside physical objects by studying the laws relating movements that an organism makes to the resulting changes in sensory input9. (O’Regan 2011 p. 45, my emphases)

However, just like the brain deduces things about outside items, so do the scientists who, in order to explain vision from a sensorimotor standpoint, build algorithmic models with which we can deduce characteristics of the external environment (cfr. Philipona et al. 2003). The explanatory structure of the standard SMT may thus be described as follows:

LP1: Sensorimotor Laws of the Visual Apparatus.
LP2: Sensorimotor Laws of Visual Attributes.
AP1: Target object O.
AP2: Standpoint of the agent.
C: Conscious visual perception of O.

LP1-2 and AP1-2 are premises, from which the conclusion C can be deduced. LP1-2 are the two sets of sensorimotor laws corresponding to the two categories of sensorimotor contingencies. Antecedent conditions are specified in AP1-2. These include the presence of an

9 A potential objection to this characterization of the Standard SMT may come from Gervais & Weber (2011), who argue contra Walmsley (2008) that dynamical covering-law explanations are not deductive, but rather causal covering law explanations using default rules. Default rules are regularities, but in contrast with laws, they admit exceptions. Alternatively, dynamical explanations may be characterized as inductive-statistical explanations using probability statements. I think that Gervais & Weber’s remarks should also be applied to the SMT; after all, most regularities in biology are not iron laws, but do frequently admit exceptions. I will not further pursue this critique, however, as it does not play a relevant role for my considerations.

object within the reach of the senses and the standpoint of the agent. (I borrow the term “standpoint” from Campbell (2009), where it refers to various factors, such as the sense modality involved, the relative orientation of the agent, its distance from the object, etc.; cfr. also Philipona et al. 2003, especially the mathematical appendix, where several of these conditions are mentioned, among others: apertures of diaphragms, position of the light, and the Euler angles for the orientation of the eye.) The scheme is modeled on O’Regan & Noë’s (2001ab) version of the SMT, but it can easily be upgraded to accommodate Buhrmann et al.’s (2013) four categories of sensorimotor contingencies, where the differential equations would play the role of laws.
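To fix ideas, the schema can be instantiated with the shape example discussed above; the specific filling is my own illustration, not one given by O’Regan & Noë:

LP1: Laws governing how eye rotation and ocular optics distort the retinal projection of contours.
LP2: Laws governing how the projected shape of an object varies with the relative position of object and perceiver.
AP1: A coin lying on a table.
AP2: The agent views the coin from an oblique angle.
C: The agent consciously sees the coin as presented at a slant.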

At this juncture, one worry concerns the status of the sensorimotor laws. The foregoing discussion has shown that O’Regan and Noë are evasive when it comes to the task of clarifying their nature. One potential objection is that the so-called sensorimotor laws are nothing but mere regularities, and since only laws may play an explanatory role, they cannot provide nomothetic explanations. Indeed, the existence and status of laws in biology and psychology is a matter of debate (e.g. Dorato 2012), but the problem can easily be sidestepped by adopting a pragmatic perspective. Woodward (2001), for example, states that the problem can be bypassed if we «focus directly on the question of whether the generalizations of interest are invariant in the right way» (p. 6). Similarly, Mitchell (2000) maintains that biological generalizations are less stable than physical laws, but that they can nonetheless provide causal knowledge and be used to predict, explain, and guide interventions. Woodward and Mitchell disagree on how exactly to characterize such regularities, as the former lays emphasis on the notion of invariance under interventions, whereas Mitchell focuses on the degree of stability (for a discussion, cfr. Woodward 2003, pp. 295-307). For our purposes, it suffices to notice that characterizing the sensorimotor laws as mere regularities does not represent a significant challenge to the nomothetic structure of the SMT.

3.2 The Mere Description Worry and the Role of Representations.

The idea that dynamical systems provide covering law explanations is called the “mainstream view” by Zednik (2011). The SMT, as we have seen, can easily be accommodated within the nomothetic framework of explanation (independently of how we conceive the sensorimotor laws, whether as strict regularities or as admitting exceptions), both in its “pure” philosophical form and in the dynamical model. In this Sub-section, I will show that it is precisely this nomothetic explanatory structure that apparently justifies the SMT’s non-representationalism. However, it also exposes the SMT to the “mere description worry,” a well-known drawback of covering law explanations.

Let us start with representations. Notice that reference to representations does not play any significant explanatory role within either the standard SMT or the dynamical hypothesis. This is a feature shared by most dynamical models of cognition: they remain neutral about the system’s actual internal structure (Clark 1997, p. 118), and (often) make no explicit commitment to representations.

Beer makes this clear when he says that whereas computational and connectionist models lay the explanatory focus on representations, dynamical models provide a characterization of the internal states that «does not necessarily have any straightforward interpretation as a representation of an external state of affairs. Rather, at each instant in time, the internal state specifies the effects that a given perturbation can have on the unfolding trajectory» (2000, p. 97). Indeed, the nomothetic structure of the SMT does not require any explicit reference to representations, thus making them explanatorily irrelevant (cfr. Chemero 2000; Van Gelder 1995, p. 352)10.11 Both the DST and the SMT take representations to be explanatorily irrelevant, and they both endorse a form of externalism. Both approaches put the explanatory burden on laws or regularities, rather than on representations or internal states. Since representations are (allegedly) explanatorily irrelevant for both the DST and the SMT, we can interpret the latter’s non-representationalism as a form of epistemological anti-representationalism (Chemero 2009, p. 67; cfr. also Chemero 2000). However, as we have seen (§1), proponents of the SMT seem to assume that the cognitive system has representations, and the literature produced in light of the SMT contains many references to representations. But if there are representations, we face the following problem. If representations are explanatorily irrelevant for visual perception, then what exactly is their role? If we assume that biological agents are the product of evolution, the capability of forming representations requires a sophisticated cognitive machinery that is able to capture relevant features of the environment and create representations. Proponents of the SMT, as we have seen, call for a reconsideration of the role of representations; however, they have not clearly spelled out the relation between perceptual experience and representations.

Another, more serious problem with the nomothetic model endorsed by the standard SMT is the mere description worry. Defenders of the dynamical hypothesis usually say that, since dynamical models allow for testable predictions, the regularities we rely on are explanatory (Van Gelder 1998, p. 625; cfr. also Chemero & Silberstein 2008; Walmsley 2008). Yet, it is generally agreed that prediction does not suffice for explanation. For example, the Ptolemaic system can deliver reliable predictions of the positions of planets in the sky, but it does not explain why they move in this way (Craver 2006). In general, merely predicting and describing the behavior of a system by means of generalizations or laws can be very useful for a variety of purposes (e.g. Craver 2006; Hochstein 2013), but this is insufficient for an explanation. The reason is that phenomenological models—i.e. models that merely describe the observable behavior of the target system, but refrain from postulating the hidden causes behind it (e.g. Craver 2006, p. 358;

10 Notice that the problem of representations within dynamical systems is far more complex and controversial (e.g. Bechtel 2001; Dennett 1998; Nielsen 2010). As I said, I follow the SMT in assuming that there are representations in the cognitive system.

11 An anonymous reviewer has pointed out to me that Noë’s (2004, p. 235, ft. 15) position is better interpreted as being non-committal to the existence of internal representations, whereas O’Regan seems to explicitly accept internal representations. Indeed, it seems to me that Noë’s position is somewhat similar to Thelen et al. (2001): his version of the SMT aims at describing the overall behavior of the agent, independently of the underlying brain machinery (cfr. ft. 4).

Frigg & Hartmann 2009)—can afford a limited number of predictions12. Phenomenological models, as I said, can play a variety of helpful roles in scientific investigation, but they are not explanatory. One way to account for the observed regularities and ground the predictions within a robust explanatory framework is to show that the behavior of the system results from the coordinated activity of underlying mechanisms (Andersen 2011). Another way to ground the predictions in an explanatory framework is to hold the observed regularities, or laws, to be explanatory. However, opting for the latter strategy exposes a model or theory to the mere description worry, a well-known drawback of the nomothetic model of explanation (for a concise discussion, cfr. Craver 2007, pp. 34-40). In short, the problem is that it is unclear why the laws or regularities apply in the first place (Cummins 2000). One way to resist the mere description worry, while rejecting a mechanistic account, is to stipulate that a covering law account qualifies as a genuine explanation. However, in the words of Zednik: «As long as dynamical explanation is viewed as a form of covering-law explanation […] the mere description worry looms» (2011, p. 246; cfr. also Gervais 2015).

As we have seen, the standard formulation of the SMT is construed as a search for laws; hence, the mere description worry looms for the SMT as well. As I said, mechanisms do provide a way to distinguish between merely phenomenological and genuinely explanatory regularities. Accordingly, in the next Section, I will articulate a mechanistic interpretation of the SMT and show that it can reconcile the SMT with the orthodoxy.

4. Towards a Mechanistic SMT.

Although the mainstream view of dynamical explanation dictates that it conforms to a nomothetic model, recent debates have shown that at least some dynamical models are mechanistic (e.g. Zednik 2011; Gervais 2015). In contrast with covering-law explanations, mechanistic explanations are how-explanations: they show why a phenomenon occurred by exposing how operating parts arranged in a particular way jointly produce the explanandum (e.g. Bechtel 2008; Craver 2007; Glennan 1996; Krickel forth.; Machamer et al. 2000; Miłkowski 2013). It should be noted that no one doubts that mechanisms explain; the central issue in the debate about dynamicism is whether non-mechanizable dynamical models are also explanatory (e.g. Kaplan & Craver 2011). Some researchers respond in the affirmative. For example, Ross (2015) argues that Ermentrout and Kopell’s canonical model does not meet Kaplan & Craver’s (2011) 3M requirement (cfr. §4.1), although it conforms to Batterman’s minimal model explanation. Moreover, it is a matter of debate whether all explanatory regularities are such in virtue of underlying mechanisms (e.g. Andersen 2011; Leuridan 2010).

12 Incidentally, I must stress that although explanations are not «just mirror images of predictions» (Douglas 2009, p. 462), this does not mean that explanation should be completely divorced from prediction. I concur with Douglas (2009) and Miłkowski (2013, pp. 104-105) about the importance of predictions, which can be used to check explanations.

I will not try to settle the grand debate about whether dynamical models are explanatory even when non-mechanizable. My purpose is more modest. Since everybody agrees that mechanisms explain, I will focus on the SMT and show that it is compatible with a mechanistic approach (§4.1).

4.1 Mechanizing the SMT.

Researchers who espouse mechanistic explanations consider law-like regularities as effects that are themselves in need of explanation (Cummins 2000). Such effects are explained by describing how the mechanism(s) responsible for them generate them. By identifying the mechanism(s), one can not only identify spurious generalities, but also account for the fact that some «generalizations are explanatory because they describe the causal relationship that produce, underlie, or maintain the explanandum phenomenon» (Kaplan & Craver 2011, p. 612).

There are different concepts of mechanism in the literature, and the very definition of “mechanism” is the object of some controversy, but for my purposes Bechtel’s definition will do: «[A] structure performing a function in virtue of its component parts, component operations, and their organization. The orchestrated functioning of the mechanism is responsible for one or more phenomena» (2008, p. 13; cfr. also Bechtel & Abrahamsen 2005). As Bechtel & Richardson (2010) show, mechanisms are discovered mainly thanks to the heuristics of decomposition and localization. The former consists in either structural decomposition (the discovery of the mechanism’s working parts) or functional decomposition (the decomposition of a complex behavioral phenomenon into a series of simpler behaviors). The heuristic of localization consists in pairing the relevant operations with the corresponding working parts. Of course, the process of discovering and describing a mechanism responsible for a given phenomenon is usually rather complex, as the system under investigation may admit no simple decomposition.

What is not controversial is that mechanisms explain; what is controversial is whether dynamical models can provide explanations that are not reducible or convertible to mechanistic explanations. Kaplan & Craver’s (2011) main contention is that «[d]ynamical models do not provide a separate kind of explanation subject to distinct norms. When they explain phenomena, it is because they describe mechanisms» (p. 618). Focusing on the SMT, the mere description issue (§3.2) looms as long as the standard formulation is cast in terms of a purely nomothetic explanation. However, since one of the primary virtues of the mechanistic view of explanation is that «it neatly dispenses with several well-known problems of predictivism» (Kaplan & Craver 2011, p. 606), showing that the SMT can be mechanized would provide the theory with the means to overcome some obstacles, and also throw light on some interesting developments. So, how can we show that the SMT is compatible with a mechanistic framework of explanation? A useful resource is provided by Kaplan & Craver’s (2011) 3M criterion:

(3M) In successful explanatory models in cognitive and system neuroscience (a) the variables in the model must correspond to components, [operations], properties, and organizational features of the target mechanism that produces, maintains, or underlies the phenomenon and (b) the (perhaps mathematical) dependencies posited among these variables in the model correspond to the (perhaps quantifiable) causal relations among the components of the target mechanism. (Kaplan & Craver 2011, p. 611)13.

The 3M criterion is designed for cognitive and system neuroscience, but as the authors say, it may easily be extended to other domains of cognitive science (cfr. Kaplan & Bechtel 2011). To reiterate an important point, what is at stake is not whether dynamical models explain or not, but rather in virtue of what norms or explanatory framework they may achieve explanations. I will therefore apply the 3M criterion to the foregoing examples of sensorimotor laws. If the SMT conforms to the 3M criterion, we will be able to show not only that it can provide explanations, rather than mere phenomenological regularities subject to the mere description worry, but also that the SMT represents an interesting complement to the orthodox approaches (cfr. §§1-2). I will start with Buhrmann et al.’s (2013) dynamical model of the SMT.

In §4.1 I have described Buhrmann et al.’s dynamical model. Later in their article, they discuss in more detail a minimal agent built following the tradition of minimal cognition models developed by Beer (e.g. 2003) and the Sussex school (e.g. Cliff 1997; Harvey et al. 1997). Specifically, they introduce a minimal model of active categorical perception, represented below (Fig. 17). Categorical perception refers to the activity of partitioning the world into distinct objects with distinctive properties. The continuous signals received by the sensory organs (natural or artificial) are sorted into discrete categories whose members stand in some resemblance relations (for further references on categorical perception, see Beer 2003, p. 210). In Buhrmann et al.’s model, the agent can move horizontally within a one-dimensional environment that contains two bell-shaped gradients with different widths, which are detected by means of a distance sensor. The agent’s task is to move away from the wide-shaped figure and approach the peak of the narrow-shaped one.

In this model, the environmental states e are described by the Gaussian functions that describe the two shapes:

e = E(p) = h ∙ exp(−(p − x)²/(2w²))

where h is the height of the shape, x the position of its peak, ±w the positions of the maxima of the function’s derivative, and p the agent’s horizontal position. The sensor S transforms the environmental variable e into sensory states:

13 In the original version, Kaplan and Craver refer to activities, rather than operations. I have adjusted the text to Bechtel’s definition of mechanism, but the variation does not alter the 3M criterion.

s = S(e) = 1 − (d_max − e)/d_max

where d_max is the maximum distance between the agent and the shapes. This serves as the input to a neural network composed of two nodes, a and m. Each node is governed by the following equation:

τ_i ẏ_i = −y_i + Σ_j w_ji σ(y_j + b_j)

Here y_i is the activation of node i, τ_i its time constant, w_ji the strength of the connection from node j to i, b_j a bias term, and σ the logistic activation function. Further details are not relevant in this context. The equations that describe this simple model meet the 3M requirement. There are four elements: the environment, the sensor, and the two nodes. The sensor and the nodes perform a computation described by the corresponding equations. The relations among the equations also correspond to causal relations between the different mechanisms: the sensor S, for example, measures the proximity of the Gaussian shape by taking as input the environmental state e, which is in turn defined by the first equation. In other words, the dynamical model described by Buhrmann and collaborators is mechanistic; hence the equations can be used as laws for explanation and prediction because they describe mechanisms.

Fig. 17: Buhrmann et al.’s (2013, p. 7) minimal agent model. The agent, represented as a big circle, can sense the proximity s to objects with different widths w (0.03 and 0.08). The time derivative Δs of the sensor signal provides the input to the node a of the agent’s neural network. The node is recurrently connected to itself and drives the motor node m, which controls the agent’s horizontal velocity.
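To make the mechanistic reading of these equations concrete, the following sketch implements them in Python. It is an illustration only, not Buhrmann et al.’s implementation: the Euler integration step, the single Gaussian shape, the connection weights, biases, time constants, and the mapping from the motor node to the agent’s velocity are placeholder assumptions of mine.

import numpy as np

def E(p, h=1.0, x=0.0, w=0.05):
    # Environmental state: Gaussian shape of height h, peak position x, width w.
    return h * np.exp(-(p - x) ** 2 / (2 * w ** 2))

def S(e, d_max=1.0):
    # Sensor: proximity to the shape, normalized by the maximum distance d_max.
    return 1.0 - (d_max - e) / d_max

def sigma(z):
    # Logistic activation function.
    return 1.0 / (1.0 + np.exp(-z))

# Two-node network (index 0 = node a, index 1 = motor node m).
tau = np.array([1.0, 1.0])            # time constants (placeholder values)
W = np.array([[0.5, 0.0],             # W[i, j] = w_ji, strength of the connection from j to i
              [1.0, 0.5]])            # (placeholder values)
b = np.array([0.0, 0.0])              # bias terms (placeholder values)
y = np.zeros(2)                       # node activations

dt = 0.01
p = -0.5                              # agent's horizontal position
s_prev = S(E(p))
for _ in range(1000):
    s = S(E(p))
    ds = (s - s_prev) / dt            # time derivative of the sensor signal feeds node a
    s_prev = s
    external = np.array([ds, 0.0])    # node m receives no external input
    # Node update: tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + b_j) + external input
    dy = (-y + W @ sigma(y + b) + external) / tau
    y = y + dt * dy
    # Motor node m drives the horizontal velocity (simplified mapping to the range [-1, 1]).
    p = p + dt * (2.0 * sigma(y[1] + b[1]) - 1.0)

The point of the sketch is simply that each function and update line corresponds to one of the equations above, i.e. to one component or operation of the mechanism, which is precisely the kind of model-to-mechanism mapping that the 3M criterion requires.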

Let us now turn to the classical examples discussed by O’Regan & Noë (2001ab). In these cases, we do not have any mathematical or formal description of the relevant sensorimotor contingencies. The SMT, as described by O’Regan and Noë, is a philosophical theory, and as such it does not provide scientific explanations. Rather, it should be understood as a philosophical blueprint, or more aptly as a philosophical model, defining how an abstract sensorimotor model of visual perception works. It can be shown that this abstract model, which provides indications about how to construct a sensorimotor explanation, is also inherently

mechanistic. In other words, I suggest that the blueprint of the SMT is mechanistic. In order to show this, I return to the cases of sensorimotor laws discussed earlier (§4.1).

Consider the case of eye rotation. As we have seen, eye movements alter the representation of the stimulus on the retina in lawful ways, but the eye movements themselves are a classical example of a physical mechanism. The system can be decomposed into a number of parts, such as the eye, the orbit, the muscles, etc., each performing a particular operation or being operated on. The variations of the stimulus on the retina result from the behavior of the ocular mechanism plus environmental conditions that specify the relevant parameters concerning the light array that impinges on the retina.

The sensorimotor contingencies related to visual features represent another case of mechanistic decomposition. O’Regan & Noë (2001a, pp. 941-943) focus on shape perception, where the perception of such a feature would be the result of an abstraction from the set of all potential distortions that shapes undergo under different behavioral standpoints (p. 942). Two things are worth mentioning. The first is that it is difficult to fathom what it means to say that the brain “abstracts” a “set of laws” (cfr. §4.1). Hence, it is not clear what kind of regularities may be extrapolated for the purpose of explanation and prediction. The second is that a decompositional strategy seems to be assumed by O’Regan and Noë, in that they recognize that there are distinct regularities or laws related to different features of conscious visual perception. This requires some explanation.

O’Regan & Noë state that there is a subset of sensorimotor contingencies which «correspond» to visual attributes of sensed objects in a way that is «neural-code-independent»—i.e. it does not depend on some mysterious quality of the neural information related to the nature of the features (O’Regan & Noë 2001a, p. 942). In short, this means that there are distinct sensorimotor contingencies related to distinct features. Since the sensorimotor contingencies are defined as sensory changes, and perception is, according to the SMT, essentially active, the distinct sensorimotor contingencies can be interpreted as kinds of operations or activities. Hence, there are distinct activities causally related to specific features of the outside physical objects, like colors, texture, size, or shape, i.e. the basic elements out of which our visual perception is composed (for a somewhat old, but still useful review, cfr. Wolfe 1998; Treisman 1988). As O’Regan & Noë remark, «visual consciousness is not a single thing, but rather a collection of task and environment-contingent capacities, each of which can be appropriately deployed when necessary» (2001a, p. 967). If this interpretation is correct, then the SMT seems compatible with a decompositional mechanistic strategy whereby one recognizes a set of operations (related to the distinct features), and then tries to identify the components within the system that are responsible for them. This is what Bechtel & Richardson (2010, p. 18) call the ‘synthetic’ strategy, which proceeds from a ‘top-down’ perspective, in contrast with an ‘analytic’ strategy based on the prior identification of the component parts, whose role is subsequently

specified. One way to proceed is, for example, by assigning the features to specific functional areas in the extrastriate cortex, as shown by Felleman & Van Essen (1991). There is, however, a complication. Following O’Regan and Noë, I have shown that they distinguish between different feature-specific operations, but not that they assign these operations to specific subcomponents of the system. How should we understand these subcomponents of the system?

The first step is to acknowledge that, for sensorimotor theorists, seeing is partially constituted by the environment (T5, cfr. §2). From this, it also follows that although the brain plays a necessary role in enabling perception, there is no straightforward «one-to-one correspondence between visual experience and neural activations», because «seeing is not constituted by activation of neural representations» (O’Regan & Noë 2001a, p. 966; cfr. also Noë & Thompson 2004; Pessoa et al. 1998). The relation between brain and environment is explicitly couched in terms of the dance metaphor: seeing is «somewhat like dancing with a partner» (ibid.; also Noë & O’Regan 2005, p. 567), which suggests that seeing is a process that couples organism and environment. A straightforward implication is that neural correlates of conscious content (e.g. Chalmers 2000) cannot be understood as (minimally) sufficient to generate a specific experience (Noë & O’Regan 2005). This is not to deny that the brain plays a fundamental role in perception, and it does not amount to a rejection of more or less specialized cortical areas. For example, Noë & Hurley (2003) do assume that distinct cortical areas are often associated with distinct kinds of experiences, like intramodal differences—e.g. cortical areas engendering a “red” instead of a “yellow” experience—or intermodal differences—e.g. visual rather than olfactory experiences. Areas normally associated with a certain qualitative character are called “cortically dominant,” whereas areas that, due to neural plasticity, may take over the function of other areas (for example due to lesions, etc.) are called “deferent.” To explain the qualitative differences correlated with the distinct areas, Noë & Hurley refer to a “dynamic sensorimotor approach” (2003, p. 146), according to which different cortical areas are attuned to different sources of input. In this complex dynamic process of constant interaction between environment and neural structures, the role of the brain and cortical areas is, as we have seen, to «causally enable […] our embodied mental life» (Noë & Thompson 2004, p. 19; also O’Regan & Noë 2001a, p. 968).

In short, the SMT acknowledges the following:

− Objects in the environment having particular characteristics (colors, shapes, etc.);
− Distinct sensorimotor contingencies for specific characteristics;
− Different neural structures related to different characteristics.

Hence, we have: distinct components (like cortical areas and perhaps objects in the environment), and distinct activities (the specific sensorimotor contingencies), which are causally related to the relevant components via a dynamic process. Given the theory’s vehicle externalism (T5), it may be that physical objects can also be subcomponents of a broader, extended mechanism (cfr. §5.2). As Noë & O’Regan say: «Just as mechanical activity in the engine of a car is not sufficient to guarantee driving activity (suppose the car in a swamp, or suspended by a magnet), so neural activity alone is not sufficient to produce vision» (2005, p. 584); and «The mechanical substrate is sufficient only given the embodiment of that substrate in a normal vehicle and the appropriate embedding of that vehicle in a normal environment» (Noë 2001, p. 47). In short, the point is not to deny that there are specialized cortical areas, but rather to ask how to interpret their degree of autonomy from the environment.

Neither O’Regan nor Noë provide a detailed description of how the components are associated with the activity of sensory changes, or of how they are arranged. But this does not threaten the correctness of a mechanistic interpretation. Explaining mechanistically is a process that unfolds over time, and initial sketches of a mechanism often include many black boxes and filler terms that ought to be specified by subsequent research (e.g. Craver & Darden 2013, pp. 64-118). If I am right, however, the SMT strategy of explaining feature perception is paradigmatically mechanistic, since it involves the functional decomposition of a task into a number of sub-operations or functions, and the identification of distinct components, from external objects to cortical areas, that are involved in what seems to be a mechanistic dynamic process (Zednik 2011).

5.2 The SMT as a Complement to the Orthodoxy.

Zednik observes that dynamical models amenable to a mechanistic analysis «resemble computationalist and connectionist cognitive science» (2011, p. 255). Explanation in psychology, for example, often takes the form of a functional decomposition of a problem (Cummins 1983), where the subcapacities or subfunctions of the explanandum are assigned to specific operating parts of the cognitive system (Craver 2007) that realize the computations (cfr. Miłkowski 2013, pp. 51-76; Piccinini 2007). Computational models, in other words, «specify the component operations of a mechanism that are […] localized in neurobiological component parts» (Zednik 2011, p. 241). The same lesson can be applied to the SMT as well. This is of course not to deny the contribution of dynamical models, and as I have insisted earlier (§3), DST is not inherently opposed to connectionist and computationalist approaches. Rather, this radical opposition is the hallmark of the dynamical hypothesis, i.e. a specific philosophical interpretation of dynamicism, and of the standard formulation of the SMT. In reconciling the SMT with the orthodoxy, T2 (and DH5) should be read as meaning that sensorimotor laws (or the differential equations of a dynamical model thereof) are explanatory because they describe mechanisms. This claim is not entirely original. Cliff (1991), following Lakoff’s (1988) criticism of Smolensky’s (1988) connectionist approach, stressed the importance of sensorimotor contingencies for neural models in order to overcome the problem of the ad hoc semantics of connectionist models; further, he observed that the «sole focus on information processing may omit important factors» (p. 34). Moreover, he claimed that whilst the behavior of a system can be studied as the outcome of computation, models that make no

direct reference to computations can still be of interest. Writing a few years earlier, Grossberg made much the same remark: «Without a behavioral linkage, no amount of superb neurophysiological experimentation can lead to an understanding of brain design, because this type of work, in isolation, does not probe the functional level on which an organism’s behavioral success is defined» (1984, p. 389). Nothing in these remarks amounts to a rejection of the orthodoxy of the kind advocated by defenders of the dynamical hypothesis and the SMT, but rather to its extension. More recently, Hotton & Yoshimi (2011) have adopted DST to model embodied cognition with the concept of an open dynamical system in a way that makes it compatible with the orthodoxy rather than in opposition to it.

To say that mechanistic sensorimotor explanations explain the sensorimotor behavior of a system by relying on regularities generated by the underlying operating mechanisms does not mean that a full understanding of such mechanisms is always required. Miłkowski (2013, pp. 53-54), for example, states that a mechanistically adequate model of computation must include both an abstract specification of the computation including the relevant variables (the mechanism’s function), and a complete blueprint of the mechanism implementing the relevant computation. But a complete computational explanation is not necessary in most studies on sensorimotor models. For pragmatic reasons, it may be helpful to rely on mechanism sketches (Craver & Darden 2013), i.e. incomplete blueprints of the mechanisms. Such an approach can be a forced choice when the structure of such mechanisms is unknown, when the researchers’ explanatory interest is directed at higher levels of organization of the system, or, finally, when the target system comprises a high number of variables that make a full articulation of the underlying mechanisms difficult or impossible. This is consistent with Cliff’s observation that the behavior of the system can be studied as the outcome of computations even though no direct reference to computation is made.

A further implication for the SMT is that, since the theory can be understood as an enrichment of traditional representationalist approaches, it can more easily incorporate representations within the explanatory framework. Remember that the SMT does not reject representations altogether (§2), although it puts the explanatory burden on the sensorimotor laws. In the standard formulation, the SMT does not successfully address the issue of representations, as it seems to include them in the cognitive system but denies that they have any explanatory relevance. Within the mechanistic SMT, representations are better integrated into the framework. In the neurosciences, researchers often characterize neural processes as representational (Bechtel 2016). On this reading, much research in cognitive science consists in identifying the representational vehicles and their contents. This is an important aspect of the heuristic of localization applied to the operations of mechanisms seen as control systems (e.g. Bechtel & Richardson 2010)14. Moreover, representations do not need to be understood as static and

pictorial (at least not if pictorial is taken to be synonymous with ‘static-photographic’). This observation will not surprise neuroscientists and philosophers of neuroscience, however. The fact that representations are highly dynamic is well acknowledged in neuroscience. For example, Nishimoto et al. (2011) explicitly refer to perceptual experience as dynamic. Shadlen & Newsome (1994) discuss how information may be represented, either in the spike rate of neurons or in the timing of individual spikes; Eliasmith (2003) describes neural dynamics in terms of neural representations as control-theoretic state variables. Mechanisms seem well suited to account for the dynamic character of representations, as they are inherently dynamic. More work is needed to integrate dynamical and active understandings of representations within the orthodoxy, however, and it is likely that this issue will be at the forefront of future research (Bechtel 2012). Finally, given that, as I said, not every computation is a representation, we cannot just conclude that every computational process in the SMT and in Buhrmann et al.’s dynamical model is related to representations; but the claim suffices to show that if we accept representationalism, then at least some processes can be seen as representations and play an important role in the overall explanation of the system’s sensorimotor behavior.

14 From this it does not follow that representations can explain all phenomena. It may well be the case that representations are still explanatorily irrelevant in some contexts. There is no easy rule we can refer to in order to assess the role of representations, and each case must be examined separately.

In conclusion, I want to briefly address the problem of externalism. Limitations of space preclude a full exploration of this issue, but in short, the mechanization of the SMT proposed in this study does not itself constitute a counter-argument to the SMT’s externalism. One way to accommodate the claim that the mind is “wide”15 (to borrow a term from Clark 1997) would simply be to say that mechanisms span the boundaries of the skull and the organism and extend into the environment. As Gervais observes, mechanisms can account for cases of embodied or embedded cognition (2015, p. 52) where there is a continuous and smooth interaction between agent and environment. Zednik (2011, pp. 257-258) seems to accept that mechanistic dynamical explanations may show that the mind is, indeed, extended beyond the limits of the organism. Although I cannot fully articulate a rebuttal of this suggestion here, I want to invite caution in drawing this conclusion. From the fact that a satisfactory explanation of the sensorimotor behavior of an agent must take into account mechanisms beyond the boundaries of the organism, it does not follow that such mechanisms are cognitive in any relevant sense of the term. For reasons of space, I leave this issue open for further studies.

15 Talk about “wide minds” is usually associated with content externalism (think of the distinction between wide and narrow mental content, Brown 2016). However, the SMT seems mostly concerned with a form of vehicle externalism. Noë uses the term “wide” in this sense (for example in his 2009b, where a whole chapter bears the title “Wide Minds”).

Conclusion

To sum up, if my arguments are correct, the SMT subscribes to the dynamical hypothesis in cognitive science, emphasizing the role of the perceiver as an active agent within a dynamic environment. Proponents of the dynamical hypothesis uphold a nomothetic model of explanation that makes the notion of representation redundant, and bestow explanatory power on the dynamical regularities described by sets of differential equations. The standard formulation of the SMT is also construed along the same lines, as a search for sensorimotor laws, and the relevant passages discussed above show that defenders of the standard SMT think it possible to deduce the behavior of a system (its perceptual states) from the sensorimotor regularities. I have shown that if the SMT endorses a covering law model, it is exposed to the mere description worry and also generates a puzzle about representations. The mere description worry can be avoided if we show that the SMT is consistent with a mechanistic approach, and that the sensorimotor laws are explanatory because they describe the behavior of underlying mechanisms. I have then argued that both the concrete dynamical model of the SMT realized by Buhrmann et al. and the blueprint of the SMT as outlined by O’Regan and Noë conform to the 3M requirement for mechanistic explanation. The mechanistic SMT has two advantages: it can escape the mere description worry, and it can also better account for the role of representations. This, however, comes at a cost, as my reformulation of the SMT makes it continuous with, rather than opposed to, the orthodoxy in vision science, and thus provides an answer to Buhrmann et al.’s initial question (§1).

Again, in concluding, I want to stress that opting for a mechanistic approach is one way to avoid the mere description worry. Some covering law models, at least, do seem to be explanatory (e.g. Ross 2015; §5), and there may be other ways to cope with the mere description worry. I leave it open to other researchers to show that the theory does not square well with a mechanistic approach and the orthodoxy. The mechanistic interpretation of the SMT, however, seems a perfectly viable strategy, one that finds support in the relevant literature and that comes with costs and benefits for the standard formulation. The cost is the continuity of the SMT with the orthodoxy. Yet researchers may want to welcome the latter aspect as a positive feature. My remarks, after all, call for a revision of the SMT in tandem with the orthodoxy, not a rebuttal. Sensorimotor theorists are right when they say that perceivers are agents in dynamical contexts, and ultimately this lesson can lead to mutual insights for both defenders of the orthodoxy and of the SMT. Whether my arguments lead to further consequences for the SMT will be the object of future studies.

PART IV

THE ROAD TO STRUCTURE

FROM CONFIGURATIONS OF TROPES TO PSYCHONEURAL ISOMORPHISM

7

THE CONFIGURATION AND ONTOLOGY OF VISUAL OBJECTS

In Parts II and III I have clarified, respectively, the nature of the Phenomenological and of the Neural Domain. As I argued, the basic elements of the Phenomenological Domain are properties, whereas the basic elements of the Neural Domain are mechanisms. However, as I showed earlier (Ch. 2, §1), talk about isomorphism is only warranted when structure is considered. Hence, in order to assay whether there is any PI, and whether it plays any interesting role in the search for intentional mechanisms, we must shed light on the structure of visual objects.

In this Chapter, I set out to provide an ontological account of visual objects, and of visual perceptual content, that can accommodate two constraints. The first constraint is that visual objects have a spatial-mereotopological structure. For conciseness, I will often simply refer to the configuration of visual objects. The second constraint is the particularity of visual objects, i.e. the fact that we visually relate to particular material objects and that our states of seeing are as of particular items. I argue that a trope-theoretic account of visual objects is better suited than its rivals to meet these two constraints. I thereby also articulate an account of perceptual content given the ontology of visual objects. Importantly, the ontological scope of this Chapter is limited to visual objects, i.e. I gloss over the ontological status of objects as such. Consequently, although I espouse a trope account of visual objects, I remain neutral about what the building blocks of the world are.

I develop my considerations as follows. In the first Section (§1), I set the stage for the whole Chapter by introducing the two constraints, and then three families of theories of objects: blob theories, substance-attribute theories, and bundle theories. After ruling out what I call type-three nominalism, and thus the option of blob theories, I briefly introduce universals and tropes. In the second Section (§2), I show that theoretical considerations favor a trope account of visual objects, highlighting the implications for a theory of content. Accepting a trope account of visual objects, however, forces us to consider the problem of trope similarity. If trope theory cannot solve the problem of similarity, the theoretical advantages of tropes over rival ontological options become irrelevant. In the third Section (§3), I propose a solution to the problem of trope similarity. Specifically, I argue for an improved version of natural class trope theory. I maintain that the job of the descriptive visual system (cfr. Ch. 3, §1) is that of tagging particulars in the environment, thereby fixing their phenomenological qualitative nature. Intentional mechanisms thus discriminate items in the world, making manifest to the subject a world of similarities and qualitative discontinuities. The descriptive visual system is thus a similarity detector. The advantages, and limits, of my view are finally discussed.

1. Objects, Universals, and Tropes.

1.1 Two Constraints on Visual Objects.

In Ch. 4, I introduced a distinction between visual and material objects. The latter are items in the environment, whereas the former are their appearances, constituted by a subset of the material objects’ properties. A visual object is thus a visual appearance or, to borrow Kriegel’s (2004, p. 11) terminology, a “phenomenal individual.” The process whereby the visual system captures a material object’s properties is, as I have argued, a process of property-extraction. This does not mean that all the properties we see can be identified with the properties of the material objects. Sometimes the extraction process simply goes wrong—for example due to particular lesions in the visual system, as is arguably the case with achromatopsia (cfr. Ch. 1, §2.2) or other similar syndromes; at other times, the visual system, under specific conditions, employs a distinct set of perceptual capacities and tags the material objects incorrectly, or simply differently under different circumstances1.

To say that visual objects are property bundles is not tantamount to saying that properties are the only kind of entities that feature in states of seeing. It may be argued that objects supervene on bundles as ontologically distinct entities. Boundaries may also appear in states of seeing, since they are regarded as derivative entities, whose existence depends on that of other properties and objects they demarcate (Casati & Varzi 1999, pp. 95-97). Relations may also be entities that feature in states of seeing, connecting different parts or properties of an object. Talk about relations brings us to one central concern of this Chapter. An isomorphism is a function holding between relational structures. We must therefore clarify in what sense visual objects are structured. One striking feature of visual objects is that their composing elements are not distributed helter-skelter, but they must be «properly arranged» (Tversky et al. 2008, p. 445). A vast body of literature in vision science suggests that objects have a spatial-mereotopological structure (e.g. Casati & Varzi 1999; Kimchi et al. 2016; Palmer 1999b; Pinna & Deiana 2015; Pomerantz et al. 1977; Tversky & Hemenway 1984; Tversky et al. 2008). The structure of visual objects also plays a critical role in enabling object recognition (e.g. Biederman 1987; Marr & Nishihara 1978; cfr. Barenholtz & Tarr 2007 for an overview).

As shorthand for “spatial-mereotopological structure,” I borrow Tversky et al.’s (2008) term “configuration.” Every ontological account of visual objects must be able to account for their configuration. This yields the first constraint for any metaphysics of visual objects:

1 I here prefer to talk about different tags, rather than incorrect ones. Consider the famous case of the straight stick that, once put into water, looks bent. Most philosophers have tended to interpret this case as a wrong perception. Another line of argument may suggest that seeing the bent stick isn’t an incorrect perception, but rather a correct perception under specific environmental conditions, i.e. that this is the correct way the stick looks when observed under water.

Configuration Constraint (CC): Every theory of visual objects must account for their configuration.

(cfr. §2.1 for a more detailed articulation of CC). Notice that CC is silent both about how the visual system recognizes objects and about how the objects’ parts are detected. A lively debate in vision science touches on the issue of object parsing (e.g. Hoffman & Richards 1984; Palmer 1999b, pp. 348-361). Furthermore, CC is silent on the specific mereotopological system that best describes the configuration of visual objects. In this sense, CC is very abstract, and can accommodate many distinct mereotopological systems. From this feature of CC, it follows that it can hardly guide the work of researchers interested in finding out what specific structure visual objects have. However, my theoretical purpose is not to articulate such an account, but merely to lay the ontological and metaphysical basis for such work, a basis that will serve us, in the next Chapter, to assay the problem of psychoneural isomorphism.

CC is an important constraint, but there is more. If I turn my head and look (see) at my copy of Ulysses on my desk, I will see neither a type of object nor a set of distinct objects. Instead, what I see is this particular copy of Ulysses. (Notice that, in this Chapter, I am using the word “particular” in a different sense from that specified in Ch. 4, §1.2, where I talked about facts’ particulars; cfr. also infra §2.1). The causal relation established in sensory reference connects the perceiver to a specific item in the environment within the purview of the senses. Every theory of visual objects must be able to account for this feature of our perceptual experience. This is our second constraint:

Particularity Constraint (PC): Every theory of visual objects must account for their particularity.

Later (§2), I will review the options regarding the ontology of visual objects in light of these two constraints. Before I do so, however, I must first exclude a group of theories of objects that belong to what I call type-three nominalism.

1.2 Against Type-Three Nominalism.

In Ch. 4, I laid out a disjunctive argument based on the assumption that visual objects are either property bundles or facts. As we know, I have argued that there is no scientific evidence in support of factualism, and that considerations on object detection and tracking provide some support for the bundle view. A potential objection against my argument can now be considered, namely that the disjunction at the base of my argument does not exhaust the spectrum of available theories of objects. The objection can be unfolded in two ways. The first way has it that facts are not the only option among substance-attribute or complex theories. For instance, Heil (2003) defends a complex theory of objects that is distinct from Armstrong’s states of affairs ontology. Heil maintains that objects are constituted by a substance plus attributes, but he

characterizes the properties as modes, i.e. trope-like entities2. Modes are always bundled with a substance that serves as property bearer (cfr. also Martin 1980).

We can easily sidestep this criticism. My argument against factualism was broad enough to apply to Heil’s objects as well. As I have made clear earlier (Ch. 4, §1.2), it is irrelevant for my argument whether facts are composed of universals plus particulars, or rather of modes or tropes plus particulars. Evidence from perceptual psychology suggests that all we need to fix sensory reference and allow property extraction are property instances (Ch. 4, §4). What these property instances are (instantiated universals or tropes) is not revealed by our perceptual apparatus. If such property instances are all we need to fix sensory reference, there is no need to suppose that visual objects are facts. Hence, we have evidence for a bundle view of visual objects, but not for a factualist account. It matters little, for our purposes, whether objects are facts instantiating universals or modes; either way, my argument applies to Heil’s objects as well.

The second way to develop the criticism is somewhat more complex. Again, the criticism makes the claim that the disjunction of Ch. 4 is not exhaustive, because I worked under the assumption that objects must have something to do with properties. However, not every philosopher accepts such an ontological commitment. Some philosophers contend that, strictly speaking, there are no properties, whether tropes or universals, and that objects are unstructured blobs. Blob theory is incompatible with every account of properties and is the account of objects favored by resemblance object nominalists or natural class object nominalists (Ehring 2011, p. 11). If we take this option seriously, then my argument could at best have shown that visual objects may be either property bundles or blobs.

In order to articulate a rebuttal, we need to flesh out some background. The blob theory is only available to philosophers who espouse what I call type-three nominalism. Nominalism is usually charted into two distinct camps (Rodriguez-Pereyra 2015). I call them type-one and type-two nominalism. Type-one nominalism denies that there are universals, i.e. properties or relations that can have multiple instances, whether Platonic or Aristotelian (cfr. §1.3.1). This form of nominalism is perfectly compatible with, for example, trope theory in all its varieties (cfr. §1.3.2). Type-two nominalism consists in the denial of abstract objects like numbers, sets, or propositions (Gendler Szabó 2003; cfr. Rosen 2017 on abstract objects). On this view, there are only concrete items. It is possible to adopt a type-two nominalism only about a specific class of entities, like numbers or propositions. This version of nominalism is compatible with

2 The distinction between modes and tropes is largely terminological. By calling these entities modes, Heil stresses his distance from other trope theorists, who usually regard objects as trope-bundles (2003, pp. 127-128). But importantly, Heil sees his account of objects as very close to Armstrong (1997), and therefore to facts. The only distinction between the two seems to be that, whereas for Armstrong properties are universals, for Heil they are modes. In later works (e.g. 2010), however, Armstrong too has come to accept that facts may be constituted by particulars and tropes, rather than universals (cfr. Ch. 4, §1.2).

Aristotelian universals, but is incompatible with Platonic universals (cfr. §1.3.1). Also, not all trope theories will be compatible with it: versions of the theory like class trope nominalism, for example, will not.

The distinction between type-one and type-two nominalism is standardly accepted in the debate. From the foregoing exposition, it is also clear that the two forms of nominalism are not mutually exclusive. For example, one can be a type-one nominalist who rejects universals while at the same time being a type-two nominalist who rejects sets and propositions. Finally, for purposes of exposition, I single out a type-three nominalism. Type-three nominalism consists in the rejection of both universals and tropes3. This form of nominalism denies that, in order to explain the similarities between two or more objects, we have to postulate some entity—a universal or trope—that accounts for such similarity. This form of nominalism is neutral on abstract entities. So one can accept numbers or sets, if no additional claims are made against these specific kinds of entities. (But notice that, on my definition, Platonic universals are still excluded, not qua abstract objects but qua universals.) To sum up, we can display the three types of nominalism in the following table (Tab. 2):

Three Types of Nominalism

Type-one Nominalism: Rejection of universals (Aristotelian or Platonic).

Type-two Nominalism: Rejection of abstract objects (e.g. numbers, sets, etc.).

Type-three Nominalism: Rejection of both universals and tropes.

Tab. 2: The three types of nominalism.

The blob theory is only available in type-three nominalism (Armstrong 1989, p. 38). Type-three nominalism is compatible with some forms of class nominalism. According to the family of views known as class nominalism, an object (or “concrete particular” or sometimes “independent particular”) has similarity relations with other objects not in virtue of some entity, but solely in virtue of its membership in a set or class. Thus, for example, my copy of Ulysses is similar to my copy of Lord Jim, i.e. they both have the property of being a book, because they are both members of the same class, the class of books. Of course, objects may belong to multiple classes. So, my copy of Ulysses (a) is a book and has a red cover, whereas my copy of Lord Jim (b) has the property of being a book and a white cover. Both objects are members of the same class B (being a book)—that is, a, b ∈ B—but they are members of distinct color classes, namely R (red things) and W (white things)—and thus a ∈ R, b ∈ W. Class nominalism comes in different varieties, which are shown in Tab. 3:

3 Type-three nominalism, as I have characterized it here, is roughly equivalent to what Armstrong called “extreme nominalism” (in his 1997, p. 21).

The Varieties of Class Nominalism

*Concept nominalism: Objects belong to a specific class if they satisfy a certain concept.

*Predicate nominalism: Objects belong to a specific class if a certain predicate applies to them.

*Ostrich nominalism: There is nothing in virtue of which objects are the types of objects they are.

*Mereological nominalism: The property of being P is the mereological fusion of P-things, and for something to have P means to be a part of the sum of the P-things.

*Class nominalism: Objects have a property P in virtue of belonging to a class of objects.

*Resemblance nominalism: Objects belong to a class in virtue of resemblance relations with other members of the same class.

Resemblance class trope nominalism: Objects belong to a specific class in virtue of resemblance relation between their tropes.

Natural class trope nominalism: Objects belong to a specific class of similarity in virtue of tropes whose similarity is grounded in their membership of natural classes.

Tab. 3: The varieties of class nominalism (cfr. Allen 2016, p. 70; Armstrong 1978, pp. 12-17). Varieties marked with * are compatible with type-three nominalism.

As the table shows, trope theory is a form of nominalism, and more specifically a form of type-one nominalism (rejection of universals). What I identified as type-three nominalism is broadly compatible with the following options: concept nominalism, predicate nominalism, ostrich nominalism, mereological nominalism, class nominalism, and resemblance nominalism. Type-three nominalism is compatible neither with resemblance class trope nominalism nor with natural class trope nominalism.

One first (but admittedly weak) reason for being skeptical of the blob theory is that it is, nowadays, mostly considered a merely logical option. Mereological nominalism, for example, is widely held to be implausible, and it is most often contemplated as a merely logical option (Allen 2016, p. 90, ft. 2). The best and most developed defense of resemblance object nominalism is offered by Rodriguez-Pereyra (2002). On this view, an object o has a property P in virtue of its resemblance relation with other actual (or possible) objects. So, a red rose is similar to another red rose in virtue of a primitive resemblance relation that is not reducible to a universal (cfr. Rodriguez-Pereyra 2001, 2002b, pp. 103-123). The main problem with this form of nominalism is that it does not give a plausible answer to Armstrong’s causal argument for properties (1989, pp. 49-50). This is the second, and somewhat stronger, reason for being

skeptical of blob theories. The causal argument rests on the plausible view that it is only in virtue of some property that an object is able to causally affect something else: «[…] when [x] causes something to happen, it will usually be only some of its properties that are causally significant» (Zimmerman 2008, p. 107). To borrow Zimmerman’s example, a sphere transmits motion to another ball in virtue of its mass and speed, i.e. in virtue of some of its properties. Armstrong’s argument is meant to show that class nominalism is incapable of accounting for the causal roles of properties. In his informal presentation of the argument, Armstrong made reference to objects, whereas causal relata are most often thought to be events. With a simple adjustment, and regimenting the argument, we can formulate it as follows:

(1) An event e causes f in virtue of its having a property P.
(2) For e to have property P means to be a member of a resemblance or natural class C. (Class nominalism)
(3) Even if e had not been a member of C—in virtue of the absence of some or all the other members of C—e would still have caused f in virtue of P.
(4) Hence, being a member of C is irrelevant to e’s causing f. (Irrelevancy thesis)4.

The point brought out by (3) is that being a member of a given class seems to be irrelevant to e’s causal power. Suppose we have an electron a that has electric charge P. On resemblance and natural class nominalism, it follows that «being an electron is constituted by being a member of the class of electrons» (Armstrong 1989, p. 9; cfr. also Denkel 1996, pp. 157-158). The class of electrons includes every electron: all actual electrons, future electrons, or merely possible electrons (which can be as real as the actual electron, if we embrace modal realism). But certainly, it seems that, for a’s having a causal effect f at a time t, the other members of the class are causally irrelevant. Since the presence of the other members of C is causally irrelevant, for the causal event would have occurred even in their absence, it follows that it is irrelevant for a to be a member of class C. But notice that, for resemblance and natural class nominalism, being a member of C is precisely what defines the electron a’s having the property P. The causal argument is exploited to show that something more substantial (an entity) than membership in a class C is required to ground the causal event5.

4 Ehring (2011, p. 223) gives a similar rendition of Armstrong’s causal argument, but he focuses specifically on his natural class trope nominalism. 5 It might be objected that, if one espouses singularism about causation, then properties play no role in the causal process: all that is needed are particulars (thanks to Beate Krickel for pointing this out to me). However, this does not represent a serious threat to my attack against type-three nominalism. Only if properties are understood as Platonic universals (cfr. §1.3.1), which are beyond space and time, might singular causation be a problem. On an Aristotelian universalist account, properties must always be instantiated by a particular with which they form a unit or a fact. Aristotelian universals are wholly located where the particular is. Finally, trope theorists do not have a problem with the singular account of causation, since properties themselves are particulars: «when we say that the sunlight caused the blackening of the film we assert a connection between two tropes» (Williams 1953, p. 172; cfr. also Campbell 1990, p. 23; Ehring 2011, p. 47).

This problem directly bears on the issue of sensory reference that I have discussed earlier (Ch. 4, §§3-5). As I said, an item can only be detected in virtue of a qualitative discontinuity in the environment. A disjunctive cluster of properties (physical, visual, or both) is required to fix sensory reference via a simple causal relation with tracking mechanisms:

Item 1: {P¹₁ ∨ P¹₂ ∨ P¹₃ ∨ … ∨ P¹ₙ}

Item 2: {P²₁ ∨ P²₂ ∨ P²₃ ∨ … ∨ P²ₙ}

The two items, as we have seen (Ch. 4, §5.2), differ in their disjunctive sets of properties, which stand in some causal relation with the perceiver’s perceptual system. The account sketched out in Ch. 4 is neutral about the nature of these properties, but if the causal argument applies, then it seems clear that some form of realism about properties must be accepted. By “realism” I understand a commitment to the existence of an entity, universal or trope. This rules out forms of type-three nominalism from the start, at least in the absence of a plausible nominalistic reply to the causal argument. Rejecting type-three nominalism then brings us to the rejection of the blob theory.

I now turn to the options that are broadly compatible with the causal argument.

1.3 Universals and Tropes.

Accounting for causal relations is just one of the many roles of properties (Oliver 1996, pp. 17-18), but there are several reasons that have led philosophers to espouse realism. It is sometimes argued that properties provide an easier explanation of similarities (Armstrong 1978, 1989; Oliver 1996, pp. 52-54), that they explain why things fall under predicates (Oliver 1996, pp. 49-51), that we quantify over them (Armstrong 2010, pp. 11-12; Jackson 1977b), or that they enable inter-world identifications in possible-worlds scenarios (Oliver 1996, pp. 18-19), to mention just a few.

To be a realist about properties, in the sense I have defined at the end of the previous Section, simply means to say that in order to account for the ways things are (e.g. Armstrong 1997, p. 30; Lowe 2006, p. 90) we must introduce a special kind of entity, i.e. a property. This leads to two questions. First, what are properties? Second, what are objects? The two questions are closely interwoven. We have seen that there are two answers to the second question: either objects are property bundles or they are substances plus attributes (cfr. Ch. 4, §1.1). Regarding the first question, there are two options: properties are either universals or tropes. Accordingly, visual objects can be either bundles of universals or bundles of tropes. Recall also that the properties that

constitute a visual object are properties of the material object, and that states of seeing are silent about the metaphysical and ontological status of material objects.

Since the distinct theories of properties warrant distinct theories of objects, I will first review the two general options regarding properties. I first (§1.3.1) introduce universals, and then (§1.3.2) turn to tropes.

1.3.1 Universals.

According to the mainstream view, properties are universals. I will call this “universalism.” Universalism comes in two versions. The first version is that properties are transcendental entities. Properties conceived in this way are usually called Platonic properties or “universalia ante rem.”6 The most popular version of universalism today is the Aristotelian account. Defenders of this view argue that universals are always instantiated (or will be instantiated at least once in the future), and that they can be instantiated by multiple entities in which they are wholly present. Aristotelian universals are often called “universalia in rebus.”

Consider Platonic universals first. A proponent of this view was Russell (1912), who argued that universals must be Platonic (for a modern defense, cfr. Hoffman & Rosenkrantz 2003). Two red things are spatio-temporally located, but the property of being red is wholly present in both items, regardless of whether they co-exist in the same place at the same time, or at two distinct places at distinct times. This is so because, so the theory goes, properties are not located in space-time. The two main reasons Russell gave in support of his view are that (i) properties are eternal, immutable, and indestructible, and that (ii) only particulars can be the objects of sense experience, and since properties are not particulars (as multiple objects can have the property of being red), they cannot be the objects of sense experience. Russell’s view is of course not unproblematic. The theory, for example, simply assumes that properties themselves cannot be particulars. With regard to the theory of objects, two options are available. Either objects are bundles of Platonic universals (e.g. Castañeda 1974; Russell 1912), or they are substances plus Platonic universals. Both views must of course reconcile the particularity and spatio-temporality of (at least some) objects with the a-locality and a-temporality of properties. The challenge can be met in two ways. On the first way, one may postulate that a bundle of Platonic universals also contains a single-instance universal called “haecceitas” (Heil 2003, p. 170). In my Ulysses example, the copy of my book would be a bundle of eternal, immutable and indestructible universals, such as having a red cover, being a book, etc. What makes its

particularity, however, is the compresence of a further universal, the haecceitas, that is exclusively instantiated in this copy of the book, i.e. the x such that x = this copy of Ulysses. The second way to cope with the problem is to opt for a complex theory of objects, according to which the Platonic universals somehow inhere in a substance. But then it is unclear in what sense eternal, a-temporal, and non-local properties may be wholly present in a substance. For these, among other challenges, the Platonic theory of universals has largely faded today (cfr. also Heil 2003, pp. 147-149)7.

6 Unfortunately, the terminology in metaphysics is often quite subjective. Some philosophers take nominalism to be identical with the rejection of abstract entities (Gendler Szabó 2003), i.e. what I have called type-two nominalism. Some philosophers use the label Platonic realism to assert the thesis that there are abstract entities (e.g. Hoffman & Rosenkrantz 2003), whereas the adjective “Platonic” will be used here only with reference to the theory of transcendent universals.

The most widely held view about universals is the Aristotelian version. On this view, universals do exist in space and time, and precisely, always and only in the particulars that instantiate them. The most popular form of “in rebus” universals has been developed by Armstrong (1989; 1997; 2010; cfr. also Hochberg 1965; Meixner 2009). Armstrong’s account of universals is based on two principles, called the “Principle of Instantiation” and the “Principle of the Rejection of Bare Particulars” (Armstrong 1989, p. 94). The former claims that every property must be instantiated by some particular. The latter claims that no bare particulars exist—i.e. particulars shorn of every property—and that every existent particular must instantiate at least one property. The resulting unity of a particular plus some property is a fact, an actual state of affairs (1997; cfr. Ch. 4, §1.2). Properties and particulars are glued together by a fundamental non-relational tie, sometimes called “exemplification” (Hochberg 1965) or “instantiation” (Armstrong 1989, pp. 53-57, pp. 108-110). The fundamental tie is included in order to sidestep the problem of Bradley’s regress. The problem is basically that a relation of instantiation between a particular and a universal would itself need another relation in order to be instantiated. But then, again, we would need another relation, and so on. Since there cannot be relations between properties and particulars, facts do not embody a mereological or spatial composition, although relations can obtain externally between distinct facts.

1.3.2 Tropes.

Some philosophers cast doubt on the very idea of universals: «[…] there must be something dubious about items that can be simultaneously completely present in indefinitely many objects,

7 Again, there may be some other roles for Platonic universals. For example, one may argue for the indispensability of universals thus construed on the ground that statements about sharable properties necessarily require the existence of such universals (for such an argument, cfr. Hoffman & Rosenkrantz 2003, pp. 56ff). Another line of argument may rest on the Platonic character of mathematical entities. Although I remain largely neutral on these issues, there are some reasons for being skeptical of such universals. Heil expressed the worry that «I have no idea what it might mean to say that universals residing (‘in some sense’) outside space and time have instances in the here and now» (2003, p. 148); Denkel remarks that «[…] in the concrete reality of observables, properties and their changes never exist in isolation and always inhere in objects» (1996, p. 33). A further problem for Platonic universals comes from singularist accounts of causation: Platonic universals are beyond space and time, and it is unclear how they may play a role in causal relations. items that are not affected by the vicissitudes of the objects to which they, perhaps temporarily, belong […]» (Campbell 1990, p. 12; cfr. also pp. 27-51). Another reason for being skeptical of universals is that they seem to force us to accept a parallel ontological category, that of substances or “property bearers” (cfr. also Ch. 4, §1.2), which gives rise to a number of puzzles (Campbell 1990, pp. 7-11; Denkel 2000). Philosophers who accept trope theory believe that the qualities that we observe in things are themselves particulars, without the need to postulate universals or bearers. These qualities go under different names: they are variously called “tropes,” “moments,” “modes,” “abstract particulars,” or “accidents” (e.g. Campbell 1990; Denkel 1996; Ehring 2011; Maurin 2002; Mulligan et al. 1984; Simons 1994; Stout 1921; Williams 1953). For terminological clarity, I will continue to call them “tropes.” Robb defines tropes as follows:

[…] particularized ways [of being] instantiated by (wholly) distinct objects at the same time. Tropes are not, and cannot be, a “one across many.” So the yellowness of a tennis ball is not the same as the yellowness of another tennis ball, even though these two properties resemble each other, perhaps exactly. (Robb 2005, p. 469)

This passage nicely clarifies the nature of tropes in contrast with universals. Roughly speaking, if we accept universalism, two yellow tennis balls will share a common property, the property of being yellow—where “sharing” may be understood either as the co-instantiation of the same property, or as the participation of two distinct particulars in the same transcendent universal. Trope theorists argue, on the contrary, that there is no such shared entity, and that the first tennis ball just is yellow1 whereas the second tennis ball is yellow2. Tropes, unlike universals, are simple and unique: they are incapable of being wholly present in multiple places at the same time (Maurin 2002, chapter 1).
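The contrast can be put schematically; the notation below is mine and merely illustrative of the two positions just described:

− Universalism: Yellow(a) & Yellow(b), where one and the same property, Yellow, is wholly present in (or participated in by) both balls a and b.
− Trope theory: a has the trope yellow1, b has the numerically distinct trope yellow2, and yellow1 exactly resembles yellow2; no single entity is shared.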

Trope theory provides an elegant and ontologically parsimonious solution to the problem of properties, but it comes at a cost. Whereas universalists explain similarity in terms of the sharing of the same properties, trope theorists cannot do this and must resort to some other theoretical option8. The standard strategy is to relate distinct tropes by similarity relations. Thus, for example, the two tennis balls would be exactly similar, having the same shade of yellow. A third tennis ball with a similar but somewhat darker color would be similar to a lesser extent to the two previous balls, and so on. One advantage of this view is that, as I will later argue, it nicely accounts for the problem of imperfect community—which indeed arises specifically for Aristotelian universalists. The price is that no trope theory seems able to provide a satisfactory solution to the problem of trope similarity. I will elaborate on this issue, and provide my solution to it, limited to the case of visual objects, in §3.

8 Some philosophers, notably Lowe (2006, p. 16), admit tropes in their ontology, but consider them just as instantiations of universals. (Notice that Lowe prefers the term “mode” to distinguish them from tropes standardly construed as entities that make universals redundant; cfr. 2006, p. 14). In this case, the problem of trope similarity is easily accounted for in terms of shared universals.

Since tropes are themselves particulars, most trope theorists have exploited this feature to argue for a one-category ontology of objecthood (Maurin 2002, pp. 127ff; McDaniel 2001). On this view, objects are not just mereological sums of tropes, for not every mereological sum forms an object. Instead, objects are mereological sums of compresent tropes (e.g. Campbell 1981, p. 483; Denkel 1997; Ehring 2011, p. 98). (Compresence also goes by other names, such as “co-inherence” (Mill), “concrescence” (Stout), “togetherness” (Goodman), and “concurrence” (Whitehead, Keynes, Mill); cfr. Williams 1953, p. 8.) In this sense, objects embody a mereological kind of composition, in contrast with facts (cfr. §1.3.1; and Ch. 4, §1.2)9. However, some trope theorists still admit substances or “property carriers.” For example, Martin (1980) and Heil (2003) both accept trope-like entities, but they argue that objects cannot be analyzed into tropes. Martin argues that: «An object is not a collectable out of its properties or qualities as a crowd is collectable out of its members. For each and every property of an object has to be had by that object to exist at all» (Martin 1980, p. 8).

2. The Ontology of Visual Objects and Properties.

As we have seen, theories of objects are not independent of theories of properties. In this Section, I analyze the available options about properties and objects (§1.3) in light of the two constraints introduced in §1. A cost-benefit comparison will favor a trope view.

2.1 Configuration and Visual Objects.

2.1.1 The Configuration Constraint.

The first constraint on the ontology of visual objects is the Configuration Constraint (CC). In §1, I have formulated CC as follows:

Configuration Constraint (CC): Every theory of visual objects must account for their configuration.

In what sense do visual objects have a configuration? Let us first briefly recall that in Ch. 4 I introduced a distinction between material and visual objects. (This is analogous to the distinction between visual objects and objects of vision, cfr. Casati 1991, pp. 15-21). The structure of visual objects does not neatly correspond to the structure of material objects. As Feldman stresses, in studying visual objects the focus is «not on how the world is structured, but rather on how the subjective perceptual interpretations are organized, and then ask how this kind of organization most naturally decomposes into object-like components» (2003, p. 252; cfr. also Marr 2010, p. 50). Kimchi et al. (2016, p. 35) define visual objects as «coherent unit[s]» that are «organized by Gestalt factors». The key term here is “organization,” which I take as synonymous with my “configuration.”

9 Notice that the bundle theory does not entail that properties can exist independently of objects. This needs to be argued separately. Denkel calls the claim that every property needs an object, although objects themselves are analytically constituted by properties, the “Benign Doctrine of the Substratum” (Denkel 1996, pp. 35ff).

No psychologist denies that visual objects have different visual properties, but these properties must be combined and organized in some way in order to yield a coherent whole. To this end, the visual system exploits multiple Gestalt factors, or grouping cues, that bring distinct elements into spatially organized wholes. Typical examples of such grouping cues include: closure (e.g. Elder & Zucker 1993; Pomerantz et al. 1977), connectedness (e.g. Palmer & Rock 1994), convexity (e.g. Bertamini 2001; Liu et al. 1999), regularity of shape (e.g. Feldman 2000), top-bottom polarity (Hulleman & Humphreys 2004), probably attentional selection (Kimchi et al. 2016), as well as other Gestalt organization principles or “Gestalt laws” (for an overview of organizational principles, cfr. Wagemans 2015). None of these cues is alone sufficient to make a visual object, and the exact nature of perceptual grouping is still an open and controversial issue in vision science. I will refer to these cues collectively as “configurational factors.”

Configurational factors do not operate exclusively at the level of basic visual features. Rather, some factors operate between properties, whereas others seem to be higher-order relations holding between complex parts that themselves contain different properties. Talk about parts is stressed by Palmer, who says that visual objects are «multileveled hierarchical structure[s] of parts and wholes, each of which has a representation of holistic properties as well as component structure» (1977, p. 443; cfr. also Kimchi 2015). As Palmer goes on to clarify, such parts are «elements of mental representation that can be processed as a single entity, regardless of their internal complexity, at a global level of analysis» (ibid.). A different perspective is taken by Pinna & Deiana (2015), who regard visual objects as structured holders of properties and their spatial relations, as well as by Blaser et al., who define visual objects as «constellations of visual features» (2000, p. 196). In general, vision scientists discuss many distinct configurational factors. Some of them operate at a more basic level. For example, feature-object binding is arguably a low-level configurational factor that attaches the relevant features to the right object (cfr. Ch. 4, §3). Other configurational factors operate at higher-complexity levels, where the elements are processed as basic units regardless of their internal complexity. This is the case of object parsing (e.g. Palmer 1999, pp. 348-361). Think for example of a human body seen as composed of arms, legs, head, etc., or of a table composed of a surface and some legs. It has been shown that judgments about objects’ decomposition into parts are consistent across different subjects (e.g. De Winter & Wagemans 2006). This universality of visual object parsing is indeed the product of systematic, descriptive rules on which the visual system relies. Parsing objects also plays a critical role in object recognition (e.g. Barenholtz & Tarr 2007)10. The exact nature of these parsing rules is still largely unknown, and several theories have been discussed (Palmer 1999, p. 353). Proponents of the boundary rules approach suggest that the visual system detects some boundaries first, and that parts emerge as the result of boundary individuation. A critical question within this camp is therefore to explain how boundaries are detected. For example, Hoffman & Richards (1984) have proposed a rule for the detection of boundaries, the concave discontinuity rule, which states that objects are divided into parts where they exhibit abrupt changes in surface orientation towards the interior of the object. Explaining which configurational factors and rules determine the configuration of visual objects is, of course, the task of vision science. From a philosophical viewpoint, our only interest here is to register the configuration of visual objects and to elaborate an ontological account that is able to do justice to it.

10 Adopting a viewer-centered perspective (Marr & Nishihara 1978), object recognition—so it has been suggested—may work by comparing the structural variations of objects due to perspectival changes to some shape primitives stored in the perceiver’s memory (Minsky 1975). A popular version of this idea has been developed by Biederman (1987). On his account, segmented regions of an object are approximated to a set of basic, simple geometrical components called “geons.” How exactly object recognition works, and whether recognition supervenes on structure or whether a holistic approach is to be preferred, will not be relevant in the present work. What is important is that an account of the structure of visual objects is central for object recognition as well as for other cognitive functions, like the prediction of future shape changes (e.g. Ling & Jacobs 2007).

So, visual objects are constellations of visual properties that exhibit a specific configuration. The overall configuration of a visual object is the result of configurational factors that operate at various hierarchical levels, from features to larger units. Assuming a visual object O, to say that O has some configuration is to say that:

1. O can be decomposed into a set of parts P1…Pn.

2. The connected parts P1…Pn have boundaries.

3. Each part P1…Pn has a shape.

Green (2017) has a very similar definition of what he calls “compositional structure.” His concept of the compositional structure of visual objects is coextensive with my concept of “configuration.” However, our definitions differ in the following respects. First, Green adds in (1) that the parts are “proper parts;” I do not think this is necessary. Although there is no univocal definition of parthood in mereology (cfr. Casati & Varzi 1999), the notion of a proper part is usually spelled out as that of a part that may count as an independent object, were it not connected to a larger whole. Second, Green adds in (1) that the parts must be “pairwise disjoint,” which means that, for all pairs (Pi, Pj) drawn from P1…Pn, Pi and Pj do not overlap. Again, this does not seem necessary, as distinct parts may overlap in a visual object. Third, Green talks about «approximate part-centered locations» of boundaries in (2) and of «approximate intrinsic shapes» in (3). This allows him some flexibility in applying the definition of configuration to the dynamics of changes and alterations of visual objects. This is due to the fact that Green’s primary goal is to focus on structure constancy as allowing object recognition in dynamic contexts. As I said in Ch. 3 (§1), I adopt a synchronic perspective. Finally, I disagree with Green in referring to the shapes mentioned in (3) as “intrinsic.” A part’s shape sometimes depends on subjective factors that change in different contexts due to different configurational factors. We will later see that shapes are, in this respect, no less “subjective” than colors (§3.3).

The configuration of a visual object is thus mereotopologically articulated11. Simple mereology, the study of part-whole relations, does not suffice to account for the configuration of visual objects, since notions of connectedness and boundary also play a role in defining and demarcating wholes and parts from one another. The latter are the proper objects of study of topology, both in mathematics and in the branch of logic with the same name. To say that visual objects have a mereotopological structure is to say that the proper way to study their configuration is by means of both mereology and topology. The exact definition of the parts and, in general, of the elements that feature in the configuration of visual objects depends on the level of hierarchical organization of visual objects. Some parts may just be simple properties, or basic visual features, whereas other parts may be more complex units that exhibit some internal complexity but are, nonetheless, processed as a single unit by the visual system. This need not be a problem, as larger chunks of visual objects may simply count, from a metaphysical point of view, as objects in their own right, amenable to the very same considerations about objects discussed earlier (§1.2), and below (§2.1.2).
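For illustration only, conditions (1)-(3) can be restated in the usual mereotopological vocabulary (cfr. Casati & Varzi 1999); the predicates Part and Conn, and the functions b and s for boundary and shape, are my own shorthand rather than notation drawn from the literature cited:

− Decomposition: O has parts P1…Pn, i.e. Part(Pi, O) for each i.
− Connection and boundaries: Conn(Pi, Pj) holds for at least some pairs of parts, and each connected part Pi has a boundary b(Pi).
− Shape: each part Pi has a shape s(Pi).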

There are two more things that must be addressed before I turn to the ontology of visual objects in light of CC. Firstly, Green thinks that an argument is needed to show that we see the configuration of visual objects. He then sets up an argument based on Siegel’s (2010) contrastive argument. I disagree that we need an argument to show that we see the configuration of visual objects. I regard CC as a basic, constitutive feature of the phenomenology of states of seeing (cfr. Ch. 3, §1). That we see visual objects as having a particular configuration is no more in need of demonstration than color or shape perception is. My definition of states of seeing is broad enough to accommodate CC:

SoS = df A mental state that has visual presentational character. (Ch. 3, §1.1)

I have prudently omitted specifying what kinds of entities are “presented” in a state of seeing. By now, we know that visual properties are a fundamental component of the phenomenologically manifest. But properties alone do not exhaust the ontological spectrum of our phenomenology. Objects may supervene on property bundles, for example. Furthermore, if we espouse CC, boundaries must be included in our visual ontology. Mereological relations represent a special challenge, and in general so do relations of other kinds that hold between properties. Do we see mereological relations? Do we see that colors are always spatially constrained by forms? And what is the modal strength of this claim? The question does not need to be answered here, since it depends on the nature of mereological relations within an object, that is, on whether the relations are intrinsic or extrinsic12.

11 Green points out that not all shape representation formats are mereologically structured. For example, in some view-based approaches to object recognition, shape is represented by means of a vector composed of some critical features of an object from a viewer-centered perspective (e.g. Ullman & Basri 1991). Surely shape need not be represented in mereological terms. However, notice that the view-based approach does not seek to provide an account of the appearance of visual objects within states of seeing, but only of object recognition. As I specified, I remain silent on object recognition, and focus exclusively on the structure of visual objects.

Since visual objects have a configuration that holds both between basic visual features and between different parts of a visual object, vision scientists often find it useful to study visual objects as relational structures (Ch. 2, §1; Ch. 8, §3). One advantage of doing so, as I will argue in the next Chapter (8, §3), is that we can exploit different formal models that exemplify the configuration of visual objects understood as visual accuracy phenomena (Elgin 2017). Modeling visual objects as relational structures is possible in virtue of the fact that visual objects themselves have a configuration. The next question is: what kind of ontology and metaphysics of objects accounts for the configuration of visual objects?
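As a rough illustration of what such a formal rendering might look like (this schematic notation is mine, not one drawn from the models discussed in Ch. 8), a visual object can be treated as a relational structure:

− O = ⟨D, R1, …, Rk⟩, where D is a domain of elements (features or parts) and each Ri is a relation over D (e.g. connectedness, relative location, spatial constraining).

The configuration of the object is then captured by which relations actually hold among which elements of D.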

2.1.2 Facts and Bundles.

I will review different ontological theories of objects in light of CC. Each account of objects can be further refined, and countless metaphysical disputes exist about how we should conceptualize the notion of “object.” However, for my purposes it will suffice to review the most abstract versions of each theory, relying on considerations that must be shared by all further, more specific elaborations of that particular ontological account. The accounts to be considered are substance-attribute theories and bundle theories. The latter can be developed both in terms of universals and in terms of tropes. Regarding the former, I will here briefly consider the option that visual objects may be facts, or that the properties we are visually acquainted with may be instantiated universals. It will turn out that CC puts further pressure on factualism.

12 A further question is the following. It seems that states of seeing necessarily present visual properties, but: (a) is the modal strength of this proposition correct, i.e. is it necessarily the case that when we see, we see visual properties? And (b) does it follow that visual properties are necessarily manifested in states of seeing? A prima facie answer to the first question seems to be: yes, considerations about material object detection and tracking (Ch. 4) make it necessarily the case that states of seeing make visual properties manifest. An answer to the second question is likely: no, for some visual properties may appear in different sense modalities. For example, one can think of cases of synesthesia, where a subject enjoys a color phenomenology by acoustic stimulation. To my knowledge, there still is no work addressing the former question and articulating a modally stronger definition of states of seeing. This need not be a problem for the present account, however. As I said in Ch. 3, I only focus on ordinary cases of visual object perception, leaving out hallucinations as well as many cases of “weird” perception (e.g. muscae volitantes, etc.).

2.1.2.1 Facts, Facts, and Facts.

Let us consider the first option and suppose that visual objects are facts. In Ch. 4, I have mostly talked about facts of the following kind: “a’s being F.” Take a simple visual object like the one represented in Fig. 18.

Fig. 18 A simple visual object.

Suppose that this is a fact. The fact can be metaphysically analyzed as p’s (the particular) being a straight line. However, that would obscure the internal spatial arrangement of its parts. The object is composed of four distinct elements in a specific spatial arrangement. Each short segment is an element of its own, and therefore instantiates the property of being a segment. Let us, for now, ignore that each segment has a specific form, and probably also a color (in this case, white). So, what we have are four distinct particulars, let us call them a, b, c, and d, and each particular instantiates the property of being a segment. If the properties are construed as Aristotelian universals, then there is a single property of being a segment that is wholly instantiated in the four distinct particulars. If the properties are construed as tropes, then each property of being a segment is a distinct entity, s1, s2, s3, and s4, each standing in some similarity relation to the others. Remember that, in both cases, what we get is a fact, a complex, ontologically heterogeneous entity. But if each particular instantiates a trope—like a’s having s1, b’s having s2, etc.—then what we get are four distinct facts. Something is needed to connect them. It is at this point that we may want to introduce facts of the second type (cfr. Ch. 4, §1.2).

The second type of fact has the scheme “a’s having R to b” (Armstrong 1997, pp. 28-29; Mulligan et al. 1984). Consider an example, the fact that “the book is on the table.” Here we have two particulars, the book and the table, and a spatial relation, i.e. being on. We can now see how to account for the spatial structure of visual objects within factualism. What we have are four distinct particulars, each, say, instantiating a property (universal or trope). The particulars are connected with each other in a specific spatial arrangement, as shown in Fig. 18. For simplicity’s sake, let us just call this spatial-topological arrangement “connectedness.” It is also clear that not all the distinct particulars are connected with each other. For example, d is not connected with b. Also, nothing prevents us from considering the distinct facts—a’s being s1, b’s being s2, etc., if we focus on tropes, or a’s being S, b’s being S, etc., if we focus on universals—as themselves the particulars that stand in some connectedness relation (C) with each other. In other words, what we get is the following (for simplicity, I formalize with universals):


− a’s standing in relation C with b & b’s standing in relation C with a.
− b’s standing in relation C with c & c’s standing in relation C with b.
− c’s standing in relation C with d & d’s standing in relation C with c.

Notice that we have now multiplied the facts! We started by asking whether the simple object represented in Fig. 18 is a fact. But in order to capture the spatial structure of that fact, we had to decompose it into further facts. Also, each of the relata in the relational facts is a fact on its own. (I leave open whether relational facts on the same line are really distinct, or just the same fact; in that case, the relations are clearly symmetrical, but not transitive). This rendition captures Armstrong’s idea that mereological and spatial relations can be given as holding externally between distinct particulars (Armstrong 1997, pp. 37-38).

Now let us focus on a single segment, like a. That segment has a shape G and a color D, so one could say that a is G & D. But now a psychologist could point out that there is an intimate relation between the segment’s shape and its color, a relation of spatial constraining: the color is kept within the boundaries of the shape of a. To account for this, friends of facts may readily introduce a further relation R holding between the properties. However, the relation cannot hold between the properties themselves, but only between particulars that instantiate them. So, we need to break up our fact again and introduce a relation R as holding between a particular a1 that instantiates G and a particular a2 that instantiates D, i.e. a1’s having R to a2. The same operation repeats for all four segments. If one opts for a trope version of facts, then the relation itself is a trope that connects different parts.

By now, it will be clear that one can virtually continue ad infinitum, adding or decomposing further facts. This may appear unpalatable to some philosophers (like the present writer), but it is perfectly consistent with fact ontology. Indeed, as Armstrong stresses, space-time is but a conjunction of myriads of facts, facts that can be further structurally decomposed, in principle with no end, in a bottomless decomposition. The upper limit, so to speak, will be a single fact, the only fact that instantiates the property W—i.e. being a world—the fact that comprises everything that exists (Armstrong 2010, p. 31). A lingering worry is how exactly to identify specific facts, if virtually everything turns out to contain many distinct other facts. However, we should not lose sight of our central concern: the only way to account for the configuration of the visual object within factualism is by decomposing the fact of the object’s being a straight line into a number of other facts. The claim is not that a single visual object will contain an infinite number of facts (arguably, we might put a biological limit on the number of facts we are able to visually detect), but rather that, if we accept factualism, then visual objects cannot be single facts, but only clusters of distinct facts.

An alternative, outlined at the end of Ch. 4 (§5.1), is that a visual object may not be a fact, but simply composed of Aristotelian universals, properties and relations, instantiated by a fact. In this case, the material object—the thing in the world that forms the singular perceptual referent of a state of seeing—may be a fact (or a manifold of distinct facts), from which we only extract some properties. These properties then form a given property-bundle of instantiated universals that feature in our states of seeing.

2.1.2.2 Configuration and Bundles.

Let us now suppose that visual objects may be bundles of properties. Bundles can be made of universals or of tropes. I consider both, in this order.

Suppose that visual objects are bundles of universals. Such a bundle may only be composed of Platonic entities, as Aristotelian universals must, by definition, be instantiated by some particular, and therefore cannot form genuine bundles. There are at least three problems for bundles of Platonic universals in relation to CC. First, an obvious problem for bundles of Platonic universals is that it is far from clear how they could form some kind of structure, given that they are entities without location in space and time. Suppose a philosopher wants to construe the structure of visual objects out of items that exist beyond space and time. To do so would entail postulating transcendental, Platonic relations holding between the properties themselves. But obviously, from the fact that my copy of Ulysses has a certain shape L and color C standing in some relation R, it does not follow that the properties L and C themselves stand in a transcendental relation R. Otherwise, how could different objects participating in the same properties have a different structure? One way to sidestep this problem could be to work out a theory of Platonic universals where the properties of being colored and of having a certain shape have multiple corresponding universals: this particular shape L and this particular color C are Platonic universals that always happen to be in relation R. However, this solution comes at a very high price, for now we have a world made up of particulars plus a virtually infinite number of Platonic entities.

The second problem is that transcendental entities can only be in configuration relations in a very abstract and metaphorical sense, for, obviously, entities that do not exist in space-time do not stand in configuration relations. In general, philosophers who maintain a transcendental account of universals, like Grossmann (1992, pp. 30ff), deny that properties are somehow located in space. In doing so, they are rejecting not only tropes, but also immanent universals.

The third and final problem for Platonic bundles is epistemological. If we can only be perceptually acquainted with things in space-time and of a certain mass (cfr. infra §2.1; also Ch. 3, §1), how can we come to know of visual properties if they are not located in the space-time array? Denkel has put some pressure on Grossmann’s account of non-located universals precisely from the epistemological viewpoint:

If properties are neither in space nor in time, and thus are not located in objects even as instantiations, it turns out to be a total mystery how it is ever possible that we come to know that things exemplify properties, by perceiving them in space and time. (Denkel 1996, p. 173).

Surely, not all knowledge needs to be of spatially located things. (And it is well beyond the scope of this work to determine the nature and source of knowledge more generally). But if one thing seems clear, it is that we see things as having colors, shapes, textures, etc., and we seem to see the color of that book’s cover as spatially located precisely in the region occupied by the book. For these reasons, a Platonic account of visual object bundles does not seem able to satisfy the configuration constraint.

An alternative option is to say that visual objects are bundles of tropes. On this view, particularized entities, tropes, stand in some configuration relation with one another to form a complex whole, i.e. a visual object. Mulligan (1999) explicitly mentions this option as a viable, and preferable, way of construing the ontology of visual (perceptual) objects, in contrast with Armstrongian states of affairs, i.e. facts, which embody a non-mereological composition with a sentence-like shape. Indeed, trope theorists agree that objects composed of tropes have a mereological structure (e.g. Ehring 2011, pp. 98ff). Trope bundles, however, are not mere aggregates of tropes. A simple fusion of tropes would still be insufficient to account for the unity of an object. What glues tropes together in a bundle is a compresence relation (§1.3.2).

Trope bundles can easily accommodate CC because tropes have a double nature: they play the role both of substances and of properties. The parts of objects that form the units of higher-level configurations are themselves just chunks of objects composed of visual properties, i.e. tropes.

2.1.3 Interim Conclusion.

A single configurational factor alone does not suffice to univocally determine the configuration of visual objects. However, one option does not apparently square well with CC: factualism (cfr. Ch. 4). If a visual object is understood as a single fact that features in our state of seeing, we cannot satisfactorily account for its configuration. The only way to do so, on a factualist account, is to further decompose the object into a variety of facts. But if this is correct, it then follows that what we see must be a manifold of distinct facts. If we combine this consideration with the argument developed against factualism in Ch. 4, it follows that, if what we see are instantiated, Aristotelian universals, then we only see the facts’ properties. From this perspective, facts and trope bundles are very similar, since in both cases what we see is a constellation of particularized properties. However, we will later see (§3.3), in relation to the problem of subjective variations of perceptual content, that tropes have a distinctive advantage over Aristotelian universals.


2.2 The Particularity Constraint

2.2.1 Keeping Particularity Within Representationalism.

The second constraint on theories of visual objects is the Particularity Constraint (PC), which states:

Particularity Constraint (PC): Every theory of visual objects must account for their particularity.

Recently, the problem of the particularity of perceptual experience has been the object of some discussion (e.g. Burge 2010, p. 84; Gomes & French 2016; Nanay 2012; Schellenberg 2010, 2016; Soteriou 2000). Different notions of particularity have been discussed, and different proposals have been advanced about how to account for it. I first briefly spell out the notion of “particularity” that features in the present context. Then, I justify the introduction of particularity within perceptual content. I thereby argue, specifically contra Schellenberg (2010), that we can account for perceptual particularity without adopting a (partially) relationist view. I also show that we can dispense with Fregean contents if we adopt a trope account of visual properties.

I understand PC as the claim that we only see particulars (e.g. Aristotle 1993, pp. 5-7; Moore 1953, p. 30; Russell 1912, §10)13. There is, unfortunately, no simple way to define the notion of particularity (and some may even cast doubt on the distinction between universality and particularity, e.g. Macbride 2005). I do not wish to settle the question, which would require an extensive work of its own. For my purposes, we can define particulars, in contrast with universals, as unique entities, i.e. entities that cannot be multiply instantiated. A further requirement on the notion of particularity in this work is that particulars must be located in space-time. The latter feature may not be an indispensable feature of particularity, since theoretical physics may arguably produce some suitable counterexample. However, it does seem that what we see must have some location in space-time. (This feature may easily remind one of Kant’s account in his Transcendental Aesthetic, cfr. KrV, A48-49/B66). The idea is, however, very intuitive and simple: when I look at my copy of Ulysses, what I am seeing is this particular copy, having this particular shape, and this particular shade of color(s). Our question is: should we require the content of states of seeing to make room for particularity?

The problem of the particularity of perceptual experience emerges as soon as one tries to elaborate a satisfactory account of the accuracy condition of experience (Soteriou 2000).

13 It is unfortunate that terminology in metaphysics happens to be so messy and confusing. The reader may easily be misled by the term “particularity” used here, confusing it with the facts’ particulars discussed in Ch. 4. Again, to be as clear as possible, I distinguish facts’ particulars, which are property bearers, from “particularity” used in this context in the sense defined above.

Suppose that we take a state of seeing to specify the conditions under which it is accurate, i.e. to have content. Whatever fills the generic description of the content can then satisfy the state’s accuracy conditions, even though, in some cases, it may be counterintuitive to think so. Consider the following case. In situation C, under normal conditions, Mary sees John. Now suppose an alternative scenario, a situation C’ in which, the night before Mary sees John, an evil scientist has secretly put a device in her brain. When Mary meets John the day after, the device is activated: it severs the connections between the retina and the lateral geniculate nucleus, blocking the standard causal-physiological path of information transmission from the eye to the striate cortex. At the same time, the device stimulates Mary’s brain in such a way as to produce a hallucination of John exactly from Mary’s viewpoint. Finally, let us consider a situation C’’ in which Mary, unbeknownst to her, meets John’s twin brother Martin. Martin and John are like two peas in a pod, and no one, by just looking at them, can tell the difference.

Although Mary cannot introspectively tell the difference between her states of seeing in the three situations, it seems that we need a way to articulate the plausible intuition that, in all three states, Mary is representing different things. In C Mary has a state of seeing John. In C’’ Mary has a state of seeing Martin. In C’ Mary has a veridical hallucination of John (Lewis 1980). The three states have different causes: John, Martin, and the device. But in all three cases the accuracy conditions seem to be satisfied by different perceptual relata. The point is not, as we might erroneously think, that the states of seeing must be introspectively discernible, but rather that «[i]t is necessary to specify which particular object in a subject’s environment is represented to determine whether the subject’s environment really is as it is represented to be» (Schellenberg 2010, p. 21). I will now articulate an account of perceptual particularity that is compatible with what Schellenberg calls “austere representationalism.” This serves the following purposes in this work: first, it further constrains the ontology of visual objects; and second, it narrows down the spectrum of possible accounts of content.

Schellenberg develops an account of perceptual particularity that is both representational and relationist (cfr. Ch. 3, §1). On her account, states of seeing (I adopt my terminology, to avoid any confusion) include external elements as constituents of the mental state (e.g. also Searle 1983; McDowell 1984). This feature of perceptual states is used to motivate a Fregean gappy content view that is meant to account for perceptual experience while retaining both perceptual particularity and the phenomenological indiscriminability of genuine perceptual states and hallucinations. I argue that we have no reason to include external items as constituents of perceptual states in order to save perceptual particularity. Perceptual particularity can be saved if we reject generic content (also called “existentially quantified” content) and characterize the properties of perceptual states either as tropes or as immanent universals. In the next Section (§3), I argue for the superiority of a trope account. Before doing that, I reconstruct Schellenberg’s account of perceptual particularity.


2.2.1.1 Schellenberg’s Argument.

According to Schellenberg (2010), the particularity of perceptual experience can be understood in two ways. The first is relational particularity, the perceiver’s relatedness to a particular item in the world. The second is phenomenological particularity, the intentional directedness of a mental state toward a given particular in the environment. Schellenberg bases her argument for relationism on relational particularity. Roughly, the idea is that the only way to account for relational particularity is to include external items in the states of seeing they are about. There are three ways in which we can spell out the notion of relational particularity:

a) Causal: the perceiver is causally related to a particular.
b) Phenomenological-relational: seeing a particular object makes a constitutive difference to the phenomenology of the mental state.
c) Content-relational: particular items in the environment are constituents of perception:
   c.1) A subject has an accurate perceptual experience only if she is perceptually related to the item she is experiencing.
   c.2) The particular item she is experiencing makes a constitutive difference to the accuracy conditions and thus to the content of the experience.

Schellenberg rules out options (a) and (b) on the following grounds. Causal particularity is not able to discriminate between different states since, if we accept the common factor view or conjunctivism (Johnston 2004)—i.e. the claim that hallucinations and genuine perceptual states are fundamentally of the same kind—the very same mental state can be brought about both by a genuine perceptual relation and by a hallucinatory process. Hence, items in the world cannot possibly make a difference to the conditions of accuracy of a state of seeing. Causal relations, in other words, cannot ground relational particularity.

Phenomenological-relational particularity is ruled out on the ground that it entails the bizarre view that genuine perceptual states and hallucinations are necessarily phenomenologically distinct. If a particular item in the world makes a constitutive difference to the perceiver’s phenomenology, it obviously follows that, in hallucinatory cases where the object is absent, the perceiver’s phenomenology must be different. However, even if we accept some form of disjunctivism, on which hallucinatory states and genuine perceptual states are of different kinds, we would still have to explain the fact that hallucinatory states and genuine perceptual states are phenomenologically indistinguishable. Hence, phenomenological-relational particularity does not seem the right account of the particularity of states of seeing.

It is by now clear that, for Schellenberg, the only way to articulate a notion of perceptual particularity is to opt for (c): it is only by admitting external items as constituents of a perceiver’s state of seeing that we might defend perceptual particularity. But, she argues, austere representationalism cannot account for (c); hence, austere representationalism must be rejected in favor of a partly relationist and partly representationalist view14. Before I articulate my reply, let us have a closer look at why austere representationalism would fail.

14 Partly relationist because external particulars must be constituents of states of seeing, and partly representational in order to account for the apparent indiscriminability of hallucinatory states and genuine perceptual states.

Schellenberg calls “austere representationalism” any view on which a state of seeing lacks any relational component. Austere representationalism is broadly compatible with different forms of representationalism (cfr. Ch. 3, §2). All forms of strong representationalism, for which a mental state’s phenomenology is just identical with a particular kind of content, and almost all forms of weak representationalism, for which the phenomenology supervenes on content but is not identical with it, are forms of austere representationalism. (Excluded from the list are views like Searle’s (1983), McDowell’s (1984), or her own, according to which mental states are partly constituted by external items). Austere representationalism is defined by the three following theses:

1. Experiences have contents.
2. A state of seeing and a hallucination can have the same content.
3. The content is identical with phenomenology or supervenes on it.

Schellenberg’s verdict against austere representationalism goes as follows. Since a state of seeing and a hallucination can have the same content, and the content is identical with phenomenology or supervenes on it, it follows that a state of seeing and a hallucination have the same content and phenomenology. As Davies put it: «if two objects are genuinely indistinguishable for a subject, then a perceptual experience of one has the same content as a perceptual experience of the other» (1992, p. 26). Hence, hallucinations and states of seeing cannot specify the particulars they are about. But since we must make sense of the particularity of experience if we want to provide a satisfactory account of the accuracy conditions, the only option is, according to Schellenberg, to reject (3) and allow for particulars in the environment to partly constitute the phenomenology of states of seeing.

There is something suspicious going on in this argument, however. I single out three problems; I call them, respectively, the problem of constituency, the problem of options, and the causal problem. I base my rejection of Schellenberg’s argument on the third problem: I argue that causal relations can ground perceptual particularity. But let us start with the first two problems.

2.2.1.2 Rescuing Particularity within Representationalism.

Let us consider the problem of constituency. Independently of other considerations favoring representationalism, let us assume that particulars may indeed be constituents of states of seeing. But what does that mean? Philosophers like Fish (2009), Campbell (2002), Brewer (2011) or Martin (2002) defend naïve realism, which should be understood as the view that we are directly perceptually acquainted with things in the world, without the mediation of representations. Such items are understood as constitutively shaping our perceptual experience. Little clarification is given of the relevant concept of “constitution,” but naïve realists (Fish 2009, chapter 2) clarify that constitution is understood in an ontological and not in a causal sense. When I see the copy of Ulysses on my desk, the object is not causing my perceptual state, but rather, it is a constituent of it, i.e. the state of seeing is literally composed also of the object. It is not clear, however, in what sense an external object may be a constituent of a mental state. Surely, everyone would agree that the object is somehow causing my state of seeing it—at least in genuine perceptual states. Yet, proponents of naïve realism explicitly exclude a causal interpretation. One possible reply, at this juncture, is to further clarify the notion of “constitution.” This move will be of little help, however, as there still is no clear understanding of the causation/constitution distinction (e.g. Bennett 2011). Another ploy could be to specify the role of the object within a broadly mechanistic account of states of seeing. Perhaps external items are themselves parts of a larger mechanism that produces a visual phenomenon. This, however, does not bring us much further, since it does not seem to capture what naïve realists have in mind, nor does it shed light on Schellenberg’s position.

The problem of options goes as follows. Schellenberg’s argument that austere representationalism cannot account for perceptual particularity is based on a restrictive selection of available representationalist theories. If we espouse the general content view, or existentially quantified view, perceptual content is specified by an existential quantifier for the object and its properties: ∃x(Rx). On this view, an unspecified or generic object x is said to have the visual property R (of course, more complex objects may be added by conjunctions of properties). In this case, there is no specification of the particular object the perceiver is seeing. Furthermore, the properties of the generic content are usually construed as abstract universals. Thus, for example, the previous formula can be read as “there is something red,” where red is usually understood as a universal. If content is generic, then, quite obviously, perceptual content cannot account for perceptual particularity. However, perceptual content need not be generic. Versions of object-involving Russellian content, for example, may specify the object perceived as being red: o(P) (e.g. Tye 2007). I do not commit to this view of perceptual content; I merely observe that, on this view, we can save the particularity of perceptual content whilst remaining within austere representationalism.
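The contrast between the two kinds of content can be displayed schematically; the rendering of the Russellian content as an ordered pair is my own illustrative choice, not notation taken from Tye:

− Generic (existentially quantified) content: ∃x(Rx), “there is something red,” which any suitable red item can satisfy.
− Object-involving Russellian content: ⟨o, R⟩ (the text’s o(P)), where the particular object o seen by the perceiver itself enters the accuracy conditions.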

There is another problem, however, that has a more serious implication for Schellenberg’s argument. The argument is constructed as a disjunction of three different options (a), (b), and (c). Since (a) and (b) are false, it follows that (c) is the right way to account for perceptual particularity. But we have seen that (c) does not represent a viable option, since we can hardly make sense of the environment-encompassing view. I set out to show that the causal relation can account for the particularity of perception. The critical problem here is the alleged “sameness” of genuine perceptual states and hallucinations.

The exclusion of the causal option is based on the fact that different causes may bring about the same mental state. Clearly, if the mental state is the same in situations C-C’’, causal relations cannot account for perceptual particularity. Schellenberg quotes Martin Davies as saying that if two mental states are genuinely indistinguishable for a perceiver, it follows that the two mental states have the same contents. I take “genuinely” to underline the fact that the indistinguishability does not depend on the subject’s contingent discriminatory capacities, but is some kind of in-principle indistinguishability. The idea is therefore the following:

1. Different causes can produce the same mental state.
2. Mental states are judged to be the same if they are introspectively indistinguishable.
3. But if two mental states are the same, irrespective of their different causes, they cannot account for perceptual particularity.
4. Hence, causal relations do not ground perceptual particularity.

What does it mean to say that two things are “the same”? One notion of sameness (briefly discussed in Ch. 2, §1) is numerical identity. On this view, to say that a is the same as b is just to say that a and b are one and the same entity. Now, clearly, this cannot be the right notion of “sameness.” It is plain enough that one can have a mental state m1 at a time t1, produced by a distal cause c1, and another mental state m2 at a time t2, produced by a distal cause c2. Of course, the mental states may be indistinguishable, but that does not mean that they are numerically identical. Consider yet another example. A state of seeing x can be a genuine perceptual state, in which case it will be brought about by the particular entity x in the environment, and be phenomenologically indiscernible from a hallucination of x. Still, no one would claim that the state of seeing and the hallucination are numerically identical.

Perhaps what is meant by sameness is rather: sameness of properties. The degree of sameness will then depend on the number of properties shared by the two items—if we espouse some form of universalism—or on the number and degree of similarity of their tropes. Understood in this sense, we might perhaps better make sense both of Davies’ proposition and of the causal processes bringing about the same effects. In this case we take the mental states to be of the same kind, where a kind is identified in virtue of a specific number of properties that a mental state must possess in order to be a member of that kind. I will gloss over the ontological status of such a “kind,” whether it is a universal or not (for a discussion, cfr. Maurin 2002, pp. 59-116). Under this understanding, a cause c1 at a time t1 brings about a mental state m1 that is of the same kind as a mental state m2 brought about by a cause c2 at a time t2. The two mental states are themselves particulars that belong to the same kind; they are determinates of the same determinable, regardless of their indistinguishability. Under this perspective, it becomes apparent what seems fallacious in Davies’ claim: from an epistemic failure—i.e. our inability to tell the difference between two mental states—the numerical identity of the mental states does not follow. But if the two mental states are distinct determinates of the same kind, then it seems that the two causes, c1 and c2, can bring about two distinct token mental states. And if this is the case, then causal relations can bring about distinct mental states; and if the mental states are distinct, they can account for the particularity of perception, even though they are indistinguishable.

Behind this conclusion lies the obvious implication that distinct mental states are distinct tokens. They may perhaps share some properties, or possess similar tropes, but they remain numerically different. What remains to be seen is what these properties are. If we reject the generic content view as incapable of accounting for the particularity of perception, we can achieve an elegant and simple solution to the problem of perceptual particularity. As it will turn out, the generic content view is compatible only with a Platonic account of visual properties.

2.2.2 Universals, Tropes, and the Particularity of Perception.

According to Nanay (2012), representationalism might account for the particularity of perception if we construe visual properties as tropes. If, on the contrary, properties are understood as universals, then content cannot account for the particularity of perception. I agree with Nanay that tropes can indeed account for the particularity of perception, but it is not true that universals cannot do the same. Now, it is quite obvious that tropes can account for perceptual particularity, since tropes themselves are particulars. Mental states themselves may be trope bundles, or facts cum tropes; in both cases, they will be able to account for the particularity of perception.

Let us now consider universals. If universals are construed as Platonic entities that have no location or space-time coordinates (cfr. Grossmann 1992), perceptual content will not be able to retain particularity. For a subject, to see something red and triangular would mean to see whatever may participate in these properties. If content does not specify the object, unlike in the case of object-involving Russellian content, it follows that content cannot account for perceptual particularity. However, if we opt for immanent universals, although universals themselves are not particulars, they must be instantiated by a particular in order to exist. On this view, as we have seen (§1.3.1), universals are particularized in what Armstrong calls the “victory of particularity” (1978, pp. 115-116; 1997, pp. 126ff). Since properties are always properties of facts’ particulars, and facts are of necessity particulars (unrepeatable entities), it follows that immanent universalism can account for the particularity of perception.

Both immanent universals and tropes can account for perceptual particularity, while remaining within the boundaries of austere representationalism. Two different states of seeing may be introspectively indistinguishable, and yet numerically different. But since they are, nonetheless, composed of particular elements, they can account for relational particularity, i.e. the claim that we only relate to particular, or unique, entities in the environment.

2.2.3 Interim Conclusion.

Undermining Schellenberg’s argument for relationism as a means to save the particularity of perception enables us to develop an account of perceptual particularity within the purview of austere representationalism. We can defend representationalism, and defend the particularity of visual objects as representational entities, if we espouse either a form of trope representationalism or a representationalism couched in terms of Aristotelian universals. It also follows that we do not have to accept a version of the environment-encompassing view, and that, therefore, there is no reason to accept Schellenberg’s relational Fregean content view.

In the next Section, I put aside the Aristotelian option for a while and further articulate a trope representationalist view. I will show that trope representationalism has an advantage over Aristotelian universals: it avoids the problem of imperfect community (§3.2.2). Before doing that, however, I must first articulate a trope view that avoids the problem of trope similarity.

3. Tagging Things in the World.

Tropes apparently provide the magic key that unlocks our problems and fit perfectly into the roles they are meant to play. At this juncture, a trope bundle theory may appear as just too good to be true. Indeed, trope theory itself faces a number of relevant challenges. More specifically, the single most relevant issue for trope theorists is the problem of trope similarity (cfr. §1.3.2). It has been argued (cfr. §3.1) that trope theory is unable to solve the problem of similarity without postulating universals. It is clear, then, that my proposed account of visual objects cannot survive if trope theory is shown to be incoherent.

In this Section, I first (§3.1) present three distinct versions of trope theory: standard trope theory, resemblance class trope theory, and natural class trope theory. Each theory is meant to solve the problem of trope similarity, but each faces some relevant challenges. I will then (§3.2) propose a solution, building on Ehring’s natural class trope nominalism. Following resemblance class trope nominalism and natural class trope nominalism, I will argue that a trope has its appearance quality in virtue of its membership in a class. What explains a trope’s membership in a given class, I argue, is its being processed by an underlying intentional mechanism (cfr. Ch. 5).

3.1 Three Theories of Tropes.

Following Ehring (2011, pp. 175-202; also Allen 2016, pp. 42-51), we can distinguish three trope theories in metaphysics. These are: standard trope theory, resemblance class trope nominalism, and natural class trope nominalism. I will briefly discuss them in this order, and briefly mention their most pressing problems. My purpose is not to chart exhaustively the philosophical problem of trope theory, but merely to use the exposition as a foil to articulate my solution to the problem of trope similarity.

3.1.1 Standard Trope Theory

By "standard trope theory" I mean the classical formulation of trope theory, as articulated by Williams (1953) and Campbell (1990). On this view, what grounds a trope's resemblance to other tropes cannot be reduced to anything else; rather, the tropes' resemblance is taken as primitive:

«A red trope is red not in virtue of its resemblance to other tropes or in virtue of its membership in various classes of tropes. Resemblance among Campbellian tropes is not further reducible [...]» (Ehring 2011: 176).

A problem with standard trope theory is that it is far from clear whether tropes are really distinct from facts. To see why, recall that tropes are simple (cfr. §1.2). Yet, although they are simple, tropes are both particular and qualitative.15 But then it seems that two red tropes, being particular and qualitative, are similar precisely because they are both red, and individuated because they are particulars. Framing the problem in terms of truthmakers, it seems that the following two sentences:

− t exactly resembles t'.
− t is numerically distinct from t'.

are grounded in different aspects of the nature of a trope, namely its quality and its particularity. But then it seems that tropes cannot be simple and, at the same time, ground both similarity and particularity. Tropes must then be complex entities, composed of a particular and a qualitative character. But if tropes are complex entities composed of a particular and a quality, they are formally indistinguishable from states of affairs (cfr. Ch. 4, §1.2). Hence, trope theory is untenable.16

15 Garcia (2015) distinguishes between module tropes and modifier tropes. Module tropes are tropes understood as substances, whereas modifier tropes are tropes understood in their role as properties. He then highlights the tension between these two roles, casting doubt on the tenability of trope theory on the ground that tropes should be simple whilst being able to play both. I do not think that Garcia's criticism is particularly compelling. One way to sidestep it is to note that we are not forced to think of tropes as substances. On some understandings of tropes (or of trope-like entities, such as Denkel's (1996) properties), tropes are not capable of independent existence, but can only exist in bundles. The bundle or object has some kind of ontological priority over its analytic parts. On this view, tropes are dependent properties (cfr. also Mulligan et al. 1984; Mulligan 1999). 16 The reader should bear in mind that this sketchy exposition has the sole purpose of showing the complexity of the problem of trope similarity and of providing a basis in which to couch my

3.1.2 Resemblance Class Trope Nominalism.

Resemblance class trope nominalism avoids the threat of complexity facing the standard theory by grounding a trope's similarity to other tropes in a primitive resemblance class. On this view, trope resemblance is not further reducible: two tropes just resemble each other, and out of resemblances among tropes we can construct a resemblance class. The advantage of this view is that, by grounding a trope's resemblances in resemblance classes rather than in the trope itself, one avoids the threat of complexity.

Resemblance class trope nominalism is the least popular of the three trope theories, mainly because of two apparently fatal issues. The first is the threat of Russell's regress: whenever resemblance is taken as primitive, it turns into a universal:

«If we wish to avoid the universals whiteness and triangularity, we shall choose some particular patch of white or some particular triangle, and say that anything is white or a triangle if it has the right sort of resemblance to our chosen particular. But then the resemblance required will have to be a universal. Since there are many white things, the resemblance must hold between many pairs of particular white things; and this is the characteristic of a universal» (Russell 1912/1997: 48).

Resemblance between tropes cannot thus be taken as primitive, on pain of a regress that leads to the introduction of universals. But if resemblance is a universal, then the advantage of trope theory over universalism dissolves.

The second threat for resemblance class trope nominalism is that of unique tropes. Suppose that a trope t is unique, i.e. it does not resemble any other trope because it is the only exemplar in the world. According to resemblance class trope nominalism, it would follow that this trope has no nature, since it resembles nothing else. This is counterintuitive. One way to avoid the problem is to argue that inexact resemblance to other tropes would be enough to ground the trope's nature. However, it is unclear how this is supposed to work, since a trope may bear no similarity to any other trope at all. The other way to solve the problem is to embrace modal realism: although t resembles no other trope in this world, it may be similar to its counterparts in other possible worlds. This may solve the problem of resemblance, but it does so at the high price of accepting modal realism.

solution. Trope theorists have indeed developed counter-arguments against the charge presented above. One option to resist the truthmaker argument against standard tropes is to reject the idea that the logical independence of the sentences implies the ontological independence of their truthmakers (cfr. Mulligan et al. 1984).

3.1.3 Natural Class Trope Nominalism.

Just like resemblance class trope nominalism, natural class trope nominalism claims that trope similarity is explained by some resemblance relation; in contrast with the resemblance class view, however, it grounds the similarity relations in natural classes. Again, a trope is not identical with its nature; rather, the nature of a trope is grounded in natural classes of tropes. In other words, natural class trope nominalists contend that a trope is similar to another trope in virtue of the two being co-members of the same natural class. The idea is simple. For this shade of red1 to be similar to that shade of red2 just is for both to be members of the same natural class R of red tropes. Of course, it is also possible to vary the degree of abstractness of the natural class, such that, for example, red2 and yellow1 are both members of the natural class C, the class of color tropes, although red2 also belongs to the natural class R of red tropes. A trope can thus belong to more than one natural class. For instance, red2 belongs both to C and to R, which means that its nature will be C ∧ R. In this way, natural class trope nominalism makes room for the intuitive idea that similarity may come in degrees: since tropes may belong to more than one natural class, the degree of similarity between two tropes will be determined by the number of natural classes they share.
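
The counting rule just described can be made concrete in a minimal, purely illustrative sketch (in Python). Nothing here is part of Ehring's account; the trope labels, the class names, and the representation of natural classes as sets are placeholders of my own, used only to display the structure of the proposal.

# Illustrative sketch only: tropes as labels, natural classes as sets of labels,
# and degree of similarity counted as the number of shared natural classes.

natural_classes = {
    "R": {"red1", "red2"},                # the natural class of red tropes
    "C": {"red1", "red2", "yellow1"},     # the natural class of color tropes
}

def degree_of_similarity(t1, t2):
    """Count the natural classes of which both tropes are members."""
    return sum(1 for members in natural_classes.values()
               if t1 in members and t2 in members)

print(degree_of_similarity("red1", "red2"))     # 2: they share both R and C
print(degree_of_similarity("red2", "yellow1"))  # 1: they share only C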

Natural class trope nominalism solves the problem that afflicts resemblance class nominalism: Russell's regress. One potential worry for natural class trope nominalism is that classes themselves may be construed as universals. If this were the case, then members of a given natural class N would themselves be instances of a universal. Again, this would make tropes collapse onto facts: members t1, t2…tn of N would just be instances of the universal N. The objection can be blocked, however, by espousing some form of type-two nominalism (cfr. §1.2), i.e. by denying that abstract objects exist. Of course, a proponent of tropes need not deny the existence of any sort of abstract entity; she may simply deny the existence of sets or classes as mind-independent entities that exist in some kind of Platonic realm. Ehring (2011) opts for this solution, and claims that natural classes of tropes are themselves particulars, and thus pose no special threat.

However, natural class trope nominalism also faces some relevant challenges, the most important being that it is unclear in virtue of what a trope belongs to a given natural class. Why a given trope is a member of, say, the natural class of red tropes has, in natural class trope theory, no explanation. Ehring (2011) takes the membership of a trope in a natural class to be an unexplainable, fundamental assumption. This, however, makes trope resemblance mysterious, for it seems that a trope's similarity to other tropes is grounded in an unexplainable assumption rather than in its quality.


3.2 Tropes and Perceptual Tagging.

Building on natural class trope nominalism, I contend that tropes belong to their natural classes in virtue of the underlying intentional mechanisms that generate them (Ch. 5). Intentional mechanisms, in other words, have the function of discriminating between different items, grouping them by similarity, in order to inform the subject of the diversity and richness of items that populate the environment. The perceptual capacities of a subject are discriminatory capacities.

3.2.1 The Solution

As we have seen, the main problem of standard trope theory is that it collapses onto facts. In order to dodge this problem, we must somehow distinguish a trope's particularity from its nature, while at the same time preserving the intuition that tropes are simple entities. The two proposed solutions, resemblance class trope nominalism and natural class trope nominalism, seem to fail for different reasons. In order to avoid resemblance class trope nominalism's undesirable detour through modal realism, one remaining option is to find a criterion for why tropes fall within a given natural class. I will now outline a solution limited to visual properties.

Suppose that a subject S sees an object O in situation C. In situation C, the object is a white cube that appears white to S. Given the account of visual objects sketched out above (§2), O is a trope-bundle. Furthermore, O is placed within a cubicle painted in a variety of black so dark that it does not reflect any variation of light and does not offer any depth cue (such a variety of black, recently developed, is the so-called Vantablack). In this situation, S sees only blackness, with a white cube at the center. Let us now consider another situation C'. Again, S sees O, and O is placed in a black cubicle, but this time O is bathed in red light. Because of the red light, O does not look white to S, but of some shade of red. S does not know that O is the same object she saw before, and may therefore mistakenly think that she is seeing a different object O'. What gives S the impression of seeing a different object is, of course, the fact that O, in C', appears of some reddish color. The cubicle, absorbing light and therefore providing no contextual cue, does not let S see that the situation has changed, because it is so dark that S cannot detect the light falling on other surfaces. The two cases reflect the familiar distinction between two different mental states: genuine perception and illusion.

Let us consider situation C, in which the subject sees O. If O has a white trope w1, then, according to natural class trope nominalism, w1 is similar to other white tropes in virtue of its being a member of the class W of white tropes. What justifies its inclusion in the class of white tropes? On natural class trope nominalism, this is entirely decided by fiat. One problem with this solution is the issue already raised earlier (§3.1.3), i.e. that it is unsatisfactory to consider tropes as members of a class by fiat. Another worry, however, is the following. Suppose that a class is itself

a property. If this were the case, then tropes would indeed be much like instances of universals. Natural class trope theory would then be just a variant of the fact view, with the only difference that properties (universals) are natural classes of instances. There is, however, a problem: in order to explain why a given entity appears in some way, we posit a higher-order entity of the very same nature as the one we want to explain. It may be pointed out that the notion of "explanation" in metaphysics is fundamentally different from the notion of scientific explanation. Yet, as I showed earlier (Ch. 5), the primary goal of the search for the neural correlates of conscious content is to explain in virtue of what the brain generates the content of our states of seeing. If what we want to explain is the similarity and appearance of visual properties, it seems incorrect to look for a metaphysical explanation when a scientific one is available. Certainly, in many areas of metaphysics an explanation must be sought within disciplinary boundaries. But it seems a category mistake to explain the appearance and similarities of visual properties by means of metaphysical tools. My solution consists in replacing a metaphysical explanation of the tropes' qualities with the blueprint of a scientific explanation. Now, since explanations in vision science are supposed to shed light on why things have the appearance they have without making redundant assumptions (Cummins 1983, p. 75), it is unclear why, in order to explain a trope's quality, one would have to refer to a higher-order class of the very same nature, or assume that trope resemblance is primitive. Equally unclear is the naïve realist move that explains a trope's appearance via phenomenal particularism (e.g. Gomes & French 2016): something looks white in virtue of having the intrinsic property of being white. The problem I have been discussing is therefore the following: how can we satisfactorily explain property appearances if we include the appearance in the explanans?

At least relative to vision science, there is a perfectly intelligible why-question we might ask with regard to the properties that constitute visual objects: in virtue of what do some particulars have the appearance they have? Instead of positing a higher-order entity, a class, that collects all entities having a certain quality, I propose to explain the qualitative character of trope w1 by making reference to the underlying intentional mechanisms that generate it. In short, the proposal is that a particular visual trope t belongs to a natural class C in virtue of its being processed in a certain way by a particular kind of intentional mechanism (Ch. 5). I call the process whereby a particular receives a particular quality "tagging."17
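
Purely as an illustration of the contrast with membership by fiat, the proposal can be sketched as follows; the toy "mechanism," its threshold, and the class labels are invented placeholders of mine, not claims about any actual neural process. The point is only structural: the class a trope token falls into is returned by the processing mechanism, rather than stipulated.

# Illustrative sketch only: class membership ("tagging") is fixed by what the
# processing mechanism does with its input, not stipulated by fiat.
# The toy mechanism, its threshold, and the labels are invented placeholders.

def color_mechanism(reflected_light):
    """A stand-in for an intentional mechanism that tags a particular with a
    quality class on the basis of how it processes the incoming signal."""
    if reflected_light["long"] > 2 * reflected_light["short"]:
        return "R"   # tagged as belonging to the class of red tropes
    return "W"       # tagged as belonging to the class of white tropes

# The same worldly particular, processed under neutral vs. red illumination,
# is tagged differently (cfr. situations C and C' below).
print(color_mechanism({"long": 1.0, "mid": 1.0, "short": 1.0}))  # W
print(color_mechanism({"long": 1.0, "mid": 0.3, "short": 0.2}))  # R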

Consider now situation C': what happens in this case? The situation exemplifies the well-known case of color variation. As we know, colors are highly context-idiosyncratic. Changes in lighting conditions and in the subject's internal state can underlie different color perceptions. In C', the situation is simple enough, as a red light changes the way we perceive w1. Quite obviously, we do not want to renounce the claim that O has w1 just because we now see it as qualitatively different. And equally obviously, O cannot have both w1 and another color trope, for a thing cannot have two mutually exclusive properties at the same time, i.e. O cannot be both all white and all red.18 On my solution, there is no problem in principle in saying that the very same trope can belong to two distinct natural classes at different times, and without any internal change. In C', S sees that very same trope w1, but instead of looking white, it now looks of some shade of red. The reason is not that there has been a change of tropes. If tropes are conceived according to the standard account, it would then follow that the object has acquired a different trope, say red1, since tropes are individuated, among other factors, by their qualitative character (e.g. Ehring 2011, pp. 76-97; Schaffer 2001). It does, however, seem plainly absurd to say that one trope has been replaced by another without any causal change in O. One potential reply to my rendition of the case is to say that in C' S does not really see w1; rather, w1 is covered by the red light. Hence, what S really sees in C' is not w1, but red1. The reply may seem to work in this case; however, consider a variation of C', call it C'+. In this case, S is subject to the phenomenon of color constancy (Foster 2011). Although the physical parameters of the lighting conditions have been changed, S still sees O as red. Instead of positing some red trope in the environment, as part of O or of the light source, one can explain the red appearance of O by means of the internal workings of the intentional mechanisms responsible for color.

17 The terminology is drawn from Pylyshyn (2007, p. 23). Talk of tagging in vision science is often employed in relation to visual search tasks. Yantis & Johnson (1990), for example, discuss a visual search task in which the subject is asked to pick out some specified letters in a multiletter display. According to the experimenters, the specified letters are rapidly individuated because they have been signaled by a «priority tagging» process. My use of the term is, of course, different.

One worry is that the example discussed seems tailored to an account of color properties. Spatial properties, it may be thought, are not subjective in the way color properties are (but see for example Masrour 2015, 2017). Consider the case of emergent properties in vision (Pomerantz & Cragin 2015). A first example of an emergent property is, arguably, shape perception. There is some evidence that shape cannot be a basic visual feature (e.g. Wolfe 1998). Instead, shape would emerge as the visual system's reconstruction based on edge detection. Another example of emergent properties in vision, one that alters the phenomenological shape of the figure, is depicted in Fig. 19. Enns (1990) showed that different combinations of the same elements may yield depth perception given a particular configurational arrangement. For example, by inscribing a Y shape inside a flat hexagon we get the visual impression of a cube. The acquired visual property of depth does not, however, emerge in cases where the configurational arrangement is violated or distorted. For example, in Fig. 19 the same components do not yield the same depth property (right-hand side, first figure).

18 Different solutions have been proposed in the philosophical literature to cope with the problem of illusions. My solution is merely meant to provide an alternative basis on which to place different philosophical theories of illusions. In other words, many fine-grained philosophical accounts of illusions can be conceptualized as refined versions of my basic proposal.

Fig. 19: Configurations and emergent properties. (from Pomerantz & Cragin 2015, p. 94).

On the account proposed here, what happens in these cases is that the descriptive visual system tags the items in different ways. In the example depicted in Fig. 19, the elements are the same, but their different configurations elicit different tagging processes, whereby in one case the descriptive visual system tags the object as a cube, whereas in the other cases it tags it as a complex figure without depth.

My account is, of course, not entirely new, and bears some similarities to other work in the philosophy of perception. I briefly discuss two such accounts and highlight in what way my proposal is original. The first account is due to Dennett (1991); the second is, broadly speaking, work on Fregean contents, of which I will consider Schellenberg's (2010) version. I will thereby show the advantage of my view, and why my trope representationalist account should be preferred over an account based on immanent universals.

3.2.2 Advantages of Tagged Tropes.

Let us start with Dennett. In Consciousness Explained, Dennett discussed, as we have seen (cfr. Ch. 1, §1), the problem of filling-in. A central thrust of his argument is the distinction between content and vehicle. Content reveals little about vehicles, as different representational formats may be exploited by the visual system to convey a specific content. The example discussed by Dennett is that of the parrot picture reproduced below (Dennett 1991, pp. 347-348).


Fig. 20. Picture (a) represents merely the shape of the parrot. The descriptive visual system may color the picture by tagging surfaces (b), or by providing a bit-map tagged representation of the picture (c). (Adapted from Dennett 1991, pp. 347-348).


Now, picture (a) contains information only about shapes, not about colors. The descriptive visual system may then exploit different ways of coloring the picture. One of these, illustrated in Fig. 20 (b), is to tag surfaces or regions of the picture to which a specific color sample applies. Another, illustrated in Fig. 20 (c), works analogously to a bit-map, i.e. not an image but «an array of values, a sort of recipe for forming an image» (Dennett 1991, p. 349).
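
The contrast between the two vehicles can be put in a toy sketch (illustrative only; the region names and color values are invented placeholders): the same colored content may be carried either by a short list of region-to-color tags or by an explicit array of values.

# Toy illustration of two different vehicles for the same colored content.
# Region names and color values are invented placeholders.

# Format (b): tagged regions ("apply this color sample to this whole surface")
tagged_regions = {"body": "green", "beak": "yellow", "eye": "black"}

# Format (c): a bit-map, an explicit array of values, one per location
bitmap = [
    ["green", "green", "yellow"],
    ["green", "black", "yellow"],
    ["green", "green", "green"],
]

# Both vehicles answer the same content-level question about the picture:
print(tagged_regions["beak"])   # yellow
print(bitmap[0][2])             # yellow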

In many ways, my suggestion is similar to Dennett's. A clear difference between our accounts, however, lies in the purposes they are devised for. For Dennett, the example is mainly developed to show how filling-in may work, whereas my account is primarily conceived to solve the problem of visual trope resemblance. Different instances of the same color, on my account, are but different particulars (tropes) whose quality is processed by the same (or similar) mechanisms on different occasions. It is this that explains visual trope similarity.

Another body of work that bears some resemblance to the present account is that of Fregean theorists about content (e.g. Chalmers 2010; Schellenberg 2010). The core tenet of Fregean theories of content is that objects and properties are not presented "nakedly." Instead, seeing the world always involves a mode of presentation of objects and properties. One key advantage of Fregean content accounts is that they can easily cope with the problem of perceptual variation: the same things may appear differently in different contexts. However, as Nanay (2012) points out, Fregean content offers a complex and implausible account of perceptual experience. It forces us to introduce not only objects and properties, but also their modes of appearance, whose nature is still in need of explanation. Trope representationalism, by contrast, represents a simpler and more elegant solution to the problem of the nature of perceptual content, one that does not require accepting a structured propositional account. A trope representationalist account is broadly compatible with a scenario content view (Peacocke 1992), but not with a structured propositional account.19

Finally, let us consider an advantage of my trope representationalism over an immanent universals account of visual objects. As we have seen (§2), neither the Configuration Constraint nor the Particularity Constraint uniquely determines the ontology of visual objects. Two families of views are able to account for the ontology of visual objects: either they are composed of immanent universals instantiated by some facts, or they are composed of tropes. In virtue of their particularity, tropes can easily account for inexact resemblance, or imperfect community. Consider the variety of colors and forms. Two samples of blue will probably be of similar, but different, shades; two circles will have almost exactly similar shapes, yet with some minimal difference. Trope theory has no problem explaining this variety of appearances, since tropes are particularized properties. Furthermore, if we adopt my account, the nature of each sample is given by the particular set of mechanisms that process it.

19 Note that perceptual content may still be propositional, for example if we articulate propositionality in terms of possible-worlds content.

Immanent universals are not well suited to explain inexact resemblance. The problem is known as the imperfect community problem (Allen 2016, pp. 24-27). Two instances of the universal blue will be wholly present in two distinct particulars, but since they are the same entity, the same universal, they cannot qualitatively differ. In other words, a blue color sample cannot be of a given shade of blue without entailing that another blue color sample will be of the very same shade, since they are both instantiations of the same universal. (This problem does not emerge for Platonic universals, since particulars may participate to different degrees in the transcendent universals.) One way for defenders of immanent universals to solve the problem of the imperfect community is to postulate that each shade of color or form is an instance of a determinate universal of a determinable kind. In the case of different shades of blue, each will be an instantiation of a very specific shade-of-blue universal. Hence, the two shades of blue would, after all, instantiate different universals. Yet this not only unnecessarily multiplies universals; it also makes it far less clear why the two shades of blue are similar, since they are not instances of the same universal. A possible solution would be to specify that the two determinate universals are determinates of the same higher-order determinable, for example being blue. Even assuming that this is the right solution, it then seems that this way of explaining the quality of visual properties still postulates a higher-level entity of the same quality. What explains why one particular is blue is blueness, and what explains why another particular is circular is circularity. I have argued that this move is suspicious, as it does not explain why perceivers come to see things the way they do, but circularly explains appearances in virtue of irreducible qualities.


8

TOWARDS PSYCHONEURAL ISOMORPHISM?

The work done in the previous Chapters will now serve to assess the central question of this work: whether the concept of psychoneural isomorphism can be useful in the study of the neural correlates of visual objects. In Part I, I outlined the problem and a research strategy. In Part II, I clarified the nature of the Phenomenological Domain. In Part III, I clarified the nature of the Neural Domain. In Ch. 7, I developed a basic account of the ontology of visual objects. In this Chapter, I finally tackle the issue of psychoneural isomorphism. In doing so, I will presuppose most of the work developed in the earlier Chapters.

I will proceed as follows. In the first Section (§1), I set the stage for the subsequent Sections by briefly reviewing some of the results achieved so far, in particular about isomorphism and visual accuracy phenomena. As I will argue, if we want to assess PI we must provide a formal description of both mechanisms and visual objects. This means that we should work on models of mechanisms and of visual objects. In the second Section (§2), I turn to models. I first clarify the nature of scientific models, and then turn to models of visual objects. I make the claim that there exist multiple models of visual objects, including philosophical models, and that they open up different epistemic spaces that can be explored to raise questions about the nature of the target. In the third Section (§3), I turn to the problem of mapping formal models of visual objects onto the underlying Neural Domain. I start by re-examining the Matching Content Doctrine and Noë & Thompson's criticism of it. I then turn again to Petitot's morphodynamical model of visual perception, since it offers an interesting case of psychoneural isomorphism. This model represents a case of morphological explanation that is supposed to show the emergence of intentional content through the activity of neural populations. I criticize Petitot's conclusion on several grounds, and show the limited heuristic use of psychoneural isomorphism. I then show how to reconcile morphological and mechanistic explanations, which in turn will pave the way to appreciating the role of psychoneural isomorphism.

1. Setting the Stage

In this Section, I set the stage for the remainder of this Chapter. I begin (§1.1) with a brief review of the concept of isomorphism developed in Ch. 2, §1. Then (§1.2), I turn to visual objects as visual accuracy phenomena. In the next paragraph (§1.3), I briefly return to mechanisms and the Neural Domain. Finally (§1.4), I make two important claims: first, that our research targets exclusively types of phenomena and mechanisms, not tokens; second, and relatedly, that the only way to assay whether a psychoneural isomorphism is heuristically helpful or not is to shift from phenomena and mechanisms to models thereof. I then turn my attention to models in the next Section (§2).

1.1 Varieties of Isomorphism

Following Lehar (2003) (cfr. Ch. 2, §1.1), I distinguished between two forms of isomorphism: structural and functional. Structural isomorphism is «literal isomorphism in the physical structure» (Lehar 2003, p. 383). The example given was that of two dice. Suppose we have two regular dice with six faces. Under some level of description, we may regard them as structurally isomorphic. This can be particularly helpful when one of the two objects is missing or inaccessible to observation. Let us call this "SI" (for Structural Isomorphism). SI does not require that all of the Domains' or objects' properties be identical. Remember (Ch. 2, §1.1) that the two dice may still differ in non-structural properties; for example, one of the two dice may be blue, whilst the other is red. Under the label "structural isomorphism" I also include topological isomorphism (also called homeomorphism), a continuous function between topological spaces, i.e. a mapping that preserves the topological properties of a given space. On this definition, an object may preserve the same spatial, i.e. topological, properties under continuous deformations, like stretching. A topological isomorphism between two objects, or from an object onto itself under a continuous transformation, is thus a function between topological spaces that completely preserves the topological properties of the two spaces, or objects.
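
To make SI concrete in a minimal way, one can treat each die as a small relational structure (its six faces plus the "opposite face" relation) and treat SI as a bijection between the face sets that preserves that relation. The following sketch is purely illustrative and is not part of Lehar's exposition.

# Illustrative sketch: structural isomorphism as a relation-preserving bijection
# between two finite structures (two standard dice; opposite faces sum to 7).

faces = [1, 2, 3, 4, 5, 6]
opposite = {(a, b) for a in faces for b in faces if a + b == 7}

def is_isomorphism(mapping, rel_a, rel_b):
    """A bijection is an isomorphism iff it maps related pairs to related pairs
    and unrelated pairs to unrelated pairs."""
    return all(((mapping[a], mapping[b]) in rel_b) == ((a, b) in rel_a)
               for a in mapping for b in mapping)

# The identity mapping between two standard dice preserves the relation:
print(is_isomorphism({f: f for f in faces}, opposite, opposite))  # True

# Non-structural properties (one die blue, the other red) play no role in SI.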

The other form of isomorphism is functional. In this case, the two Domains behave in a similar way, as if the functional isomorph of a Domain were structurally isomorphic to it. Again, take the case of the two dice with six faces. Suppose that there is only one physical die, whereas the other is just a computer simulation. What we want from the computer simulation is a random integer between 1 and 6, such that every possible result r within the range is as likely as the result r* from the physical die. Different algorithms can easily simulate the behavior of the physical die. What matters in this case (call it "FI", for Functional Isomorphism) is the structural "output" of the two Domains, rather than the spatial properties of the objects themselves.
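
FI can be illustrated just as minimally: what matters is that the simulated die reproduce the input-output profile of the physical die, i.e. that each outcome from 1 to 6 be (roughly) equally likely, however the outcome is generated internally. The sketch below is illustrative only.

# Illustrative sketch of FI: a simulated die is functionally isomorphic to a
# fair physical die if every face comes up with (roughly) equal frequency,
# regardless of the algorithm that produces the outcome.

import random
from collections import Counter

def simulated_die():
    """One of many possible algorithms with the same outcome profile."""
    return random.randint(1, 6)

rolls = Counter(simulated_die() for _ in range(60_000))
print(sorted(rolls.items()))  # each face should occur roughly 10,000 times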

The concept of psychoneural isomorphism does not uniquely define the kind of isomorphism that should hold between visual objects and mechanisms; hence, both options are, at least in principle, viable.

1.2 Visual Accuracy Phenomena

We have seen that the Phenomenological Domain is composed of visual objects. Visual objects have a configuration (Ch. 7, §2.1), i.e. a mereotopological structure. Furthermore, as we have seen (Ch. 4), these objects are completely analyzed by properties. As I have stressed in several passages, this does not mean that objects are exclusively composed of properties. Other obvious candidates as constituents of visual objects are boundaries and relations. But in both cases we are dealing with ontologically dependent entities, entities that do not exist in the absence of more primitive elements such as properties or objects.

In Ch. 5, I characterized visual objects as visual accuracy phenomena. The set of all visual accuracy phenomena is broader than the set of all visual objects. Visual accuracy phenomena might include, among other things, phenomena that cannot be clustered within a single visual object, for example the spatial arrangement of objects within the visual field, complex scenes composed of many distinct entities, and large fields, such as the sky or the horizon. However, zooming in on visual objects, I will consider, for simplicity's sake, an object described under the following conditions: first, an object which is under the attentional focus, ideally one that falls precisely on the perceiver's fovea; second, an object seen under optimal visual conditions, without interference from any external or internal confounding factors; third, I only examine the problem from a synchronic perspective, i.e. I do not examine the problem of isomorphism from a dynamical standpoint. (For all these specifications, cfr. Ch. 3.)

1.3 Intentional Mechanisms

Let us now turn our attention to the Neural Domain. As I have shown in Ch. 5, scientific research on content-NCC is inherently mechanistic. Neuroscientists’ aim is to uncover the mechanisms that constitute the content of states of seeing. I have sorted out different kinds of mechanisms: intentional mechanisms responsible for fixing content, selection mechanisms that pick out a particular (set of) content(s) to make them conscious, and the “consciousness maker” that constitutes the organism’s consciousness. So, onto what should we map visual objects?

My answer is that, if a psychoneural isomorphism holds, it should be between a visual object and the underlying intentional mechanisms. Arguably, there is some difference between conscious and unconscious contents, not only at a merely theoretical level, but also at a more specific, neural level. But as I specified in Ch. 3, §3, the role of consciousness in this work is that of determining the content that is supposed to stand in isomorphic relation with the underlying mechanisms. So, the content that will later fall under scrutiny is selected content. Whether there is any substantial difference between selected content—for example, due to the role of attention and binding—and unconscious content is not a question that can be pursued here.


1.4 The Broad Picture

Building on the foregoing considerations, we can now return to Fig. 6 (Ch. 2, §1.2) and fill the Domains with the appropriate kinds of entities. We obtain the following Fig. 21:

Fig. 21: The relation between the two Domains: Ψ, the Phenomenological Domain, and ϕ, the Neural Domain. Both are relational structures (cfr. Ch. 2, §1.1). In the Phenomenological Domain, a visual object is composed as a constellation of 5 different elements in a given configuration. In the Neural Domain, the mechanisms that constitute the visual object are entities {X1, X2, …, X5} and activities {φ1, φ2, …, φ5} with a given organization. The double-headed arrow between the Domains marked with I is the supposed isomorphism between the two Domains. (Adapted from Craver 2007, p. 7).

At the top of Fig. 21 we have the explanandum: a visual object. At the bottom, we have a set of mechanisms whose working would explain the visual object. Our key question is: is there an isomorphism between the two Domains? Four things must be specified before we proceed.

First, as we have seen (Ch. 2, §1), an isomorphism is a mathematical function. In order to show that there is such a bijective function, we must provide a mathematical, or anyway formal, description of the two Domains.1 This means that we should move from talk about phenomena and mechanisms to models of phenomena and of mechanisms. The relation between model and target (be it a phenomenon or a mechanism) is not one-to-one, but rather many-to-one. In other words, a single phenomenon or mechanism may be the target of many distinct models (Hochstein 2016). This is not a problem for scientists, but actually provides a richer epistemic source to draw from. In the next Section (§2), I will articulate a version of model-pluralism for visual objects. Later (§3), I will try to connect the models of visual objects with models of the underlying neural activity.

1 It may be argued that, qua formal, a description is therefore mathematical. I am not sure that any logician would agree, however. Formal logic, i.e. much of contemporary logic, may be defined independently of mathematics, and one important issue in philosophical logic is precisely whether mathematics may be reduced to logic, or logic to mathematics. For example, is mereotopology reducible to mathematical topology? I remain agnostic on this issue, and prefer to talk about "formal" or "mathematical" descriptions.

Second, visual objects and mechanisms will be understood as types, not tokens. The reason for this choice is that the scientific goal is not to provide a singular explanation of how a particular subject S comes to see a visual object o at a given time t. Scientists seek to uncover the blueprint of the mechanisms responsible for conscious visual perception, i.e. states of seeing, across different subjects. This is further stressed by the modeling approach that I will elaborate in the next Sections. Although one can construct a model description of a specific instance of a phenomenon, depending on one's specific interests, such a model description specifies a model, which is a more general, or more abstract, account of the target.

Third, in compliance with what I specified earlier in Ch. 2 (§3), talk of psychoneural isomorphism does not reveal much from a metaphysical point of view. Indeed, and contra Petitot (2008; cfr. Ch. 1, §1.3), many distinct metaphysical options remain available even if there is an isomorphism or automorphism between the two Domains. Now that I have sketched out a mechanistic framework, one could say, for example, that content emerges from the mechanisms' activities, or that it is caused by them, or realized by them, etc. This largely depends on how we characterize the notion of mechanistic constitution. The issue is, unfortunately, controversial enough (e.g. Kaiser & Krickel 2017). It is usually assumed that mechanistic constitution is not identical with causation: the joint activities of a mechanism's constituents do not cause the phenomenon, but rather constitute it. At the same time, mechanists also stress that "constitution" should not be understood in terms of metaphysical constitution (e.g. Salmon 1984). What exactly mechanistic constitution amounts to is something of a riddle. I suggest interpreting it as a form of building relation (Bennett 2011). Building relations are themselves extremely difficult to define, and some researchers may consider them fundamental relations that are, in principle, non-definable (ibid.). In general, such relations are captured by phrases like "composed of," "emerges on," "constituted by," "realized by," etc. Building relations include, for example, constitution, composition, micro-determination, determination, emergence, and realization (to mention just a few). All are, at least in principle, compatible with mechanistic constitution.2

Fourth, since psychoneural isomorphism is supposed to hold between formal or mathematical descriptions of visual objects and mechanisms, the proper conceptual space in which to embed our issue is that of intertheoretic integration and strategies of mechanistic discovery. As I will show, the heuristic value of psychoneural isomorphism will be assessed within this conceptual space, to assay whether, starting from formal descriptions of visual objects, we can infer something about the neural underpinnings of content.

2 To my knowledge, there is still no systematic attempt to define mechanistic constitution in light of the metaphysical building relations. Such work is much needed, given the importance of mechanisms in the contemporary literature in philosophy of science. For my present purposes, however, we can gloss over the exact nature of mechanistic constitution. I will return to the issue in another work.

2. Modeling Visual Objects

In this Section, I elaborate on models of visual objects. This Section is preparatory for the next one, where I tackle the issue of neural models of intentional mechanisms and try to find an isomorphism between the two. I first (§2.1) explain what models are in general. Next (§2.2), I build on the idea of models of visual objects and articulate a form of model-pluralism. In vision science, models of visual objects are usually represented by means of trees that exhibit their relational structure. Philosophers, too, I contend, develop models of visual objects; I call these philosophical models "MoPs." Both scientific and philosophical models of visual objects are properly understood as having a phenomenological character, i.e. as models that do not provide scientific explanations. Finally (§2.3), I argue for a form of model pluralism and focus on the primacy of scientific models of visual objects for the present purposes.

2.1 What Are Models?

In this Sub-section, I will mainly rely on Weisberg’s (2013) influential analysis of scientific models (for other sources, cfr. also Frigg & Hartmann 2009; Gelfert 2016, 2017; Giere 1988; Godfrey-Smith 2006a, 2006b, 2009). A scientific model is an interpreted structure, a simulation that is developed to indirectly study some complex, real-world target. Both “interpreted” and “structure” require some clarification. I first spell out the notion of “structure” and then turn to their “interpreted” character.

According to Weisberg's (2013, pp. 24-31) taxonomy of scientific models, there are three types of structure, depending on whether the model is concrete, mathematical, or computational. Concrete models are physical or material constructions. Paradigmatic examples are wind tunnels and scale models of hydrodynamic systems (Godfrey-Smith 2009, p. 103; also Sterrett 2006). These models need not be artificial; sometimes natural systems, like model organisms, are used to infer something about the model's target(s) (e.g. Levy & Currie 2015; Winther 2006). Mathematical models are composed, quite obviously, of mathematical structures that purport to describe the structure or dynamic behavior of the target. An example of a mathematical model, often discussed by Weisberg (2007, 2013, pp. 10-13), is the Lotka-Volterra model of predator-prey fish populations (see below). Finally, computational models emphasize the procedural aspects of a given phenomenon. These models are composed of algorithms that describe how to carry out a particular procedure. As an example, Weisberg picks out Schelling's model of racial segregation (Weisberg 2013, pp. 13-14). In all three cases,

a model stands in some representational relation to the target object.3 It is precisely in virtue of this relation that scientists exploit models to infer something about the target system in the world. In this work, I gloss over the fictional character of models (e.g. Frigg 2010; Godfrey-Smith 2009).

Importantly, modeling is distinct from other kinds of theorizing. The hallmark of modeling is its indirectness, in contrast with direct representation. Researchers who engage in modeling a particular target(s) construct and directly study the models, rather than the targets themselves (Godfrey-Smith 2006b, pp. 730-732; Weisberg 2007). As Godfrey-Smith says:

One approach [direct representation] is to immediately try to identify and describe the actual system’s parts and their workings. A distinct approach [modeling] is to deliberately describe another system, a simpler hypothetical system, and try to understand that other system’s working first. (2006b, p. 734).

For example, Stanley Miller's (1953) experiment on abiogenesis aimed at corroborating the hypothesis, advanced by Oparin, Urey, and Bernal (ibid., p. 528), that the primitive atmosphere of the Earth, composed mostly of methane, ammonia, water, and hydrogen, could lead to the formation of basic organic compounds. In order to do so, Miller famously built a model (an «apparatus») that was meant to «duplicate» (ibid., p. 529) the primitive conditions of the Earth. In this apparatus, Miller let CH4, NH3, H2O, and H2 circulate past an electric discharge. The experiment, as we know, confirmed the hypothesis, with the identification of α- and β-alanine. This nicely shows how a modeling approach can be helpful, especially when the target system is complex and still poorly understood. In these cases, scientists construct model descriptions (the structures directly studied by scientists, such as equations, diagrams, scale models, etc.) that specify a particular model (e.g. Weisberg 2013, pp. 31ff; cfr. also Giere 1988, pp. 82ff), and set a number of interpreting rules that define the scope of the model, assigning an interpretation to its components. It is usually possible to construct multiple descriptions of the same model, depending on our specific research purposes. For example, we can construct a physical, 3D tin-and-cardboard model of the double-helix structure of DNA, or we can represent its combination of acids in a diagram, or explicate the structure of DNA by means of ordinary English sentences. The model, in turn, stands in some similarity or representational relation to its target(s). We thus obtain a threefold relation that can be graphically represented in Fig. 22.

3 The representational relation of models to their targets is a matter of lively controversy in the philosophy of science (e.g. Poznic 2017; Suárez 2010). The debate is broadly divided into two camps. The "informational" camp lays emphasis on objective relations between models and targets (Chakravartty 2010); some understand this relation in terms of a structural relation, like homomorphism (Bartels 2005, 2006) or isomorphism (French 2003). The "functional" camp lays emphasis on the facilitating role that models play in our cognitive activities (Elgin 2004, 2017). It has been argued that the two camps are not really opposed, but complementary (Chakravartty 2010). I also argue for such a thesis. As I explain below (§2.3), visual objects are structural representations, and models of visual objects exemplify different features of them, granting us epistemic access to different features of the objects (e.g. Elgin 2017).

Fig. 22: The relation between model descriptions, models, and target systems (adapted from Giere 1988; cfr. also Frigg 2010).

The representational relation is sometimes described as including models, targets, and users; but some researchers (Giere 2004) argue that the representational relation is a four-term relation between models, targets, users, and purposes, or even add further relata, like audiences or commentaries (Mäki 2009). In the remainder of this Chapter, I assume a simple triadic relation for expository reasons, but this will not bear on my considerations. In order to clarify the strategy of model-building, I will now briefly discuss a specific example: the Lotka-Volterra model.

Vito Volterra studied the dynamics of Adriatic fish populations by giving mathematical descriptions of predator-prey relations (Gelfert 2016, pp. 58-61, 86; Weisberg 2007, 2013, pp. 10-13). The model, known as the Lotka-Volterra model, is given in the form of two coupled differential equations:

!" a) = r V – (aV)P !"

!" b) = b(a V)P – mP !"

In equations (a) and (b), V is the size of the prey population, and P is the size of the predator population. The intrinsic growth rate of the prey population is r, and m is the intrinsic death rate of the predators. Parameters a and b correspond, respectively, to the prey capture rate and to the rate at which each predator converts captured prey into predator births (Weisberg 2007). This brings us to the 'interpreted' character of models. The Lotka-Volterra model has a specific scope or extension that is established by convention, i.e. the model applies by convention to a particular target. The variables are assigned in relation to specific features of the target. This usually

involves the more "creative" aspect of model-building, and also the aspect most subject to biases (Wimsatt 2007, p. 96), which is based on the scientists' sensibility and capacity to identify the most salient features of the target.

The Lotka-Volterra model is not supposed to be an actual description of the dynamics of a specific fish population, but rather an ideal type that may capture a highly speculative scenario. Thanks to this model, we can make predictions about the dynamics of fish populations, or explore unexpected consequences that would elude a direct representation approach. One unexpected result is that heavy fishing favors the growth of the prey population, while light fishing favors the predators. It is by studying the model description, rather than the target itself, that scientists can gain insights into the phenomenon and may generate new hypotheses, amend the existing model, or create new models that better account for the target's behavior. Notice that false models, too, may be helpful in a variety of ways, insofar as the model's failure and our heuristic tools enable us to localize the source of error in some part, assumption, or component of the model (Wimsatt 2007, p. 103).
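
As a minimal illustration of what it means to study a model description directly, equations (a) and (b) can be integrated numerically. The parameter values below are arbitrary placeholders chosen only to display the characteristic predator-prey oscillations; they are not estimates for any actual fish population.

# Minimal numerical sketch of the Lotka-Volterra model given in (a) and (b).
# The parameter values are arbitrary placeholders, not empirical estimates.

r, a, b, m = 0.5, 0.02, 0.5, 0.5    # prey growth, capture, conversion, predator death
V, P = 60.0, 20.0                   # initial prey and predator population sizes
dt = 0.001                          # step size for a simple Euler integration

for step in range(int(40 / dt)):
    dV = (r * V - a * V * P) * dt        # equation (a)
    dP = (b * a * V * P - m * P) * dt    # equation (b)
    V, P = V + dV, P + dP
    if step % 10_000 == 0:
        print(f"t = {step * dt:5.1f}   prey = {V:7.2f}   predators = {P:7.2f}")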

One last issue must be briefly addressed before we proceed: what are models about? In the foregoing pages, I have mostly talked about the "target" of models. Models can be of one specific target, like the Lotka-Volterra model or Miller's (1953) experiment on abiogenesis; of classes of targets; of merely hypothetical targets, like models of infinite population growth or perpetual motion (Weisberg 2013, pp. 124-129); and even of non-existent targets. A paradigmatic case of the latter category is that of cellular automata, such as the Game of Life (Weisberg 2013, pp. 129-131). At least when we study some real-world target or classes of targets, and thus excluding more controversial cases like non-existent targets, I take such targets to be phenomena in the sense spelled out by Bogen & Woodward (1988; Woodward 1989, 2011), i.e. relatively stable features of the world that are the potential objects of explanation and prediction.4 Phenomena, in this sense, stand in contrast with data, which are highly idiosyncratic to the experimental context, and which may be used as evidence to infer the presence of one or more underlying phenomena. I concur, however, with Feest (forth.), who points out that, at least in some areas of research in cognitive science, scientists are particularly interested in what she calls "objects of research," rather than in specific phenomena. Objects of research are clusters of different phenomena with epistemically blurry boundaries. As examples we can think of memory, arguably a constellation of different phenomena rather than a single one, as well as vision and other perceptual modalities. In both cases, some phenomena may be known, but many others may still need to be epistemically disentangled from the rest.

4 I am aware that my proposal courts controversy. I see at least two different issues here. First, some models are about fictional targets, like infinite population growth. Can these be called phenomena in the sense defined? I will gloss over this issue, since the examples that I am going to discuss are all real phenomena: visual objects and mechanisms. Second, I have implicitly assumed a form of realism, which can be contested by constructive empiricists (e.g. Van Fraassen 2002). Van Fraassen's sweeping attack against realism also includes the rejection of metaphysics, whereas I argue that metaphysics can play an important role in theory construction (§2.2.1).

To sum up, scientific models are (1) interpreted and idealized structures that stand in some (2) representational relation to their objects or phenomena, and that can be studied (3) to gain indirect insights into the phenomena or objects they are about.

2.2 Models of Visual Objects

2.2.1 Philosophical Models of Visual Objects

The burgeoning interest in scientific models has led some philosophers to propose a modeling approach in philosophy as well. Godfrey-Smith (2006a, 2012) and Paul (2012) have both discussed a model-building approach to metaphysics. Williamson (2017) has recently suggested that other areas of philosophy, too, like formal epistemology and ethics, may be fruitfully understood as involving model-building. Finally, Crane (2015) has proposed a model-building approach to the problem of the contents of consciousness, focusing in particular on intentionality and belief ascriptions. None of these researchers maintains that philosophy is exhaustively described as modeling, but rather that, in an important sense, part of the philosophical enterprise is characterized, as we might say, by an idealization or simulation of nature, rather than an accurate "mirroring" of it (e.g. Rorty 1979).

I contend that, when it comes to the task of characterizing visual objects, philosophers have developed a number of theories that can, for all intents and purposes, be taken as models. I set out to deepen and further articulate Crane's proposal, applying Weisberg's analysis of scientific models to theories of perceptual content. I will now (§2.2.1.1) motivate a philosophical modeling approach to content, and then specify the nature of theoretical speculation about content as a family of models (§2.2.1.2).

2.2.1.1 Justifying Philosophical Models

I think that one important reason to adopt a modeling approach in philosophy of perception is that theories of perceptual content meet the three defining features of models:

(1) Theories of content are articulated in interpreted structures, i.e. abstract and highly idealized structures that purport to capture the metaphysical layout of content. In contrast with scientific models, I claim that MoPs have a distinctively propositional structure. Also, as I will later point out, there are multiple ways to carve up content (a fact that will surely surprise no philosopher), and this is especially true when we lack the resources to significantly narrow down the spectrum of possible descriptions. MoPs thus embody in propositional form a possible way to describe content.


(2) MoPs stand in some representational relation to the described phenomenon or object of research. The exact representational relation that holds between scientific models and their targets is a matter of debate. The MoP case is different, however: since MoPs have a propositional character, their representational character does not pose a "special challenge," but is part and parcel of the problem of the semantics of propositions.

(3) Philosophers mostly discuss content only indirectly, by analyzing, criticizing, and sometimes rejecting such highly idealized and schematic structures. MoPs are studied, adjusted, and rejected for their theoretical consequences, for logical reasons, or for inconsistency with the actual body of scientific knowledge about content.5

It is the third feature that crucially distinguishes modeling from other forms of theorizing, as it sanctions the "indirect" character of modeling in contrast with other, more direct, forms of theorizing (§2.1). Two caveats are important. First, I want to stress that I do not believe, nor am I arguing here, that all philosophers of perception are modelers.6 As I said, modeling is a strategy, a strategy that we may wish to pursue or not, depending on our specific theoretical purposes. According to this strategy, one may exploit a hypothetical, highly idealized theoretical model, a MoP, and study it to gain some insights about content, for exploratory purposes, in order to formulate hypotheses, etc. As Paul (2012) explains, the modeling approach can be contrasted with more conventional conceptual analysis. Whereas conceptual analysis specifies the conditions that must obtain in the world in order for our concepts to be true, and thus provides a theory of the contents of our concepts, philosophical modeling aims at developing an account of the world itself. Philosophical modeling is thus a specific strategy that comes with costs and benefits; a strategy that, I believe, can be particularly fruitful when studying a complex object of research such as content, and when trying to show the relevance of philosophical work for the sciences of the mind (§2.3).

Second, my claim is not meant in any way to "revolutionize" the way we do philosophy of perception. In fact, I believe that philosophers have engaged in modeling for quite a long time (and arguably, not only in philosophy of perception). In this sense, my proposal should foster self-awareness about the nature of a relevant aspect of philosophical work, and urge philosophers to explore uncharted theoretical paths. In the remainder of this Section, I will further elucidate the character of (1)-(3).

5 Like most philosophers, I assume that there is an epistemic asymmetry regarding the status of philosophical and scientific evidence, but I will not further discuss this issue here.
6 For example, Vermersch's (1999) studies on perception rely on a carefully trained introspection method, in continuity with 19th century introspectionism (e.g. Greenwood 2015, pp. 330-331). This is not modeling. Vermersch's approach can likely be characterized as a form of abstract direct representation.

Let's now turn to the first feature of MoPs. In contrast with scientific models, MoPs and other philosophical models have neither a mathematical, nor a concrete, nor an algorithmic structure. I take philosophical models to be propositional7. In other words, MoPs are abstract structures that are truth-evaluable. Construed in this way, we can interpret a complex model as a set or conjunction of propositions, some of which will turn out to be true, others false. Conceptualizing MoPs as propositional models also clarifies the kind of representational relation in which a MoP stands to its target(s). In contrast with scientific models, I think that the representational character of MoPs does not pose any special problem, but is just one aspect of the problem of the semantics of propositions. Of course, MoPs, just like their scientific counterparts, are idealized structures, and it is often a matter of philosophers' sensibility and expertise to detect the relevant features of a target that are worth incorporating into a MoP. Choosing what to ignore, which variables to omit, and which interactions between components of the target system to leave out is somewhat of an art8.

MoPs must satisfy criteria of logical consistency, i.e. a model must be true on at least one interpretation of its expressions (Williamson 2017). This is, I think, an obvious feature of MoPs, as a philosophical model is supposed not to be self-contradictory. Logical consistency should hold both within a specific model description and between a model and the more abstract members of its family (cfr. §2.2.1.2), of which the model represents a specific instance. Thus, for example, a Russellian model of content, or Russellian MoP, is supposed to be logically consistent within itself and with more general or abstract forms of structured MoPs. This requirement does not extend to distinct families of MoPs, i.e. families that lack a more general common model. For example, we should not expect a Russellian MoP to be logically consistent with Fish's (2009) naïve realist account of hallucinations (interpreted as a model). In this case, the two MoPs do not share a more abstract common model, as they belong, respectively, to the family of structured propositional models and to the family of naïve realist models.

7 Marcin Miłkowski has suggested to me that not all philosophers develop propositional models. For example, Churchland (2012) constructs connectionist models. This gives me the opportunity to clarify an important point: my claim is not that philosophers, qua philosophers, develop propositional models. There is nothing strange if a philosopher, after formal training in, say, mathematics, constructs a mathematical model that is an integral part of his philosophical theorizing. But in this case, I call these models not distinctively philosophical, but mathematical, computational, etc.
8 Choosing what to omit—e.g. patterns in the target system, variables, or components—and therefore what to incorporate betrays the philosopher's theoretical biases in a way not dissimilar to the scientific practice of modeling (Giere 2004; Wimsatt 2007, pp. 95-96). How philosophers of perception may detect and identify implicit theoretical biases in their models will be the object of a separate study.

MoPs usually take either a linguistic or a symbolic form. Consider an example. Some philosophers maintain that contents are Russellian structures (e.g. Tye 2007); in this case, the word "content" is of course used in a more technical sense, as synonymous with "conditions of accuracy" (cfr. Ch. 3, §2). Russellian contents are paired relations between an object o and a property P. For example, the content of seeing a green kite would be an ordered pair like <o, green>. However, in some cases, notably hallucinations, we seem to see something where in fact there is nothing, like M. Raymon's hallucination in Maupassant's tale Lui?, or Macbeth's hallucinating a dagger covered with «gouts of blood» before murdering King Duncan. A gappy Russellian model of content is supposed to account for these cases by modeling content as an ordered pair of a gap and a property. In Macbeth's case, this would be something like <—, bloody>. On this MoP, we get the following model descriptions:

(a) A perceptual state manifests an object that instantiates a property.
(b) <o, P>
(c) A hallucinatory state manifests a property, but no object.
(d) <—, P>

All these propositions are model descriptions that specify a Russellian model of content, where (c) and (d) are gappy Russellian contents. Propositions (a) and (c) express the model descriptions in a linguistic or verbal form, whereas (b) and (d) express them in a symbolic form. These examples also help us bring into sharper focus the interpreted character of MoPs, which is analogous to the interpreted character of scientific models. The model description has a specified scope—i.e. content, or better, human content—and fixes a particular interpretation of the symbols in the logical forms (b) and (d). (We could say that in (a) and (c) the meaning of the words is simply given conventionally, as in ordinary English.) The elements that feature in a MoP such as the gappy Russellian model must conform to some criterion of fidelity with regard to content itself. Fidelity criteria must be fixed within a specific context, since they must be calibrated in relation to the degree of the MoP's abstractness. In general, however, it is not uncommon in philosophy of perception for models to be criticized for "getting the phenomenology wrong," or for being too abstruse to capture the salient features of perceptual experience (e.g. Nanay 2012, p. 12).
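To make the ordered-pair form of these model descriptions concrete, the symbolic forms (b) and (d) can be rendered as a simple data structure. The following sketch is purely illustrative and mine, not Tye's or Schellenberg's formalism: names such as RussellianContent and is_gappy are invented for the example, and the gap is represented, by assumption, as an absent object.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RussellianContent:
    """An ordered pair <o, P>: an object paired with a property.
    obj is None in the gappy case, i.e. <—, P>."""
    obj: Optional[str]   # e.g. "kite"; None marks the gap
    prop: str            # e.g. "green" or "bloody"

    def is_gappy(self) -> bool:
        return self.obj is None

# (b): veridical perception of a green kite, <o, green>
seeing_kite = RussellianContent(obj="kite", prop="green")
# (d): Macbeth's hallucinated dagger, <—, bloody>
macbeth_dagger = RussellianContent(obj=None, prop="bloody")

print(seeing_kite.is_gappy())     # False
print(macbeth_dagger.is_gappy())  # True

Nothing philosophical hangs on this rendering; it merely displays the structural difference between (b) and (d) that the discussion exploits.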

The example of the gappy Russellian MoP helps us highlight the third feature of models (3). As I mentioned earlier, philosophers construct (sometimes at least) propositional models and study these models, rather than directly studying the phenomena they are about. A criticism leveled against the Russellian gappy content view, voiced for example by Schellenberg (2010, pp. 33-34), is that it entails a commitment to the controversial metaphysical claim that we see uninstantiated properties in hallucinatory cases. This undesirable consequence may lead some philosophers either to discard Russellian gappy MoPs or to amend them. Consider now another example, again using Tye's work as critical target. Tye (1994) argued that perceptual content is phenomenally conscious—i.e. there is something it is like to be related to that content—not in virtue of some property of "being conscious" but rather in virtue of being a particular kind of content that meets the PANIC requirements. The PANIC theory states that a content is

conscious if it is poised, abstract, and non-conceptual. A mental state is poised iff it «stand[s] ready and available to make direct impact on beliefs and/or desires» (Tye 2000, p. 62; Tye 1994, p. 138). "Abstract" means that the object the content is about need not be present in the environment (cfr. also Crane 2009), i.e. it may be the content of a hallucinatory state. Finally, the non-conceptuality requirement means that the features of a conscious state do not require that the subject possesses «matching concepts» (Tye 1994, p. 139).

Schlicht (2011) maintains that the two-pathway hypothesis of the visual system provides empirical evidence against Tye's PANIC theory. In doing this, he makes reference to the work of Jacob and Jeannerod (Schlicht 2011, p. 503), who argue for a distinction between two kinds of visual processing in the ventral and dorsal pathways: semantic processing of a stimulus in the ventral pathway produces non-conceptual perceptual representations that are directly poised for further cognitive processing by the belief system; pragmatic processing of a stimulus in the dorsal pathway produces instead a non-conceptual visuomotor representation that stands ready for the intention system. Only the former representations, but not the latter, become conscious. Schlicht argues that some forms of visual agnosia show that although some subjects—due to brain lesions—lose their ability to spatially locate objects in the environment (Schlicht 2011, pp. 503-504), they are nonetheless surprisingly able to perform the right sort of actions in relation to those objects. This, according to Jacob and Jeannerod, can be explained by the fact that although these subjects had impaired semantic processing, the visuomotor representations were still available to the cognitive system. This seems to contradict Tye's PANIC theory, as the visuomotor representations are both non-conceptual and poised for action control, and yet they are not phenomenally conscious. These two examples nicely illustrate two things. First, philosophers usually study and criticize MoPs. Schellenberg examines Tye's gappy Russellian model, and on the basis of it she draws the metaphysical consequence that we might see uninstantiated properties. Something similar happens in Schlicht's criticism of Tye's PANIC theory: he examines the model itself, and on the basis of it he draws some implications that are apparently disconfirmed by current scientific research. Both examples illustrate the indirectness of a modeling approach. Second, the two examples show how the process of refining or criticizing a model is steered both by theoretical-philosophical considerations—as in the gappy content case—and by empirical evidence and consistency with the current scientific body of knowledge—as in Schlicht's criticism of Tye's PANIC theory. Of course, nothing prevents philosophers from developing their models without taking the scientific evidence into account. An obvious consequence, however, is that such models will be of limited use for our scientific understanding of the mind.

I will now argue (§2.2.1.2) that theories of contents can be understood as a family of models, and clarify the nature of their targets.


2.2.1.2 Families of Philosophical Models

I understand the theory of contents as a family of models (MoPs). I borrow this idea from the semantic view of scientific theories, which states that theories (all scientific theories) are families of models9. I have already hinted at this aspect earlier (§2.2.1.1), when I stressed that models should be logically consistent within themselves and with more abstract models of the same family. This simply means that, say, a gappy Russellian MoP must be logically consistent in itself and minimally logically compatible with structured MoPs and, at a higher level of abstraction, with propositional MoPs. If the minimal requirement for propositional MoPs is that they must be truth-evaluable, then every specific instance of propositional MoP—Russellian, Fregean, possible-world, etc.—must retain this feature and add further defining features of its own. This suggests that families of MoPs can be grouped into a comprehensive taxonomy (cfr. Fig. 23).

Fig. 23 is an (obviously) incomplete chart of the different philosophical options, but it may nonetheless help to illustrate my point. At the top we find "neutral" MoPs, i.e. MoPs that do not make any assumption regarding perception's dependency on internal states of the perceiver. A first general distinction can be drawn between representationalist and relationist theories of perception (Campbell 2002; cfr. Ch. 3, §1.3). The former camp can be divided into propositional and non-propositional families of MoPs. The propositional family may be further analyzed into structured and unstructured contents. Structured accounts, for example, may be divided into Russellian and Fregean contents, and so on. Each of these groups of models is more or less abstract and specifies the core features of its more specific instances. For example, structured propositional MoPs specify the main characteristics of every sub-family, like Russellian and Fregean MoPs. The degree of abstractness at which we may want to study content depends, of course, on our specific philosophical interests, and is thus governed mainly by pragmatic considerations.

9 The semantic view of models is an influential attempt to clarify the structure of scientific theories using the concept of 'model' (e.g. Giere 1988; Suppes 1960). However, the notion of model in the semantic view is basically the logician's sense or something close to it, as Godfrey-Smith (2006, p. 727) points out. I do not espouse the semantic view here, and my talk of families of models does not entail it.


Fig. 23: Family of models, focusing specifically on representational theories of contents. The higher a model stands in the chart, the more abstract and less detailed it is. The vertical relations stand for derivational relations of logical consistency. Lower models retain features of their "parents" higher up and add further details and features; models at the bottom of the chart are the most detailed.

This genealogy of theories of contents, only sketched out here, can of course be further developed into a comprehensive taxonomy. Such a taxonomy may then help philosophers chart and explore the logical territory, individuate previously unexplored options, and locate old and new theories within the genealogy. Placing a specific MoP within the chart may also help in identifying its (sometimes implicit) theoretical assumptions. (One way to chart the family of MoPs may be by means of formal ontologies, as suggested by Arp et al. 2015.)

Just as scientists work on model descriptions that specify a particular model, so do philosophers of perception. The set of propositions that specifies Tye's gappy Russellian MoP is again an excellent example. That set of propositions offers a model description (in either logical [b, d] or sentence form [a, c]), which specifies a particular MoP, in this case a gappy Russellian MoP, which in turn is supposed to stand in some representational relation to content (Fig. 24).

In most cases, philosophers are interested in issues such as the relation between content and consciousness (cfr. Ch. 3, §2; Ch. 5, §1.2), or the structure of content (Russellian, Millian, etc.). This hardly involves talk about a single phenomenon—in the sense defined above (§2.1)—but rather about a cluster of distinct phenomena whose boundaries are not always clear. (And which might, for some theoretical purposes, better remain fuzzy.) As an example, consider Fregean content. Fregean approaches claim that contents are modes of presentation of objects and properties. For instance, experiencing phenomenal redness would be a mode of presentation of the property that normally causes experiences of phenomenal redness (Chalmers 2010, p. 364ff). Fregean content, as presented here, is not about a single phenomenon, but rather concerns an object of research in the sense defined above (§2.1), in this case

the problem of object and feature perception, which involves many distinct but closely related phenomena: how features are visually attached to a specific object (the feature-object binding problem), in what relations modes of presentation stand to their referents, and so on.

Fig. 24: The threefold relation of modeling applied to MoPs.

2.2.2 Scientific Models of Visual Objects

So far, I have talked about philosophical models of contents. Scientists, too, represent visual objects by means of models. Relying on the configuration of visual objects described in Ch. 7 (§2.1), I set out to show that visual objects can be studied as relational structures. Both scientific and philosophical models of visual objects, as I will later show (§2.2.3), are phenomenological models, i.e. models that do not provide scientific explanations but that open up a different epistemic space on their targets by means of some property exemplification (Elgin 2004, 2017). Before moving forward, I must specify what a relational structure is.

A relational structure is a tuple T whose first component is a non-empty set W, called the domain of T, and whose remaining components are relations on W (Blackburn et al. 2001, p. 2). A relational structure must contain at least one relation. In the literature, the elements of W have received different names, but I will settle for the standard "nodes." The nature of nodes is ontologically flexible: they can be properties, or parts. This is consistent with our claim that configurational factors hold at different hierarchical levels (cfr. Ch. 7, §2.1.1). For example, Pinna & Deiana (2015, p. 280) speak of visual objects as structured wholes whose elements are properties and spatial relations. Feldman (2003), on the contrary, seems to focus on an object's parts, which are distinct from properties. Models of visual objects can be used either to represent parts of images (especially in computer vision, cfr. Joo et al. 2015), or properties and their relations. A visual object is a relational structure in the sense that its elements, or nodes, can be shown to stand in some configurational relation to other nodes. In mathematics, computer science, linguistics, artificial intelligence, as well as other branches of knowledge, relational structures are formalized in different ways, e.g. as transition systems, as knowledge-representation graphs in AI, or by means of finite trees. Furthermore, relational structures can be

formalized by means of mathematical notation (cfr. the case of Petitot's formalization of visual content in Ch. 8, §3; also Vernazzani 2016a), or by means of diagrams. A particularly interesting case of diagrams of visual objects is that of trees (Feldman 2003; Joo et al. 2015; Palmer 1977).

A tree ℑ is a relational structure (T, S) where:

(i) T, the set of nodes, contains a unique r ∈ T (called the root) such that ∀t ∈ T S*rt, where S* is the reflexive transitive closure of S.
(ii) Every element of T distinct from r has a unique S-predecessor; that is, for every t ≠ r there is a unique t′ ∈ T such that St′t.
(iii) S is acyclic; that is, ∀t ¬S⁺tt, where S⁺ is the transitive closure of S. (Blackburn et al. 2001, p. 6).

More informally, a tree is a hierarchical branching structure with a "root" or "head" at the top (Fig. 25). The nodes that depart from the root are called "children." The children can themselves branch further into other children, or give rise to "subtrees." Children at the bottom that do not themselves have children are called "leaves." Children are usually "non-generic" or "regular," i.e. they represent a particular spatial relation. When children do not represent significant spatial relations, they are "generic" and are represented by a Ø in the diagram. A "disjoint" is a generic node with at least one regular child, whereas a "disjoint regular subtree," i.e. a subtree that hangs from a disjoint and has a regular head, is an object. Fig. 25 thus represents three distinct objects. It is possible, however, to leave out the background and the representation of the figure-ground relation so as to construct a tree of a single visual object.

Fig. 25: A hierarchical structure called “tree” (from Feldman 2003, p. 253; cfr. also Palmer 1977, p. 444).


Consider how a simple visual object may be represented by means of trees (Fig. 26). Fig. 26 below illustrates the flexibility of the notion of a tree. On the left-hand side, the tree represents the structure of a single object composed of four distinct elements. On the right-hand side, the same object breaks, becoming two distinct objects. The respective trees represent the change by placing, in the right-hand picture, a generic node at the root.

Fig. 26: A tree-like representation of two visual objects, from Feldman (2003, p. 254). On the left-hand side a single object. On the right-hand side two objects represented by means of a generic root.
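Since trees are standard data structures, the organization shown in Figs. 25-26 can be made concrete in a few lines of code. The sketch below is only an illustration under my own assumptions: node labels, the Node class, and the count_objects helper are invented for the example and simplify Feldman's definitions (a regular root counts as one object; a generic root dominates as many objects as it has regular subtrees).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A node of a tree-like model of a visual object.
    label is None for a 'generic' node (the Ø of the diagrams)."""
    label: Optional[str] = None
    children: List["Node"] = field(default_factory=list)

    def is_generic(self) -> bool:
        return self.label is None

def count_objects(node: Node) -> int:
    """Count the regular subtrees (objects) dominated by this node."""
    if not node.is_generic():
        return 1
    return sum(count_objects(child) for child in node.children)

# Left-hand side of Fig. 26: one object with four parts under a regular root.
single = Node("whole", [Node("a"), Node("b"), Node("c"), Node("d")])
# Right-hand side: the object breaks; a generic root dominates two regular subtrees.
broken = Node(None, [Node("part1", [Node("a"), Node("b")]),
                     Node("part2", [Node("c"), Node("d")])])

print(count_objects(single))  # 1
print(count_objects(broken))  # 2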

Trees can also capture more complex visual scenes, which are composed of a number of different components and subcomponents. Two examples of visual scenes, i.e. portions of the visual field that include more than a single visual object, analyzed by means of trees are represented in Fig. 27.

Fig. 27: A natural scene (top) and a visual object (bottom) can both be decomposed into parts and subparts, which can be represented by means of trees. This model, developed by Joo et al. (2015, p. 920), is directly inspired by Biederman's (1987) "Recognition-by-components" theory.


Trees can also take into account subjective variations of perceptual configuration, for example in cases of bistable images or, in general, of stimuli that can be interpreted in different ways. This is shown, for example, in Fig. 28, where a single stimulus gives rise to two distinct interpretations, and therefore to two distinct visual objects.

Fig. 28: A single material object may give rise to two distinct subjective interpretations, i.e. to two distinct visual objects. This is reflected in the different configurations of the corresponding trees. On the left-hand side, we observe the groupings (a, c) and (b, d), whereas on the right-hand side we observe (a, b) and (c, d) instead. From Feldman (2003, p. 254).

Trees are a form of mathematical model of visual objects. They visually describe the spatial arrangement of visual objects as forming a «coherent whole» (Treisman 1986). The examples examined so far are all model descriptions that specify a single model, a tree-theoretic model of visual objects, which stands in some representational relation to its targets. Before turning to model pluralism, i.e. the claim that multiple models of visual objects each reveal something distinct about their targets, I will elaborate on the phenomenological character of such models.

2.2.3 Models of Visual Objects as Phenomenological Models

Surely, content-NCC research is mainly driven by an explanatory goal (cfr. Ch. 5, §2). However, not all models are constructed for explanatory purposes. In general, whether a model counts as explanatory depends on requirements that demarcate genuine explanations from mere descriptions and merely predictive models. For example, it is sometimes claimed that dynamical models are explanatory only insofar as they specify the underlying mechanism(s) that generate the observable behavior (e.g. Kaplan & Craver 2011; cfr. Ch. 6, §§3-4). Hence, models such as that of infant perseverative reaching (Thelen et al. 2001), which refrain from specifying the actual implementation of the behavior, may not be explanatory. Models that lack scientific explanatory power are usually called "phenomenological models." Such models only capture the observable behavior of a system (Frigg & Hartmann 2009). Although they do not explain the targets they are about, it is widely acknowledged that phenomenological models nonetheless play an important role in the sciences (e.g. Batterman 2002; Bogen 2005; Craver 2006; Hochstein 2013; Wimsatt 2007).


Batterman (2002) examines the role of minimal models—i.e. highly idealized models of the target phenomenon—in statistical mechanics, and concludes that the best way to think of their role is «that they are a means for extracting stable phenomenologies from unknown and, perhaps, unknowable detailed theories» (ibid., p. 35). Such generalizations may then be used for computational or explanatory purposes (ibid., p. 37). Whether generalizations play an explanatory role is an issue that has generated a lively controversy in philosophy of science and cannot be examined here. The point, however, is not whether generalizations are explanatory or not, but the role they can play in non-explanatory contexts. Consider a case from the life sciences. Bogen (2005) argues that the regularities of a system may be used not so much for explanatory as for epistemic and exploratory purposes. In making this point, Bogen follows Mitchell, who contends that «The function of scientific generalizations is to provide reliable expectations of the occurrence of events and patterns of properties» (2003, p. 124). According to Bogen, such models may be used to

− describe facts to be explained,
− suggest and sharpen questions about causal mechanisms,
− suggest constraints on acceptable explanations,
− measure or calculate quantities,
− support inductive inferences (2005, p. 401).

As an example, Bogen considers the famous Hodgkin-Huxley equations of the action potential, the pulse of electricity that, traveling down the axon towards the synapse, activates the release of neurotransmitters. Studying the squid giant axon, Hodgkin and Huxley argued that the magnitude of the potassium current IK, which helps repolarize the membrane, varies with ḡK, the membrane's maximum potassium conductance, a weighting factor (n4), and a driving electrical force equal to the difference between Em, the membrane potential, and the resting potential for potassium (EK). The equation for the potassium current is:

IK = ḡK n4 (Em − EK)

As Bogen argues, this equation incorporates the «qualitatively correct idea that IK varies with (Em − EK)», yet he specifies that this model is also «quantitatively inaccurate to a significant degree». It is, in other words, highly idealized. But in spite of its inaccuracies, the model has served as an indispensable tool for studying the action potential. The mechanism governing the action potential was, at that time, still unknown, and Hodgkin and Huxley meant their equations to be «empirical descriptions» of the target phenomenon (Hodgkin & Huxley 1952, p. 541; quoted from Bogen 2005, p. 404). The model served a useful heuristic role, describing the behavior of the phenomenon and, as Bogen says, indicating the «features of the phenomena of interest which mechanistic explanations should account for» (ibid., p. 403). This highlights one

interesting feature of phenomenological models, namely their use for exploratory or heuristic purposes in the sciences (Gelfert 2016, pp. 79-97; Wimsatt 2007).
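The qualitative behavior Bogen highlights can be checked directly from the equation. The following sketch is a toy numerical illustration only: the function name is mine, and the parameter values are commonly cited textbook figures for the squid giant axon, not Hodgkin and Huxley's original fitted values.

def potassium_current(n: float, E_m: float,
                      g_K_max: float = 36.0,       # mS/cm^2, maximum K+ conductance (ḡK)
                      E_K: float = -77.0) -> float: # mV, potassium resting potential
    """Hodgkin-Huxley potassium current IK = ḡK * n^4 * (Em - EK), in µA/cm^2."""
    return g_K_max * n**4 * (E_m - E_K)

# IK grows with the driving force (Em - EK): the "qualitatively correct idea"
# the equation incorporates, independently of its quantitative inaccuracies.
for E_m in (-70.0, -40.0, 0.0, 30.0):
    print(E_m, round(potassium_current(n=0.5, E_m=E_m), 2))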

None of the examples of models of visual objects examined so far is explanatory. The question, to be examined in the next Section, is precisely whether these phenomenological models can be exploited, via an isomorphism, to gather information about the underlying mechanisms. Before doing that, I justify below my claim that these models are phenomenological.

Let us consider MoPs first. MoPs do not provide scientific explanations10. Two reasons motivate my stance. The first is that MoPs aim at describing the metaphysical layout of content, not at speculating about how some actual brain mechanism or neural dynamics engenders our conscious visual experience. How exactly the brain, or perhaps an extended cognitive system, generates our contents is an object of scientific investigation (Snowdon 2015; see also Schlicht ms. for a dissenting opinion). This claim is consistent with the idea that MoPs—as phenomenological models—may still play an important role in clarifying the structure of content and the behavior or outputs of the cognitive system11.

The second reason is that I assume a distinction between philosophy and the sciences, at least in their goals and/or methods12. It is well beyond the purpose of this Chapter to give a detailed account of the relations between philosophy and the sciences (e.g. Cellucci 2014; Putnam 2010). My assumption is broadly compatible with different stances. Some philosophers may want to follow Quine and regard philosophy as continuous with the sciences (Quine 1969, pp. 126-127). (Notice also that Quine's continuity thesis can be interpreted in different ways, cfr. for example Marconi 2012.) Other philosophers may wish to stress that philosophy does not aim at explanation (e.g. Wittgenstein 1958, §109), or emphasize that there is no analogy—or at least not a useful one—between scientific theories and the products of analytic metaphysics (Putnam 1992, p. 141; on the relations of metaphysics to science, cfr. also Ladyman & Ross 2007). Much of contemporary philosophy of cognitive science, as well as other regional philosophies of science, can be construed as a highly abstract «extension of the natural or mathematical sciences» (Williams 2006, p. 182). It seems quite plausible to say that many works fall within an area of research with fuzzy epistemological boundaries, neither purely philosophical nor purely scientific. However, from the fact that philosophers may conceive their work as continuous with the sciences, it follows neither that philosophers provide scientific explanations, nor that there is no distinction between philosophy and the sciences (Ladyman 2012, p. 41).

10 MoPs may still provide some non-scientific form of explanation, such as philosophical explanation, or further some kind of philosophical understanding. Whether MoPs may play a distinctive role as philosophical explanations is a question that depends on further meta-philosophical assumptions that have little bearing on my claims.
11 I tackle the issue of the roles of MoPs for the sciences of the mind in my Vernazzani (under review).
12 Some philosophers may argue that both scientists and philosophers aim at truth, for example, but adopt different methods. Others, like Paul (2012), argue that metaphysicians and scientists have the same methods, but different subjects. Paul contends that metaphysicians seek to uncover general truths that are prior to those of the sciences. Finally, some philosophers may cast doubt on the very idea of continuity with the sciences; this is the case of Dummett (2010) and Putnam (1992).

Let us now turn to scientific models of visual objects. The main aim of these models is not to explain in virtue of what we come to see visual objects, but to serve as a basis for the study of perceptual organization. In his seminal study on perceptual organization, Palmer (1977) provided a theoretical framework that offered a middle-ground position for thinking about visual objects between Gestalt or holistic theories and structural theories of visual objects. The theoretical framework introduced by Palmer lays the foundation for the "tree" representation of visual objects that we have seen above. As I have pointed out earlier, these models do play an important role in research, especially as heuristic tools, even though they do not provide a "vertical" explanation of how the cognitive system engenders our content. Among the useful implications that follow from the tree representation of visual objects, Palmer mentioned the following. First, these models point to the «need for representing information about the perceptual organization of elements into structural units» (1977, p. 469). At least three levels of structural units are identified: the whole figure, the multisegment parts, and the individual line segments (ibid.). Another important implication of these models is that the encoding of structural units must be context sensitive: whether a group of elements will be encoded as a unit depends on its contextual place within the structure (1977, p. 570). The two implications mentioned here do not exhaust the useful roles of models of visual objects; they are, however, sufficient in the present context to highlight their non-explanatory roles.

2.3 Model Pluralism about Visual Objects

I have described different ways in which we can model visual objects. On the one hand, we have philosophical models, or MoPs, which come in different varieties. On the other hand, we have models of visual objects by means of trees. There is no contrast between these different models, i.e. we are not forced to choose between one "authentic" model (or set of models) and spurious models. In fact, model pluralism is just another rich resource to rely on for the study of a target or targets (e.g. Wimsatt 2007; Hochstein 2016). This aspect will also play a significant role when we come to the task of understanding how to model intentional mechanisms (cfr. §3.1).

Different models highlight different aspects of the target. I borrow the terminology of Elgin (2004, 2017), who speaks of exemplification in relation to models. The main thesis is that different models are able to tell us something about the target not so much in virtue of some kind of similarity, which can be spelled out in more or less mathematically precise terms, but in virtue of the fact that models exemplify some important features of the target13. Exemplification secures the role of models, notwithstanding their falsities and generalizations. This relation of exemplification then opens up an epistemic space that can be further explored to study a previously neglected aspect of the target. For example, structured propositional MoPs bring to the fore the question of the sentence-like structure of visual objects (cfr. Ch. 4, §3, §5). Consider the case of Russellian content. On this MoP, a content has a subject-predicate structure, where a property P is predicated of an object o (cfr. §2.2.1). Regardless of whether visual objects turn out to have such a structure—Textor (2009), as we have seen, casts doubt on the propositional character of perceptual experience on the ground that we do not see facts (cfr. Ch. 4, §5.3)—Russellian MoPs open up an epistemic space where such a question is raised. Models of visual objects in vision science (§2.2.2) are formalized structures that exemplify the configuration of visual objects.

Not only do different models of visual objects co-exist; they also need not be mutually exclusive. Certainly, some of the MoPs examined in the foregoing paragraphs are mutually exclusive: structured propositional MoPs and naïve realist MoPs, for example, clearly are. However, scientific models and (most) MoPs are not mutually exclusive. It is perfectly legitimate to combine MoPs with tree representations of visual objects, because they grant us epistemic access to different (putative) properties of the targets. The fact that visual objects have a configuration (Ch. 7, §2.1) can be accepted as a primitive feature of visual objects, without prejudging the question of whether they also have a propositional character. There is, however, an underlying common assumption regarding the nature of visual objects, since all vision scientists agree that they can be modeled by means of trees, independently of additional considerations regarding, for example, the conceptual or propositional character of content.

13 As I pointed out earlier, there need not be a contrast between informational and functional accounts of models (cfr. Chakravartty 2010). The reason why I espouse Elgin's account is that it provides an elegant and straightforward way to do justice to model pluralism. The point is not how much different models resemble—perhaps in homomorphic terms—their targets (i.e. visual objects), but that they may give us access to aspects other models are silent about. For example, tree representations of visual objects may grant us epistemic access to the configuration of visual objects, but this feature, however important, hardly exhausts what we can say about them. Structured propositional MoPs, in turn, may lead us to wonder about the subject-predicate structure of visual objects (cfr. Ch. 4), and thus open up an epistemic space that legitimates such a question. These models may also open up the space where we may raise the question about the rich content view, etc. The question is not so much whether these questions find a positive answer. It may turn out that visual objects do not have a subject-predicate structure after all (as seems implied by my argument against factualism in Ch. 4), or that the rich content view is false. What matters most is that these questions broaden the space of legitimate questions about the targets, and thus deepen our understanding of them. This view, which I merely sketch here and will develop at length elsewhere, is directly inspired by the work of both Elgin (esp. 2017) and Wimsatt (2007).

The fact that visual objects can be described by means of a formal system—by means of trees, topological models, etc.—independently of whether they are propositionally structured (cfr. also Petitot 2011, p. 18 for a contrasting opinion) is the crucial feature that we will exploit in order to assay the problem of psychoneural isomorphism. For, as I have said in the previous parts of this work (esp. §1, and Ch. 2, §1), in order to show that there is some kind of psychoneural isomorphism between the specified Domains, we need a formal structure. Granted that we can mathematically describe visual objects—thus focusing only on one particular family of models of visual objects—the question is how we can map these models onto the underlying Neural Domain.

3. Connecting the Two Domains

As we have seen, psychoneural isomorphism involves the construction of mathematical models, i.e. relational structures, of both Domains. In light of what I said in Ch. 1 and 2, we must tackle two issues. The first, which will turn out to be more apparent than real, is whether there is a psychoneural isomorphism. The second, more important, is where this leads us. Does psychoneural isomorphism offer any interesting insight about the Neural Domain? And is it an essential component of any explanation of the "visual" (cfr. Ch. 2, §2.3)? In the next pages, I will return to themes already discussed earlier in this work, but this time in light of the considerations developed in the previous Chapters.

I will proceed as follows. First (§3.1), I will return to the problem of the Matching Content Doctrine, which we already met in Ch. 1 (§2.3) and in Ch. 5 (§3) in relation to Chalmers' understanding of content-NCCs. Noë & Thompson's (2004) criticism of the concept will be presented again, together with Petitot's criticism and a new criticism based on the account developed in Ch. 5. I will then move on (§3.2) to examine Petitot's morphodynamical account, which provides an actual case of psychoneural isomorphism. Petitot's work is an impressive achievement, but it suffers from some theoretical shortcomings that will become apparent in light of Ch. 5. Furthermore, a question will be raised about the value of psychoneural isomorphism so understood. Petitot's account is a morphodynamical approach—a sophisticated form of dynamical model—that is supposed, in his very words, to yield a mathematical explanation of consciousness. Topological explanations are a form of what I—following Haugeland (1998)—call "morphological explanations." This creates a tension between the account I sketched in Ch. 5 and Petitot's approach. I argue that morphological and mechanistic explanations can be reconciled (§3.3). It will be shown that psychoneural isomorphism does not offer much heuristic help in the search for intentional mechanisms, but it can be used as a normative ideal in pruning the space of possible explanations of visual content.


3.1 The Matching Content Doctrine

Earlier (Ch. 5, §3), we saw that, according to Chalmers' definition of content-NCCs, there must be a matching relation between the contents of consciousness and the underlying neural representations. This has been called the "Matching Content Doctrine" (MCD) by Alva Noë and Evan Thompson (2004). The exact nature of such a matching relation is far from clear. One intuitive way to spell it out is by means of the notion of psychoneural isomorphism. Under this reading—marshaled by Noë and Thompson—psychoneural isomorphism is an isomorphism between perceptual content and the underlying neural representations. Noë and Thompson are critical of this understanding of the MCD and argue for an ACD, an "Agreement Content Doctrine." Notice, however, that Noë and Thompson do not discuss neural mechanisms; they largely operate within Chalmers' definition of content-NCCs. I will first spell out Noë and Thompson's criticism of the MCD, and then briefly consider Petitot's reply before articulating my own criticism.

In their paper, Noë & Thompson (2004) target the notion of a Matching Content Doctrine, focusing in particular on experiments on binocular rivalry (cfr. Ch. 5, §4.2.1). To reiterate, in cases of binocular rivalry we witness the phenomenon of perceptual dominance while the stimulus is kept constant. When two distinct stimuli are presented to the two eyes of a perceiver, the subject can only perceive one of the two stimuli at a time, switching every few seconds from one stimulus to the other. These studies have been particularly helpful in dissociating the neural activity related to conscious content perception from the neural activity related to specific perceptual features or, within my account (cfr. Ch. 5, §4), in distinguishing between intentional and selection mechanisms. Early studies on binocular rivalry (Leopold & Logothetis 1996; Crick & Koch 1995, 1998; Logothetis 1999) have shown a positive correlation between early visual processing activity (in V1 and V2) and the stimulus, whereas later visual processing areas were better correlated with perceptual content. Within Chalmers' framework, the high correlation between later visual areas and perceptual content would offer a case of content matching. Against Chalmers, Noë & Thompson (2004, p. 11) argue that these studies do not offer evidence of content matching, but rather of content agreement (Thompson 2007, pp. 357-358).

One shortcoming of the debate is that, unfortunately, none of its participants has clarified the terminology. There is, for instance, no clear analysis of the notion of "matching" in Chalmers' (2000) study on the NCCs. Although Noë and Thompson (2004) identify it with our concept of psychoneural isomorphism, this seems to me an uncharitable reading, as Chalmers' notion of matching might also be interpreted in different ways. Nowhere in his text does Chalmers speak of the MCD as a form of psychoneural isomorphism, and if we interpret it as a kind of morphism, there is no in-principle obstacle to interpreting it as a kind of homomorphism. For example, it might be interpreted as a form of homomorphism of a specific degree of similarity. There is

also no clear definition of the notion of "agreement" in Noë & Thompson (2004), although they provide some hints. To illustrate the concept of agreement, as understood by our authors, let me make a simple analogy. Suppose you (the reader) are admiring a painting at an exhibition (the example is gently borrowed from Mussorgsky). You stop and describe it, its colors and shapes, how they harmonize and produce the beautiful and elegant figure you are looking at. Now, so goes Noë and Thompson's interpretation, your words do not match the painting. Strictly speaking, there is no matching between the words and the figure. However, the words used to describe the picture "agree" with the figure in such a way as to pick out its most salient features. The analogy has its limits, and we should take it with a grain of salt if we want to avoid getting tangled up in complex semantic issues. But the idea is roughly clear. The point is that between content (the visual object) and neural activity (the neural representation) there cannot be a relation of isomorphism, because the contents are of different kinds. I will now examine Noë and Thompson's motivations for rejecting the MCD.

Noë & Thompson point out that studies on binocular rivalry were mostly conducted by recording the spike activity of neurons, focusing on their receptive fields (Rees et al. 2002). A receptive field (RF) is the region of sensory space in which the presence of a stimulus will alter the neuron's firing (Kandel et al. 2000, pp. 515-520). Noë and Thompson (2004, p. 14) claim that RF-contents and perceptual contents cannot match because, as they explain, the latter are: (i) structurally coherent; (ii) intrinsically experiential; and (iii) active and attentional. Since these properties cannot be found in RF-content, there cannot be a matching relation, but merely one of agreement. Let's examine these reasons more closely.

(i) The first feature is that perceptual content—or, more restrictively, visual objects in our case—exhibits a structure dictated by Gestalt phenomena, figure-ground segregation, etc. In short, perceptual content is constructed according to the complex configurational factors we have discussed in Ch. 7 (§2.1). The RF-content, however, does not seem to obey the very same configurational factors.
(ii) The second feature is that perceptual content is experienced, in the sense we have discussed before (Ch. 2, §2.1; Ch. 5, §1.2) of having a phenomenal character. There is something it is like (Nagel 1974) to undergo a visual experience of seeing this notebook or this journal on the desk. However, there would be nothing it is like to experience an RF-content.
(iii) The third feature is that perceptual content, but not RF-content, is used to actively explore our environment. Such exploration takes place not only visually, but also attentionally. Occluded objects are a case in point. Think, for example, of seeing a cat through a fence: you do not see the whole cat, only her tail, two legs, and the head. The object is not present to you in its entirety, but you nonetheless have the visual impression of a whole cat.


The argument is thus simple: since the two contents do not share all their properties, they cannot match. Two things are worth saying about Noë and Thompson's argument. The first is that, if we interpret—as they do—the notion of matching in terms of isomorphism, it is not clear why these objections would amount to a refutation of psychoneural isomorphism. An isomorphism, as we have seen (Ch. 2, §1), is a structure-preserving function between two domains. If we do not specify the nature of such a structure, there is little point in talking about isomorphism in the first place. Mere talk of perceptual content is too vague and unhelpful to support or reject psychoneural isomorphism: is perceptual content meant to be a single visual object, or a cluster of many objects? Or is it a visual scene? And how can we formalize its structure? The second thing is that hardly any vision scientist would expect a perfect correspondence between a perceptual content and a single RF! This point brings us to the core of Petitot's rebuttal of Noë and Thompson's rejection of psychoneural isomorphism: an isomorphism may hold between a "macro-level" of neural activity and perceptual content (Petitot 2008, p. 367) (cfr. Ch. 1, §2.3, Fig. 5; and below §3.2).
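The bare formal requirement at stake here, a structure-preserving bijection between two relational structures, can be made explicit in a few lines. The sketch below is only a toy illustration under my own assumptions: the two small structures and the brute-force search are invented for the example and are not meant to model either Domain.

from itertools import permutations
from typing import Dict, Set, Tuple

Relation = Set[Tuple[str, str]]

def preserves_structure(f: Dict[str, str], R: Relation, S: Relation) -> bool:
    """f preserves structure iff (x, y) is in R exactly when (f(x), f(y)) is in S."""
    return all(((x, y) in R) == ((f[x], f[y]) in S) for x in f for y in f)

def isomorphic(A: Set[str], R: Relation, B: Set[str], S: Relation) -> bool:
    """Brute-force search for an isomorphism between two finite relational
    structures (A, R) and (B, S): a structure-preserving bijection from A to B."""
    if len(A) != len(B):
        return False
    nodes = sorted(A)
    return any(preserves_structure(dict(zip(nodes, image)), R, S)
               for image in permutations(sorted(B)))

# A toy "perceptual" structure (a three-node tree) and a toy "neural" structure:
percept_nodes, percept_rel = {"root", "part1", "part2"}, {("root", "part1"), ("root", "part2")}
neural_nodes, neural_rel = {"u", "v", "w"}, {("u", "v"), ("u", "w")}

print(isomorphic(percept_nodes, percept_rel, neural_nodes, neural_rel))  # True

The point of the sketch is merely that isomorphism claims only become evaluable once both relata are specified as relational structures, which is exactly what the vague appeal to "perceptual content" fails to do.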

I will discuss Petitot's account in more detail below (§3.2), since it provides an interesting case of psychoneural isomorphism. Before doing that, however, we should also consider another issue that besets Noë & Thompson's paper. Their work does not assume a clear explanatory structure for content-NCC research, and it therefore suffers from the same issue that besets Chalmers' understanding of content-NCCs (cfr. Ch. 5, §§2-4.1). Indeed, in their criticism they are not so much arguing for a new understanding of content-NCCs as for a new understanding of the MCD. As we have seen, however, there are no content-NCCs; instead, there are multiple mechanisms that subserve different functions, whose joint operation and coordination produce the content of states of seeing. Strictly speaking, if a correspondence is to be found, it is with the activity of intentional mechanisms, whose function is precisely that of fixing the accuracy conditions of states of seeing. Between what kinds of entities should the isomorphism hold? In §1.4 (Fig. 21), I put our two Domains in an isomorphic relation. Is there an isomorphism between models of visual objects and models of mechanisms? The answer to this question cannot but be negative. Let us first grant that there may be multiple models of intentional mechanisms, such as neurobiological models, computational models, etc. Purely neurobiological models may characterize such mechanisms in physiological and biochemical terms. Models of intentional mechanisms in cognitive neuroscience, and specifically computational models, may articulate an account of such mechanisms that specifies the computations performed by these systems. It is plain enough that there cannot be any isomorphism between models of visual objects and models of mechanisms. A model of a mechanism's physical structure, or of the kind of operations it performs, does not need in any sense to be isomorphic with its phenomenon (Craver 2007). If there is a psychoneural isomorphism, it might be found, following Chalmers' and Noë & Thompson's suggestion, between models of the neural processes characterized as representations (Bechtel 2016) and

models of visual objects, or, as we will see (§3.2), between models of visual objects and models of the geometry of the functional architecture of visual areas.

3.2 Jean Petitot’s Neurogeometry of Vision

Jean Petitot coined the term "neurogeometry" to refer to the geometry of the functional architecture of visual areas (2008, 2013); more precisely, neurogeometry concerns «the neural implementation of the algorithms of this geometry, the problem being to understand how the "macro" perceptual structures and their morphodynamics can emerge from the underlying "micro" neural level» (2008, p. 22, my translation). His sophisticated mathematical morphodynamical models of this geometry are not meant merely to describe the neural activity involved, but also to articulate an explanation of the structure of percepts:

«[…] in neural net models, few hypotheses are made concerning the precise geometry of the connectivity defined by the synaptic weights and yet it is this geometry which explains the structure of percepts» (Petitot 2013, p. 75; emphasis added).

What is interesting is that the explanation of the structure of percepts yields the following:

(a) First, an isomorphism (a psychoneural isomorphism) between mathematical models of visual content and models of the underlying neural activity;
(b) Second, a mathematical (topological) explanation or, as I prefer to say following Haugeland, a morphological explanation of perceptual content. As Haugeland states, morphological explanations are explanations «where the distinguishing marks of the style are that an ability is explained through appeal to a specified structure and to specified abilities of whatever is so structured» (Haugeland 1998, p. 12)14.

14 Haugeland distinguished between three «styles» of explanation: derivational-nomological explanation, morphological explanation, and systematic explanation (1998, pp. 11-14). The first is a special and more restricted kind of deductive-nomological explanation. The third is similar to mechanistic explanation: it basically consists in the elucidation of a system composed of different functional components that have a specific «job» (but see Craver 2007, pp. 107ff). As he stresses, explanations in psychology «should be systematic» (p. 26), since «psychology is not primarily interested in quantitative, equational laws, and […] psychological theories will not look much like those in physics» (ibid.; cfr. also Cummins 2000). Notice, however, that although most philosophers of science would probably agree that psychological explanations do not rely on law-like generalizations (but the issue is controversial, cfr. for example Mitchell 2000; Woodward 2001, 2003, pp. 295-307), it does not seem quite correct to say that psychologists are not interested in quantitative or equational laws (provided that we characterize the concept of "law" in a suitable way, cfr. also Ch. 6, §3.2). Dynamical systems theory is a case in point (Ch. 6, §2). Also, Haugeland stresses that there is a distinction between morphological and systematic explanations («[…] no one, to my knowledge, has previously distinguished morphological and systematic explanation», p. 14); Craver (2007, p. 136) follows him on this point. As I will show, Petitot's account allegedly provides precisely a morphological account of consciousness.

My task will largely consist in showing that (a) does not entail (b), thus providing an answer to the explanatory question raised in the first Chapter. The first point also shows, or rather downplays, the relevance of one key question that motivated this work: is there a psychoneural isomorphism? Petitot's model shows that it is possible to articulate a psychoneural isomorphism. More important than this, however—since it is almost always possible to obtain an isomorphism, provided that we select the right level of abstraction and description of the two domains—is the question of its role in our investigation of the mind. The case of Petitot's neurogeometry will provide us with the means to tackle this issue. I will bring to the fore the (limited) role of psychoneural isomorphism as a facet of the «core normative requirement on mechanistic explanation» (Craver 2007, p. 122): that (intentional) mechanisms must fully account for the explanandum phenomenon, i.e. a visual accuracy phenomenon.

I will first elaborate on Petitot's morphodynamical approach to the neurogeometry of the functional architecture, showing how it presupposes a psychoneural isomorphism (§3.2.1), and then (§3.2.2) criticize some of Petitot's conclusions and highlight the limits of psychoneural isomorphism. In the next paragraph (§3.3), I will show how to reconcile morphological models with my mechanistic approach and discuss the role of isomorphism.

3.2.1 Neurogeometry and Psychoneural Isomorphism

Petitot's account is fully inscribed within the larger project of naturalizing Phenomenology (cfr. Ch. 1, §2), namely the attempt to make Phenomenological descriptions continuous with the sciences of the mind. (Notice that I follow my convention of referring to Husserlian Phenomenology with a capital letter, cfr. Ch. 1, §2.1.) Phenomenology, as is well known, is a philosophical method that is supposed to provide (but does not reduce to) rigorous descriptions of perceptual experience (cfr. also my Vernazzani 2016a, pp. 31-33). Petitot's position is quite unique within the project, as he has developed the best-articulated mathematical account of the program originally laid out by Roy et al. (1999, pp. 64-65). According to Roy et al. (1999), the ontological divide between the "mental" and the "physical" can be overcome by means of a mathematization of the two levels. The following steps would be required to naturalize Phenomenology:

1. Phenomenological descriptions.
2. Mathematization.
3. Naturalistic model.

It must be stressed that it is far from clear whether the mathematization—or, in general, the naturalization, if it is argued that naturalization does not entail mathematization—of Phenomenology can be held consistent with the aims and methods of Phenomenology. As Husserl famously emphasized, Phenomenological descriptions are anexact and cannot be mathematically described (2009; 2002, §§72-75). Whether it makes sense to naturalize


Phenomenology is a problem that cannot be addressed here (e.g. Zahavi 2004), and so is the question whether Petitot is right in claiming that Husserl's opposition to mathematization has been made obsolete by recent advancements in mathematics (e.g. Petitot 1993, 1994; cfr. also Roy et al. 1999, pp. 46-49). The crucial point is that, independently of our stance towards Phenomenology, it is possible to articulate a mathematical description, i.e. a model, of Husserl's eidetic descriptions of visual content.

Before we proceed, we must first stop and consider why Petitot’s mathematical descriptions are models in the sense defined above (§2.1). As I have shown, models are (1) interpreted and idealized structures that stand in some (2) representational relation to their targets, and that can be studied (3) to gain indirect insights into the targets they are about. That mathematical descriptions are in some sense idealized and interpreted structures is clear enough, and exactly the same considerations developed above with regard to the Lotka-Volterra model of predator-prey fish populations apply. Furthermore, such mathematical descriptions stand in some representational relation to their targets. As I have assumed earlier, these mathematical descriptions—in agreement with other formal descriptions of visual objects—exemplify the configuration of visual objects (Elgin 2017), and thus may tell us something about their structure or “morphology,” as we will now see. Furthermore, Petitot exploits these mathematical descriptions to get indirect insights about what kind of underlying brain computations may be realizing the configuration of visual objects. As Petitot says:

The link between naturalist explanations, mathematical models, and computer simulations on the one hand, and phenomenological eidetic descriptions on the other, can be set up by viewing the latter as constraints on the former. (1999, p. 330).

For Petitot, the only way in which eidetic descriptions can be meant to provide such constraints is via a mathematization that can be studied to gain insights about the putative morphodynamical models that generate them. In order to show how this is possible, we need to flesh out Petitot’s account in more detail, since it assigns a crucial role to the notion of isomorphism.

More specifically, for Petitot there are two important steps. The first is a “theory-theory” (or model-model) correspondence. It is an «exact correspondence» (1999, p. 343) between a Phenomenological description, which is expressed by means of concepts, and a geometrical eidetics, which is expressed by means of morphodynamical models (2008, p. 395). We get the following schema (Tab. 4):

Conceptual eidetics (Phenomenology)  ↔  Geometrical eidetics (Morphodynamical models)

Tab. 4: Conceptual and geometrical eidetics.


This scheme is meant to enable the integration (naturalization) of Phenomenology with the natural sciences. In other words, it should serve as a way to achieve intertheoretic integration. The passage from a conceptual description to a mathematical one is made necessary by the fact that the vocabulary of neuroscience and that of Phenomenological descriptions are heterogeneous. It is not possible to integrate, or better, to reduce Phenomenological descriptions to descriptions of the “neural” given the differences in vocabulary. In this form, the problem Petitot faces, and that he attempts to solve by means of mathematization, is akin to the problem of intertheoretic reduction (cfr. Vernazzani 2016a). According to this model, integrating different theories is basically a form of deductive-nomological explanation (e.g. Churchland 1986, p. 294) (cfr. §3.2.2).

The second step is a psychoneural isomorphism holding between the mathematical descriptions of perceptual content and the morphodynamical models that describe the neurogeometry of the functional architectures. In his words:

[…] the agreement between the emergent geometrical (morphological) macro-level M [...] and the phenomenal experience E [...] is extremely strong, much stronger than a mere correlation. It is even the strongest possible form of content matching since, in the limit, it is an isomorphism. (2008, p. 367; cfr. Ch. 1, §2.3; emphasis added; translation mine)

It is clear that for Petitot the concept of isomorphism plays a central role. Moreover, it is, as we will see, an explanatory role: it is in virtue of psychoneural isomorphism that we can, allegedly, explain visual consciousness.
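Since much of what follows turns on this notion, it may be useful to recall its standard formulation; the notation below is mine and serves only as a compact reminder of the concept introduced in Ch. 2, §2.2. Given two structures (A, R_A) and (B, R_B), a map f is an isomorphism just in case it is a bijection that preserves the relevant relations in both directions:

\[
f : A \to B, \qquad R_A(a_1, \dots, a_n) \iff R_B(f(a_1), \dots, f(a_n)) \quad \text{for all } a_1, \dots, a_n \in A .
\]

Whether the Phenomenological and the Neural Domain stand in such a relation therefore depends on which structures, i.e. which models, are chosen to represent them.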

As we have seen, an isomorphism only holds between what Köhler called the “structural properties” (Ch. 2, §2.2); in the case of visual objects, this means that we should focus on their configuration (Ch. 7, §2.1). Consistently with the project of a naturalization of Phenomenology, Petitot focuses on Husserl’s insightful discussion of phenomenal saliency (phänomenale Abhebung), in virtue of which it is possible to individuate a phenomenon (cfr. Husserl 1993, pp. 242-245). (Notice that the term “phenomenon” here differs from the sense defined in Ch. 5, where I followed Bogen & Woodward (1989); in the Phenomenological sense, a phenomenon is, roughly speaking, an appearance.) The individuation of a phenomenon is possible by means of a distinction between distinct (gesondert) contents and fused (verschmolzen) contents. The process of fusion (Verschmelzung) creates a whole, whereas the opposite process of distinction (Sonderung) demarcates the different parts. The Sonderung is based on the qualitative discontinuity of the “moments” that compose the contents.15

In short, the configuration of visual objects (or of phenomena in general, in the Phenomenological sense) is based on the qualitative discontinuities between the moments.

15 As Mulligan (1999; cfr. also Mulligan et al. 1984) observes, Husserl’s moments are basically tropes. One specific way in which Husserl’s understanding of tropes differs from some other contemporary accounts is that for Husserl moments are dependent particulars, i.e. they are incapable of independent existence, but always come in clusters. In this sense, Husserl’s notion of objecthood is somewhat similar to Denkel’s (1996) account, according to which properties are analytically prior to objects, but ontologically dependent on these (cfr. also Ch. 7, §1.3.2).

According to Petitot, these key concepts of Husserlian Phenomenology, and the relation between quality and space, must correspond to a category, a type of mathematical structure (Petitot 1999, p. 339; cfr. also 1993, and 2011, pp. 64-65), and in particular to a fibration or fibred space. A fibration is a differentiable manifold E endowed with a canonical projection π: E → M (a differentiable map) over another manifold M. M is called the “base” of the fibration, and E its total space. The inverse images E_x = π⁻¹(x) of the points x ∈ M by π are the “fibres” of the fibration, subspaces of E that are projected to points in M (ibid.; cfr. also Vernazzani 2016a). A fibration must satisfy two axioms:

(1) All the fibres E_x are diffeomorphic with a typical fibre F.
(2) The projection π is locally trivial, i.e. for every x ∈ M there exists a neighborhood U of x such that the inverse image E_U = π⁻¹(U) is diffeomorphic with the direct product U × F endowed with the canonical projection U × F → U, (x, f) ↦ x.
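A minimal example may help to fix ideas; it is mine, not Petitot’s, and serves only to illustrate the definition. For any two manifolds M and F, the direct product E = M × F together with the projection onto the first factor is a fibration:

\[
\pi : M \times F \to M, \qquad \pi(x, q) = x, \qquad E_x = \{x\} \times F \cong F ,
\]

so that every fibre is a copy of F and local triviality holds globally. On the reading developed below, M plays the role of the extension, while the qualitative dimension, say color, varies along the fibres.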

The following figures (Fig. 29ab) illustrate the concept of fibration:

Fig. 29: Fig. (a) (left) represents the structure of a fibration, where M is the base space, E the total space, π the structural projection, and E_x = π⁻¹(x) the fibre over the point x ∈ M. Fig. (b) (right) illustrates the local triviality of a fibration: every point x ∈ M of the base space possesses a neighborhood U such that π⁻¹(U) is isomorphic to the trivial fibration π: U × F → U (from Petitot 1999, p. 340).

This mathematical model would capture the relation between quality and extension in Husserl’s Phenomenology (Husserl 1991, pp. 68-71; Petitot 2004). The base of the fibration is the extension, while the qualities (say, colors) vary along the fibres of the total space. This is the first step towards a mathematical naturalization of Phenomenology. Between Husserl’s conceptual descriptions and Petitot’s mathematical models there holds an exact correspondence. Once we have mathematized the relation between colors and space, and given a mathematical model of the qualitative discontinuities, we have only realized the first two steps of a naturalization of Phenomenology: we have moved from conceptual eidetic descriptions to mathematical models. We must now move from this mathematical model to a psychoneural isomorphism.

More precisely, the next problem is to pin down some physico-mathematical models that may implement the geometric description of their Phenomenological-conceptual counterparts (Petitot 1999, pp. 338-343; 2008, pp. 380-381). Petitot observes that one of the main problems of natural and computer vision is to understand «how signals can be transformed into geometrically well-behaved observables» (1999, p. 346), i.e. the process whereby an unstructured image I(x, y) becomes segmented. An influential mathematical model for segmenting an image into distinct parts is the so-called Mumford-Shah model, developed by David Mumford (1994). Alternative models are also available, more local and based on anisotropic non-linear partial differential equations (Petitot 1999, p. 348; 2011, pp. 78ff). I gloss over the mathematical details; the interested reader can find them in Petitot’s writings (1999, 2008, 2011, 2013).
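For orientation, the core of the Mumford-Shah model can nonetheless be stated compactly; the normalization and symbols below are mine and follow the way the functional is usually presented in the image-segmentation literature, not Petitot’s own notation. Given an observed image g on a domain Ω, one seeks a piecewise-smooth approximation u and an edge set K minimizing

\[
E(u, K) = \int_{\Omega \setminus K} \lVert \nabla u \rVert^{2} \, dx \; + \; \mu \int_{\Omega} (u - g)^{2} \, dx \; + \; \nu \,\mathrm{length}(K),
\]

so that u is smooth away from K, remains close to the data, and the discontinuity set K, i.e. the candidate segmentation, is kept as short as possible.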

The relevant aspect of Petitot’s account is that the same fibration used to model the Phenomenological descriptions of the relation between extension and qualities can be used to model the neurogeometry of the functional architecture of V1. More specifically, Petitot develops his account building on Hubel & Wiesel’s (1979; cfr. also Bechtel 2001, pp. 232-234) discovery of the micromodules called hypercolumns (Petitot 2008, 2013, p. 75). In other words, exactly the same mathematical model used to describe the relation between quality and extension at the Phenomenological level—what we might call the skeleton of visual objects—is realized by the neurogeometry of V1. Petitot takes this to be an explanation of visual consciousness: «[w]ith such a morphodynamical model we can easily explain the topological description physically» (2011, p. 69; emphasis added), where the “topological description” refers to the aspect of the configuration of visual objects considered so far. The model, apparently, also extends its explanatory virtues to the problem of subjective contours (the Kanizsa triangle, for example) and to phenomena like neon color spreading (cfr. Ch. 1, §1.1): the subjective impression of a triangle standing out in the foreground, elicited by three “pacmen” (cfr. Fig. 3), as well as the subjective impression of a color spreading across the four circles of the Neon Color Spreading display (cfr. Fig. 2, left), would thus receive the same morphodynamical treatment (cfr. Petitot 2003, 2013, pp. 81ff).
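To see why Petitot can speak of “the same fibration” recurring at the neural level, it may help to display the best-known instance of his neurogeometry, the model of the orientation hypercolumns of V1; what follows is a schematic rendering of that idea, not a quotation of Petitot’s own formulas. The functional architecture is idealized as the fibre bundle

\[
\pi : \mathbb{R}^{2} \times \mathbb{P}^{1} \to \mathbb{R}^{2}, \qquad (x, y, \theta) \mapsto (x, y),
\]

where the base ℝ² stands for the retinotopic positions and the fibre ℙ¹ for the space of orientations θ: the hypercolumn over a retinal point is the fibre over that point, so that a secondary variable (here orientation; in the quality-extension case, color) is fibred over extension just as in the Phenomenological model.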

With this move, psychoneural isomorphism is vindicated: there is indeed a psychoneural isomorphism between the Phenomenological and the Neural Domains. Importantly, Petitot’s model only covers one aspect of the Configuration of visual objects, i.e. the extension-quality relation, and leaves out many other aspects of the geometry of visual objects. I will return to this feature of Petitot’s model later.


3.2.2 The Limits of Neurogeometry

Petitot’s model is extremely interesting, but it suffers from some serious shortcomings. I dwell on the following issues: (§3.2.2.1) the problem of intertheoretic integration; (§3.2.2.2) the problem of the explanatory structure of Petitot’s morphodynamical account. I will then turn to the problem of explanation, trying to reconcile Petitot’s morphological explanation of content with my mechanistic account in §3.3.

3.2.2.1 Intertheoretic Integration

The first problem of Petitot’s account is that it aims to achieve an integration of Phenomenology within cognitive science by means of a form of intertheoretic reduction in which a central role is played by the notion of isomorphism. Intertheoretic reduction is a relation between theories (or models), rather than between phenomena (in which case we may properly speak of ontological reduction). For example, to claim that light is reduced to electromagnetic radiation, on this account, means to show: (a) the reduction of optics to electromagnetic theory, and (b) that such a reduction would make possible the identification of light with electromagnetic radiation (ibid., p. 279). In other words, the reduction of a phenomenon FR to a phenomenon FB follows from a reduction of a theory TR to a more «primitive» (or «primary», to adopt Ernest Nagel’s jargon; cfr. Nagel 1961, p. 338) theory TB. In the case of our domains: the reduction of mental states to neural states or processes can be achieved via the reduction of a theory about mental states to a theory about the corresponding neural states or processes.

Now, Nagel distinguished between two kinds of reduction: homogeneous and heterogeneous (1961, pp. 338-345). In the former case, the reduced theory and the primary theory share the same vocabulary. In this case, it would be rather easy to show the deductive relation holding between the two theories. Things are more complicated in the case of heterogeneous theories, i.e. when the two theories do not share the same vocabulary. The textbook example is the reduction of thermodynamics to statistical mechanics (ibid., pp. 339-345). Within the former we have terms like “temperature” which are absent in the primary theory. It is for this reason that Nagel introduced a «condition of connectability» in order to establish a relation between the laws of the secondary (reduced) theory and those of the primary theory (ibid., pp. 353-354). The case of the mental-physical divide is an excellent example of heterogeneous reduction. Given a phenomenological vocabulary—i.e. a vocabulary consisting of concepts and terms related to our visual experience—we need to reduce it to a theory about neural processes that does not make use of such concepts and terms. According to Churchland, who famously espouses an eliminativist standpoint about folk psychology,16

16 According to some philosophers, reduction entails the elimination of the reduced theory (cfr. for example Oppenheim & Putnam 1958). However, not every researcher agrees. Some other philosophers hold that reduced theories may still play some important role (e.g. Nagel 1961).

in order to achieve such a reduction we can replace an inadequate phenomenological theory TF with its isomorphic image TF*, which can be reduced to a theory about neural processes TN. We get the following schema:

TN → TF* isomorphic with TF

In other words, the reduction of a theory about mental states TF to TN is made possible via its isomorphic image. This model is perfectly consistent with Petitot’s mathematization of Phenomenology. As we have seen, the integration of Phenomenology within the natural sciences is made possible only via the creation of mathematical models of the Phenomenological (conceptual) descriptions. These mathematical models correspond to the conceptual descriptions articulated by Husserlian Phenomenology, or perhaps by any form of MoP (cfr. §2); as Petitot says, such a model «corresponds exactly to Husserl’s pure description» (Petitot 1999, p. 343; cfr. §3.2.1). Hence, Petitot’s approach to the naturalization of Phenomenology is captured by the following schema:

TN → TM* corresponds exactly to TF

A model of the neural level TN is isomorphic with a model of the phenomenological descriptions TM, which in turn corresponds exactly to their conceptual formulation. Notice that the arrow between TN and TM highlights the derivational character of the latter from the former. As the reader will have noticed, this is almost exactly the same schema used to describe heterogeneous intertheoretic reduction. According to Petitot, the only way to naturalize Husserlian Phenomenology, and phenomenological descriptions more generally, is via mathematization; i.e. the only way to integrate Phenomenology, broadly construed, into the sciences of the mind is via a reductive mathematical description.

There are several problems with this proposal. First of all, let me specify that Petitot thinks that, in this way, we might achieve an explanation, a mathematical explanation, of consciousness. I will return to this interesting aspect below, and then later in the next Sub-section (§3.3). For now, I will concentrate on the heuristic aspects of the naturalization project. The project of naturalizing Phenomenology—and more generally, the problem of integrating first-person reports in the study of conscious experience (cfr. Ch. 1, §2; Horst 2005)—is mainly driven by a methodological concern, the apparent inescapability of the first-person perspective in the study of the NCCs. Granted that we can exploit, in some sense, first-person reports to guide the scientific study of consciousness, the next question is: How? Petitot’s answer appeals to mathematization: the identity of forms provides just such a neutral ground where the first-person and the third-person perspectives can be reconciled. I have shown how this conforms to a classical reductionist program, but I have not shown why this is problematic. I will zoom in on what seems to me the worst problem: the heuristic role of isomorphism. This problem can be divided into the following issues: (a) the post hoc problem; (b) the question of NCCs. I analyze them in this order.


In order to show that there is a psychoneural isomorphism between the Phenomenological and the Neural Domain, we need to articulate some mathematical or formal model of them. As I have shown regarding the Phenomenological Domain, multiple models may give us epistemic access to different facets of the target, depending on our research interest. Of course, different models can be provided of the underlying Neural Domain as well. The fact that, as I have shown, content-NCC research is inherently mechanistic does not narrow down the spectrum of possible models that can be used to characterize them. Indeed, in Ch. 6, I have argued that a dynamical model of sensorimotor activity can be easily reconciled with a mechanistic perspective. But there is more. The fact that multiple models may be compatible with the mechanistic approach outlined in Ch. 5 also does not mean that we know how these mechanisms are effectively structured. As I have argued, my approach can be characterized as an abstract schema that can be applied to a vast range of possible mechanism sketches that describe some potential ways the mechanisms may turn out to be, once scientists have opened up enough black boxes (Ch. 5, §5.1). Since we do not know what the IMs, SMs, and pNCCs are, the following problem can now be appreciated: an isomorphism can only be shown to hold between content and a brain region that we already know to be somehow related to consciousness. This is the post hoc problem. Either scientists provide a mathematical model of multiple brain structures and then check whether they are isomorphic with some mathematical model of perceptual content—a task that may just turn out to be too time-consuming and complicated—or they simply already know that a particular structure is related to consciousness. But if they already know that the structure is somehow related to the relevant phenomenon, then psychoneural isomorphism does not seem to play a heuristic role in their discovery. The mathematization of Phenomenology thus does not serve one of the main reasons for relying on first-person methods in the study of conscious experience (Ch. 1, §2): the heuristic value of phenomenological descriptions in the search for content-NCCs.

Let us now turn to (b). Even assuming a psychoneural isomorphism between a model of the neurogeometry of V1 and a model of one aspect of the configuration of visual objects, it does not follow that V1 is the neural correlate of conscious content. Different problems must be disentangled here. One issue, to which I will return below (§3.2.2.2), touches on the problem of the nature of the explanandum phenomenon in Petitot’s account. Another issue is that, as I have shown earlier, there is no such thing as a “content-NCC.” In reality, content-NCCs are a complex of distinct mechanisms. There is thus no reason to assume that the neurogeometry of V1 is the content-NCC of perceptual content. What it may be, instead, is an aspect—or a single complex of mechanisms—of the broader set of intentional mechanisms. The point is that, more generally, understanding the matching content doctrine in terms of a necessary requirement for “content-NCCs” seems to be dispensable. The neurogeometry of V1 can be isomorphic with perceptual content, and still content may be “made conscious” by a different brain process or mechanism, as shown by my account in Ch. 5; or, to adopt Vosgerau et al.’s (2008) terminology, content and consciousness are orthogonal. As I will show shortly (§3.3), there is an alternative way of understanding both psychoneural isomorphism and some version of the matching content doctrine in light of my mechanistic account.

3.2.2.2 The Explanatory Structure of Petitot’s Model

Let us now turn to the problem of the explanatory structure of Petitot’s account. There are mainly two problems. The first problem (a) is that of the characterization of the explanandum. The second problem (b) is that of the explanatory structure of the neurogeometric model.

First (a), as we have seen, Petitot considers his model to explain not simply perceptual content but phenomenality itself (cfr. Ch. 3, §2). Indeed, he argues (2008), against Noë & Thompson (2004), that the presence of a psychoneural isomorphism between experienced content and the neurogeometry of V1 closes the explanatory gap (Levine 1983) (cfr. Petitot 2008, p. 31). In Petitot’s account, the explanatory gap results from the impossibility of deriving or deducing the forms of perceptual content from the observation of the micro-level of RF activity. This would be shown by the fact that—following Noë and Thompson’s critique of the Matching Content Doctrine (§3.1)—it is impossible to find an isomorphism between perceptual content and the “neural.” This means three things: first, that in Petitot’s account psychoneural isomorphism plays a central explanatory role; second, that to explain the target phenomenon means to show how we may derive or deduce perceptual content from facts about the Neural Domain; third, that the explanandum phenomenon, according to Petitot, is not simply perceptual content—or an aspect thereof—but visual consciousness more generally. However, there are many reasons for being skeptical of this characterization of the explanandum. Indeed, this seems to be what Craver calls a mischaracterization of the explanandum by lumping together different phenomena (Craver 2007, p. 123). Craver & Darden elucidate this mistake as follows: «[…] in a lumping error, one might assume that several distinct phenomena are actually one, leading to seek out a single underlying mechanism when one should in fact be looking for several more or less distinct mechanisms» (2013, p. 61). The isomorphism at stake in Petitot’s account holds between a model of an aspect of perceptual content and a model of the neurogeometry of V1. There is no reason to assume that such an isomorphism explains (in the first place) why a particular content is conscious. Again, I refer to Ch. 5 for an argument that shows how content and phenomenality can be independently explained by appeal to different mechanisms.

I already hinted at the second problem (b) above, when I observed that in Petitot’s account psychoneural isomorphism plays a central explanatory role, and that such an explanation seems to centrally involve a derivation of perceptual content from basic facts about the Neural Domain. The explanatory structure of Petitot’s account is far from clear. On the one hand, it seems to retain aspects of what Haugeland called a «derivational-nomological» explanation, a «special case form of deductive-nomological explanation—where the distinction of the special case is that the presupposed regularities are expressed as equational relationships among quantitative variables, and the deduction is mathematical derivation of other such equations»

(1998, p. 11). On the other hand, Petitot also characterizes his approach as a kind of mathematical explanation or, to use Haugeland’s terminology again, as a «morphological explanation», whose distinguishing mark is that «an ability is explained through appeal to a specific structure and to specified abilities of whatever is so structured» (ibid., p. 12). This is particularly clear in his characterization of the fibration structure of perceptual content as deriving mathematically from the structure of the underlying neurogeometry of V1 (Petitot 2013, 2008). But whether the functional architecture of V1 does indeed implement the morphological model that Petitot uses to describe perceptual content is a matter for mathematicians and neuroscientists to decide. From our standpoint, what is interesting is that even if this is the case, it does not tell us how the relevant mechanisms work. Since, as I have shown (Ch. 5), the explanatory structure of content-NCC research is mechanistic, the crucial question is how some intentional mechanisms realize the relevant computations. This in turn brings us to the question of how to reconcile, if possible, mathematical explanations with mechanistic explanations in cognitive science. I will examine the question in the next Sub-section; this will finally clarify the role of psychoneural isomorphism in the explanation of visual content.

3.3 Connecting Morphological Explanations with Mechanisms

I have already toned down the claims of success of Petitot’s account: at best, it offers an explanation of content, not of consciousness (cfr. Ch. 5, §4). Whether the explanatory gap can be closed thanks to a reduction of consciousness to “forms”—as Petitot, following Thom (1972), suggests—cannot be decided here. The problem I turn to now is that of reconciling Petitot’s alleged “mathematical” explanation of content, rather than consciousness, with my mechanistic approach. Two caveats are needed before I start. First, although I will claim that the morphological and the derivational explanatory “styles” are not per se explanatory, at least in this case, I do not deny that, in general, there are (or may be) mathematical or derivational explanations.17 To deny this, one would have to examine the explanatory structure of all the sciences, and then argue that, qua explanations, they are all mechanistic. This of course exceeds by far the scope of the present work. Second, by examining this particular case study, we can show the useful role of mathematization in the characterization of phenomena (e.g. Bechtel 2013; Brigandt 2013; Shagrir & Bechtel 2017), the place a psychoneural isomorphism can occupy in contemporary research, and finally the relationship between morphological models and mechanistic explanation that has recently drawn some attention (e.g. Brigandt 2013; Craver 2016; Huneman 2018; Lange 2013; Levy & Bechtel 2013; Rathkopf 2015).18

17 It is very important to get clear about the notion of mathematical explanation. The question is not whether there are explanations in mathematics, or whether we can make use of mathematics in delivering scientific explanations. Obviously, much of contemporary science employs mathematics as an indispensable tool (cfr. Dorato 2012 on the role of mathematics in biology). The point is rather whether some physical phenomena can be explained mathematically. As Lange emphasizes: «such explanations explain not by describing the world’s causal structure, but roughly by revealing that the explanandum is more necessary than ordinary causal laws are» (2013, p. 491).

The first, interesting aspect of Petitot’s approach is that his mathematical conversion of Husserl’s descriptions amounts to a mathematical specification of the task that ought to be solved by the visual system. In this sense, this modeling approach bears a striking resemblance to Marr’s (2010) computational level. In an earlier publication, Marr (1977) described the work done at the computational level as follows: «the underlying nature of a particular computation is characterized and its basis in the physical world is understood. One can think of this part as an abstract formulation of what is being computed and why» (p. 37). The exact interpretation of Marr’s computational level has been the object of some dispute among philosophers (for a brief overview of the debate, cfr. Shagrir & Bechtel 2017). According to some interpreters, the computational level consists solely in the specification of the task solved by the information-processing system (ibid., pp. 193-194). However, it is clear that Petitot’s model is much more ambitious. In his intention, the use of Phenomenological descriptions is preliminary to their mathematization, without which any integration of Phenomenology—and of first-person reports more generally (cfr. Roy et al. 1999)—is impossible. Furthermore, merely specifying the task to be solved does not seem a sufficient reason to require such an impressive mathematical apparatus. The most plausible interpretation of Petitot’s approach is perhaps along the lines of Egan’s understanding of the computational level (1991, 1995). According to Egan, Marr’s computational level consists precisely in the mathematical specification of the function(s) computed (1995, p. 185).19 In this sense, Petitot’s approach can be understood as the attempt to provide a mathematically more specific definition of the task to be solved by the underlying implemented algorithms (Petitot 2008, p. 22).

18 These studies focus on diverse examples. Huneman (2018) discusses what he calls «topological explanation», meaning «an explanation in which a feature, a trait, a property or an outcome X of a system S is explained by the fact that it possesses specific topological properties Ti» (p. 117). Examining Lange (2013) and Rathkopf (2015), Craver (2016) focuses on network models, i.e. models produced following network analysis, a field of graph theory dedicated to the study of the organization of pairwise relations (cfr. also Levy & Bechtel 2013). Brigandt (2013) focuses instead on the integration of mathematical explanations into mechanisms. My use of the term “morphological model” covers all these examples. I call these models “morphological” since they all appeal to some kind of mathematical structure. I prefer to call them “models” instead of “explanations” because, as stated in the text, I want to remain neutral about their explanatory status.

19 Piccinini & Craver (2011) have advanced another interpretation of the computational level as providing, together with Marr’s algorithmic level, a mechanistic sketch that constrains the spectrum of possible mechanisms (the implementation level). Although it is true that, for Marr at least, the algorithmic and computational levels constrain the possible implemented structures, it seems to me an unnecessary interpretative stretch to read Marr in mechanistic terms.

This brings us to an interesting aspect of the problem of psychoneural isomorphism. Interpreted in this sense, the isomorphism between models can be understood not so much as an explanation—since it does not specify how the intentional mechanisms perform the relevant computations—but as a mathematically more specific description of the target phenomenon. The mathematical models of the Phenomenological eidetics are used to constrain the spectrum of possible computations that should be performed by the cognitive system. In turn, the identification of a relevant neural structure, like V1, whose neurogeometry implements exactly the same model used to articulate the conceptual phenomenological descriptions—once we have preliminarily accepted that the target neural area is somehow involved, perhaps as constitutively relevant, for the explanandum—offers a mathematically more precise characterization of the target, based not on purely phenomenological descriptions, but on descriptions of the relevant neural functional architecture. Notice that such a model-model isomorphism does not provide a thorough description of the structure of visual objects but, as I have said, merely of a fragment of their configuration. Petitot does not provide a full mathematical model of a single visual object. This is motivated by the fact that, first, we still do not fully understand how the brain generates our conscious experience of visual objects (e.g. Malach et al. 2002); and second, it would be too complicated to fully account for the structure of visual objects in mathematical terms. In this sense, Petitot’s model is merely homomorphic to the configuration of visual objects, which constitutes our Phenomenological Domain. A complete psychoneural isomorphism is thus a mere normative ideal towards which we may tend in order to characterize mathematically the constitutive mechanistic phenomena of intentional mechanisms.
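Since the weaker notion of homomorphism is doing real work in this paragraph, the contrast with isomorphism is worth making explicit; again, the notation is mine. A homomorphism only requires that structure be preserved in one direction, and drops the requirement of bijectivity:

\[
f : A \to B, \qquad R_A(a_1, \dots, a_n) \implies R_B(f(a_1), \dots, f(a_n)),
\]

so that a homomorphic model may capture only a fragment of its target's configuration (precisely the situation of Petitot's model with respect to visual objects), whereas an isomorphism would demand a complete, two-way structural match.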

Finally, it is by now clear that Petitot’s account does not specify any mechanism, nor does it make reference to any specific parts and structures performing some particular kind of function. Nonetheless, understood in the sense specified above, this form of psychoneural isomorphism can be helpful in characterizing the constitutive mechanistic phenomena of intentional mechanisms, and hence it may provide some testable hypotheses about the kind of operations that mechanisms may be performing (Bechtel 2008a). In other words, once we know that a given brain structure’s neurogeometry can be modeled in such a way as to tend toward isomorphism with a model of perceptual content (perhaps reaching only some level of homomorphic fidelity), we have a fairly robust specification of the task to be solved. Returning to the considerations developed in Ch. 5 (§5.1), we can reason about the kind of operations performed by intentional mechanisms by means of backward chaining. To put it simply, if the two models converge up to a degree of high-fidelity homomorphism, we thus have sufficient evidence for the robustness of the explanandum phenomenon (Wimsatt 2007; cfr. also Ch. 1, §1.3), gaining multiple lines of evidence for it from both the Phenomenological Domain and the Neural Domain. Given the mathematically precise definitions, we can venture conjectures about what kind of operations of an intentional mechanism (or set thereof) may be constituting the explanandum phenomenon. In this way, we can move forward in the study of intentional mechanisms by moving from a very vague model described by means of filler terms and black boxes to a how-possibly model. Such putative operations may then be assigned to specific component parts of the mechanism in the localization phase.

Understood in this sense, psychoneural isomorphism takes a completely different shape from the one envisaged by the early Gestalt psychologists (cfr. Ch. 2, §2). It is a tool for characterizing the explanandum and segmenting it from the environment, rather than for explaining it. Notice, however, that in this sense psychoneural isomorphism does not have a particularly helpful role in the discovery of mechanisms—one cannot individuate a particular intentional mechanism by simply relying on psychoneural isomorphism—but it does play an important role in the determination of the robustness of the configuration of visual objects. In this sense, psychoneural isomorphism leaves out a great deal about visual objects, such as aspects that elude a purely structural or configurational characterization. In order to cover these aspects as well, the form of model pluralism sketched out earlier (§2) may be particularly helpful.

Conclusion

In this Chapter, I have argued for several different theses. Firstly, that psychoneural isomorphism should be understood as a relation between mathematical models, rather than between phenomena and mechanisms. Secondly, that there are multiple models of visual objects, but that, for the purposes of our present investigation, formal models that purport to describe the configuration of visual objects are of particular interest. Finally, I have shown, discussing Petitot’s neurogeometry, that a psychoneural isomorphism does not play the role of discovering intentional mechanisms, but that of characterizing the explanandum phenomenon by providing models of it from two different perspectives. This characterization is useful not so much in the discovery of mechanisms as in their characterization, by making assumptions about the kind of operations that intentional mechanisms may be performing. Understood in this sense, psychoneural isomorphism may play a role in research on intentional mechanisms as a determination of the robustness of the configuration of visual objects.


CONCLUSION

In this work, I have examined the concept of psychoneural isomorphism, focusing on visual objects and their underlying neural correlates. The work is articulated in four Parts. In the first Part, I have introduced the main problem of this work and specified the research strategy for the remaining Chapters. In the second Part, I have characterized the Phenomenological Domain, specifying that visual objects are property bundles. In the third Part, I have turned to the Neural Domain, specifying that the so-called “content-NCCs” for visual experience are, in reality, a complex set of different mechanisms. Moreover, I have shown that at least some dynamic system theories, to which many heterodox approaches to the study of visual perception turn, are amenable to a mechanistic analysis, and are thus compatible with my approach. In the fourth Part, I have specified the nature of the Configuration of visual objects, and then examined the nature of psychoneural isomorphism. I have shown that psychoneural isomorphism is a relation between models of visual objects—i.e. of their configuration—and models of the underlying neurogeometry. The result of this examination is that psychoneural isomorphism does not play an interesting role in the discovery of intentional mechanisms, i.e. mechanisms responsible for fixing the accuracy conditions of states of seeing, but it may play an important role in the identification and mathematical characterization of the explananda. In this sense, psychoneural isomorphism provides some clues about the robustness of the explanandum.

At the conclusion of this work, it is worth mentioning some open issues that deserve further attention in future works. First, we need a more thoroughly articulated mechanistic account of the search for “content-NCCs,” showing in particular the role of manipulationist approaches. Second, the mechanistic approach delineated here can bring further advantages not only to the problem of the explanation of consciousness, but also to the naturalization of intentionality, as shown in Ch. 5. Third, more fruitful understandings of the explanatory integration of phenomenological methods within the sciences of the mind should be pursued not within a reductionist outlook, but rather within an account that more thoroughly takes into account the mutual constraints of different theories (Danks 2014) and further elaborates on the resources made available by the literature on mechanisms (Miłkowski 2016ab).

BIBLIOGRAPHY

AGUIRRE, G.K., ZARAHN, E. & D’ESPOSITO, M. (1998). “An Area Within Human Ventral Cortex Sensitive to ‘Building’ Stimuli: Evidence and Implications.” Neuron 21: 373-383.

AIMOLA DAVIES, ANNE (2004). “Disorders of Spatial Orientation and Awareness.” Cognitive and Behavioral Rehabilitation: From Neurobiology to Clinical Practice (pp. 175-223). New York: The Guilford Press.

ALLEN, COLIN & BEKOFF, MARC (2007). “Animal Consciousness.” In Velmans & Schneider (2007): 58-71.

ALLEN, SOPHIE R. (2016). A Critical Introduction to Properties. London-New York: Bloomsbury.

ANDERSEN, HOLLY (2011). “Mechanisms, Laws, and Regularities.” Philosophy of Science, 78(2): 325-331.

ANSCOMBE, G.E.M. (1965). “The Intentionality of Sensation: A Grammatical Feature.” In Vision and Mind (pp. 55-75), edited by Alva Noë and Evan Thompson. Cambridge, MA: MIT Press.

ARISTOTLE (1993). Metaphysics: Books 1-9. Cambridge, MA: Harvard University Press.

ARISTOTLE (1963). Categories and De Interpretatione. Oxford: Clarendon Press.

ARMSTRONG, DAVID M. (1968). A Materialist Theory of the Mind. London: Routledge.

ARMSTRONG, DAVID M. (1978). Universals and Scientific Realism, vol. 1. Cambridge: Cambridge University Press.

ARMSTRONG, DAVID M. (1989). Universals: An Opinionated Introduction. Boulder-London: Westview Press.

ARMSTRONG, DAVID M. (1997). A World of States of Affairs. Cambridge: Cambridge University Press.

ARMSTRONG, DAVID M. (2010). Sketch for a Systematic Metaphysics. Oxford: Oxford University Press.

ARNHEIM, RUDOLF (1949). “The Gestalt Theory of Expression.” Psychological Review 56: 156-71. Rep. in Id (Ed.) Toward a Psychology of Art (pp. 51-73). London: University of California Press, 1966.

ARP, ROBERT, SMITH, BARRY, & SPEAR, ANDREW D. (2015). Building Ontologies with Basic Formal Ontology. Cambridge, MA: MIT Press.

ARSTILA, VALTTERI (2016). “Theories of Apparent Motion.” Phenomenology and the Cognitive Sciences. DOI: 10.1007/s11097-015-9418-y.

ARU, JAAN & BACHMANN, TALIS (2015). “Still wanted—the Neural Mechanisms of Consciousness!” Frontiers in Psychology 6. http://dx.doi.org/10.3389/fpsyg.2015.00005.

ARU, JAAN, BACHMANN, TALIS, SINGER, WOLF & MELLONI, LUCIA (2012). “Distilling the Neural Correlates of Consciousness.” Neuroscience & Biobehavioral Reviews 36: 737-746.

ARU, JAAN, BACHMANN, TALIS, SINGER, WOLF & MELLONI, LUCIA (2015). “On Why the Unconscious Prerequisites and Consequences of Consciousness Might Derail Us From Unveiling the Neural Correlates of Consciousness.” In Miller (2015a): 205-225.

ASCH, SOLOMON (1969). “Gestalt Theory.” In D.L. Sills (Ed.) International Encyclopedia of the Social Sciences, vol. 6 (pp. 158-175). New York: Macmillan & The Free Press.

AUSTIN, JOHN (1962). Sense and Sensibilia. New York: Oxford University Press.

AVANT, LLOYD L. (1965). “Vision in the Ganzfeld” Psychological Bulletin 64(4), 246-258.

AYERS, MICHAEL (2004). “Sense Experience, Concepts and Content: Objections to Davidson and McDowell.” In R. Schumacher (Ed.) Perception and Reality: From Descartes to the Present (pp. 239-262). Paderborn: Mentis Verlag.

BAARS, BERNARD (1995). “Surprisingly Small Subcortical Structures Are Needed for the State of Waking Consciousness, While Cortical Projection Areas Seem to Provide Perceptual Contents of Consciousness.” Consciousness and Cognition 4: 159-162.

BAARS, BERNARD (2005). “Global Workspace Theory of Consciousness: Toward a Cognitive Neuroscience of Human Experience.” Progress in Brain Research 150: 45-53.

BACHMANN, TALIS (2009). “Finding ERP-Signatures of Target Awareness: Puzzle Persists Because Experimental Co-Variation of the Objective and Subjective Variables” Consciousness and Cognition 18: 804-808.

BACHMANN, TALIS & HUDETZ, ANTHONY G. (2014). “It is Time to Combine the Two Main Traditions in the Search on the Neural Correlates of Consciousness: C=LxD.” Frontiers in Psychology 5.

BAHRAMI, B. (2003). “Object property encoding and change blindness in multiple object tracking” Visual Cognition 10(8): 949-963.

BALLARD, DANA (1991). “Animate Vision.” Artificial Intelligence, 48(1): 57-86.

BARENHOLTZ, ELAN & TARR, MICHAEL (2007). “Reconsidering the Role of Structure in Vision.” The Psychology of Learning and Motivation 47: 157-180. doi: 10.1016/S0079-7421(06)47005-5.

BARTELS, ANDREAS (2005). Strukturale Repräsentation. Paderborn: Mentis Verlag.

BARTELS, ANDREAS (2006). “Defending the Structural Concept of Representation.” Theoria 55: 7-19.

BATTERMAN, ROBERT (2002). “Asymptotics and the Role of Minimal Models” British Journal for the Philosophy of Science 53: 21-38.

BAYNE, TIM (2004). “Closing the Gap? Some Questions for Neurophenomenology.” Phenomenology and the Cognitive Sciences 3(4): 349-364.

BAYNE, TIM (2007). “Conscious States and Conscious Creatures: Explanation in the Scientific Study of Consciousness.” Philosophical Perspectives 21: 1-22.

BAYNE, TIM (2010). The Unity of Consciousness. Oxford: Oxford University Press.

BAYNE, TIM (2011). “Perception and the Reach of Phenomenal Content.” In Hawley & Macpherson (2011): 16-35.

BAYNE, TIM & CHALMERS, DAVID (2003). “What is the Unity of Consciousness?” in A. Cleeremans (Ed.) The Unity of Consciousness. doi: 10.1093/acprof:oso/9780198508571.003.0002.

BAYNE, TIM & HOHWY, JAKOB (2013). “Consciousness: Theoretical Approaches.” In A.E. Cavanna, A.Nani, H. Blumenfield, & S. Laureys (Eds.) Neuroimaging of Consciousness (pp. 23-35). Berlin: Springer Verlag.

BAYNE, TIM, CLEEREMANS, AXEL, & WILKEN, PATRICK (Eds.) (2009). The Oxford Companion to Consciousness. New York: Oxford University Press.

BECHTEL, WILLIAM (1998). “Representations and Cognitive Explanations: Assessing the Dynamicist’s Challenge in Cognitive Science.” Cognitive Science 22(3): 295-318.

BECHTEL, WILLIAM (2001a). “Representations: From Neural Systems to Cognitive Systems.” In Bechtel, Mandik, Mundale, Stufflebeam (2001), 332-348.

BECHTEL, WILLIAM (2001b). “Decomposing and Localizing Vision: An Exemplar for Cognitive Neuroscience.” In Bechtel, Mandik, Mundale, Stufflebeam (2001), 225-249.

BECHTEL, WILLIAM (2002). “Decomposing the Mind-Brain: A Long Term Pursuit.” Brain and Mind 3, 229-242.

BECHTEL, WILLIAM (2008a). Mental Mechanisms. New York: Routledge.

BECHTEL, WILLIAM (2008b). “Mechanisms in Cognitive Psychology: What Are the Operations?” Philosophy of Science 75, 983-994.

BECHTEL, WILLIAM (2009). “Looking Down, Around, and Up: Mechanistic Explanation in Psychology.” Philosophical Psychology 22, 543-564.

BECHTEL, WILLIAM (2012). “Understanding Endogenously Active Mechanisms: A Scientific and Philosophical Challenge.” European Journal for Philosophy of Science, 2(2): 233-248.

BECHTEL, WILLIAM (2013). “Understanding Biological Mechanisms: Using Illustrations from Circadian Rhythm Research.” In K. Kampourakis (Ed.) The Philosophy of Biology (pp. 487-510). Dordrecht: Springer.

BECHTEL, WILLIAM (2016). “Investigating Neural Representations: The Tale of the Place Cells.” Synthese 193(5): 1287-1321.

BECHTEL, WILLIAM & ABRAHAMSEN, ADELE (2002). Connectionism and the Mind. Malden, MA: Blackwell.

BECHTEL, WILLIAM & ADELE ABRAHAMSEN (2005). “Explanation: A Mechanistic Alternative.” Studies in History and Philosophy of Biological and Biomedical Sciences 36: 421-441.

BECHTEL, WILLIAM, MANDIK, PETE, MUNDALE, JENNIFER & STUFFLEBEAM, ROBERT (Eds.) (2001). Philosophy and the Neurosciences: A Reader. Malden: Blackwell Publishing.

BECHTEL, WILLIAM & ROBERT RICHARDSON (2010a). Discovering Complexity. Cambridge, MA: MIT Press.

BECHTEL, WILLIAM & ROBERT RICHARDSON (2010b). “Neuroimaging as a Tool for Functionally Decomposing Cognitive Processes.” In S.J. Hanson & M. Bunzl (Eds.) Foundational Issues in Human Brain Mapping (pp. 241-262). Cambridge, MA: MIT Press.

BECHTEL, WILLIAM & WRIGHT, CORY (2009). “What Is Psychological Explanation?” In P. Calvo & J. Symons (Eds.) Routledge Companion to Philosophy of Psychology (pp. 113-130). London: Routledge.

BEER, RANDALL (2000). “Dynamical Approaches to Cognitive Science.” Trends in Cognitive Science 4(3): 91-99.

BENNETT, JONATHAN (2002). “What Events Are.” In R. Gale (Ed.) The Blackwell Guide to Metaphysics (pp. 43-65). New York: Wiley-Blackwell.

BENNETT, KAREN (2011). “Construction Area (No Hard Hat Required).” Philosophical Studies 154: 79-104.

BERTAMINI, MARCO (2001). “The Importance of Being Convex: An Advantage of Convexity when Judging Position.” Perception 30: 1295-1310.

BETTI, ARIANNA (2014). “The naming of Facts and the Methodology of Language-Based Metaphysics.” In A. Reboul (Ed.) Mind, Values, and Metaphysics (pp. 35-62). Genève: Springer.

BETTI, ARIANNA (2015). Against Facts. Cambridge, MA: MIT Press.

BICKLE, JOHN (2003). Philosophy and Neuroscience: A Ruthlessly Reductive Account. Dordrecht: Kluwer Academic.

BIEDERMAN, IRVING (1987). “Recognition-by-Components: A Theory of Human Image Understanding.” Psychological Review 94(2): 115-147.

BISHOP, JOHN M. & MARTIN, ANDREW O. (2014). Contemporary Sensorimotor Theory. Heidelberg: Springer.

BLACKBURN, PATRICK, DE RIJKE, MAARTEN, & VENEMA, YDE (2001). Modal Logic. Cambridge: Cambridge University Press.

BLAKE, RANDOLPH & LOGOTHETIS, NIKOS K. (2002). “Visual Competition.” Nature Reviews Neuroscience 3: 1-11.

BLAKE, RANDOLPH, BRASCAMP, JAN & HEEGER, DAVID J. (2014). “Can Binocular Rivalry Reveal Neural Correlates of Consciousness?” Philosophical Transactions of the Royal Society, London B, Biological Sciences 369(1641). Doi: 10.1098/rstb.2013.0211.

BLASER, ERIK, PYLYSHYN, ZENON & HOLCOMBE, ALEX O. (2000). “Tracking an Object through Feature Space.” Nature 408: 196-199.

BLOCK, NED (1995). “On a Confusion About a Function of Consciousness.” Behavioral and Brain Sciences 18: 227-287.

BLOCK, NED (2005). “Review of Alva Noë, Action in Perception.” The Journal of Philosophy 102(5): 259-272.

BLOCK, NED (2007). “Consciousness, Accessibility and the Mesh Between Psychology and Neuroscience.” Behavioral and Brain Sciences 30: 481-548.

BLOCK, NED (2008). “Consciousness and Cognitive Access.” Proceedings of the Aristotelian Society 108(Part 3): 289-317.

BLOCK, NED (2010). “Attention and Mental Paint.” Philosophical Issues 20: 23-63.

BLOCK, NED (2011). “Perceptual Consciousness Overflows Cognitive Access.” TRENDS in Cognitive Sciences doi:10.1016/j.tics.2011.11.001.

BOGEN, JOSEPH E. (1995). “On the Neurophysiology of Consciousness, Part I: An Overview” Consciousness and Cognition 4: 52-62.

BOGEN, JOSEPH E. (2007). “The Thalamic Intralaminar Nuclei and the Property of Consciousness.” In P.D. Zelazo & E. Thompson (Eds.) The Cambridge Handbook of Consciousness (pp. 775-807). New York: Cambridge University Press.

BOGEN, JAMES (2005). “Regularity and Causality: Generalizations and Causal Explanation.” Studies in the History and Philosophy of Biology and Biomedical Science 36: 397-420.

BOGEN, JAMES & WOODWARD, JIM (1988). “Saving the Phenomena.” The Philosophical Review 97(3): 303-352.

BOLY, MELANIE (2011). “Measuring the Fading Consciousness in the Human Brain.” Current Opinions in Neurology 24: 394-400.

BOLY, MELANIE, SETH, ANIL, WILKE, MELANIE, INGMUNDSON, PAUL, BAARS, BERNARD, LAUREYS, STEVEN, EDELMAN, GERALD B. & TSUCHIYA, NAOTSUGU (2013). “Consciousness in Humans and Non-Human Animals: Recent Advances and Future Directions.” Frontiers in Psychology 4. doi: 10.3389/fpsyg.2013.00625.

BOZZI, PAOLO (1989). Fenomenologia Sperimentale. Bologna: Il Mulino.

BRAITENBERG, VALENTINO (1984). Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press.

BRESSAN, PAOLA, MINGOLLA, ENNIO, SPILLMANN, LOTHAR & WATANABE, TAKEO (1997). “Neon Color Spreading: A Review.” Perception 26: 1353-1366.

BREWER, BILL (2006) “Perception and Content.” European Journal of Philosophy 14: 165-181.

BREWER, BILL (2011). Perception and Its Objects. Oxford: Oxford University Press.

BRIGANDT, INGO (2013). “Systems Biology and the Integration of Mechanistic Explanation and Mathematical Explanation.” Studies in History and Philosophy of Biological and Biomedical Science 4: 477-492.

BRIDGEMAN, BRUCE (1983). “Isomorphism is where you find it.” Open peer commentary (pp. 658-659) to Stephen Grossberg’s “The Quantized Geometry of Visual Space.” Behavioral and Brain Sciences 6: 625-692.

BROWN, CURTIS (2016). “Narrow mental content.” In E.N. Zalta (Ed.): The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/sum2016/entries/content-narrow/

BROWN, RICHARD (2014). “The HOROR Theory of Phenomenal Consciousness.” Philosophical Studies. DOI 10.1007/s11098-014-0388-7

BUHRMANN, THOMAS, DI PAOLO, EZEQUIEL, & BARANDIARAN, XABIER (2013). “A dynamical systems account of sensorimotor contingencies.” Frontiers in Psychology, Doi: 10.3389/fpsyg.2013.00285.

BURGE, TYLER (1991). “Vision and Intentional Content.” In E. LePore and R. van Gulick (Eds.) John Searle and His Critics (pp. 195-213). Oxford: Basil Blackwell.

BURGE, TYLER (1997). “Two Kinds of Consciousness.” Rep. in Burge (2007): 383-391.

BURGE, TYLER (2005). “Disjunctivism and Perceptual Psychology.” Philosophical Topics 33(1): 1-78.

BURGE, TYLER (2006). “Reflections on Two Kinds of Consciousness.” Rep. in Burge (2007): 392-419.

BURGE, TYLER (2007). Foundations of Mind: Philosophical Essays, vol. 2. New York: Oxford University Press.

BURGE, TYLER (2010). Origins of Objectivity. New York: Oxford University Press.

BUXBAUM, L.J., FERRARO, M.K., VERAMONTI, T., FARNE, A., WHYTE, J., LADAVAS, E., FRASSINETTI, F. & COSLETT, H.B. (2004). “Hemispatial Neglect: Subtypes, Neuroanatomy, and Disability.” Neurology 62: 749-756.

BYRNE, ALEX (2001). “Intentionalism Defended.” The Philosophical Review 110(2): 199-240.

BYRNE, ALEX (2011). “Experience and Content.” In Hawley and Macpherson (2011): 60-82.

CAMERON, ROSS P. (2010) “The Grounds of Necessity.” Philosophy Compass 5(4): 348-358.

CAMPBELL, JOHN (2002). Reference and Consciousness. New York: Oxford University Press.

CAMPBELL, JOHN (2007). “An Interventionist Approach to Causation in Psychology.” In A. Gopnik & L. Schulz (Eds.) Causal Learning: Psychology, Philosophy and Computation (pp. 58-66). Oxford: Oxford University Press.

CAMPBELL, JOHN (2009). “Consciousness and Reference.” In B. McLaughlin, A. Beckermann, S. Walter (Eds.): The Oxford Handbook of Philosophy of Mind (pp. 648-662). Oxford: Oxford University Press.

CAMPBELL, KEITH (1981). “The Metaphysics of Abstract Particulars.” Midwest Studies in Philosophy 6(1): 477-488.

CAMPBELL, KEITH (1990). Abstract Particulars. Oxford: Basil Blackwell.

CARMEL, DAVID, WALSH, VINCENT, LAVIE, NILLI & REES, GERAINT (2010). “Right Parietal TMS Shortens Dominance Durations in Binocular Rivalry.” Current Biology 20(18): http://dx.doi.org/10.1016/j.cub.2010.07.036.

CARRUTHERS, PETER (2007). “Higher-Order Theories of Consciousness.” In Velmans & Schneider (2007): 277- 286.

CARTWRIGHT, NANCY (1979). “Causal Laws and Effective Strategies.” Noûs 13 (4): 419-437.

CASATI, ROBERTO (1991). L’immagine: Introduzione ai problemi filosofici della percezione. Firenze: La Nuova Italia.

CASATI, ROBERTO (2015). “Object Perception.” In M. Matthen (Ed.) The Oxford Handbook of Philosophy of Perception (pp. 393-404). Oxford: Oxford University Press.

CASATI, ROBERTO & VARZI, ACHILLE (1999). Parts and Places. Cambridge, MA: MIT Press.

CASTAÑEDA, HECTOR-NERI (1974). “Thinking and the Structure of the World.” Philosophia 4(1): 3-40.

CELLUCCI, CARLO (2014). “Rethinking Philosophy.” Philosophia 42: 271-288.

CHAKRAVARTTY, ANJAN (2010). “Informational versus Functional Theories of Scientific Representation.” Synthese 172: 197-213.

CHALMERS, DAVID J. (1996). The Conscious Mind. New York: Oxford University Press.

CHALMERS, DAVID J. (2000). “What Is a Neural Correlate of Consciousness?” In Metzinger (2000): 17-39.

CHALMERS, DAVID J. (2010a). “The Representational Character of Experience.” In The Character of Consciousness (pp. 339-379), edited by D.J. Chalmers. New York: Oxford University Press.

CHALMERS, DAVID J. (2010b). “Perception and the Fall from Eden.” In D.J. Chalmers (Ed.) The Character of Consciousness (pp. 381-454). New York: Oxford University Press.

CHALMERS, DAVID J. (2010c). “Facing Up to the Problem of Consciousness.” In D.J. Chalmers (Ed.) The Character of Consciousness (pp. 381-454). New York: Oxford University Press.

CHAN, LOUIS K.H. & HAYWARD, WILLIAM G. (2009). “Feature Integration Theory Revisited: Dissociating Feature Detection and Attentional Guidance in Visual Search” Journal of Experimental Psychology 35(1): 119-132.

CHEMERO, ANTHONY (2000). “Anti-Representationalism and the Dynamical Stance.” Philosophy of Science 67(4): 625-647.

CHEMERO, ANTHONY (2009). Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.

CHEMERO, ANTHONY & MICHAEL SILBERSTEIN (2008). “After the Philosophy of Mind.” Philosophy of Science 75: 1-27.

CHISHOLM, RODERICK (1957). Perceiving: A Philosophical Study. Ithaca, New York: Cornell University Press.

CHURCHLAND, PATRICIA SMITH (1986). Neurophilosophy. Cambridge, MA: MIT Press.

CHURCHLAND, PATRICIA SMITH & RAMACHANDRAN, VILAYANUR (1993). “Filling-in: Why Dennett is Wrong.” In B. Dahlbom (Ed.) Dennett and His Critics: Demystifying Mind (pp. 28-52). Cambridge, MA: Blackwell.

CHURCHLAND, PAUL (2012). Plato’s Camera. Cambridge, MA: MIT Press.

CLARK, ANDY (1997). Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press.

CLARK, ANDY (2006). “Vision as Dance? Three Challenges for Sensorimotor Contingency Theory.” Psyche, doi:10.1016/j.concog.2009.03.005.

CLARK, AUSTEN (1996). “Three Varieties of Visual Field.” Philosophical Psychology 9(4): 477-495.

CLARK, AUSTEN (2000). A Theory of Sentience. Oxford: Oxford University Press.

CLIFF, D.T. (1991). “Computational Neuroethology: A Provisional Manifesto.” In J.–A. Meyer & S. W. Wilson (Eds.) From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behaviors (pp. 29-39). Cambridge, MA: MIT Press.

COHEN, JONATHAN (2004). “Objects, Places, and Perception” Philosophical Psychology 17(4): 471-495.

COHEN, MICHAEL A. & DENNETT, DANIEL C. (2011). “Consciousness Cannot be Separated from Function.” Trends in Cognitive Sciences 15(8): 358-364.

COHEN, MICHAEL A., DENNETT, DANIEL C., & KANWISHER, NANCY (2016). “What is the Bandwidth of Perceptual Experience?” Trends in Cognitive Sciences 20(5): 324-335.

COHEN, MICHAEL & GROSSBERG, STEPHEN (1984). “Neural dynamics of brightness perception: Features, boundaries, diffusion, and resonance.” Perception & Psychophysics 36(5): 428-456.

COHN, PAUL MORITZ (1981). Universal Algebra. Dordrecht: Reidel.

CRANE, TIM (2001). Elements of Mind. Oxford: Oxford University Press.

CRANE, TIM (2003). “The Intentional Structure of Consciousness.” In Q. Smith & A. Jokic (Eds.) Consciousness: New Philosophical Perspectives (pp. 33-56). Oxford: Oxford University Press.

CRANE, TIM (2009) “Is Perception a Propositional Attitude?” The Philosophical Quarterly 59(236): 452-469.

CRANE, TIM (2013). The Objects of Thought. New York: Oxford University Press.

CRANE, TIM (2015). “The Mental States of Persons and their Brains.” Royal Institute of Philosophy Supplement 76: 253-270. DOI: 10.1017/S1358246115000053.

CRAVER, CARL F. (2005) “Beyond Reduction: Mechanisms, Multifield Integration, and the Unity of Neuroscience.” Studies in History and Philosophy of Biological and Biomedical Sciences 36: 373-395.

CRAVER, CARL F. (2006). “When Mechanistic Models Explain.” Synthese 153(3): 355-376.

CRAVER, CARL F. (2007). Explaining the Brain. New York: Oxford University Press.

CRAVER, CARL F. (2013) “Functions and Mechanisms: A Perspectivalist View” In P. Huneman (Ed.) Functions: Selection and Mechanism (pp. 133-158). Dordrecht: Springer Verlag.

CRAVER, CARL F. (2016). “The Explanatory Power of Network Models” Philosophy of Science 83: 698-709.

CRAVER, CARL F. & LINDLEY DARDEN (2006). “Discovering Mechanisms in Neurobiology: The Case of Spatial Memory.” Rep. in Darden (2006): 40-64.

CRAVER, CARL F. & LINDLEY DARDEN (2013). In Search of Mechanisms. Chicago: University of Chicago Press.

CRICK, FRANCIS (1994). The Astonishing Hypothesis. New York: Simon & Schuster.

CRICK, FRANCIS (1996). “Visual Perception: Rivalry and Consciousness.” Nature 379: 485-486.

CRICK, FRANCIS & KOCH, CHRISTOF (1990). “Towards a Neurobiological Theory of Consciousness.” Seminars in the Neurosciences 2: 263-275.

CRICK, FRANCIS & KOCH, CHRISTOF (1995). “Are We Aware of Neural Activity in Primary Visual Cortex?” Nature 375: 121-123.

CRICK, FRANCIS & KOCH, CHRISTOF (1998). “Consciousness and Neuroscience.” Cerebral Cortex 8: 97-107.

CULP, SYLVIA (1994). “Defending Robustness: The Bacterial Mesosome as a Test Case.” Proceedings of the Biennial Meeting of the Philosophy of Science Association 1: 46-57.

CUMMINS, ROBERT (1983). The Nature of Psychological Explanation. Cambridge, MA: MIT Press.

CUMMINS, ROBERT (2000). “’How Does It Work?’ vs. ‘What Are the Laws?’ Two Conceptions of Psychological Explanation.” In F. Keil & R. Wilson (Eds.) Explanation and Cognition (pp. 117-145). Cambridge, MA: MIT Press. Rep. in Robert Cummins, The World in the Head (pp. 282-310). Oxford: Oxford University Press, 2013.

DANKS, DAVID (2014). Unifying the Mind: Cognitive Representations as Graphical Models. Cambridge, MA: MIT Press.

DARDEN, LINDLEY (2006). Reasoning in Biological Discovery. Cambridge: Cambridge University Press.

DARDEN, LINDLEY & MAULL, NANCY (1977). “Interfield Theories.” Rep. in Darden (2006): 127-148.

DARDEN, LINDLEY & CAIN, JOSEPH A. (1989). “Selection Type Theories.” Philosophy of Science 56(1): 106-129.

DAVIDSON, DONALD (1970). “Mental Events” In L. Foster & J.W. Swanson (Eds.) Experience and Theory (pp. 79-101). Amherst: University of Massachusetts Press.

DAVIES, MARTIN (1992). “Perceptual Content and Local Supervenience.” Proceedings of the Aristotelian Society 92: 21-45.

DE GRAAF, TOM & SACK, ALEXANDER (2014). “Using Brain Stimulation to Disentangle Neural Correlates of Conscious Vision.” Frontiers in Psychology 5: 1-13.

DE GRAAF, TOM & SACK, ALEXANDER (2015). “On the Various Neural Correlates of Consciousness: Are They Distinguishable?” In Miller (2015b): 177-204.

DE GRAAF, TOM A., SHIEH, PO-JANG, & SACK, ALEXANDER (2011). “The ‘Correlates’ in Neural Correlates of Consciousness.” Neuroscience and Biobehavioral Review 36: 191-197.

DEHAENE, STANISLAS & NACCACHE, LIONEL (2001). “Towards A Cognitive Neuroscience of Consciousness: Basic Evidence and a Workspace Framework.” Cognition 79: 1-37.

DEHAENE, STANISLAS & CHANGEUX, JEAN-PIERRE (2004). “Neural Mechanisms for Access Consciousness” In M. Gazzaniga (Ed.) The Cognitive Neurosciences, III. Cambridge, MA: MIT Press.

DEHAENE, STANISLAS, & CHANGEUX, JEAN-PIERRE (2011). “Experimental and theoretical approaches to conscious processing.” Neuron 70: 200-227.

DENKEL, ARDA (1996). Object and Property. Cambridge: Cambridge University Press.

DENKEL, ARDA (1997). “On The Compresence of Tropes” Philosophy and Phenomenological Research 57(3), 599-606.

DENKEL, ARDA (2000). “The Refutation of Substrata.” Philosophy and Phenomenological Research 61(2), 431-439.

DENNETT, DANIEL C. (1978) “Towards a Cognitive Theory of Consciousness.” In D. Dennett (Ed.) Brainstorms: Philosophical Essays on Mind and Psychology (pp. 149-173). Cambridge, MA: MIT Press.

DENNETT, DANIEL C. (1982). “How to Study Human Consciousness Empirically or Nothing Comes to Mind.” Synthese 53(2): 159-180.

DENNETT, DANIEL C. (1991). Consciousness Explained. Boston: Little, Brown & co.

DENNETT, DANIEL C. (1992). “Filling-in versus Finding out: A Ubiquitous Confusion in Cognitive Science.” In H. Pick, P. Van Den Broek, D. Knill (Eds.) Cognition: Conception, and Methodological Issues (pp. 33-49). Washington DC: American Psychological Association.

DENNETT, DANIEL C. (1994). “Cognitive Science as Reverse Engineering: Several Meanings to ‘Top Down’ and ‘Bottom Up’.” In D. Prawitz & D. Westertahl (Eds.) International Congress of Logic, Methodology, and Philosophy of Science (pp. 679-689). Dordrecht: Kluwer.

DENNETT, DANIEL C. (1998). “Revolution, no! Reform, si!” Behavioral and Brain Sciences 21(5): 636-637.

DENNETT, DANIEL C. (2005). Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. Cambridge, MA: MIT Press.

DENNETT, DANIEL C. & KINSBOURNE, MARCEL (1992). “Time and the Observer.” Behavioral and Brain Sciences 15: 183-247.

DEVITT, MICHAEL (1997). “‘Ostrich Nominalism’ or ‘Mirage Realism’?” In D.H. Mellor & A. Oliver (Eds.) Properties (pp. 93-100). Oxford: Oxford University Press.

DE WINTER, JOERI & WAGEMANS, JOHAN (2006). “Segmentation of Object Outlines Into Parts: A Large-Scale Integrative Study.” Cognition 99: 275-325.

DI LOLLO, VINCENT (2012). “The Feature-Binding Problem is an Ill-Posed Problem.” Trends in Cognitive Sciences 16(6): 317-321.

DODD, JULIAN (1995). “McDowell and Identity Theories of Truth” Analysis 15(1): 160-165.

DORATO, MAURO (2012). “Mathematical Biology and the Existence of Biological Laws.” In D. Dieks, S. Hartmann, T. Uebel & M. Weber (Eds.), Probabilities, Laws and Structure (pp. 109-121). New York: Springer.

DOUGLAS, H.E. (2009). “Reintroducing Prediction to Explanation”. Philosophy of Science, 76(4): 444-463.

DOWNING, P.E., JIANG, Y., SHUMAN, M., KANWISHER, N. (2001). “A Cortical Area Selective for Visual Processing of the Human Body.” Science 293: 2470-2473.

DRETSKE, FRED (1969). Seeing and Knowing. Chicago: The University of Chicago Press.

DRETSKE, FRED (1979). “Simple Seeing.” In D.F. Gustafson & B.L. Tapscott (Eds.) Body, Mind and Method (pp. 1-15). Dordrecht: Kluwer.

DRETSKE, FRED (1993). “Conscious Experience.” Mind 102(406): 262-283.

DRETSKE, FRED (1995). Naturalizing the Mind. Cambridge, MA: MIT Press.

DRETSKE, FRED (2010). “What We See: the Texture of Conscious Experience.” In Bence Nanay (Ed.) Perceiving the World (pp. 54-67). New York: Oxford University Press.

DUMMETT, MICHAEL (2010). The Nature and Future of Philosophy. New York: Columbia University Press.

DUNN, M.J. & HARDEGREE, G. (2001). Algebraic Methods in Philosophical Logic. Oxford: Clarendon Press.

EGAN, FRANCES (1991). “Must Psychology be Individualistic?” Philosophical Review 100: 179-203.

EGAN, FRANCES (1995). “Computation and Content.” Philosophical Review 104: 181-203.

EHRING, DOUGLAS (2011). Tropes: Properties, Objects, and Mental Causation. New York: Oxford University Press.

ELDER, JAMES & ZUCKER, STEVEN (1993). “The Effect of Contour Closure on the Rapid Discrimination of Two-Dimensional Shapes.” Vision Research 33(7): 981-991.

ELGIN, CATHERINE (2004). “True Enough.” Philosophical Issues 14: 113-131.

ELGIN, CATHERINE (2017). True Enough. Cambridge, MA: MIT Press.

ELIASMITH, CHRIS (2003). “Moving Beyond Metaphors: Understanding the Mind for What It Is.” The Journal of Philosophy, 100(10): 493-520.

ENGEL, ANDREAS K. & SINGER, WOLF (2001). “Temporal Binding and the Neural Correlates of Sensory Awareness.” TRENDS in Cognitive Sciences 5(1): 16-25.

ENNS, J.T. (1990). “Three-dimensional features that pop out in visual search.” In D. Brogan (Ed.) Visual search (pp. 37-45). Philadelphia: Taylor & Francis.

EVANS, GARETH (1982). The Varieties of Reference. New York: Oxford University Press.

FARAH, MARTHA (1994). “Neuropsychological Inference with an Interactive Brain: A Critique of the “Locality” Assumption.” Behavioral and Brain Sciences 17: 43-104.

FARRELL, B.A. (1950). “Experience.” Mind 59: 170-198.

FEEST, ULJANA (2011). “What exactly is stabilized when phenomena are stabilized?” Synthese 182: 57-71.

FELDMAN, JACOB (2000). “Bias Toward Regular Form in Mental Shape Spaces” Journal of Experimental Psychology: Human Perception and Performance 26(1): 1-14.

FELDMAN, JACOB (2003). “What is a Visual Object?” TRENDS in Cognitive Sciences 7(6): 252-256.

FELLEMAN, D. & VAN ESSEN, D. (1991). “Distributed Hierarchical Processing in the Primate Cerebral Cortex.” Cerebral Cortex 1(1): 1-47.

FINE, KIT (1994). “Essence and Modality.” Philosophical Perspectives 8: 1-16.

FINE, KIT (2002). “The Varieties of Necessity.” In T. Szabó Gendler & J. Hawthorne (Eds.) Conceivability and Possibility (pp. 253-281). New York: Oxford University Press.

FIRTH, RODERICK (1965). “Sense-Data and the Percept Theory.” In R. Schwarz (Ed.) Perceiving, Sensing, and Knowing (pp. 204-270). London, UK: University of California Press.

FISH, WILLIAM. (2009). Perception, Hallucination, and Illusion. New York: Oxford University Press.

FISH, WILLIAM (2010). Philosophy of Perception: A Contemporary Introduction. New York: Routledge.

FLAMENT-FULTOT, MARTIN (2016). “Counterfactuals versus Constraints: Towards an Implementation Theory of Sensorimotor Mastery.” Journal of Consciousness Studies, 23(5-6): 153-176.

FLANAGAN, OWEN (1992). Consciousness Reconsidered. Cambridge, MA: MIT Press.

FLANAGAN, OWEN (2000). Dreaming Souls: Sleep, Dreams and the Evolution of the Conscious Mind. New York: Oxford University Press.

FODOR, JERRY (1983). The Modularity of Mind. Cambridge, MA: MIT Press.

FOSTER, DAVID H. (1983). “Experimental Test of a Network Theory of Vision.” Behavioral and Brain Sciences 6(4): 664.

FOSTER, DAVID H. (2011). “Color Constancy.” Vision Research 51: 674-700.

FREGE, GOTTLOB (1918). “Logische Untersuchungen I: Der Gedanke” Beiträge zur Philosophie des deutschen Idealismus I: 58-77.

FRENCH, STEVEN (2003). “A Model-Theoretic Account of Representation (Or, I Don’t Know Much About Art…But I Know It Involves Isomorphism)” Philosophy of Science 70(5): 1472-1483.

FRIGG, ROMAN (2010). “Models and Fiction.” Synthese 172: 251-268.

FRIGG, ROMAN & HARTMANN, STEPHAN (2009). “Models in Science.” In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy (Spring 2017 edition). https://plato.stanford.edu/archives/spr2017/entries/models-science/

FRISBY, J. & STONE, J.V. (2007). Seeing: The Computational Approach to Biological Vision. Cambridge, MA: MIT Press.

FRISTON, KARL (2011). “Functional and Effective Connectivity: A Review.” Brain Connect 1: 13-16.

FRY, GLENN A. (1948). “Mechanisms Subserving Simultaneous Brightness Contrast.” American Journal of Optometry and Archives of American Academy of Optometry 25(4): 162-178.

GALLAGHER, SHAUN (1997). “Mutual Enlightenment: Recent Phenomenology in Cognitive Science.” Journal of Consciousness Studies 4(3): 195-214.

GAMEZ, DAVID (2008). “Progress in Machine Consciousness.” Consciousness and Cognition 17: 887-910.

GARCIA, ROBERT (2014). “Bundle Theory’s Black Box: Gap Challenges for the Bundle Theory of Substance” Philosophia 42(1): 115-126.

GARCIA, ROBERT (2015). “Two Ways to Particularize a Property.” Journal of the American Philosophical Association. DOI: 10.1017/apa.2015.21

GARSON, JAMES (2001). “(Dis)solving the binding problem.” Philosophical Psychology 14(4): 381-392.

GARSON, JUSTIN (2013). “The Functional Sense of Mechanism.” Philosophy of Science 80: 317-333.

GELFERT, AXEL (2016). How to Do Science with Models: A Philosophical Primer. Dordrecht: Springer.

GELFERT, AXEL (2017). “The Ontology of Models.” In L. Magnani & T. Bertolotti (Eds.) Handbook of Model-Based Science (pp. 5-23). Dordrecht: Springer.

GENDLER SZABÓ, ZOLTÁN (2003). “Nominalism.” In Loux & Zimmerman (2003): 11-45.

GENNARO, ROCCO (1996). Consciousness and Self-Consciousness: A Defense of the Higher-Order Thought Theory of Consciousness. Amsterdam-Philadelphia: Benjamin.

GERVAIS, RAOUL (2015). “Mechanistic and Non-Mechanistic Varieties of Dynamical Models in Cognitive Science: Explanatory Power, Understanding, and the ‘Mere Description’ Worry.” Synthese 192(1): 43-66.

GERVAIS, RAOUL & WEBER, ERIK (2011). “The Covering Law Model Applied to Dynamical Cognitive Science: A Comment on Joel Walmsley.” Minds & Machines 21(1): 33-39.

GIACINO, J.T., ASHWAL, S., CHILDS, N., CRANFORD, R., JENNETT, B., KATZ, D., KELLY, J.P., ROSENBERG, J.H., WHYTE, J., ZAFONTE, R.D., ZASLER, N.D. (2002). “The Minimally Conscious State: Definition and Diagnostic Criteria.” Neurology 58: 349-353.

GIBSON, J.J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.

GIERE, RONALD (1988). Explaining Science. London: University of Chicago Press.

GIERE, RONALD (2004). “How Models are Used to Represent Reality.” Philosophy of Science 71(5): 742-752.

GŁADZIEJEWSKI, PAWEL & MIŁKOWSKI, MARCIN (2017). “Structural Representations: Causally Relevant and Different from Detectors” Biology & Philosophy. DOI 10.1007/s10539-017-9562-6.

GLENNAN, STUART (1996). “Mechanisms and the Nature of Causation.” Erkenntnis 44(1): 49-71.

GLENNAN, STUART (2002). “Rethinking Mechanistic Explanation.” Philosophy of Science 69(3): 342-353.

GODFREY-SMITH, PETER (2006a). “Theories and Models in Metaphysics.” Harvard Review of Philosophy 14: 4-19.

GODFREY-SMITH, PETER (2006b). “The Strategy of Model-Based Science.” Biology and Philosophy 21: 725-740.

GODFREY-SMITH, PETER (2009). “Models and Fictions in Science.” Philosophical Studies 143: 101-116.

GODFREY-SMITH, PETER (2012). “Metaphysics and the Philosophical Imagination.” Philosophical Studies 160: 97-113.

GOMES, ANIL & FRENCH, CRAIG (2016). “On the Particularity of Experience.” Philosophical Studies 173: 451-460.

GOODALE, MELVYN (2001). “Why Vision is More Than Seeing.” Canadian Journal of Philosophy 31(1): 186-214.

GOODMAN, NELSON (1978). Ways of Worldmaking. Indianapolis: Hackett.

GOSSERIES, O., DI, H., LAUREYS, S. & BOLY, M. (2014). “Measuring Consciousness in Severely Damaged Brains.” Annual Review of Neuroscience 37: 457-478.

GOULD, STEPHEN J. & LEWONTIN, RICHARD (1979). “The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Program.” Proceedings of the Royal Society of London. Series B, Biological Sciences 205(1161): 581-598.

GREEN, E.J. (2017). “On the Perception of Structure” Noûs. Doi: 10.1111/nous.12207.

GREENWOOD, JOHN (2015). A Conceptual History of Psychology: Exploring the Tangled Web. 2nd edition. Cambridge: Cambridge University Press.

GRICE, H.P. (1961). “The Causal Theory of Perception.” Proceedings of the Aristotelian Society 35: 121-152.

GRILL-SPECTOR, KALANIT & KANWISHER, NANCY (2005). “Visual recognition: As Soon as You Know It Is There, You Know What It Is.” Psychological Science 16(2): 152-160.

GRILL-SPECTOR, KALANIT & MALACH, RAFAEL (2004). “The Human Visual Cortex.” Annual Review of Neuroscience 27: 649-677.

GROSSBERG, STEPHEN (1984). “Neuroethology and Theoretical Neurobiology.” Behavioral and Brain Sciences 7: 388-390.

GROSSMANN, REINHARDT (1992). The Existence of the World. London: Routledge.

GRUSH, RICK (2001). “The Architecture of Representation.” In Bechtel, Mandik, Mundale, Stufflebeam (2001): 349-368.

GUR, M. & SNODDERLY, D.M. (1997). “A Dissociation between Brain Activity and Perception: Chromatically Opponent Cortical Neurons Signal Chromatic Flicker that is Not Perceived.” Vision Research 37: 377-382.

HACKING, IAN (1981). “Do we see through a microscope?” Pacific Philosophical Quarterly 62(4): 305-322.

HACKING, IAN (1983). Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge: Cambridge University Press.

HAIG, BRIAN D. (2014) Investigating the Psychological World: Scientific Method in the Behavioral Sciences. Cambridge, MA: MIT Press.

HAMBURGER, KAI, PRIOR, HELMUT, SARRIS, VIKTOR & SPILLMANN, LOTHAR (2006). “Filling-in with Colour: Different Modes of Surface Completion.” Vision Research 46: 1129-1138.

HARMAN, GILBERT (1990). “The Intrinsic Quality of Experience.” Philosophical Perspectives 4: 31-52.

HARVEY, I., HUSBANDS, P., CLIFF, D.T., THOMPSON, A., & JAKOBI, N. (1997). “Evolutionary Robotics: the Sussex Approach.” Robotics and Autonomous Systems, 20: 205-224.

HAUGELAND, JOHN (1998). “The Nature and Plausibility of Cognitivism.” In Id. Having Thought: Essays in the Metaphysics of Mind (pp. 9-45). Cambridge, MA: Harvard University Press.

HAUN, A., TONONI, G., KOCH, C., TSUCHIYA, N. (2017). “Are We Underestimating the Richness of Visual Experience?” Neuroscience of Consciousness. doi: 10.1093/nc/niw023.

HAWLEY, KATHERINE & MACPHERSON, FIONA (Eds.) (2011). The Admissible Contents of Experience. Oxford: Wiley-Blackwell.

HAYNES, J.D., DEICHMANN, R. & REES, G. (2005). “Eye-Specific Effects of Binocular Rivalry in the Human Lateral Geniculate Nucleus.” Nature 438: 496-499.

HECK, RICHARD G. (2000). “Non-Conceptual Content and the ‘Space of Reasons’” Philosophical Review 109: 483-523.

HEIDELBERGER, MICHAEL (2000). “Fechner und Mach zum Leib-Seele-Problem.” In A. Arndt & W. Jaeschke (Eds.) Materialismus und Spiritualismus. Philosophie und Wissenschaft nach 1848 (pp. 53-67). Hamburg: Meiner.

HEIDELBERGER, MICHAEL (2002). “Wie das Leib-Seele Probleme in den Logischen Empirismus kam.” In M. Pauen & A. Stephan (Eds.) Phänomenales Bewusstsein: Rückkehr der Identitätstheorie? (pp. 43-70).Paderborn: Mentis.

HEIDELBERGER, MICHAEL (2003). “Fechners wissenschaftlich-philosophische Weltauffassung.” In U. Fix (Ed.) Fechner und die Folgen außerhalb der Naturwissenschaften (pp. 25-42). Tübingen: Max Niemeyer.

HEIL, JOHN (2003). From An Ontological Point of View. Oxford: Clarendon Press.

HEMPEL, CARL G. & OPPENHEIM, PAUL (1948). “Studies in the Logic of Explanation.” Philosophy of Science 15(2): 135-175.

HENLE, MARY (1984). “Isomorphism: Setting the Record Straight.” Psychological Research 46: 317-327.

HINTON, J.M. (1967) “Visual Experiences.” Mind 76: 217-227.

HOBSON, ALLAN J. (2007). “States of Consciousness: Normal and Abnormal Variation.” In Zelazo et al. (2007): 435-444.

HOCHBERG, HERBERT (1965). “Universals, Particulars, and Predication.” The Review of Metaphysics 19(1): 87-102.

HOCHSTEIN, ERIC (2013). “Intentional Models as Essential Scientific Tools.” International Studies in the Philosophy of Science, 27(2): 199-217.

HOCHSTEIN, ERIC (2016). “One Mechanism, Many Models: A Distributed Theory of Mechanistic Explanation.” Synthese 193(5): 1387-1407.

HOFFMAN, MATEJ (2014). “Minimally Cognitive Robotics: Body Schema, Forward Models, and Sensorimotor Contingencies in a Quadruped Machine.” In Bishop & Martin (2014): 209-233.

HOFFMAN, D.D. & RICHARDS, W.A. (1984). “Parts of Recognition” Cognition 18: 65-96.

HOFFMAN, JOSHUA & ROSENKRANTZ, GARY S. (2003). “Platonistic Theories of Universals” In Loux & Zimmerman (2003): 46-74.

HOHWY, JAKOB (2007). “The Search for the Neural Correlates of Consciousness.” Philosophy Compass 2(3): 461-474.

HOHWY, JAKOB (2009). “The Neural Correlates of Consciousness: New Experimental Approaches Needed?” Consciousness and Cognition 18: 428-438.

HOHWY, JAKOB & BAYNE, TIM (2015). “The Neural Correlates of Consciousness: Causes, Confounds and Constituents.” In Miller (2015a): 155-176.

HOHWY, JAKOB & FRITH, CHRIS (2004). “The Neural Correlates of Consciousness: Room for Improvement, But on the Right Track.” Journal of Consciousness Studies 11(1): 45-51.

HORST, STEVEN (2005). “Phenomenology and Psychophysics.” Phenomenology and the Cognitive Sciences 4: 1-21.

HOTTON, SCOTT & YOSHIMI, JEFF (2011). “Extending Dynamical Systems Theory to Model Embodied Cognition.” Cognitive Science 35(3): 444-479.

HUDSON, ROBERT (2014). Seeing Things: The Philosophy of Reliable Observation. New York: Oxford University Press.

HULLEMAN, JOHAN & HUMPHREYS, GLYN W. (2004). “A New Cue to Figure-Ground Coding: Top-Bottom Polarity.” Vision Research, 44(2): 2779-2791.

HUNEMAN, PHILIPPE (2018). “Diversifying the Picture of Explanations in Biological Sciences: Ways of Combining Topology with Mechanisms.” Synthese. https://doi.org/10.1007/s11229-015-0808-z.

HURLBURT, RUSSELL T. & SCHWITZGEBEL, ERIC (2007). Describing Inner Experience? Proponent Meets Skeptic. Cambridge, MA: MIT Press.

HURLEY, SUSAN & NOË, ALVA (2003). “Neural Plasticity and Consciousness.” Biology and Philosophy 18(1): 131-168.

HUSSERL, EDMUND (1991). Ding und Raum. Hamburg: Felix Meiner Verlag.

HUSSERL, EDMUND (1993). Logische Untersuchungen, Vol. II/1. Tübingen: Max Niemeyer Verlag.

HUTH, ALEXANDER G., LEE, TYLER, NISHIMOTO, SHINJI, BILENKO, NATALIA Y., VU, AN T. & GALLANT, JACK L. (2016). “Decoding the Semantic Content of Natural Movies from Human Brain Activity” Frontiers in Systems Neuroscience. Doi: 10.3389/fnsys.2016.00081.

HUTTO, DAN (2005). “Knowing What? Radical versus Conservative Enactivism.” Phenomenology and the Cognitive Sciences 4(4): 389-406.

ILLARI, PHYLLIS MCKAY & WILLIAMSON, JON (2012). “What Is A Mechanism? Thinking About Mechanisms Across the Sciences.” European Journal of Philosophy of Science 2: 119-135.

ISHAI, A., UNGERLEIDER, L.G., MARTIN, A., SCHOUTEN, J.L. & HAXBY, J.V. (1999). “Distributed Representation of Objects in the Human Ventral Visual Pathway.” Proceedings of the National Academy of Sciences, USA 96: 9379-9384.

JACKSON, FRANK (1977a). Perception: A Representative Theory. London: Cambridge University Press.

JACKSON, FRANK (1977b). “Statements about Universals”. Mind 86(343): 427-429.

JOHNSTON, MARK (2006). “Better than Mere Knowledge? The Function of Sensory Awareness.” In T. Szabó Gendler and J. Hawthorne (Eds.) Perceptual Experience (pp. 260-290). Oxford: Oxford University Press.

JOO, JUNGSEOCK, WANG, SHUO, & ZHU, SONG-CHUN (2015). “Hierarchical Organization by-and-or Tree” in Wagemans (2015): 919-932.

KAISER, MARIE & KRICKEL, BEATE (2017). “The Metaphysics of Constitutive Mechanistic Phenomena.” British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axv058.

KAMMER, THOMAS (1998). “Phosphenes and Transient Scotomas Induced by Magnetic Stimulation of the Occipital Lobe: Their Topographic Relationship.” Neuropsychologia 37(2): 191-198.

KANAI, RYOTA & TSUCHIYA, NAOTSUGU (2012). “Qualia.” Current Biology 22(10): R392-R396.

KANIZSA, GAETANO (1994). “Gestalt Theory Has Been Misinterpreted, But Has Also Had Some Real Conceptual Difficulties.” Philosophical Psychology 7(2): 149-162.

KANIZSA, GAETANO & GERBINO, WALTER (1982). “Amodal completion: seeing or thinking?” In J. Beck (Ed.) Organization and Representation in Perception (pp. 167-190). Hillsdale, NJ: Lawrence Erlbaum Associates.

KANWISHER, NANCY (2010). “Functional specificity in the human brain: A window into the functional architecture of the mind.” Proceedings of the National Academy of Sciences 107(25): 11163-11170.

KANWISHER, NANCY, MCDERMOTT, J., & CHUN, M.M. (1997). “The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception.” Journal of Neuroscience 17: 4302-4311.

KAPLAN, DAVID & BECHTEL, WILLIAM (2011). “Dynamical Models: An Alternative or Complement to Mechanistic Explanations?” Topics in Cognitive Science 3(2): 438-444.

KAPLAN, DAVID & CRAVER, CARL F. (2011). “The Explanatory Force of Dynamical and Mathematical Models in Neuroscience.” Philosophy of Science 78(4): 601-627.

KAUFFMAN, STUART (1970). “Articulation of Parts Explanation in Biology and the Rational Search for Them.” Boston Studies in the Philosophy of Science 8: 257-272.

KHURANA, BEENA (2000). “Face Representation Without Conscious Processing.” In Metzinger (2000): 171-187.

KIM, JAEGWON (1993a). Supervenience and Mind. New York: Cambridge University Press.

KIM, JAEGWON (1993b). “Postscripts on Supervenience.” In Kim (1993a): 161-171.

KIM, JAEGWON (1993c). “Psychophysical Supervenience.” In Kim (1993a): 175-193.

KIMCHI, RUTH (2015). “The Perception of Hierarchical Structure.” In Wagemans (2015): 129-149.

KIMCHI, RUTH, YESHURUN, YAFFA, SPEHAR, BRANKA, & PIRKNER, YOSSEF (2016). “Perceptual Organization, Visual Attention, and Objecthood.” Vision Research 126: 34-51.

KLEIN, COLIN (2016). “Brain Regions as Difference-Makers.” Philosophical Psychology http://dx.doi.org/10.1080/09515089.2016.1253053

KLINK, CHRISTIAAN P., SELF, MATTHEW W., LAMME, VICTOR A., ROELFSEMA, PIETER R. (2015). “Theories and Methods in the Scientific Study of Consciousness.” In Miller (2015a): 17-47.

KOCH, CHRISTOF (2004). The Quest for Consciousness. Englewood: Roberts & Company.

KOCH, CHRISTOF, MASSIMINI, MARCELLO, BOLY, MELANIE & TONONI, GIULIO (2016). “Neural Correlates of Consciousness: Progress and Problems.” Nature Reviews Neuroscience 17: 307-321.

KÖHLER, WOLFGANG (1929). Gestalt Psychology. Oxford: Liveright.

KOMATSU, HIDEHIKO (2006). “The Neural Mechanisms of Perceptual Filling-in.” Nature Reviews Neuroscience, 7: 220-231.

KOUBEISSI, MOHAMAD, BARTOLOMEI, FABRICE, BELTAGY, ABDELRAHMAN & PICARD, FABIENNE (2014). “Electrical Stimulation of a Small Brain Area Reversibly Disrupts Consciousness.” Epilepsy & Behavior 37: 32-35.

KRICKEL, BEATE (forth.). The Metaphysics of Mechanisms.

KRIEGEL, URIAH (2004). “Trope Theory and the Metaphysics of Appearances.” American Philosophical Quarterly 41(1): 5-20.

KRIPKE, SAUL (1980). Naming and Necessity. Cambridge, MA: Harvard University Press.

LADYMAN, JAMES (2012). “Science, Metaphysics and Method.” Philosophical Studies 160: 31-51.

LADYMAN, JAMES & ROSS, DON (2007). Every Thing Must Go: Metaphysics Naturalized. New York: Oxford University Press.

LAKOFF, GEORGE (1988). “Smolensky, semantics, and the sensorimotor system.” Behavioral and Brain Sciences, 11(1): 39-40.

LAMING, DONALD (1983). “On the Need for Discipline in the Construction of Psychological Theories” Behavioral and Brain Sciences 6(4): 669.

LAMME, VICTOR (2004). “Separate Neural Definitions of Visual Consciousness and Visual Attention: A Case for Phenomenal Awareness.” Neural Networks 17: 861-872.

LAMME, VICTOR (2006). “Towards a True Neural Stance on Consciousness.” TRENDS in Cognitive Sciences 10(11): 494-501.

LAND, EDWARD H. & MCCANN, JOHN (1971). “Lightness and Retinex Theory.” Journal of the Optical Society of America 61(1): 1-11.

LANGE, MARC (2013). “What Makes a Scientific Explanation Distinctively Mathematical?” British Journal for the Philosophy of Science 64: 485-511.

LAUREYS, STEVEN (2005). “The Neural Correlates of (Un)Awareness: Lessons from the Vegetative State.” Trends in Cognitive Sciences 12: 556-559.

LEGRENZI, PAOLO (Ed.) (2012). Storia della psicologia. Bologna: Il Mulino.

LEHAR, STEVEN (1999). “Gestalt Isomorphism and the Quantification of Spatial Perception.” Gestalt Theory 21(2): 122-139.

LEHAR, STEVEN (2003). “Gestalt Isomorphism and the Primacy of the Subjective Conscious Experience: A Gestalt Bubble Model.” Behavioral and Brain Sciences 26: 375-444.

LEOPOLD, DAVID A. & LOGOTHETIS, NIKOS K. (1996). “Activity Changes in Early Visual Cortex Reflect Monkeys’ Percepts During Binocular Rivalry.” Nature 379: 549-553.

LEOPOLD, DAVID A. & MAIER, ALEXANDER (2006). “Neuroimaging: Perception at the Brain’s Core.” Current Biology 16(3): R95-R98.

LEURIDAN, BERT (2010). “Can Mechanisms Really Replace Laws of Nature?” Philosophy of Science 77(3): 317-340.

LEVINE, JOSEPH (1983). “Materialism and Qualia: The Explanatory Gap.” Pacific Philosophical Quarterly 64: 354-361.

LEVY, ARNON & BECHTEL, WILLIAM (2013). “Abstraction and the Organization of Mechanisms.” Philosophy of Science 80: 241-261.

LEVY, ARNON & CURRIE, ADRIAN (2015). “Model Organisms are Not (Theoretical) Models.” British Journal for the Philosophy of Science 66(2): 327-348.

LEWIS, DAVID K. (1980). “Veridical Hallucination and Prosthetic Vision.” Australasian Journal of Philosophy 58(3): 239-249.

LI, KORINA & MALHOTRA, PARESH (2015). “Spatial Neglect.” Practical Neurology 15: 333-339.

LING, HAIBIN & JACOBS, DAVID W. (2007). “Shape Classification Using the Inner-Distance.” IEEE Transactions on Pattern Analysis and Machine Intelligence 29(2): 286-299.

LIOTTI, GIOVANNI (2008). “Patologia della Coscienza: La Dimensione Interpersonale.” Sistemi Intelligenti 20(3).

LIU, ZILI, JACOBS, DAVID W., & BASRI, RONEN (1999). “The Role of Convexity in Perceptual Completion: Beyond Good Continuation.” Vision Research 39: 4244-4257.

LOGOTHETIS, NIKOS K. (1999). “Vision: A Window on Consciousness.” Scientific American 281: 68-75.

LOUX, MICHAEL J. (1978) Substance and Attribute. Dordrecht: Reidel.

LOUX, MICHAEL J. (2002) Metaphysics: A Contemporary Introduction. London: Routledge.

LOUX, MICHAEL J. & ZIMMERMAN, DEAN (Eds.) (2003). The Oxford Handbook of Metaphysics. New York: Oxford University Press.

LOWE, E.J. (2006). The Four-Category Ontology. New York: Oxford University Press.

LUCCIO, RICCARDO (2010). “Anent Isomorphism and Its Ambiguities: From Wertheimer to Köhler and Back to Spinoza.” Gestalt Theory 32(3): 208-234.

LUCHINS, ABRAHAM & LUCHINS, EDITH (2015). “Isomorphism in Gestalt Theory: Comparison of Wertheimer’s and Köhler’s Concepts.” Gestalt Theory 37(1): 69-100. Originally printed in 1999, Gestalt Theory 21(3): 145-171.

LYCAN, WILLIAM (1996). Consciousness and Experience. Cambridge, MA: MIT Press.

LYRE, HOLGER (2017). “Structures, Dynamics and Mechanisms in Neuroscience: An Integrative Account.” Synthese. http://doi.org/10.1007/s11229-017-1616-4.

MACBRIDE, FRASER (2005). “The Particular-Universal Distinction: A Dogma of Metaphysics?” Mind 114(455): 565-614.

MACH, ERNST (1865). “Über die Wirkung der räumlichen Vertheilung des Lichtreizes auf der Netzhaut.” Sitzungsberichte der kaiserlichen Akademie der Wissenschaften, Mathematisch-naturwissenschaftliche Classe 52(II): 303-322. Online at http://echo.mpiwg-berlin.mpg.de/ECHOdocuView?url=/permanent/vlp/lit29389/index.meta&viewMode=image (retrieved December 6th 2016).

MACHAMER, PETER, DARDEN, LINDLEY, & CRAVER, CARL F. (2000). “Thinking About Mechanisms.” Philosophy of Science 67(1): 1-25.

MACPHERSON, FIONA (2011). “Taxonomising the Senses.” Philosophical Studies 153: 123-142.

MADDEN, EDWARD H. (1957). “A Logical Analysis of ‘Psychological Isomorphism’.” The British Journal for the Philosophy of Science 8: 177-191.

MÄKI, USKALI (2009). “MISSing the World. Models as Isolations and Credible Surrogate Systems.” Erkenntnis 70(1): 29-43.

MALACH, RAFAEL (1994). “Cortical columns as devices for maximizing neuronal diversity.” Trends in Neurosciences 17(3): 101-104.

MALACH, RAFAEL, LEVY, IFAT & HASSON, URI (2002). “The Topography of High-Order Human Object Areas.” TRENDS in Cognitive Sciences 6(4): 176-184.

MANCIA, MAURO (2006). Sonno e Sogno. Roma-Bari: Laterza.

MARCONI, DIEGO (2012). “Quine and Wittgenstein on the Philosophy/Science Divide.” Humana.Mente Journal of Philosophical Studies 21: 173-189.

MARR, DAVID (1977). “Artificial Intelligence: A Personal View.” Artificial Intelligence 9: 37-48.

MARR, DAVID (2010). Vision. Cambridge, MA: MIT Press.

MARR, DAVID & NISHIHARA, H.K. (1978). “Representation and Recognition of the Spatial Organization of Three-Dimensional Shapes.” Proceedings of the Royal Society of London B, Biological Sciences 200: 269-294.

MARTIN, A., WIGGS, C.L., UNGERLEIDER, L.G. & HAXBY, J.V. (1996). “Neural Correlates of Category-Specific Knowledge.” Nature 379: 649-652.

MARTIN, C.B. (1980). “Substance Substantiated.” Australasian Journal of Philosophy 58(1): 3-10.

MARTIN, KEVAN (1988). “From Enzymes to Visual Perception: A Bridge Too Far?” Trends in Neurosciences 11(9): 380-387.

MARTIN, M.G.F. (1998). “Setting Things Before the Mind.” In A. O’Hear (Ed.) Current Issues in Philosophy of Mind (pp. 157-180). Cambridge: Cambridge University Press.

MARTIN, M.G.F. (2004). “The Limits of Self-Awareness.” Philosophical Studies 120: 37-89.

MARTIN, M.G.F. (2010). “What’s in A Look?” In B. Nanay (Ed.) Perceiving the World (pp. 160-225). New York: Oxford University Press.

MASROUR, FARID (2015). “The Geometry of Visual Space and the Nature of Visual Experience.” Philosophical Studies 172: 1813-1832.

MASROUR, FARID (2017). “Space Perception, Visual Dissonance and the Fate of Standard Representationalism.” Noûs 51(3): 565-593.

MATSUMOTO, MASAYUKI & KOMATSU, HIDEHIKO (2005). “Neural responses in the macaque V1 to bar stimuli with various lengths presented on the blind spot.” Journal of Neurophysiology 93: 2374-2387.

MATTHEN, MOHAN (2004). “Features, Places, and Things: Reflections on Austen Clark’s Theory of Sentience.” Philosophical Psychology 17(4): 497-518.

MATTHEN, MOHAN (2005). Seeing, Doing, and Knowing. New York: Oxford University Press.

MAURIN, ANNA-SOPHIE (2002). If Tropes. Dordrecht: Kluwer Academic Publishers.

MAYE, ALEXANDER & ENGEL, ANDREAS (2011). “A discrete computational model of sensorimotor contingencies for object perception and control behavior”. In 2011 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3810-3815). Shanghai: IEEE.

MCDANIEL, KRIS (2001). “Tropes and Ordinary Physical Objects” Philosophical Studies 104: 269-290.

MCDOWELL, JOHN (1982). “Criteria, Defeasibility, and Knowledge.” Proceedings of the British Academy, 68: 455-479.

MCDOWELL, JOHN (1984). “De Re Senses.” The Philosophical Quarterly 34(136): 283-294.

MCDOWELL, JOHN (1994). Mind and World. Cambridge, MA: Harvard University Press.

MCDOWELL, JOHN (2000). “Response to Suhm, Wagemann, Wessels.” In Willaschek (2000): 93-95.

MCDOWELL, JOHN (2001). “The True Modesty of an Identity Conception of Truth: A Note in Response to Pascal Engel.” International Journal of Philosophical Studies 13(1): 83-88.

MCDOWELL, JOHN (2013). “Perceptual Experience: Both Relational and Contentful.” European Journal of Philosophy 21(1): 144-157.

MCGINN, COLIN (2012). “All Machine and No Ghost?” New Statesman, February 20 https://www.newstatesman.com/ideas/2012/02/consciousness-mind-brain.

MCGOVERN, KATHARINE & BAARS, BERNARD J. (2007). “Cognitive Theories of Consciousness.” In Zelazo et al. (2007): 177-205.

MACKAY, D.M. (1962). “Theoretical Models of Space Perception.” In C.A. Muses (Ed.) Aspects of the Theory of Artificial Intelligence (pp. 83-104). New York: Plenum Press.

MCKEEFRY, D.J. & ZEKI, SEMIR (1997). “The Position and Topography of the Human Color Centre as Revealed by Functional Magnetic Resonance Imaging.” Brain 120: 2229-2242.

MCLAUGHLIN, BRIAN (1995). “Varieties of Supervenience.” In E. Savellos & Ü. Yalçin (1995): 16-59.

MCLAUGHLIN, BRIAN & BENNETT, KAREN (2011). “Supervenience.” In E.N. Zalta (Ed.) The Stanford Encyclopedia of Philosophy https://plato.stanford.edu/archives/spr2014/entries/supervenience/

MEIXNER, UWE (2009). “States of Affairs—The Full Picture.” In Reicher (2009): 51-70.

MELHADO, E.M. (1980) “Mitscherlich’s Discovery of Isomorphism.” Historical Studies in the Physical Sciences 11(1): 87-123.

MELLONI, LUCIA & SINGER, WOLF (2010). “Distinct Characteristics of Conscious Experience Are Met by Large-Scale Neuronal Synchronization.” In E.K. Perry, D. Collerton, F.E.N. LeBeau, H. Ashton (Eds.) New Horizons in the Neuroscience of Consciousness (pp. 17-28). Amsterdam-Philadelphia: John Benjamin.

METZINGER, THOMAS (1995). “The Problem of Consciousness.” In Id. (Ed.) Conscious Experience (pp. 3-37). Thorverton: Imprint Academic.

METZINGER, THOMAS (Ed.) (2000). Neural Correlates of Consciousness. Cambridge, MA: MIT Press.

METZINGER, THOMAS (2000). “The Subjectivity of Subjective Experience: A Representational Analysis of the First-Person Perspective” In Metzinger (2000): 285-306.

MIŁKOWSKI, MARCIN (2013). Explaining the Computational Mind. Cambridge, MA: MIT Press.

MIŁKOWSKI, MARCIN (2016a). “Unification Strategies in Cognitive Science.” Studies in Logic, Grammar and Rhetoric 48(61): 13-33.

MIŁKOWSKI, MARCIN (2016b). “Integrating Cognitive (Neuro)Science Using Mechanisms” AVANT 6(2): 45-67.

MIŁKOWSKI, MARCIN (Forth). “Modelling Empty Representations: The Case of Computational Models of Hallucination.”

MILLER, STANLEY (1953). “A Production of Amino Acids under Possible Primitive Earth Conditions.” Science 117(3046): 528-529.

MILLER, STEVEN (2001). “Binocular Rivalry and the Cerebral Hemispheres: With a Note on the Correlates and Constitution of Visual Consciousness.” Brain and Mind 2: 119-149.

MILLER, STEVEN (2007). “On the Correlation/Constitution Distinction Problem (and Other Hard Problems) in the Scientific Study of Consciousness” Acta Neuropsychiatrica 19: 159-176.

MILLER, STEVEN (2014). “Closing In on the Constitution of Consciousness.” Frontiers in Psychology 5. doi: 10.3389/fpsyg.2014.01293.

MILLER, STEVEN (2015a). The Constitution of Phenomenal Consciousness. Amsterdam-Philadelphia: John Benjamin Publishing Company.

MILLER, STEVEN (2015b). “The Correlation/Constitution Distinction Problem: Foundations, Limits, and Explanation in Consciousness Science.” In Miller (2015a): 104-154.

MINSKY, MARVIN (1975). “A Framework for Representing Knowledge.” In P.H. Winston (Ed.) The Psychology of Computer Vision (pp. 211-277). New York: McGraw-Hill.

MIROLLI, MARCO (2012). “Representations in Dynamical Embodied Agents: Re-Analyzing a Minimally Cognitive Model Agent.” Cognitive Science, 36(5): 870-895.

MISHKIN, MORTIMER, UNGERLEIDER, LESLIE G. &. MACK, KATHLEEN A. (1983). “Object Vision and Spatial Vision: Two Cortical Pathways.” Trends in Neurosciences 6: 414-417. Rep. in Bechtel et al. (2001): 199-208.

MITCHELL, SANDRA (2000). “Dimensions of Scientific Law.” Philosophy of Science, 67(2): 242-265.

MOORE, GEORGE EDWARD (1953). “Sense-Data.” In Id. Some Main Problems of Philosophy (pp. 28-40). London: George Allen & Unwin.

MOORE, GEORGE EDWARD, STOUT, G.F., & HICKS, DAWES G. (1923). “Symposium: Are The Characteristics of Particular Things Universal or Particular?” Proceedings of the Aristotelian Society. Suppl. Volume 3: 95-128.

MÜLLER, GEORG E. (1896). “Zur Psychophysik der Gesichtsempfindungen. Kap. 1.” Zeitschrift für Psychologie und Physiologie der Sinnesorgane 10: 1-82. Online: http://echo.mpiwg-berlin.mpg.de/ECHOdocuView?url=/permanent/vlp/lit29843/index.meta&start=81&viewMode=image&pn=82 (retrieved December 8th, 2016).

MULLIGAN, KEVIN (1999). “Perception, Particulars and Predicates.” In D. Fisette (Ed.) Consciousness and Intentionality (pp. 163-194). Dordrecht: Kluwer.

MULLIGAN, KEVIN & CORREIA, FABRICE (2013). “Facts.” In E.N. Zalta (Ed.) The Stanford Encyclopedia of Philosophy. (Retrieved 10 July 2016).

MULLIGAN, KEVIN, SIMONS, PETER & SMITH, BARRY (1984). “Truth-Makers.” Philosophy and Phenomenological Research 44(3): 287-321.

MUMFORD, DAVID (1994) “Bayesian Rationale for the Variational Formulation.” In B. M. ter Haar Romney (Ed.) Geometry-driven Diffusion in Computer Vision (pp. 135-146). Dordrecht: Kluwer Verlag.

MYIN, ERIK & DE NUL, LARS (2009). “Filling-in.” In T. Bayne, A. Cleeremans, P. Wilken (Eds.) The Oxford Companion to Consciousness (pp. 288-290). New York: Oxford University Press.

NACCACHE, LIONEL (2006). “Is She Conscious?” Science 313: 1395-1396.

NAGEL, ERNST (1961). The Structure of Science. New York: Harcourt, Brace & World.

NAGEL, THOMAS (1974). “What Is It Like to Be a Bat?” Philosophical Review 83: 435-450.

NANAY, BENCE (2012). “Perceiving Tropes.” Erkenntnis 77: 1-14.

NANAY, BENCE (2013). Between Perception and Action. New York: Oxford University Press.

NANAY, BENCE (2015). “Perceptual Representation/Perceptual Content.” In M. Matthen (Ed.) The Oxford Handbook of Philosophy of Perception (pp. 153-167). Oxford: Oxford University Press.

NEISSER, JOSEPH (2012). “Neural Correlates of Consciousness Reconsidered.” Consciousness and Cognition 21: 681-690.

NEWEN, ALBERT & BARTELS, ANDREAS (2007). “Animal Minds and the Possession of Concepts.” Philosophical Psychology 20(3): 283-308.

NIELSEN, K.S. (2010). “Representation and Dynamics.” Philosophical Psychology 23(6): 759-773.

NISHIMOTO, SHINJI, VU, AN T., NASELARIS, THOMAS, BENJAMINI, YUVAL, YU, BIN & GALLANT, JACK L. (2011). “Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies.” Current Biology 21(19): 1641-1646.

NOË, ALVA (2001). “Experience and the Active Mind.” Synthese 129(1): 41-60.

NOË, ALVA (2002). “On What We See.” Pacific Philosophical Quarterly, 83(1): 57-80.

NOË, ALVA (2004). Action in Perception. Cambridge, MA: MIT Press.

NOË, ALVA (2005a). “Against Intellectualism.” Analysis, 65(4): 278-290.

NOË, ALVA (2005b). “Real Presence.” Philosophical Topics, 33(1): 235-264.

NOË, ALVA (2006). “Experience of the World in Time.” Analysis 66(1): 26-32.

NOË, ALVA (2007). “The Critique of Pure Phenomenology.” Phenomenology and the Cognitive Sciences, 6(1-2): 231-245.

NOË, ALVA (2009a). “Conscious Reference”. The Philosophical Quarterly, 59(236): 470-482.

NOË, ALVA (2009b). Out of Our Heads. New York: Hill & Wang.

NOË, ALVA (2012). Varieties of Presence. Cambridge, MA: Harvard University Press.

NOË, ALVA & O’REGAN, KEVIN (2005). “On the Brain-Basis of Visual Consciousness: A Sensorimotor Account.” In A. Noë & E. Thompson (Eds.) Vision and Mind: Selected Readings in the Philosophy of Perception (pp. 567-598). Cambridge, MA: MIT Press.

NOË, ALVA & THOMPSON, EVAN (2004). “Are There Neural Correlates of Consciousness?” Journal of Consciousness Studies 11(1): 3-28.

NOË, A. & THOMPSON, E. (2005). “Introduction.” In id. (Eds.), Vision and Mind: Selected Readings in the Philosophy of Perception (pp. 1-14). Cambridge, MA: MIT Press.

NOONAN, HAROLD & BEN CURTIS (2014). “Identity.” In E.N. Zalta (Ed.) The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/sum2014/entries/identity/. (retrieved 20 November 2016).

O’CALLAGHAN, CASEY (2008). “Object Perception.” Philosophy Compass 3(4): 803-829.

OLIVER, ALEX (1996). “The Metaphysics of Properties” Mind 105(417): 1-80.

OLSON, K.R. (1987). An Essay on Facts. Stanford: Center for the Study of Language and Information.

OPIE, JON & O’BRIEN, GERARD (2004). “Notes Toward a Structuralist Theory of Mental Representation.” In H. Clapin, P. Staines, P. Slezak (Eds.) Representation in Mind: Approaches to Mental Representation (pp. 1-20). Oxford: Elsevier.

OPIE, JON & O’BRIEN, GERARD (2015). “The Structure of Phenomenal Consciousness.” In Miller (2015a): 445-464.

O’REGAN, KEVIN (1992). “Solving the ‘Real’ Mysteries of Visual Representations: The World as an Outside Memory.” Canadian Journal of Psychology 46(3): 461-488.

O’REGAN, KEVIN (2011). Why Red Doesn’t Sound Like a Bell: Understanding the Feel of Consciousness. New York: Oxford University Press.

O’REGAN, KEVIN & NOË, ALVA (2001a). “A Sensorimotor Account of Vision and Visual Consciousness.” The Behavioral and Brain Sciences 24(5): 939-973.

O’REGAN, KEVIN & NOË, ALVA (2001b). “Authors’ Response.” Behavioral and Brain Sciences 24(5): 1011-1031.

O’REGAN, KEVIN & NOË, ALVA (2001c). “What It Is Like To See: A Sensorimotor Theory of Perceptual Experience.” Synthese 129(1): 79-103.

O’REGAN, KEVIN & BLOCK, NED (2012). “Discussion of J. Kevin O’Regan’s Why Red Doesn’t Sound Like a Bell: Understanding the Feel of Consciousness.” The Review of Philosophy and Psychology. doi 10.1007/s13164-012-0090-7.

ORLANDI, NICO (2014). The Innocent Eye. New York: Oxford University Press.

OVERGAARD, M. & OVERGAARD, R. (2010). “Neural Correlates of Contents and Levels of Consciousness.” Frontiers in Psychology. Doi: 10.3389/fpsyg.2010.00164.

OWEN, ADRIAN M., COLEMAN, MARTIN, DAVIS, MATTHEW, BOLY, MELANIE, LAUREYS, STEVEN, PICKARD, JOHN (2006). “Detecting Awareness in the Vegetative State.” Science 313: 1402.

OWEN, ADRIAN M., COLEMAN, MARTIN R., BOLY, MELANIE, DAVIS, MATTHEW, LAUREYS, STEVEN, PICKARD, JOHN (2007). “Using Functional Magnetic Resonance Imaging to Detect Covert Awareness in the Vegetative State.” Archives of Neurology 64(8): 1098-1102.

PALMER, STEPHEN (1977). “Hierarchical Structure in Perceptual Representation” Cognitive Psychology 9: 441-474.

PALMER, STEPHEN (1999a). “Color, Consciousness, and the Isomorphism Constraint.” Behavioral and Brain Sciences 22: 923-989.

PALMER, STEPHEN (1999b). Vision Science. Cambridge, MA: MIT Press.

PALMER, STEPHEN & ROCK, IRVIN (1994). “Rethinking Perceptual Organization: The Role of Uniform Connectedness.” Psychonomic Bulletin & Review 1(1): 29-55.

PARVIZI, JOSEF, JACQUES, CORENTIN, FOSTER, BRETT L., WITTHOFT, NATHAN, RANGARAJAN, VINTHA, WEINER, KEVIN S. & GRILL-SPECTOR, KALANIT (2012). “Electrical Stimulation of Human Fusiform Face-Selective Regions Distorts Face Perception.” Journal of Neuroscience 32(43): 14915-14920.

PASCUAL-LEONE, A. & WALSH, V. (2001). “Fast Backprojections from the Motion to the Primary Visual Area Necessary for Visual Awareness.” Science 292: 510-512.

PAUL, LAURIE A. (2012). “Metaphysics as Modeling: The Handmaiden’s Tale.” Philosophical Studies 160(1): 1-29.

PAUTZ, ADAM (2007). “Intentionalism and Perceptual Presence.” Philosophical Perspectives 21: 495-541.

PAUTZ, ADAM (2010). “Why Explain Visual Experience in Terms of Content?” In B. Nanay (Ed.) Perceiving the World (pp. 254-309). New York: Oxford University Press.

PAUTZ, ADAM (2011). “What Are the Contents of Experience?” In Hawley & Macpherson (2011): 114-138.

PAUTZ, ADAM (2013). “Does Phenomenology Ground Mental Content?” In U. Kriegel (Ed.) Phenomenal Intentionality (pp. 194-234). New York: Oxford University Press.

PEACOCKE, CHRISTOPHER (1992). A Study of Concepts. Cambridge, MA: MIT Press.

PEACOCKE, CHRISTOPHER (2008). “Sensational Properties: Theses To Accept and Theses To Reject.” Revue Internationale de Philosophie 1(243): 7-24.

PESSOA, LUIZ & DE WEERD, PETER (Eds.) (2003). Filling-in: From Perceptual Completion to Cortical Reorganization. Oxford: Oxford University Press.

PESSOA, LUIZ, THOMPSON, EVAN & NOË, ALVA (1998). “Finding Out About Filling-in: A Guide to Perceptual Completion for Visual Science and the Philosophy of Perception.” Behavioral and Brain Sciences 21: 723-802.

PETITOT, JEAN (1992-1993). “Phénoménologie naturalisée et morphodynamique: La fonction cognitive du synthétique ‘a priori’.” Intellectica 17: 79-126.

PETITOT, JEAN (1994). “Phénoménologue computationelle et objectivité morphologique” In J. Proust & E. Schwartz (Eds.) La Connaissance philosophique: Essais sur l’oeuvre de Gilles-Gaston Granger (pp. 213-248). Paris: Presses Universitaires de France.

PETITOT, JEAN (1999). “Morphological Eidetics for a Phenomenology of Perception.” In Petitot et al. (1999): 330-371.

PETITOT, JEAN (2003). “Neurogeometry of V1 and Kanizsa Contours.” Axiomathes 13: 347-363.

PETITOT, JEAN (2004). “Géométrie et vision dans ‘Ding und Raum’ de Husserl.” Intellectica 2: 139-167.

PETITOT, JEAN (2008). Neurogéométrie de la vision: modèles mathématique et physique des architectures fonctionelles. Paris: Les Editions de l’École Polytechnique.

PETITOT, JEAN (2011). Cognitive Morphodynamics: Dynamical Morphological Models of Constituency in Perception and Syntax. Bern: Peter Lang.

PETITOT, JEAN, VARELA, FRANCISCO J., PACHOUD, BERNARD & ROY, JEAN-MICHEL (Eds.) (1999). Naturalizing Phenomenology: Issues in Contemporary Phenomenology and Cognitive Science. Stanford: Stanford University Press.

PHILIPONA, DAVID, O’REGAN, KEVIN, & NADAL, JEAN-PIERRE (2003). “Is There Something Out There? Inferring Space from Sensorimotor Dependencies.” Neural Computation 15(9): 2029-2049.

PHILLIPS, IAN (2015). “No Watershed for Overflow: Recent Work on the Richness of Consciousness.” Philosophical Psychology. http://dx.doi.org/10.1080/09515089.2015.1079604

PICCININI, GUALTIERO (2007). “Computing Mechanisms.” Philosophy of Science 74(4): 501-526.

PICCININI, GUALTIERO & CRAVER, CARL F. (2011). “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches.” Synthese 183: 283-311. DOI 10.1007/s11229-011-9898-4

PINNA, BAINGIO & DEIANA, KATIA (2015). “Material Properties from Contours: New Insights on Object Perception.” Vision Research 115: 280-301. http://dx.doi.org/10.1016/j.visres.2015.03.014.

PLATE, JAN (2007). “An Analysis of the Binding Problem.” Philosophical Psychology 20(6): 773-792.

POMERANTZ, JAMES R., SAGER, LAWRENCE, & STOEVER, ROBERT J. (1977). “Perception of Wholes and of Their Component Parts: Some Configural Superiority Effects.” Journal of Experimental Psychology: Human Perception and Performance 3(3): 422-435.

POMERANTZ, JAMES R. & CRAGIN, ANNA I. (2015). “Emergent Features and Feature Combination.” In J. Wagemans (2015): 89-107.

POPPER, KARL (1994). Logik der Forschung. Tübingen: J.C.B. Mohr.

PORT, ROBERT & VAN GELDER, TIM (Eds.) (1995). Mind as Motion: Explorations in the Dynamics of Cognition. Cambridge, MA: MIT Press.

POSTLE, BRADLEY (2009). “The Hippocampus, Memory, and Consciousness.” In S. Laureys & G. Tononi (Eds.) The Neurology of Consciousness: Cognitive Neuroscience and Neuropathology (pp. 326-338). 1st edition. Oxford: Elsevier.

POZNIC, MICHAEL (2017). “Thin versus Thick Accounts of Scientific Representations.” Synthese. DOI 10.1007/s11229-017-1374-3.

PRIBRAM, KARL H. (1984). “What Is Iso and What is Morphic in Isomorphism?” Psychological Research 46: 329-332.

PUTNAM, HILARY (1960). “Minds and Machines.” In S. Hook (Ed.) Dimensions of Mind (pp. 57-80). New York: New York University Press.

PUTNAM, HILARY (1967). “Psychological Predicates.” In W.H. Capitan & D.D. Merrill (Eds.) Art, Mind and Religion (pp. 37-48). Pittsburgh: University of Pittsburgh Press.

PUTNAM, HILARY (1973). “Philosophy and our Mental Life” Rep. in Id. Mind, Language, and Reality: Philosophical Papers, vol. 2. (pp. 291-303). Cambridge: Cambridge University Press, 1975.

PUTNAM, HILARY (1992). Renewing Philosophy. Cambridge, MA: Harvard University Press.

PUTNAM, HILARY (2010). “Science and Philosophy.” In M. De Caro & D. Macarthur (Eds.) Science, Naturalism, and the Problem of Normativity. New York: Columbia University Press.

PYLYSHYN, ZENON (2003). Seeing and Visualizing. Cambridge, MA: MIT Press.

PYLYSHYN, ZENON. (2004). “Some Puzzling Findings in Multiple Object Tracking: I. Tracking Without Keeping Track of Object Identities.” Visual Cognition 11(7): 801-822.

PYLYSHYN, ZENON (2007). Things and Places. Cambridge, MA: MIT Press.

QUINE, WILLARD V.O. (1969). “Natural Kinds.” In Id. (Ed.) Ontological Relativity and Other Essays (pp. 114-138). New York: Columbia University Press.

QUIROGA, QUIAN R., MUKAMEL, R., ISHAM, E.A., MALACH, R., & FRIED, I. (2008). “Human single-neuron responses at the threshold of conscious recognition.” Proceedings of the National Academy of Sciences 105(9): 3599-3604.

RAMACHANDRAN, VILAYANUR (1992). “Filling In Gaps in Logic: Some Comments on Dennett.” Consciousness and Cognition 2(2): 165-168.

RAMACHANDRAN, VILAYANUR S. & GREGORY, RICHARD L. (1991). “Perceptual Filling In of Artificially Induced Scotomas in Human Vision.” Nature 350: 699-702.

RASHBROOK, OLIVER (2012). “Diachronic and Synchronic Unity.” Philosophical Studies. DOI 10.1007/s11098-012-9865-z.

RATHKOPF, CHARLES (2015). “Network Representation and Complex Systems.” Synthese. Doi 10.1007/s11229-015-0726-0.

RATLIFF, FLOYD & SIROVICH, LAWRENCE (1978). “Equivalence Classes of Visual Stimuli” Vision Research 18(7): 845-851.

REES, GERAINT (2001). “Neuroimaging of Visual Awareness in Patients and Normal Subjects.” Current Opinion in Neurobiology 11: 150-156.

REES, GERAINT (2016). “Neural Correlates of Visual Consciousness.” In S. Laureys, O. Gosseries & G. Tononi (Eds.) The Neurology of Consciousness (pp. 61-70). 2nd edition. Oxford: Elsevier.

REES, G., WOJCIULIK, E., CLARKE, K., HUSAIN, M., FRITH, C. & DRIVER, J. (2000). “Unconscious Activations of Visual Cortex in the Damaged Right Hemisphere of a Parietal Patient with Extinction.” Brain 123(8): 1624-1633.

REES, GERAINT, KREIMAN, GABRIEL & KOCH, CHRISTOF (2002). “Neural Correlates of Consciousness in Humans” Nature Reviews Neuroscience 3: 261-270. Doi:10.1038/nrn783

REES, GERAINT, WOJCIULIK, CLARKE, KAREN, HUSAIN, MASUD, FRITH, CHRIS & DRIVER, JON (2002). “Neural Correlates of Conscious and Unconscious Vision in Parietal Extinction.” Neurocase: The Neural Basis of Cognition 8(5): 387-393.

REICHER, MARIA E. (Ed.) (2009). States of Affairs. Heusenstamm: Ontos Verlag.

REVONSUO, ANTTI (1999). “Binding and the Phenomenal Unity of Consciousness.” Consciousness and Cognition 8(2): 173-185.

REVONSUO, ANTTI (2000). “Prospects for a Scientific Research Program on Consciousness.” In Metzinger (2000): 57-75.

REVONSUO, ANTTI (2015). “The Future of Consciousness Science: From empirical correlations to theoretical explanation.” In Miller (2015a): 260-270.

ROBB, DAVID (2005). “Qualitative Unity and the Bundle Theory.” The Monist 88(4): 466-492.

RODRIGUEZ-PEREYRA, GONZALO (2000). “What is the Problem of Universals?” Mind 109(434), 255-273.

RODRIGUEZ-PEREYRA, GONZALO (2001). “Resemblance Nominalism and Russell’s Regress” Australasian Journal of Philosophy 79(3): 395-408.

RODRIGUEZ-PEREYRA, GONZALO (2002a) “The Problem of Universals and the Limits of Conceptual Analysis.” Philosophical Papers 31(1): 39-47.

RODRIGUEZ-PEREYRA, GONZALO (2002b). Resemblance Nominalism: A Solution to the Problem of Universals. Oxford: Oxford University Press.

RODRIGUEZ-PEREYRA, GONZALO (2015). “Nominalism in Metaphysics” in E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2016/entries/nominalism-metaphysics/

RORTY, RICHARD (1979). Philosophy and the Mirror of Nature. Princeton: Princeton University Press.

ROSENTHAL, DAVID (1986). “Explaining Consciousness.” Rep. in D. Chalmers (Ed.) (2002) Philosophy of Mind: Classical and Contemporary Readings (pp. 406-421). New York: Oxford University Press.

ROSENTHAL, DAVID (1990). “A Theory of Consciousness.” In N. Block, O. Flanagan, G. Güzeldere (Eds.) The Nature of Consciousness (pp. 773-788). Cambridge MA: MIT Press.

ROSCH, ELEANOR, MERVIS, CAROLYN B., GRAY, WAYNE D., JOHNSON, DAVID M. & BOYES-BRAEM, PENNY (1976). “Basic Objects in Natural Categories.” Cognitive Psychology 8: 382-439.

ROSE, DAVID (2006). Consciousness: Philosophical, Psychological, and Neural Theories. New York: Oxford University Press.

ROSEN, GIDEON (2017). “Abstract Objects.” In E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2017/entries/abstract-objects/.

ROSS, LAUREN (2015). “Dynamical Models and Explanation in Neuroscience.” Philosophy of Science 82(1): 32-54.

ROY, JEAN-MICHEL, PETITOT, JEAN, PACHOUD, BERNARD & VARELA, FRANCISCO J. (1999). “Beyond the Gap: An Introduction to Naturalizing Phenomenology.” In Petitot et al. (1999): 1-80.

RUHNAU, PHILIPP, HAUSWALD, ANNE, & WEISZ, NATHAN (2014). “Investigating Ongoing Brain Oscillations and Their Influence On Conscious Perception – Network States and the Window to Consciousness.” Frontiers in psychology 5. Doi: 10.3389/fpsyg.2014.01230.

RUSSELL, BERTRAND (1912). The Problems of Philosophy. Oxford: Oxford University Press.

RUSSELL, BERTRAND (1940). An Inquiry Into Meaning and Truth. New York: Routledge.

SACKS, O., WASSERMAN, R.L., ZEKI, S. & SIEGEL, R.M. (1988). “Sudden Color Blindness of Cerebral Origin.” Society for Neuroscience Abstracts 14: 1251.

SALMON, WESLEY (1984). Scientific Explanation and the Causal Structure of the World. Princeton, NJ: Princeton University Press.

SALMON, WESLEY (1989). Four Decades of Scientific Explanation. Pittsburgh, PA: University of Pittsburgh Press.

SALVIA, STEFANO (2013). “Emil Wohlwill’s ‘Entdeckung des Isomorphismus’: A Nineteenth-Century ‘Material Biography’ of Crystallography.” Ambix 60(3): 255-284.

SAVELLOS, ELIAS & YALÇIN, ÜMIT (Eds.) (1995). Supervenience: New Essays. New York: Cambridge University Press.

SCHAFFER, JONATHAN (2001). “The Individuation of Tropes.” Australasian Journal of Philosophy 79(2): 247-257.

SCHEERER, ECKART (1994). “Psychoneural Isomorphism: Historical Background and Current Relevance.” Philosophical Psychology 7(2): 183-210.

SCHEIN, S.J. & DESIMONE, R. (1990). “Spectral Properties of V4 Neurons in the Macaque.” Journal of Neuroscience 10: 3369-3389.

SCHELLENBERG, SUSANNA (2010). “The Particularity and Phenomenology of Perceptual Experience” Philosophical Studies. DOI 10.1007/s11098-010-9540-1.

SCHELLENBERG, SUSANNA (2016). “Perceptual Particularity” Philosophy and Phenomenological Research. doi: 10.1111/phpr.12278.

SCHILLER, PETER H. (1996) “On the specificity of neurons and visual areas” Behavioral Brain Research 76: 21-35.

SCHLICHT, TOBIAS (2011). “Non-Conceptual Content and the Subjectivity of Consciousness.” International Journal of Philosophical Studies 19(3): 491-520.

SCHLICHT, TOBIAS (2012). “Phenomenal Consciousness, Attention and Accessibility.” Phenomenology and the Cognitive Sciences. DOI: 10.1007/s11097-012-9256-0.

SCHOLL, BRIAN, PYLYSHYN, ZENON & FRANCONERI, STEVEN (1999). “When Are Featural and Spatiotemporal Properties Encoded As A Result of Attentional Allocation?” Investigative Ophthalmology and Visual Science 40(4): 4195.

SCHWITZGEBEL, ERIC (2011). Perplexities of Consciousness. Cambridge, MA: MIT Press.

SEAGER, WILLIAM (1999). Theories of Consciousness. London: Routledge.

SEAGER, WILLIAM (2007). “A Brief History of the Philosophical Problem of Consciousness.” In Zelazo et al. (2007): 9-33.

SEAGER, WILLIAM & BOURGET, DAVID (2007). “Representationalism about Consciousness.” In Velmans & Schneider (2007): 261-276.

SEARLE, JOHN (1983). Intentionality. Cambridge: Cambridge University Press.

SEARLE, JOHN (2000). “Consciousness.” Annual Review of Neuroscience 23: 557-578.

SEARLE, JOHN (2004). Mind: A Brief Introduction. New York: Oxford University Press.

SEARLE, JOHN (2015). Seeing Things As They Are. New York: Oxford University Press.

SEKULER, R. (1996). “Motion Perception: A Modern View of Wertheimer’s 1912 Monograph.” Perception 25: 1243-1258.

SELLARS, WILFRID (1956). “Empiricism and the Philosophy of Mind.” Minnesota Studies in the Philosophy of Science, I: 253-329. Rep. in Id. (1997) Empiricism and the Philosophy of Mind. Cambridge, MA: Harvard University Press.

SETH, ANIL (2009). “Functions of Consciousness.” In W.P. Banks (Ed.) Encyclopedia of Consciousness, vol I (pp. 279-293). Oxford: Academic Press.

SETH, ANIL (2014). “A Predictive Processing Theory of Sensorimotor Contingencies: Explaining the Puzzle of Perceptual Presence and Its Absence in Synesthesia.” Cognitive Neuroscience 5(2): 97-118.

SHADLEN, M.N. & NEWSOME, W.T. (1994). “Noise, Neural Codes and Cortical Organization.” Current Opinion in Neurobiology 4: 569-579.

SHAGRIR, ORON (2012). “Structural Representations and the Brain.” British Journal for the Philosophy of Science 63: 519-545.

SHAGRIR, ORON & BECHTEL, WILLIAM (2017). “Marr’s Computational Level and Delineating Phenomena.” In D. Kaplan (Ed.) Explanation and Integration in Mind and Brain Sciences (pp. 190-214). New York: Oxford University Press.

SHEAR, JONATHAN (ED.) (1997). Explaining Consciousness: The Hard Problem. Thorverton: Imprint Academic.

SHEPARD, ROGER & CHIPMAN, SUSAN (1970). “Second-Order Isomorphism of Internal Representations: Shapes of States.” Cognitive Psychology 1: 1-17.

SHEPARD, ROGER & METZLER, JACQUELINE (1971). “Mental Rotation of Three-Dimensional Objects.” Science 171: 701-703.

SHOEMAKER, SYDNEY (1990). “Qualities and Qualia: What’s in the Mind?” Philosophy and Phenomenological Research 50: 109-131.

SHOEMAKER, SYDNEY (1994). “Phenomenal Character.” Noûs 28(1): 21-38.

SIDER, THEODORE (2006). “Bare Particulars” Philosophical Perspectives, 20: 387-397.

SIEGEL, SUSANNA (2002). “Review of A Theory of Sentience, by Austen Clark.” Philosophical Review 111(1).

SIEGEL, SUSANNA (2010a). The Contents of Visual Experience. New York: Oxford University Press.

SIEGEL, SUSANNA (2010b). “The Contents of Perception” In E.N. Zalta (Ed.) The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/spr2015/entries/perception-contents/.

SIGWART, JULIA, SUMNER-ROONEY, LAUREN, SCHWABE, ENRICO, HEß, MARTIN, BRENNAN, GERARD P. & SCHRÖDL, MICHAEL (2014). “A New Sensory Organ in ‘Primitive’ Molluscs (Polyplacophora: Lepidopleurida), and its Context in the Nervous System of Chitons.” Frontiers in Zoology 11(7). DOI: 10.1186/1742-9994-11-7.

SILVA, S., ALACOQUE, X., FOURCADE, O., SAMII, K., MARQUE, P., WOODS, R., MAZZIOTTA, J., CHOLLET, F. & LOUBINOUX, I. (2010). “Wakefulness and loss of awareness: Brain and brainstem interaction in the vegetative state.” Neurology 74(4): 313-320.

SILVERMAN, DAVID (2017). “Bodily skill and internal representation in sensorimotor perception.” Phenomenology and the Cognitive Sciences. DOI 10.1007/s11097-017-9503-5.

SIMEON, DAPHNE & ABUGEL, JEFFREY (2006). Feeling Unreal: Depersonalization Disorder and the Loss of the Self. Oxford: Oxford University Press.

SIMONS, PETER (1994). “Particulars in Particular Clothing: Three Trope Theories of Substance.” Philosophy and Phenomenological Research 54(3): 553-575.

SINGER, WOLF (1999). “Neuronal Synchrony: A Versatile Code for the Definition of Relations?” Neuron 24(1): 49-65.

SINGER, WOLF (2015). “The Ongoing Search for the Neuronal Correlates of Consciousness.” In T. Metzinger & J.M. Windt (Eds.) Open MIND. Frankfurt am Main: MIND Group.

SINGH, MANISH & HOFFMAN, DONALD D. (1997). “Constructing and Representing Visual Objects.” Trends in Cognitive Sciences 1(3): 98-102.

SKINNER, B.F. (1963) “Behaviorism at Fifty” Science 140: 951-958.

SKRZYPULEC, BŁAŻEJ (2015). “Two Types of Visual Objects.” Studia Humana 4(2): 26-38.

SMITH, ROGER (2013). Between Mind and Nature: A History of Psychology. London: Reaktion.

SMOLENSKY, P. (1988). “On the Proper Treatment of Connectionism.” Behavioral and Brain Sciences 11(1): 1-74.

SNOWDON, PAUL (1981). “Perception, Vision, and Causation” Proceedings of the Aristotelian Society 81: 175-192.

SOTERIOU, MATTHEW (2000). “The Particularity of Visual Perception” European Journal of Philosophy 8(2): 173-189.

SPENCER, J. & SCHÖNER, G. (2003). “Bridging the Representational Gap in the Dynamic Systems Approach to Development.” Developmental Science 6(4): 392-412.

STALNAKER, ROBERT (1976). “Possible Worlds” Noûs 10(1): 65-75.

STAUDACHER, ALEXANDER (2011). Das Problem der Wahrnehmung. Paderborn: Mentis Verlag.

STERRETT, SUSAN (2006). “Models of Machines and Models of Phenomena.” International Studies in the Philosophy of Science 20(1): 69-80.

STOUT, G.F. (1921). “The Nature of Universals and Propositions.” Proceedings of the British Academy 10: 157-172.

STRAWSON, PETER (1959). Individuals. London: Methuen.

STRAWSON, PETER (1979). “Perception and Its Objects.” In G.F. Macdonald (Ed.) Perception and Identity: Essays Presented to A. J. Ayer with His Replies (pp. 41-60). London: Macmillan. Rep. in Noë & Thompson (2002): 91-110.

SUÁREZ, MAURICIO (2010). “Scientific Representation.” Philosophy Compass 5(1): 91-101.

SUHM, CHRISTIAN, WAGEMANN, PHILIP & WESSELS, FLORIAN (2000). “Ontological Troubles with Facts and Objects in McDowell’s Mind and World.” In Willaschek (2000): 27-33.

SUN, RON & FRANKLIN, STAN (2007). “Computational Models of Consciousness: A Taxonomy and Some Examples.” In Zelazo et al. (2007): 151-174.

SUPPES, PATRICK (1960). “A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences.” Synthese 12(2-3): 287-301.

TACCA, MICHELA (2010). Seeing Objects. Paderborn: Mentis Verlag.

TAPIA, EVELINA, & BECK, DIANE M. (2014). “Probing Feedforward and Feedback Contributions to Awareness with Visual Masking and Transcranial Magnetic Stimulation.” Frontiers in psychology 5: 1-14.

TATLER, BENJAMIN, WADE, NICHOLAS J., KWAN, HOI, FINDLAY, JOHN & VELICHKOVSKY, BORIS (2010). “Yarbus, Eye Movements, and Vision.” i-Perception 1: 7-27. dx.doi.org/10.1068/i0382.

TAWADA, K. & MIYAMOTO, H. (1973). “Sensitivity of Paramecium Thermotaxis to Temperature Change.” Journal of Protozoology 20(2): 289-292.

TELLER, DAVIDA (1984). “Linking Propositions” Vision Research 24(10): 1233-1246.

TEXTOR, MARK (2009). “Are Particulars or States of Affairs Given in Perception?” In Reicher (2009): 129-150.

THELEN, ESTHER, SCHÖNER, GREGOR, SCHEIER, CHRISTIAN, & SMITH, LINDA B. (2001). “The dynamics of embodiment: A field theory of infant perseverative reaching.” Behavioral and Brain Sciences 24(1): 1-86.

THOMPSON, BRAD (2006). “Colour Constancy and Russellian Representationalism.” Australasian Journal of Philosophy 84(1): 75-94.

THOMPSON, BRAD (2010). “The Spatial Content of Experience.” Philosophy and Phenomenological Research 81(1): 146-184.

THOMPSON, BRAD (2009). “Senses for Senses.” Australasian Journal of Philosophy 87(1): 99-117.

THOMPSON, EVAN (2007). Mind in Life: Biology, Phenomenology, and the Sciences of the Mind. Cambridge, MA: Harvard University Press.

THOMSON, ROBERT (1972). The Pelican History of Psychology. 2nd edition. Harmondsworth: Penguin. Italian translation by Emilio Panaitescu: Storia della psicologia. Torino: Bollati Boringhieri.

TOCCAFONDI, FIORENZA (2000). Il tutto e le parti: La Gestaltpsychologie tra filosofia e ricerca sperimentale (1912- 1922). Milano: Franco Angeli.

TODOROVIĆ, DEJAN (1987). “The Craik-O’Brien-Cornsweet Effect: New Varieties and Their Theoretical Implications.” Perception & Psychophysics 42(6): 545-650.

TONG, FRANK & ENGEL, STEPHEN (2001). “Interocular Rivalry Revealed in the Human Cortical Blind-Spot Representation.” Nature 411: 195-199.

TONG, F., NAKAYAMA, K., VAUGHAN, J.T. & KANWISHER, N. (1998). “Binocular Rivalry and Visual Awareness in Human Extrastriate Cortex.” Neuron 21(4): 753-759.

TONG, F., MENG, M. & BLAKE, R. (2006). “Neural Bases of Binocular Rivalry.” Trends in Cognitive Science 10(11): 502-511.

TONONI, GIULIO & KOCH, CHRISTOF (2008). “The Neural Correlates of Consciousness: An Update.” Annals of the New York Academy of Sciences 1124: 239-261.

TRAVIS, CHARLES (2004). “The Silence of the Senses.” Mind 113(449): 57-94.

TREISMAN, ANNE (1986). “Properties, Parts, and Objects” In K.R. Boff (Ed.), Handbook of Perception and Human Performance, Vol.2 (pp. 35-70). John Wiley and Sons.

TREISMAN, ANNE (1988). “Features and Objects: The Fourteenth Bartlett Memorial lecture.” Quarterly Journal of Experimental Psychology A, 40(2): 201-237.

TREISMAN, ANNE (1996). “The Binding Problem.” Current Opinion Neurobiology 6: 171-178.

TREISMAN, ANNE & GELADE, GARRY (1980). “A Feature Integration Theory of Attention.” Cognitive Psychology 12: 97-136.

TVERSKY, BARBARA & HEMENWAY, KATHLEEN (1984). “Objects, Parts, and Categories.” Journal of Experimental Psychology: General 113(2): 169-193.

TVERSKY, BARBARA, ZACKS, JEFFREY M., & HARD, BRIDGETTE MARTIN (2008). “The Structure of Experience.” In T.F. Shipley & J.M. Zacks (Eds.) Oxford Series in Visual Cognition, Vol. 4: Understanding Events: From Perception to Action (pp. 436-464). http://psycnet.apa.org/doi/10.1093/acprof:oso/9780195188370.003.0019.

TYE, MICHAEL (1992). “Visual Qualia and Visual Content.” In T. Crane (Ed.) The Contents of Experience: Essays on Perception (pp. 158-176). Cambridge: Cambridge University Press.

TYE, MICHAEL (1995). Ten Problems of Consciousness. Cambridge, MA: MIT Press.

TYE, MICHAEL (2000). Color, Consciousness and Content. Cambridge, MA: MIT Press.

TYE, MICHAEL (2007). “Intentionalism and the Argument from No Common Content.” Philosophical Perspectives 21: 589-613.

ULLMAN, SHIMON & BASRI, RONEN (1991). “Recognition by Linear Combinations of Models.” IEEE Transactions on Pattern Analysis and Machine Intelligence: Special issue on interpretation of 3-D scenes—part I 13(10): 992-1006

UNGERLEIDER, LESLIE & MISHKIN, MORTIMER (1982). “Two Cortical Visual Systems.” In D.J. Ingle, M.A. Goodale, R.J.W. Mansfield (Eds.) Analysis of Visual Behaviour (pp. 549-585). Cambridge MA: MIT Press.

VALLICELLA, WILLIAM (2000). “Three Conceptions of States of Affairs.” Noûs 34(2): 237-259.

VAN ESSEN, DAVID & DEYOE, EDGAR (1991). “Concurrent Processing in the Primate Visual Cortex.” In M. Gazzaniga (Ed.) The Cognitive Neurosciences (pp. 383-400). Cambridge, MA: MIT Press.

VAN FRAASSEN, BAS (2002). The Empirical Stance. New Haven: Yale University Press.

VAN GELDER, TIM (1995). “What Might Cognition Be, If Not Computation?” Journal of Philosophy 92(7): 345-381.

VAN GELDER, TIM (1998). “The Dynamical Hypothesis in Cognitive Science.” Behavioral and Brain Sciences 21(5): 1-14.

VAN GELDER, TIM & PORT, ROBERT (1995). “It’s About Time: An Overview of the Dynamical Approach to Cognition.” In Port & Van Gelder (1995): 1-43.

VAN GULICK, ROBERT (2004). “Higher-order Global States (HOGS): An Alternative Higher-Order Model of Consciousness.” In R. Gennaro (Ed.) Higher-Order Theories of Consciousness (pp. 67-92). Amsterdam: John Benjamins.

VAN GULICK, ROBERT (2006). “Mirror Mirror — Is That All?” In U. Kriegel & K. Williford (Eds.) Self-Representational Approaches to Consciousness. Cambridge, MA: MIT Press.

VAN GULICK, ROBERT (2009). “Concepts of Consciousness.” In Bayne, Cleeremans & Wilkins (2009): 163-167.

VARELA, FRANCISCO J. (1997). “Neurophenomenology: A Methodological Remedy for the Hard Problem.” In J. Shear (Ed.) Explaining Consciousness: The Hard Problem. Cambridge, MA: MIT Press.

VARELA, FRANCISCO J., THOMPSON, EVAN & ROSCH, ELEANOR (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.

VARELA, FRANCISCO & SHEAR, JONATHAN (Eds.) (1999). The View from Within: First-Person Approaches to the Study of Consciousness. Thorverton: Imprint Academic.

VELDEMAN, JOHAN (2009). “Varieties of Phenomenal Externalism.” Teorema 28(1): 21-31.

VERDEJO, V.M. (2015). “The Systematicity Challenge to Anti-Representational Dynamicism.” Synthese 192(3): 701-722.

VERMERSCH, PIERRE (1999). “Introspection as Practice.” Journal of Consciousness Studies 6(2-3): 17-42.

VERNAZZANI, ALFREDO (2014). “Sensorimotor Laws, Mechanisms, and Representations.” In P. Bello, M. Guarini, M. McShane & B. Scassellati (Eds.) Proceedings of the 36th Annual Meeting of the Cognitive Science Society (pp. 3038-3042). Austin, TX: Cognitive Science Society.

VERNAZZANI, ALFREDO (2015). “Manipulating the Contents of Consciousness: A Mechanistic-Manipulationist Perspective on Content-NCC Research.” In C.D. Noelle, R. Dale, A.S. Warlaumont, J. Yoshimi, T. Matlock, C.D. Jennings & P.P. Maglio (Eds.) Proceedings of the 37th Annual Meeting of the Cognitive Science Society (pp. 2487-2492). Austin TX: Cognitive Science Society.

VERNAZZANI, ALFREDO (2016a). “Fenomenologia naturalizzata nello studio dell’esperienza cosciente.” Rivista di filosofia 107(1): 27-48.

VERNAZZANI, ALFREDO (2016b). “Psychoneural Isomorphism and Content-NCCs.” Gestalt Theory 38(2/3): 177-190.

VERNAZZANI, ALFREDO (2017) “The Structure of Sensorimotor Explanation.” Synthese. https://doi.org/10.1007/s11229-017-1664-9.

VERNAZZANI, ALFREDO (under review). “Philosophy as a Simulation of Nature: Modeling Perceptual Content.”

VERNAZZANI, ALFREDO (ms.). “Vividness and the Levels of Consciousness.” paper presented at the 88th Joint Session, Fitzwilliam College, University of Cambridge, July 2014.

VERNAZZANI, ALFREDO & MARCHI, FRANCESCO (ms.). “Cognitive Penetration, Bistable Figures, and the Structure of Visual Objects.”

VELMANS, MAX & SCHNEIDER, SUSAN (Eds.) (2007). The Blackwell Companion to Consciousness. Malden, MA: Wiley-Blackwell.

VON DER HEYDT, RÜDIGER, FRIEDMAN, HOWARD S., & ZHOU, HONG (2003). “Searching For the Neural Mechanism of Colour Filling-in.” In Pessoa & De Weerd (2003): 106-127.

VOSGERAU, GOTTFRIED, SCHLICHT, TOBIAS & NEWEN, ALBERT (2008). “Orthogonality of Phenomenality and Content.” American Philosophical Quarterly 45(4): 309-328.

VUILLEUMIER, PATRICK & RAFAL, ROBERT (2000). “A Systematic Study of Visual Extinction: Between- and Within-Field Deficits of Attention in Hemispatial Neglect.” Brain 123: 1263-1279.

WAGEMANS, JOHAN (Ed.) (2015). The Oxford Handbook of Perceptual Organization. Oxford: Oxford University Press.

WALMSLEY, JOEL (2008). “Explanation in Dynamical Cognitive Science.” Minds and Machines 18(3): 331-348.

WARREN, RICHARD M. (1970) “Perceptual Restoration of Missing Speech Sounds.” Science 167: 392-393.

WEIL, RIMONA & REES, GERAINT (2011). “A new taxonomy for perceptual filling-in.” Brain Research Reviews 67: 40-55.

WEISBERG, MICHAEL (2013). Simulation and Similarity. Cambridge, MA: MIT Press.

WEISBERG, MICHAEL (2007). “Who is a Modeler?” British Journal for the Philosophy of Science 58(2): 207-233.

WERTHEIMER, MAX (1912). “Experimentelle Studien über das Sehen von Bewegung.” Zeitschrift für Psychologie 61(1): 161-265.

WEISSTEIN, E.W. (2009). CRC Encyclopedia of Mathematics. 3rd edition. Boca Raton: CRC Press.

WILLASCHEK, MARCUS (ED.) (2000). John McDowell: Reason and Nature. Münster: LIT Verlag.

WILLIAMS, DONALD C. (1953). “On the Elements of Being: I” The Review of Metaphysics 7(1): 3-18.

WILLIAMS, BERNARD (2006). “Philosophy as a Humanistic Discipline.” In Id. Philosophy as a Humanistic Discipline. Princeton: Princeton University Press.

WILLIAMSON, TIMOTHY (2000). Knowledge and Its Limits. New York: Oxford University Press.

WILLIAMSON, TIMOTHY (2017). “Model-Building in Philosophy.” In R. Blackford & D. Broderick (Eds.) Philosophy’s Future (pp. 159-173). Oxford: Wiley.

WIMSATT, WILLIAM C. (2007). Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality. Cambridge, MA: Harvard University Press.

WINTHER, RASMUS GRONFELDT (2006). “Parts and Theories in Compositional Biology.” Biology and Philosophy 21: 471-499.

WITTGENSTEIN, LUDWIG (1958). Philosophical Investigations. Trans. by G.E.M. Anscombe. Oxford: Blackwell.

WITTENBERG, GEORGE F. (2010). “Experience, Cortical Remapping, and Recovery in Brain Disease” Neurobiology of Disease 37(2). Doi:10.1016/j.nbd.2009.09.007.

WOLFE, JEREMY (1998). “Visual Search.” In H. Pashler (Ed.) Attention (pp. 13-73). London, UK: Psychology Press.

WOODWARD, JAMES (1989). “Data and Phenomena.” Synthese 79: 393-472.

WOODWARD, JAMES (1997). “Explanation, Invariance, and Intervention.” Philosophy of Science 66: 26-41.

WOODWARD, JAMES (2000). “Data, Phenomena, and Reliability.” Philosophy of Science 67: 163-179.

WOODWARD, JAMES (2001). “Law and Explanation in Biology: Invariance is the Kind of Stability That Matters.” Philosophy of Science 68(1): 1-20.

WOODWARD, JAMES (2003). Making Things Happen. New York: Oxford University Press.

WOODWARD, JAMES (2010). “Data, Phenomena, Signal, and Noise.” Philosophy of Science 77(5): 792-803.

WOODWARD, JAMES (2011). “Data and Phenomena: A Restatement and Defense.” Synthese 182(1): 165-179.

WOODWARD, JAMES & HITCHCOCK, CHRISTOPHER (2003a). “Explanatory Generalizations, Part I: A Counterfactual Account.” Noûs 37: 1-24

WOODWARD, JAMES & HITCHCOCK, CHRISTOPHER (2003b). “Explanatory Generalizations, Part II: Plumbing Explanatory Depth.” Noûs 37: 181-199.

WRIGHT, CORY (2012). “Mechanistic Explanation without the Ontic Conception.” European Journal for Philosophy of Science. DOI 10.1007/s13194-012-0048-8.

WRIGHT, CORY & BECHTEL, WILLIAM (2007). “Mechanisms and Psychological Explanation.” In P. Thagard (Ed.) Philosophy of Psychology and Cognitive Science (pp. 31-79). Amsterdam: Elsevier.

WRIGHT, LARRY (1973). “Functions.” The Philosophical Review 82(2): 139-168.

WUNDERLICH, K., SCHNEIDER, K.A., & KASTNER, S. (2005). “Neural Correlates of Binocular Rivalry in the Human Lateral Geniculate Nucleus.” Nature Neuroscience 8: 1595-1602.

YARBUS, ALFRED L. (1957). “A new method of studying the activity of various parts of the retina.” Biofizika 2: 165-167.

YARBUS, ALFRED L. (1967). Eye Movements and Vision. Translated from Russian by Basil Haigh; translation edited by Lorrin A. Riggs. New York: Plenum Press.

ZARETSKAYA, NATALIA, THIELSCHER, AXEL, LOGOTHETIS, NIKOS & BARTELS, ANDREAS (2010). “Disrupting Parietal Function Prolongs Dominance Durations in Binocular Rivalry.” Current Biology 20: 2106-2111.

ZEDNIK, CARLOS (2011). “The Nature of Dynamical Explanation.” Philosophy of Science 78(2): 238-263.

ZEKI, SEMIR (1983). “Colour Coding in the Cerebral Cortex: the Reaction of Cells in Monkey Visual Cortex to Wavelengths and Colours.” Neuroscience 9: 741-765.

ZEKI, SEMIR (1990). “A Century of Cerebral Achromatopsia”. Brain 113: 1721-1777.

ZEKI, SEMIR (2007). “A Theory of Micro-Consciousness.” In M. Velmans & S. Schneider (Eds.) The Blackwell Companion to Consciousness (pp. 580-588). Malden, MA: Blackwell.

ZEKI, SEMIR & BARTELS, ANDREAS (1999). “Toward a Theory of Visual Consciousness.” Consciousness and Cognition 8: 225-259.

ZEKI, SEMIR & SHIPP, STEWART (1988). “The Functional Logic of Cortical Connections” Nature 335(22): 311-317.

ZEKI, S., J.D.G. WATSON, C.J. LUECK, K.J. FRISTON, C. KENNARD, & R.S.J. FRACKOWIAK (1991). “A Direct Demonstration of Functional Specialization in Human Visual Cortex.” The Journal of Neuroscience 11: 641-649.

ZELAZO, PHILIP DAVID, MOSCOVITCH, MORRIS & THOMPSON, EVAN (Eds.) (2007). The Cambridge Handbook of Consciousness. New York: Cambridge University Press.

ZIMMERMAN, DEAN W. (2008). “Distinct Indiscernibles and the Bundle Theory.” In P. Van Inwagen & D. W. Zimmerman (Eds.) Metaphysics: The Big Questions (pp. 105-111). Singapore: Blackwell Publishing.