
Book of Abstracts

Organizers: Sabine Ammon (TU Berlin), Alfred Nordmann (TU Darmstadt)

Team: Jutta Braun, Benjamin Müller, Janine Gondolf, Stefanie Cosgrove, and many TU student assistants

Editorial Support: Timm Behnecke, Jonathan Geiger, Michael Marquardt

Design: Maureen Belaski

Program Committee

Albrecht Fritzsche, Felicitas Krämer, Mieke Boon, Alexander Friedrich, Glen Miller, Neelke Doorn, Andreas Brenneis, Guoyu Wang, Nicola Mößner, Alfred Nordmann, Henry Dicks, Peter Kroes, Anais De Keijser, Ibo van den Poel, Pieter Vermaas, Andreas Kaminski, Ingo Schulz-Schaeffer, Rafaela Hillerbrand, Anthonie Meijers, Jan Passoth, Sabine Ammon, Astrid Schwarz, Jan C. Schmidt, Sabine Thürmel, Barbara Osimani, Johannes Lenhard, Sascha Dickel, Benjamin Müller, Jonathan Geiger, Shannon Vallor, Bruno Gransche, Judith Simon, Sjoerd Zwart, Carl Mitcham, Karen Kastenhofer, Suzana Alpsancar, Catrin Misselhorn, Klaus Kornwachs, Timm Behnecke, Cheryce von Xylander, Lara Huber, Tobias Matzner, Christof Rapp, Maarten Franssen, Vincent Blok, Christoph Hubig, Mark Coeckelbergh, Wybo Houkes, Cornelius Borck, Martina Hessler, Diane Michelfelder, Michael Nagenborg

This program is current as of June 13; there will be last-minute changes.

Technische Universität Darmstadt, June 14 to 17, 2017

Table of Contents

PROGRAM

ABSTRACTS

Panels
Panel: Acoustic Technics
Panel: AI and Expert: Expert Knowledge in the Age of Artificial Intelligence
Panel: Assembling a Socio-Technological Future: The Challenge of Russia
Panel: Conceptualising aero-material assemblages of the vertical gaze
Panel: Critical perspectives on the role of mathematics in data-science
Panel: Ethics and Politics Within Cloud Computing and other Computing Environments
Panel: Postphenomenological Research 1: Theorizing Bodies
Panel: Postphenomenological Research 2: Technologized Space
Panel: Postphenomenological Research 3: Augmented Realities
Panel: Postphenomenological Research 4
Panel: Reflections on Andrew Feenberg’s Philosophy of Technology
Panel: Relating to things that relate to us 1
Panel: Relating to things that relate to us – ethics and pragmatics
Panel: Research in a Design Mode – In Conversation with Ann Johnson (1965–2016)
Panel: The language of biofacts: grammars of agricultural “things”

Submitted Papers

Special Tracks
Track: Anthropocene (1)
Track: Anthropocene (2)
Track: Anthropocene (3)
Track: Biomimicry: applied
Track: Critical Infrastructures (1)
Track: Critical Infrastructures (2)
Track: Engineering Epistemology / general
Track: Engineering Epistemology / special
Track: Knowledge Productions / Action in Engineering Knowledge
Track: Knowledge Productions / Models and Simulation
Track: Pedagogical Pragmatics (1)
Track: Pedagogical Pragmatics (2)
Track: Technology and the City: (Infra-)Structures
Track: Technology and the City: Architecture and Design
Track: Technology and the City: Exploring the City
Track: Technology and the City: What makes a smart city
Track: Technology Translations (1)
Track: Technology Translations (2)
Track: Technology Translations (3)
Track: Technology Translations (4)

LIST OF PARTICIPANTS

THE CONFERENCE IN A NUTSHELL

MAPS OF THE AREA

Program


Wednesday 2:00 to 4:00 pm

1 Panel: The Language of Biofacts: Grammars of Agricultural “Things” Darmstadtium 3.09 helium Nicole Karafyllis: “Seed Banking - A Modern Grammar of Collecting Plant Origins as Biofacts” Karin Zachmann: “Agency of Biofacts and Rules of Grammar – Introducing Nuclear Techniques in Agriculture to Africa” Bernhard Gill: “The Grammar of Patent Law and the Dynamics of Living Organisms” Chair: Nicole Karafyllis

2 Round Table: Research in a Design Mode – In Conversation with Ann Johnson (1965–2016) Darmstadtium 3.07 argon Johannes Lenhard, Leah McClimans, Alfred Nordmann, Joe Pitt – and others

3 Submitted papers: Articulations Darmstadtium 2.03 vanadium Pieter Vermaas and Sara Eloy: “Grammar and Quality: Assessing the Design Quality of Grammar-System Generated Architecture” Algirdas Budrevicius: “Hylomorphism and Cartesian Coordination as Basis for the Definition of Thing and for the Grammar of Things” Michael Poznic: “Architectural Modeling: Interplay of Designing and Representing” Chair: Timm Behnecke

4 Submitted papers: Extended Selves Darmstadtium 3.05 radon Joshua Earle: “Identity and Normative Danger in Transhumanist Rhetoric” JI Haiqing: “Human Enhancement Ethics and Naturalistic Fallacy” Terry Bristol: “Hawking Evolution: Survival of the Weaker” Federica Buongiorno: “Lifelogging and the ‘ of the Self’ – Some Phenomenological Remarks on Digital Processes of Subjectification” Chair:

5 Submitted papers: Good Governance Darmstadtium 3.03 germanium Sven Ove Hansson: “How to perform an ethical risk analysis” CONG Hangqing and CHEN Ximeng: “Social Governance in Chinese Engineering” Wha-Chul Son: “SDGs in the Era of the 4th Industrial Revolution” Chair:

6 Submitted papers: Ontological Practices Darmstadtium 2.02 chromium Mark Coeckelbergh: “Magic Devices, Gothic Robots, and Romantic People – Understanding the Use of Contemporary Technologies in the Light of Romanticism” Andreas Beinsteiner: “Material Irritations in the Grammar of Things – Post-ontotheological Considerations” Alexandra Karakas: “Applying Object-Oriented to Technical Artefacts” Mark Young: “Maintenance: Technology in Process” Chair: Suzana Alpsancar

7 Submitted papers: Technomorality Darmstadtium 3.08 neon Neelke Doorn, Colleen Murphy and Paolo Gardoni: “Translating Social Resilience to Engineering – How can resilient engineering systems contribute to social justice?” Cristian Puga Gonzalez: “The Techno-Moral Change Revised: How Does Technology Affect Moral Philosophy” Kristene Unsworth: “Smart Cities and Participation in the Polis” Glen Miller: “Superabundance and Collective Responsibility in Engineered Systems” Chair:


8 Track: A New Planetary Orientation for Philosophy of Technology in the Anthropocene? (1) Darmstadtium 2.04 titanium Carl Mitcham: “Engineering Ethics: From Thinking Small to Big” Jochem Zwier and Vincent Blok: “Gaia’s Garbage – Technological Insistence and Existence in the Anthropocene” Steffen Stolzenberger: “Prospects of the Technoscene. Critical Remarks on Sloterdijk’s Mystification of History” Chair: Jochem Zwier

9 Track: Engineering Knowledge (1) – general Darmstadtium 3.02 hassium Thomas Zoglauer: “The of Technological Knowledge” Manjari Chakrabarty: “The Critical Function of Science in the Origins of the Steam Engine” David Montminy and Gabriel Meunier: “Functional stance towards science and engineering” Kiyotaka Naoe: “Engineering and Tacit Knowledge” Chair: Peter Kroes

Wednesday 4:00 to 4:45 pm

coffee break and change of venue

Wednesday 4:45 to 6:00 pm

Opening Session Maschinenhaus 122

Wednesday 6:00 to 7:00 pm

Composition Lecture 1 – George Stiny: “Design as Art and Use” Maschinenhaus 122 Chair: Sabine Ammon

Welcome by Prof. Dr. Hans Jürgen Prömel, President of TU Darmstadt Maschinenhaus 22 to 24

Reception until 9 pm


Thursday 9:00 to 10:00 am

Composition Lecture 2 – Pamela Andanda: “The Integration of Traditional Knowledge into the Intellectual Property Regime using Value-Added Justification and Grammatical Analysis of Derivative Works” Maschinenhaus 122 Chair: Carl Mitcham

Thursday 10:00 to 10:30 am

coffee break

Thursday 10:30 am to 12:30 pm

1 Panel: Relating to Things that Relate to Us (1) – Ontology and Epistemology Maschinenhaus 22 Heather Wiltse: “On the multi-intentionality of assembled things” Yoni Van Den Eede: “The Omnipresence of Breakdown: Object-Oriented Philosophy of Technology” Holly Robbins: “Design to Situate Im/material Context” Chair:

2 Panel: Critical Perspectives on the Role of Mathematics in Data-Science Classrooms 23 Karen François, Christian Hennig, Johannes Lenhard, Jean Paul Van Bendegem Chair: Patrick Allo

3 Panel: Reflections on Andrew Feenberg’s Philosophy of Technology Classrooms 123 Graeme Kirkpatrick: “Feenberg, Marx and the Dialectic of Technology Reform” Andrew Feenberg: “Lukács & the Philosophy of Technology” Edward Hamilton: “The Nietzschean strain in critical theory of technology” Darryl Cressman: “Contingency & Potential: STS & Critical Theory in Feenberg’s Philosophy of Technology” Chair: Darryl Cressman

4 Panel: Ethics and Politics Within Cloud Computing and other Computing Environments Classrooms 107 Javier Bustamante: “The Role of Social Networks and Cloud Computing in the Spread of Post-Trust Beliefs and Developments in Fundamentalist, Populist and Radical Discourses” Jose Barrientos: “E-Wisdom and E-Experience: Is it possible to develop wisdom and life experience in computing environments?” Emilio Suñé Llinás: title tba Chair: Javier Bustamante

5 Submitted papers: Designing Things that Design Us Maschinenhaus 23 Philip Nickel: “Distributed consent for mHealth” Wessel Reijers: “Virtue Sensitive Design of Personalised Virtual Assistants” Bruno Gransche: “Things get to know who we are and tie us down to who we were” Vincent C. Müller and Alexandre Erler: “The digital goes 3D – and what are the risks?” Chair: Hajo Greif


6 Submitted papers: Disruptive Technologies Maschinenhaus 24 Eric Kerr and Vivek Kant: “Breaking down malfunction: What happens when socio-technical systems don’t work?” Stefan Böschen: “Grammar of Things – A Field-Theoretical Perspective” Alexandra Dias Santos: “From a rooted southern perspective: the life and thought of Angolan Ruy Duarte de Carvalho” Chair: Jan C. Schmidt

7 Submitted papers: Prototyping Classrooms 209 Tatjana Milivojevic and Ivana Ercegovac: “Man-computer Gestalt and The Noosphere Theory: Will the Computer Be Humanized or the Man Computerized?” JIANG Xiaohui and WANG Jian: “On Possible Ways to Working through the ‘uncanny valley’- Effect in Humanoids Design” Robert-Jan Geerts: “The polder as Precedent for Geoengineering” Chair:

8 Submitted papers: Responsible Agency Classrooms 223 YU Xue: “The Moral Agency and Moral Responsibility of Self-driving Cars” Jan Peter Bergen: “In Light of Levinas: Rethinking Innovation in Relation to Responsibility” Rene von Schomberg: “Global perspectives on Responsible Innovation” Martin Peterson: “The Geometry of Engineering Ethics” Chair: Annette Ripper

9 Track: A New Planetary Orientation for Philosophy of Technology in the Anthropocene? (2) Classrooms 113 Arianne Conty: “Who is to Interpret the Anthropocene? Nature and in the Academy” Alexandre Leskanich: “The Technology of History and the Grammar of the Anthropocene” Jessica Imanaka: “Laudato Si’, Technologies of Power, and Environmental Justice” Jose Aravena-Reyes and Ailton Krenak: “Toward an Engineering of Care” Chair: Pieter Lemmens

10 Track: Engineering Knowledge (2) – special Classrooms 100 Claudia Eckert and Martin Stacey: “Object References as Engineering Knowledge” Viola Schiaffonati: “Engineering Knowledge and Experimental Method – The Case of Experimental Computer Engineering” Renee Smit: “Idealisation in Engineering Science Knowledge” Jan Kratzer, Claudia Fleck and Guenther Luxbacher: “Grammar of Mechanical Failure Cases – a Diachronic and Linguistic Comparison” Chair: Rafaela Hillerbrand

11 Track: Technology and the City (1) – Architecture and Design Classrooms 109 Sara Eloy and Pieter Vermaas: “Over-the-counter housing design: the city when the gap between architects and laypersons narrows” Taylor Stone: “The Morality of Darkness: Urban nights, light pollution, and evolving values” Margoh Gonzalez Woge: “Technological Environmentality” Felipe Loureiro: “Binding Surfaces: Architecture and The Interplay of Walls and Screens” Chair: Michael Nagenborg


Thursday 12:30 to 2:00 pm

lunch break – SPT Board Meeting in Classrooms 109

Thursday 2:00 to 4:00 pm

1 Panel: Assembling a Socio-Technological Future: The Challenge of Russia Classrooms 107 Ilya Sidorchuk: “The Wonders of Technology in Soviet Political Propaganda” Anna Mazurenko and Natalia Nikiforova: “The Russian National Technology Initiative as a Site for Sociotechnical Imaginaries” Dmitri Popov: “Political Modernization and Its Technological Limits” Yulia Obukhova: “The Role of Health in Sociotechnical Imaginaries” Chair: Alfred Nordmann

2 Panel: Relating to Things that Relate to Us (2) – Ethics and Pragmatics Maschinenhaus 22

Michel Puech: “Attachment to things, artifacts, devices, commodities: an inconvenient ethics of the ordinary” Steven Dorrestijn: “Ethics of technology below and above reason: The case of living with smart technologies” Diane Michelfelder: “The New Assisted Living: Relating to Alexa Relating to Us” Fanny Verrax: “From dealing with virtual others to the construction of the self: Videogames as an ethical sandbox” Chair:

3 Submitted papers: Cryogenic Probes Classrooms 223 Jol Thomson: “G24|0vßß" Alexander Friedrich: “The Grammar of Cryopreservation” Elena Chestnova: “Time travel in seven mile boots: temporality and distance in Gottfried Semper’s history writing” Chair: Cheryce von Xylander

4 Track: A New Planetary Orientation for Philosophy of Technology in the Anthropocene? (3) Classrooms 113 Bronislaw Szerszynski: “Technology as a planetary phenomenon” Pieter Lemmens: “Re-Imagining the Noosphere. Reflections on Digital Network Technology, Energy and Collective Intelligence in the Emerging Ecotechnological Age” Hub Zwart: “From the nadir of negativity towards the cusp of reconciliation: A dialectical (Hegelian-Teilhardian) assessment of the anthropocenic challenge” Iñigo Galzacorta, Luis Garagalza and Hannot Rodríguez: “Rethinking Technology in the Anthropocene: Relations and ‘Gravitational Forces’” Chair: Hub Zwart

5 Track: The Philosophy of Biomimicry – theoretical and applied Maschinenhaus 24 Aníbal Fuentes Palacios and José Hernández Vargas: “Prose and verse – Accuracy and creativity in the biomimetic transference.” Henry Dicks: “Biomimicry: An Alternative Path for Reconciling Art and Technology” Milutin Stojanovic: “Biomimicry in Agriculture: Is the ecological system-design model the future agricultural paradigm?” Sara Castiglioni: “The Fifth Wave – Applying Biomimicry to Business” Chair: Lucien von Schomberg


6 Track: Questioning the Grammar of ‘Critical’ Infrastructures (1) Classrooms 116 Anais De Keijser: “The criticality of people as infrastructure: Creating resilience in the provision of urban services through System-D” Marcel Müller: “Reading Cities – Building Blocks for a Grammar of Technical Structures” Florian Maurer and Albrecht Fritzsche: “System Protection and the Benefit of Others – a Service Perspective” Chair: Anshika Suri

7 Track: Artefacts, Design Practices, Knowledge Productions (1) – Models and Simulation Classrooms 100 Sabine Ammon and Henning Meyer: “Simulation Models as Epistemic Tools in Product Development: A Case Study” Rafaela Hillerbrand and Claudia Eckert: “Models in Engineering Design Processes” Mieke Boon and Miles MacLeod: “Model-Based-Reasoning (MBR) as a skill for interdisciplinary approaches to socio-technological design and research” Hans Hasse and Johannes Lenhard: “Ennobling Ad Hoc Modifications. Confirmation, Simulation, and Adjustable Parameters” Chair: Sabine Ammon

8 Track: Pedagogical Pragmatics – Teaching Ethics and Philosophy of Technology (1) Maschinenhaus 23 Ashley Shew: “Teaching Technology & Disability” Diana Adela Martin, Eddie Conlon and Brian Bowe: “A Critical Reflection on Extending the Case Study Method in the Teaching of Engineering Ethics” Atsushi Fujiki: “Developing pedagogical method of engineering ethics integrated with environmental ethics” Thomas M Powers: “Globalizing Science and Engineering Ethics: Convergence or Equilibrium?” Chair:

9 Track: Technology and the City (2) – (Infra-)Structures Classrooms 109 Michael Nagenborg: “Elevators as Urban Technologies: Past, Present, and Future” Vlad Niculescu-Dinca: “Towards a Sedimentology of Infrastructures: A Geological Approach for Understanding the City” Alessia Calafiore, Nicola Guarino and Guido Boella: “Recognizing Urban Forms through the Prism of Roles Theory” Shane Epting: “Automated Vehicles and Transportation Justice: Two Challenges for Planners and Engineers” Chair: Margoh Gonzalez Woge

10 Track: Technology Translations: Philosophical Interactions across Disciplines and Application Domains (1) Classrooms 123 Ole Kliemann: “The Human in the ” Jan Torpus, Christiane Heibach and Andreas Simon: “Human Adaptivity to Responsive Environments” Cecilia Moloney: “Self-Attention in Learning and Doing Digital Signal Processing: Why and How” Beth Preston: “Sustainable technology in action: Intractable users and behavior-steering technologies” Chair: Albrecht Fritzsche

11 Track: Postphenomenological Research (1) – Theorizing Bodies Classrooms 23 Jesper Aagaard: “Technohabitual Agency: Habits, Skills, and Egoless Agency” Ciano Aydin: “World Oriented Self-Formation: Postphenomenology meets Peircean Pragmatism” M. Kirk: “The Digitized Body: A Twenty-First Century Reconstitution of the Leib/Körper Distinction” Moderator: Don Ihde


Thursday 4:00 to 4:30 pm

coffee break

Thursday 4:30 to 5:30 pm

Composition Lecture 3 – Astrid Schwarz: “Gardening in the Anthropocene: Composing or Combining Things Together” Maschinenhaus 122 Chair: Cheryce von Xylander

Thursday 5:30 to 6:30 pm

Composition Lecture 4 – Christian Bök: “The Unkillable Poem” Maschinenhaus 122 Chair: Alfred Nordmann

Thursday 6:45 pm

Catch buses for evening session with food and drink at Weststadt Bar, returning on your own or by bus at 9:30pm or 10:30pm


Friday morning

acatech special session Shaping Techno-Organizational Processes: Energy – Information – Autonomy Darmstadtium 2.02 chromium

Friday 9:00 to 10:30 am – Darmstadtium 2.02 chromium

1 Klaus Kornwachs: “The Grammar of Techno-Organizational Processes” Eberhard Umbach: “People and Things – The German Energiewende: Do we do the right things and do we do them right?”

Friday 10:30 to 11:00 am

coffee break

Friday 11:00 am to 12:30 pm – Darmstadtium 2.02 chromium

1 Klaus Mainzer: “People and Things – Complexity 4.0 and Artificial Intelligence” Otthein Herzog: “People and Things – How far Autonomy may go? Autonomous Multi-Agent Systems, controlling Logistics and Production”

Friday 9:00 to 10:30 am

2 Panel: Acoustic Technics – in Art and Science Darmstadtium 2.03 vanadium Don Ihde: “Sonification in Science” Andrea Polli: “Witnessing Space” Pete Stollery: “Playing Sound; some thoughts on listening, embodiment, acousmatics and improvisation” Chair:

3 Panel: Conceptualising Aero-Material Assemblages of the Vertical Gaze (1) Darmstadtium 2.04 titanium Jutta Weber: “Working with Uncertainty. Understanding Contemporary Sensoric Control” Bronislaw Szerszynski: “Filling the volume: towards a grammar of aerial motion” Peter Novitzky and Peter-Paul Verbeek: “Domestic Drones from the Perspective of the Theory of Technological Mediation” Chair: Francisco Klauser

4 Submitted papers: Classics Darmstadtium 3.06 xenon Rayco Herrera: “Günther Anders and the ‘Promethean Gap’: Imagination as a Moral Task” LIU Zheng: “Chuang-tze's Philosophy of Technology and the Technological Mediation Theory” Chair: ZHANG Kang


5 Submitted papers: Communication Darmstadtium 3.08 neon CHEN Jia and CHEN Fan: “Research on Technological Communication from the Perspective of Social Integration” Alexis Elder: “Why not just step away from the screen and talk face to face? The Moral Import of Communication Medium Choice” Andreas Henze: “This Machine is Part of Me like your Voice: On Interacting with Technologically Generated Voices” Chair: Nicola Liberati

6 Submitted papers: Affordances Darmstadtium 3.02 hassium Wade Robison: “Engineers’ Oath: Do no unnecessary harm!” Dylan Wittkower: “Disaffordances and Dysaffordances in Code” Tiago Carvalho: “Gardening in the Anthropocene: Going Nowhere, Growing Somewhere” Chair: Astrid Schwarz

7 Submitted papers: Designed Spaces Darmstadtium 3.03 germanium Irene Breuer: “The grammar of architectural design: Graphs, grids and diagrams” Emre Demirel: “The Haptic and Visual Considerations of the Public Spaces: Otto Herbert Hajek’s Proposal for Hergelen Square in Ankara” Jaana Parviainen and Seija Ridell: “Choreographies of Smart Urban Power” Chair:

8 Submitted papers: Metaphysics Darmstadtium 3.09 helium Adam Toon: “Things and Concepts” Hajo Greif: “The Reality of Augmented Reality” Ingvar Tjostheim: “The Feeling of Being There – Is it an Illusion?” Chair:

9 Submitted papers: Normative Developments Darmstadtium 3.05 radon Peeter Müürsepp: “Aim-oriented Approach to Technology” Cristiano C. Cruz: “Brazilian Popular Engineering and the Responsiveness of Technical Design to Social Values” Chair:

10 Track: Technology Translations: Philosophical Interactions across Disciplines and Application Domains (2) Darmstadtium 3.07 argon Scott Luan: “Grammars of Creation” Anna Dot: “Translationscapes and its Multidimensional Articulations – An Approach to Borders from the Project ‘On Translation’, by Antoni Muntadas” Silvia Mollicchi: “Mediums as Languages: A Transcendental Naturalist Approach to the Relation between Mediums and Representation” Chair: Albrecht Fritzsche

Friday 10:30 to 11:00 am

coffee break


Friday 11:00 am to 12:30 pm

2 Panel: Conceptualising Aero-Material Assemblages of the Vertical Gaze (2) Darmstadtium 2.04 titanium Ole B. Jensen: “Drones in the Volumetric City” Francisco Klauser: “Spatialities of the Drone: Conceptualising the encounter of big data and the air” Maximilian Jablonowski: “The Transformation of the ‘Vertical Public’: The Aesthetic, Political, and Epistemological Practices of Consumer Drones” Chair: Jutta Weber

3 Submitted papers: Avatars Darmstadtium 3.02 hassium DONG Xiaoju: “Avatar: Multiple or Disrupted? A research on the identity in the age of Internet” Fabian Offert: “Exhibiting Computing – Computational Agency as a Speculative Principle for Exhibition Design” Dylan Wittkower: “Teh Intarwebs: Maed of Cats, Akshully” Chair:

4 Submitted papers: Experimental Ethics Darmstadtium 3.08 neon Manja Unger-Büttner: “Design as Experimental Ethics – Moral Skepticism and the Value of Exploration.” Philip Brey: “Structural Ethics: Ontological Foundations and Implications for the Ethics of Technology” Chair: Abubakr Khan

5 Submitted papers: Technological Politics Darmstadtium 3.05 radon Björn Sjöstrand: “Technology, Ethics, and Politics in a Globalized World” HUNG Ching: “Technologizing Democracy toward Democratizing Technology” Lumeng Jia: “From assessment to design – What is really needed in Technology Accompaniment?” Chair: Peeter Müürsepp

6 Submitted papers: Technology in Science Darmstadtium 3.09 helium Mark Young: “Making or Using?: The Geiger Counter and Early Research” Pablo Aceves Cano: “Technological Normativity – Its Epistemic Role in Science and Mathematics” Chair:

7 Track: Pedagogical Pragmatics – Teaching Ethics and Philosophy of Technology (2) Darmstadtium 2.03 vanadium Edward Hamilton: “ of ‘Openness’ and the Politics of Educational Technology” Cristiano C. Cruz: “How to Form Engineers Capable of Social Technology and Popular Engineering: Some Brazilian Initiatives” Daniel Jenkins: “Moral Philosophy and Automation: How Should Engineers Decide How Machines Should Decide?” Chair:

8 Track: Technology and the City (3) – Exploring the City Darmstadtium 3.06 xenon Diane Michelfelder: “Urban Landscapes and the Techno-Animal Condition” Remmon Barbaza: “Metro Manila: A City Without Syntax?” El Putnam: “Locative Reverb: Artistic Practice, Digital Technology, and the Grammatization of the City” Chair: Taylor Stone


9 Track: Postphenomenological Research (2) – Technologized Space Darmstadtium 3.03 germanium Robert Rosenberger: “Multistable Public Space: From Park Benches to Bathroom Stalls” Søren Riis: “What Is It Like To Be A House? Towards An Ontology of the Internet of Things” Olya Kudina: “Deconstructing a Panopticon Tower of the Digital Age: A Phenomenological Account of Google Glass” Peter-Paul Verbeek: “The Politics of Technological Mediation” Moderator: Don Ihde

Friday 12:30 to 2:00 pm

lunch break

Friday 2:00 to 3:30 pm

1 Submitted papers: Deliberative Spaces Darmstadtium 3.05 radon Yana Boeva: “Process over Form: Making Design Visible with Participatory Making” Hatice Server Kesdi and Serkan Güneş: “The Impact of Ontological Shift From User to Participator in Design Process on Construction of Socio-Technical Systems” Maria Rosaria Stufano Melone and Stefano Borgo: “Ontological Analysis for Shared Urban Planning Understanding” Chair:

2 Submitted papers: Contestations Darmstadtium 3.08 neon Sabine Thuermel: “Online Dispute Resolution based on Smart Contracts: An Example of Disintermediation and Disruption of a Socio-technical System” Paul Thompson: “Sociotechnical Imaginaries for Future Food” Inmaculada de Melo-Martin: “Valuing Reprogenetic Technologies” Chair:

3 Submitted papers: Instruments Darmstadtium 3.07 argon Stanley Kranc: “What Could Instrumental Perception Be?” Maxence Gaillard: “The individuation of a scientific instrument” Chair: Mieke Boon

4 Submitted papers: Privacy Darmstadtium 3.09 helium Wulf Loh: "Publicity and Privacy in the Digital Age" Tobias Matzner: “The interdependence of subjectivity and things: new reflections on the value of privacy” WANG Hao: “Can Privacy Escape Powers? Towards a Theory of Privacy Through the Lens of Play” Chair:

5 Submitted papers: Attractive Objects Darmstadtium 2.04 titanium Levi Checketts: “The Sacrality of Things” Sam Edens: “Constructing a Public on the Plane of Objects: A Case Study of Fairphone” Maria Kapsali: “Novel Affordances: Technological Artefacts in Modern Postural Yoga Practice” Chair:


6 Submitted papers: Technology and Language Darmstadtium 2.03 vanadium Mark Coeckelbergh and Michael Funk: “Transcendental Grammar and Technical Practice in Wittgenstein’s Late Writings: Reading Wittgenstein as a Philosopher of Technology” Joseph Pitt: “Updating the Language of Philosophy” Mark Cutler: “What Do Things Want?” Chair:

7 Track: Artefacts, Design Practices, Knowledge Productions (2) – Social Development of Artifacts Darmstadtium 3.02 hassium Geetanjali Date and Sanjay Chandrasekharan: “What role do formal structures play in the design process?” WANG Wei Min, Konrad Exner, Maurice Preidel, Julius Jenek, Sabine Ammon and Rainer Stark: “Evaluation of Knowledge-Related Phenomena in Milestone-Driven Product Development Processes – An Explorative Case Study on Student Projects” Chair: Sjoerd Zwart

8 Track: Technology Translations: Philosophical Interactions across Disciplines and Application Domains (3) Darmstadtium 3.07 argon Barbara Ziegler: “Unmanned aerial systems in armed conflict: Synergies between historical and philosophical perspectives” Eva-Maria Raffetseder: “Die Episteme von Prozessmanagementsystemen am Beispiel von Salesforce” [The Episteme of Process Management Systems, Using the Example of Salesforce] Terry Bristol: “Design Communication and Innovation Policy” Chair: Geoff Crocker

9 Track: Postphenomenological Research (3) – Augmented Realities Darmstadtium 3.03 germanium Galit Wellner: “The Grammar of Augmented Reality” Richard Lewis: “Augmenting Museums with the Self(ie): A Postphenomenological Inquiry” Nicola Liberati: “The Emperor’s New Augmented Clothes: Digital Objects as Part of the Every Day” Moderator: Robert Rosenberger

Friday 3:30 to 4:15 pm

coffee break and change of venue

Friday 4:15 to 5:15 pm

Composition Lecture 5 – Dagmar Schäfer: “The Nominal Group – Things in Ming-Chinese Local Gazetteers (Difangzhi)” Maschinenhaus 122 Chair: Zaiqing Fang

Friday 5:15 to 6:00 pm

Plenary Session: Darmstadt for Philosophers of Technology Maschinenhaus 122 – followed by optional small-group explorations


Saturday 9:00 to 11:00 am

1 Round Table: Book Discussion “Philosophy of Technology after the Empirical Turn: Perspectives on Research in the Philosophy of Technology in the Next Decade” Classroom 109 Shannon Vallor, Mark Coeckelbergh, Peter-Paul Verbeek, Albrecht Fritzsche, Astrid Schwarz, Martin Gessmann Chairs: Maarten Franssen, Peter Kroes, Pieter Vermaas

2 Panel: AI and the Expert – Expert Knowledge in the Age of Artificial Intelligence Classrooms 223 Hidekazu Kanemitsu: “Rethinking the Dreyfus model and an Examination of Current Issues Regarding Expert Knowledge” Toshihiro Suzuki: “On the expertise of skilled workers in factories” Tetsuya Kono: “What is work for human beings after AI dominance?” Minao Kukita: “Buridan’s Asimo: Difficulty in mechanisation of moral competence” Chair:

3 Submitted papers: Artificial Intelligence Maschinenhaus 22 Janina Loh: “Responsibility and Robot Ethics – A Critical Overview and the Concept of Responsibility Networks” Nobutsugu Kanzaki: “Possibility of co-design in development process of AI and robot technology” Reina Saijo: “Human’s Vulnerability to AI technology for Decision Support System and a Possibility of Some Autonomous and Authentic Way of Living” Chair:

4 Submitted papers: Classrooms 113 Dennis Weiss: “How Ought We to Treat Our Televisions?” Alberto Romele, Paolo Furia and Marta Severo: “Digital Hermeneutics: Mapping the Debate and Paving the Way for New Perspectives” Michael Funk: “The Grammar of Information – Methodological Constructivism/Culturalism and the Philosophy of Technology” Chair:

5 Submitted papers: Phenomenological Perspectives Maschinenhaus 24 Lucien von Schomberg and Vincent Blok: “Innovation and the Character of our Age” Ashwin Jayanti: “A Revised Phenomenological Hermeneutics for Understanding the Grammar of Things” Jonathan Simon: “The medical drug as technological object” ZHOU Liyun: “Rethinking the boundary between science and technology from the Body – A Review of Donna Haraway’s Subject of ” Chair:

6 Track: Questioning the Grammar of ‘Critical’ Infrastructures (2) Classrooms 116 José Luís Garcia, Helena Mateus Jerónimo and Pedro Xavier Mendonça: “Philosophy and disasters: the outbreak of Legionnaires’ disease in Portugal” Anshika Suri: “Women’s Everyday Contestations and Negotiations with Technology in Cities of East Africa” Chair: Anais De Keijser


7 Track: Artefacts, Design Practices, Knowledge Productions (3) – Action in Engineering Knowledge Classrooms 100 Sjoerd Zwart: “Prescriptive Knowledge: The Grammar of Actions and Things” Hans Tromp: “Artifact Design Reasoning and Intuition, explored through philosophy of action and cognition” Judith Simon: “Apprehending Big Data: Extended, Android or Socio-Technical Epistemology?” Klaus Kornwachs: “Modalities in describing technological actions: To do, to prevent, to omit” Chair: Rafaela Hillerbrand

8 Track: Technology and the City (4) – What makes a smart city Classrooms 109 Giovanni Frigo: “‘Green Buildings’ in the City? A Reflection about Technology, Sustainability Indexes, and the Ethics of Energy” Brandt Dainow: “Philosophical Framework for Smart City Analysis” WANG Qian and YU Xue: “Technology and the City: From the Perspective of Organicism” Jathan Sadowski: “Parameters of Possibility: Envisioning and Constructing the Smart Urban Future” Chair: Remmon Barbaza

9 Track: Technology Translations: Philosophical Interactions across Disciplines and Application Domains (4) Classrooms 123 Nils-Frederic Wagner: “Doing away with the Agential Bias: Agency and Patiency in Health Monitoring Applications” Ashley Shew: “Walk This Way” Samantha Fried: “Picking a Peck of Pickled Pixels: Thinking Through the Pixel Paradigm in Terrestrial Remote Sensing” Juan Almarza Anwandter: “On the ‘Pathos of closeness’: Technology and experience in contemporary architectural practices” Chair: Ashley Shew

10 Track: Postphenomenological Research (4) – Mediation/the Mediated Classrooms 23 Jonne Hoek: “Technological Mediation of Limits and the Limits of Technological Mediation” Bas de Boer: “Vico’s Verum-Factum Principle and Contemporary Technoscience” Moderator: Robert Rosenberger

Saturday 11:00 to 11:45 am

coffee break


Saturday 11:45 am to 2:00 pm

SPT Plenary and Awards Session Maschinenhaus 122

11:45 am Award announcements
SPT DISTINGUISHED SCHOLAR AWARD: Don Ihde (Professor Emeritus, Stony Brook University)
SPT EARLY CAREER AWARD (Sponsored by Springer/Philosophy and Technology): Vlad Niculescu-Dincă (Erasmus University Rotterdam): “Towards a Sedimentology of Information Infrastructures: A Geological Approach for Understanding the City”
SPT GRADUATE STUDENT PAPER AWARDS:
Andreas Beinsteiner (Leopold-Franzens-Universität, Austria): “Material Irritations in the Grammar of Things: Post-onto-theological Considerations”
Levi Checketts (Graduate Theological Union, United States): “The Sacrality of Things”
Cristiano Cordeiro Cruz (University of São Paulo, Brazil): “Brazilian Popular Engineering and the Responsiveness of Technical Design to Social Values”
Taylor Stone (Delft University of Technology, Netherlands): “The Morality of Darkness: Urban Nights, Light Pollution, and Evolving Values”
Yu Xue (Dalian University of Technology, People’s Republic of China): “The Moral Agency and Moral Responsibility of Robots”

12:10 pm Shannon Vallor: Presidential Address
1:00 pm Don Ihde: SPT Distinguished Scholar Lecture
1:50 pm Closing remarks

End of conference

Followed by optional excursion to Mainz (including tour of the Gutenberg Museum)


Abstracts


Panels

Panel: Acoustic Technics

No description

Panel: AI and Expert: Expert Knowledge in the Age of Artificial Intelligence

Panel description We live in the age of Artificial Intelligence. AI is all over the news: it drives our cars, cleans our homes, answers quiz questions on TV, and beats chess champions. AI is changing our daily lives and is attracting increasing social attention. Not only universities but also companies such as Apple, Google, and IBM engage in AI research and its application. We might say that AI is playing a leading role in the grammar of things. Human beings are no match for AI when it comes to the accumulation of knowledge and the speed with which it is processed. This situation leads to questions: What is the difference between human beings and AI? Is it possible for AI to acquire all human skills? What is expert knowledge? How does AI change human beings? In this panel, we will deal with these issues.

Contribution 1: Hidekazu Kanemitsu (Kanazawa Institute of Technology, Japan): Rethinking the Dreyfus model and an examination of current issues regarding expert knowledge The Dreyfus model of skill acquisition is well known with regard to expert knowledge. It holds that there are five stages in skill acquisition: 1. Novice, 2. Advanced beginner, 3. Competent, 4. Proficient, and 5. Expert. According to this model, the expert not only sees what needs to be achieved but also sees immediately how to achieve this goal. Hubert Dreyfus argued that human knowledge, especially expertise, depends on unconscious instincts and could never be captured in formal rules, so AI is not able to reach the stage of the expert. I will rethink this model, including its criticism, and bring up issues concerning expert knowledge that we should consider in the age of AI. My presentation will serve as an introduction to this panel.

Contribution 2: Toshihiro Suzuki (Sophia University, Japan): On the expertise of skilled workers in factories In my presentation, I will present a case study concerning the expertise of skilled workers in factory work. I will analyze the working practices of skilled workers in the traditional Japanese imono (foundry) industry and show what the essential point of their expertise is. First, I will show how skilled workers work in the factory and how they answer questions about what they deal with. Then, I will show how we can interpret their special ways of answering these questions using the phenomenological framework of the recognition of objects. Skilled workers talk about imono (molten metal) just as they talk about everyday things. The essential point of their expertise is that they can deal with imono as if they were dealing with everyday objects. I will then turn to their problem-solving activity, in which their special expertise plays a significant role. They sometimes have to produce a new working process, and when they do, they have to modify the details of the process by trial and error. In this trial and error, they must modify not only numerical values (e.g. percentages of chemical materials, temperature), which can be modified by machines, but also the way they deal with the materials. Such modifications can only be made by experienced skilled workers.

Contribution 3: Tetsuya Kono (Rikkyo University, Japan): What is work for human beings after AI dominance? Stephen Hawking warned that AI machines could wipe out humanity because they are too clever, and that we should “shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence” (The Independent, 8 Oct 2015). Even if you are skeptical about the realization of strong AI that performs general intelligent action and do not believe in the advent of a technological singularity, you might still feel that we are up against a real problem: the advance of computerization is increasing unemployment. C. B. Frey and M. A. Osborne estimated in their 2013 paper that 47 per cent of total US employment is at risk from computerization. We need to take this prediction seriously. If computers replace human labor in more than seven hundred types of occupations, as Frey and Osborne predicted, this will provoke not only the problem of unemployment but also radical social change. Do we have to become more creative in our work so that computers do not deprive us of our jobs? It would be very difficult to ask all human beings to become more creative in business. In my talk I want to consider what work would be, and should be, after computerization.

Contribution 4: Minao Kukita (Nagoya University, Japan): Buridan’s Asimo: Difficulty in mechanisation of moral competence Today we are surrounded by devices that autonomously collect, process and communicate information, and take actions with little or no human supervision or control. The number and variety of such devices are increasing at a surprising rate. Some researchers think that, in order to prevent or mitigate unpredictable damage caused by complex interactions among such devices and human actors, it is desirable that machines should know and respect human values. Thus, a field of research called ‘machine ethics’ or ‘artificial morality’ has emerged, in which researchers from AI, robotics, philosophy, ethics and other fields are trying to develop morally competent artificial systems. Many researchers in this field seem to think that the challenge is how to find and implement appropriate rules or norms for moral judgment and moral action. In other words, attempts in machine ethics are carried out in line with traditional symbolic AI and expert systems. A natural question is whether moral competence can be regarded as expert knowledge that can be simulated by something like an expert system. This presentation deals with this question and tries to answer it in the negative. We argue that moral competence is in many respects similar to expert knowledge, but includes much more than what an expert system can simulate. Most importantly, a moral person should be able to cope with situations in which explicit norms cannot tell us what the right thing to do is.

Panel: Assembling a Socio-Technological Future: The Challenge of Russia

Panel Description The panel builds on the conceptual framework of sociotechnical imaginaries (Sh. Jasanoff), meaning “imagined forms of social life and social order that center on the development and fulfillment of innovative scientific and technological projects.” This approach emphasizes that not only engineers and scientists possess the capacity for imagination, but rather that imagination is a collective cultural practice of reflection on the desirable and attainable social future of a political community. It also underscores the idea of co-construction of technology and society and highlights that the nation is not a given category, but a concept that can be imagined or re-imagined and performed in processes of technoscientific policymaking and development. The four papers focus on a national case and analyze how the future is and has been imagined in Russia, what role is attributed to technology in designing and achieving it, and how citizens and publics interpret their contribution to the production, promotion and interpretation of innovations.

Contribution 1: I. Sidorchuk (Associate Professor, Peter the Great St. Petersburg Polytechnic University): "The Wonders of Technology in Soviet Political Propaganda." With the October Revolution came the implementation of projects of unprecedented scale and audacity that concerned not only the socio-economic and political sphere but also cultural reality. An important dimension of all these projects was technology. For the new leadership of the country, the development of science and technology was the cornerstone of the design of a communist utopia. The desire to disseminate scientific and technological knowledge among the population as quickly as possible led to the acute problem of society's attitude towards technology: many people, especially the peasants, were not prepared for the invasion of innovations into their daily life. Despite the totalitarian nature of Bolshevism, the authorities recognized the need for some degree of public consent and understood the importance of the acceptance of the new national technology policy among the masses. The paper is concerned with the methods of propaganda used by the Bolsheviks in the 1920s for the promotion, popularization and socialization of technology.

Contribution 2: A. Mazurenko (Associate Professor, Peter the Great St. Petersburg Polytechnic University), N. Nikiforova (Associate Professor, Peter the Great St. Petersburg Polytechnic University): "The Russian National Technology Initiative as a Site for Sociotechnical Imaginaries." Russia considers itself to be on the threshold of a new technological revolution and to need a new economy based on a novel technological platform. To foster this process, the National Technology Initiative (NTI) was launched in 20??. It is concerned with identifying future goals and key technologies for development, and more broadly with linking overarching national challenges to priorities and mechanisms of implementation. The initiative is described by its authors as a “responsible standpoint of citizens” that requires collective involvement in reflection about the future and the creative transformation of cultural specificities and traditional attitudes into advantages or opportunities (turning “bugs into features”). What sets the NTI apart from other national initiatives is its grounding in a specific philosophical tradition.

Contribution 3: D. Popov (Associate Professor, Peter the Great St. Petersburg Polytechnic University): "Political Modernization and Its Technological Limits." Global integration and the increase in the number of ICT users present a new dimension of the problems of political development and modernization in developed and developing countries. Technological progress not only expands the possibilities for states to influence the geopolitical sphere, but also serves as the basis for efficient social policy and sustainable social institutions. Political elites are no longer able to determine modernization policy without considering the sociotechnical imaginaries produced by various social and interest groups. New technology users fashion themselves as journalists, experts, sponsors and creators of new social and political realities. When they act as representatives of radical movements, they do not rely on traditional weapons but become active participants in information wars and consumers of new technologies. The archaic and the modern interweave in this new reality, raising the question of the extent to which technological change and sociotechnical imaginaries influence political modernization.

Contribution 4: Yu. Obukhova (Associate Professor, Peter the Great St. Petersburg Polytechnic University): "The Role of Health in Sociotechnical Imaginaries." In modern society, health has become an economic norm. Healthcare is a scientifically and technologically supported system, intertwined with biopolitical practices such as vaccination. These practices allow us to extend life expectancy and to construct an image of a future free from many diseases. Such an industrialized concept of medicine has privatized interpretations of disease, health and normality. This view misses the variety of perceptions of illness and health constructed by the patient and connected to his or her way of life and community practices. Nowadays in Russia there are various groups opposing institutional conventional medicine and refusing treatment: some avoid medical interventions in childbirth, while others deny the existence of AIDS or cancer. The presentation will explore health as a component of sociotechnical imaginaries and scrutinize the motivations of people who resist official medicine. It will investigate alternative practices from the point of view of sociotechnical imaginaries that involve alternative social strategies and life scenarios.

Panel: Conceptualising aero-material assemblages of the vertical gaze

Panel Description A growing number of scholars have in recent years explored the logics, functioning and implications of the ‘aerial gaze’ conveyed, for example, by drones and satellites. Research has tended to emphasise the socio-technical mediations that constitute, and are constituted by, the complex ‘vertical assemblages’ emerging (Crampton, 2015), and the wider power issues and implications thereof. In more practical terms, scholars have started to think about questions of design and urban materialities, connected with the novel aero-mobile dimensions of data production (e.g. areal zoning and flight corridors between buildings and within and between cities). The proposed session wants to add further theoretical debate to these literatures. It sets out to open up reflection on the possible combination of ‘volumetric thinking’, mobility theory and theories of materialities, to make sense of the novel aero-visual, and also material, assemblages of the vertical gaze, and to foster more detailed conceptual attention to the specific ‘aereality’ (Adey, 2010) within which techniques of vision and visualisation from above operate, and which they contribute to perform.

The proposed session thus brings together contributions that develop conceptual vocabularies and theoretical perspectives for grasping the complex aero-visual assemblages produced in the contemporary encounter between Big Data and the air. We also aim to use the gaze-from-above problematic as a prism for addressing a wider set of philosophical and theoretical questions: What difference does it make, conceptually speaking, when socio-technical systems of vision and visualisation operate in, from and through the air? How can this help in theorising the relation between the material and the semiotic dimensions of airspace and of the encounter between the air and the ground? What grammar are we to develop to grasp conceptually the complex relational configurations of material and immaterial realms, practices, affects and imaginations, that co-produce and result from the techno-mediated territorialisation of the air in the present-day world? Moreover, how does the emergence of new aero-visual technologies challenge the (often two-dimensional) grammar deployed for understanding cities and space more generally?

Contribution 1: Jutta Weber (Univ. of Paderborn): Working with Uncertainty. Understanding Contemporary Sensoric Control. Today's security technologies are driven by an ideal of omnipotent sensoric control based on real-time tracking and targeting from a distance. Imagination, tinkering (with the unpredictable), and the mining of possibilities play a central role in the management of these dynamic systems, while the suspect/enemy is turned into a possibilistic system (Amoore 2015) which can be found, fixed, (finished), exploited and analysed (F3EA) with the help of information superiority. At the same time, surveillance technologies configure a “regime of technologically enhanced identification techniques” (Ruppert 2009) and build on data collections of huge populations “not only to monitor certain targets in real time but also to be able to retrace any individual’s itinerary of relations” (Chamayou 2015). In my paper I will discuss the epistemological and ontological underpinnings of technologies of sensoric control from above or below – possible differences as well as similarities that enable their increasing amalgamation. In doing so, I will focus especially on the role of uncertainty, possibility and the unpredictable.

Contribution 2: Bronislaw Szerszynski (Lancaster University): Filling the volume: towards a grammar of aerial motion How can we ‘fill out’ our thought about the vertical volume above the terrestrial surface – to understand it not as a passive empty space to be filled, but as extended matter that is full, structured and self-constituting? In this paper I focus on the material preconditions for aero-visual assemblages and the aerial gaze: the material powers of solid entities moving within the airy medium, but also of the air’s own motion within itself. I will firstly explain how I take seriously the ‘grammatical’ framing of the general conference call, approaching both technology and human language as ‘earthly powers’ (Szerszynski forthcoming) – as phenomena produced by a far-from-equilibrium planet self-organising over diverse timescales. In particular I explore ways in which the structure of technology exhibits hierarchical composition (Simondon) and arbitrariness between internal ‘working’ and external ‘function’ (Sigaut), echoing features of language. However, as I argued in a recent paper, many of these features of technology are also prefigured in the self-organisation of abiotic, fluid motion in a far-from-equilibrium Earth (http://dx.doi.org/10.1080/17450101.2016.1211828). Secondly I will explore how we can understand the lower atmosphere through a ‘continuous-matter ontology’ in which difference (e.g. of pressure, temperature, velocity) is immanent, gradual and generative. I will explore how we can use forces such as shear, viscosity and inertia, and internal structures such as streamlines, vortices, boundary layers and inversion layers, to understand the internal ‘shape’ and self-shaping of the lower atmosphere over diverse timescales, and thus ‘fill out’ our understanding of air as an active elemental medium. Thirdly, I will sketch out how we can extend this sort of approach to mobile objects moving in our now actively self-constituting aerial medium. Influenced by Jacques Lafitte’s (1932) ‘mechanology’, which categorised machines according to their materio-energetic relationship with their environment, I show how we can populate the air with families of moving entities that straddle familiar distinctions between the abiotic, biotic and technological: things that fall, float, fly, sail or soar, as their extensive and intensive properties allow them to create and occupy different mobility niches in an already active medium.

Contribution 3: Peter Novitzky (University of Twente), Peter-Paul Verbeek (University of Twente): Domestic Drones from the Perspective of the Theory of Technological Mediation The ever-increasing penetration of drone technologies from the military domain into civil, industrial, and commercial domains poses significant challenges and opportunities for individuals, social structures (e.g. labour, employment, etc.), and societies. Our presentation will focus on the changes that drone technologies may introduce into the relations within these groups. Our analysis will employ the theory of technological mediation of the post-phenomenological tradition. Finally, we will discuss the findings and how they contribute to the ethical discourse, including practical recommendations that may be made regarding the value-sensitive design of drone technologies.

Contribution 4: Ole B. Jensen (Aalborg University, Denmark): Drones in the Volumetric City This paper examines, very critically, the idea of planar and horizontal models of the city. Throughout the Modern period of city design and planning, sprawl and urban growth have been modelled in a horizontal model, with human engineering capacities and aerial gazes as the underpinning rationalities. By exploring new theories of objects, artefacts, things, and so-called ‘volumetric thinking’, the paper uses the case of drones to exemplify and speculate on the inherent three-dimensional and spatial (volumetric) dimension of the network city. The paper explores a thinking of the network city in volumes and verticality rather than in plan and horizontality, and connects in particular to the session question ‘how does the emergence of new aero-visual technologies challenge the (often two-dimensional) grammar deployed for understanding cities and space more generally?’

Contribution 5: Francisco Klauser (Institute of Geography, Neuchâtel University, Switzerland): Spatialities of the Drone: Conceptualising the encounter of big data and the air Referring to the example of camera-fitted civil drones, the paper aims to conceptualise the contemporary encounter of big data and the air. The aim hereby is to develop a composite vocabulary adequate for capturing the multi-dimensionality of the spaces (of the air and of the ground) within, through and on which drones act, and which drones thus contribute to perform. This reflection pursues a more longstanding reflection on the power issues and the aero-spatial functioning, logics and implications of drones, bound up with a still wider theoretical project of developing a resolutely ‘space-sensitive’ approach to the problematics of big data and surveillance. More specifically, the paper is structured into three main parts. Firstly, referring to Kandinsky’s and Klee’s theories of abstract painting, the paper focuses on the terminological register of points, lines and planes for studying the spatialities of the drone gaze. Secondly, the paper draws upon Foucault’s distinction between apparatuses of ‘discipline’ and ‘security’ for carving out a set of contrapuntal pairs of spatial logics of power, relating to fixity, enclosure and internal organization, as opposed to flexibility, openness and circulations, which offer a second terminology for exploring the ways in which drones are bound up with space. Thirdly, the paper adds a volumetric dimension to this discussion, advancing a spherical/atmospherical approach and terminology for the drone problematic in the sense of Peter Sloterdijk and Gernot Böhme. This aims to capture – from a simultaneously relational and physical viewpoint – the inherent voluminosity of the spaces within which drones operate, and which they contribute to perform.

Contribution 6: Maximilian Jablonowski (University of Zurich): The Transformation of the ‘Vertical Public’: The Aesthetic, Political, and Epistemological Practices of Consumer Drones Camera-equipped consumer drones are popular. They are, both technologically and culturally, situated at the blurred edge between unmanned aircraft systems and model airplanes. Due to their specific capacities of motility and mediality (McCosker 2015), drones are deeply entangled in heterogeneous spatial practices of the vertical. Hobbyists benefit from the drones’ ‘aesthetics of verticality’ (Jablonowski 2014; 2017) to get new perspectives on rural or urban landscapes and action sports; protesters increasingly use consumer drones at demonstrations in urban spaces as a form of ‘counter-surveillance’ (Schmidt 2015; Waghorn 2016); and news outlets as well as citizen journalists explore the possibilities of ‘drone journalism’ (Bartzen Culver 2014; Tremayne & Clark 2014). Some enthusiastically celebrate these practices as a ‘democratization’ of the view from above, while others raise solid concerns regarding airspace safety and privacy. In any case, private uses of consumer drones were never a private matter, but challenge perceptions, spaces, and regimes of the public. Drawing on and extending Lisa Parks’ (2013) notion of ‘vertical publics’, I will analyze how the heterogeneous aesthetic, political, and epistemological practices of consumer drone use participate in the production and transformation of the vertical as a contested public space where different semiotic-material actors and practices intersect and sometimes, both in the figurative and the literal sense, even collide.

Panel: Critical perspectives on the role of mathematics in data-science

Panel Description The critical evaluation of data-science (Floridi & Taddeo 2016) and its place in the so-called data-revolution is primarily focused on the notions of data and code. For the former, this is made very explicit in the tenet of critical data-studies that “data are never just data” (Kitchin 2014) as well as in repudiations of the “myth of raw data” (Gitelman 2013) in media studies and science and technology studies. As for code, this usually comes to the fore when the ethical or epistemic neutrality of design-decisions (often related to the development of autonomous algorithmic data-processes) is questioned, or when we try to clarify the responsibility for such design-decisions (Barocas et al. 2013). Even though algorithms and code, and by extension also large parts of the data-processes that characterise the practice of data-science, are mathematical objects, and indeed derive much of their epistemic status and authority from their mathematical foundations (in statistics, but presumably also from the theory of computation), the role and status of mathematics in the practice and public understanding of data-science remains relatively underexplored, especially when integrated into a broader critical assessment of the societal impact of data-science.

The aim of this panel discussion is to bring together different perspectives on the epistemic and societal role of mathematics in its relation to data-science and the data-revolution. It is based on the assumption that only a realistic picture of mathematics, as emphasised within the philosophy of mathematical practices, can reliably inform such an inquiry. The latter presupposes a better understanding of the role of applied mathematics in the , an appreciation of the diverse ways in which statistical theory can inform the development of data-processes (Gelman & Hennig 2015), and a critical outlook on the societal status of mathematics. Such a realistic picture of mathematics serves two purposes. It should inform an analysis of what it means to “trust in numbers” (Rieder & Simon 2016) or help us identify clear cases of “mathwashing” (Benenson 2016), but it should just as much clarify the critical role of mathematics and explain how certain epistemic virtues of mathematics can play a decisive role in exposing epistemic failures and poor practices in data-science. By bringing into focus the fact that the role of mathematics in data-science and in our understanding of data-science has critical facets (exposing poor practices, but also using mathematical proofs as the epistemic standard required to show that certain design-standards are met (Kroll et al.)), non-critical facets (exemplified in references to mechanical objectivity and calculative reason, see Daston (2004) or Christin (2016)), and even anti-critical facets (for instance when mathematics and mathematical literacy become gate-keepers), a more balanced understanding of the implicit and explicit epistemic standards that are at play within data-science comes within reach. This requires us to confront such issues as epistemic trust, the possibility of critique, and the role of secure epistemic foundations, and it invites us to question the ambivalent role of mathematics in our understanding of data-science as an epistemic practice, and of data-processes and products as rational outcomes and processes.

Participants Karen François (Doctoral School of Humanities, Vrije Universiteit Brussel), Christian Hennig (UCL, London), Johannes Lenhard (Institute of Interdisciplinary Studies, University of Bielefeld), Jean Paul Van Bendegem (Centre for Logic and Philosophy of Science, Vrije Universiteit Brussel)

Panel: Ethics and Politics Within Cloud Computing and other Computing Environments

Panel Description Decades-long conversations among philosophers, social scientists, computer professionals, and everyday citizens about the social and political dimensions of computer networking have recently begun to focus upon some vexing, even troubling, developments on the Net. The rosy expectations of early 21st-century advocates of “the wealth of networks” and its beneficial possibilities for social communication, personal creativity and direct democracy are now overshadowed by far less favorable prospects. Beginning with a set of revelations about the extent of government and corporate surveillance of computer communications, publicized by Edward Snowden and others, the conversation has spread to a range of discourse pathologies that have surfaced within “social networks,” Internet “news” sites, political chat rooms, and the like. Widespread expressions of bigotry, bullying, xenophobia, “fake news” and “post-truth” propaganda now loom as serious problems for political society. The panel will explore the contributions of philosophers and political theorists to the surprisingly troublesome issues that discussions of ethics and politics in cloud computing and other computing environments must consider. How can one engage in reasonable discussions in the online realm when long-standing conventions about ways to test and correct important claims are summarily rejected by those with rigid, often toxic narratives to promote? What might happen if the “merchants of doubt,” as Naomi Oreskes calls them, succeed in dominating key debates about policy, legislation and problem-solving within increasingly influential Internet echo chambers such as Facebook, Twitter and Instagram? How can those schooled in the norms of responsible democratic deliberation and debate – online and elsewhere – deal with voices whose purpose is to wreck such norms while expanding their own power? Can widely celebrated qualities of free speech on the Internet be realized while strengthening practices of critical self-correction upon which democracy depends?


Contribution 1: Bustamante, Javier (Philosophy, Universidad Complutense de Madrid): The Role of Social Networks and Cloud Computing in the Spread of Post-Truth Beliefs and Developments in Fundamentalist, Populist and Radical Discourses My contribution aims to explore four theses: 1) Social networks and cloud computing are contemporary breeding grounds for processes of religious and political identity building. 2) A political use of some digital means fosters and amplifies both a policy of violence and a process of radicalization of individuals and identity-based groups. 3) The expansion of post-truth beliefs, violent practices related to radical ideologies, as well as populist discourses, is closely related to topological characteristics of social networks and cloud computing systems. 4) In order to determine the extent and nature of the aforementioned processes, it is necessary to understand the dynamics of disseminating beliefs.

Contribution 2: Barrientos, Jose (Philosophy, Universidad De Sevilla): E-Wisdom and E-Experience: Is it possible to develop wisdom and life experience in computing environments? Michel Foucault, Pierre Hadot, John Sellars, John Irvim, Valerie Tiberius, Howard Nusbaum and Robert Sternberg have tried to rethink wisdom in present times. They have gone back to Stoicism, Aristotle or other philosophers to rediscover life itself as a way of life. They have revived ancient techniques such as praemeditatio malorum, philosophical retreats, philosophical readings and so forth to encourage people to be wiser. However, they have usually done this offline. How can the promise of this approach be realized on the Internet? Is it possible to reproduce life experiences in social networks? Is it possible to use Facebook, WhatsApp or Instagram to offer philosophical readings? Is the Internet framework a burden or an advantage for designing activities of e-wisdom? Could it be possible to use Zoom to convene philosophical meetings in roughly the same way that the Stoics did at the Stoa?

Contribution 3: Suñe Llinas, Emilio (Complutense University of Madrid) Due to the global expansion of IT systems, the Information Society has been playing a major role in most aspects of social life, from the economy to government. Consequently, this situation permeates all aspects of Law, because the meaning of Law has to do with resolving social conflicts of any kind. Today we are witnessing a new Revolution of Productivity, not limited to manufacturing. It goes beyond the realm of an Industrial Revolution. We name it the 4th Productivity Revolution. A first generation of Computer Law was aimed at addressing specific questions related to personal data protection, immaterial property rights in digital environments, cybercrime, etc. A second generation of Computer Law will have to become a legal framework for the Internet of Things; in other words, a Computer Law 2.0. Far from the specialization of its first generation, Computer Law 2.0 involves almost all matters of Law, prompted by the omnipresence of ITs in the social realm, including economic, political, and managerial aspects. Due to the expansion of Computer Law into the new meta-space called cyberspace, it is also time to consider the need for a Cyberspace Constitution and a Bill of Rights of Cyberspace.

Panel: Postphenomenological Research 1: Theorizing Bodies

Panel Description Moderator: Don Ihde Session organizer (and corresponding author): Robert Rosenberger This session brings together cutting-edge thought on postphenomenological conceptions of the body. How should we think our selfhood and our bodies in light of our contemporary relationships to technology?

Contribution 1: Jesper Aagaard (Aarhus University): Technohabitual Agency: Habits, Skills, and Egoless Agency Merleau-Ponty’s original notion of ‘habit’ can be seen in instances where the body prereflectively responds to the perceived solicitations of a given situation by, e.g., shifting gears or grasping a coffee cup. To avoid any mechanist connotations, Dreyfus later translated this flexible and situation-sensitive concept into ‘skill’, and phenomenologists have since proceeded to discuss intentionally trained skills such as sports and dancing. The field of phenomenology is now rife with knowing bodies, thinking hands, and egoless agency. At first glance, the notion of ‘skill’ may also help the burgeoning field of postphenomenology capture the extent to which our skillful use of technologies inclines us to act and perceive in specific ways: a staircase railing solicits grinding to the skilled skater, and an extraordinary event calls for pictures to the skilled photographer. Sometimes, however, our skillful use of technologies inclines us to do things that we do not intend to do. Discussing the unsafe practice of using cellphones while driving, for instance, Rosenberger (2014) notes that, “Like the way those who habitually bite their nails will be on occasion surprised to look down and find they are once again biting their nails, drivers may slide inadvertently and unconsciously into the distracting habits of the phone, habits in which their awareness is occupied more by the conversation than by the road” (p. 43). Similarly, students have described laptops as endowed with an attractive allure that “pulls you in,” and they sometimes close their laptops to avoid this magnetic pull (Aagaard, 2015). These situations seem to involve bad habits somehow, but the notion of ‘skill’ involves elements of training and mastery that make it ill-equipped to address this issue. The purpose of this presentation is therefore to develop a notion of ‘technohabitual agency’ that helps us describe and address such phenomenological and normative aspects.

Contribution 2: Ciano Aydin (Department of Philosophy, University of Twente/Delft University of Technology): World Oriented Self-Formation: Postphenomenology Meets Peircean Pragmatism Postphenomenology combines nonessentialist and nonfoundational pragmatism with a focus on how actual technologies mediate not only experiences of the world but also of ourselves. Concepts like ‘mediation,’ ‘multistability,’ and ‘technological intentionality’ give some suggestions for developing a nonessentialist account of the self but do not provide the necessary, basic structures for an alternative framework.

In this paper I propose that the phenomenological categories of Charles S. Peirce – the American scientist and philosopher who coined the notion ‘pragmatism’ – might offer this general framework. Peirce discovers that in our encounters with the world we always and necessarily have to adopt or presuppose three categories, which he simply calls the categories of Firstness, Secondness, and Thirdness. Firstness indicates that the self is fundamentally characterized by indeterminateness. Secondness implies that the self can only manifest itself by virtue of its interactions. Thirdness refers to a certain orientation, goal or ideal by virtue of which the self can govern and regulate its interactions. From this perspective the self is not an a priori given entity but can only form itself (which is a normative challenge) by imparting a particular form to its never completely fixed multiplicity of interactions. Peirce’s categories not only account for variabilities (multistability) but also for relative durability (stability) in the self and its relations to the world.

Peirce’s nonsubjective approach argues for continuity between the ‘grammar of our thinking’ and the ‘grammar of things’ and attempts to secure world-oriented self-formation. In his critique of what he calls nominalists, Peirce states “that our thinking only apprehends and does not create thought, and that that thought may and does as much govern outward things as it does our thinking” (Collected Papers 1.27). Technical mediation theory can contribute to this framework by showing that interactions with the world, as well as the goals and ideals that give orientation and enable self-formation, are mediated by concrete technologies.

Contribution 3: Kirk M. Besmer (Gonzaga University): The Digitized Body: A Twenty-First Century Reconstitution of the Leib/Körper Distinction The Leib/Körper distinction is fundamental to classical phenomenological accounts of embodiment. ‘Körper’ refers to the body regarded as a physiological object in the world – that is, as a bit of physical nature – while ‘Leib’ refers to the body as the lived-through agent that has a surrounding world: this is the body as the locus of all intentional activity, agency, and meaning-making. In this paper, I will revisit this distinction in light of the widespread use of digital biometric technologies. Such technologies are used, for example, in medicine (extensively), personal health (fitness trackers, etc.), and law enforcement (facial recognition software). Although there are significant differences among them, the fundamental operating logic of biometric technologies is that the body as Körper, in its homeostatic state, continuously generates relevant information about its status and being. In postphenomenological terms, Körper is constituted as a functioning element in an economy of information and is susceptible to initiatives of measurement and tracking. Rather than being primarily a bit of corporeal extension or fleshy physical stuff – the classical phenomenological understanding of Körper – it comes to be constituted in terms of data flows that pre-exist the methods of metricization that make it visible. This ‘reconstitution’ of Körper, which is articulated in biometric technologies, is akin to and possibly as significant as the mathematization of nature that Husserl describes as central to modern science. Although the full implications of the digitalization of Körper have yet to be seen, I will examine some ramifications that have already emerged, especially as they relate to the Leib/Körper distinction.


Panel: Postphenomenological Research 2: Technologized Space

Panel Description Moderator: Don Ihde This panel explores how selfhood is constituted in contemporary technologized spaces, considering high-tech examples such as smart homes and wearable computing, as well as low-tech ones such as park benches. How are identities disciplined in a world of multiple surveillances: digital, human, social, and legal?

Contribution 1: Robert Rosenberger (Georgia Institute of Technology): Multistable Public Space: From Park Benches to Bathroom Stalls The postphenomenological perspective holds that our relationships to technology are always what Don Ihde has called “multistable”; any technology is open to multiple meanings, and can be put to multiple uses. One realm in which issues of multistability take on social, political, and ethical implications is technologies of public space. These areas are rife with multiple meanings and devices with multiple potential usages. And since social and political forces often work to police these potential uses and discipline these potential users, there is the opportunity to connect postphenomenological insights with ideas from social and political theory. The main thrust of my work so far on this topic has been the critique of anti-homeless technologies, i.e. designs of public-space objects that serve to deter the potential alternative uses that homeless people sometimes take up. The most common examples include benches built to deter sleeping, trash cans built to deter picking, and spikes built into ledges to deter loitering, as well as laws against everything from camping, to panhandling, to vagrancy. In this paper I consider ways to use the notion of multistability, as well as ideas from social and political theories of technology, to analyze and critique these public-space designs and policies. Anti-homeless design is of course not the only form of hostile architecture. We could consider designs and policies targeting skateboarders, protesters, street vendors and others. Another pervasive and often unnoticed campaign involves the redesign of bathroom stalls for the purpose of deterring their usage as sexual meeting places. Such sexual surveillance and policing is part of long-term strategies for the disciplining of sexuality itself, further closeting those who have not traditionally been allowed to be themselves.

Contribution 2: Søren Riis (Roskilde University): What Is It Like To Be A House? Towards An Ontology of the Internet of Things Key to the idea of “the Internet of Things” is the integration of various sensors into things. These sensors are technological filters, or mediators, that will increasingly define and help regulate the world we are living in. Based on a postphenomenological framework, I will show what these technologies can do and sense when they operate in and create so-called “smart houses”. In view of this case I will proceed to offer an assessment of how smart houses experience their in- and exteriors and thereby try to articulate a response to the question indicated in the title: “What is it like to be a house?” In the third and final part of the presentation I will unfold the underlying larger concern, which has to do with how ontology transforms in the age of the Internet of Things. The presentation ends by connecting the case study above to some of the grand visions for the Internet of Things and stipulating ways in which ontology is likely to be revisited and revived in the near future.

Contribution 3: Olya Kudina (University of Twente): Deconstructing a Panopticon Tower of the Digital Age: A Phenomenological Account of Google Glass Ours is an increasingly digital age in which personal devices help us navigate through life. For instance, mixed reality technologies such as Google Glass not only present and suggest information to the users to act upon, they also co-shape perceptions of what is valuable to people and what particular values mean. One could suggest that Google has constructed a new Panopticon tower of the digital age that both monitors and controls people, being performative of how people view the world and themselves in it. I would like to investigate the nature of such a Google-Panopticon hypothesis, relying primarily on a postphenomenological investigation of the empirical research concerning the use of Google Glass. Here, the idea of the hermeneutic circle, as interpreted by Heidegger and Gadamer, will be instructive. I will show that the interplay between the self and the world is not based only on the pre-interpreted narrative of Google. Since people are necessarily embedded in the world and since the world around them is increasingly technological, the way people interpret the world and themselves in this world cannot be comprehensive without also relating to technologies. As such, the hermeneutic circle is technologically mediated. I will employ the hermeneutic analysis of the technological mediation approach to study the relations that Google Glass enables and conclude that the uncertainty in the age of Google can also be productive, fostering accountability of the company and responsibility of the users of its services.

Contribution 4: Peter-Paul Verbeek (University of Twente): “The Politics of Technological Mediation” How to develop a postphenomenological approach to the politics of technology? Postphenomenology has sometimes been criticized for being insufficiently political. Even though some postphenomenological studies have shown mechanisms of inclusion and exclusion (cf. Rosenberger on street furniture that works against homeless people), the phenomenological focus on the micro-level of human-technology relations might run the risk of ignoring the social and political structures surrounding these relations. In this paper I will argue that the opposite is true, by investigating how technologies mediate the infrastructure of politics: the character and quality of political relations and processes. First, I will expand the notion of ‘mediation’ towards interpersonal relations, connecting it to Merleau-Ponty’s notion of the ‘chiasm’. Interpersonal relations, in Merleau-Ponty’s analysis, are chiasmic because humans relate to each other as both perceiving and perceptible, touching and tangible: rather than a ‘line’, such relations are a ‘cross’. Connecting this notion of chiasm to the concept of mediation opens up new possibilities to investigate how technologies like drones, social robots, and social media help to shape social and interpersonal relations and structures that have political implications. Second, I will argue that the ‘politics of technology’ is not only to be found in the power structures technologies are part of, but also in their mediating roles in political interactions. Connecting to the Latourian notion of ‘Dingpolitik’, which analyzes politics as the formation of ‘publics’ around ‘issues’ (Marres 2005), and also connecting to recent discussions on ‘fake news’ and the political role of social media, I will develop a postphenomenological analysis of how technologies mediate both the formation of publics and the articulation of issues. Such a micro-level analysis of the technological mediation of the content and the structure of political interactions, I claim, is urgently needed in our current ‘populist’ political situation.

Panel: Postphenomenological Research 3: Augmented Realities

Panel Description Moderator: Robert Rosenberger This panel uses ideas from the postphenomenological school of thought to explore the implications of augmented reality technologies. How is everyday experience of reality changed and shaped by everyday mobile computing devices?

Contribution 1: Galit Wellner (Tel Aviv University): The Grammar of Augmented Reality Pokemon Go is an augmented reality (AR) game in which the players see imaginary creatures named Pokemons as if they were in the real world. The users see these figures via their cellphones’ cameras and screens, so that the reality as captured by the camera serves as a basic layer on top of which the software displays information and graphics. In summer 2016 the game brought AR into the public discourse and ignited interest in commercial implementations of AR technologies, from buying sofas and t-shirts to on-the-job employee training. My test case will be a vision of AR offered by designer and film maker Keiichi Matsuda in a short clip named “Hyper-Reality” (http://hyper-reality.co). So far the film has gained 2.5 million plays on Vimeo. It describes how reality may look if and when AR pervades each and every aspect of our lives. It is a simulation of a world in which Pokemons and the like are interwoven into everydayness and change its grammar. Postphenomenology analyzes how technologies mediate reality for us (Ihde 1990; Verbeek 2005). One of its main tools is the I-technology-world scheme that enables researchers to describe and analyze the various arrangements in which we, our technologies and our world co-shape each other. AR is a major category of digital technologies and its information layers urge us to update the scheme and introduce a new ingredient – media. My suggested new scheme is: I-tech-media-world. The “media-world” sub-scheme enables us to model the information layers that AR technologies “place” on the world (Manovich 2006). The “tech-media” sub-scheme represents how technologies shape media content and vice versa. In this process “I” and “world” dramatically change. My claim is that AR technologies do more than mediation; they bring a new grammar to reality, digitality and our being-in-the-world.


Contribution 2: Richard Lewis (Vrije Universiteit Brussel): Leveraging the Idea of Multistability with Regard to Museum Selfies There is a collision of lifeworlds in museums today. The traditional experience involves visitors passively and respectfully observing the displays, while the new experience involves a more active participation involving selfies, smartphones, and mobile media apps. When a museum visitor augments their experience by overlaying themselves with the museum objects that they want to capture, this is not (usually) a demonstration of narcissism. Rather, the museum selfie contributes to the construction of the visitor’s self, both real and virtual. It also changes the museum experience for other visitors. Museum staff can design exhibits that either promote or discourage the selfie, and protecting museum displays from unintentional harm done by selfie-takers is a new issue for museum curators. Postphenomenological concepts such as multistability, the enabling and constraining aspects of new technologies, and the co-constitution of subject and object can help us understand some of the complex relations that museum selfies embody.

Contribution 3: Nicola Liberati (Twente University): The Emperor’s New Augmented Clothes: Digital Objects As Part of the Everyday The main aim of this presentation is to solve a problem Augmented Reality is facing by using a phenomenological analysis and projectors. Augmented reality seeks to merge the digital and the real world by producing a mixed reality where the digital objects are usually visualised thanks to head-mounted or mobile devices. However, this technology is facing problems because the objects generated by the digital devices exist merely within the small group of people using the specific devices. Therefore, these objects look fictitious to the other members of society who are not using them. In order to analyse the elements which make these objects fictitious for the other members of society, we will take into account the tale The Emperor’s New Clothes because, even in this tale, there are fictional entities not perceivable by other members of the community. Thanks to this tale, it will be possible to highlight some elements which make objects part of the everyday world. Moreover, it will show how the intersubjectivity of these objects is directly related to the way they are perceived by the subjects and, in the case of augmented reality, to the devices used to make them perceivable. For this reason, it is possible to solve the problem Augmented Reality is facing by changing the devices used to produce these digital objects. At the end of the presentation, we will propose a project which can solve the problem by following the elements previously highlighted. We will show how, thanks to wearable projectors, it is possible to produce digital clothes as part of the everyday world of every subject. Thanks to these digital clothes, people will be able to wear digital objects as if they were common, usual objects, without being naked.

Panel: Postphenomenological Research 4

Panel Description Moderator: Robert Rosenberger In this panel we consider issues of technological mediation from a postphenomenological perspective. What are the limits of this notion for a postphenomenology, and what are its epistemological implications? Is it possible to develop what could be called a “mediation theory”?

Contribution 1: Jonne Hoek (University of Twente): Technological Mediation of Limits and the Limits of Technological Mediation Technologies extend the limits of human perception (Ihde 2015), mold and shift the boundaries of our moral senses (Verbeek 2011), and even the parameters of human nature might be redrawn through the application of various biotechnologies (Sloterdijk 1999). In order to determine and evaluate the mediating effects of technologies upon our limits, it is crucial not only to differentiate between the limited objects, but also to differentiate between different conceptions of what a limit is. At least three philosophically different conceptions of a limit can be discerned: (1) the limit as a moving frontier, which we find for instance in the advance of scientific knowledge; (2) the static-dynamic limit of Helmuth Plessner’s eccentric positionality (Plessner 1975); and (3) the absolute limit postulated in Karl Jaspers’ concept of limit-situations (Jaspers 1925). How then do these kinds of limits appear as technologically mediated? In this paper, I will argue that, from a postphenomenological perspective, the technological mediation of limits is always co-constituted by an implicit or explicit idea of what the limits of the technological medium itself are. This, in turn, invites us to formulate three ways in which technological mediums are limited.


Contribution 2: Bas de Boer (University of Twente): Vico’s Verum-Factum Principle and Contemporary Technoscience One of the central aspects of the work of Giambattista Vico is his verum-factum principle, which asserts that human beings can attain truth only of what they have made. This principle fundamentally questions the aim of the natural sciences to attain true knowledge; after all, nature is not humanly made. The reception of this idea has mostly focused on how it forces us to rethink the relation between the natural and the human sciences (cf. Marshall 2011). In this paper, I will explore the implications of the verum-factum principle when taking into account that phenomena in the natural and life sciences also depend on human intervention. I will show that this dependency becomes visible when taking into account (a) that scientific experiments take place under circumstances that are humanly created, and (b) that research in the natural and life sciences is dependent on technologies that are made by human beings. This connects with ideas in postphenomenology and mediation theory that the nature that scientists study comes into being through their interaction with mediating technologies (e.g. Ihde 1998, Verbeek 2005). A key idea within this line of thinking is that natural scientific practices should be understood as hermeneutic practices, because scientists are primarily concerned with ‘reading’ the results that are presented to them by technologies. In this paper, I will argue that integrating Vico’s ideas into mediation theory has two advantages. Firstly, it allows us to combine the idea of mediation theory that the nature the sciences are studying is made by human beings with the aim of the sciences to obtain certain knowledge. Secondly, the verum-factum principle allows us to understand why certainty is obtained through technical intervention and the use of technological instruments. These points will be illustrated with examples from contemporary neuroscientific research.

Panel: Reflections on Andrew Feenberg’s Philosophy of Technology

Panel Description Andrew Feenberg’s philosophy of technology provides the conceptual and methodological framework for the democratization of technology while also persuasively arguing for the necessity of this project. To mark the publication of his forthcoming book, "Technosystem: The Social Life of Reason" (2017, Harvard University Press), this panel situates Feenberg’s philosophy of technology within the intellectual traditions that have influenced his critical theory of technology.

Contribution 1: Graeme Kirkpatrick (): Feenberg, Marx and the Dialectic of Technology Reform There is a paradox in Marx’s writings on technology. First, he says that technology development will create the material conditions for socialism. Humanity endures capitalism in order to acquire the productive power that is needed to support a society based on plenty. On the other hand, though, Marx identifies capitalist technology as a weapon developed by the bourgeoisie to attack workers. Viewed in this way, capitalist machinery shatters the unity of the labour process (de-skilling) and undermines worker solidarity by causing unemployment. On an old-fashioned ‘dialectical’ reading, this paradox counts for little because when capitalist social conditions are removed the machines forged as weapons against them will become the workers’ best friends: it’s not the machines but how they are used that really matters. However, a machine that is arduous and unpleasant to use will not be made less so when ‘socialism’ is proclaimed. Andrew Feenberg is almost alone among critical thinkers in addressing this question. Recent re-statements of Marxism by respected authors like Harvey (2010) and Therborn (2008), for instance, do little more than rehearse Marx’s arguments as if there was no problem. This paper engages Feenberg’s attempt to disentangle Marx’s paradox and finds that, while it is more sophisticated than these efforts, it remains attached to a model of technological and historical change that cannot overcome the basic dilemma in Marx’s position.

Contribution 2: Andrew Feenberg (Simon Fraser University): Lukács & the Philosophy of Technology Lukács’s theory of reification, explained in his 1923 work, History and Class Consciousness, is often interpreted as a theory of ideology, but it is also a theory of social practice and a work of social ontology. Reification and dereification describe different types of social practice: individual technical practices aimed at adaptation, survival, and success, and collective transformative practices with the potential for establishing a solidary socialist society. Although many aspects of Lukács’s early work are no longer applicable, this distinction is relevant to struggles around technology today, such as environmental struggles or struggles over medical practices.


Contribution 3: Edward Hamilton (Capilano University): The Nietzschean strain in critical theory of technology Arthur Kroker (2010) groups Nietzsche with Heidegger and Marx as one of the great early thinkers of technological modernity. And yet, while the latter two have founded rich traditions in philosophy of technology, Nietzsche remains an obscure and shadowy figure. His presence is largely determined by an opposition between those thinkers in whom his influence is most keenly felt (Heidegger and Foucault) and the tradition of critical theory founded by Marx. As a result, two significant traditions in philosophy of technology remain more or less alienated from one another. This paper reconsiders the incompatibility of Nietzsche’s thought with critical theory through an examination of how these two traditions ground aspects of Feenberg’s critical theory of technology. It first examines some formal similarities between Nietzschean genealogy and historical materialism. I then explore the deviation between Nietzsche and critical theory in philosophy of technology through a brief reflection on the contributions of Foucault and Marcuse. The paper then draws these strands together in outlining how Feenberg’s critical theory of technology provides a basis for a third interpretation of Nietzsche in philosophy of technology – one which links genealogical analysis of the development of rational systems to questions of political subjectivity and agency.

Contribution 4: Darryl Cressman (Maastricht University): Contingency & Potential: STS & Critical Theory in Feenberg’s Philosophy of Technology One of the consequences of the empirical turn within the philosophy of technology has been the creation of “classical” philosophies of technology, typically associated with writers like , , and . Distinguishing a philosophy of technology as “classical” is often used to invalidate conceptual and methodological commitments that tend towards determinism or essentialism. Yet, as some have pointed out, forgetting the insights of classical philosophies of technology can diminish the capacity for a critical theory of technological society. Andrew Feenberg’s philosophy of technology stands at the intersection of critical and empirical approaches towards technology. In this presentation I examine how Feenberg adopts the methodological insights of STS (Science & Technology Studies) scholars like Bruno Latour and Wiebe Bijker to empirically verify aspects of Herbert Marcuse’s critical philosophy of technology. This critical constructivism prioritizes moments of resistance through the study of engaged use of particular technologies, leading to an empirically grounded hermeneutics of technology. In this way, STS and critical theory come together in the contingency and potential that technical objects reveal. Understanding technology through the concepts of contingency and potential, I argue, distinguishes Feenberg’s philosophy of technology from other projects that integrate the empirical methods of STS with the philosophy of technology.

Panel: Relating to things that relate to us 1

Panel Description Our world is thoroughly textured by technological things, from the most ordinary to the most sophisticated. Many of these things can now be described in terms of their digital, networked, computational, ‘smart’ character. Even as they are often manifested as things we can hold in our hands, their (mostly hidden) internal processes and systemic interconnections make them significantly different from relatively more straightforward physical things that we may have been used to in the past. They also pose fundamental challenges for understanding—both practically and philosophically—what they are and what they do, how they relate to us and to each other. More specific questions that now emerge include: How should we relate to and make sense out of things that withdraw/are not fully accessible to us on the basis of our own intentionality and experience; that actively relate to and in some cases ‘use’ us; and that can actively relate to each other in ways that do not involve us at all (and thus might call for innovative analytic methods)? And what are the conceptual, analytic, epistemological, practical, and ethical implications of the above? These are the problematics that we will collectively engage in this double panel. The complexities of these contemporary things and their relations call for bringing multiple perspectives to bear in conversation with each other—and this is precisely the kind of conversation that we wish to catalyse and stage in these sessions. The double panel is thus as much about the problematics collectively articulated and conversation generated as it is about individual presentations in themselves. The individual contributions, split into two sessions, are as follows.


Contribution 1: Heather Wiltse: Relating to things that relate to us—ontology and epistemology. On the multi-intentionality of assembled things Within a (post)phenomenological framework, human-thing relations are typically understood on the basis of intentionality—a thing never existing on its own, but rather always as a thing for a human in a world. Yet contemporary networked computational technologies fundamentally challenge this basic epistemological grounding in that much of what they are and do is not present to our experience through use. To take an everyday example, we might consider the act of using Google to search for something. While it is quite possible to see Google as a tool that enables searching, from the perspective of its design it could be more accurately described as a thing for harvesting user data and facilitating targeted advertising. This means that while a person using the service to search has one kind of intentional relation to it, those who relate to other sides of it (Google, marketers, etc.) have a quite different one. Google is also not an isolated, stable thing, but more like a fluid assemblage composed of many interconnected, constantly evolving, contextually customized components that have their own agencies and (intentional) relations to each other. An adequate description of things that are fluid assemblages must thus account for the multi-intentionality of their relations—what they are for, and for whom.

Contribution 2: Yoni Van Den Eede: The Omnipresence of Breakdown: Object-Oriented Philosophy of Technology Graham Harman’s object-oriented philosophy (OOP) has up until now received little to no attention from philosophy of technology (PhilTech). Yet two crucial aspects of OOP make it worthwhile, even necessary, to dig into it: 1) Harman is well known for his innovative reading of Heidegger, which puts the tool analysis “upside down.” For dominant approaches in PhilTech such as postphenomenology the tool analysis has been of great importance. What could Harman’s “reversal” mean for those approaches? 2) Harman also offers an idiosyncratic reading of Latour, supplying a kind of “mirror image” to him: he supplements the Latourian relationalist-network perspective with a notion of “substance.” But PhilTech is still sticking to the standard relationalist reading. It is worthwhile to inquire into possible reasons for this neglect, and to see if and how OOP can be put to use for the study of technology. Technology is not a particular focus in Harman’s work, but if we look at OOP through the “lens” of technology, interesting results may ensue. The notions of “breakdown,” “relation” and “network” play a fundamental role here, and it may turn out that these actually acquire an enhanced meaning – especially when looked at against the backdrop of contemporary and emerging “algorithmic technologies,” which may require an object-oriented analysis.

Contribution 3: Holly Robbins: Design to Situate Im/material Context We tend to design our technologies as black boxes. This may be an effort to promote ease of use, yet it limits our ability to engage with and understand the socio-ecological context around the artifact. How does it perform its function, how do I use it, how does it use me? This lack of context becomes especially problematic when we consider the layers of complexity that come with data-intensivity and connectivity and that are becoming increasingly commonplace surrounding contemporary artifacts. Yet, design has a powerful potential in breaking this black box paradox. Design approaches have the potential to not only surface this socio-ecological context, but also support opportunities for our engagement with it. Philosophical frameworks explicating the problematic nature of our relations with artifacts can also provide us with some ideas on how design can resolve some of these more complicated entanglements with objects. Working with design practitioners and design researchers, we have examined and developed design approaches that take into careful consideration how people, ways of doing, and the complex nature of materiality around objects can not only communicate this socio-ecological context, but also bind it.

Panel: Relating to things that relate to us—ethics and pragmatics

Panel Description See above

Contribution 1: Michel Puech: Attachment to things, artifacts, devices, commodities: an inconvenient ethics of the ordinary The nearest things to us (physically, functionally, emotionally) are elusive. They are ontologically vague and morally thin in the best case; they are totally unseen and utterly despised in a lot of cases. In applied ethics the trend seems to be: continually expanding ethical consideration to new "objects" (every human and not just me, my tribe, race, gender; then animals, plants, ecosystems). Can this expansion reach ordinary things? Digital objects? In wisdom ethics, awareness and care of the ordinary open a new dimension that requires new methods for engaging and valuing the objects populating our technosphere and the entities populating our infosphere. My intuition is that trying to constructively assess the moral significance of the most ordinary things (material and virtual, from coffee cups to Google search, from SMS texting to hot showers) has something about it that is inconvenient for moral correctness in the humanities. A candidly refreshing inconvenience, I suggest.

Contribution 2: Steven Dorrestijn: Ethics of technology below and above reason: The case of living with smart technologies With sensors and computational power, today’s smart technologies can collect information and exercise multiple functions depending on the context. Smart technologies thus could allow for sophisticated interactions with their users. Technology could become ever more ‘social’ and ‘personalized’. But at the same time, the increasing ‘intelligence’ of devices also seems to raise the risks associated with technology, namely that it might take too much command over its makers and users. Humans and technologies could be seen as in a competition to outsmart each other. What does this mean for humans as ethical actors? With reference to concrete examples, such as a research project about interactive screens for public spaces, I will discuss different aspects of interaction with smart technologies. I will compare occurrences of rich, two-way interaction with poorer forms of interaction which come down to a kind of process control system. I will thus consider how technical smartness may afford, even catalyse, or rather foreclose forms of human smartness. To ethically assess this I will invoke an ethics of technology below and above reason (about the technical conditions 'below' and aspirations to values 'above' the rational subject of modern moral theory).

Contribution 3: Diane Michelfelder: The New Assisted Living: Relating to Alexa Relating to Us In the past two years, AI-enabled and voice-activated tabletop digital assistants (TDAs), such as Amazon’s Echo and the more recently developed Google Home, have become increasingly popular among consumers in the US. My goal in this paper is to take up this panel’s theme by exploring the question of the ethical impacts of TDAs—in particular, of Amazon’s Echo, which responds to the name “Alexa”. For the most part, philosophical explorations of the ethical impacts of TDAs have focused on issues involving security and privacy. This presentation will go in a different direction, by looking at the impacts on our capabilities for developing relational virtues such as empathy, compassion, and trust. More specifically, it will take the form of teasing out a possible ethical paradox with respect to Alexa’s current design. This exploration will involve, in part, a discourse analysis of users’ interactions with Alexa, based on comments from the Amazon website, which demonstrate that users relate to Alexa as something with whom a relationship based on trust and caring could be established; the phenomenological elements of Alexa’s design (such as name and voice) encourage this. At the same time, Alexa takes input without regard to prioritizing which commands matter more than others: it is all algorithmically the same (from Alexa’s “perspective”) whether it is asked to stream music, give a weather report, or report the balance of one’s bank account. And, just as importantly, Alexa arguably takes most commands to reflect a user’s self-interest, rather than the user being interested in the well-being of others with whom she is in relation. Over time, this skews Alexa’s picture of the user, prompting the following worry: Does a TDA such as Alexa encourage users to build a connection with it based on relational virtues, while at the same time undermining conditions on which users can build similar connections to others? And, if so, how might this paradox be addressed?

Contribution 4: Fanny Verrax: From dealing with virtual others to the construction of the self: Videogames as an ethical sandbox How do we relate to virtual others? While much has been written on multiplayer online videogames, I would like to argue here that single-player videogames also offer a unique experience of virtual alterity and of ethical learning. More specifically, I would like to focus on five features of single-player videogames that make them a worthy ethical experience. 1) They offer a safe learning environment, indefinitely iterable at the player's wish, desacralizing mistakes and providing a learning curve. 2) Consequently, they allow for a type of mastery in reaching “the golden mean”. More broadly, they embody the Aristotelian view on virtue learning. 3) They extend the field of ethical questions and contexts one can think about, developing a sense of moral imagination. Typical examples here include being a prison manager (Prison Architect), running a pharmaceuticals company (Big Pharma), or working as a customs officer in a totalitarian state (Papers, Please). 4) They lead to a construction of the self through the control of emotions, while the player deals solely with an artificial intelligence. 5) Finally, they can provide a valuable empirical and individual basis for Arendt's thesis of the “banality of evil,” allowing perhaps for more benevolence towards non-virtual others' weaknesses.


Panel: Research in a Design Mode - In Conversation with Ann Johnson (1965-2016)

Panel Description Ann Johnson's untimely death serves as a poignant reminder of her clarity of judgment. As a historian of technology she continues to provoke philosophical responses. This session is dedicated to her productive provocations. Short presentations, not full-blown papers, highlight some of Ann's ideas and critiques as they are carried forward in the work of the presenters. Johannes Lenhard, Leah McClimans, Alfred Nordmann and Joe Pitt make a beginning; others are welcome to join at short notice or even spontaneously.

Panel: The language of biofacts: grammars of agricultural "things"

Panel Description This panel questions the "grammar of things" with regard to agricultural objects and practices. Agriculture is commonly regarded as the beginning of culture as such and includes basic narratives of technology: from hunters and gatherers to the early settlers and their domestication of plants and animals. However, the products of agriculture can hardly be understood as "things", i.e. as stable and permanent objects resembling artefacts that result from making and creating. In this context, the idea of "grammar" – deriving from the Greek γραμματικὴ τέχνη – can itself be seen as a technique. It standardizes a set of rules for understanding "things" – and, at the same time, for silencing or not understanding entities other than "things" (Heidegger reminded us of the term "Zeug"; Meier 2012). This silencing operates on both the semiotic and semantic levels, and is backed by a philosophy that usually reflects on technologies in the realm of machines and engineers (see Hubig et al. 2013), even though one founder of modern philosophy of technology, Ernst Kapp (1877), was deeply concerned with agriculture. For the history of technology, the pioneering work of Robert Bud (1993) only recently broadened the scope to bio- and agrotechnology. Therefore, the concept "grammar of things" should itself be an object of deconstruction. The aim is to take into account the methods and practices of agriculture and the transformations of its living objects, both aggregated in the term "biofact" (Karafyllis 2003; Birnbacher 2006). Already Hannah Arendt (1958) highlighted the difference between using artefacts and consuming agricultural products (German: "gebrauchen" vs. "verbrauchen"), the latter of which relates to subsistence and reproduction. On the other hand, industrialized agriculture treated its products precisely as mentioned above: as made things. In the Frankfurt School tradition, one could speak of "objectification" (Honneth 2005). This approach, however, would already concentrate on the results and their commodification, and not on the technogenesis, i.e. on how dynamic entities such as living beings are transformed into static things. Crucial for this transformation is also the legal sphere, in which living organisms are conceptualized as both things and legal objects – a conceptualization criticized as “bioconstitutionalism” (Jasanoff 2011). Since the 1960s, when genetic engineering and nuclear technologies in the food sector emerged, agricultural products have been interpreted in the light of creation and innovation, foremost by the highly industrialized countries and global players of agroindustry (Zachmann/Karafyllis 2017; Zachmann 2015). By means of biotechnologies, evolution is made to order (Curry 2016) according to a "grammar of artefacts". As a consequence, biofacts are objects of data regimes and cryo regimes, e.g. by means of biobanking (Karafyllis 2017). Public resistance against these processes is manifold. It includes objections against technical interference in "the natural sphere", against the patenting of organisms rather than treating them as "global commons", against reductionist models of life, against monocultures risking food security, and against the further exploitation of the developing countries’ resources, just to mention a few. Besides that, plant biotechnology is not that successful in practical respects, because the biotechnological grammar collides with the dynamics of the living (Gill et al. 2017).
The panel contributors suggest that agricultural products can better be understood as biofacts, i.e. as growing entities that are the results of technical interference. In biofacts, histories of evolution and technogenesis intermingle. One aim of the biofact concept is to unveil the different stages and practices in which techniques and natural processes interrelate. The concept rests on both hermeneutics and instrumentalism, and thus operates with approaches from STS, integrating philosophy and history of technology. Despite standardization efforts and the postulated efficiency increases of industrialized agriculture, biofacts still have natural potentials, which make them promising and risky at the same time. For example, their potential to generate mutations – which can be triggered by nuclear technologies – is necessary for 'inventing' new cultivars, while at the same time this natural potential puts product standardization at risk. Moreover, biofacts still grow in the outdoor environment, and thus are objects of biotic interactions as well as being affected by climate conditions. Referring to Actor-Network-Theory, biofacts might even embody agency. The panel lectures scrutinize traditional grammars of things that used to ally with the dichotomies of nature/technology, production/reproduction, indoor/outdoor, plant/animal, biosphere/technosphere, public/private, developed/less-developed countries, and high/low-tech. The contributors are part of the cooperative research project The language of biofacts: semantics and materiality of high-tech plants, funded by the Federal Ministry of Education and Research (BMBF), 2015-2017 (see www.biofakte.de).

Panel lectures Karin Zachmann (TU München), Nicole C. Karafyllis (TU Braunschweig), Bernhard Gill (LMU München)

Submitted Papers

Aravena-Reyes, Jose; Krenak, Ailton: Toward an Engineering of Care The term Anthropocene has gained increasing interest in various sectors of society, a fact that has also led to it becoming an inevitable subject for the philosophy of technology community. If we consider engineering as located at the operational level of the global production of technical objects, it is obvious that the urgent claim for a planetary orientation requires rethinking the production of technical objects not only from the perspective of sustainable engineering processes, but also from its epistemological foundations, that is, from what gives meaning to technical production itself. In this context, a large part of the criticism towards the current production process of technical objects appears to be based on the superposition of the lógos of modern science over the technical lógos, which resulted in the use of practices that developed a global blindness in the face of the Anthropocene age. By postulating engineering as belonging to the field of problem solving, the lógos of modern science made engineering obedient, for it delegated the enunciation of problems to other agents of the dominant economic machine, without questioning the sense that had been given by such agents to production and technical evolution. For this reason, a critique of engineering should not be directed only towards the production process of technical objects, but also towards the production process of subjectivity in the capitalist way of life, which is imposed through the education and exercise of this profession. The current educational process of engineers promotes the consolidation of a single world; a single reality, where nature is something external to man, characterized as something calculable and exploitable on a global scale. Thus, if we understand, as proposes, that the very relation between man and the world has an ontological status that institutes what we call reality, then changes in this relation would allow other possible living conditions as valid as those currently dominant. In this direction, an Amerindian perspective of technicity (based on a particular human-world relationship, which gives meaning to the real) can contribute much to developing the epistemological basis for the agency of an Anthropocene Engineering under new foundations, where, for example, beings are not separated by the split between culture and nature, can have different levels of intentionality, and where processes that identify their intentional states exist. Perspectivism and Shamanism, two key concepts of Amerindian thought, widely studied by the Brazilian anthropologist Eduardo Viveiros de Castro, are fertile ground for the establishment of a reality that orients technical thought towards a less material dimension and, therefore, one more focused on the spirit, this being perhaps the most appropriate way of promoting modes of existence less harmful to the planet. In addition to the ontological condition of this new reality in the context of the production of technical objects, the Amerindian perspective of technicity, combined with the Greek concept of métis, can contribute greatly to the reinvigoration of the inventive ability and produce, in the context of Anthropocenic engineering, the inventive quantum leap that Simondon considers critical to the inclusion of new lineages of technical objects within society. These objects, belonging to a less material technical reality, would be aligned with the demands of the Anthropocene.
From the new man-world reality inspired by the Greek métis and by Amerindian thought emerges, as a main axis, the theme of caring for Planet Earth, which, more than promoting control over the current processes of predatory exploitation, results in the institution of new schemes of care (subjective, collective, and also environmental); a kind of conceptual framework of political ecology for the production of technical objects. The subject of care has already been addressed by Bernard Stiegler in treating technology as pharmakon, a kind of cure promoted by the invention of new technologies that, in addition to being technologies of care within a new industrial logic, can also be understood as less material techniques or as techniques of the spirit. From the premises discussed here, a planetary orientation to engineering may call upon its agents to assume a more active role in the Anthropocene by elaborating a consistent philosophical foundation to aid in the invention of technical objects that inhibit the global process of alienation of desire and potentiate the human métis, making possible, in short, the invention of new modes of existence more engaged with the life of Planet Earth.

Beinsteiner, Andreas: Material irritations in the grammar of things. Post-ontotheological considerations The idea of a grammar of things builds on a metaphor that equates things with written words, with grammata. The construction of things, their working together with other things and their relations to humans is considered analogously to the relations between words as they are defined by grammatical principles. Implicitly, the equation of interrelated things on the one hand, and of written words on the other, presupposes that there is a writer, an author who constructed things in their respective interrelations with other things and human beings. This is the idea of ontotheology, as Martin Heidegger defines the term: things in themselves and in their relationships to other things are intelligible because there is a creator who created them with meaningful intentions, according to a plan. The createdness of things grants their readability: just as a book can be read because it contains not a sequence of arbitrary symbols but the expression of its author's meaningful intentions. According to Galileo Galilei, for example, the book of nature is written in the language of mathematics: whoever understands the grammar of that language is able to read this book. Such metaphors of authorship, according to Heidegger, play an essential role in metaphysics. However, if we consider things as terms of a language that is defined by a certain grammar, then there is something that can never be fully subordinated to that grammar and, accordingly, impedes its coherency. This interfering element is materiality: Heidegger calls it the earth and defines it as an uncreated abyss. Thanks to that abyss, there is never one exclusive way to read things and to understand their grammar. In order to demonstrate this thesis, the paper traces Heidegger's relevant considerations. Furthermore, it aims to expose this irritating role of materiality also in contemporary (post)phenomenology, in particular in what Don Ihde, also using the author-creator metaphor, has called the designer fallacy. Finally, the paper poses the question whether this tension between grammar and materiality is the same in artefacts that are no longer material things in a traditional sense but computer applications.

Boeva, Yana: Process over Form: Making Design Visible with Participatory Making Design, as scholarship discloses, has several manifestations, ranging from the act of practice in the process, through its result, to the wider public understanding of design as manufactured goods with specific aesthetics (Walker 1989). The latter emphasizes that design is visible through a form, but less so in and through an inherent process—the very in-between or the "pre- and post-design" (Sanders & Stappers, 2014; Cheatle & Jackson, 2014). However, as Bauhaus artist Paul Klee exclaimed, it is processes in the broadest sense, and not forms, that matter (1973). Despite that, Western society at least predominantly relies on the study and development of processes, yet stresses only their results. The supremacy of form over process is perhaps reversed, however, when we focus on the specifics of recent technological and cultural currents such as digital fabrication, the maker movement, and participatory practice, as many actors engage in these for the sake of doing. Long-established places or objects such as the drafting table, the prototype, the sketch, and CAD technologies, but also the new wave of digital fabrication machines, work together to make aspects of design more obvious, to visualize, and to materialize design. At the same time, design contains elements beyond material presence, such as mental work, emotions, but also improvisation and play, which remain invisible as they cannot easily be indicated in objects, yet are key actions of participatory maker practices. Drawing on interviews and practice observation with amateur makers and design professionals involved in these practices, as well as recent public displays on the relationship of design, craft, and do-it-yourself (DIY), I will present a critical, STS-informed analysis of how these recent technologies and practices uncover the value of design as process.

Böschen, Stefan: Grammar of Things – a field theoretical perspective Since Actor-Network Theory put forward the symmetrical approach, the question of how to understand things in themselves but also in relation to subjects has attracted considerable attention. This provocative strategy opened space for questions about the specific action qualification of things. These questions are not merely abstract theoretical ones but of high empirical importance. Present-day societies are entangled within socio-material arrangements, and the question of how these arrangements enable or block transformation processes, as in the German Energiewende, gains more and more importance. Transformations in such cases are highly related to infrastructures, but also to the wide array of renewables. How do such arrangements evolve over time, and what is the grammar of things in the grammar of transformation? Against this background, I will first present a sociological field-theoretical approach. In contrast to the most prominent approaches, those of Bourdieu and Fligstein/McAdam, I will strengthen and specify the model and the role of actors as well as things in the field. Things are not taken as symmetrical entities but as entities with special patterns of activity, in contrast to those of human individuals. These patterns are stabilization and disruption. By contrast, those of human beings can be differentiated into "Action pattern", "Knowledge pattern", "Social pattern" and "Power pattern". Second, I will show, with regard to the German Energiewende, how the transformation evolves as a structural change while specific symmetry axes of (collective) actors and things moderate the dynamics of such a change. The inclusion of things stabilizes as well as reconstructs such symmetry axes in the field. This is why things operate as 'objects with stimulative nature' (Lewin) in the field.

Breuer, Irene: "The grammar of architectural design: Graphs, grids and diagrams" Figure versus ground defines the fundamentals of perception: vision emerges in the dimension of difference between bounded objects and the ground in which they appear. Empirical vision's figure/ground distinction has generated one icon after another. This paper aims at showing that the use of graphs and diagrams in modernist art as well as in architecture constitutes the precondition for the emergence of the perceptual object to vision. Graphs structure the visual field as such, for they are the matrix of a simultaneity between figure and ground, surface and depth, a perfect synchrony that conditions vision as a form of cognition. The graph itself is perceived in its immediacy and self-containment; as such it dispenses with narrative and figuration. Transformed into a grid, it succeeds in drawing the infrastructure of the visual field, its geometrical order, while further developed into a scheme or diagram it establishes the grammar of the classical language of architecture and of visual art in general, and as such it becomes a spatial frame. Nowadays, visual art and architecture tend to disrupt and dislocate these grammatical frames in order to evade order and representation. In their dispute with semiological structuralism, architecture and art have recourse to semantics: art in general becomes the object of experience and the source of sense, and not the result of a reductionist system bearing no relationship whatsoever to external realities or events. Thus, the diagrammatical project is reformulated in terms of an organism bearing multiple levels of sense and lacking any single reductive organization. The rejection of architecture and art in general as potential practices of writing, understood as an organization of signs, leads to an "informal" writing, which accommodates a multiplicity of geometries and meanings. The elucidation of sense and of subjective experience involved in art and architecture as an "informal" writing may help to bridge the gap between the verbal and the pictorial, thus defining a particular practice of understanding contemporary art and architecture.

Brey, Philip: Structural Ethics: Ontological Foundations and Implications for the Ethics of Technology In this paper, I will further develop the structural ethics approach by developing its ontology and by analyzing the role of elements in the structures posited by the approach in producing moral effects. Structural ethics (XXXX, 2014) studies social and material arrangements as well as components of such arrangements, including humans, animals, artifacts, natural objects, and complex structures composed of such entities. Structural ethics is complementary to individual ethics, which is the ethical study of individual human agents and their behaviors. Unlike individual ethics, structural ethics looks at larger structures and networks with the aim of engaging in social and technological engineering. Structural ethics has three aims: (1) to analyze the production of moral outcomes or consequences in existing arrangements and the role of different elements in this process; (2) to evaluate the moral goodness or appropriateness of existing arrangements and elements in them; and (3) to normatively prescribe morally desirable arrangements or restructurings of existing arrangements and individual elements in them. I will first develop a detailed ontology of structural ethics. The central concept is that of a structure or network, which is a conglomerate of interacting entities that together determine outcomes or actions that are the subject of moral evaluation. I will also define the entities in such networks, as well as larger structures composed of such entities. Finally, I will define the part-whole relation between entities and larger structures, and the relations and interactions between elements. I will relate my ontology to proposals by Latour (Actor-Network Theory) and Giddens (Structuration Theory). Second, I will develop structural ethics by analyzing the role of entities in networks in fixing moral outcomes or behaviors. I call entities with such a role moral factors, and I distinguish positive vs. negative, accidental vs. intentional, and outcome-oriented vs. behavior-oriented moral factors. By analyzing how different kinds of entities come to qualify as moral factors and how they contribute to different kinds of moral outcomes, I will work towards positioning structural ethics as a powerful new approach for assessing the moral role of technology in society.

Bristol, Terry: Hawking Evolution: Survival of the Weaker The trans-humanist movement has begun to imagine the design of new, superior humans. CRISPR genetic technology is a leap forward. What is the ethical framework for these decisions? Engineer George Bugliarello argued that modern engineering is a natural extension of biological evolution. Consider insulin technology. Previously, Type 1 diabetics died young without reproducing. Now they reproduce and survive into their 60s. The genetic weakness associated with Type 1 is now spreading through the global gene pool. Arguably this weakens the gene pool. The insulin-diabetes example is a token of a type. Consider how many modern humans would survive in a hunter-gatherer culture. Engineering advances in agriculture alone have enabled competitively weaker individuals to survive and thrive. McKeown argued that virtually all the advances in health and longevity in the advanced nations over the last 300 years have been due to engineering advances such as water and sewage systems. Stephen Hawking is the poster child of those promoting inclusion of the disabled. Actual human evolution favors the survival of the weaker. Berkeley paleoanthropologist Tim White argues that the path from Lucy, 2 million years ago, to the modern human form has been driven by technological advances (tools and rules) that favor inclusion of the weaker. Technological advances oppose the imagined forces of Darwinian natural selection. By including the genes of the weaker, the gene pool qualitatively metamorphosed to produce the truly superior modern human. Malthus was wrong. Post-Malthusian economist Paul Romer notes that over the last 75 years global population has doubled and economic output has grown eight-fold. Matt Ridley documents in painful detail the reasoning and policies deriving from the Malthusian presupposition at the basis of the eugenics movement and from misguided notions of Darwinian selection of the fittest. There are crucial ethical lessons here for those involved in designing the future humans.

McDonough argues that inclusion is the path to abundance. Dewey characterizes evolution as ‘the construction of the good’.

Budrevicius, Algirdas: Hylomorphism and Cartesian Coordination as Basis for the Definition of Thing and for the Grammar of Things A thing is usually defined in mechanistic terms, as a set of its properties. This way of considering the thing proved successful before the advance of computers and the cognitive revolution in the second half of the twentieth century. However, its potential to meet the challenges of information technologies—in particular, artificial intelligence and additive manufacturing (3D printing)—is not evident. Two ideas are proposed in this contribution for considering the Grammar of Things. The first is to describe the thing in terms of hylomorphism, as it was considered from Aristotle until the Renaissance. The thing is then treated as a compound of its form and matter; this approach also allows defining representations of things, for example signs, images, models, projects, etc. It is claimed that the Grammar of Things, most generally, should also be grounded in hylomorphism. The second idea is to consider the fundamental level of the Grammar in terms of ontological-Cartesian coordination. In this paper, it will be demonstrated how the traditional Cartesian system of coordinates, interpreted structurally as the intersection of two axes (form and matter), can be used for the coordination of things. First results will be presented: it will be shown that the thing can be treated as a hylomorphic entity defined by the first quadrant of the coordination system; it will also be demonstrated that the entities most closely related to the thing—some-thing, no-thing and any-thing—can be defined in terms of the remaining three quadrants of the coordination system. The coordination may be viewed as the lowest level of the Grammar of the Thing; it does not, however, allow establishing relations between things. Therefore, an advanced system of ontological coordination—built by means of the composition of two elementary and oppositional systems—will be presented. The system also allows discriminating between fundamental types of things: cognitive and non-cognitive; natural and artificial. This contribution presents a further development of the author's ideas described in his book Sign and Form. Models of Sign as Homomorphism Based on Semiotic Insights into Aristotle's and Aquinas' Theory of Being and Cognition.

Buongiorno, Federica: Lifelogging and the "Technologies of the Self": Some Phenomenological Remarks on Digital Processes of Subjectification In 2001 the American writer Marc Prensky noted the lightning-fast development of the Net Generation (Tapscott, 1998) into the generation of "digital natives", distinguished by their immersion in digital communication technologies from birth and their transparent (i.e. direct and intuitive) use of such technologies. The use of digital technologies in contemporary global society has brought about a genuine anthropological and anthropotechnical transformation of the individual: in this paper, I will analyse these transformations in relation to a specific, very recent phenomenon of digital technology which falls within the range of practices of the quantified self, namely the phenomenon of lifelogging, in order to illustrate its repercussions in terms of the construction of the individual and social self. Lifelogging is a complex form of self-management through self-monitoring and self-tracking practices, which combines the use of wearable computers for measuring physical performance (heartbeat, caloric consumption, distance covered, etc.) with specific apps for processing, selecting and describing the data collected, possibly in combination with video recordings (including live streaming). The phenomenon will be analysed on two main levels: 1) processes of subjectification; 2) social recognition. In this paper I will focus on the first aspect, that of the subjectification process, which I will address from a phenomenological perspective, combined with Michel Foucault's late reflections on the technologies of the self. In terms of subjectification, these practices may be understood as digital technologies of the self, to quote Foucault, which is to say as modes of controlling and transforming one's self by acting upon one's body by means, in the case of lifelogging, of digital devices. I resort to the phenomenological method developed by E. Husserl and M. Merleau-Ponty in order to A) provide a philosophical understanding of this very new phenomenon, which is still largely lacking; and B) fill a gap in phenomenological research, since an application of the Husserlian method to the phenomena of new media and communication is still to be fully developed.

Cano, Pablo: Technological Normativity: its epistemic role in science and mathematics Does technology play a relevant epistemic role in the development of scientific and mathematical knowledge? To argue that this is indeed the case, I will propose the notion of technological normativity to explain how technology establishes directives that indicate how science and mathematics should change or adjust in order to manage and solve concrete problems. We can identify examples where technological research allowed the development, modification and even abandonment of a particular science. In this work I present two historical cases to argue that scientific theories and mathematics can be understood as plastic instruments that adjust to technological requirements in order to better understand the phenomena under study. The first example is the development of the steam engine indicator diagram in 1796, which I use to expose its impact on the further development of thermodynamics. In 1834 Paul Émile Clapeyron, a French engineer, idealized the diagram and established relationships between the thermodynamic theory proposed by Sadi Carnot and the behavior of pressure and steam volume in the machine, identifying adiabatic and isothermal processes (processes that at the time were only starting to be understood) with sections of the diagram. The second example deals with three contributions from Oliver Heaviside. The first is operational calculus, a technique for solving differential equations; the second is the step function; and the third is vector algebra and the corresponding reformulation of Maxwell's electromagnetic theory. All three are motivated by the same conviction: experience and experimentation must guide scientific work; even if scientific knowledge is insufficient, science must be allowed to be guided by experience and experimentation. Heaviside freely experimented with the nineteenth-century telegraph, but also with the mathematics and physics of the time. I therefore intend to argue that what I refer to as technological normativity is articulated in a reflective equilibrium with science and mathematics (a kind of virtuous circle), evidencing a plastic quality in mathematics and also in experimental and theoretical physics, and allowing for the development of scientific and mathematical knowledge.
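As a hedged illustration for readers unfamiliar with the two technical items named in this abstract (standard textbook forms, not the author's own notation; the LaTeX below assumes the amsmath package), the work per cycle read off an indicator (pressure-volume) diagram and Heaviside's unit step function can be written as:
\[
  W \;=\; \oint p \,\mathrm{d}V ,
  \qquad
  H(t) \;=\;
  \begin{cases}
    0, & t < 0,\\
    1, & t \geq 0.
  \end{cases}
\]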

Carvalho, Tiago: Gardening in the Anthropocene: going nowhere, growing somewhere. With the industrial transformations and the green revolution in agriculture, the link between local food production and consumption was severely cut; the related imaginary was profoundly shaken (Thompson, 1995). Agriculture in industrial countries has been largely converted into a mechanized commercial activity through the introduction of more cost-effective and efficient technologies. The energy inputs that human bodies, cattle and manure once gave to the land were replaced by fertilizers, tractors and threshers. This change was celebrated as an escape from all the toil, misery and submission to crop and weather variations, and it introduced a radically novel way of relating to food as a cultured nature. At the same time, through affordable transportation technologies, cities have been expanding into the once necessary tracts of agricultural fields, making the difference between rural countryside and cities, nature and culture, all the more blurred. Food production technologies have also made it possible for once local and site-specific edible commodities to be replicated anywhere and anytime in the world (Coff, 2006). In recent decades, though, commanding grass-roots reasoning and environmental arguments have been advanced for public policies that allocate more urban space for recreational gardening within cityscapes. An acknowledged over-dependency on the food supply chain gave way to fears of paralysis and other concerns about the fragility of those provisions; reasons have also been put forward about how our moral imagination is disengaged from the history of food and how urban gardening can reconnect urbanites with a conspicuous reality. Following Borgmann's notions of focal things and practices, this paper aims at applying mediation theory to question the hopes placed in urban gardening; presented as a simple, unassuming plan for valuing place-making, community building and a responsible way of technologically relating to nature, it contrasts with other, more ambitious and high-technological plans for fixing a damaged environment. We hope to show what virtues and vices urban gardening fosters and how the resulting image of nature is at odds with the challenges of the Anthropocene. Still, we maintain that, given the uncertainties of any technological fix, these kinds of approaches are the best that we could hope for.

Checketts, Levi: The Sacrality of Things Mitcham and Grote's 1984 Theology and Technology (a counterpart to Mitcham and Mackey's 1972 Philosophy and Technology) opens with a section borrowing H. Richard Niebuhr's five ideal types from Christ and Culture to consider the relationship between Christianity and technology. Mitcham, along with Jacques Ellul and Albert Borgmann, argues that the "Christ against culture" model is the most appropriate mode for thinking of the relationship between Christian faith and technology. I challenge this claim by noting that several technologies have a sacred function within the Judeo-Christian tradition. I examine in particular the execution technology of crucifixion, the technology of the moveable-type printing press, and the technologies of trans-continental navigation. Various artifacts in these cases (crosses, printing presses, caravels) are part of technological systems (Roman disciplinary techniques, books and information technologies, nationalistic colonization schemes) which serve important functions within religious (theological, liturgical, ecclesiastic) contexts. In other words, these particular technologies have been "sacralized" under the various guises of Christology, biblical and liturgical studies, and missiology. I contend that, following Durkheim in The Elementary Forms of Religious Life, these particular technologies have become exempt from philosophical critique because they have been granted the status of "sacred" and are thus independent of ordinary "scientific" interrogation of technologies. I contend on the basis of this assessment that the relation of technology and theology is not as simple as Niebuhr's five categories suggest; in fact, some technologies become "invisible" because they are central to a belief system, but this does not obviate the fact that they are "things" constructed by human beings to accomplish certain ends (including political ends through execution, intellectual ends through communications production, or nationalist ends through exploration). Thus, I contend that the relationship between Christianity and technology is complex and irreducible to dichotomies; theology (and, further, philosophy) is formed in the context of historical technologies and their functions in social systems, and it cannot be understood apart from this. Indeed, Christian theology recognizes the historical nature of the revelation of God through Christ Jesus, a revelation that is unintelligible apart from historical technological artifacts and systems.

Chen, Jia; Chen, Fan: Research on Technological Communication from the Perspective of Social Integration With the rapid development of science and technology, there is increasing concern around the world about efficient mechanisms for the communication of technology. The issue of technical communication has also long been one of the key research topics in the field of philosophy of technology. This paper analyzes the communication of technology in society and aims to grasp the overall process of how technical communication plays its role from the perspective of social integration. The paper introduces the concept of technical communication, discusses the differences between technical communication and scientific communication, and elaborates four traits of technical communication. It also introduces social integration from three perspectives: the first based on Herbert Spencer's theory of social evolution, the second on Emile Durkheim's theory of social morality, and the third on Talcott Parsons' structural functionalism. The paper then analyzes the concept of social integration. On this basis, it proposes the compound concept of the social integration of technical communication, holding that this can be understood as technology reaching a state of socialization through restriction, adaptation and regulation with society in the process of communication. That is also the logical starting point of the paper. The paper then discusses the subjects of technical communication through process theory, the social-systematic structure of technical communication through systems theory, and the constructivism in the diffusion of technical communication through the theory of social construction, and unpacks the connotation of social integration. Finally, the paper proposes four steps of the social integration of technical communication: the first step is contact and assessment; the second is regulation and adaptation; the third is feedback and absorption; and the fourth is maintenance and update.

Chestnova, Elena: Time travel in seven mile boots: temporality and distance in Gottfried Semper's history writing Suzanne Marchand, in her seminal study of classicism in Germany, has noted that in the course of the nineteenth century the understanding of the idea of culture shifted away from one describing a level of aesthetic attainment along an absolute scale to a more pluralistic definition that spoke of cultures as the unique characteristics of particular groups of people. She makes the intriguing proposition that this change was associated with a shift away from prioritizing texts as the main sources of information towards the study of things. Her reinforcing of this postulate with the example of the development of what she calls 'ethnological sciences' suggests that she observes, among her nineteenth-century protagonists, a perceived equivalence between historical and geographic others that has also been noted by critics of the discipline, most significantly Johannes Fabian. How does this conflation map onto the emergent centrality of the study of things that she describes? The writing career of Gottfried Semper (1803–1879) contains the shift from texts to things in compressed form. A French-educated German architect and scholar, he was forced to relocate to London in the 1850s, taking along an ongoing book project on the history of architecture, which until that point had been building steadily on its heritage of German cultural history (Arnold Heeren) and advances in archaeology (Karl Otfried Müller). The capital of the empire assaulted Semper with a pandemonium of things, leading him to change his thinking on history and culture and eventually producing one of the most ambitious historical schemes of artistic endeavour ever attempted – his magnum opus Der Stil. Containing well over one thousand artefact references, the tome is really a museum in a book, narrating a history as a succession of decorative – and often quotidian – things from far away or long ago. This paper will explore how the conflation of time and distance as markers of the other became a key element of Semper's history writing. Of particular interest is the appropriation of ethnographic and historical objects as artworks able to act as apparent bridges of geographic and chronological distance.

Coeckelbergh, Mark; Funk, Michael: Transcendental Grammar and Technical Practice in Wittgenstein's Late Writings: Reading Wittgenstein as a Philosopher of Technology While there are some references to Wittgenstein in contemporary thinking about technology (Winner, Nordmann, Keicher, Kogge, Ihde), generally Wittgenstein's work receives little attention in this field, and the topic of Wittgenstein and technology is not systematically studied by Wittgenstein scholars or by philosophers of technology. This presentation argues that, and shows how, we can fruitfully use Wittgenstein's late writings to better understand technical use and practice. We propose to read Wittgenstein as a philosopher of technology, with a specific focus on the notion of technical use and practice and the meaning of grammar understood from a Wittgensteinian point of view. First we highlight examples and metaphors of technical practice in the Philosophical Investigations and other late writings of Wittgenstein. Then we use Wittgenstein's view of language in the Investigations to elaborate a conception of technological practice, informed by the notion of transcendental grammar. In particular we show that in Wittgenstein language and philosophy are seen as forms of technique, skill, and performance, that meaningful language use and practice have bodily and socially shared dimensions, and that this embodied and social use and practice is embedded in a transcendental grammar (transcendental as contrasted with merely syntactical). We support this interpretation with references to authors such as Keicher, Nordmann, Kogge, and Winner. We argue that with this approach to grammar, Wittgenstein reflects on conditions of possibility in a more radical way than Kant did, and that we can use this approach to conceptualize technical use and practice as involving transcendental grammar. We conclude that, based on this reading of Wittgenstein, we can not only go beyond some existing Wittgenstein interpretations, but also begin to develop a new approach in the philosophy of technology.

Coeckelbergh, Mark: Magic devices, gothic robots, and romantic people: Understanding the use of contemporary technologies in the light of romanticism Contemporary approaches in philosophy of technology after the empirical turn, for instance current directions in postphenomenology and critical theory, are often occupied with humans and things, and discuss present human-technology relations; the role of language remains undertheorized and systematic engagement with cultural and historical approaches is often missing. One way of trying to better understand our relation to technology is to focus on language and culture as constituting a kind of "grammar" that shapes our use of technological artefacts. In this talk I support this point by referring to Wittgenstein and then interpret and discuss the use of contemporary technologies such as smartphones and robots in the light of romantic language/thinking and culture. First I show that in the history of technological culture, romantic language/thinking has not always been hostile to technology, as is often assumed, and should not be reduced to nostalgia. Romanticism and technology have also been involved in all kinds of entanglements and liaisons, for instance in 19th century and 20th century science/technology and science fiction. Then I argue that this is also happening today when we use high tech devices and technologies such as smartphones, robots, and augmented reality, and that a better understanding of the relations between romanticism and technology can contribute to a more comprehensive theory about the use of technologies today.

Cong, Hangqing; Chen, Ximeng: Social Governance in Chinese Engineering During the past 20 years, there have been a number of significant events in China's engineering sector affecting the country's social stability. These events can be divided into two categories: those caused by internal quality problems, and those caused by the externalities of engineering. Whatever the kind of problem, both can be attributed to major changes in the governance model of engineering projects. To address these issues and initiate a new and appropriate governance model, Chinese engineering practice should find a new and broader base. The concept of "engineering morphology", which enables an analysis of the diverse forms of engineering and their specific manifestations, could help in establishing such a new foundation. The term "morphology" refers to the process of formation of an organism in an animal or plant. The morphology of engineering is different from the concept as originated and used in biology, geology and linguistics; it is a four-dimensional space of technology development, defined by the aspects of people, artefacts, nature and society. The morphology of engineering is the fusion of three forms: knowledge, organization and life. As a form of knowledge, the morphology of engineering is not limited to theoretical engineering knowledge, but also includes the practical knowledge of engineering practice, which is deeply entrenched in all aspects of the industry chain. As an organizational form, its participants include government, industry, business and the public. As a form of life, engineering has penetrated all aspects of social life; engineering practice has become the soil of the modern lifestyle. The application of morphology as a concept depends on the characteristics of the "people-artefacts-nature-society" dimensional space. When a project is implemented in a specific culture, as well as a specific geographical and environmental setting, it requires the diversification of engineering. In practice, the morphology of engineering is always associated with a specific social context; consequently, there is no universal morphology. A particular engineering form cannot be transplanted to another place, because such a transplant would ignore the characteristics of engineering rooted in the four-dimensional space; it would undermine its forms of knowledge, organization and life. Morphological diversity presupposes active governance. The ethical basis of all governance should be autonomy. In the process of governance, the participants ideally act on their own judgments and are therefore autonomous. Although self-determination is the core of autonomy, it should not degenerate into self-centeredness. In view of multiple actors, heteronomy is indispensable in order to achieve well-coordinated governance of engineering. When we consider others and make decisions in favor of their interests, autonomy is combined with heteronomy, and that is what will be advocated here. This idea of autonomy in combination with heteronomy should be the core of social governance in Chinese engineering. Governance is the only way to do justice to engineering morphology, that is, to the comprehensive character of engineering (the three forms) and its practical diversity (the four characteristics).
From the point of view of the organizational form, China's engineering has undergone a transformation in governance models: from a government-business model to a government-industry-business-public model. Industry and the public are two new actors. On the one hand, as the economy is being reformed, it is basically being decoupled from the government. On the other hand, public engagement in the governance of engineering is gradually increasing. The new governance model should be broader and more varied; the morphological approach to engineering and the notion of autonomy can help to bring this about.

Cruz, Cristiano C.: Brazilian Popular Engineering and the Responsiveness of Technical Design to Social Values In Brazil, there is a movement in the realm of technical production that started to gain momentum from 2002 on. This movement, which some of its participants now call popular engineering (PE), actually encompasses three originally different but complementary perspectives: social technology, solidarity economy, and university extension. Despite some of PE's remarkable undertakings – such as the agro-ecological solutions pursued in the settlements of the Landless Rural Workers' Movement, and the cooperatives built with collectives of homeless or very poor collectors – this field of technical production has not been given the philosophical consideration it deserves. Indeed, if one pays careful attention to the way these technologies are produced and to how engineers work, one will certainly identify some very intriguing aspects of PE. These aspects concern two main characteristics of popular engineering: the collaborative construction of the technical solution along with the local participants; and, to that end, the imperative of dialoguing with traditional (or popular) knowledge, which demands some participatory methodology (such as action research) and a process of popular education. Because of this peculiar way of producing technology, PE manages to create technical solutions that not only address real demands of popular groups, but also incorporate (part of) these groups' knowledge, values and worldviews. In doing so, PE allows such collectives to build a sociotechnical reality somewhat closer to what they really want it (or dream of it) to be. Ontologically, PE's technical legitimacy can be proved by the conjugation of Feenberg's sociotechnical rationality and a defensible appropriation of Simondon's understanding of technical development. Epistemologically, the production of social technology by popular engineers is only possible – this is our thesis – because they succeed in transforming part of Vincenti's design instrumentalities (mainly 'judgmental skills' and 'ways of thinking') in relation to their standard technocratic form. If this is so, however, there seems to be a part of technical design – the arts of engineering (i.e., Vincenti's design instrumentalities) – which is (or can be) deeply responsive to social (or non-technical) values. And such a conclusion, if correct, is anything but devoid of importance, mostly (but not only) for those who struggle for social justice, popular empowerment and/or environmental sustainability.

Cutler, Mark: What Do Things Want? Timothy Morton develops the concept of 'hyperobjects' in his book of the same name, in response to both recent and impending economic, digital and ecological catastrophes. Hyperobjects are entities that have causally and conceptually broken free of the banal human category of 'object.' Morton wants to push for objects of human creation which are nonetheless much, much more complex than human thought, objects far beyond any possibility of coherent or even finite representation, structures of which we can only see some facets, while others withdraw into ontological crevices. They may be spatially and/or temporally distributed; they may manifest only as torsions in the behaviours of other systems. If there is no first statistical aberration, can we say there was a moment when climate change began? And yet it began. Hyperobjects are nonlocalizable but always closer than we think. There is no upper limit to the amount of data we can collect about them, and yet we come no closer to understanding them, to predicting their undulations. Morton suspects that hyperobjects are, in a sense, thinking; that they have a kind of agency which is both strictly outside human comprehension and of immediate consequence to human livelihood. The moment we become aware of them, we were already in them; they ooze around us, patterning our vital and social functions. This paper radically extends Morton's conception of hyperobjects from the domains of ecologies and economies to that of the information networks and objects we interact with every day. Hyperobjects' tentacular messengers may be much closer than we realise—on our skin, in our hands, coagulating in unmanageable piles on our desks and phones, subsuming us. Morton fears that hyperobjects are siphoning from humanity our agency over our own lives; that we are making monsters smarter and more powerful than we are; that something colossal and incomprehensible is coming; that we are in the dark and that the dark itself is a hostile entity. The things have arrived; they are reproducing faster than we can imagine, beyond our control. They are close; as close as your own clothing. The question is: what do they want?

de Melo-Martin, Inmaculada: Valuing Reprogenetic Technologies Reprogenetic technologies, which combine the power of reproductive technologies with the tools of genetic science and technology, give prospective parents a remarkable degree of control not simply over whether and when to have a child, but also over a variety of a future child's characteristics. Prospective parents can now select embryos with or without particular genetically related diseases and disabilities and non-disease-related traits such as sex. Furthermore, in the future new advances in gene editing techniques might permit prospective parents to alter the genome of gametes or embryos in order to eliminate the risks of some diseases or manipulate certain molecular aspects that affect physical features, cognitive capacities, or character traits. Well-known authors such as John Harris, Julian Savulescu, and John Robertson have enthusiastically embraced these technologies. They argue that these technologies increase reproductive choice, contribute to a reduction of suffering by eliminating genetic diseases and disabilities, and offer the opportunity to improve the human condition by creating beings who will live much longer and healthier lives, have better intellectual capacities, and enjoy more refined emotional experiences. Indeed, some take reprogenetic technologies to be so valuable to human beings that they believe their use is not only morally permissible but morally obligatory. More often than not, however, proponents of reprogenetic technologies treat these technologies as mere value-neutral tools, limiting their assessments to risk and benefit considerations. In this paper I bring insights from the philosophy of technology regarding the value-laden nature of technologies to bear on bioethical analyses of reprogenetics. I challenge proponents' assumption that an evaluation of risks and benefits is all that is needed to determine the moral permissibility or impermissibility of developing and using reprogenetic technologies. I argue that a robust ethical analysis requires attention to the relationship between contextual values and technological development and implementation, as well as to the ways in which technologies reinforce or transform human values by mediating our perceptions of the world and our reasons for action. I show that ignoring the value-laden nature of reprogenetic technologies results not just in incomplete ethical evaluations but in distorted ones.

Dong, Xiaoju: Avatar: Multiple or Disrupted? ——A research on identity in the age of the Internet Today, technology and its developments have an increasingly crucial impact on people's lives. Along with these developments, technology has also constantly redefined the human concepts of "self" and "identity". In recent years, with the rise and expansion of virtual social networks and online games, people can freely create avatars on the network through which they participate in network activities. Many scholars argue that the avatar created in such conditions, which corresponds to people's needs, represents a particular aspect of real-life personal identity, perhaps a potential one. Furthermore, they hold that the relationship between online identity and offline identity is a parallel one, so that people can have multiple identities and switch flexibly between them. However, such a view seems to indicate an optimistic attitude towards technology, since it regards technology as a helpful means of exploring different aspects of oneself, which is not in fact the case. Moreover, such a view does not give an appropriate account of identity and the changes it has undergone today. I therefore want to elucidate that the avatar does not represent different aspects of multiple identities; on the contrary, identity has been divided into pieces, and the avatar represents such a disruption of identity. Such an identity is not multiple but disrupted, and it has many serious shortcomings. I then attempt to offer some ideas about how to rebuild the wholeness of identity, by virtue of phenomenological reflections on technology. Since we live in an age abounding in the Internet and AI, such a reflection on identity may help us reconsider the relationship between technology and ourselves, and rebuild a critical and rational view of both the benefits and disadvantages of the developments of technology and science.

Doorn, Neelke; Murphy, Colleen; Gardoni, Paolo: Translating social resilience to engineering: How can resilient engineering systems contribute to social justice? In the last decade, resilience has become a leading paradigm for thinking about risks and safety threats, ranging from climate change and natural hazards to threats related to economic crises, migration, and globalization. Resilience roughly refers to the capacity of a system to respond to and recover from threats, and it is often thought to contribute to a better – that is, a safer and more just – society. However, this claim is not uncontroversial. Some also argue that the resilience paradigm primarily benefits the people who are already quite well-off at the expense of disadvantaged groups. According to these critics, talking about resilience does not add anything useful if one interprets it as "bouncing back" to some previous state. The term resilience is used in different domains with a correspondingly wide range of definitions and interpretations (Doorn, forthcoming). Although much of the literature on resilience refers to the work of Holling, who introduced the term in the context of ecosystems (Holling, 1973), the term has its origins in engineering (Hollnagel, forthcoming). The first use in an engineering context goes back to the beginning of the 19th century, when the term was used to describe the property of certain types of wood that were able to accommodate sudden and severe loads without breaking (McAslan, 2010). In the literature, it is especially the engineering interpretation of resilience that is criticized for being too narrowly focused on bouncing back ("conserving what one has") instead of on adaptation and transformation (Folke, 2006). The aim of this paper is to explore how the concept is used in different disciplines and how the engineering interpretation of resilience can be enriched so that it can live up to its promise of contributing to a "better society" rather than re-affirming the status quo of vulnerable groups.

Earle, Joshua: Identity and Normative Danger in Transhumanist Rhetoric What is "enhancement?" What is "improvement?" What does it mean to be enhanced or improved? By what mechanism? By whom? For whom? And by which metric(s) is this determined? In this paper I will argue that the inclusion of diverse voices and conscious attention to our eugenic history is the only way we can avoid repeating the errors of our past. Transhumanists posit a utopian vision of the future brought about by ubiquitous technological marvels that can do anything, including enhance human abilities. Ray Kurzweil speaks of nanomachine foglets that can conjure objects out of thin air. Hugh Herr and Dean Kamen are trying to build prosthetics that are stronger and more durable than our own fleshy limbs. Aubrey de Grey wants to make us immortal by treating aging as a chronic condition, easily managed. Kurzweil (again) and Michio Kaku also speak of cognitive enhancements that will birth generations of geniuses. And who among us would not want any of those things? Actually, quite a few. Technological improvement of the human condition, and particularly of human ability, has been tried before, with disastrous results. The eugenics movement of the 1920s through the 1950s was a more rudimentary attempt at such a program. The result was a scientific reification of ability, gender, race, and class separation that still pervades our science and society. Metrics of normality and pathology reinforce this view, and many people in the disability and neurodiversity communities directly feel the exclusion of these technological and attitudinal paradigms. The sweeping argument from Transhumanists that everyone will be able to choose the set of abilities that they have is patronizing and ignorant of the normative power that technology, and technological ability, has over our culture. We must attend more closely to the desires and diverse ways of being that exist in the world, and fight against the Western, White, Able (straight, cis, etc.) Male vision of how we all ought to be.

Edens, Sam: Constructing Publics on the Plane of Objects: a Case Study of Fairphone The last decades have seen an intensified use of communication technologies and multifunctional digital devices, leading to an information technology paradigm – known as the network society – that increasingly influences the organisation of the social, cultural, political and economic dimensions of society. One of its effects is an increased attention to events taking place inside this network, thereby excluding those taking place outside of it (Castells, 2010; Cramer, 2014; Appadurai, 2010; Lash and Lury, 2007). This in- and excluding effect of the network society also manifests itself in its material dimension: our mobiles, laptops and tablets are assembled in Asia, consist of components coming from anonymous subcontractors, contain toxic chemicals inhaled by workers in artisanal factories, and contain metals coming from rickety mines run by African warlords. Simultaneously, in Western media, the mobile phone is mediated as a versatile and empowering device, embodying the dynamics of the network society. Because of this one-sided mediation, the mobile phone as a material object is cut loose from the production chain that ties it to different geographical sites, where the increasing production of mobiles also influences the local dimensions, unfortunately in a negative way. Fairphone is a Dutch start-up that aspires to change this. The social enterprise aims to make Western consumers aware of the global consequences of mobile phone production by setting an example with its own 'fair' phones (Middel, 2011). I am interested in the manner in which a mobile phone – a device said to change our experience of space and place – is employed by Fairphone to establish a connection between users and producers in different territories. By envisaging design as a world-making practice (Goodman, 1978) and expanding upon that notion, the various ecologies that form part of an object's constitution come into each other's proximity. To understand the manners in which Fairphone articulates the humane and environmental ecologies that form part of the object and simultaneously creates a public for 'fairer consumer electronics' by employing design activities, I use selected works of Bruno Latour (2005), Noortje Marres (2013), and Carl DiSalvo (2009), all of which incorporate John Dewey's object-centered theory about the construction of publics. The paper concludes with a reflection on the mobile phone's ability to bring different parties into each other's proximity, thereby assembling these parties on the plane of the object.

Elder, Alexis: Why not just step away from the screen and talk face to face? The moral import of communication medium choice When reaching out to a friend, we can do so by using Snapchat, Twitter, Instagram, Facebook, text, instant messenger, videochat, telephone, or mail, or by meeting face to face. This introduces a new kind of moral question: which channel should one use for any given communication? For example, evolving social norms hold that it is bad to break up via text. While it is tempting to suppose that some communication channels, especially face-to-face exchanges or those that most closely resemble them, such as telephone and videochat, are morally and/or pragmatically superior to their competitors, I argue for a more inclusive account of interpersonal interactions, one in which the Aristotelian virtues, from good humor to good temper to magnanimity to friendliness, are best supported and expressed in a communicative environment rich in media options. Each kind of communication channel presents its own moral hazards, but also supports the distinctive expression of some traits of character important for rich and rewarding social engagement. Asynchronous communication channels such as text and email enhance both senders' and recipients' autonomy, while synchronous communication channels demand attention from the other but also offer oneself as a willing recipient of their immediate and unfiltered responses. Private and ephemeral communication channels such as Snapchat facilitate disclosure of sensitive information, including a kind of vulnerability that can enrich relationships and enhance trust, while highly public fora such as Twitter support public engagement but have the potential to provoke mobbish mass reactions. Rather than focus on each channel in turn, I argue that the ethical issues involved are best addressed holistically. The value of Snapchat or Twitter is to be understood not in isolation, but in virtue of its role in an ecosystem of communication channels. For users, learning to make appropriate choices includes an awareness of the moral hazards and virtuous means involved in the exercise of each. Furthermore, a model of interpersonal relationships that highlights both interdependence and individual flourishing provides a regulatory ideal to help arbitrate choices between competing goods that are often present in choosing an appropriate communication medium.

Franssen, Maarten; Kroes, Peter: Philosophy of technology after the empirical turn: Perspectives on research in the philosophy of technology in the next decade In the fall of 2015, Maarten Franssen, Pieter Vermaas, Peter Kroes and Anthonie Meijers contacted about twenty scholars from philosophy, engineering, and the philosophy of engineering and technology in particular, with an invitation to contribute to a volume reviewing the development of philosophy of technology during the fifteen years that had passed since the publication of The Empirical Turn in the Philosophy of Technology, edited by Peter Kroes and Anthonie Meijers in 2000. This book was one of the landmark publications in which philosophers of technology were urged to open the black box in which technology had largely remained hidden in earlier philosophical discussions of technology, and to start engaging with the practice of technology and engineering, with the content of the great variety of knowledge claims to be found there, with the methodology of design and engineering science, and with the moral issues that technology raises for engineers and policy makers. Much of the philosophy of technology since then has been undertaken in the spirit of this call. Now that fifteen years have passed, it appeared to the organizers of the invitation to be the right time both to look back and to 'view' forward. The invited scholars were asked to reflect on the work that has been done and to present their ideas on how to develop the field of philosophy of technology further. The organizers eventually received sixteen essays in response to their invitations. The book that contains these sixteen essays, to which was added an introductory essay by its four above-mentioned editors, was published in July 2016. Two of these editors now propose that a special session or symposium be held as part of the SPT 2017 international conference, in which several leading people in the field will comment on the book and on the various perspectives it contains on how the field is to develop and what the promises for and challenges to the field are. Persons whose comments and views the symposium organizers would particularly welcome are: Shannon Vallor (current president of the Board of SPT); Mark Coeckelbergh (current vice president of the Board of SPT); Peter-Paul Verbeek (previous president of the Board of SPT); Albrecht Fritzsche (conference chair of the 2016 fPET conference in Nürnberg); Astrid Schwarz (member of the organizing committee of the 2017 SPT conference); and Martin Gessmann (as a voice from German philosophy). We must stress that none of these persons has yet been approached about their willingness to participate. They are mentioned by us partly for their current positions in organizations that are central to the field, and partly because several of them are German philosophers interested in technology, which makes them somewhat salient as potential contributors to a major conference in the philosophy of technology held in Germany, with its strong and long-standing tradition in the field, and which also increases the likelihood of their being in the neighbourhood. Time and attention limits for the symposium will constrain the number of eventual commentators to about four. If, and in what form, there should be occasion for responses is something that must be determined once the list of commentators becomes clearer.
If the programme committee of the SPT 2017 conference shares our confidence that we will be able to develop this into a lively symposium, we will put all our efforts into making this happen.

Funk, Michael: The Grammar of Information – Methodological Constructivism/Culturalism and the Philosophy of Technology In this SPT 2017 lecture the following questions will be answered: What is information? Is information a thing? Is there a grammar of the thing information? Or might there be a grammar of information that does not relate to the category of things? In answering these questions, it is argued that information is primarily a form of culturally embedded technical practice rather than an artifact of communications engineering. To this end, the approach of
methodological constructivism/culturalism is elaborated, focusing on the fundamental works of Dingler, Lorenzen, Kamlah, Mittelstraß and Gethmann. Following the claim of methodological order, it can be shown that socially shared and successful communicative praxis serves as a lifeworld axiom for the transmission of information. Transsubjectivity captures the fact that information is constituted within a sphere of shared communicative actions, which also enables the ethical assessment of information technologies. There is no pure information behaving like an isolated artifact. In order to justify this claim, a genuine focus of the presentation is on the works of Janich, Gutmann, Bölker and Hesse, who critically investigate the anthropology and ethics of the term information and current IT applications. Moreover, it is argued that an underlying grammar of successful technical practice can be revealed, which can contribute conceptual insights to the philosophy of technology. It turns out that information is not a thing – in the sense of an artifact – and therefore its grammar is not a set of explicit rules relating to things. Following a Wittgensteinian understanding of grammar – a practical form of meaningful language, not syntax – and the approaches of Kambartel and Rentsch, it can be justified that the grammar of information depends on contingent and finite communicative situations. Even if information technologies like smartphones suggest access to worldwide standardized and situation-invariant information, the grammar of information depends on very particular communicative circumstances. It is critically argued that the illusion of worldwide accessible standardized information can lead to misleading concepts – e.g. in the case of social robots and the intended design of social relations with information technologies.

Gaillard, Maxence: The individuation of a scientific instrument Debates about scientific instruments are common among experimental scientists, and questioning the validity of a given instrument or its relevance for a specific field of scientific research is a recurrent hot topic. However, discussions about instruments often rely on assumptions about what defines the instrument—its identity. When does a new instrument emerge? When are two laboratories using the same instrument, and when are they using different ones? In this presentation, I take a deeper look at what makes a scientific instrument, what I call here its process of individuation. To do this, I consider the case of magnetic resonance imaging (MRI) and several variants of brain scanning. Neuroimaging researchers generally assume that there is a variety of instruments behind MRI. As users, they can identify these entities by putting a label on them, such as “functional MRI”. I review some of the putative criteria for the individuation of the instrument: the machines themselves, underlying physical and biological principles, procedures for recording data… I discuss the relative importance of these criteria and their robustness from the viewpoint of the philosophy of artefacts. Clarifying some ontological assumptions about the technology used in science should in the end be helpful for epistemological arguments. As a consequence, I will try in the last part to revisit a classical epistemological argument criticizing the lack of ecological validity of neuroimaging experiments. Observers of laboratory experiments in neuroscience have focused on the interaction between the body and the scanner. The instrument constrains the body so strongly that the scanning session can barely be compared to a real-world situation, and one might conclude that neuroimaging is not studying “real brains” but merely producing meaningless data related to the activity of “brains-in-scanners.” This argument relies on default definitions of a brain and a scanner, and by doing so it does not allow epistemology to confront the real challenges encountered in the practice of neuroimaging: improvement of existing instruments, development of new tools, and innovation against standardized procedures.

Geerts, Robert-Jan: The polder as precedent for geoengineering Geoengineering (actively influencing the global climate by means of technology aimed at achieving this goal) is typically seen as the boldest and most problematic approach to mitigating climate change, the most pressing and visible effect of the anthropocene. This range of technologies is also presented as unprecedented, and therefore risky and uncertain with regard to its effects. This paper explores the idea that a useful precedent for geoengineering may be found in the Dutch practice of poldering. Since the Beemster in 1608, the Dutch have drained a significant expanse of land, thus increasing space for farmland and cities alike. Through this practice, the Dutch have: 1) technologically taken over a 'service' elsewhere done by nature; 2) made their safety dependent on continuing efforts to keep the water out; 3) created a deteriorating situation because of the settling of drained areas; 4) set up special governing agencies (water boards) to properly handle water management; 5) created land unfit for some of the intended purposes; 6) pushed, especially early on, a technological project with uncertain outcomes; and 7) used, or planned to use, the ability to submerge areas for military purposes. The parallels are striking. Geoengineering is critiqued for: 1) meddling with the natural 'earth system'; 2) the difficulty of ending it; 3) creating, combined with continuing carbon emissions, an ever more precarious
situation; 4) being hard or impossible to govern with current global institutions; 5) possibly leading to novel climatic patterns, thus creating a sub-optimal situation; 6) being based on imperfect knowledge of the earth system; and 7) possible military usage. These similarities suggest that discussions on geoengineering could be informed by insights from discussions on polders, both in the historical setting when these were novel, and in retrospect. However, the parallel is not perfect. The global climate is much more complex, and technologies to manage it are less well understood than pumps and mills in the 17th century. Further, greater inequality and cultural differences make the challenge of (just) global governance of geoengineering more difficult than local governance of polders, and this time, there is no higher ground to observe from or flee towards in case of catastrophe.

Gransche, Bruno: Things get to know who we are and tie us down to who we were. When people handle things like cooking knives or footballs, they develop competencies and can become skilful cooks or athletes. Surely, continuously dealing with things changes what a person is able to do. The potential of a knife or football, by contrast, remains the same (abrasion aside). So, people dealing with things get better at doing so – things being handled do not. Today, that has changed, and this change is far-reaching. When things get to know persons, they, in turn, develop 'competencies' in dealing with people. Learning personalised systems – such as the everyday assistants Jibo, Echo, Pepper & Co. – aggregate their 'experience' from previous interactions in detailed user profiles. “Pepper wants to learn more about your tastes, your habits and quite simply who you are. […] Your robot evolves with you. Pepper gradually memorises your personality traits, your preferences, and adapts himself to your tastes and habits” (Aldebaran Robotics). Today, some things – that is, informed systems – learn how to 'handle' people the way people used to learn how to handle things. This is very far-reaching because it affects no less than the openness of our futures, our possibilities to develop, and our freedom to change. Memorizing “quite simply who you are” actually means not forgetting “who you were”. Vast spheres of our everyday life are mediated by intelligent systems. They now (partly) preconfigure our options to decide and act; they nudge us towards or persuade us of certain decisions and actions (e.g. to buy or to vote) according to what they learned in the past (plus their operators' interests etc.). In this paper, I will propose a perspective on the grammar of things as a structure that things impose on people (rather than vice versa), and I will consider how the openness of our competencies and futures can be preserved while we are being handled, if not mastered, by things.

Greif, Hajo: The Reality of Augmented Reality Augmented Reality (AR) technologies provide information about a given subject matter in either one of two particular ways, as shall be argued in this paper: Being designed to integrate simulated elements into the perception of an environment, the information they provide is either convergent with, or isomorphic to, natural information that a perceiving subject collects in his or her environment. In the case of convergence, natural perception and AR are in the business of tracking a shared set of informational relations in the environment. Tracking is accomplished by the system in such a way that transformations of distal conditions in the environment are regularly mapped by transformations of conditions at the proximal, perceivable end. This happens either with respect to the same proximate variables that the perceiving subject would record or by proxy of regular correlates of these variables. In the case of isomorphism, the tracking of concrete informational relations in a given environment is not shared between perceiving subject and AR system. The simulated elements are integrated with, but do not refer to, what currently happens in the user’s environment. Instead, there are general-level analogies in informational relations between types of world affairs in the real and a simulated environment, where the latter forms an overlay to the former, and where the user primarily interacts with the latter. As will be demonstrated by reference to some examples of AR applications – advanced driver assistance systems, gesture-based interfaces, and mixed reality games – relations of convergence and isomorphism may be found in conjunction. The convergence/isomorphism distinction introduced here cuts across the established distinction in human-computer interaction studies between “reality” and “virtuality” that is used to describe the properties of artefacts that (partly) simulate properties of an environment (cf. Milgram’s reality-virtuality continuum). A mapping of the convergence/isomorphism and the reality/virtuality coordinates provides a picture of AR technologies that highlights their unique informational and cognitive properties, which set them apart from common-sense views of Virtual Reality or simulator environments. These properties, qua being cognition-related and artefact-dependent at the same instance, are relevant both to the philosophy of mind and the philosophy of technology.

Hansson, Sven Ove: How to perform an ethical risk analysis Standard risk analysis leaves out important ethical aspects such as who contributes to the risk and with what intentions, whether those exposed to the risk have any influence on decisions about it, etc. This presentation introduces a method for systematic ethical risk analysis. It avoids the obscuring concept of a stakeholder, and instead puts focus on the three major roles of being risk-exposed, a decision maker, or a beneficiary (someone who benefits from the risk being taken). The importance of identifying all parties with any of these roles is emphasized. For instance, many risks depend on background decisions that create situations in which risk avoidance is associated with large disadvantages for the individual. An ethical analysis must include these background decisions, not only the decisions by individuals who take a risk due to lack of better alternatives. Another important component of the analysis is the identification of individuals and organizations with a problematic position in relation to the three risk roles. There are two particularly problematic such positions. One is that of someone who decides about a risk, is not himself exposed to it, but benefits from the risk being taken. The other is that of a risk-exposed person who neither benefits from the risk being taken nor has any role in decisions about it. It is proposed that the identification of such ethical failures can be an equally important part of risk analysis as determining the probabilities and severities of adverse events.
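
As a purely illustrative aside, not drawn from Hansson's abstract itself: the method described above can be read as checking each party against three binary risk roles and flagging the two problematic combinations it names. The sketch below (Python, with hypothetical party names and a hypothetical problematic_position helper) shows one way such a check might look.

    # Illustrative sketch only; the three role labels follow the abstract, everything else is hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Party:
        name: str
        risk_exposed: bool      # is exposed to the risk
        decision_maker: bool    # takes part in decisions about the risk
        beneficiary: bool       # benefits from the risk being taken

    def problematic_position(p: Party) -> Optional[str]:
        # First problematic position: decides about the risk and benefits, but is not exposed.
        if p.decision_maker and p.beneficiary and not p.risk_exposed:
            return "decides and benefits without being exposed"
        # Second problematic position: exposed, but neither benefits nor has a say.
        if p.risk_exposed and not p.beneficiary and not p.decision_maker:
            return "exposed without benefit or influence on the decision"
        return None

    parties = [
        Party("operator", risk_exposed=False, decision_maker=True, beneficiary=True),
        Party("nearby resident", risk_exposed=True, decision_maker=False, beneficiary=False),
        Party("employee", risk_exposed=True, decision_maker=False, beneficiary=True),
    ]
    for p in parties:
        issue = problematic_position(p)
        if issue:
            print(f"{p.name}: ethically problematic ({issue})")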

Henze, Andreas: "This machine is part of me like your voice." On interacting with technologically generated voices. Having a voice is considered a major competence for members of society, for their participation in work, politics and social relationships, and basically for being recognized as a full "respectable" human being who is able to articulate him/herself and to be understood. For people who cannot speak, by contrast, talking and being understood is a permanent challenge and can therefore be assisted by speech-generating devices, by "talking machines". In my talk I will discuss the issues of voice, technology and disability. My contribution is based on research with people with spastic paralysis who talk through different speech-generating devices. I ask how a voice is enacted by interacting with technological devices. How is a voice performed and mediated by the grammar of technological things and social practices? I will deepen these topics as follows: First, to highlight the materiality of the talking machines, I exemplify particularities of the technological shaping of verbal articulations: the specific temporal aspects of talking; the embodied use of the device; the effects of synthetically generated voices on intonation and the display of emotions; the limited stock of words, which results in short, one-word and highly indexical utterances; and finally, options to expand the lexical system of the device. In the second part, I will discuss the meaning of speech-generating devices in their everyday use. Here it becomes obvious how the grammar and the performance of technological equipment affect social identities. a) This will be exemplified by drawing on the concept of (a-)symmetrical relations of things and bodies and how those enable actors in certain ways. This shows b) that things and people are not intrinsically or by nature connected. Instead, the grammar of things and the mediated production of a voice have to be accomplished in and through the dis/abling features of social practices. Based on phenomenology, ethnomethodology and actor-network theory, I will discuss how the grammar of things is embedded in the grammar of social practices. This provides insight into how the merging of technology and embodied selves not only challenges the concept of the autonomous self. Beyond that, having a voice through things is a complex intersubjective and moral task of how to approach and understand the other and his/her technological equipment.

Herrera, Rayco: Günther Anders and the 'Promethean gap': imagination as a moral task The aim of this paper is to propose the use of imagination as a moral task. The main theses of Anders' work The Obsolescence of Man are: that we are not as perfect as the objects that we make; that we produce more than we can imagine; and that we think that what we can do, we ought to do. These theses reflect a state of the technical world that is constantly outpacing us. Günther Anders' reflections on technology emphasize our inability to imagine the consequences of the products that we make. At the end of the second volume of his main work, Anders says: "Today, interpreting is not the task of the 'humanists', rather it has become the moral task of us all". In this paper, we want to discuss the problem of uncertainty about the future of technology and propose imagination as a moral task. It cannot be said whether the consequences of the technologies that are already in use, and of others that are beginning to emerge, are positive or negative; we face a situation of indeterminacy. If consequences are unthinkable, we will therefore not be able to evaluate them or act responsibly. Technological developments do not provide us with a foreseeable future, but science fiction does better. The Black Mirror television series offers some attempts to bridge what Anders called the "Promethean gap". A summary of Anders' thoughts on technology will guide us through this moral attempt to catch up with the world that we
produce. How can we legitimize any technological development and judge its desirability when, in many cases, we are not able to forecast the consequences? Which technologies will we put first and what uses will we give them? What kinds of relations between technology and us, and among ourselves, will we develop? Whether the future that awaits us will arrive in a dystopian form, in the way that Black Mirror presents it to us, remains to be seen. The purpose is to reclaim the imagination of science fiction in order to foresee and anticipate a world in which we can and want to live.

Hung, Ching: Technologizing Democracy toward Democratizing Technology Since the late 1970s, the idea of democratizing technology has been argued for and promoted by theorists and activists in the fields of STS (science and technology studies) and philosophy of technology. From (traditional) Technology Assessment to Constructive Technology Assessment and then to Technology Accompaniment, it is believed that democratic procedures can bring technologies under societal control and therefore protect the public from possible unwanted technological consequences. However, questions arise if we take a closer look at this ideal. In the imagery of the democratization of technology, technology is something placed on a table, surrounded by human discussants, and eventually shaped and designed by the consensuses of democratic communities. But the fact is that technology has never been so inactive and silent. At a shallow level, technology can boost democracy by providing tools and environments that facilitate discussions; it is not just a negative factor external to democracy. At a deep level, technology always mediates human experiences and praxes and thus can influence democratic procedures in which humans are discussants, voters, and decision makers. Acknowledging the active role of technology requires a reconsideration of the "consensus model" underlying the movement to democratize technology. In order to take technology into account as an "actant", I would like to propose the "agonistic democracy" of political theorist Chantal Mouffe as a substitute for the mainstream understanding of democracy. It rejects the rational image of human beings as capable of reaching consensuses and admits that people's reasoning and preferences are influenced, constrained, or conditioned by various factors. Therefore, agonistic democracy leaves room for the mediating role of technology, is more suitable for a world in which technology is constituting our living environment, and is compatible with an "internalistic" perspective toward technology. In the context of this new interpretation of democracy, democratizing technology can probably be reached via the detour of technologizing democracy.

Jayanti, Ashwin: A Revised Phenomenological Hermeneutics for Understanding the Grammar of Things Philosophers of technology have interpreted what things do; the point now is to understand what can be done with things. This programmatic statement shall serve as the larger objective of this paper, wherein I shall outline an approach that inaugurates a shift in perspective in philosophy of technology from the much-celebrated "empirical turn" toward what could be called an "enactive turn." For this purpose, I shall adopt the phenomenological-hermeneutical approach, since it affords a rigorous first-person perspective into our different modes of engagement with our everyday technological artefacts. I shall bring the phenomenological approach into contact with the analytical approach of the Dual Nature of Artefacts Program and conceptualize "mode of engagement" in 'structure' and 'function' terms. This concept shall form the centerpiece of this paper, acting as a bridge between both (a) the phenomenology of use and the normativity of technological artefacts, and (b) contemporary phenomenological and analytical approaches to philosophy of technology. I shall begin the paper with a normative-phenomenological reading of Heidegger's tool-analysis in Being and Time in order to show how at the heart of his phenomenology lies a consumerist Dasein, i.e., one whose technological praxis is limited to what I shall refer to as the submissivist mode of engagement. I shall contrast this with that other mode of engagement which I shall conceptualize as the subversivist mode of engagement, one which is rarely, if ever, accounted for in most philosophy of technology. This latter mode of engagement shall form the locus of my analysis, which shall proceed to show the need for a revised phenomenological hermeneutics: a bottom-up hermeneutics of 'tinkering' and 'affordance,' as opposed to the top-down Heideggerian hermeneutics of 'assignment' and 'in-order-to.' Finally, the ontological and epistemological importance of the subversivist mode of engagement, and, consequently, the enactive turn in philosophy of technology, for our contemporary lifeworld shall be illustrated.

Ji, Haiqing: Human Enhancement Ethics and the Naturalistic Fallacy According to transhumanism, the ethical principle governing whether or not to get enhanced is a kind of individual utilitarianism based on autonomous judgement. Whether to achieve a taller stature or a better memory performance by means of some sort of enhancer depends on the real benefits that follow. The naturalistic fallacy, an ethical thesis put forward by G. E. Moore in connection with Hume's ontological problem, the is-ought gap, holds that ethical goodness cannot be reduced to natural properties such as pleasure or desirability, and thus seems to tell against these transhumanist claims on human
enhancement. However, the naturalistic fallacy has actually failed to gain any hold among ordinary people, who know little about ethics or philosophy and take real profits and benefits as their guidance in action. That means the is-ought gap can always be bridged in real life. On the theoretical level, meanwhile, the is-ought gap not only gives boundary-breaking scientific research on human enhancement grounds to proceed as an independent enterprise free from ethical criticism, but also leaves room for the formation of an individual utilitarianism without any specific a priori presupposition about a certain kind of good life. J. Habermas, one of the famous bio-conservatives who oppose transhumanism on human enhancement, argues against genetic human enhancement with his normative concept of human nature. However, Habermas' acceptance of the is-ought gap, intended to deny that his objection is a kind of genetic determinism, weakens his defense of human nature. F. Fukuyama, another bio-conservative, with his naturalistic view of human nature, also appears as a critic of human enhancement. Although, given his naturalistic view, there is no gap between is and ought for him to overcome, Fukuyama still faces the difficulty of supporting his preference for one naturalistic state, the un-enhanced, over the other, the enhanced. For any real and constructively critical engagement with transhumanism, it is necessary first to build an anti-dualist position on the ethical meaning of the human body, which is also a part of the study of materialistic morality.

Jia, Lumeng: From assessment to design: What is really needed in Technology Accompaniment By revealing the blurring boundaries between human beings and technological artifacts, and the fact that technology has moral relevance in mediating human experience and praxis, the theory of technological mediation suggests a "non-humanist" turn in ethics and "technology accompaniment" as a new form of ethics of technology. Rather than assessing from outside technological developments whether technologies are morally acceptable or not, the crucial question in the ethics of technology accompaniment, according to Peter-Paul Verbeek (Verbeek, 2013), is how we could give shape to the interrelatedness between humans and technology, which has in fact always been a central characteristic of human existence. Thus, by combining an existential perspective with the Foucauldian idea of "subject constitution", it is proposed as a moral obligation for people to take an active part in these co-shaping actions, thereby taking care of human existence and constructing one's subjectivity in the technological world. In this contribution, we aim to take a further step toward showing how the ethics of technology accompaniment can be realized. Firstly, by addressing subject constitution as the minimal morality in the ethics of technology accompaniment, we introduce two criteria of moral action for identifying one's subjectivity: intention and behavior. Then, we articulate the obstacles to subject constitution and the two main approaches offered to people for carrying out their moral obligation, mediation assessment and materializing morality, through which people can be well informed and take productive action to co-shape the impacts technologies have on them. Furthermore, we analyze how the two current approaches work to overcome these obstacles, and whether and at what level they help to construct one's subjectivity according to the two criteria mentioned before. Based on this discussion, we will argue that it is design rather than assessment that can better address the challenge of realizing subject constitution, and thus argue for a "minimal morality" of being willing to be technologically mediated for the sake of our moral subjectivity.

Jiang, Xiaohui; Wang, Jian: On possible ways of working through the "uncanny valley" effect in humanoid design The uncanny valley effect is a specific result of humanoid robot design. It is said that the relation between the human-likeness of a humanoid and people's affinity for it is not always positively correlated. When human-likeness reaches a certain high level, a subtle dislike makes the subject feel uncanny. Every designer is afraid of falling into it. This paper will discuss three possible ways out of it. The first proposes that the design of humanoid robots should stop before the first peak of the uncanny valley curve (proposed by Mori, 1970) and should not try to duplicate humans. We should realize anthropomorphism only through a "whole effect" (as in Gestalt theory), not in details. That is why toy robots look attractive. The second suggests that we should design humanoid androids as a new category of being, one superior to humans. This category is supremely good, absolutely safe and trustworthy. It may be called the human ideal, the superman, or God. Mori thought this new kind of life should stand on the right side of his curve at the highest point. People will not feel uncomfortable with this kind of "people". However, the problem is how to build the supreme good into humanoids: by design or by deep learning? Can we humans create supremely good beings by ethical design? If we let humanoid robots learn morality by themselves, who can guarantee a good result? If the first way seems prudent and the second idealistic, the third must seem pragmatic. That is to develop humanoids according to different situations. For example, researchers have shown that the aged are
not sensitive to the "uncanny valley" effect, but they lack social bonds. In such cases, humanoids will help the aged to build up social bonds again and find a sense of social presence. The same goes for disabled people.

Kanzaki, Nobutsugu: Possibility of co-design in the development process of AI and robot technology In the field of environmental conservation research, a major shift in research methods has occurred in recent years. As not only the natural environment but also the lives of local residents are important aspects of sustainability, it is no longer possible for researchers to set conservation goals by themselves. Moreover, doing so may be unethical from the point of view of research ethics. Researchers must recognize local residents as important stakeholders and encourage their participation in deciding what kind of research should be carried out. For this reason, the idea that local stakeholders and researchers should co-design research objectives and methods is now widely shared. Such a style of research can be said to be trans-disciplinary in the sense that it crosses the hedge between the disciplines of researchers and the lives of local stakeholders. The main question of my paper is whether similar attempts are possible in research on artificial intelligence technology and robot technology. There are many surveys on people's attitudes and thoughts towards these technologies, but it is doubtful whether the results of such surveys are reflected in the actual process of technology development. In addition, a survey of engineers suggests that their attitude towards child care robots may depend on whether or not they have experienced life events such as child rearing (Ema et al., forthcoming). Since individual engineers or groups of engineers do not always experience all of the life events and situations that the general public can experience, engineers may not be able to share the values, expectations and goals of the general public. As artificial intelligence technology and robot technology are anticipated to enter the lives of ordinary people, this is not desirable. If so, the idea of co-design by engineers and ordinary people will become more important in the near future. In this talk, based on my own experience of an environmental preservation research project, I will discuss how a co-design process in the development of artificial intelligence technology and robot technology might be possible and what kinds of ethical issues can arise, and I will suggest possible solutions to them.

Kapsali, Maria: Novel Affordances: Technological Artefacts in Modern Postural Yoga Practice This paper is concerned with the use of two kinds of technological development in the practice of Modern Postural Yoga (De Michelis 2004; Singleton 2010): the transformation of quotidian household items into yoga 'props' during the first half of the twentieth century and the more recent production of digital artefacts, such as mobile phone applications and the smart mat (currently in productisation). By concentrating on yoga as a case in point, the paper aims to contribute to recent debates in the field of post-phenomenology with regard to: a. the materiality of the artefact, particularly its multistability (Ihde 1990); b. the role of skill and technique in technological mediation; and c. the processes of subjectivation (Foucault 1988) that stem from technological use. The paper will first consider the appropriation of common household items as tools for the practice of yoga. Drawing on theories of ecological cognition (Gibson 1979; Heft 1986), it will be argued that yoga props developed out of the practitioners' embodied ability to recognise novel affordances within their lifeworld, which fed into the sophistication and refinement of the somatic practice. The paper will then examine the development of the smart mat as a digital tool of yoga practice. It will thus compare and contrast the sets of relationships among user-artefact-world that emerge through the use of conventional yoga props and the smart mat, respectively. It will conclude by reflecting on the following questions: What cultures of use does each tool build upon and what kinds of subjectivities does each of these tools produce? What are the social and political implications of the modes of embodiment cultivated by different tools? In what ways can the use of tools in yoga allow us to extend existing analytical frameworks?

Karakas, Alexandra: Object-Oriented Ontology vs. Designed Artefacts The goal of the paper is to apply Object-Oriented Ontology, a new, non-anthropocentric philosophical approach, specifically to artefacts, in order to further clarify Peter Kroes's and Peter Verbeek's model of technical artefacts. The argument expands the discourse about OOO and design theory; within this, my aim is to present an alternative explanation of the nature of designed objects. How do designed objects and/or technical artefacts communicate and perform their intended purpose? The key feature of an artefact is this very property, the designed function: it has several meanings and can manifest in numerous ways, helping the user distinguish a designed object from other, non-artificial ones. Because of this particular quality, designed objects work differently, and they are treated by society and theorists in a different manner than mere objects. Kroes and Verbeek propose that
technical artefacts are two-fold: they have a physical conceptualisation and functional features at the same time. However, this is not satisfactory for the understanding of technical artefacts, since, according to this model, human action is constitutive when realising function. My argument is the following: artefacts do not rely fully on human presence; instead, they exist on their own, even without human intervention. To support this claim, the paper brings design theory together with the achievements of Object-Oriented Ontology. OOO is a realist position that implies, on the one hand, that objects have more qualities than what they reveal to us, and on the other hand, that they are independent of the human mind. OOO rejects the prevailing human authority over objects. Accordingly, real objects are withdrawn from being utterly known, and one can reach them only through their sensual qualities, which exist within human experience.

Kerr, Eric: Breaking down malfunction: What happens when socio-technical systems don't work? Malfunctions and failures present a vital concern for engineers and the general public. Recently, articulating an account of malfunction in technical artefacts has drawn increasing attention from philosophers of technology (e.g. Baker 2009) and engineers (cf. Del Frate 2014), but such accounts have typically addressed malfunction at the level of individual artefacts. What does it mean to talk about malfunction in sociotechnical systems? Should the idea be preserved, or should we abandon the idea of malfunction as it relates to such systems in favour of saying that the system is not working relative to particular user groups? Kroes et al. (2006) propose that anything that is necessary for a sociotechnical system to perform its intended function and that may be the object of design should be included in any attempt to differentiate it from its environment or context. Consequently, there is an unaddressed problem regarding what should be included in the malfunctioning of a sociotechnical system. In such systems, there are multiple user groups with variegated interests. Often these are divided along lines of privilege, access to resources, and so on, resulting in a "folk" account of malfunction. In this "folk" account, whether or not a device is functioning properly depends on the user's interests. From this perspective, even if the device is being used outside of its design limitations, it may be said to be malfunctioning. Conflicts can exist between what is considered a malfunction from a technical viewpoint and from the viewpoints of multiple users. For example, consider a device that is used beyond its operational limits. Following existing accounts of malfunction, the device may not be said to be malfunctioning; rather, it has been used in a manner for which it was not intended. In addition, systems are composed of multiple interacting parts. It may be the case that some parts are malfunctioning while, at the level of the system, it is working as intended. These two aspects, "folk" accounts and the quasi-completeness of system functioning, make sociotechnical malfunction an important, though understudied, concept.

Kesdi, Hatice Server; Güneş, Serkan: The Impact of Ontological Shift From User to Participator in Design Process on Construction of Socio-Technical Systems In the 1970s, Scandinavian projects such as DEMOS, NMJF and UTOPIA were the first places where technology found a correspondence in the social context in terms of participation (Ehn, 1992). In the Cold War era, while technology was developing rapidly in terms of computer-based systems and began to be integrated into workplaces, the importance of invisible and weaker groups' having a voice on social issues began to increase (Kensing and Greenbaum, 2012). The conflict between the imperatives that researchers applied to workplaces through technology and workers' concerns about alienation, deskilling and job loss due to the introduction of new technology prompted the trade unions to become part of this integration process in the name of workers. The first concrete demand for the rights of silent groups in the development of technology resulted in workers' participation in the technology design process in workplaces. The unexpected positive impacts of participation, such as the success of systems in terms of workers' appropriation of the designed environments, the quality of products, as well as workers' upskilling in technology and empowerment in management, emphasize the social in technological development. In contrast to the rationalist approach, the distinction between sociology and the science of technology became blurred. The changing role of workers from passive users to active agents in and through technology defines a new approach called Participatory Design. However, this shift marks an ontological differentiation in technology studies that makes us reconsider the knowledge of these developments. In this context, the study aims to reveal the impacts of this ontological shift in design on socio-technical systems in terms of participation. As the theoretical framework, the Participatory Design approach suggested by Scandinavian researchers (Ehn, 1992; Gronbaek et al., 1993; Bodker et al., 1993, etc.) is discussed in light of its principles, practices and consequences, to emphasize how changes in social entities and practices affect technological developments.

Kranc, Stanley: What Could Instrumental Perception Be? To overcome limited perceptual abilities, humans create instruments via their technologies to interact with the world. These devices detect the presence of entities that would escape attention without the benefit of an instrumental aid. Such entities may be simply distant or occluded—like the remote assessment of temperature—but more complex situations can involve entities normally undetectable by our genetically given senses, as, for instance, the detection of the local concentration of a toxic gas or the presence of radiation. This type of interaction with the world characterizes an instrumental (or artificial) mode of perceptual experience, with the instrumental display providing a material medium for a humanly accessible presence. Thus, Don Ihde [1] and others posit an "instrumental realism" (differing in several details from scientific or entity realism), where contact with an entity of interest occurs through a hermeneutic "reading" of the instrumental display. This realist commitment introduces several important problems, however: 1) How are imperceptible entities brought to accessible perceptual presence? 2) What is the nature of the human–instrument relationship in this event? 3) What aspects of instrumental perception warrant a realist stance? This paper explores several avenues for responding to these questions. Merely labeling the instrumental mode as a kind of "indirect" perception overlooks the constituting character of the perceptual act itself. The roles of indexicality, contextual sensitivity and the specificity of the instrument-entity relation are examined with regard to producing a veridical perceptual experience. The event of instrumental perception reflexively introduces into the world an external re-presentation, an entirely new entity. In the creation and interpretation of this re-presentation, the human-instrument interaction resembles a partnership, as reflected in the common phrase, "trust your instruments." Importantly, instrumental perceptual contact creates a relationship of trusting ourselves to and with technology (as Kiran and Verbeek have suggested [2]). The discussion includes several pertinent examples of this relationship, especially as related to common instruments like pressure gauges and thermometers, or the familiar automotive gas tank gauge, often employed by others in past analyses of epistemic reliabilism.

Liu, Zheng: Chuang-tze's Philosophy of Technology and the Technological Mediation Theory The famous classical Chinese philosopher Chuang-tzu (c. 369 BC—c. 286 BC) told a story in which Zigong (a disciple of Confucius) saw an old gardener preparing his fields for planting, and warned us that the machines we use would form our "machine worries" and "machine heart" and would gradually transform our pure and simple human nature. We can distill three metaphors from this famous story, which I think constitute Chuang-tze's philosophy of technology, and analyse the story within the framework of technological mediation theory. The first metaphor is that the machine expands human abilities, and the machine's certain "intentionality" makes us fall into servile conditions. The "machine worries" and "machine heart" can be boiled down to the technologies which extend our bodily abilities and perceptions. The human-technology relation here is the "embodiment relation" (Don Ihde). The second metaphor is that machines are a power that invades bodies and mixes with them, so that "a new entity comes about" (Donna Haraway), viz. the "machine heart" as cyborg. The "machine heart" takes on a second meaning here because technologies are dissolved in bodies. The human-technology relation here is the "cyborg relation" (Verbeek). The last metaphor concerns the ethical consequences of the "machine heart", from which two approaches to ethics and technology can be deduced. One approach to the ethics of technology asks us to reflect on the negative effects of modern science and technology, so that we maintain a certain moral sensibility when we use technical artifacts. Another approach is design ethics based on postphenomenology. For Don Ihde, Philip Brey, Peter-Paul Verbeek, etc., the ethics of technology should "turn to engineering". In this sense, designing technology means designing morality. But I want to propose a third approach to the ethics of technology, that is, to "design" our bodies. In a nutshell, we can design our bodies in both Foucauldian and aesthetic ways. Foucault described the bodily "technologies of the self" as an art of life that emphasizes "care of your life", so that we can achieve better moral lives. The aesthetic way follows Richard Shusterman's somaesthetics, which pays attention to exercising our bodies in order to achieve aesthetic and moral goals. In Chinese culture, Zen meditation and za-zen are the basic bodily exercises for Buddhists and Taoists. Chuang-tze, as the founder of Taoism, used such ways of bodily mediation to achieve his free and unfettered life.

Loh, Janina: Responsibility and Robot Ethics – A Critical Overview and the Concept of Responsibility Networks Rapid progress in robotics and AI potentially poses huge challenges due to its transformative power with regard to competences that were traditionally reserved for human agents. It is suggested that
formerly exclusive concepts – such as autonomy, agency and responsibility – might one day pertain to artificial systems in a similar fashion. The new technologies thus raise questions with regard to the meaning of these concepts, especially the concept of responsibility. In my talk I will proceed in three steps: 1 – WHAT IS RESPONSIBILITY? In the first part of my talk, I will outline the traditional concept of responsibility with its five relational elements and summarize the conditions that are to be met to call someone responsible. Responsibility is of high moral and legal status and plays a prominent role in every sphere of human acting. Every dimension knows its specific type of responsibility, due to the norms for identifying the responsible parties involved – e.g. moral, legal, political, economic, social responsibility, and various other forms of responsibility. Responsibility is an important tool for systematizing, organizing, and thereby clarifying opaque and very complex situations that confuse the agents in question. 2 – WHAT IS ROBOT ETHICS? In the second part of my talk, I will sketch the discipline of robot ethics by differentiating between (2.1) robots as moral agents and (2.2) robots as moral patients. I will use this differentiation to define two prominent working fields within this discipline. 3 – RESPONSIBILITY IN MAN-MACHINE INTERACTION: In the third part of my talk I will analyze the role and function of the phenomenon of responsibility within these two fields of robot ethics: (3.1) responsibility regarding robots as moral agents and (3.2) responsibility regarding robots as moral patients. Considering 3.1, I will use Wallach's and Allen's approach of functional equivalence in order to reveal to what extent artificial systems are to be understood as responsible agents. Considering 3.2, I will sketch my concept of responsibility networks for including genuine (human) responsible agents as well as artificial systems in responsibility ascriptions.

Loh, Wulf: Publicity and Privacy in the Digital Age Hannah Arendt famously wrote that acting in the public sphere requires privacy. Currently, algorithms endanger both. Privacy is compromised not only by ourselves and our often more than careless online behavior. In this respect one could speak of the "naïve digital natives" as an unwitting and unwilling post-privacy phenomenon. More importantly, however, privacy is endangered by prognostic algorithms that may utilize the ubiquitous data that every person generates online to predict consumer behavior, but also illnesses, political affiliations and the likelihood of non-conformist behavior. The latter becomes especially important in repressive and authoritarian societies. The growing disregard for privacy from all sides (consumer, corporate as well as state), however, poses problems for liberal democracies as well, as subtle manipulations may undermine the cornerstone of all open societies: the possibility of informed consent in all matters of one's personal life (Rawls, Habermas). The more individual autonomy as a value and virtue is slighted, the more "post" an open society is likely to become: post-privacy, post-factual, post-liberal, post-democratic. It becomes clear that the danger to privacy through algorithms has repercussions for the public sphere. However, the latter faces its own dangers as well. Not only do the phenomena of "filter bubbles" and "echo chambers" in social media make it harder for citizens to act on informed consent, but so do special media bots that create fake content and spread it online (of the 12m Twitter followers Trump has, allegedly 4m are news bots). Here two values of an open society collide: free speech and informed consent. The question whether there should be limits to free speech assumes a new dimension in the face of online fake news, echo chambers, and blatant lies by politicians. It is my conviction that much can be gained from the traditional critique of ideology that Critical Theory developed – and constantly redesigns – in order to understand and tackle these dangers to privacy as well as publicity. Most importantly, a workable distinction can be drawn between unproblematic forms of data exposure and public lies on the one hand, and structural forms of manipulation and self-deceit that may be termed "ideology" on the other. By this I mean the seeming "irreducibility of social phenomena" (Jaeggi) that comes in the form of invalid naturalizations and universalizations. Secondly, different forms of critique, e.g. "reflective, therapeutic, and demonstrative critique" (Wesche), can be employed to undo the mentioned ideological phenomena. In the same way as it is obviously no longer sufficient to point out blatant falsehoods, it might not be enough for a critical intellectual, journalist, or interviewer to display "courage, empathy, and a good eye" (Walzer). Even a "school of sentimentality" (Rorty) will more often than not fail to convince those operating under the pretext of ideological thinking. Only a structural analysis of the conditions and mechanisms of ideology in the digital age, in combination with different methods of critique, will help to mitigate this danger to privacy and publicity.

Matzner, Tobias: The interdependence of subjectivity and things: new reflections on the value of privacy Normative theories of privacy are usually based on concepts of subjectivity developed in the liberal strands of political philosophy: the subject as an autonomous being. It is this very autonomy that privacy is meant to protect. This line of thought has been criticized from various quarters, e.g. feminist theory, communitarianism and neo-Hegelian theory, which foreground the social dependence of subjectivity. Often, this critique has been extended to an attack on privacy, which is conceived as masking or disavowing this dependence. Others have tried to reconstruct the value of privacy, emphasizing privacy as one of the most important social preconditions for autonomy. What remains largely missing from this debate are reflections on the interdependence of subjectivity and technology. Philosophy of technology has shown that human subjectivity depends essentially on all kinds of things: on tools, on artefacts that mediate the perception of the world, on a socio-technical infrastructure. Furthermore, the intersubjective relations, which play a pivotal role in many of the aforementioned critical positions, are mostly technologically mediated. This question becomes especially pressing in times when things start to act, i.e. challenge the human as the only autonomous being. The paper shows that the interdependence of human subjectivity and technology precludes the possibility of deriving the value of privacy from autonomy. If such an autonomous position is seen as the intrinsic aim of privacy, all modes of subjectivation in which persons enjoy a status dependent on things or on technologically mediated intersubjectivity would be excluded from privacy protection – or at least considered lacking. Furthermore, all potentials of interacting with things or deferring actions to things would be falsely reduced to the perspective of an autonomous human being standing opposite technological actors. The paper offers an alternative argument for the value of privacy. Privacy is conceived as the possibility of keeping different forms and contexts of interaction separate – regardless of how these structure the relation of heteronomy and autonomy. Thus, privacy protects the conditions which enable a process of subjectivation in the first place – and which could eventually lead to a (partially) autonomous position.

Milivojevic, Tatjana; Ercegovac, Ivana: Man-computer Gestalt and The Noosphere Theory: Will the Computer Be Humanized or the Man Computerized? In a very short time, the Internet has penetrated so deeply into our everyday life and has become so much the very essence of our being, as well as of the way we function, that the question arises whether the Internet has become an integral part of our life, or whether its users are just elements of the global network system. The development of conventional broadcasting previously caused increased intellectual ferment, but with the development of the Internet, the idea of the noosphere has been re-actualized and associated with the concept of collective intelligence. Metaphysically speaking, the Internet represents a cerebral planetary network that enables the materialization of the noosphere. Today, satellite and information technology together with network communications (the Internet) are considered to represent a planetary nervous system, similar to the nervous system of individuals. An interesting recovery of the organic model, achieved through highly sophisticated technology, can thus be observed. People who participate in the creation of these networks and use them often are considered the new nerves and sensory organs of the planet. The idea of a merging between man and machine is not a novelty in scientific circles; John von Neumann, in his book The Computer and the Brain (1958), considered the idea of the human brain as a computing machine. His theories and assumptions were the basis for many scientists' research in the field of artificial intelligence. Scientists are trying to answer the question of whether the moment when the Internet will take over the role of a global consciousness is close. There are several directions of reflection regarding the future of interaction or symbiosis between man and machines. This paper deals with these concepts and tries to explain why the total control of man by the machine is basically not plausible. It also deals with the question of whether the creation or improvement of artificial intelligence and its implementation in people's everyday lives means raising the level of individual as well as collective intelligence, and whether it indicates entry into the noosphere, a global layer of consciousness, intelligence, and knowledge which more or less resembles the mind and consciousness of each human being.

Miller, Glen: Superabundance and Collective Responsibility in Engineered Systems Individuals involved in complex global sociotechnical systems often struggle with issues of distributed or fractured responsibility when ethical assessments of these systems must be made. In these situations, the pairing of the “problem of many hands” with the “problem of many minds” seems to lead to unattributable actions and consequences. Such concerns overlook the nature of engineering, where responsibility normally should be superabundant or redundant to account for the uncertainty that arises in technical projects, and whose practitioners, at least in Western countries, are expected to hold public health, safety, and welfare paramount. This superabundance can be illustrated using a 5x5 matrix that aligns role with a primary area of responsibility. The five roles, ordered from narrowest to broadest scope of concern, are (i) engineer, (ii) technical manager, (iii) business manager, (iv) executive, and (v) policymaker. The five areas of responsibility are (i) technical design, modeling, testing, operation, and maintenance; (ii) evaluation of individual expertise, comprehensiveness of team composition, and appropriate design and testing of the composite solution; (iii) development of business processes, incentive structures, and vendor relationships; (iv) development of culture, assessment of the value of organizational goals, and achievement of these goals; and (v) effects on society, especially monitoring of interactions between systems. In addition to their primary responsibility, each actor also has secondary responsibility in other areas. Secondary responsibility typically diminishes as one moves farther from one’s role, and, presumably, one’s expertise, power, and knowledge. Superabundance arises from the redundancy of expertise among agents holding the same or similar roles and also from the secondary responsibilities that each participant has. Superabundance could be evaluated for each area of responsibility and for the system as a whole. For each area in a technical project, a gap analysis could be created to spot weaknesses. For each area and perhaps for the system as a whole, a “safety factor” for redundancy could be developed based on uncertainties, risks and who will bear them, the importance of the system or subsystem to composite systems, etc., that is somewhat similar to, even if not as quantifiable as, the safety factors commonly employed in engineering design. These evaluative approaches provide guidance for their participants independent of outcome and allow them to identify systems and subsystems that should be sandboxed to keep failures from cascading. The approach sketched above complements tools such as fault or event tree analysis that capture technical risks by acknowledging that engineering solutions are sociotechnical systems that interact with other systems.
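
As an illustrative aside, not part of Miller's abstract: the 5x5 alignment of roles with areas of responsibility, and the gap analysis it is said to support, could be sketched roughly as below (Python; the role and area labels follow the abstract, while the weighting rule, threshold and function names are hypothetical).

    # Hypothetical sketch of the 5x5 role/responsibility matrix and a simple gap analysis.
    roles = ["engineer", "technical manager", "business manager", "executive", "policymaker"]
    areas = [
        "technical design, modeling, testing, operation, maintenance",
        "expertise, team composition, design/testing of the composite solution",
        "business processes, incentive structures, vendor relationships",
        "culture, organizational goals and their achievement",
        "effects on society, interactions between systems",
    ]

    def responsibility_weight(role_idx, area_idx):
        # Primary responsibility on the diagonal; secondary responsibility diminishes
        # with distance from one's role (a crude assumption for illustration only).
        return 1.0 / (1 + abs(role_idx - area_idx))

    def gap_analysis(coverage):
        # coverage maps (role_idx, area_idx) to how fully that actor covers that area (0.0-1.0).
        gaps = []
        for a, area in enumerate(areas):
            total = sum(responsibility_weight(r, a) * coverage.get((r, a), 0.0)
                        for r in range(len(roles)))
            if total < 1.0:  # threshold chosen arbitrarily for the sketch
                gaps.append(area)
        return gaps

    # Example: only the engineer fully covers the technical area; every other area shows a gap.
    print(gap_analysis({(0, 0): 1.0}))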

Müller, Vincent C.; Erler, Alexandre: The digital goes 3D – and what are the risks? We all write text, produce images and video, and play music ourselves on digital systems, but actual physical objects are only made in professional manufacturing – so far. This is changing with new cheap devices that allow Do-It-Yourself 3D printing, CNC milling, cutting, making your own electronic hardware, printing biological tissue, designing new DNA, and many more future developments. Enthusiastic ‘Makers’, creative designers and other non-experts push DIY with a digital sharing mindset. A Digital Do-It-Yourself (DiDIY) revolution is on its way. These DIY devices allow any of us to make a virtually endless variety of things, often from home: some of them innocuous or even beneficial, like customized cases for mobile phones, metal or plastic spare parts, clothing, artwork, or prosthetic limbs, but others dangerous, such as guns or biological weapons. So there is a problem of safety: What remains of gun control if people can 3D print their own guns – some of which might also be undetectable by current security technology? And if people can make their own artefacts at home, how do we uphold quality control standards? Another issue is responsibility, both moral and legal: who is to be held responsible if the use of a digitally made artifact results in harm to someone? And how do we handle the threat to copyright, trademark and design rights, if digital DIY allows people to easily replicate virtually any artefact by 3D scanning and sharing digital designs online? How will the spread of DiDIY impact the job market, distribution channels and the modes of production – will its impact be positive or negative overall? We investigate these issues in the European H2020 project “Digital DIY” (www.didiy.eu).

Müürsepp, Peeter: Aim-oriented Approach to Technology The British philosopher of science Nicholas Maxwell has been advocating the need for a revolution in academia. Maxwell’s core idea is that the current way of doing scientific research, which he calls standard empiricism, has to be replaced by aim-oriented empiricism. The advantage of the latter is that it accounts for the metaphysical assumptions in science. Scientists tend to presume that the universe is comprehensible, and they prefer unified theories over disunified ones and simple theories over complicated ones. According to Maxwell, this means that scientists normally assume something metaphysical but fail to acknowledge it. Standard empiricism falls short here. Aim-oriented empiricism makes it possible to account for the metaphysical assumptions in science, and thereby to make sense of the progress of science as well as the development of technology. The core idea of the paper is to assess Maxwell’s claim that the turn from standard to aim-oriented empiricism would enable us to account for technological innovation in a new way, one that would also mean obtaining a better grasp of the application of powerful modern technology that might pose a threat to humans. Standard empiricism is after collecting pieces of knowledge; aim-oriented empiricism would enable us to aim at wisdom. Knowledge-inquiry has to be replaced by wisdom-inquiry. Maxwell’s understanding of wisdom is close to Popper’s: by wisdom one means the capacity, and active desire, to realize what is of value in life, for oneself and others, wisdom thus including knowledge, technological know-how and understanding, but much else besides. In his well-known book asking whether science is neurotic, Maxwell explains: “Before the advent of modern science and technology, lack of global wisdom did not matter too much, we lacked the power to wreak too much havoc on ourselves and our surroundings. Now, with modern science and technology, our power is terrifying, and global wisdom
and civilization have become, not a luxury, but a necessity.” Maxwell is not a Heidegger-type pessimist waiting for a saviour God. He has an agenda that he believes would turn the tables concerning our interaction with technology in a rational way.

Nickel, Philip: Distributed consent for mHealth The conventional approach to informed consent in health care is difficult to apply to mobile health technologies (m-health). There are several reasons for this. First, m-health is not confined to a particular time or context (such as a clinical visit or facility). Second, m-health collects data that can be reused, copied and distributed indefinitely, creating the need for reflection about the limits of consent. Third, m-health introduces new entities and artifacts that complicate the transactions that underpin consent. Fourth, the norms for m-health are blurred between care norms, which clearly mandate explicit and reflective consent and are linked to standard bioethical protections, and information technology norms, which are explicit but not reflective and have a different normative content. In this paper we articulate a new approach to consent based on a distributed model, consisting of discrete acts of consent, but also participatory components and ongoing assessment of trust and intrinsic motivation. Discrete acts of consent are transactions in which information is provided and consent is requested on the basis of reflection about that information. Participatory components consist of processes of collaboration in the design and set-up of mobile health and associated technologies and practices (such as links to other infrastructure, information flows, and insurance reimbursement). Assessment of trust and intrinsic motivation consists of methods of measuring these dimensions through biological factors, and cognitive and emotional feedback collected from people while using the technology. We explore challenges to operationalizing this notion of distributed consent. We also explore the philosophical relationship between discrete acts of consent, participatory agency, trust, and intrinsic motivation. Finally, we outline our plans for further philosophical, technical and empirical research on distributed consent in m-health.

Offert, Fabian: Exhibiting Computing Machines. Ontology as a Speculative Principle for Exhibition Design When museums started to exhibit historically significant computing machines in the late 1970s, two major problems became immediately apparent: the problem of preservation and the problem of display. While the problem of preservation has received a lot of attention recently, the problem of display remains largely unsolved. Most institutions still arrange computing machines in a simple chronological fashion, disconnected from the electrical grid, silent, and confined in dimly lit glass cabinets, creating what is essentially a white cube full of black boxes. This practice prevails even in the omnipresence of machines of unprecedented complexity, particularly from the domain of artificial intelligence, whose role a critical curatorial practice should address. I argue that one reason for this neglect of the problem of display is the ontological constitution of the exhibited machines themselves, or, more precisely, a lack of critical vocabulary suited to describe a machine's relation to another machine, of a thing's relation to another thing. Hence, recent philosophical theories addressing this inaccessibility could be said to indirectly address the problem of display as well. More specifically, I claim that object oriented ontologies, if we take them seriously and literally (and with a grain of salt), can serve as a speculative principle for exhibition design, notably for the design of exhibitions of computing machines. The example of generative neural networks illustrates this claim. Most neural networks are trained on a large dataset to solve classification problems. Through simple technical manipulation, however, they can be instructed to sample the learned latent space and generate perceptually meaningful new data - images, text, sound - from it. Hence, at least for a subset of computing machines, we can visualize the machine's perspective by technical means and thereby determine how other machines relate to it. This visualization of the machine's perspective, in the form of samples from its latent variable space, can be employed to augment its display and inform further curatorial decisions, in turn opening up the machine's aesthetics to the visitor.
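The sampling step Offert describes can be made concrete with a toy sketch. The following Python snippet is illustrative only: the random linear "decoder" stands in for a trained generative network and is an assumption of this example, not the setup discussed in the abstract.

```python
import numpy as np

# Toy stand-in for a trained generative model: a fixed linear "decoder"
# mapping a low-dimensional latent space to 8x8 "images".
rng = np.random.default_rng(0)
latent_dim, image_dim = 16, 64
decoder_weights = rng.normal(size=(latent_dim, image_dim))  # hypothetical learned weights

def decode(z):
    """Map a latent vector z to a flattened 8x8 'image' with values in [0, 1]."""
    x = z @ decoder_weights
    return 1.0 / (1.0 + np.exp(-x))  # squash to [0, 1]

# "Sampling the learned latent space": draw latent vectors and generate new data.
samples = [decode(rng.normal(size=latent_dim)).reshape(8, 8) for _ in range(4)]
for i, img in enumerate(samples):
    print(f"sample {i}: mean intensity {img.mean():.3f}")
```

In a real exhibition setting the decoder would of course be an actual trained network; the point of the sketch is only that generated samples can be produced by feeding points of the latent space through the model.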

Parviainen, Jaana; Ridell, Seija: Choreographies of Smart Urban Power Pervasive technologies not only pose new challenges for the established policies and practices of city planning but also profoundly affect how the daily movements and (inter)actions of city dwellers partake in the production of urban space. Urbanites’ habitual paths with stopping points at diverse destinations form a choreography of “movement trajectories” that are interconnected by the use of smart devices. This paper approaches urban choreographies in the context of the ubiquitously digitalised city, particularly addressing questions of spatial power dynamics. For instance, Sony’s SmartEyeglass is lightweight, binocular eyewear that enables an augmented reality experience: text, symbols, and images are superimposed onto one’s sensory field of view. How does this type of information materialism both
affect city dwellers and engage them as participants in the reorganization of urban choreographies? Conceiving of pervasive technologies as embodied media, our aim is to explore how city dwellers’ mediated bodily routines participate in the co-constitution of spatial urban structures. Following Thrift’s (2004) definition of the “technological unconscious”, we assume that the software and algorithms working silently in the background affect the bodily coordination and habituated uses of mobile devices. Unlike the Freudian or Lacanian notions of the unconscious, this kind of unconscious keeps people’s bodies in the loop of smart urban infrastructures. Applying media theory, Latour’s Actor-Network Theory (ANT), and recent theoretical discussions on choreography in the context of HCI (Parviainen et al 2013), we examine the use of wearable technologies as micro-choreographic social gestures that connect integrally to the macro-choreography of collecting big data for economic profit. From this starting point, not only do the technological gadgets as material components constitute infrastructural support and control; the entire set of bodily practices and the techno-economic needs and interests that they serve is involved (Guattari 1995; Mumford 1995). In other words, users as bodily beings are always part of some technological ensemble and its infrastructural dynamic. When digital technologies, action, and materiality intertwine, people’s movements become one actor in a larger assemblage or choreography. This, we argue, is the crux of the spatial power dynamics of contemporary ‘smart’ cities.

Peterson, Martin: The Geometry of Engineering Ethics In this talk I will highlight some of the key points of my book The Ethics of Technology: A Geometric Analysis of Five Moral Principles, Oxford University Press (to be published in July 2017). My aim is to develop an analytic ethics of technology based on a geometric account of moral principles. I show that geometric concepts such as points, lines, and planes are useful for clarifying the structure and scope of moral principles applicable to the technological domain. This adds a missing perspective to the ethics of technology, and possibly to methodological discussions of applied ethics in general. The geometric method I propose derives its normative force from the Aristotelian dictum that we should “treat like cases alike”. To put it briefly, the more similar a pair of cases are, the more reason do we have to treat the cases alike. Here is a somewhat more precise statement of this idea: If two cases x and y are fully similar in all morally relevant aspects, and if principle p is applicable to x, then p is applicable to y; and if some case x is more similar to y than to z, and p is applicable to x, then the reason to apply p to y is stronger than the reason to apply p to z. These similarity relations can be analyzed and represented geometrically. In such a geometric representation, the distance in moral space between a pair of cases reflects their degree of similarity. The more similar a pair of cases are from a moral point of view, the shorter is the distance between them in moral space.
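One way to make the "somewhat more precise statement" above formal is sketched below. The notation (a similarity measure s, an applicability predicate A, and a reason-strength function R) is introduced here only for illustration and is not taken from Peterson's book.

```latex
% Illustrative formalization (notation assumed for this sketch, not Peterson's own):
% s(x,y) in [0,1] is the degree of moral similarity between cases x and y,
% A(p,x) says that principle p is applicable to case x,
% R(p,x) is the strength of the reason to apply p to x.
\[
\bigl(s(x,y) = 1 \wedge A(p,x)\bigr) \rightarrow A(p,y)
\]
\[
\bigl(s(x,y) > s(x,z) \wedge A(p,x)\bigr) \rightarrow R(p,y) > R(p,z)
\]
% The distance in moral space can then be read as d(x,y) = 1 - s(x,y):
% the more similar two cases are, the shorter the distance between them.
```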

To assess to what extent the geometric method is practically useful for analyzing real-world cases I have conducted three experimental studies. The three studies are based on data gathered from 240 academic philosophers in the U.S. and Europe, as well as two groups of 583 and 541 engineering students at Texas A&M University, respectively. The results indicate that experts (philosophers) and laypeople (engineering students) do in fact apply geometrically construed moral principles in roughly, but not exactly, the manner advocates of geometrically construed principles believe they ought to be applied. Although we cannot derive an “ought” from an “is”, these empirical findings indicate that it is at least possible for laypeople as well as experts to apply geometrically construed principles to real-life cases. It would thus be a mistake to think that the geometric method is overly complex and hence of little practical value.

Pitt, Joseph: Updating the Language of Philosophy I argue that characterizing philosophy as a search for universal answers to eternal questions (what has been called the perennial philosophy), a popular fiction, is not only falsified by examining the history of philosophy, but contributes to rendering philosophy increasingly irrelevant. In the context of the philosophy of technology, it is downright pernicious. Practitioners of the perennial philosophy are misled into pursuing its ends by two things – first, the language of philosophy; second, an implicit, unjustified metaphysical commitment. When we look at the language of philosophy we see many of the same words being used throughout the ages – phrases like “the good life” or terms like “epistemology”, “ethics”, “reality”, “nature”, etc. And thus we feel justified in performing such Whig historical investigations as asking about Socrates’ epistemology, seeking ancient answers to contemporary problems. But from the fact that we use much of the same vocabulary it does not follow that we mean the same thing, because what we accept as an adequate answer to questions posed using this vocabulary changes over time. More to the point, new questions come to the fore for which there is no historical vocabulary, as in the philosophy of technology, a relatively
young field for which many, e.g., Heidegger, felt the need to invent a new vocabulary. The problem this presents is the problem of connecting the new contrived vocabulary to the old. For, if the aim of philosophy is, as Wilfrid Sellars stated, “to see how it all hangs together” then our language must allow us to form a coherent picture. So if we look at the language of the philosophy of technology in which we proceed with our philosophical investigations, how do we show that these activities are part of philosophy seen grandly? What I seek to do here is connect the old vocabulary to the new, for the new is in touch with today. Second, we continue to buy into Plato’s metaphysics assuming some unchanging reality – which connects to the perennial philosophy. But, I will argue, we would do better with Heraclitus as our guide – for all is change and our language must reflect that.

Poznic, Michael: Architectural Modeling: Interplay of Designing and Representing In many sorts of engineering modeling practices there is an interplay of different but interrelated goals. This paper discusses architectural models, whose use is connected to at least two goals: designing particular buildings and representing these buildings. These two goals can be captured by two different modeling relations between vehicles and targets: in the case of designing, the target is adjusted to the vehicle, and in the case of representing, the vehicle is adjusted to the target of modeling. In a previous paper I showed that particular models in bioengineering involve both of these modeling relations (cf. Author 2016). What about architectural models? The architect is a prototypical model user who wants to create something with the help of a model. Her goal is to create a building, and in the process of creation she has to develop a design of the building that can be materialized in a scale model of the building. Especially when presenting the model to a customer, the question whether the model is about the planned building strongly invites the answer that the model represents the building. The model is about the building and, intuitively, it seems to represent the building. If the model were to represent the building, however, it would represent a potential or possible building rather than an actual one. Given the two modeling relations mentioned, the architectural model could also be regarded as standing in the relation of designing to the building: the building is adjusted to the model and not vice versa. This paper proposes to conceptualize the relation between model and building as a bipartite relation: first, the model stands in a relation of representation to a plan of the building. In this sense the model represents something, namely a plan of the building. Second, the plan and the building stand to each other in a relation of designing. So the intuition that the model represents something can be retained. Yet the target of the representation relation is not the building but the plan of the building.

Puga Gonzalez, Cristian: The Techno-Moral Change Revised: How Does Technology Affect Moral Philosophy? Some authors have analyzed how technology affects morality. Their analyses, however, are focused on how technology shapes the landscape of widely held conventional values or some persons’ moral decisions and beliefs. Instead, we analyze how technology can shape moral philosophy. Assuming moral philosophy is mostly a reason-based endeavor, we analyze how technology can affect the reasons or reasoning behind normative theories, principles, considered judgments and intuitions that are used to justify an action as right or wrong. Focusing on the case of Assisted Reproductive Technologies and specifically in wrongful life and wrongful handicap cases and the non-identity problem, we claim that technology can shape moral philosophy by producing ethical conundrums in which accepted concepts, principles and intuitions no longer provide clear guidance to assess what should be done. In turn, these conundrums stimulate the creation of new concepts and principles that are not necessarily consistent with other moral beliefs. We conclude by arguing that John Rawls’ method of justification, Reflective Equilibrium, is a promising tool to cope with these changes.

Reijers, Wessel: Virtue Sensitive Design of Personalised Virtual Assistants Personalised digital or virtual assistants can be conceptualised as systems for “managing communications and information, whose behaviour can be changed by the user and whose behaviour automatically adapts to the user” (Cooper et al., 2004: 1). They are increasingly integrated in the everyday use of ICTs (think of Apple’s Siri and Google Now) and as such can have significant ethical impacts related to, for instance, software biases and the privacy awareness of the people using them. To evaluate the ethical impacts of virtual assistants, I first turn to the Value Sensitive Design (VSD) approach, which has gained a prominent position in the ethics of technology and allows for the incorporation of ethical concerns in design practices. However, due to its reliance on the “principlism” approach (Friedman & Borning, 2006: 69), which shows a lack of theoretical consistency (see e.g. Clouser & Gert, 1991), I argue that VSD cannot adequately be used to understand how technologies mediate human “values”. I then turn away from an approach built on principlism and move instead to an approach that connects with the tradition of “virtue ethics of technology”, which has been firmly
established with Vallor’s recent book “Technology & the Virtues” (Vallor, 2016). Instead of looking at how virtual assistants mediate human values, I consider how they mediate human “practices”, which as Vallor shows are constitutive for human virtues. I approach this notion of “practice” by asking three different questions. First, I ask how virtual assistants mediate the practical wisdom, or phronesis, of the people they interact with. Second, I ask how virtual assistants can mediate concrete practices such as “meeting others”, “using a map” and “having a conversation”. Third, I ask which “technomoral” virtues, as Vallor calls them, are most suitable for evaluating the way in which virtual agents mediate these practices. Based on the analyses that result from these questions, I engage in a preliminary discussion of how the “value sensitive design” could be turned into a “virtue sensitive design” of virtual assistants. To do so, I situate “virtue” in the different stages of the VSD approach: the conceptual, empirical and technical stages.

Robison, Wade: Engineers’ Oath: Do no unnecessary harm! The ethical principle that we are to do no unnecessary harm ought to be the first principle of engineers, a way in which ethical considerations enter into design solutions on the ground floor, as it were. This is not a utilitarian principle, but a principle we ought to adopt whatever a utilitarian calculus tells us about whether our design solution is ethical or not. A design solution that passes a utilitarian test and so is judged ethical on utilitarian grounds can cause unnecessary harm that is outweighed, in the utilitarian calculus, by its benefits. But it is still ethically wrong. It is tempting to use a utilitarian calculus in assessing whether a design solution is ethical or not. Utilitarianism fits well the mindset engineers come to have. A great many effects of any design solution are predictable, and so we can compare the effects of one possible solution to those of another, providing checklists to compare. We can quantify, even if only roughly, the various harms and benefits among those effects — as Bentham thought we could for pleasures and pains. We can assess, even if only roughly, the probabilities of various effects occurring. And, what utilitarians emphasize, the effects are publicly accessible. As Kant points out, we can never be sure what someone intends because that is inaccessible to others, but effects are events that we can all observe — and measure. But it would be a mistake, I shall argue, to bypass the first question any engineer ought to ask about a design solution: does it cause unnecessary harm? I shall illustrate the value of asking that question by considering some examples of what I call error-provocative designs. The aim is to undercut the assumption that engineers are to be utilitarians if they are to be ethical.

Romele, Alberto; Furia, Paolo; Severo, Marta: Digital Hermeneutics: Mapping the Debate and Paving the Way for New Perspectives Hermeneutics classically refers both to a first-order ‘art’ (Kunstlehre, which means ‘art’, ‘technique’, and also ‘technology’) of text interpretation, and to a second-order reflection on the conditions of understanding. In recent years, several scholars have tried to expand hermeneutics and its principles toward digital technologies (de Mul 1999, Capurro 2010, Rastier 2011, van den Akker et al. 2011, Diamante 2014, Mohr et al. 2015, Armaselu et al. 2016, van Zundert 2016, Sützi 2016). Despite differences, two main tendencies can be isolated. Some academics have focused on the impact of digital methodologies on (mostly linguistic) expressions such as texts and text corpora. Other researchers have rather been interested in the conceptual tools offered by hermeneutics for understanding digital technologies as interpreting (and eventually understanding) entities. The former group of scholars is usually related to the digital humanities, the latter to the philosophy of the Internet and new media (Brey and Soraker 2009). The goal of this paper is twofold. First, it aims at mapping this debate exhaustively, for the first time. The authors will draw on their ‘ethnographic’ experience in the field and on a systematic query of four databases of academic literature (Web of Science, Scopus, Philpapers, and Google Scholar). Second, it will advance the hypothesis that the two current tendencies in digital hermeneutics (digital tools as instruments for interpretation; hermeneutics as an intellectual toolbox for understanding digital technologies) may be integrated with each other, on two conditions: first, a shift from digital texts and documents to digital traces as the object of digital hermeneutics (Authors 2016a); second, a shift from an ontological interest in the sense of Being to a more modest concern for the sociotechnical conditions of production and use of digital traces (Authors 2016b). In sum, the authors will propose a ‘material hermeneutics’ (in the sense given to this term by postphenomenology and, rather differently, by Peter Szondi) which is both methodologically oriented and context-interested, anti-transcendental (i.e. empirical) and anti-dogmatic (i.e. critical). Such a hermeneutics uses digital methods for interpretation, but also reflects on the limits (the ‘transductions’, in Simondon’s terminology) of all digitally mediated understanding. Next to the theoretical reflection, the authors will offer empirical evidence of the effectiveness of this new approach by mapping and visualizing, with the help of an automatic web crawler (Hype) and the open-source software Gephi, the online ‘world’ of the philosophy of technology (journals, universities, associations, editors, authors, etc.).

Saijo, Reina: Human’s Vulnerability to AI technology for Decision Support System and a Possibility of Some Autonomous and Authentic Way of Living Nowadays, we already live in a society in which intelligent systems, at least in part, participate in our decision making. A decision support system (DSS) is basically meant to represent a certain part of the human intelligence process and substitute for it in order to reduce people’s tasks. DSS are introduced not only for professionals, as with expert systems in medicine, but also for citizens in everyday life. For example, a product recommendation system such as Amazon’s may be more efficient than searching for references ourselves, especially on unfamiliar topics. Choosing and purchasing a book with such an intelligent system can be described as a collaborative decision made by the person and the system, not a decision made by the person alone. If we rely on intelligent systems and partly delegate decision making to them, the question arises whether our decisions can remain autonomous in a sound way. First, in this paper, I examine the ethical concerns brought about by intelligent judgment with such systems from the viewpoint of the ethical theory of ‘vulnerability’ (Mackenzie, et al. 2014). The use of DSS described above implies a possible risk of invading our autonomy, that is, the ability to act in conformity with our beliefs and desires. Vulnerability is a concept for grasping the human situation in which people are likely to be subject to a reduction of their interests or an increase of dangers: death, disease, injury, social isolation, hostility from others, poverty, and discrimination. In other words, the first main question here is what vulnerability can be caused by AI technology for DSS. As such a vulnerability, I point to bad paternalism from the developers or the system towards the user. The second problem, then, is how to handle the possible vulnerability caused by intelligent systems. I make two suggestions: (1) to specify and reduce the causes of bad paternalism between the system and the user. What constitutes the badness of this paternalism is an invasion of the individual’s autonomy by others; hence a further problem arises, namely how to understand and realize our autonomy. The next step is (2) to examine and refine the concepts of autonomy and authenticity. As these concepts are discussed in bioethics, feminism and disability studies, these fields point out that we are agents dependent on others and on our environment, and at the same time that autonomy is a relational concept rather than something possessed by a single independent agent. My suggestion is to learn from the studies of minority groups and to apply their insights to the human condition of living with intelligent machines.

Santos, Alexandra Dias: From a rooted southern perspective: the life and thought of Angolan Ruy Duarte de Carvalho Ruy Duarte de Carvalho, a Portuguese-born Angolan (Santarém 1941—Swakopmund 2010), developed an original and coherent reflection on technology, civilisation and development, with a focus on Angola, especially the country’s southern provinces. These semi-deserted lands, inhabited by nomads, constituted for RDC both a terrain of enquiry and a philosophical epicentre from which he encompassed the world and its processes. The unusual focus on the rural world can be explained by his biography: he was raised in the South by a father who was a famous hunter, and trained as an agricultural manager. Having witnessed the first stirrings of the Angolan armed struggle (1961-1975), RDC became a sympathiser with the nationalist cause. After independence he became a film director in the Angolan broadcasting service, producing a number of documentaries on the lives of the nomads, which earned him a degree from the École des Hautes Études en Sciences Sociales (EHESS). Later research on the fishermen of the Ilha de Luanda led to his doctoral thesis in Social Anthropology and Ethnology, also at the EHESS (1986). In the 1990s RDC started recording his frequent travels towards the South in a unique style, a mix of ethnography, history, philosophical reflection, and fiction. The trilogy Prosperous Sons, written in the 2000s, culminated a body of work centred on the confrontation between the processes of expansion of the Western world, in its various dimensions—colonial, scientific, technical, and political—and the older Bantu expansionism, observed from ultra-peripheral southern Angola. RDC coherently denounced the imposition of disembedded technologies as a means of domination by the colonial powers, as well as by the governments which succeeded them, and made clear that these techniques were inadequate for the landscape of the desertified South, offering solutions neither for the survival of populations nor for the environment. He anticipated the concerns expressed by activists such as Vandana Shiva by defending the recognition of ancestral techniques. More than a denunciation of technique, we find in RDC a refusal to succumb to its enchantment, and a reflection on its conspicuous uses.

Simon, Jonathan: The medical drug as technological object. What, if any, is the philosophical value of considering a medical drug as a technological object? In this paper I will argue that looking at medical drugs from this perspective raises some interesting questions
not only about the nature of drugs but also the philosophy of technology. Starting with a survey of recent (and some less-than-recent) discussions of the nature of technological objects, I will go on to examine how pharmaceutical drugs fit with the various available definitions. Exploring a schema put forward by Andrew Feenberg in his Questioning Technology (with eight categories constructed around the concepts of primary instrumentalization (decontextualization, reduction, autonomization, position) and secondary instrumentalization (systematization, mediation, vocation, initiative)) will lead us to consider the medical drug in terms not only of its development, production and marketing but also its prescription and consumption as well as the legal restrictions that set it apart from other consumer goods. To illustrate this approach I will present some key elements of an in-depth study of the serum introduced to treat diphtheria at the end of the nineteenth century. This exploration of the context of the production and use of this serum will open the way to a more focussed discussion of the notion of efficacy, a key concept in modern pharmacy. Looking more closely at efficacy will allow me in turn to provide a tentative and partial answer to the initial question. Although the principal example I will be using to illustrate my approach is the serum for the treatment of diphtheria, the discussion of efficacy will provide an entry into an interrogation of twentieth (and twenty-first) century practices aimed at determining efficacy, notably the randomized controlled clinical trial. Thus, we will show how thinking about the medical drug as a technological object can allow us to re-think the function of the clinical trial in modern pharmacy.

Sjöstrand, Björn: Technology, ethics, and politics in a globalized world In this paper, I will investigate the ways in which technology relates to the ethical and the political in the age of globalization. I argue that technological development is transforming the very concept of the political, imposing upon us an ethical imperative to think the political beyond the political and the democratic beyond democracy. To argue my case, I will bring to bear the philosophy of technology of Jacques Derrida. Based on his understanding of ethics as openness to the entirely other, I will explain why it is necessary, indeed a duty, to be open to technological innovations. It is necessary because they dramatically challenge fundamental political concepts such as democracy. They may, for better or for worse, "deterritorialize" the given concept of democracy, which is governed, controlled and limited by the borders of the nation state. New technologies blur the borders of the nation state and in an unforeseeable future they may erase them completely. What the acceleration of technology then produces is a fundamental transformation not only of the prevailing concept of democracy, but also of the very concept of the political. Technology can no longer be perceived as just one aspect among many that define the political, since technology itself transforms this concept. It is important, I think, not only to "politicize" technology, that is, to put it in a political, social, and global context in order to determine its political, social, and global implications. For it is precisely this technology that, for better or for worse, transforms the concept of the political. A limitless technology forces us to see the political beyond politics and the democratic beyond democracy. It is this beyond that liberates politics from its own identity and thus relates politics to the entirely other, to ethics. This triptych of concepts—technology, ethics, and politics—becomes inextricably intertwined in a global techno-ethico-political complex, in which the acceleration of technology is the very motor of the development towards a possible new world order.

Son, Wha Chul: SDGs in the Era of the 4th Industrial Revolution This presentation examines the possible connection between the Sustainable Development Goals adopted by the UN in September 2015 and the notion of the so-called “4th Industrial Revolution” introduced at the Davos Forum in January 2016. More specifically, it asks how the notion of sustainable development can guide the 4th industrial revolution towards a better future. The critique of modern technology represented by the concept of “autonomous technology” has been accused of being pessimistic and technophobic. However, the discourses concerning the 4th industrial revolution only concretize that concern. The newest technological developments are pictured as unavoidable or inevitable. This leads to the widespread tendency to regard technological progress like the weather, and its prediction like the weather forecast. The SDGs focus on poverty, inequality and environmental issues while believing in the good of technological progress. This approach is as conventional as that of the 4th industrial revolution. I will argue that the idea of the SDGs and the expectations concerning the 4th industrial revolution should be reinterpreted and redirected in the light of the long dispute in the philosophy of technology, namely concerning the future of technological society and the relationship between man and technology. In particular, one has to take a more active stance in interpreting the “sustainable industrialization” of the 9th SDG. The concept of sustainability should be applied not only to the usage of existing technology, but also to the potential direction of future technology promoted by the notion of the 4th industrial revolution. The promoters of the 4th industrial revolution need to provide a more concrete and realistic agenda to
minimize the potential damage of the “revolution.” They themselves often refer to the possibility of economic and technological polarization, unemployment, and invasion of privacy as the result of future technology, but never go further than mentioning them. I will begin with the introduction of SDGs and the 4th industrial revolution, analyzing their significance in our age. Then these will be examined in the context of philosophy of technology, which will lead to my argument that the notion of sustainability should be understood and applied more broadly.

Stufano Melone, Maria Rosaria; Borgo, Stefano: Ontological Analysis for shared urban planning understanding Urban planning aims to drive and manage the evolution of complex systems that coexist in a defined portion of territory and requires the involvement of a variety of actors. The presence of distinct actors involved in the different aspects of the organization and control of complex systems makes urban planning activities very sensitive to a variety of perspectives. Also, planning needs to manage, foresee and organize a territory over a temporal span whose extension can vary depending on the aspects in focus; thus, some actors may intervene at times when others remain inactive. For these reasons, the coordination and orchestration of actors and actions are essential. However, technical actors like administrators, politicians, architects, associations and corporations rely on specific languages whose terminologies, although addressing the same territory and its processes, can be correctly understood only by adopting their background knowledge and objectives. This means that crucial terms like landscape, security, transportation and sustainability, although shared across actors, are far from being neutral and can be sources of subtle misunderstanding. An effort towards terminology alignment and shared understanding (extended also to non-technical actors like the inhabitants) is needed. Do these understandings of the terms have a common core? Can we clarify commonalities and highlight distinctions? Building on experiences in engineering and economics, we hypothesize that applied ontology should be introduced into the urban planning domain to clarify the terminologies and to make explicit the interconnections and contrasts across the actors’ perspectives. It can even pivot the integration of new ICT technologies in urban planning, e.g., those needed to model and manage Smart Cities, such as software management, app developers, and technology-based social system experts. Finally, it helps to organize the essential features of places in terms of objects, properties and processes. Our focus is on the interplay of the spatial, artifactual, cognitive, social, cultural and process levels. We will use these to present an analysis of the perspective of some technical actors and of relevant terminology, to indicate how ontology can single out the different intended meanings of such terms, and to show how lack of this knowledge leads to incompatible readings of the same plan. Finally, we indicate how to structure a framework for knowledge sharing in this interdisciplinary domain.

Thompson, Paul: Sociotechnical Imaginaries for Future Food This paper sketches four archetypal characterizations of how food will be produced, processed, distributed and consumed over the coming half century—a time in which all manner of social association will be influenced by climate change and the growing scarcity of resources relative to human population. Each archetype reflects what Sheila Jasanoff has called “a sociotechnical imaginary” that generates scenarios or visions of the future that are richly dependent on a technical infrastructure and on a pattern of future technical innovation. In this paper, four such imaginaries are sketched briefly: technological modernization (a continuation of food system innovations that began in the 20th century); sustainable intensification (a model emphasizing more efficient use of ecosystem services); “extensification” (a return to less intensive land use); and urban agriculture (a model driven by traditions of urban activism, planning and information technology). As Jasanoff argues, it is in the comparison of sociotechnical imaginaries that their normative commitments and implications become clear. This analysis places special emphasis on how each archetype reflects and incorporates a response to environmental sustainability and to food justice.

Thomson, Jol: G24|0vßß / Film Screening This film engages with the environmentalization of media technologies, or the 'technoecological condition' of the neocybernetic order. G24|0vßß (21 mins) is an award-winning HD audio-video composition shot with the coldest piece of matter in the observable universe — the Cryogenic Underground Observatory for Rare Events (CUORE) experiment at the National Laboratory of Gran Sasso (LNGS) of the National Institute for Nuclear Physics (INFN), Italy. Crucially, the mountain within which CUORE resides is a significant element of the experimental apparatus. G24|0vßß is part of a series of collaborations with high-energy and non-thermal physics experiments at the edge of perception and computation. These fieldworks investigate new planetary-scale sensing apparatuses; architectures of the imperceptible; posthumanist and speculative philosophies; the
entangled, expanded and transductive narratives of landscape and technology; histories of visuality, physical observation, and their representations. The artist will introduce the film briefly and leave some time for comments. Shot in the Abruzzo region, G24|0vßß diffracts some of the embedded characters of the mountain, which itself has become an essential collaborator in the neutrino and dark matter experiments at the LNGS. Quotes from Stanislaw Lem’s 1968 novel ‘His Master’s Voice’ punctuate this composition and help question the norm of human exceptionalism in the age of the Anthropocene.

Thuermel, Sabine: Online Dispute Resolution based on Smart Contracts: An Example of Disintermediation and Disruption of a Socio-technical System Online dispute resolution serves to resolve disputes, e.g. those arising from online transactions, by using innovative technologies. Smart contracts are a case in point: they are computerized transaction protocols that self-execute the terms of a contract under pre-determined circumstances. By making use of blockchains, smart contracts may be programmed in such a way that no human intermediaries are needed for their decentralized execution. In the ideal case these contracts are fully self-executing and self-enforcing, and the transparency of the execution of the contract is secured. Online bets on arbitrary events, e.g. on the outcome of political elections or public referenda, and on facts, e.g. weather bets – snow on Christmas Eve – are simple examples of such smart contracts. Trading platforms for “smart securities” such as syndicated loans and catastrophe-insurance swaps are more elaborate ones. Decentralized autonomous organizations (DAOs) are an even more ambitious approach. However, judging from the hack of one prominent decentralized autonomous organization, in which $50m was stolen, this technology is not necessarily safe and infallible. Even if the technology is not without problems of its own, one must acknowledge that new socio-technical norms are emerging. A smart contract is a unique “technological arrangement of people and things” where the “reasoning and acting” is delegated to technology. The need for trusted intermediaries is minimized because online dispute resolution is already part of the smart contract. Traditional means of dispute resolution are augmented or even replaced by these novel approaches. Especially for cross-border disputes in e-commerce this may well be the alternative dispute resolution and enforcement method of the future. Adherents of smart contracts hope for “economic transactions on autopilot” (Economist, May 2016) in which all intermediaries, be they governments or banks, become superfluous. Thus online dispute resolution based on smart contracts may serve as an embryonic example of the disintermediation and disruption of a socio-technical system, i.e. of classical ways to do e-commerce and resolve disputes in public.
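To make the idea of a self-executing bet concrete, here is a minimal, purely illustrative Python sketch. It is not a real blockchain contract: the oracle function, stake amounts and party names are hypothetical assumptions of this example, and an actual smart contract would run on a distributed platform rather than as ordinary Python code.

```python
from dataclasses import dataclass

@dataclass
class WeatherBet:
    """A toy 'smart contract': a bet that self-executes once an oracle reports the outcome."""
    party_yes: str          # bets that it snows on Christmas Eve (hypothetical parties)
    party_no: str           # bets that it does not
    stake: float            # amount each party locks into the contract
    settled: bool = False

    def settle(self, snowed_on_christmas_eve: bool) -> str:
        """Pay out the whole pot to the winner; no human intermediary decides."""
        if self.settled:
            raise RuntimeError("Contract already executed")
        self.settled = True
        winner = self.party_yes if snowed_on_christmas_eve else self.party_no
        payout = 2 * self.stake
        return f"{winner} receives {payout:.2f}"

# The 'oracle' would normally be a trusted external data feed; here it is a stub.
def weather_oracle() -> bool:
    return True  # assume it snowed

bet = WeatherBet(party_yes="Alice", party_no="Bob", stake=10.0)
print(bet.settle(weather_oracle()))  # -> "Alice receives 20.00"
```

The pre-determined settlement logic and the built-in resolution of the dispute (who gets the pot) are what the abstract refers to when it says dispute resolution is "already part of the smart contract".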

Tjostheim, Ingvar: The feeling of being there - is it an illusion? In our daily life and in many of our experiences, technology plays a key role. Experiences with, or in, VR and AR are sometimes described as an illusion. In one of the most cited definitions of telepresence the authors posit that the feeling of being there is a perceptual illusion of non-mediation (Lombard & Ditton 1997). The authors do not give any references or discuss what they mean by illusion. Many have adopted the Lombard & Ditton definition, but do not discuss or question why the term illusion is used in the definition. In empirical work, the question is not only how to define a phenomenon, but how to operationalize it. This is a necessary task because the participants in these types of studies are asked to report what the experience is like, etc. In order to investigate the interplay of the empirical and the philosophical, we argue that we should look at the phenomenon from both perspectives. There are many types of illusions and, although many of them are well described, “it is extraordinarily hard to give a satisfactory definition of an illusion. It may be the departure from reality, or from truth; but how are these to be defined?” (Gregory 1997). Allsop (2010: 199) writes that illusions are a phenomenon easily described within a representational model of perception. In this paper we discuss definitions of illusion related to telepresence theory, and how the phenomenon is explained within representationalism, relationism, the sense-datum theory and enactivism. In telepresence research, can we identify studies or articles that build on these theoretical and philosophical viewpoints? With reference to these four positions, and by looking at how telepresence researchers operationalize the phenomenon, our aim is to unpack some of the assumptions underlying the definitions of telepresence.

Toon, Adam: Things and concepts What are scientific concepts? How are they related to the material culture of science, such as instruments, formulas, maps and diagrams? This paper addresses these questions by drawing on recent work in cognitive science and philosophy of mind. We tend to think of mind and cognition as inside the head. By contrast, the extended mind thesis
claims that mental states can extend into the world. Andy Clark and David Chalmers (1998) offer the well-known thought experiment involving Otto, an Alzheimer’s patient who uses a notebook to compensate for memory loss. Clark and Chalmers claim that Otto’s notebook plays a similar role to normal, biological memory. As a result, they argue, its entries count as part of the material basis for Otto’s beliefs. Scientific concepts are normally taken to be mental representations found in the scientist’s head. I will argue that this view is mistaken. In fact, many scientific concepts are what I will call extended concepts: they are realised by interaction between brain, body and world. To show this, I draw on a classic study of birdwatching by Michael Lynch and John Law (1999). Birdwatchers rely on various items of equipment, such as binoculars, spotting scopes, lists and field guides. I will argue that the birdwatcher’s field guide forms part of the material basis for her conceptual resources, just as Otto’s notebook forms part of the material basis for his beliefs. Recently, a number of authors have argued that the extended mind thesis has important implications for epistemology (Clark et al. 2012). If the argument in this paper is along the right lines, then the implications for the nature of scientific concepts are equally far-reaching.

Unger-Büttner, Manja: Design as Experimental Ethics – Moral Skepticism and the Value of Exploration. Designers are great skeptics. Just as they call habitual or stereotypical ideas into question by finding completely new solutions, they sometimes also question the demands addressed to them. Thus, from my design students I have heard the question: Why be moral? By allowing moral skepticism to take effect on us while taking a deeper look at design practice, we can find an interesting path to a productive approach to this central problem of ethics. According to Verbeek (What Things Do, 2005, 232), innovation and design depend not only on the decision of WHAT should be designed, but also on HOW it should be designed. Here a leeway/Spielraum opens up within which things can be designed one way or another (Hartmann, Philosophie des Schönen, 1887, 140). Decisions can only be made if something can be decided one way or another. Weischedel says that philosophical ethics is nowadays possible only as skeptical ethics; he ultimately suggests enduring and accepting the fundamental doubt of skepticism as something that belongs to life (Skeptische Ethik, 1980, 13 & 183). As a philosopher and designer, this still seems a bit passive to me. By questioning “On Certainty”, Wittgenstein points to everyday action. Stanley Cavell shows that what returns after passing through skepticism will never be the same again (Die Unheimlichkeit des Gewöhnlichen, 2002, 93). Gutschmidt shows, with Heidegger, a sort of "being carried groundlessly"/"grundloses Getragensein" in passing through skepticism, rather than just enduring it (Sein ohne Grund, 2016). Beside all of these more or less active approaches to skepticism, I want to underline the explorative, experimental factor of design. The designer and design theorist Björn Franke, for example, emphasizes that designers do not consider questions about the good life in a normative way, but by asking about possibilities of existence: "Explorative design is not asking “what ought to be” but rather “what could be” or “what would be if ...?“ (Design as Ethical and Moral Inquiry, 2010, 71). So my talk will show how this connection of exploration and experiment in design with an "active skepticism"/"tätige Skepsis", as demanded by J. W. von Goethe, could work (Maximen und Reflexionen, Berliner Ausgabe, XVIII, 651). This can pave the way for an exploratory approach to morality and ethics. The great moral sceptic Nietzsche (Werke, 1954, Band 2, 72) already wrote that he praises every skepticism that permits him to answer: "Let's try it!".

Unsworth, Kristene: Smart cities and participation in the polis A goal of smart cities is to use technology and data to improve the lives of individuals living in urban environments. Projects range from those spearheaded by IBM in the Smart Cities Initiatives to local initiatives that use individually shared location data to determine where to place bike-share stands. These projects rely not only on data, but in most cases on individuals’ willingness to participate in mapping their own activities and, in many cases, to share information that in the past may have been considered private or even insignificant. Public participation is a necessary component of the polis. How does the Internet of Things (IoT) affect participation in the polis? Is this participation voluntary? Is this participation democratic? This presentation will reexamine civic participation in the polis as mediated by the IoT.

Vermaas, Pieter; Eloy, Sara: Grammar and Quality: Assessing the Design Quality of Grammar- System Generated Architecture In architectural research the SPT 2017 challenge of understanding things by their grammar has been taken up. Grammar systems have, for instance, been developed for analysing the design principles of
types of architectural artefacts such as Palladio villas and Queen Anne houses. The understanding these analytic grammars give of the artificial world may already be the topic of philosophical research. This paper focuses on a further challenge that grammars may bring to philosophy, namely their use for designing new architecture. This further use of grammars may seem business as usual, since it is merely the transition from descriptive tools to prescriptive ones. Within architecture, however, it raises fundamental questions. Are designs generated with grammar systems true architecture? Do grammar-system designs have the same quality as designs created by human architects? In this contribution we take up the challenge of answering this last question. We present a research method for assessing the quality of grammar-system generated designs, and give the outcomes of an experiment done with it. This contribution may be taken as work on the broader methodological challenge of validating design tools. Our case is a grammar system developed for analysing refurbishment design of Rabo-de-Bacalhau apartments, a specific type of housing in Lisbon, Portugal. This grammar system generates new refurbishment designs, leading to the question about their quality that we have taken up. An approach to answering it is to first define design quality and then test whether the grammar generates designs that have this quality. A drawback of this approach is that it makes the answer dependent on the adopted definition of design quality, and that a positive outcome leads to further tests with sharpened definitions of design quality. To avoid this, we operationalised design quality in a manner that is customary in architecture: we organised an architectural competition in which a panel of architectural evaluators ranked sets of refurbishment designs for Rabo-de-Bacalhau apartments that included designs generated by the grammar system and designs created by human architects. We could thus demonstrate that grammar systems can generate designs that are of sufficient quality.

von Schomberg, Lucien; Blok, Vincent: Innovation and the Character of our Age Over the years the concept of Responsible Research and Innovation (RRI) has gained increasing importance in Europe, where it currently stands as a cross-cutting issue under the prospective EU Framework Program for Research and Innovation “Horizon 2020” (European Commission, 2015). In its implementation, both EU policy makers and researchers continuously discuss how to guide research and innovation (R&I) processes in the ‘right’ direction. However, little thought is given to which concept of innovation is presupposed as self-evident today and what implications it has for the objectives of RRI (Blok & Lemmens, 2015). What is the character of our age and how does it determine the way we think of innovation today? In a recent paper, Benoît Godin argues that a particular concept of innovation as technological and economically driven has become dominant in contemporary times (Godin, 2008); innovation is no longer understood as ‘creation of change’ or ‘invention’ in general, but in terms of new technologies and the economic benefit they bring. In other words, it is self-evidently understood out of the current techno-economic paradigm. In light of Godin’s observation, in this contribution we critically reflect on the ontological status of the presupposed concept of innovation in the RRI literature.
This implies that we do not account for what this presupposed concept of innovation is with regard to its representing artefacts and products, but instead focus on the very paradigm it projects. In this respect, we make the hypothesis that the very framework of RRI is grounded in what Heidegger calls ‘Enframing’ (Heidegger, 1977). This involves the idea that while RRI governance frameworks believe that they increasingly control and secure the societal desirability and ethical acceptability of R&I processes, these ‘frameworks’ are in fact ‘Enframed’ by the techno-economic paradigm inherent in their presupposed concept of innovation. As a final step, we explore what implications this presupposed concept of innovation has for the ideal of responsible innovation. In doing so, we bring into question whether the ‘R’ in RRI ultimately calls for a concept of innovation that exceeds the domination of the techno-economic paradigm, especially given that RRI promises greater benefits to society than economic growth and technological advance alone (Wynne, 2001).

von Schomberg, Rene: Global perspectives on Responsible Innovation I present some perspectives from a handbook on Responsible Innovation (in print), which constitutes a global resource for the fast-growing interdisciplinary research and policy communities addressing the challenge of driving innovation towards socially desirable outcomes. This book brings together well-known authors from the US, Europe and Asia who develop conceptual and regional perspectives on responsible innovation as well as exploring the prospects for further implementation of responsible innovation in emerging technological practices ranging from agriculture and medicine to nanotechnology and robotics. The emphasis is on the socio-economic and normative dimensions of innovation, including issues of social risk and sustainability.

Wang, Dazhou: Reflections, Thinking in prospect and Experiments in the Technological Era Traditional philosophy typically rests content with reflections on the past, just as Hegel said: "Philosophy, as the thought of the world, does not appear until reality has completed its formative process, and made itself ready ... the owl of Minerva spreads its wings only with the falling of the dusk", meaning that philosophy understands only in hindsight. Certainly it is easy to be wise after the event, yet the technological era desperately requires thinking in prospect. Indeed, modern technology has raised many challenges for human beings, and to cope with such challenges properly we need not only to reflect thoroughly ex post on what happened in the past, but also to think wisely ex ante about the future. Challenging this preconceived idea about the role of philosophy, the author tentatively puts forward a new direction of philosophical thinking in the context of technological practices in the spirit of experimentalism, first developed by John Dewey and then enriched in a metaphysical sense by Bruno Latour. In doing so, the author puts the emphasis on the asymmetry of ex ante and ex post in human action, the incompleteness of thinking in prospect as well as in retrospect, and the indeterminacy involved in translating the hindsight gained from previous actions into guidelines for subsequent actions. On this basis, it is argued that there is no once-and-for-all solution, and that the sensible experiment is the key to a wide range of theoretical and practical problems in the world of technology and engineering. To this end, doing philosophy and playing a role in the experiment and formation of technologies, rather than just sitting and prattling about general principles, might be the inevitable choice for contemporary philosophers of technology and engineering.

Wang, Hao: Can Privacy Escape Powers? Toward a Theory of Privacy Through the Lens of Play With the rise of widespread surveillance, the rights-based conception of privacy seems to be slipping into a fiction. Against this backdrop, some have criticized the liberal root of privacy, which perpetuates the inequality and dominance of the existing power structure and pacifies individuals’ resistance to it. This sort of criticism, better known as the Power Critique, is so fundamental that it seems the liberal concept of privacy can only be abandoned. I will argue, however, that the Power Critique is unsound, as it assumes that existing power relations are unchallenged by privacy. To account for this, I will first introduce Julie Cohen’s postliberal theory of privacy, which is innovatively associated with the conception of “play”. Unlike the liberal paradigm, based only on abstract rights, Cohen stakes out a realistic theory of privacy located in everyday practices and constrained by power relationships. Secondly, I will illustrate that this realistic framework of privacy is constitutively transformative and resistive. The upshot is that the situated self, seen through the lens of play, is not wholly determined by those powers, but plays tactically, often engaging in diverse, unpredictable, and ad hoc practices rather than readily conforming to the theoretical strategies of mainstream description. Privacy, a dynamic space preserved for subjects to play in, is thereby constitutively a potential power to resist and transform hegemonic structures. Thirdly, I will point out some weaknesses in this new framework of privacy and, more importantly, propose a theoretical remedy based on Habermas’s notion of intersubjectivity. I argue that in the new framework of privacy, both multiple everyday tactics and rights-based liberty should be understood as jointly contributing to resisting and transforming the existing power structure. In all, the concept of privacy is by no means the ally of powers, as commonly criticized; instead, it is potentially an empowerment for people to resist and transform existing power relations. Hence, in this society of runaway surveillance, we should rediscover the value of privacy as continual resistance to powers, and attach greater importance to it, rather than the opposite.

Weiss, Dennis: How Ought We to Treat Our Televisions? It’s 1960. The second season of The Twilight Zone is airing. The episode, “A Thing About Machines,” features Mr. Bartlett Finchley, a sophisticated bon vivant. As he drives up to his estate, we see a television repair van parked outside. The repairman is deep in the bowels of Mr. Finchley’s television, amidst broken tubes and oscillators (it’s an earlier, pre-digital age, after all). The repairman is very familiar with Mr. Finchley and his television, having previously been called out to repair it after Mr. Finchley put his foot through the screen. Finchley recounts, “The set was not working properly. I tried to get it to do so in a perfectly normal fashion.” The repairman observes, “By kicking your foot through the screen? Why didn’t you just horsewhip it, Mr. Finchley? That’d show it who’s boss.” Finchley’s appliances don’t work properly, the repairman offers, because he “don’t treat them properly.” Would it be wrong for Mr. Finchley to horsewhip his television? Perhaps yes, The Twilight Zone suggests, as Mr. Finchley’s “collection of mechanical Frankensteinian monsters,” as he proclaims them, finally rebel, forcing him to run for his life and ultimately drowning him in his own pool. This televisual artifact from the first golden age of television directly poses the challenging question, “how ought we to treat our televisions?” Despite the fact that the television is probably our most ubiquitous and domesticated technology and that we in the U.S. and Europe watch a lot of television (on average four hours a day), the television continues to present challenges to theorists examining technology. At least when they bother to turn to it, for among the dominant approaches and theorists in philosophy of technology, the lowly television seldom rates analysis, its domestic stain perhaps serving to marginalize it from high theory. And while it was supposed to have been swept aside in the digital revolution at the turn of the century, television is still very much with us, its technological form mutating and its content generating a so-called second golden age. In posing the question, “how ought we to treat our televisions?”, I’ll suggest that the television calls forth a transdisciplinary approach to technology that brings into close conversation philosophy of technology and cultural studies. Such an approach would serve to address the lack of attention to culture in philosophy of technology and the lack of normative considerations in cultural studies. Properly caring for our televisions, it is suggested, will prove fruitful to philosophy of technology, cultural studies, and those of us who watch and enjoy our televisions.

Wittkower, Dylan: Disaffordances and Dysaffordances in Code Code, in the broad sense of ‘architecture’ (Lessig, 2006), implicitly contains an idea of its user through its interfaces and affordances. Code may hold a prejudicially normative idea of its users, excluding some persons in a way that rises to the level of discrimination, per Winner (1980). Winner’s analysis, however, does not analyze the attributes of the technologies leading to discriminatory effects in terms of their non-affordances. This analysis is needed in order to distinguish (a.i) unavoidable exclusionary design (e.g. designing clothing for either normatively male or female gender presentations) and (a.ii) unproblematic exclusionary design (e.g. having a diverse but limited range of different skin color options for prostheses) from (b) problematic exclusionary design (e.g. a job application website that is inaccessible to persons with visual impairments, or face-recognition software programmed only with contrast patterns of light-skinned persons). This paper presents a theory of non-affordances that distinguishes unproblematic exclusionary code from discriminatory code, the discriminatory elements of which are described as either disaffordances or dysaffordances. Disaffordances and dysaffordances are illustrated with a series of technologies conceptualized under Ihde’s typology of human-technics relations (1990) and with reference to Latour (1992, 1999). Disaffordances are defined as technologies that fail to recognize differential embodied experiences that correspond to attributes constitutive of group and individual identities, including race, gender, disability, and religion. Dysaffordances are defined as not only failing to recognize identity-related differences but as also requiring non-normative users to misidentify themselves in order to gain material or social access to the commodities and services provided through these technologies. This theorization allows for the identification of thresholds where non-affordances become issues of ethics and justice, allowing for stronger, clearer arguments about the need to reform discriminatory code. The identification of this threshold also demonstrates how dis/dysaffordances not only participate in but sustain and actively construct exclusionary normativity (e.g. white normativity, male normativity, heteronormativity, bi-erasure, ableism, etc.), showing that code is not merely a cultural carrier but plays an active role in the social construction of deviance. Finally, the identification of this threshold allows designers to better identify issues in code during its initial formulation.

Wittkower, Dylan: Teh Intarwebs: Maed of Cats, Akshully Timothy Berners-Lee, often referred to as the “inventor” of the WWW, was asked in his Reddit AMA what one major use of the WWW was that he had not expected to come to define the web. He responded “Kittens. I never expected all these cats” (Vincent, 2014). This confirmed the longstanding truism that “the internet is made of cats.” This article seeks to articulate what “cats” are insofar as the internet is made of them. I put forth an inclusive theory of the Internet Cat as emblematic of numerous adjacent, related, and derivative participatory media practices—a paradigm case in a strong, Kuhnian (1996) sense: a particular conceptual moment through which an era of thought and interpretation is formed. The Internet Cat implicitly demonstrates that—far from the Web 1.0 model of information access and commerce—Web 2.0 is for sharing, remixing, storytelling, microcelebrity, community, and lolwut. The biological specificity of the Internet Cat presents a difficult question—why is the Internet Cat, as paradigmatic of Web 2.0, predominantly feline in species? A detour into the study of companion animals provides some suggestions: the cat provides strong analogues of human relationships in the way that human-cat relationships require distinctive processes of negotiation and mutual respect, different from the categorical allegiance of the dog. I argue that multiple other factors favor the use of cats as a medium of digital communication, including the affectively strong but narratively poor expressive character of the cat’s eyebrowless faces and reactive bodies. Finally, using my affective supplement theory (Author, 2012), I argue that the cuteness of the cat provides a feeling of being needed which corresponds to, but is bodily unfelt in, online practices of care—personal relationships as well as engagement with online communities. Taken together, these provide an account of why there are “all these cats” which is sufficient but necessarily incomplete, given the complexity of the phenomenon, and which demonstrates substantial value in the continued study of what may be a prima facie absurd object of research.

Xia, Baohua: On Francis Bacon As The Father of Philosophy of Technology Who is the true father of philosophy of technology? This is still a meaningful question within the discipline of philosophy of technology. Ernst Kapp (1808-1896), who coined the phrase “Philosophie der Technik”, published his Grundlinien einer Philosophie der Technik in 1877, and Henry Dircks (1806-1873), who coined the phrase “philosophy of invention”, published his Philosophy of Invention in 1867. But neither Ernst Kapp nor Henry Dircks should be regarded as the true father of philosophy of technology. The conditions for being called the discipline’s father include at least three: he was a truly great philosopher in the history of philosophy; he truly took technology seriously; and his technological thought is still alive. Francis Bacon (1561-1626) should be considered the true father of philosophy of technology. Francis Bacon called himself a guide for the human technological transformation from the ancient to the modern, and provided a theoretical program for that transformation. He pointed out that the essence of this technological transformation was to develop a new complex of technology-science, i.e., science-based technology and technology-based science, and he disclosed its grounds and foundations from varied perspectives, such as humanity, society, religion, and nature. Moreover, he answered the big question of how to realize the technological transformation. In his opinion, to achieve this technological transformation we must have a new logic that brings reason into invention, and construct the two worlds of invention. Francis Bacon claimed that the logic of invention includes two parts, the learned experience and the interpretation of nature, and that the world of invention includes the objective knowledge world and the collective action world. Nobody else in the history of philosophy has thought so deeply and so truly about the technological transformation. Furthermore, Francis Bacon put forward a series of philosophical propositions about technology to be further studied, such as the unity of the natural and the artificial, the unity of science and technology, the logic of technological invention, and technological objective knowledge and collective action. Francis Bacon called upon people to use sound reason for constructing a technological society. His thought can give theoretical support to a constructive philosophy of technology oriented toward a new technological transformation movement.

Young, Mark: Maintenance: Technology in Process Philosophers of technology have long disregarded maintenance as a derivative form of technical activity. This presentation seeks to challenge this perception by demonstrating how maintenance can and should be understood as a form of technological production in its own right. The first section of this presentation seeks to illustrate the creative role of maintenance by revealing the inadequacy of traditional conceptions of the practice of design as ‘technical problem solving’. Because the production of novel technologies makes new forms of life possible and therefore changes the technical landscape to which designers originally aim to respond, I will argue, we cannot regard the activity of design simply as providing ‘technical solutions’ to pre-existing problems. Rather, I want to draw attention to the way in which the design and production of technical artifacts are experienced as generating challenges, the solutions to which require creative activity on the part of users. In this respect the activity of maintenance can be understood as a form of creative technical activity which participates in the production of artifacts. The second section aims to outline how different conceptions of the practice of maintenance correspond to particular ways of understanding the nature of technology itself. Much recent history and philosophy of technology, by privileging processes of invention and innovation over use, has served to perpetuate a formal conception of technology, as the material manifestation of a designer’s intentions. From this perspective, maintenance is inevitably cast as derivative, as the preservation of the results of a designer’s creative activity. On the other hand, if we are to resist this narrow interpretation of maintenance, as I suggest, and recognize its creative and productive value, then we are also required to conceive of the nature of technology differently: as a process which is extended in time and inseparable from its social and material manifestations. In other words, recognizing the creative value of maintenance requires us to understand artifacts in the way Heidegger suggests, as unintelligible if not conceived as part of a holistic system of technologies and practices.

Young, Mark: Making or Using?: The Geiger Counter and Early Cosmic Ray Research This presentation seeks to problematize the dichotomy between production and use in the history and philosophy of technology through an examination of the early history of the Geiger counter. Traditional histories of the Geiger counter often imply that the production of the device ended when its designers, Hans Geiger and Walter Müller, managed to resolve problems of interference affecting the device in 1928. However, interviews with early cosmic ray researchers who used the device throughout the 1930s reveal that it continued to present researchers with a series of challenges that could only be met through innovative, productive activity. My goal in this presentation will be to show how the difficulties faced by historians in delimiting the production of the Geiger counter provide us with an opportunity to question some common assumptions about the nature of the practice of making itself. The first section of this presentation aims to demonstrate how the distinction between making and using stems from a formal conception of technological production, one which demarcates production from use in the evolution of an artifact by positing a point at which a designer’s intentions are realized materially. By illustrating the ways in which the early history of the Geiger counter subverts this model, I intend to question the extent to which the distinction between making and using provides us with adequate conceptual resources to explain processes of technological development. In response to this concern, the second section seeks to outline and explore an alternative conception of technological practice by drawing on the phenomenological framework for material culture outlined in the work of anthropologist Timothy Ingold. By illustrating how the function of technical artifacts is best understood as emerging after production, within contexts of use, this account challenges the hierarchy between making and using that has hitherto provided the dominant framework for the history and philosophy of technology, and calls for new forms of scholarship attentive to the creative dimensions of practice.

Yu, Xue: The Moral Agency and Moral Responsibility of Self-driving Cars Whether autonomous machines are moral agents, or whether they have moral agency, has become one of the most significant issues raised by the advent of autonomous technology and intelligent devices. Generally, an entity’s being a moral agent implies that it has moral agency. In this paper, I offer an alternative approach on which moral agency does not necessarily reside only in moral agents but can also be found in other entities with moral significance and moral status. If an entity x satisfies the following conditions: a) x has the intention to act in moral situations; b) x can independently make predictions and judgments on the relevant situations according to circumstances; c) x can make decisions to take actions based on these predictions and judgments, then x can be regarded as an entity that has moral agency, regardless of whether it is a moral agent. Taking Google’s self-driving car and Tesla’s Model S as examples for analysing the above conditions, it can be concluded that self-driving cars meet the standard of having moral agency, whether or not they are moral agents. Next, whether self-driving cars with moral agency should take responsibility in a moral context is decided by their moral decision-making ability. Compared with the moral decision-making capacity of humans as moral agents, self-driving cars are designed to make decisions, and some of them are designed to help human users make decisions. Considering that the moral decision-making ability of self-driving cars is driven by outside factors, it is hard to argue that self-driving cars bear the same moral responsibility as humans, who are moral agents driven by inner consciousness or mental states to make decisions. However, it can be concluded that self-driving cars with moral agency are part of a responsibility community together with designers and users within the framework of “design-use” in a moral situation, which implies that responsibly designing and using self-driving cars is crucial given their moral role in responsibility.

Zhou, Liyun: The Boundary and its Challenges between Digital Technology and Digital Art: A Phenomenological Perspective Traditionally, there has been a clear distinction between technology and art. But with the development of digital technology and digital art, the boundary between the two has become blurred. These boundary issues and their implications are explored with the aid of contemporary examples from a phenomenological perspective. 1. Digital technology and digital art: the ambiguity of the boundary. In the traditional understanding, technology is governed by rational and controlled logic, while art is sensory and indeterminate. In traditional forms of art, it is not difficult to distinguish technology from art. In digital technology/art works, however, the distinction between subject and object and the boundaries between performers and spectators become blurred. Digital art is not just an artificial thing or object, but primarily an "experience” or “process”. 2. From object to experience: transcendence of the boundary. In everyday life (the “natural attitude”), the distinction between technology and art is very straightforward. The application of the phenomenological method can help us suspend the natural attitude and activate different types of perception and experience through our “intentionality" of “being-in-the-world”. Merleau-Ponty’s concept of the body provides a theoretical resource for the integration of digital technology and digital art. The body is not only the source of technology and the foundation of art experience, but also the purpose of technical activities and artistic creation. Moreover, we can also draw theoretical resources from Don Ihde’s phenomenology of technology and Peter-Paul Verbeek’s theory of technological mediation. From the perspective of phenomenology, there is no objective, external, consistent standard of demarcation between technology and art. Whether something is technological or artistic depends on the connection between the body and it, and the source of this connection lies in the different perceptions and experiences of subjects in different situations of “being-in-the-world”. 3. Hope and danger: the challenges of the boundary. From a phenomenological perspective, to discuss the boundary between technology and art means to reflect on the metaphysical, ethical and aesthetic concepts and issues involved. On the one hand, thinking through the ambiguous boundary means that technology and art need cross-border cooperation to open new channels for people to experience their embodied selves and the world. On the other hand, it also means keeping a reflective and critical attitude toward the excessive application of technology, and making the world humane.

Special Tracks

Track: Anthropocene (1)

Contribution 1: Mitcham, Carl: Engineering Ethics: From Thinking Small to Big At its origins as a socially recognized profession in the late 1700s, engineering simply incorporated a narrow theory of the good advanced by Enlightenment philosophy. Civil (as opposed to military) engineering was defined as “the art of directing the great sources of power in nature for the use and convenience of man.” The term “use and convenience” implicitly referenced the ethical frameworks being developed by David Hume and Adam Smith. Over the next three centuries the abstract notion of “use and convenience” underwent a series of interpretations that began quite narrowly as company loyalty in a capitalist economic order but progressively enlarged to emphasize a paramount obligation to protect public safety, health, and welfare. Today the paramountcy clause has often been expanded further to include environmental protection. In a world now being transformed into an artifact of human construction and design, engineers — along with all non-engineers such as myself, whose lives are ineluctably changed by living in our engineered world — are called upon to think engineering ethics as more than some professional code of conduct. Small-thinking engineering ethics must expand into large-thinking engineering ethics. The period 800 to 200 BCE has been described as a pivotal or Axial Age in which, independently of each other, thinkers as diverse as Shakyamuni Buddha in India, Laozi and Kongzi (Confucius) in China, the Hebrew prophets in Israel, and Socrates and the early Greek philosophers introduced into human affairs a new kind of question: What is the proper way to be human? Today we are living in a New Axial Age in which we must ask: What is the proper way to engineer the world? What is the meaning of the engineering way of life — not just for engineers but for everyone who directly or indirectly contributes to and is influenced by the engineering way of being in the world? Engineering ethics is no longer only for engineers.

Contribution 2: Zwier, Jochem; Blok, Vincent: Gaia’s Garbage – Technological Insistence and Existence in the Anthropocene The question to be addressed in this paper concerns the relation between human existence and its technological condition in the age of the anthropocene. The hypothesis is that today, existence becomes manifest as radically technological. Technology is interpreted as the strife against collapsing into the earth, i.e. against becoming Gaia's garbage. Our claim will be that technological existence is this strife. In order to develop this claim, we first interpret the anthropocene as the concrete realization of what Martin Heidegger has called the essence of technology as Enframing (Heidegger, 1977). We contend that in the anthropocene, the world on a warming globe appears as a managerial resource to be technologically managed by humans as planetary managers. On the one hand, we call this situation in-sistent insofar as humans cannot escape this technological identity as planetary manager. On the other hand, we will argue that the anthropocene offers an opportunity to reconsider human ek-sistence. To substantiate this argument, we interpret planetary management as technological resistance, which is to say a stance against being undermined by the earth, i.e. a stance against becoming Gaia's garbage. We then read this stance along the lines of two philosophical trajectories: first, Heidegger's understanding of techné as a stance against physis in his reading of Sophocles’ Antigone (Heidegger, 2014); and secondly, Bataille's philosophy of human ek-sistence as a stance against, or repulsion of, waste (Bataille, 1989; 1991). From this reading, we associate both Heidegger’s physis and Bataille’s domain of waste (the “general economy”, cf. Bataille 1991) with the anthropocenic earth or Gaia (cf. Bataille 1991; Blok, forthcoming). We derive two things from this. First, that Heidegger’s consideration of the essence of technology (Enframing) as the overexposure of technical presence (i.e. the permanent fixture of “standing-reserve”, cf. Heidegger 1977) undergoes a reorientation in the anthropocene. This is because the anthropocene, as the concretization of Enframing, manifests the “standing-reserve” as standing against Gaia, who potentially lays it to waste. Secondly, that reading Bataille’s philosophy in the anthropocene gives rise to an understanding of technology which goes beyond the domain of managing the earth as resource (the “restricted economy”, cf. Bataille, 1991). Rather, technology now explicitly bears upon Gaia as the (wasteful) “general economy”, i.e. the domain that surrounds, overwhelms, and transgresses the restricted economy of technological management. We conclude that the human earthling is not solely an in-sistent technological manager of planet earth, but is technological ek-sistence, meaning that humans are at home upon the earth whilst striving against the unhomely earth of Gaia’s garbage.

Contribution 3: Stolzenberger, Steffen: Prospects of the Technoscene. Critical Remarks on Sloterdijk's Mystification of History In one of Peter Sloterdijk’s recently published essays the term anthropocene, indicating the time when human beings became responsible for the damage they cause to the planet earth, is replaced by that of the technoscene, which refers to the relation between humankind and nature as being totally determined by technique. Throughout the whole text Sloterdijk affirms the idea of technique as the determining principle of history and assigns a historico-philosophical logic to the latter in which technique has not yet spoken its final word. The paper aims to criticize Sloterdijk’s position by confronting it with Theodor W. Adorno’s philosophy of history, in which the ideology of technology, i.e. the imagination of technique as a power that has run out of human control, is explicitly criticized. Although Sloterdijk’s and Adorno’s philosophies of history share central motifs, most importantly that of human history running into a catastrophe (the logic of decay), it is to be shown that the difference between the two is substantial. While Adorno aims to grasp and thereby criticize the societal powers which control technique, and asks after the possibility of establishing humankind as the subject of history, Sloterdijk, in contrast, denies the latter’s status as an actor and predicts its survival through reaching a state in which mastery over nature is inverted into the human being’s total integration into the environment by means of technology. The true point in Sloterdijk is the insight that reason has not turned out to be the ruling factor in history and that the human potential has developed self-destructive tendencies. But it must be emphasized that he disguises this regression as technological advance and thereby encloses humankind in a mystified natural process in which the end of humankind is anticipated – the planetary orientation of philosophy leaves this world. Interpreting technique as a tactic for survival in an evolutionary struggle for power is, as will be shown, a parallel to the fascistic thinking of Oswald Spengler, to whom both Sloterdijk and Adorno have their own (negative) relation.

Track: Anthropocene (2)

Contribution 1: Conty, Arianne: Who is to Interpret the Anthropocene? Nature and Culture in the Academy It is somewhat ironic that just when scholars seem to be reaching an academic consensus critiquing the human exceptionalism of modern humanism, and to be replacing such exceptionalism with a contextual and processual understanding of the human species, we are suddenly told that we are living in a new geological era named the Anthropocene. Since atmospheric chemist Paul Crutzen named our geological epoch the Anthropocene in 2000, we have indeed begun to see the human everywhere, in a DEET-resistant mosquito and the ozone heavens. The Anthropocene has thus come to signify human responsibility not only for human societies, as studied by the human sciences, but for all of life on the planet earth, as studied by the natural sciences. Such a dissolution of the nature/culture divide will thus require the collaboration of scholars from many different disciplines addressing both scale and value. Yet notwithstanding widespread recognition of the need for an interdisciplinary response, there is considerable disagreement amongst natural and social scientists about the meaning of the nature/culture divide and its role in causing the Anthropocene. With such contradictory interpretations, the Anthropocene has come to represent a node in a theory debate with important consequences for understanding who we are and how to respond to the crisis and envision our future on the planet earth. Due to the importance of such responses, and the centrality the Anthropocene has taken on in both academic circles and the general media, this presentation will seek to evaluate the role of the nature/culture divide amongst natural scientists, Actor-Network Theorists (Flat Ontologists), and Political Theorists. If the first and second groups have interpreted the Anthropocene to mean that nature no longer exists, and thus that human “management” solutions are the only option, the third group has claimed that the Anthropocene actually depends upon the nature/culture distinction in order to objectify the material world as a free resource to be used. After clarifying these three positions, this presentation will focus on the role attributed by these three interpretations to technology, as extending or mediating between different forms of agency, or rather as extending human agency over all other forms of life.

Contribution 2: Leskanich, Alexandre: The Technology of History and the Grammar of the Anthropocene ‘History’, remarks Paul Valéry, ‘is the most dangerous product that the chemistry of the intellect has invented.’ In its capacity to contain everything, history constitutes the most comprehensive category. That the world is already historicized, that history is its dominant technology for making sense of what happens, that people possess an apparently ineradicable historicized consciousness, means that we are already predisposed to think historically, to engage in social practices that affirm history’s importance: i.e. the endless sites of ‘historical memory’; the reams of historical fiction; the constant parade of historical dramas on television and film; the interminable commemorations, memorials, re-enactments and ceremonies; the countless places of ‘historical interest.’ If technology is principally a means of compensation, of organization, of manipulation, then history proves the most dominant technology. Hence it is the pre-eminent means of making the world make sense: the world is now seen through historical knowledge and has already been historically categorized. Thus it can make anything mean anything, no matter how much things change. Historical narration works as a palliative to the disorder of the world and the indifference of the cosmos, giving us a place of significance, an order to existence. A historicized world defers to history by definition. Yet self-evidently, the world transformed by the human mind exhibits ecological failure. Therefore, since history was supposed to be a means of guiding human action, the world also exhibits a failure of history. Hence, since the mind invested history with a capacity for sense and meaning it can no longer deliver, history elicits a sense of cognitive failure, in turn producing disappointment, apprehension, trauma. Despite this, the Anthropocene is the latest, grandest organizational attempt by the history technology. As a managerial projection, the Anthropocene depends on a now defunct idea of historical comprehension, upon a redundant explanatory mechanism. Hereby we go ‘down in history’ as part of a sequential process according to history’s coercive categorical coordinators (e.g. ‘period’, ‘age’, ‘epoch’). Subsequently, the Anthropocene gives rise to apprehensions of incarceration, of our having been already historically managed by a now compromised technology.

Contribution 3: Imanaka, Jessica: Laudato Si’, Technologies of Power, and Environmental Justice In Laudato Si’, Pope Francis identified the “technocratic paradigm” as a leading cause of interlocking ecological crises. Francis’ critique of technology originates in Romano Guardini’s ideas of technology as heightening human powers over both the Earth and humanity itself, ideas akin to Janicaud’s notion of “hyperpower”. Guardini predicted that increased technological capability would bring us to a point of “final crisis” with nature. As with Bernard Stiegler’s works, Laudato Si’ explores the ways rapid industrialization and excessive consumerism combine in a financially engineered global economy to generate the anthropocene. Francis argues that technological solutions alone will not extricate humanity from impending crises. Rather, we need an ecological conversion that will alter mindsets, habits, and attitudes so as to transform environmental, social, economic, cultural, and daily ecologies for the sake of the common good and justice. Laudato Si’ expresses an ambivalent relationship to technology vis-à-vis environmental crises; technology has value, but requires circumscription. What are the technologies of power that operate in the technocratic paradigm? How do these technologies generate environmental injustices, and how can counter-technologies be developed to cultivate environmental justice? The paper begins by explicating Pope Francis’ account of the technocratic paradigm in Laudato Si’. Then, the background of these ideas is excavated in the works of Romano Guardini. Specifically, notions of “Not-Human Man” and “Not-Natural Nature” are highlighted in relation to the ways technology yields indirect forms of experience and responsibility. From such indirect phenomena it can be shown how unjust structures develop in ways that systematically degrade both the environment and human populations. These unjust structures lead to myriad environmental injustices, understood in terms of grossly inequitable distributions of environmental benefits and harms. Finally, to counter such injustices, new technologies must be developed. The integral ecological outlook favored by Francis may be regarded as a kind of eco-technology or as an eco-politics. But the cultivation of such an outlook requires developing concrete platforms for connecting people with each other and the planet in ways that foster depth, authenticity, and encounter. Such platforms harken back to Guardini’s insights about the potential benefits of indirect experience.

Track: Anthropocene (3)

Contribution 1: Szerszynski, Bronislaw: Technology as a planetary phenomenon If the philosophy of technology is to respond adequately to the challenge of the Anthropocene it must do more than apply itself to global environmental problems; it must also understand technology as a planetary phenomenon. Technology has become planetary in its impacts and its dynamics: the human–technology coupling is increasingly affecting the operation of Earth systems, and increasingly behaves as an autonomous ‘technosphere’ that entrains more and more entities and resources into its operation (Haff 2014). But technology was also always already planetary, in the sense that, like the biosphere, it is a phenomenon produced by a far-from equilibrium planet self-organising on multiple timescales under thermodynamic imperatives. In this paper I expand on this idea, showing how philosophy of technology can draw on ideas from the natural sciences in order to understand technology as an ‘earthly’ phenomenon. Firstly, I argue that Earth technology as an object of study has to be situated amongst the other physical entities and processes with which it shares the dense space of our home planet, illustrating this approach by describing mobilities technologies in ways that cut across conventional boundaries between inanimate matter, living things and artefacts (see http://dx.doi.org/10.1080/17450101.2016.1211828). Secondly, I argue for the importance of also situating technology against the background of the Earth’s ‘deep past’ – its long, contingent, trajectory of self-organisation, passing through key points of ‘bifurcation’ such as the emergence of eukaryotic cells and of multicellular animals (Maynard Smith and Szathmáry 1995) – exploring the complex way that the emergence, stabilisation and evolution of the technosphere fits into the pattern of earlier ‘major transitions’ (http://lancs.ac.uk/staff/isabs/metazoic.pdf). Thirdly, I show how this kind of approach can provide clues about the ‘deep future’ of the Earth’s technosphere as it passes through further major evolutionary transitions in its self-organisation and its relationship to the human, animal body and the wider Earth system (http://anr.sagepub.com/content/early/2016/10/18/2053019616670676). I conclude by drawing out the implications of this analysis for how we understand and respond to the current ‘Anthropocene moment’.

Contribution 2: Lemmens, Pieter: Re-Imagining the Noosphere. Reflections on Digital Network Technology, Energy and Collective Intelligence in the Emerging Ecotechnological Age If there is one term that captures our current predicament like no other, it must be that of the ‘anthropocene’, the name of the proposed new geological epoch in which humans have apparently become the most important and influential geological (f)actor. Although opinions concerning its precise dating diverge, the consensus among Earth System scientists is that its beginning roughly coincides with the onset of the Industrial Revolution and the invention of the steam engine, initiating a process of massive entropy production throughout the Earth System. This has led French philosopher Bernard Stiegler to characterize the anthropocene as the entropocene and to suggest that our response to it should consist in a negentropic turn of the global technosphere, transforming it from the destructive, anthropocentric, wasteful and care-less apparatus of exploitation Heidegger called enframing into a genuine system of care, co-operative and co-productive with the biosphere, in order to ‘overcome’ it and usher in what he calls the ‘neganthropocene’. The global installation of the digital techno-noo-sphere represents a literally mind-blowing epochal change, since it entails nothing less than the complete restructuring of the organological configurations that constitute human cognition and intelligence. Even more mind-blowing is the necessity of collectively reappropriating and repurposing it for overcoming the anthropocenic condition, i.e., of transforming it from the prime conduit of entropic tendencies into a new social, economic and cultural system of care for the biosphere as our ultimate life support system, so as to turn it into an engine of negentropy again. In my talk I will reflect on Stiegler’s pharmacological view of this negentropic turn of the global noosphere as organologically conditioned by the digital technosphere, focusing on the question of the relation between digital technology and collective intelligence in view of its necessary yet currently unimaginable and arguably improbable ‘upgrade’ for dealing with the anthropocene, and in particular on how a global noosphere more co-operative with the dazzlingly complex biosphere could be imagined, given both its current subservience to the nihilist goals of capitalism as well as its future requirement to somehow compose with the autonomous ‘Gaian’ agencies that constitute the Earth System, agencies that are nevertheless expected to become increasingly monitored and ‘co-programmed’, or so it is claimed, through the digital technosphere’s growing plethora of globally networked sensorial and controlling technologies.

Contribution 3: Zwart, Hub: From the nadir of negativity towards the cusp of reconciliation: A dialectical (Hegelian-Teilhardian) assessment of the anthropocenic challenge This contribution addresses the anthropocenic challenge from a dialectical perspective, combining a diagnostics of the present with a prognostics of the emerging future. It builds on the oeuvres of two prominent dialectical thinkers, namely G.W.F. Hegel (1770-1831) and Pierre Teilhard de Chardin (1881-1955). Hegel himself was a pre-anthropocenic thinker who did not yet thematise the anthropocenic challenge as such, but whose work allows us to emphasise the unprecedented newness of the current crisis. I will notably focus on his views on Earth as a planetary process, emphasising that (in the current situation) the “spirit” of technoscience is basically monitoring the impacts of its own activities on geochemistry and evolution. Subsequently, I will turn attention to Teilhard de Chardin, a palaeontologist and philosopher rightfully acknowledged as one of the first thinkers of the anthropocene and whose oeuvre provides a mediating middle term between Hegel’s conceptual groundwork and the anthropocenic present. Notably, I will discuss his views on self-directed evolution, on the ongoing absorption of the biosphere by the noosphere, and on emerging options for “sublating” the current crisis into a synthetic convergence towards (what Teilhard refers to as) the Omega point. I will conclude (a) that, after disclosing the biomolecular essence of life, biotechnology must now take a radical biomimetic turn (a shift from domesticating nature to the domestication of domestication, i.e. of technology); (b) that reflection itself must become distributed and collective; and (c) that the anthropocenic crisis must be sublated into the noocene.

Contribution 4: Galzacorta, Iñigo; Garagalza, Luis; Rodríguez, Hannot: Rethinking Technology in the Anthropocene: Relations and "Gravitational Forces" It seems increasingly clear that the challenges related to the emergence of the Anthropocene will represent a dominant scientific-political issue in the decades to come. The extreme gravity of human impacts on Earth ecology cannot be ignored. However, the progressively more radical ecological transformation of our planet may represent a chance to reformulate the categories by which our technological actions are conceived and planned. In that sense, we claim that the Anthropocene is characterized by the fact that the enormous dimension reached by our technological interventions brings with it a possibility to understand more appropriately the complex relations of interdependence between technology, humanity and ecological systems. The Anthropocene would be the absolute proof that human history and natural history are indissolubly interconnected (Latour 2014). The point here is not just that humanity has become for the first time a crucial force in the evolution of the Earth System. This process also makes it difficult for humanity to attribute to itself the role of the “subject” of those very transformations. In that vein, the emergence of the Anthropocene demands a reconceptualization of the idea of “agency”, in the sense that it should be conceived as included in a complex network involving not just human and non-human actors but also time scales that transcend the human horizon. The meaning and capability of technological agency are thus radically relational, namely constituted and constrained by the “gravitational force” (Bryant 2014) of certain (f)actors and interactions. The goal of our research is to elucidate these dynamics of constitution and constraint concerning the possibilities and limits underlying the interactions between technology, human society and the Earth system.

Contribution 5: Fuentes Palacios, Aníbal; Hernández Vargas, José: Prose and verse. Accuracy and creativity in the biomimetic transference. The process of transference from natural to technological systems involves many assumptions that we must analyze and understand in order to grasp the possibilities and limitations of biomimicry, both as a technological tool and as an epistemological approximation to natural systems. The biological reference that serves as the subject for mimicry is vague enough to encompass many aspects of the natural world besides biological systems. This naturalist paradigm aims to validate technology as an extension of nature and therefore to conceive the resulting human creation as part of the natural logic. On the other hand, the idea of mimicry moves from the replication of the formal aspects of nature to abstract interpretations of principles and behaviours in living organisms. This broader approach focuses on the representation of nature as a diagram, able to be transferred to any other context. This abstract object can now be merged with many others, since its reality is no longer the actual piece of nature it comes from. This process of abstraction is based on a descriptive approach to nature, where the description becomes the new reality of the object and mimicry is the ability to describe the idea in the object as it was described in nature. As a complex system, nature is not necessarily adequate or efficient, and the idea of a natural solution is constructed within the description process rather than in the translation. This change of coordinates, from a natural to a technological context, opens the question of the coherence of the biomimetic system and its autonomy from the logic and principles of nature, understanding that in every biomimetic transference there is a transaction where something is lost and something is won: what biomimicry loses in accuracy can be won in creativity. Put in linguistic terms, biomimicry moves between the attempt to be concrete and descriptive like a scientific paper and to be creative like a poem.

Contribution 6: Dicks, Henry: Biomimicry: An Alternative Path for Reconciling Art and Technology From the advent of modernity onwards, there have been frequent attempts to bridge what is often seen as a problematic divide between art and technology. The starting point for most of these attempts has been the observation that whereas art aims to be aesthetic, technology aims to be functional. This has led to the development of various strategies – usually theorized and implemented by architects and designers rather than by artists or engineers – for aestheticizing technology, from the traditional recourse to decoration and ornamentation to modernist approaches which simply declare – often by fiat – the aesthetic virtues of the functional, as in Le Corbusier’s (1923) celebration of the work of engineers – factories, steamships, etc. – as the paradigm of beauty, as well as, more recently, postmodern attempts to aestheticize the functional by means of playful contradiction, collage, eclecticism, ambiguity, and so on (Jencks 2011). The basic argument of this paper is that biomimicry opens up an alternative path for reconciling art and technology. Just as the ancient Greeks saw techné, which comprised both art and technology, as “imitation of Nature”, so the emergence and generalization of biomimicry as the basic design philosophy of a new era focused on sustainability raises the possibility of reconciling art and technology by seeing both of these modes of techné as “imitation of Nature”. From this perspective, it is not aesthesis (sensory perception), but rather mimesis (imitation) – a topic analysed extensively by recent or contemporary philosophers of art and literature (Adorno, Auerbach, Benjamin, Blumenberg, Derrida, Lacoue-Labarthe, Ricoeur, etc.) but generally overlooked by mainstream philosophers of technology (Borgmann, Feenberg, Heidegger, Ihde, Mitcham, Winner, etc.) –, that needs to be imported from philosophy of art into philosophy of technology. This new path for reconciling art and technology in turn raises the possibility of the first major shift in philosophy of technology since the inception of modernity: rather than seeing Nature as the "object" of technology, something to be “mastered and possessed” by technologically-equipped human subjects, Nature would henceforth become the "source" of technology, a model which all human techné would seek to imitate or draw inspiration from.

Track: Biomimicry: applied

Contribution 1: Stojanovic, Milutin: Biomimicry in Agriculture: Is the ecological system-design model the future agricultural paradigm? Comprising almost a third of GHG emissions and having an equally prominent role in the pollution of soil, fresh water, coastal ecosystems, and food chains in general, agriculture is, alongside industry and electricity/heat production, one of the three biggest anthropogenic causes of breaching the planetary boundaries. Since, in humanly relevant terms, the Anthropocene is a crisis of the biosphere, understanding our ecosystem-dependent subsistence technology becomes of prime importance. Most of the problems in agriculture, like soil degradation and diminishing (necessary) biodiversity, are caused by unfit uses of existing technologies, and approaches mimicking agriculturally relevant, functioning natural ecosystems seem necessary for the appropriate organization of our toxic and entropic agro-technologies. Our thesis is that the ecological crisis necessitates a new focus of the philosophy of technology on agriculture – that eco-curative and sustainable uses of agro-technology require a paradigm shift from the chemical model of agro-systems to the ecological system-design model of agriculture. In this context, we tackle the main challenge: Is there a sustainable, non-polluting broadscale agriculture, and a related low-energy-consuming model of rural society? In particular, following the new biomimetic paradigm of ecological innovation, we ask in what sense we can mimic natural solutions in agriculture, and to what extent “doing it the natural way” is desirable or even compatible with the Earth system crisis and with the world and urban demographic momentum of the last fifty years. We distinguish between Integrated agriculture and Permaculture and argue that the former, nature-mentored approach (contrary to the latter, nature-modeled approach) is more appropriate for the sustainable broadscale agriculture necessary for the growing world. However, it is not clear how this agricultural bio-integration will interact with the predicted automatization of work following the ongoing digital revolution, and whether the natural farming alternative can emerge as a socially acceptable solution for the anticipated technologically redundant workers.

Contribution 2: Castiglioni, Sara: The Fifth Wave - Applying Biomimicry to Business Regardless of our actions or inactions, technology keeps advancing at a strong and firm pace, in correlation with Moore’s law (Moore, 1965). No matter what we do as humanity, the fourth wave is around the corner. It is well known that not everyone has transitioned from the first to the second wave, and fewer have done so from the second to the third; this is the core of Toffler’s proposal, which holds that in different countries, even regions, the first, second and third waves can coexist, resulting in inequity and poverty. Will it be possible for them to catch up? Is that what we really want? Is that the future that we envisioned? In this paper I argue that our final destination as humanity does not have to be either utopia or dystopia, but can be protopia – an incremental process, as coined by Kevin Kelly. I call this scenario “The Fifth Wave”. In this wave, a model of interaction and cooperation between Nature and Technology provides the language and framework to help organizations achieve the goal of leading a Sustainability Learning Evolutionary Economy. The discussion about Biomimicry in business is not just about a novel way of pursuing product and process innovations; it goes much deeper, to the heart of the business, acting at the level of core purpose and core values and exploring the ethos and re-connect parts of its philosophy. Mimicking or emulating is not good enough; it is about using life’s principles in business, or “Business Principles for the Firm of the Future”, to help organizations and communities flourish and thrive in a systemic way. The conclusions will help to understand how the principles mentioned above provide a new way to see how Biomimicry can be used in real-world business scenarios where some technological constraints may apply.

Track: Critical Infrastructures (1)

Contribution 1: De Keijser, Anais: The criticality of people as infrastructure: creating resilience in the provision of urban services through System-D System-D comes from the Francophone term ‘système-débrouillard’, which is considered a philosophical approach to urban life in countries of Francophone Africa. It refers to a mentality of resourcefulness, where actors find solutions to problems with the opportunities at hand. Through it, self-reliance materializes, enabling the provision of services to those who find themselves beyond the capacity of the centralized infrastructure (be it in terms of access or failure). This reality has been captured in anglophone debates with terms such as that of 'people as infrastructure' by Abdoumaliq Simone (2004). Through empirical data collected in Bujumbura, the capital of Burundi, supported by secondary literature, this paper takes on a post-colonial critique of debates on 'critical infrastructure' and argues for the need to consider System-D as a critical infrastructure in contexts characterized by splinterism and uncertainty. System-D creates resilience in the provision of urban services at times when the centralized infrastructure does not, and in doing so should locally be valued as more critical than its 'western' counterpart. The paper focuses on the limitations of current critical infrastructure debates and on the contribution that the adoption of the francophone concept of System-D can bring to these debates.

Contribution 2: Müller, Marcel: Reading Cities – Building Blocks for a Grammar of Technical Structures Grammar is the principle of order for every text. It consists of the specific rules and norms we rely on to identify different narratives, plots and overall meaning in linguistic texts. Through grammar we become aware of the relations of words and sentences and of the way those semantic units are structured in our language. I will argue that the grammar of things provides a similar structuring of the relations between humans and artifacts. The paper proposes that, in the same way that texts can be read through words and sentences, socio-technical systems like cities and infrastructures can be read through practical relations. It will show that both the semantic units and the grammar of these material texts emerge from a complex dialectic of internal and external relations between humans and things. The way we use our artifacts as means for a wide variety of ends is composed of situational sets of technological, social and societal norms as well as of technical rules directly prescribed by these very artifacts (Latour 2007). Moreover, certain practices embodied in our artifacts become such integral parts of our society that they seem natural to the way human beings organize their lives. As hexis, a form of habitus, they can govern our further endeavors (Sartre 1976). The relationships between humans and their ends as mediated by their technology (Hubig 2006) can be seen as the written words or sentences of those material texts; the interactions of various forms of such practices (Verbeek 2015) can be seen as the grammar that makes the single practices tangible as parts of a complex structure, and vice versa. What is text and what is grammar depends on the reader's perspective on these material texts. Through hermeneutical analysis, different narratives of power and control, of growth and decline, arise. The criticality of critical infrastructures, for example, consists not only in technical probabilities of failure and in preparedness, but also in cascading effects on other infrastructures and in their severity for human beings. Looking at interdependencies, unscrutinized value judgments and path dependencies as semantic building blocks of our cities and infrastructures, we come to a better understanding of our built reality.

Contribution 3: Maurer, Florian; Fritzsche, Albrecht: System Protection and the Benefit of Others – a Service Perspective Historically, efforts to ensure the availability of the critical infrastructures of society (CI) have built on the principles of redundancy and stock-keeping. This, however, is often economically unfeasible and usually difficult to sustain. Normal Accident Theory (NAT) and its rival, High Reliability Theory (HRT), have added a higher level of reflection, leading to alternative concepts such as robustness and resilience. At the same time, they have shifted the focus of these efforts from CI as a service for the benefit of society to CI as a separate entity that is protected for its own sake. A similar development can be observed in the shift of efforts from the protection of CI in general to the protection of the information and communication technologies (ICT) in CI. This article focuses on the CI "Transportation", in particular supply chain networks (SCNs) and the most critical actors in these systems: transport service providers (TSPs). TSPs are responsible for the frictionless services in and around the transport and warehousing of goods and the advance exchange of data about them. On the one hand, TSPs act as independent service systems in supply chain networks, doing business and making a profit. On the other hand, they act as intermediating service systems, providing at their core the transportation and warehousing of goods between (global) offer/supply and demand (vendor/customer). A deviation, caused by a TSP itself or by its dynamic environment (e.g. natural disaster, man-made disruption, ICT failure), can impact service clients and can easily cascade into a major disruption, the breakdown of the whole network, and beyond. TSPs operate within a hierarchy of service systems, ranging from technical services like ICT systems up to actual environmental and social functions. Based on empirical evidence from TSPs, this paper investigates how the benefits provided by these systems can change in the course of agile and flexible responses to external change. This is set in relation to the concepts of robustness and resilience in CI and to the general notion of systemic integrity. We compare this with the concept of services as acts for the benefit of others, identify inconsistencies and suggest solutions.

Track: Critical Infrastructures (2)

Contribution 1: Friedrich, Alexander: The Grammar of Cryopreservation Very low temperature storage technologies make it possible to preserve ever more biological substances and entities. Continually developed since the second half of the 20th century, cryotechnology has given rise to a new kind of archive or repository – one that has come to be called the 'cryobank'. In cryobanks, a great variety of organic substances, such as microbes, seeds, specimens, body parts, vaccines, egg cells, or semen, can be stored over a period that exceeds many times the lifespan of the actors who decided to cryopreserve them – entities that have been living, are still vital, and possibly will be alive again. In this paper, frozen entities that are potentially animated but hibernating in cryobanks will be addressed as 'cryofacts'. Cryofacts are the product of a powerful technology that is able to set living entities into a state of being lacking any signs of life. Technically, cryofacts are dead, but they are not. Actually, they represent a third state between life and death, animate and inanimate nature. To talk about these intermediate beings, a new vocabulary is required (of which 'cryofact' itself is a part). Moreover, they come along with a peculiar grammar – of speech, practice and technology alike. In this paper, the grammar of cryopreservation shall be discussed in the following respects. (1) While the concept 'preservation' suggests that something is kept just in the state it was in at the moment of preparation, cryopreservation fundamentally transforms anything that is turned into a cryofact. Cryofacts are on the one hand less and on the other hand more than they have been before, and than what they will be after re-thawing. They have been turned into stabilized perishables, immutable growables, immortable mobiles, disposable potentials, surplus funds. In this perspective, cryofacts can be regarded as cold-based option values that save chances for future actions. In this way, cryofacts introduce a new grammar into the course of life: 'later is never too late (we have a backup).' (2) With cryobanking, organic entities can be stored for future or present purposes. The large-scale cold storage of animal or human sperm and egg cells, blood products, or vaccines enables, for example, a large-scale availability of choices regarding reproduction and health. However, with the progress of cryopreservation it has turned out that the potential use of cryofacts generally exceeds their intended purposes. Cryopreserved blood specimens from the 1960s, for example, can now be used to study the former condition of microbes that were included in the blood samples before they gained antibiotic resistance. Thus, biomedical progress might reveal many new, unexpected options for using the potential of the frozen archives. Therefore, the anticipation of possible future options has led to the generalized practice of preserving cryofacts for 'purposes as yet unknown'. (3) To ensure both that intended options for given purposes will be saved for the future and that potential options for as yet unknown purposes will be preserved, an intricate network of practices, technologies, measures, information, communication, and regulations of cryofacts has been built up – a worldwide web of artificial coldness. As a comprehensive infrastructure of cryopreservation, this network has unfolded a new grammar of constructing, engineering, composing, hibernating, and securing matters of life in appropriate, reliable, sustainable, and sociable ways – a grammar based on a sort of infinite future perfect progressive: 'Make sure that we are going to have been preserving possible options, as yet unknown!' In the proposed paper, I will present and discuss how this grammar manifests itself in the language, practice and technology of cryopreservation.

Contribution 2: Jerónimo, Helena Mateus; Garcia, José Luís; Mendonça, Pedro Xavier: Philosophy and disasters: the outbreak of Legionnaires' disease in Portugal The relationship between philosophy and disasters, catastrophes and calamities goes back to the debate between Voltaire and Rousseau about the Lisbon earthquake. From this 18th-century event through to the present day, the debate has developed alongside the interrelationship between science, technology, economy and society. Several of the 20th- and 21st-century disasters (Chernobyl, Katrina, Fukushima…) demonstrate that vulnerabilities and threats derive from an infinite number of contingencies, brought about by either natural or technological events, which may lead to a chain of calamities at multiple levels. One of the most serious international outbreaks of Legionnaires' disease – an atypical and potentially fatal pneumonia – occurred in Portugal in November 2014, when 14 people died and 377 were infected. This outbreak dramatically combined two factors: on the one hand, a mix of atypical weather and climate conditions which led to the dispersion of the bacteria, and, on the other hand, the existence of a contaminated cooling tower at a local chemical factory. While the first is of natural origin, the second is anthropogenic, being the product of 20th-century technology and industrial practices. This outbreak, besides being an epidemiological event, demonstrated the link between ecological, social, and technological risk factors. Using evidence from documentary analysis and in-depth interviews, the aim of this paper is threefold: (1) to argue that the cooling towers which caused the bacteria to spread are in themselves a risk factor, as they are subject to the multiple interactions of complex technological and industrial systems, and even to human failures regarding their proper use, maintenance, and cleaning; (2) to demonstrate that the combination of industrial activity and a high population density, as well as the emphasis on economic growth and job creation and the trust in effective procedures for monitoring and prevention, all combine to make the dangers "invisible" to local residents; (3) to discuss the fact that the outbreak occurred following a change in the law which abolished the compulsory external auditing of air quality in industrial buildings. The severe 2007-2008 financial crisis, compounded by the legal enactment of the option of self-auditing, further aggravated concerns regarding the transparency of health and safety procedures and both the government's and industry's capability to monitor such risks.

Contribution 3: Suri, Anshika: Women's everyday contestations and negotiations with technology in cities of East Africa Technology is a significant site of gender negotiations where both masculine and feminine identities are constructed and deconstructed. However, women's everyday encounters with technological artefacts are rarely recognized, even though one in three women worldwide still lacks access to safe toilets. Previous studies reveal that planning for sanitation infrastructure is often far removed from women's needs, their socio-cultural practices and existing gender constructs. Furthermore, the lack of a private space often forces sanitation to become a public act, i.e. open defecation, with several studies highlighting a sense of 'shame' attached to it due to socio-cultural constructs of the female body. Hence, the female body becomes not only a site of oppression, but also one of contestation and negotiation, and a socio-political tool within urban infrastructure regimes. Although feminist scholarship in STS has theorized that users are active participants in shaping the gendering of artefacts by interpreting and using technologies, innovators construct many different representations of users and objectify them in technological choices which exclude specific users. Therefore, a critical feminist perspective on technology can help strengthen the investigation of access to sanitation provision by including the wider gender contexts within which such technologies are designed and used, and by starting a critical debate about what and whose needs are to be met. In this paper, I investigate women's encounters with sanitation infrastructure provision and how it affects their everyday life in informal settlements in Dar es Salaam and Nairobi. Using data collected through qualitative semi-structured interviews, I highlight how the ways in which women interact with, inform and transform infrastructure contribute to its social shaping. The findings of this investigation highlight that inadequate public infrastructure may contribute to propelling the fear of sexual violence among women residents, a fear further accentuated by reductive technological design strategies.

Track: Engineering Epistemology / general

Contribution 1: Zoglauer, Thomas: The nature of technological knowledge Agency in socio-technological systems is a rule-following activity, which is guided by technological knowledge. It is an open philosophical problem what kind of knowledge technological knowledge is and what distinguishes it from scientific knowledge. I will call this the "knowledge problem". Different classification schemes for technological knowledge have been proposed by Marc de Vries, Günter Ropohl and others. They present a rich variety of knowledge types, ranging from simple abilities, skills and practical expertise to the knowledge of rules and technological laws. A distinction between two basic knowledge types has emerged as fundamental: the distinction between knowing how (or implicit knowledge) and knowing that (or explicit knowledge). An often repeated conviction among philosophers of technology is the thesis that in technology the dominant kind of knowledge is knowing how, whereas the main object of science is knowing that. It is also claimed that there can be pure knowing how without knowing that. Another dispute concerns the question of whether knowing how can be reduced to knowing that, or vice versa. Per Norström has recently argued that knowing how and knowing that are both genuine and irreducible types of knowledge. Critically reviewing the main positions and arguments in this debate, I will show that there is no clear distinction between knowing how and knowing that. Different levels of technological knowledge have to be distinguished, beginning with abilities and skills on the lowest level, moving to practical expertise on the middle level, and ending with scientific knowledge on the highest level. In practice, knowing how and knowing that are always interwoven, with sometimes the one and sometimes the other dominating. An analysis and comparison of model building in science and technology will substantiate the thesis that technological success crucially depends on the scientific underpinning of technological rules. Metaphorically speaking, practical rules represent the surface structure of the grammar of things, while scientific laws form its hidden deep structure. This solution to the knowledge problem makes it possible to understand the relationship between science and technology more clearly.

Contribution 2: Chakrabarty, Manjari: The Critical Function of Science in the Origins of the Steam Engine Historical-philosophical studies investigating the character of science-technology relations have largely been dominated by what is often called the 'reductionist view'. The reductionist view asserts that technology emerges from science and entails the mere application of scientific knowledge to the making of artifacts. The present paper challenges this widely held reductionist view, which implies a constructive function of (theoretical) science in technology, by examining the prenatal history of the first fully self-acting steam engine, built by Thomas Newcomen (1663-1729) in the early 1700s. The prenatal history of the Newcomen engine has been documented quite extensively by scholars. Drawing freely and gratefully upon such historical material, we would like to discover the principal function of (theoretical) science in Newcomen's invention of the steam engine. While some scholars argue that scientific principles concerning heat and mechanical power, what later came to be known as thermodynamics, were not applied to the steam engine before the work of James Watt, others point out that without the discovery of the phenomenon of atmospheric pressure by Evangelista Torricelli, or that of the power of atmospheric pressure over a vacuum by Otto von Guericke, the steam engine would not have developed. The present study wants to find out how far the old saying, that science owes more to the steam engine than the steam engine owes to science, is true. The paper has two sections. The first section examines the prenatal history of Newcomen's engine and finds that, though there was a strong connection between the science of pneumatics (but not thermodynamics) and the technological practices related to early steam engines, that piece of theoretical science did not itself reveal how to construct the final form of the engine. Considering this key point, that science does not have a constructive function in early steam engine technology, the next section proceeds to examine the real potential of science in technological innovations. Technological implications inherent in scientific conjectures, as Karl Popper (1957) noted long ago, come into sight only when such conjectures are formulated negatively, in the form of prohibitions. For example, the law of conservation of energy can be expressed as: 'You cannot build a perpetual motion machine.' Applying this old Popperian insight, we find that the technological implications of, say, Otto von Guericke's scientific discovery of the power of atmospheric pressure over a vacuum become most visible when it is put negatively: you cannot prevent a piston fitted into an evacuated cylinder from being pushed in by the pressure of the atmosphere. Expressed in this negative form, von Guericke's scientific work clearly indicates what not to do. Upon a close review of the science of pneumatics that was an indispensable prerequisite of the Newcomen engine, the paper ends with the argument that science had a critical (or negative), and not a constructive, function in early steam engine technology.
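
A purely illustrative logical gloss on this Popperian point (an editorial addition, not part of the abstract): a universal law and its prohibitive reformulation are logically equivalent,

$$\forall x\,\big(P(x) \rightarrow C(x)\big) \;\equiv\; \neg\,\exists x\,\big(P(x) \wedge \neg C(x)\big),$$

where $P(x)$ may be read as "$x$ is a physical process (or machine)" and $C(x)$ as "$x$ conserves energy". The left-hand form states the law positively; the right-hand form says that no energy-creating machine exists or can be built, and it is this formulation that wears the design constraint on its sleeve.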

Contribution 3: Montminy, David; Meunier, Gabriel: Functional stance towards science and engineering Recent discussions in philosophy of science have centered around engineering, concerning either its relation to other sciences (de Vries 2010; Boon 2011, 2012; Knuuttila 2011) or its use of models (Boon 2008; Pirtle 2010). In this paper we argue that adopting a functional stance towards science allows one to consider engineering as constitutive of science as a whole. To do this, we will show how various functional accounts can satisfactorily be applied to engineering. We will show that both Bailer-Jones' (2009) account of models in science and Woodward's (2014) account of causality can be seen in functional terms and thus provide an understanding of science in which engineering is an integral part. Furthermore, we will also show that a functional account of scientific explanation (Baetu 2015; Bouchard 2013; Craver 2013; Huneman 2013), coupled with Vermaas and Houkes' (2013) functional account of artefacts, is appropriate to highlight the specificity of engineering within the sciences. Specifically, we argue that the stance from which functional ascriptions are made is a privileged locus from which to assess the difference between the natural sciences and engineering.

Contribution 4: Naoe, Kiyotaka: Engineering and tacit knowledge Contemporary engineering activities are for the most part performed collectively. Manufacturing, designing and managing are usually done by groups. In these processes, members of the groups share not only explicit formal knowledge, but also the tacit, unverbalized knowledge which constitutes the core of their expertise. The concept of tacit knowledge was first advocated by M. Polanyi. It is widely used by many social and cognitive scientists as a non-reductionist view of knowledge. But, as Polanyi himself emphasized, this knowledge was regarded as "personal". In this presentation I discuss the collectiveness of tacit knowledge in engineering processes. We can distinguish (1) tacit knowledge in bodily relations (especially in manufacturing processes), (2) tacit knowledge in remotely distributed relations (e.g. tacit knowledge connected via the internet), and (3) tacit intellectual knowledge (e.g. creativity, intellectual intuition, engineering judgment). My discussion will be focused on (1). Collective intention in manufacturing: the introduction of computing was formerly criticized for reducing highly skilled work (e.g. lathe work) to simple, de-skilled work. But, as Nakaoka points out, in computerized factory manufacturing the former totality of the proficiency of the whole person is replaced by that of the workers' group as a whole, or by the totality of the whole process flow. Proficiency is distributed among the human actors. D. Norman shows, through his analysis of the cockpit, that the expertise in an airline flight system resides not only in the knowledge and skills of the human actors, but in the organization of the tools in the work environment as well. We can see such distributed cognition as a characteristic of cognition and action mediated by artifacts. What is important is that this distribution does not only mean a routinely regulated embodiment relationship, but also an adequate problem-solving ability which gives the human-machine system smoothness and flexibility. In analyzing this adequacy and flexibility, the theme-horizon structure in the "theory of relevance" (A. Schutz) is informative. In this presentation I discuss the epistemological structure of collective tacit knowledge by taking up the example of a Japanese chalk factory employing many intellectually handicapped people.

Track: Engineering Epistemology / special

Contribution 1: Eckert, Claudia; Stacey, Martin: Object references as engineering knowledge Engineering products are designed in an incremental way, where a new design is based on an existing one, and components, systems and solution principles are maintained over product generations and shared across products. We argue, based on nearly 20 years of empirical studies of engineering processes, that references to existing objects play a key role in the creation of new design ideas, in making sense of suggestions by others, and in evaluating design ideas in the early phases of design processes. To limit the cost and risk associated with engineering products, companies try to maximize the commonality across products and reduce novelty in each product generation. Engineers usually work in component teams over multiple product generations. Past designs are recorded in multiple models, such as CAD drawings or simulations, but the reuse of models is still proving a practically challenging problem. However, engineers remember the designs that they have generated and sometimes keep the physical components in their offices. They frequently use references to past designs as a shorthand for particular combinations of solution principles, materials, geometry and so on. In addition, they are familiar with solutions used by competitors for similar problems and use these too as both inspirations and reference points for new designs. Reference to objects is a parsimonious way of indexing highly complex information and enables designers to reason about complex problems and situations by thinking about the properties and behaviour of previous designs and how they might be modified. However, it is often not clear what the scope of the similarity is between a new design and the reference design. While a core concept such as the configuration that is referred to might be explicitly mentioned, other characteristics such as materials or manufacturing processes are left unspecified. This creates the risk of misunderstandings between the members of design teams, which are only discovered when the aspects of the design are defined explicitly and integrated later in the process, as well as the risk of unintended fixation on aspects of existing designs.

Contribution 2: Schiaffonati, Viola: Engineering knowledge and experimental method: the case of experimental computer engineering Investigations of experimental knowledge in engineering have so far mostly been dismissed within the traditional framework of engineering as an applied science. More recently, a wide debate on the nature of the engineering sciences has, on the one side, stressed their continuity with the natural sciences (Boon 2012) and, on the other side, pointed out a methodological distinction that differentiates them from both science and technology (Staples 2015). In this paper my plan is to reconsider some of the issues of engineering knowledge and its relationship with the natural sciences, with a focus on the way experimental methods contribute to shaping knowledge in engineering. In particular, I will address experimental computer engineering by building on case studies from this field. In this endeavor I intend to investigate two different, but interconnected, issues. The first is the difference between testing (and in particular software testing) and experimentation on computation-based artefacts, from algorithms to robots. The second is the crisis of the traditional notion of experimental control in experimenting with new technologies (Kroes 2016), and how this impacts engineering knowledge. At the end I will advance the idea of an a posteriori form of control and I will try to adapt the notion of 'learning by exploration' (van de Poel 2016) to the case of engineering knowledge.
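
To make the first distinction concrete, here is a toy sketch in Python, added purely for illustration and not drawn from the abstract (the controller, its specification and the noise model are all hypothetical): the test block checks the artefact against a prior specification, while the exploratory run probes its behaviour under conditions the specification does not settle in advance.

import random

def controller(error):
    # A hypothetical proportional controller: the computational artefact under study.
    return max(-1.0, min(1.0, 0.5 * error))

# Testing: check the artefact against a fixed specification.
assert controller(0.0) == 0.0      # spec: zero error yields zero actuation
assert controller(10.0) == 1.0     # spec: output saturates at +1.0
assert controller(-10.0) == -1.0   # spec: output saturates at -1.0

# Experimentation: explore behaviour under noisy operating conditions
# for which the specification gives no verdict in advance.
random.seed(0)
responses = [controller(random.gauss(0.0, 3.0)) for _ in range(1000)]
saturated = sum(abs(r) == 1.0 for r in responses) / len(responses)
print(f"fraction of noisy inputs driving the controller into saturation: {saturated:.2f}")

The contrast is methodological: the asserts terminate in a pass/fail verdict, whereas the exploratory run produces an observation whose significance still has to be interpreted.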

Contribution 3: Smit, Renee: Idealisation in engineering science knowledge The work described in the paper has its origin in an interest in the differences and similarities between knowledge in the sciences and in the engineering sciences. Of all the different kinds of knowledge used in engineering, engineering science knowledge comes closest to knowledge in the sciences. There is, however, a dearth of empirical studies in the scholarly literature that focus on the nature of engineering science knowledge. The paper addresses the issue of moving beyond a binary classification of engineering science knowledge as 'hard-applied' versus knowledge in the sciences as 'hard-pure', in terms of Biglan's classification. The philosophical question then becomes how the applied-ness of engineering science knowledge presents itself in its epistemic properties. This paper specifically explores idealisation as an epistemic property of engineering science knowledge, and contrasts this with idealisation as used in the sciences. Idealisation is an important way in which scientists and engineers engage with the world, and has been described as the intentional distortion of reality for a specific purpose. The paper reports on an empirical study using engineering curriculum knowledge as recontextualised knowledge valued for the purpose of inducting disciplinary (engineering) neophytes. This is compared to science curriculum knowledge covering the same nominal conceptual content. Starting from fundamental disciplinary values, the orientations of engineering science knowledge and of knowledge in science are investigated. Results confirm constrained idealisation in the case of engineering science, as suggested in some of the literature, but also show the novel use of idealised conceptualisation unique to the engineering setting. The demand for the physical realisability of artefactual systems and a relation between idealisation and normativity in engineering science knowledge are also explored in the findings of the study.

Contribution 4: Kratzer, Jan; Fleck, Claudia; Luxbacher, Guenther: Grammar of mechanical failure cases: a diachronic and linguistic comparison Technical artefacts such as apparatuses and machines are usually seen as the results of socio-technological processes. Social scientists and humanists are primarily interested in the question of what role knowledge acquisition, coordination and integration into the final product play in the final outcome, and whether the final artefact works in everyday practical life. In contrast, since the early 20th century, engineering scientists, specifically mechanical and materials engineers, have focussed increasingly on resolving the question of why a product did not function the way it was intended to. Initially, design and material parameters were investigated separately in this analysis of dysfunctionalities, by means of traditional destructive materials testing. With time, the analysis of machine failures led engineers to a new form of knowledge acquisition. More and more, they came to know the highly dynamic influencing parameters of service conditions, operation and operating errors in the practical environment, and they realised that these parameters merit a more systematic approach during knowledge coordination. Thus, the older disciplines of materials testing, mechanical failure analysis and engineering mechanics fused into the newer field of the operational stability of structures, the results of which modified design knowledge. By means of a diachronic and linguistic comparison, using a network-analytical approach, our group of a historian, a sociologist and a materials scientist investigates knowledge acquisition, coordination and transfer, and the corresponding active networks, in the field of mechanical failure analysis. Using the example of a specific machine part, we investigate how new knowledge has been integrated into the engineering sciences, and whether and how new materials and designs have been developed on the basis of the newly acquired knowledge. We focus on the development from the early 20th century until now, and on the German- and English-language areas.

Track: Knowledge Productions / Action in Engineering Knowledge

Contribution 1: Zwart, Sjoerd: Prescriptive Knowledge: The Grammar of Actions and Things In this paper I argue that the distinction between descriptive knowledge (DK) and prescriptive knowledge (PK) is the most fundamental one in the analysis of the cognitive practices of scientists and engineers. Consequently, any enlightening empirical study or analysis of explicit engineering knowledge should at least distinguish between DK and PK. This holds equally for most empirical studies, philosophical analyses, engineering science and didactics. The relevance of the topic follows from the combination of two observations: first, that almost one third of engineering research projects concern the development of prescriptive means-end knowledge (Zwart & de Vries 2016); and second, the absence of a comprehensive methodology handbook that contrasts sufficiently clearly the methodologies for DK and for PK in engineering. I will substantiate my thesis using empirical and conceptual arguments. To start with the first, I will show that the well-known, devastating historical critique of the technology-as-applied-science model provided in the 1970s and 80s was largely based on the recognition of the irreducibility and priority of PK (Layton 1974, Staudenmaier 1985, Vincenti 1990, etc.). Almost all taxonomies of engineering knowledge ensuing from these and later discussions about the science-technology relation prominently feature the difference between DK and PK (Gille 1978 (86); Vincenti 1984 (90); Staudenmaier 1985; Hubka & Eder 1990; Mitcham 1994; Ropohl 1997; Faulkner 1997; Hendricks et al. 2000). Besides these historical discussions, developments in Industrial Design (Cross 2008; Hubka & Eder 2012) and Architecture (De Jong & Vervoort 2005) and, more recently, in Information Systems (March & Smith 1995; Hevner et al. 2004), Management Science (Van Aken 2004; Denyer 2008) and Requirements Engineering (Wieringa 2009) show a strong tendency to emphasize the importance of PK. Second, the more global conceptual arguments for my thesis derive from analyses such as those developed by Ryle (1945 & 1949), Polanyi (1958), Bunge (1966/7), Mitcham (1994) and Meijers & Kroes (2013). While discussing objections put forward in Houkes (2009 and 2013), I will come to a sharper characterization of PK by defining PK fragments in terms of specific issue statements, which have the form of conditional means-end sentences. My conclusions are that, for reading and understanding engineering practices, a 'grammar of things' alone is insufficient; we need a 'grammar of actions and things.' Next, it is observed that the practices of engineers and those of scientists are both imbued with DK and PK. It is hypothesized, therefore, that the variability in scientific and engineering knowledge is located in the differences in the DK-PK patterns these practices exhibit on a regular basis.

Contribution 2: Tromp, Hans: Artifact Design Reasoning and Intuition, explored through philosophy of action and cognition The existence of artifacts raises the fundamental question of how something could be created that did not exist before. That question cannot be answered by ontology and epistemology, which basically cover knowledge of the world "as is". The purpose of this paper is to demonstrate that the philosophy of action and cognition is a promising choice not only to address this fundamental question, but also to support further analysis. The typical combination of complex rational and intuitive processes as a characteristic of the design process can be addressed by new developments in the domain of action theories. Recent multi-disciplinary approaches such as Grounded Action Cognition provide a base for closing the gap between action theories in the analytical tradition of Anscombe, Mele, Davidson, Bratman and Dretske and the recent phenomenologically oriented developments such as 4E (embodied, embedded, extended and enacted) cognition. A conceptual, closed-loop-oriented Grounded Action Cognition action model will be proposed as a common base for rational and intuitive processes, including its compatibility with aspects such as engineering knowledge and (net) value. Such an action-process orientation sheds light on the role of the two main constitutive elements of the action process: goal (desire) and knowledge (referred to as belief in action theories). These will be related to the notion of function, including its normative aspect as the goal of action, and to knowledge of an artifact's material structure and behavior in practical artifact realization and use. The feasibility of the proposed modeling will be demonstrated with a couple of historical cases.

Contribution 3: Simon, Judith: Apprehending Big Data: Extended, Android or Socio-Technical Epistemology? "Big Data" has captured public imaginations: heralded by some as the panacea to solve all sorts of economic or societal ailments, feared by others for its potentially detrimental impact on civil rights and liberties. Given the prevalence and impact of big data in many societal domains, detailed analyses of such large-scale data practices appear paramount. With this contribution, I want to focus in particular on the epistemic premises, functions and implications of such novel data practices. The relevance of technologies for knowledge practices is commonsensical: from mural paintings through books to databases, technologies have served as external memories; telescopes and microscopes, thermometers and acoustic radars have extended and supplemented our senses; abaci and supercomputers have aided and shaped reasoning by changing the speed, complexity and range of calculations. To characterize the epistemic role of technologies, different terms have been employed (e.g. extensions, supplements, prostheses or tools), and various techno-epistemologies have been developed. Assessing the merits and shortcomings of some prominent techno-epistemologies, such as Clark and Chalmers' (1998) extended mind or Ford, Glymour and Hayes' (2006) android epistemology, I will argue that in order to understand the coordination, interactions and tensions between the different people and various things involved in or affected by big data practices, we need to develop a solid socio-technical epistemology. Such a socio-technical epistemology is not only needed to re-assess central debates in epistemology and philosophy of science; its relevance also goes well beyond the academic ivory tower. Given the increasing utilization of machine reasoning for stratification, sorting and discrimination within the socio-economic realm, and the reliance on big data analytics for political decision-making, sound epistemological analyses and the development of means to improve understanding, vigilance and responsibility in socio-technically distributed knowledge processes are also of vital ethical, legal, political and economic concern.

Contribution 4: Kornwachs, Klaus: Modalities in describing technological actions: To do, to prevent, to omit We can look at artifacts as elements of technological systems. The time-space structure that connects the elements gives rise to the overall function. In this case, the art of design and engineering consists in putting the right things together in the right way, provided it is known for what purpose each element can be used. Thus, one approach to understanding technology in terms of philosophy of science is the examination of functions, their concatenations, and the formal expression of knowledge about all that. Here, the main difficulty lies in the near indiscernibility of natural and artificial objects (Meier 2012). Another way is to look at the relation between actions and processes when dealing with technological issues. Whereas the effectivity of a technological rule like "if A is wanted, do B" may be determined in quite simple cases by technological knowledge and in more complex cases by experience, the problem of formalizing technological rules lies in the difference between the logical and semantic types of the expression A, which names a wanted function, and the expression B, which names an action. It can be shown that the formalization of the pragmatic syllogism, which concludes from a causal law to a technological rule, is only logically correct if one uses the negative versions (Kornwachs 2012, pp. 64-74). This has not only a formal but also a technological reason, which can be discovered by an in-depth logical analysis of technological actions. Each technological action is an action, but not every action is a technological one. Using the concept of "Durchführungslogik" (logic of performing, Harz 2007), which is syntactically isomorphic to propositional logic, one can define the initialization and the negation of an action as doing, preventing and omitting. Nevertheless, its semantics differs from that of propositional logic, and it is possible to introduce modalities like "feasibility" and "inevitability". Some results of applying these concepts to technological actions are discussed. In particular, we find some interesting contradictions when we try to describe technology in a purely logical way.
References: Meier, J. (2012): Synthetisches Zeug. V&R unipress, Göttingen. Kornwachs, K. (2012): Die Struktur technologischen Wissens. Analytische Studien zu einer Theorie der Technik. Edition Sigma, Berlin. Harz, M. (2007): Zur Logik der technologischen Effektivität. Masch. Diss., Fakultät für Mathematik, Naturwissenschaften und Informatik, Brandenburgische Technische Universität Cottbus.
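
A purely illustrative gloss, added editorially and written in standard notation rather than in the Durchführungslogik the abstract refers to, on why the negative versions fare better: let the causal law be $A \rightarrow B$ ("doing $A$ brings about $B$"), and let $\mathrm{Want}$, $\mathrm{Do}$ and $\mathrm{Omit}$ be informal labels introduced only for this sketch.

$$\text{positive rule: } \mathrm{Want}(B) \leadsto \mathrm{Do}(A) \qquad \text{negative rule: } \mathrm{Want}(\neg B) \leadsto \mathrm{Omit}(A)$$

The positive rule is only pragmatically warranted: $A$ is one sufficient means among possibly many, and nothing in the law forces the step from a wanted end to this particular action. The negative rule, by contrast, inherits deductive support from the contrapositive $\neg B \rightarrow \neg A$: as long as the law holds, preventing $B$ requires omitting $A$, since doing $A$ would bring about $B$ and so defeat the goal.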

Track: Knowledge Productions / Models and Simulation

Contribution 1: Ammon, Sabine; Meyer, Henning: Simulation Models as Epistemic Tools in Product Development: A Case Study Our presentation examines the role of simulation models in the generation of knowledge in processes of product development. The investigation is based on a case study which deals with the development and usage of a gear in a vehicle simulator. The example of the InDriVe Hybridsimulator allows us to compare different kinds of simulation, which vary in respect of their epistemic aims, their approach to gaining insights, their validation strategies (cf. VDI 2206), and their impact on standardisation. The goal of the InDriVe project was to develop a driving simulator which makes it possible to simulate, test and experience innovative vehicle and drive concepts. The InDriVe Hybridsimulator is a tool applicable in early phases of the development process. Using this example, we aim to show how simulation models serve as epistemic tools (Boon & Knuuttila 2009), how modelling practices allow for a better understanding of the future artefact, and to what extent these procedures contribute to a gain in knowledge. Drawing on the example of the InDriVe Hybridsimulator, we find three major instances of simulation modelling: an FEM model during the construction phase, the prototype on the test bench, and the engine and drive train in the vehicle simulator. The FEM model (based on the finite element method, or finite element analysis) makes it possible to simulate stress in the component and to perform a stress analysis. The prototype on the test bench makes it possible to simulate vehicle operation under idealised standard conditions in order to test specific parameters. The vehicle simulator couples computer-based, simulated feedback control systems with "hardware-in-the-loop" in order to induce different kinds of longitudinal dynamics. Their impact makes it possible to predict the driving performance of the future vehicle, insights which, in turn, are used by the car company to decide on a new line of products. As epistemic tools, these simulation models are embedded in complex milieus of reflection which comprise software and hardware constellations, operational procedures, notations and artefacts, as well as the expertise of the developers. Putting this developmental milieu into action, the process not only results in a new product (the gear, the vehicle) but also generates knowledge about the future artefact.
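
As a purely illustrative aside on the first of these instances (an editorial addition, not part of the case study, and far simpler than any automotive FEM model): a minimal finite element stress analysis of a one-dimensional bar, fixed at one end and loaded axially at the other, showing the basic assemble-solve-postprocess pattern such models rely on. All numerical values are made up for the example.

import numpy as np

# Minimal 1D finite-element sketch: a steel bar fixed at the left end and
# pulled axially at the right end, discretized into two linear elements.
E = 210e9        # Young's modulus, Pa (assumed)
A = 1e-4         # cross-sectional area, m^2 (assumed)
L = 1.0          # bar length, m
F = 10e3         # axial tip load, N
n_elem = 2
le = L / n_elem
ke = E * A / le  # stiffness of one element

# assemble the global stiffness matrix for the three nodes
K = np.zeros((3, 3))
for e in range(n_elem):
    K[e:e+2, e:e+2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(3)
f[-1] = F

# boundary condition u[0] = 0: solve the reduced system for the free nodes
u = np.zeros(3)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# post-processing: element stress = E * strain
for e in range(n_elem):
    strain = (u[e + 1] - u[e]) / le
    print(f"element {e}: stress = {E * strain / 1e6:.1f} MPa")

Real component models differ in scale and dimensionality rather than in kind: the epistemic work still lies in choosing the discretisation, the boundary conditions and the loads, which is where the milieu of reflection described above comes in.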

Contribution 2: Hillerbrand, Rafaela; Eckert, Claudia: Models in Engineering Design Processes Engineers interact with their products and processes largely through models. Models in engineering describe the product and the process, but at the same time also shape and create them. This clearly distinguishes them from scientific models, which primarily aim to describe a certain target system. While over the last decades there has been a growing body of literature on models in the sciences, much less research has been done on models in engineering design. This paper aims to fill this gap by looking at engineering knowledge from the model point of view. We will classify various types of models used in engineering design and compare them to models used in scientific research. This work comprises both a methodological and a conceptual part. In particular, we will argue that process models are central in engineering design because they affect design choices about the product by making time and resource constraints explicit. As process models play only a rather subordinate role in the sciences, they seem to create characteristic engineering knowledge. We hold that this is not only relevant for debates within the philosophy of design, but also of interest for the engineering community itself. As the generation of engineering models usually involves large effort, models are frequently reused and adapted for other purposes. However, engineers often have little awareness of how the purpose of a model affects the model and of what is or is not expressed in it. Engineering models have a life cycle and a changing relationship to the product or process that they describe. While some models, in particular product simulation models, are validated through tests of the real product, most models have a relationship to reality that is difficult to assess. We hope that our work provides a first step towards a better understanding of the prospects and limits of adapting and reusing models in contexts they were not designed for.

Contribution 3: Boon, Mieke; MacLeod, Miles: Model-based reasoning (MBR) as a skill for interdisciplinary approaches to socio-technological design and research Complex societal issues usually call for socio-technological solutions, which require integrating technological design and optimization with the design and optimization of social (and/or socio-economic) phenomena. Examples are e-health, surveillance, care robots, mobility, climate problems, disaster management, and smart environments. Their design and development involve inter-, multi- and/or transdisciplinary scientific research in both the social and the engineering sciences. This situation poses particular challenges for the education of students: how should they be prepared for these kinds of tasks? Based on an extensive review of the existing literature on teaching interdisciplinarity in engineering education, we have concluded that clarity on the crucial epistemological challenges of interdisciplinarity is lacking, let alone clarity on how to teach the (meta-)cognitive and methodological skills needed for interdisciplinary research and design. At our university college, we are developing and implementing a course in model-based reasoning (MBR) that aims at the methodological and cognitive skills needed for interdisciplinary research and problem-solving. Model-based reasoning, in short, is the activity of reasoning by means of models and modelling. In actual research practices, MBR plays an essential role in the coordination and integration of different fields. Our general claim is that scientific education aiming at graduates capable of interdisciplinary research towards problem-solving should take MBR as one of the central skills supporting the learning of effective approaches to problem analysis, reading scientific articles, translating problems into possible solutions, crafting design concepts for problem-solving, translating design ideas into research projects and, ultimately, understanding the so-called cognitive and heuristic strategies that scientists use in generating scientific knowledge. In this talk, we will present examples of how we train students in these applications of MBR.

Contribution 4: Hasse, Hans; Lenhard, Johannes: Ennobling Ad Hoc Modifications. Confirmation, Simulation, and Adjustable Parameters Ad hoc modifications have a bad reputation in philosophy. The early and influential account of Popper denounced such modifications as unscientific because they aim at avoiding falsification. His verdict has been substantially mitigated by more recent accounts, like those of Chalmers or Worrall. The latter acknowledges that ad hoc modifications have been part of fine exemplars of scientific knowledge. He discerns good from bad ad hoc modifications in the following way: in good cases, theoretical considerations guide the introduction of ad hoc parameters, whereas bad cases are those in which parameters are introduced just to enable pragmatic adjustments. We argue for a much more radical revision of Popper's verdict. Knowledge in the technical sciences is crucially based on ad hoc modifications that are not theoretically motivated. Far from being bad, such pragmatic types of adjustable parameters are crucial components for producing knowledge of the kind sought in the technical sciences. This holds especially for computer simulation. We will argue along the lines of a case from engineering thermodynamics. One important issue is how employing adjustable parameters affects procedures of confirmation.
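
To make the notion of a pragmatically adjustable parameter concrete, here is a minimal illustrative sketch in Python (an editorial toy example, not the authors' thermodynamic case; the data, the linear "theoretical" term and the correction parameter k are all invented): k has no theoretical derivation and is fixed purely by fitting the model to measurements.

import numpy as np

# Hypothetical measurements: x is an operating condition, y the observed response.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.4, 6.9, 9.7, 12.8])

def model(x, k):
    # Theoretically motivated linear term plus an ad hoc quadratic correction k*x^2.
    return 2.0 * x + k * x**2

# Least-squares estimate of k (closed form, since the model is linear in k).
basis = x**2
k_hat = np.sum(basis * (y - 2.0 * x)) / np.sum(basis**2)

rms = np.sqrt(np.mean((model(x, k_hat) - y) ** 2))
print(f"fitted adjustable parameter k = {k_hat:.3f}, rms error = {rms:.3f}")

On the authors' line of argument, the interesting question is not whether such a k is ad hoc (it plainly is) but how its purely empirical calibration should figure in the confirmation of the simulation model that contains it.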

Contribution 5: Date, Geetanjali; Chandrasekharan, Sanjay: What role do formal structures play in the design process? Background: Engineering design and innovation practices have been studied empirically. Philosophy of technology has engaged with this empirical data, leading to debates on the nature of technological knowledge and its components, and on whether it is epistemologically distinct from its formal cousin, scientific knowledge. This comparison takes for granted that formal knowledge structures play a key role in design. However, this role is not well characterized. Objective/Design/Method: To explore the role formal knowledge plays in design, we study a contrast case: the design process of a rural innovator not formally trained in design or engineering, and his practice of developing high-end micro hydro power systems in remote mountain areas. We trace the trajectory of his designs and modifications, as seen through the installed artefacts and their components. The data are collected through interviews, field observation, and secondary data sources. Results: The designer follows a goal-driven iterative design method, where each design iteration is complete in itself. The process of technology development is structured spirally, in terms of a broadening of goals, i.e. expanding 'use plans', and of increasing component complexity to satisfy them. This process has allowed him to successfully design and build several systems, situating his design practice in the site conditions, without recourse to formal engineering knowledge. We discuss this process in the context of another recent study, which reports that expert engineers solve input-output estimation problems similarly, without appealing to formal knowledge structures. Discussion: Formal structures, particularly equations, help componentize complex systems, allowing the exploration of idealized situations via the manipulation of parameters and design specifications. However, this analytic process is useful only for optimizing systems, not for generating designs, which require a synthesis process driven by imagination. This critical imagination process is subsumed/occluded later by the formal system, when it is used to compare and calibrate different designs. Secondly, the componential structure of equations results in a discrete model of the environment, leading to the isolation of environmental parameters out of context and to optimizing and combining them in destructive ways. The design role of formal structures thus appears to be overemphasized: their optimization/calibration role occludes both the imagination process and sustainable design patterns.
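
As a purely illustrative aside on what componentizing via equations looks like in this domain (a standard textbook relation, added editorially and not attributed to the innovator studied): the idealized micro hydro output P = eta * rho * g * Q * H reduces a site to a handful of parameters that can then be swept and optimized.

# Idealized micro hydro output: P = eta * rho * g * Q * H (standard relation).
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_watts(head_m, flow_m3_s, efficiency):
    """Electrical output of an idealized micro hydro installation."""
    return efficiency * RHO * G * flow_m3_s * head_m

# Parameter sweep: the 'optimization' use of the equation the abstract describes.
for head in (10.0, 20.0, 40.0):
    for flow in (0.05, 0.10):
        p = hydro_power_watts(head, flow, efficiency=0.6)
        print(f"head={head:5.1f} m  flow={flow:.2f} m^3/s  ->  {p/1000:5.1f} kW")

On the abstract's own argument, such a sweep can rank candidate configurations but cannot, by itself, generate the intake siting, routing or component choices that the innovator synthesizes from the site.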

Contribution 6: Wang, Wei Min; Exner, Konrad; Preidel, Maurice; Jenek, Julius; Ammon, Sabine; Stark, Rainer: Evaluation of Knowledge-Related Phenomena in Milestone-Driven Product Development Processes – An Explorative Case Study on Student Projects The steady increase in product functionalities and the introduction of new business models such as Product-Service Systems (PSS) are driving the complexity of products and demanding interdisciplinary collaboration. Hence, the integration of knowledge from different domains has become an essential task in product development processes (PDP). Engineering methods such as Concurrent or Simultaneous Engineering are applied to control the increasing complexity and to face the challenge of ever more competitive markets. Such methods facilitate the management of complex product development projects by supporting the decomposition of tasks and their parallel execution. The respective PDPs are characterised by repetitive cycles of phases of distributed activities and of distinct milestones for coordination and integration. During the phases of distributed activity, actors (e.g. engineers) work independently in their own domains and use their own sets of data, information and knowledge to achieve partial goals. The results range from design drafts in early phases to physical prototypes in later stages of the PDP. Such result-artefacts can be seen as manifestations of the knowledge and resources invested by the respective actors to achieve their goals. At the milestones, in contrast, the distributed result-artefacts are brought together and integrated to realize a defined state of the product. To manage such parallelized development activities, milestone-driven processes (MDP) have been widely adopted in industrial practice. As such processes require intense collaboration of interdisciplinary teams, one key success factor of MDP is the efficient coordination and synthesis of knowledge that is created dynamically in the network of actors and manifested as result-artefacts. The existing body of research on knowledge management in PDPs and the respective IT support is largely outdated and does not consider novelties in engineering environments, e.g. the extensive use of Computer Aided Design methods (CAx) or Product Lifecycle Management (PLM) systems. Moreover, a holistic evaluation of knowledge-related dynamics in the PDP, including the interrelations of knowledge forms, domain experts as bearers and users of knowledge, and generic PDP activities, is still missing. In our presentation, first results are presented from a case study currently being conducted on two exemplary student product development projects at TU Berlin. This explorative case study provides an initial basis of generic knowledge-related phenomena in different phases of the PDP. This in turn allows for a first systematic draft of knowledge dynamics in the PDP and for the further development of a generic model depicting the interdependencies of knowledge forms, knowledge bearers, PDP activities and artefacts throughout the PDP.

Track: Pedagogical Pragmatics (1)

Contribution 1: Shew, Ashley: Teaching Technology & Disability I share my materials and approach in my class on Technology and Disability. I have taught this course twice, with both graduate students and undergraduates, and anticipate its addition to my university's curriculum. I talk about the projects produced within the class by students: a better harnessing system for blind skiers, a graphic pamphlet about Down Syndrome, an assessment of curb cuts on campus, research on environments in adult day services, and more. Philosophy of technology provides a solid framework for extension into this area of teaching, and students report tremendous transformations within the class in their thinking about engineering, engineering education, public health, infrastructure, and design work more broadly. I will discuss a particular unit on Deafness and its usefulness for other types of classes; I have taught this in the context of a class on Medical Dilemmas and another on Bioethics.

Contribution 2: Martin, Diana Adela; Conlon, Eddie; Bowe, Brian: A Critical Reflection on Extending the Case Study Method in the Teaching of Engineering Ethics A dominant teaching method, the use of case studies has in recent years attracted criticism pointing to its weakness in capturing the dynamics and complexity of the professional environment of engineering. Case studies appear to fail at capturing the nature of engineering practice on two grounds. From an epistemological point of view, they rely on the assumption of the pre-eminence of explicit knowledge, such as that provided by professional codes and moral theories, in addressing the possible ethical dilemmas faced by engineers. But the knowledge involved in engineering practice is not covered exhaustively by this pedagogical method, with its emphasis on clear-cut, black-and-white scenarios. As Vaughan (1996) argues in her study dedicated to the Challenger disaster, the workplace is often subject to an "incremental change" that makes a certain deviation from norms an institutionalized work routine. The institutionalisation of patterns of behaviour and the evaluation of risks are thus matters that require professional engineers to make use of implicit knowledge, which is highly contextual. From a metaphysical perspective, meanwhile, case studies miss the nature of engineering artefacts. These are not mere products whose creation is restricted to the application of certain scientific principles; they also comprise a certain social dynamic or have political effects (Feenberg 1998; Winner 1986). Bringing in insights from learning theory, our presentation examines the pedagogical challenges of extending the case method in a manner that aims to correct the epistemological and metaphysical shortcomings highlighted above. To this end, we put forward a proposal to recontextualize the case study "Cutting roadside trees", designed by Pritchard (1992), in light of Burawoy's approach (1998, 2009) and the constructivist frame suggested by Jonassen (1999), and we conclude with a reflection on its implications for teaching, informed by our own pedagogical experience with students enrolled in the General Engineering programme at Dublin Institute of Technology. The desired learning outcome is that such extended contextualization makes students aware of the institutional logics and relations of power inherent in engineering practice, enabling them to exercise their agency to change or resist unethical practices.

Contribution 3: Fujiki, Atsushi: Developing a pedagogical method for engineering ethics integrated with environmental ethics This paper introduces a pedagogical method for teaching both environmental ethics and engineering ethics to engineering students through a local case. One of the main purposes of this study is to fill the gap between environmental ethics and engineering ethics education. Over the past few decades, ethics and the consideration of environmental factors have been regarded as among the most important competencies in engineering education, but little attention has been given to the relationship between them. In short, these two academic disciplines have had little exchange with each other for a long time because they take different interests in ethics. The central concern of environmental ethics, on the one hand, is the settlement of a metaphysical dispute between anthropocentrism and ecocentrism; engineering ethics, on the other hand, is intended to be more practical, aiming to foster engineers who are able to design their own plans of action with a professional sense of morality. To improve this situation, I brought a local case, concerning the eradication of schistosomiasis japonica in the Chikugo river basin, into a class offered at the National Institute of Technology, Kurume College. Schistosomiasis japonica, once widespread across the Chikugo plains and some other places in Japan, is an endemic disease caused by a parasitic infection. In the eradication process, the sole intermediate host, the Miyairi snail (Oncomelania nosophora), was exterminated. The history of these eradication activities is a case that cuts across environmental ethics and engineering ethics, and it carries important ethical implications for both disciplines, such as: “How much can we, especially as engineering professionals, sacrifice the natural environment for the improvement of public health?” I call this dichotomous problem structure “the conflict between environmental protection and public health”. When I present this case to students, I make an effort to analyze it from the viewpoints of different ethical theories and concepts, such as utilitarianism, environmental pragmatism, and the intrinsic value of life. Feedback suggests that, as a result of this approach, students acquired multifaceted viewpoints on environmental problems.


Track: Pedagogical Pragmatics (2)

Contribution 1: Hamilton, Edward: Ontologies of "openness" and the politics of educational technology At the turn of the new millennium the term “open” took on a central position in educational technology. This term applied to a development paradigm (open source), to an orientation to the content of technology-mediated education (open educational resources), and to a philosophical approach to teaching and learning (the open education movement). And in contrast to the commercial model that dominated the discourse of “online education” in the 1990s, “open” educational systems and processes appeared to promise a better alignment between educational technology at the level of design and implementation and definitive aspects of academic institutions and cultures. This paper examines “openness” as an ontological ground for educational technology with deeply ambivalent features. That is, “openness” itself opens different trajectories of development in educational technology on the basis of conflicting ideas about its meaning and realisation. What is being opened? How is “openness” to be realised in technical systems, processes and practices? To what ends is higher education subject to “opening”? What is the nature of “open” systems and how does a contrast between “open” and “closed” link to divergent conceptions of progressive educational change? The paper explores these questions through an examination of the rise of “openness” as a discursive and developmental figure in educational technology and in particular through the phenomenon of massive open online courses (MOOCs) whose development trajectories clearly illustrate the ambivalence of “openness” as an ontological category in educational technology.

Contribution 2: Cruz, Cristiano C.: How to Form Engineers Capable of Social Technology and Popular Engineering: Some Brazilian Initiatives In Brazil, the social technology (ST) movement emerged in the early 2000s and has gained some momentum since then. Its main aim is to develop or adapt technology so as to address the technical necessities that marginalized social groups identify as urgent. Such work, moreover, must be carried out in intimate and open collaboration with the local participants, and must produce a technical solution that bears a sociotechnical order which helps to include the group socioeconomically and to promote the social values they want to preserve. Currently, some practitioners of ST call it popular engineering, and themselves popular engineers. Even though ST is meant to be produced by an interdisciplinary technical team in dialogue with the local group, engineers realized from the beginning that their standard education was insufficient to allow them to carry out their share of the work. Particularly insidious here was, on the one hand, the lack of interpersonal training that could make them empathic and sensitive enough to deal with (marginalized) people in a genuinely collaborative and committed way. On the other hand, they also lacked the epistemic capability to learn and incorporate popular knowledge and popular values into ST, which, as presented in “Brazilian Popular Engineering and the Responsiveness of Technical Design to Social Values”, seems to be essential to this kind of technical production. Producing ST thus turns out to be quite different from producing conventional (or mainstream) technology (CT). Forming engineers who are as capable of producing ST as they are of producing CT demands, in addition to the interpersonal and epistemic training just mentioned, the deconstruction of the widespread technocratic myths of neutral and unilinear technical development. Many Brazilian initiatives that aim to tackle these educational issues are inspired by Paulo Freire’s ideas. The professional profile they all seek to form is what they call the educator engineer. To (try to) accomplish this, three main types of activities and training are usually offered (in isolation or in combination): a) STS courses; b) immersion internships in, or committed work with, poor communities; c) preparatory and evaluative activities, along with tutoring, related to (b). In this paper, such initiatives will be presented and analyzed in some detail.

Contribution 3: Jenkins, Daniel: Moral Philosophy and Automation: How Should Engineers Decide How Machines Should Decide? Undergraduate engineering students usually have little broad-based humanities experience, though it is precisely via the humanities that engineers develop competence to meaningfully address the morally significant problems that arise in engineering work. Further complicating matters, while many ethics courses for engineers rightly focus on professional codes of ethics, concentration on formal and legal issues can unhelpfully displace discussion of moral philosophy. Emphasis on classical ethical theory in such courses, however, holds out the promise of equipping engineers with the tools necessary to engage meaningfully with the most challenging moral questions about emerging technology. It is only through careful attention to philosophical theories of right action, as opposed to codes of conduct, that engineers can identify the factors on which right-making characteristics might supervene, for example; and it is only with such knowledge that questions about the goals of automation – a pressing contemporary issue – can be satisfactorily addressed. Two technological developments thus stand out as being uniquely suited for analysis by engineers through the lens of moral philosophy: autonomous unmanned aerial vehicles and autonomous terrestrial vehicles. The development of decision-procedures for such vehicles, whether used for military or civilian purposes, must reflect commitment to a theory of right action that seeks to achieve an outcome or class of outcomes at the expense of others. Moral philosophy will help engineers to select and defend a pattern of autonomous machine deliberation that transcends formalistic compliance to professional codes of ethics, in part by ensuring that those deliberations will be grounded in a robust understanding of the kinds of consequences relevant to human welfare.

Contribution 4: Powers, Thomas M: Globalizing Science and Engineering Ethics: Convergence or Equilibrium? It is widely acknowledged that science and engineering (S&E) research is increasingly international and collaborative, and that this trend is only likely to continue. Will S&E ethics likewise converge on an international set of standards? Should it aspire to such a convergence? Should the ethics of technology follow suit? This paper theorizes efforts by a team of NSF-funded researchers to “internationalize” online offerings in S&E ethics, broadly construed. Our main objective in this five-year project is to develop an international component of the Online Ethics Center (OEC) at the National Academy of Engineering in the U.S. The issues that interest us range from research ethics and scientific integrity to emerging technologies. In addition to seeking out contributors from outside of the U.S., we are now working to organize existing resources on “global” ethics issues. Our assumption has been that the inclusion of international perspectives—partly because of the contrasts they provide—should make the OEC a more valuable pedagogical resource. In theorizing the value of these “contrasting viewpoints” several deeper issues of epistemology arise, and they are difficult to resolve without further understanding the goals of science. For instance, in much of S&E ethics there is a strong presumption of conventionalism—the notion that, much like driving on the left or the right side of the road, it does not ultimately matter how researchers conduct themselves, just that they pick a standard. The issue of order of authorship in collaborative publishing may be apt for a conventionalist solution. However, conventionalism is dissatisfying when considering other issues, like responsibility for engineering failures. At some level, conventionalism in S&E ethics just looks like relativism, and is thus open to the criticism that since much of international science is Anglophone, Western, and even U.S. dominated, the “ethics” just defaults to which investigators have the most clout or funding. A more promising view of the role of contrasting international perspectives is that, instead of leading us to converge on the rightness of a dominant view, they provide an “essential tension” to the status quo, and thus the basis of a critique. A critical international S&E ethics can improve our current situation by allowing us to see that contrasting and dominant views both equilibrate around a more complex conception of how to conduct research and simultaneously conceive of the goals of S&E in different contexts around the world.

Track: Technology and the City: (Infra-)Structures

Contribution 1: Nagenborg, Michael: Elevators as urban technologies: Past, present, and future In the first part, I will introduce „urban technology“ as a hermeneutical concept. In the second part, I will discuss the elevator as an urban technology. On the one hand, I aim for a better understanding of the interplay between cities and elevators by exploring the past and the potential future of this particular form of vertical transport system. On the other hand, I will demonstrate the usefulness of the concept of „urban technologies.“ „Urban technologies“ are defined as technologies which (a) shape city life or (b) are shaped by city life. The definition is deliberately broad because the term aims to overcome the implicit bias towards infrastructures in similar concepts such as the „urban machinery“ (Hård & Misa, 2008). To consider something an urban technology does not imply that all urban technologies share certain properties. Nor does it imply that an urban technology is to be found exclusively in cities. The car is the obvious example here: the impact of the car on city life is hard to deny, yet the use of cars is of course not limited to urban spaces. Since the modern safety elevator (Bernard 2014) enabled the modern high-rise building, which in turn shaped the modern (Western) city, elevators need to be understood as urban technologies, too. As Easterling (2014) rightly remarks, „contemporary elevator technologies that experiment with horizontal as well as vertical movements are the germ of a very different urban morphology.“ To understand the potential of this and other current innovations in vertical transport (e.g., Graham 2014, 2016), I suggest addressing the challenge presented by Hans Blumenberg (2006), who pointed to the chicken-and-egg problem presented by elevators: high-rise buildings are not usable without elevators, which in turn do not make sense in the absence of high-rise buildings. Building on Blumenberg’s account, it will be argued that the co-evolution of elevators and high-rise buildings needs to be understood in the context of larger economic and societal changes in the city. Hence, we may read the current innovations in vertical transport as indicators of similar transformations.

Contribution 2: Niculescu-Dinca, Vlad: Towards a sedimentology of infrastructures / A geological approach for understanding the city Drawing primarily on ethnographic research performed in a city in Romania, this paper provides a thick description of police practices and information systems in that municipality. It shows various ways in which technologies mediate policing practitioners’ perceptions, decisions and actions. Bringing in additional material from a case in the Dutch police, in which risk profiles are built on real-time data from a sensor network, the paper highlights new phenomena with ethical implications emerging at the intersection of information infrastructures and policing practices. To adequately account for these phenomena, the paper proposes the development of a geological approach to the study of urban information infrastructures.

Contribution 3: Calafiore, Alessia; Guarino, Nicola; Boella, Guido: Recognizing urban forms through the prism of roles theory The interplay between the built environment and human activities has always been of paramount interest in urban design and planning. In a vision that sees cities as designed large-scale artefacts made up of various lower-level artefacts such as buildings, streets, and parks, current theories of urban design tend to provide normative frameworks prescribing how to create spaces that people will use in a specific way (Lynch, 1984). However, there are situations in which citizens claim their right to re-shape the city through collective actions (Harvey, 2008). With this contribution we propose an ontological analysis of social places based on the different roles urban artefacts may play. Notably, we distinguish between the functional role (Guarino, 2014) and the social role (Masolo et al. 2004) of urban artefacts. The functional role of the artefactual objects composing the shape of the city emerges when they are considered as realizations of design specifications resulting from urban design theories. Indeed, design specifications are developed with the intent that their realisations will have the capability to perform a certain function. By contrast, the social role of urban artefacts emerges from human collective activities, which can be defined as social practices. These two roles may completely overlap or may create conflicts in the use of urban spaces. While analyzing social places, we also need to distinguish the role played by non-artefacts. For example, a bench in a square is designed to let people sit, but a tree in a park may play the same role, although it was obviously not planted there for that purpose. In the same way, it is rather common that urban objects designed to play a functional role are then used differently. A green area where people are supposed to keep off the grass may actually be used to lie on, while, on the contrary, an area designed to make people stay may not be used at all. In conclusion, we believe that recognizing the coexistence of multiple roles associated with elements of the built environment is the first step towards supporting more participative urban governance.
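
To make the distinction above concrete, the following minimal Python sketch (invented classes and examples, not the authors’ formal ontology) separates an urban object’s designed function from the practices observed around it and flags cases where the two come apart.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UrbanObject:
    name: str
    designed_function: Optional[str]          # None for non-artefacts such as trees
    observed_practices: List[str] = field(default_factory=list)

    def roles_conflict(self) -> bool:
        """Crude test: does any observed practice fall outside the designed function?"""
        if self.designed_function is None:    # non-artefacts have no functional role to conflict with
            return False
        return any(p != self.designed_function for p in self.observed_practices)

# A bench used as designed and beyond it, a tree acquiring a social role it was never designed for,
# and a lawn whose designed function conflicts with how people actually use it.
bench = UrbanObject("bench", "sitting", ["sitting", "skateboarding"])
tree = UrbanObject("tree", None, ["sitting in the shade"])
lawn = UrbanObject("ornamental lawn", "being looked at", ["lying on the grass"])

for obj in (bench, tree, lawn):
    print(obj.name, "-> roles conflict:", obj.roles_conflict())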

Contribution 4: Epting, Shane: Automated Vehicles and Transportation Justice To fully understand the moral dimensions of automated vehicles (AVs), we must think about them in their (future) socio-political contexts, from city streets to suburban cul-de-sacs. This point suggests that examining their inherent qualities cannot tell us whether they will have a positive or negative effect on a city’s inhabitants. Yet, when surveying the AV literature, such examinations account for much of the scholarship. For example, initial moral inquiries, “thought experiments,” addressed how an AV would respond in the event of a crash, along with arguments addressing responsibility and decision-making algorithms. Recent research efforts advance these discussions, but they remain close to the original lines of questioning. While these fictional scenarios provide insights into the possible problems that AVs could bring, they tell us little about how AVs will influence quality-of-life issues or how to deal with them in everyday settings. This topic is of paramount importance because transportation can disproportionately harm marginalized people. Neglecting to investigate the subject discounts its significance, implicitly saying that imaginary lives matter more than black, brown, and senior lives do. Correcting this oversight entails giving the topic attention and recognizing the imperatives that underpin it. For instance, this subject involves helping poor people escape poverty, preventing the elderly from facing lonely deaths, and restoring integrity to fragmented and alienated communities. I argue that these kinds of considerations should weigh heavily when it comes to how AVs are introduced into society, suggesting that moral prioritization is a problem that AV researchers, engineers, and planners must face. The point is not that there is something inherently wrong with these technologies, but that we have moral obligations that carry different degrees of importance, which determine who or what receives consideration, along with stipulations for how they receive it. In addition to marginalized groups and the public, technologies also impact the nonhuman world and will affect future generations, along with historical and cultural artifacts. I argue that all of these entities deserve consideration, but they do not warrant it equally. To approach these kinds of multi-tiered issues, I employ a “complex moral assessment,” a guide for addressing complicated situations.

Track: Technology and the City: Architecture and Design

Contribution 1: Eloy, Sara; Vermaas, Pieter: Over-the-counter housing design: the city when the gap between architects and laypersons narrows The aim of this paper is to focus on the impact that new automatic architectural design systems may have on the dynamics of building refurbishment in the city. We argue that this impact consists of increases in the architectural quality of housing and in the socially and ecologically sustainable renewal of cities. European cities are faced with stocks of housing from past centuries that do not respond to contemporary ways of living. For the required modernisation of these stocks three general options are available: inhabitants making small improvements to their housing, large-scale centralised refurbishment, and new construction after demolition. These options all have their disadvantages. Improvements by owners may lack architectural quality; for instance, they may undermine the structural integrity of buildings when whole walls are demolished. Centralised modernisation imposes a homogeneity on the refurbished buildings that may disrupt the social fabric of neighbourhoods by chasing away the original inhabitants. Modernisation by demolition is increasingly recognised in architecture as ecologically unsustainable because of its use of energy and other resources. We argue that new automatic design systems currently being developed in architecture (e.g., refurbishment design systems based on shape grammars) provide ways to modernise cities without these disadvantages. The refurbishment designs that such systems make available to inhabitants contain architectural knowledge that guarantees architectural quality. These systems also provide refurbishment designs that are tailor-made to the individual wishes of inhabitants, enabling inhabitants to stay in their neighbourhoods after refurbishment. With automatic design systems, inhabitants are empowered to, so to speak, order cheap and high-quality housing designs over the (internet) counter, renew their houses individually and thus collectively renovate large parts of cities in a socially and ecologically sustainable way. On the downside, continuous refurbishment of neighbourhoods may unwantedly impede urban growth in large parts of cities.
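
As a rough illustration of the shape-grammar idea mentioned above, the Python sketch below (a toy rule on a toy layout, not an actual refurbishment grammar) shows the basic mechanism: a rewriting rule fires only when its left-hand side matches the current design, so every generated variant stays within what the encoded architectural knowledge permits.

# A dwelling layout is represented, very crudely, as an ordered list of labelled rooms.
layout = ["kitchen", "small_bathroom", "bedroom", "bedroom"]

def enlarge_bathroom(rooms):
    """Toy rule: a small bathroom adjacent to a bedroom may absorb it to become accessible."""
    for i in range(len(rooms) - 1):
        if rooms[i] == "small_bathroom" and rooms[i + 1] == "bedroom":
            return rooms[:i] + ["accessible_bathroom"] + rooms[i + 2:]
    return rooms  # rule does not apply; the layout is left unchanged

print(enlarge_bathroom(layout))  # ['kitchen', 'accessible_bathroom', 'bedroom']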

Contribution 2: Stone, Taylor: The Morality of Darkness: Urban nights, light pollution, and evolving values This paper will conceptualize darkness as a value relevant to the moral appraisal of, and decision-making about, urban nighttime lighting. Towards this goal, I will discuss three interrelated considerations: historical developments in nighttime illumination leading to the emerging desire for more darkness, a clarification of this phenomenon via recent work in the philosophy of technology, and finally, a forward-looking definition of the value of darkness aimed at decision-making for nighttime lighting infrastructure. For centuries there was a necessity and desire for more illumination at night, spurring technical innovation and drastically altering urban behaviour and perceptions of the night. However, in recent years the adverse effects of artificial nighttime lighting, known as 'light pollution,' have emerged as a topic of inquiry, raising moral concerns about the planning, design, and technologies of our urban nightscapes. Alongside these concerns, darkness at night – understood historically as chaotic and dangerous, a space and time for immoral behaviour, and primitive in the face of new technologies – is now increasingly perceived as a rare and valuable environmental resource needing protection from artificial lighting. While the emergence of this value is seemingly antithetical to the lighting technologies from which it emerged, it can be argued that this follows an established grammar of ethical concerns arising from technological development. The increasing desire for dark (or natural) nights can be conceptualized by drawing from recent work theorizing the co-evolution of moral values and technologies. In a world of ubiquitous artificial light, the 'loss of the night' has become a topic that requires reflection and action, moralizing nighttime lighting in a novel – but not unforeseen – way. Empirical and philosophical insights will be combined to provide a definition of darkness as a value worth promoting and pursuing in nighttime lighting infrastructure. It will be understood as a contextual and multi-faceted property from which, and through which, value is derived and understood. Such a definition will be sensitive to historical context, address pressing contemporary environmental concerns, and anticipate the impact of future technological innovations on perceptions of our urban nights.

Contribution 3: Frigo, Giovanni: ‘Green Buildings’ in the City? A Reflection about Technology, Sustainability Indexes, and the Ethics of Energy Cities are complex human creations in which artifacts such as buildings, streets, and parks continuously interact with their inhabitants. Buildings are particularly relevant in that they silently embody certain ideas and values. As such they define the built environment and the character of private and public spaces. Buildings’ construction and functioning traditionally require considerable amounts of energy. Policy options to promote efficient, climate-friendly buildings may involve both voluntary and mandatory programs. In the US, several cities, states, and federal agencies have mandated LEED or equivalent certifications for public buildings. However, the vast majority of privately owned structures fall short with respect to energy efficiency, due to a lack of standards and codes, insufficient financial incentives, and shortcomings in energy information and education. Given that these issues also pertain to individual and social preferences and choices, ethics can and should become part of the conversation. This article focuses on a specific new frontier of research in applied ethics: the ethics of energy. It suggests one possible ethics of energy, intended as a normative, but non-systematic, framework of ideas and values which can enrich the design of more comprehensive sustainability-energy indexes. The ethics of energy can account for human and social values which are often overlooked by more technical energy practitioners. For instance, the adoption of better insulation and renewable energies might proceed in parallel with other concerns for more inclusive city planning, such as access to homes for low-income groups, transparency and informed public participation, glocal concern for climate change, environmental and energy justice, ecological conscience, or responsibility towards future generations. The ethics of energy can provide specific ‘ethical indicators’ for more comprehensive energy performance indexes for buildings and other technologies.
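
A minimal Python sketch, with invented indicators and weights, of the kind of composite index the abstract gestures at: ‘ethical indicators’ are scored alongside technical ones and folded into a single building score.

# All indicators are assumed to be normalised to the range 0..1 (higher is better).
TECHNICAL = {"energy_use_intensity": 0.7, "renewable_share": 0.5}
ETHICAL = {"affordable_access": 0.4, "public_participation": 0.6}

def composite_score(technical, ethical, ethical_weight=0.4):
    """Weighted average of the technical and ethical sub-indexes."""
    t = sum(technical.values()) / len(technical)
    e = sum(ethical.values()) / len(ethical)
    return (1 - ethical_weight) * t + ethical_weight * e

print(round(composite_score(TECHNICAL, ETHICAL), 3))  # 0.56 for the sample values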

Track: Technology and the City: Exploring the City

Contribution 1: Michelfelder, Diane: Urban Landscapes and the Techno-Animal Condition This paper begins by exploring how ordinary artifacts belonging to the urban “techno-scape” serve to reinforce dominant social perceptions of urban wildlife. Such technologies generally fall into two categories. Some, for example the containers developed for use in Toronto to prevent raccoons from making meals of residents’ food waste, are designed to mitigate the negative impacts of wildlife on metropolitan areas. Other technologies, such as urban wildlife corridors built in San Diego and other US cities, are designed to keep humans away from wildlife in order to help the latter move more freely through urban environments. This paper argues that these apparently different uses of technological artifacts as responses to the presence of urban wildlife can be seen as two sides of the same coin: both add to the hermeneutical pressure to perceive and understand urban wildlife as non-human animal Others who need to be controlled by discouraging opportunities for contact and interaction. Drawing upon perspectives associated with postphenomenology, STS, and critical animal studies, as well as recent scientific research into how some species of city-dwelling wildlife differ biologically from their non-city-dwelling counterparts, this paper takes steps toward envisioning how ordinary urban technologies ranging from street signs to abandoned railroad tracks could be used differently in order to promote more engaged, ethical relations between humans and wildlife. Used to prompt such relations, these technologies could support critical alternatives that would allow us to see not only urban wildlife differently, but also ourselves.

Contribution 2: Barbaza, Remmon: Metro Manila: A City Without Syntax? The organization and maintenance of a city obviously requires technology. The sourcing, distribution, consumption, and disposal of objects are made possible by technology, in both its instrumental and logistical senses. But underlying the technology that shapes a city is language. As in many other cities, Metro Manila is dominated by the language of consumption. Baudrillard, however, sees consumption as “the most impoverished of languages,” devoid of any syntax. In a society given to consumption, the system of objects and the system of needs do not correspond to each other. This paper analyzes Metro Manila from this Baudrillardian perspective, given the preponderance of monstrous “megamalls” and giant billboards that all but cover one’s vision of the sky. Both dominate the cityscape, delivered over as the megacity is to endless cycles of consumption. But impoverished as this language is, the obsession with consumption remains a language. What is this language that plays itself out in the megacity that is Manila? How is it spoken—or better yet, how does it speak—through the phantasmagoria of things and images?

Contribution 3: Putnam, El: Locative Reverb: Artistic Practice, Digital Technology, and the Grammatization of the City Mapping as an artistic practice is not novel, though the influence of digital technology on the production and uptake of cartography has informed how artists engage with geographies, especially with the advent of locative media (Tuters and Varnelis, 2006). In this paper, I examine how digital technology informs mapping as an artistic practice, particularly in the production of soundscapes as a means of re-inscribing a place. Advances in mapping technology, including open-source geographic information systems (GIS) and Google maps as application program interface (API) mash-ups, have facilitated how artists engage with mapping and locative media, offering accessible platforms for new means of experiencing a place. However, the predominance of Google maps and the consumer behaviours affiliated with this model raises the question of whether digital mapping actually liberates its users or alienates consumers from the process of navigation (Gordon and de Souza e Silva, 2011). Instead of critiquing GPS as military technology, which evoked concerns of self-surveillance when locative media art first became popular in the early 21st century (Holmes, 2003), I problematize the commodification of data through the use of consumer technology. Bernard Stiegler describes how, in today's society, aesthetic ambitions are subjected to marketing in which individuals are transformed into consumers, causing the homogenisation of culture and a lack of individuation. On the one hand, digital mapping has opened up users’ access to mapping processes, pronouncing its dispersal from authoritative control. On the other hand, previously informal practices of mapping and annotation are becoming formalised through digital cartographies. If cartography is the creation of meta-data of a place, then it functions as a way of grammatizing place and movement. Referring to Stiegler’s (2013) consideration of digital technology as pharmakon (or the condition of duality in which something is both poison and cure, bringing both benefit and harm), this paper examines the mixed implications of digital technology for mapping as an artistic practice and its influence on users’ experience of the city. Particular attention is paid to the following examples: the crowd-sourced global sound map Radio Aporee (aporee.org/maps); the locative media projects of Teri Reub; and Christina Kubisch’s Electrical Walks, which provides a unique engagement with urban technology through the sonification of electromagnetic waves.
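
The following Python sketch (hypothetical entries and a generic distance query, not Radio Aporee's actual API) illustrates the basic locative-media operation at stake: sound recordings are attached to coordinates as meta-data of a place, and a listener's position retrieves the recordings inscribed nearby.

import math

# (latitude, longitude, recording description)
SOUND_MAP = [
    (52.5200, 13.4050, "tram bells, Alexanderplatz"),
    (52.5163, 13.3777, "tourists and traffic, Brandenburg Gate"),
    (48.8566, 2.3522, "street musicians, Paris"),
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def sounds_near(lat, lon, radius_km=5.0):
    """Return the descriptions of all recordings within radius_km of the listener."""
    return [desc for la, lo, desc in SOUND_MAP if distance_km(lat, lon, la, lo) <= radius_km]

print(sounds_near(52.52, 13.40))  # a listener in central Berlin retrieves the two Berlin recordings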

Track: Technology and the City: What makes a smart city

Contribution 1: Gonzalez Woge, Margoh: Technological Environmentality: technologies’ absent-presence in everyday environments In this paper I aim to discuss what it means that technology has become environmental, as well as the role of technological 'absent presences' in the constitution of agency. Due to their close connection to phenomenology, for both Postphenomenology and Material Engagement Theory the environment has never been a neutral or passive feature in understanding the existential and cognitive complexity of human beings. Nevertheless, in both theories its role has been overshadowed by the role of concrete objects and artifacts. Although their answers to the questions of ‘what things do’ (Verbeek 2005) and ‘how things shape the mind’ (Malafouris 2013) entail a broad consideration of material agency, their focus on things seems insufficient to tackle the question of the new kind of agency that arises from the merging of technologies with the environment. Information and Communication Technologies are not only embedded in devices that we explicitly ‘use’ but increasingly become an intrinsic part of the material environment in which we live (e.g. Ambient Intelligence and the Internet of Things). What kind of technological agency arises from the dynamic intersection of these technologies and our world? In which sense do they enable, constrain, and regulate interaction as interfaces to the world? And to what extent are they becoming our world themselves?

Contribution 2: Dainow, Brandt: Philosophical Framework for Smart City Analysis This presentation offers a new analytic framework for the analysis of the emerging Smart City and Internet of Things (IoT). This framework solves current problems in understanding Smart Cities and the Internet of Things caused by the complexity of these emerging techno-social environments. Visions of the emerging Smart City agree that it will be characterised by a ubiquitous network of heterogeneous digital devices and services. In such an environment most aspects of individual and social activity will be mediated by a vast array of digital technologies interacting in complex and often unpredictable patterns. Elements of the built environment will cease to be passive objects and instead become active participants in the personal lifeworld and in wider society. Analysis of such an environment requires a new vocabulary - one which accounts for the fact that humans and technologies now share properties which were previously the reserve of humans alone. This paper will offer a conceptual framework for the ethical analysis of this emerging digital environment. This construction will derive from a strict application of Actor Network Theory as a methodology to the twin premises that ICTs are socio-technical systems and that society is an autopoietic system bound by communicative processes. Under this view the Smart City is treated as a single complex system, ranging from devices within the body to urban management systems - a total digital environment. This allows us to identify where ethical concerns must arise in any digital environment which responds to human activity. The resultant model will show that the logic of the urban digital environment necessarily generates particular ethical concerns, irrespective of the technology involved or the context in which it is deployed. It is hoped that others may find this model a useful approach, solving many of the analytic problems generated by the complex nature of our emerging digital world, and providing a practical methodology for future research and analysis.

Contribution 3: Wang, Qian; Yu, Xue: Technology and the City: From the Perspective of Organicism From the perspective of Organicism, technical artifacts are generated through a process in which humans confer the biological features of organisms on natural objects. This shows that the technological system bears organic features to a certain degree. Meanwhile, as the places where human society lives, cities are not only the joint supports of biological organisms, artificial organisms, social organisms and mental organisms, but also the platform for interaction among all these types of organisms. The constant development of cities is promoted by the interaction of these organisms. Conversely, the mutual restriction among them leads to different kinds of “city disease”, which are particularly prominent in modern cities. The rapid increase of complex artifacts results in the expansion of social organisms and the anxiety of mental organisms, and further affects the health of biological organisms, that is, humans. To achieve the sustainable development of modern cities, it is important that technology play its proper role in cities, adjusting the scale, growth rate and functional orientation of all types of organisms, with the aim of constructing a harmonious relationship among biological, artificial, social and mental organisms. In this respect, the idea from Chinese philosophy that “Tao models itself after nature” is illuminating, because “nature” is precisely the harmonious state of all types of organisms. To conclude, the “Tao” of the city's mode of development is to approach this harmonious state of organisms.

Contribution 4: Sadowski, Jathan: Parameters of Possibility: Envisioning and Constructing the Smart Urban Future In this talk, I bring clarity to an influential, but nebulous, vision of our urban future: the smart city. This movement is, in short: global in scale, with initiatives being rolled out all over the planet; driven by proponents with deep pockets of wealth and influence; and a lucrative opportunity with market projections in the billions or trillions of dollars (over the next five to ten years). There is no single definition or conception that those involved in making and studying the smart city agree to. However, two major corporations, IBM and Cisco, are at the forefront of creating the ways we understand, envision, and enact smart urbanism. Like other political projects, the smart city is a battle for our imagination. We should think of the corporate-driven movement—with its initiatives and ideologies, texts and technologies—as a campaign to direct and delimit what we can imagine as possible. Through a close, holistic analysis of discourses and initiatives produced by both companies, I flesh out the overarching narrative that structures this vision of the smart city. The narrative moves from crises as catalyst for change, to theories of technological transformation, to solutions for fixing cities, and finally to strategies for implementation. Importantly, the smart city is still a future-in-the-making. The values and politics embedded in this corporate model have not yet become inextricable parts of our urban future. By engaging with its features, we are better equipped to confront this vision of smart urbanism and reimagine alternative arrangements.


Track: Technology Translations (1)

Contribution 1: Kliemann, Ole: The Human in the Machine There is a growing tendency towards the anthropomorphisation of algorithmic technologies such as artificial intelligence. This paper argues that the »cognitive« performance of any algorithm lies solely in the assignment of semantic values to symbols. This assignment, however, is performed by the programmer, not the machine. The programmer soon forgets her own part in this play, making it seem as if the algorithm had cognitive abilities of its own and thus allowing the transferral of human attributes to the technology. Cognitivism follows the paradigm that cognition consists of a symbolic representation of the world and a symbolic computation to process this representation. Modifying an argument found in Merleau-Ponty, Maturana and Varela, one can argue that the real cognitive process does not lie in the symbolic computation but rather in the process of mapping semantic values of the world to symbolic representations. Looking at the Turing Machine as a model for any algorithm, i.e. a syntactic manipulation of symbol systems, reveals that every algorithm is fully deterministic in its input data. Considering an algorithm operating within the world, using a visual sensor for input data and a motor apparatus for action, it becomes apparent that the algorithm is nothing more than a deterministic mapping of input data to actions. What happens inside the algorithm can thus be considered trivial in any »cognitive« sense: it is simply a deterministic mapping between two sets. Everything depends on how the visual sensor maps its analogue input to a digital representation which is then used by the algorithm. It is the programmer who was aware of the mapping of semantic values to symbolic representations and constructed the algorithm accordingly. The programmer, looking simultaneously at both the semantic values and their symbolic representation, is the essential component in any algorithm performing a seemingly »cognitive« task. The algorithm simply borrows from the cognitive power of the human. Drawing on Merleau-Ponty’s argument that perception makes its own contribution to the world disappear, this paper thus argues that the programmer’s obliviousness to her cognitive contribution to the algorithm lays the foundation for the algorithm’s anthropomorphisation.
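
A minimal Python sketch of the argument, using an invented toy "robot" rather than any real system: the algorithm itself is only a deterministic lookup between symbol sets, while the semantics of those symbols, and hence anything that looks cognitive, is supplied by the programmer.

# The programmer decides what the sensor symbols mean ...
SENSOR_SEMANTICS = {0: "path is clear", 1: "obstacle ahead"}
# ... and what the output symbols mean.
ACTION_SEMANTICS = {"F": "move forward", "S": "stop"}

# The algorithm proper: a deterministic mapping between two sets of symbols.
POLICY = {0: "F", 1: "S"}

def algorithm(sensor_symbol):
    """Pure syntactic manipulation: the same input always yields the same output."""
    return POLICY[sensor_symbol]

for reading in (0, 1, 0):
    action = algorithm(reading)
    # Only by consulting the semantics tables (the programmer's contribution)
    # does the run acquire any "cognitive" interpretation at all.
    print(SENSOR_SEMANTICS[reading], "->", ACTION_SEMANTICS[action])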

Contribution 2: Torpus, Jan: Human Adaptivity to Responsive Environments Current media technologies – discussed under the notions of ubiquitous computing and biofeedback – are driven by the aim to disappear and become invisible and unperceivable. These technologies merge with the everyday environment or human body in a way that simulates a “natural” relation between humans and artefacts, without the need for media interfaces. They may become enhancing partners or ethically critical opponents gaining agency. In our currently running artistic research project (funded by the Swiss National Science Foundation) we empirically and theoretically examine the mechanisms that constitute the adaptivity of humans and their technological environments, the agencies that operate the systems and the effects these have on human self-understanding. We combine ubicomp and biofeedback technologies to immediately and affectively connect an artistic environment with a person immersed in it, thereby considering the human and the technological system as equal actors (Bruno Latour 2005) interplaying and mapping signals in real time in a human-in-the-loop ecosystem. The abstract installation is composed of a hanging, seemingly organic, textile structure creating a depriving situation, prepared with parametrically controllable, real-time reactive lights, 3D sounds, wind flows and dynamic volumes. Test persons were invited to put on a wireless sensor set (heartbeat, breath and acceleration), to explore the space and to give feedback about their experience in a subsequent interview. They developed strategies to master the unfamiliar environment, which provoked associations and imaginative activities. According to McCarthy and Wright (2007), visitors apply sense-making models such as iterative anticipating, connecting and interpreting to appropriate and assimilate an unknown situation. By creating cognitive maps or mental models (Johnson-Laird 1983) of devices, interactions and links across spaces with different features or properties, people guide their behaviour and live their experience. To investigate design principles of technologically extended environments, we use design patterns and theories of spatial design. Approaches like Edward Relph’s (1976) three types of spatial identity – a) the place’s physical setting, b) its activities, situations and events and c) its meanings created through people’s experiences and intentions – are based on human perception and interpretation rather than functionality, aesthetic composition or applicability, and are therefore an alternative starting point for designing future responsive everyday environments.
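
A minimal Python sketch, with invented signal ranges and mapping rules (the installation's actual parameters are not given in the abstract), of the real-time human-in-the-loop coupling described above: one frame of wearable sensor data is mapped to parameters of the reactive environment.

from dataclasses import dataclass

@dataclass
class SensorFrame:
    heart_rate_bpm: float
    breath_rate_bpm: float
    acceleration_g: float

def map_to_environment(frame):
    """Map one frame of body signals to environment parameters in the range 0..1."""
    # Hypothetical normalisation; a real system would calibrate per person and smooth over time.
    calm = max(0.0, min(1.0, (90.0 - frame.heart_rate_bpm) / 40.0))
    return {
        "light_intensity": 1.0 - calm,                     # higher arousal -> brighter light
        "sound_density": min(1.0, frame.breath_rate_bpm / 30.0),
        "wind_strength": min(1.0, frame.acceleration_g),   # more movement -> more wind
    }

print(map_to_environment(SensorFrame(heart_rate_bpm=72.0, breath_rate_bpm=14.0, acceleration_g=0.2)))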


Contribution 3: Moloney, Cecilia: Self-Attention in Learning and Doing Digital Signal Processing: Why and How Digital signal processing (DSP) is a ubiquitous and mainly invisible part of many technologies of the modern digital world; its importance is underscored by the slogan of Richard Lyons, “the future of electronics is DSP.” As such, undergraduate students of electrical and computer engineering are typically required to learn the rudiments of DSP, while many researchers and graduate engineers do research on DSP or use DSP across numerous application domains. Key among the challenges of practice in DSP is being adept and confident with theoretically challenging DSP concepts while translating knowledge into practice within the specifics of an application. Many DSP educators and textbooks/handbooks approach this challenge by attempting to make materials more intuitive, as well as more practical and applications-oriented. This paper starts from an alternative position, one rooted in self-attending to what the student/researcher/practitioner is doing while he/she is learning basic DSP or using DSP in practice. The research questions this paper addresses are: 1) Why is it important to focus on the conscious processes that students/engineers experience or perform when learning or using DSP in practice? More specifically, why might this approach be fruitful pedagogically or in terms of research and engineering outcomes? 2) How might students and engineers be trained to self-attend to their conscious processes so that they may become better researchers and practitioners of DSP? The arguments in the paper are grounded in philosophical accounts of the human subject (per Bernard Lonergan) and conscious performance analysis (per Mark Morelli), as well as in pedagogically and psychologically relevant backgrounds. While DSP is used to make the discussion concrete, the arguments may apply to any field of engineering with similar requirements for practitioners to successfully connect deep theory and application. The approach of this paper starts from philosophical models of the human subject in order to answer questions and devise strategies to benefit engineering pedagogy and practice. This paper may also feed back to philosophy, by providing a concrete analysis of practice in a field of engineering that has not been extensively investigated from philosophical positions on the human subject.
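
As a concrete, deliberately elementary illustration of the kind of step where DSP theory must be translated into practice (a generic Python example, not taken from the paper), the sketch below implements a 3-tap moving-average FIR filter directly from the convolution sum y[n] = sum_k h[k]*x[n-k], so that each operation a learner performs can itself become an object of self-attention.

def fir_filter(x, h):
    """Direct-form convolution of input sequence x with impulse response h."""
    y = []
    for n in range(len(x) + len(h) - 1):
        acc = 0.0
        for k, hk in enumerate(h):
            if 0 <= n - k < len(x):
                acc += hk * x[n - k]
        y.append(acc)
    return y

x = [1.0, 2.0, 3.0, 4.0]   # input samples
h = [1 / 3, 1 / 3, 1 / 3]  # impulse response of a 3-tap moving average
print(fir_filter(x, h))    # smoothed output of length len(x) + len(h) - 1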

Contribution 4: Almarza Anwandter, Juan: On the “Pathos of closeness”: Technology and experience in contemporary architectural practices. Interaction, accessibility and other related concepts have acquired a significant role in the context of the contemporary notion of architectural experience. Shifting away from the modernist aseptic ethos, in which the phenomenic experience of space was still conceived primarily from an optic perspective, our current conception of the relationship between body and space tends to privilege a haptic mode of interaction, in search of a more tactile, sensitive and performative form of experience, aided by the extensive use of new interactive technologies. Essentially, this conveys an effort to diminish the distance between subject and object inherent in the mere act of contemplation, thus surpassing the modernist conception of open form and transparency (still rooted in an essentially “Platonic-optic” paradigm). It is from this perspective that notions like interaction, accessibility, and ludic and sensitive experience have become relevant keywords that articulate the “grammar” of current architectural practices, with technology playing a key role as an active catalyst of this displacement. From within the field of architectural theory, this process must be analyzed not just in formal and spatial terms, but rather as a consequence of a wider historical and cultural process. In this sense, it is possible to affirm that this tendency towards “closeness” and direct interaction as privileged ways of phenomenic experience is conceptually linked to the definite collapse of the Modern metanarratives and their teleological character based on the utopia of unlimited progress. In the absence of ideals and sublime visions of prospective transcendence, the immanent experience of the here-and-now becomes a prevailing and dominant narrative. The Pathos of distance (Nietzsche), understood here as a driving force towards the consummation of a “higher and supreme ideal”, is replaced by the Pathos of closeness, and its consequent de-sublimation process. Based on relevant case studies taken from contemporary architectural practices, the presentation will focus on the formal and theoretical aspects involved in the current conception of phenomenic interaction between body and space mediated by interactive technologies, also including complementary references taken from artistic practices and cultural trends, in order to establish a conceptual cross-link framed by the proposed theoretical subject.


Track: Technology Translations (2)

Contribution 1: Luan, Scott: Grammars of Creation I will advance the notion of “structure of intention,” a notion that would help solve some philosophical problems concerning the metaphysics of artifacts. I will also attempt to bridge the literature on the making of textilic/craft and technical/technological artifacts. I will argue that, in the making of certain technical artifacts, deliberation happens not “in the head,” but rather deliberation is the act of working with materials. Consequently, the structure of intention of the maker is reflected in or corresponds to the material structure of the artifact. Such correspondence can be explicated for certain technical artifacts as the chiastic intertwining of intention and material. The mirrored inversion of the chiasmus entangles an originary intention that turns matter upon itself, an intention that is knotted into matter. Such correspondence concerns the fundamental question of how intentionality figures in the physical world. This question can be approached by focusing on artifactual function, function ascription, and what Daniel Dennett calls artifact hermeneutics. I submit a different approach that inquires into the conditions for the possibilities of interpretation. The structural correspondence between intention and material can be understood as a “prior meaning” from which the call to interpretation arises.

Contribution 2: Dot, Anna: Translationscapes and their multidimensional articulations. An approach to borders from the project "On Translation", by Antoni Muntadas The notion of translationscapes was coined by Italian professor Annarita Tarona in 2009 to designate the set of panoramas in which we find an articulation of the complex relationship between the historical, linguistic and political dimensions of acts of translation and the different agents involved in them (among which Tarona focuses especially on nation states, diasporic communities, regroupments and subnational movements). Interdisciplinary research situated between Translation Studies and Border Studies has developed the theory that translation takes place in the borderlands (Anzaldúa, 1987; Godayol, 2000), where contact with the Other is possible. Accordingly, in this paper we locate translationscapes in the border zone in order to interpret the constellation of multidimensional forces and powers that shape mental and territorial borders as artefacts, and the set of social relationships that become articulated in and across them. To this end, the research is based on the study of On Translation, a project started in 1995 by the Spanish artist Antoni Muntadas, which brings together a series of "time-and-site-specific" works presenting critical approaches to experiences of intercultural translation taking place internationally.

Contribution 3: Mollicchi, Silvia: Mediums as languages: a transcendental naturalist approach to the relation between mediums and representation. If we were to reverse Kittler’s claim and, instead of considering language a medium, consider every medium as a language, we would have to radically rethink our conception of mediums. We would have to take seriously the signification-function of mediums, alongside their communication-function, which seems to have been devalued within the prevailing concept of medium constructed over the course of the twentieth century. This research is premised on a re-examination of the relation between mediums and epistemology and on the admission of the formal distinction between appearance and reality, working towards a definition of mediums as the irreducible ‘structures’ through which we inform all of our representations of reality. I will also problematize the use of the word ‘structure’ in the above definition, since here it could be replaced by a variety of terms, such as entity or system, all to avoid the fact that the more appropriate term to use would, in fact, be medium. Here we touch upon the circularity so common to the ‘grammar’ of mediums. Within the framework of the conference, I will suggest that we need to look into the relations not only among things, but between things and their representations, and, in cases where the things in question already engender representations, between these and their second-order representations. In this paper, I will first justify the choice of both affirming the irreducibility of mediums and reconsidering mediums within the framework of epistemology, a decision taken in opposition to currents of new materialism and the anti-representational stance popular within British media theory. Then I will make a case for an alternative: a transcendental naturalism that engenders a powerful review of the notion of representation without entailing transcendence, and combines the functions of know-how and know-that without flattening them onto each other. I will also use this idea of representation to rephrase the initial definition provided for mediums. In the last segment, I will turn towards possible sites in which to test this definition of mediums, mostly drawing examples from modernist poetry as an alternative to both expressionism and formalism.


Track: Technology Translations (3)

Contribution 1: Ziegler, Barbara: Unmanned aerial systems in armed conflict: Synergies between historical and philosophical perspectives The technological development of unmanned aerial systems (commonly referred to as drones) and their deployment in military conflicts around the world has garnered immense attention in the past decade. While legal and policy-related implications of their use, as well as technological foreshadowing of their potential, have mostly stood at the center of academic attention, philosophical approaches have also critically reflected on their inherent qualities as well as their socio-political and economic effects. However, no publication has seriously looked at the origins of remote aerial weaponry and at what it means to locate, surveil, and attack military opponents with remote technology from a global history perspective. This contribution will closely analyze how the discipline of historical science, and in particular the relatively new and complex subdiscipline of global history, can contribute to the academic discussion around the subject matter. Contextualizing the perceived novelty of remote aerial combat through a global history approach enables an enriched understanding of core questions about the use of drones in armed combat. Building on theories of new and asymmetrical forms of warfare, the purpose of this paper is to establish the view that using drones in war is part of a broader debate about war in the globalized “post-modern” age. After elaborating on how the usage of drones opens up notions of space and time and reframes other categories central to the study of war, I will show how remote aerial warfare is also deeply embedded in power relations. These aspects are not only of central importance to the study of drones from a philosophical-technical perspective, but also constitute a core characteristic of military and global history approaches. They also matter for theories and approaches of political science, philosophical materialism and anthropology. Although this interplay of scientific disciplines and approaches can be considered a significant advantage for the realization of academic progress, it should not be pursued without great caution. Fighting with remote aerial weaponry needs a more distinct and informed debate, which is why this article will end by highlighting the need to complement the dominant legal and policy-related approaches concerning the use of drones with a broader and more contextualized way of studying the characteristics and implications of drone warfare.

Contribution 2: Raffetseder, Eva-Maria: Epistemological Assumptions Inscribed in Process Management Software. The practices of registering, tracking and controlling all kinds of phenomena in life and work environments through algorithms are continuously expanding in order to make ever more parameters accessible. Behavioral patterns and performances of humans are being quantified and processed so that the underlying processes can be algorithmically controlled. This development can be empirically observed in the ubiquitous use of process management systems in all kinds of organizations, companies as well as universities and hospitals. This contribution provides a media-archaeological perspective on Salesforce, a widely used process management system, uncovering some of the epistemological implications embedded in its code. The paper suggests that the practices of registering and controlling work developed at the beginning of the 20th century, as implemented by Frederick W. Taylor, are materialized in contemporary code structures. It then debates whether today’s practices of working with digital process management systems to strategically optimize work processes rest on the same epistemological assumptions as Taylorism, in which case one could speak of a “Digital Taylorism”. In order to investigate the continuities, but also the discontinuities and further development, of Taylor’s ideas, the author proposes four categories of epistemological assumptions in Taylor’s Scientific Management for the analysis.

Contribution 3: Bristol, Terry: Design Communication and Innovation Policy. Creative communication is essential to the engineering design process. MIT engineer Louis Bucciarelli, working from Wittgenstein’s language games, points out that electrical and mechanical engineers have a communication problem in designing electro-mechanical devices. The electrical engineers’ language game reflects their worldview of currents, capacitors, and transistors. The mechanical engineers’ language game reflects their worldview of levers, pulleys, and material stresses. These language games are incommensurable; they don’t translate. This poses a real communication problem in the design of electromechanical devices. Bucciarelli’s example is a token of a type of communication problem that is more than the problem of a simple plurality of perspectives that compromise might resolve. The mechanical worldview is Newtonian and the electrical worldview is Maxwellian. Just as particles and waves are complementary, so too are the languages of these engineering frameworks. Complementarity is ubiquitous in successful design, from the design of the irrigation of our fields, of our houses and cities, of our businesses and economic policies, to the design of our political and moral systems. Free market individualism and democratic socialism represent two complementary design agendas. The communication problem between these two language games is easily recognized. When both sides recognize the perspectives to be ‘essentially contestable’, each valid but limited, creative ‘enlightened dialogue’ begins. I will argue that all successful design advances involve the creative emergence of a conceptual middle ground. A language develops, with a new grammar, syntax and logic, that formally subsumes and supersedes the initial language games. What, then, does this creative communication problem (that is, opportunity) reveal about innovation policy and about what constitutes a good design? John Dewey referred to the middle-ground design agenda as ‘the construction of the good.’

Track: Technology Translations (4)

Contribution 1: Wagner, Nils-Frederic: Doing away with the agential bias: Agency and patiency in health monitoring applications. Persuasive health monitoring applications pose novel questions that lie at the intersection of philosophy and technology. Such apps not only collect sensitive data, but also aim at persuading users to change their lifestyle for the better. The present paper investigates the grammar of persuasiveness in the context of the different disciplines involved in developing these devices. Whereas persuasion has a largely positive connotation in social psychology and health science (Cialdini et al., 2005), its reputation in the philosophy of action is rather seedy; persuasion is sometimes seen as akin to (or at best in between) manipulation and convincing (Spahn, 2012). A major concern is that persuasion is paternalistic, as it intentionally changes someone’s beliefs, chipping away at the agent’s autonomy. This worry derives from the philosophical conviction that perhaps the most salient feature of living a fulfilled life stems from our being agents as opposed to patients: our lives go well in virtue of what we do, rather than what happens to us (Lott, 2016). Now, at first glance it seems evident that being persuaded by a device telling us how to conduct our lives renders the agent passive, a mere recipient of technological commands. However, there are several reasons why this claim doesn’t hold up to scrutiny. For one, it is unclear whether an action can count as agential only so long as its causes are internal. Drawing on an extended-mind framework, it can be argued that health monitoring apps merely serve as an aid to the agent’s internal volition: goals that have been set autonomously can be achieved more effectively. So, to be persuaded by an app does not mainly (let alone exclusively) emphasize patiency; on the contrary, it can be an effective tool to technologically enhance agency. In order to find common ground between the disciplines when developing health monitoring applications, semantic alignments and translations are needed that enable linguistic coherence within the interdisciplinary conversation. In a second moment of reflection, this discussion indicates that there is another, equally significant side of agency (Reader, 2007): molding the world inevitably comes along with being molded by it.

Contribution 2: Preston, Beth: Sustainable technology in action: Intractable users and behavior-steering technologies. What makes a technology sustainable? It is tempting to think about this question only in terms of the things involved. Are their materials recyclable? Their operations energy efficient? However, as philosophers of technology have long insisted, technology is not just things but techniques: activities involved in making and using things. These activities have a separate bearing on sustainability, for otherwise sustainable technologies can be used in unsustainable ways. Responses to such problems typically focus either on educating users or on redesigning the technology to suppress undesirable uses. But questions have been raised about the ethical and political dangers of such behavior-steering technologies. In Section One I distinguish three categories of users who pose problems for sustainable technologies: clueless users, stymied users, and intractable users who intentionally use technology for purposes other than those for which it was designed. Clueless and stymied users respond well to education or redesign; intractable users do not. In Section Two I argue that this intractability is rooted, on the one hand, in the inherently improvisational nature of human agency and, on the other hand, in the multistability of artifact functions and structures, which lend themselves to such improvisation. This explains the practical difficulty of countering intractable use with behavior-steering measures. But such measures also raise ethico-political concerns, which I address in Section Three. Some claim that behavior-steering technologies infringe individual freedom and subvert democratic social processes. Others downplay these issues, arguing that constraints on individual freedom can be limited and that the design process itself can be democratized. I argue that this debate misses the deeper issue of epistemic injustice. This may occur when the testimony of intractable users is not regarded as credible, or when the hermeneutic resources that underwrite their understanding of their situation are not acknowledged, thus harming them in their status as knowledgeable actors. Sustainable technology requires a response to the problem of intractable users. For this response to be effective, it must take the improvisational nature of human agency into account; for it to be ethical, it must avoid epistemic injustice.

Contribution 3: Shew, Ashley: Walk This Way. The disability rights slogan “Nothing About Us Without Us” asserts that disabled people should be part of conversations about disabled people. Still, assumptions about disabilities play a large part in how technologies get built and situated around disabled bodies. Focusing on mobility technologies, this presentation summarizes two sets of considerations: first, the approaches people take when engineering for disabilities, and, second, the approaches people take when they choose technologies for their own mobility. I turn specifically to exoskeletons, prosthetic limbs, walkers, canes, crutches, and other technologies of walking to discuss the ways in which the engineering and marketing of emerging tech around walking often include ableist assumptions about what it means to be a good human. These assumptions include a specific type of techno-optimism that I call techno-ableism: the assumption that a good life for a disabled person means being fixed, or passing as non-disabled, through the use of technology. In this way, engineering for disability gets lauded as heroic, even when the technologies produced are not as helpful as imagined. Movements in disability studies and activism have highlighted who the “real experts” about disability ought to be, but engineering projects on the topic continue to place nondisabled people in positions of authority. As Jillian Weiss writes, “For a while, all the experts on African-Americans were white. All the experts on lesbians were Richard von Krafft-Ebing. All the experts on cyborgs were noninterfaced humans.” This paper sorts out the differences in approach and orientation between various communities, and shows how translation between groups can be difficult in the face of unrecognized ableist assumptions.

Contribution 4: Fried, Samantha: Picking a Peck of Pickled Pixels: Thinking Through the Pixel Paradigm in Terrestrial Remote Sensing. Remotely sensed images, or digital images captured by satellites, are often treated by ecologists as neutral, objective representations of the earth's terrain. In this presentation, I argue that satellite design, image capture, and image analyses are all highly technosocial, complex processes that can yield multiple, potentially conflicting truths. I make this argument by thinking with, and against, pixels. Pixels are the base unit of these remotely sensed digital images; they represent swaths of land tens to hundreds of meters wide; and they are discrete. That is, pixels are squares that can only represent one kind of terrain: "urban," "forest," "agriculture," and so on. However, the earth is not neatly composed of the landcover categories "urban," "forest," "agriculture," and so on. Additionally, these categories themselves are not inherent to the earth, and they do not necessarily have a singular meaning across the remote sensing community. After all, not all scientists reading and analyzing these images have the same training or foci, or use the same statistical methods. I offer multiple situations where a singular pixel category cannot be reached, because remote sensing scientists offer conflicting, yet equally plausible, interpretations of their imagery. Furthermore, I explore the ways in which the paradigm of pixels, one that seeks to collapse truths at many levels, plays to an instrumental, or goal-oriented, approach within ecological research. Finally, I seek to begin a dialogue within the remote sensing community, one that disrupts singular, collapsed truths with multistabilities and feminist epistemologies.
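For readers outside remote sensing, the following minimal sketch illustrates the hard, one-category-per-pixel assignment that the abstract calls the "pixel paradigm," and what such a collapse discards. It is an editorial illustration only, not drawn from Fried's material: the class labels and probability values are invented, and the argmax step stands in for whatever classifier a given pipeline might actually use.

```python
import numpy as np

# Hypothetical landcover classes for a toy 2x2-pixel scene.
CLASSES = ["urban", "forest", "agriculture"]

# Assumed per-pixel class probabilities (shape: height x width x classes);
# the values are invented purely for illustration.
probs = np.array([
    [[0.48, 0.47, 0.05],   # near-tie between "urban" and "forest"
     [0.10, 0.85, 0.05]],
    [[0.20, 0.30, 0.50],
     [0.34, 0.33, 0.33]],  # three almost equally plausible readings
])

# The "pixel paradigm": collapse each distribution to a single category.
hard_labels = probs.argmax(axis=-1)

for row in range(probs.shape[0]):
    for col in range(probs.shape[1]):
        label = CLASSES[hard_labels[row, col]]
        dist = dict(zip(CLASSES, probs[row, col].round(2).tolist()))
        # The argmax keeps only the winning label; the runner-up
        # probabilities (the conflicting yet plausible readings) are lost.
        print(f"pixel ({row},{col}): hard label = {label!r}, distribution = {dist}")
```

Retaining the full per-pixel distribution, rather than only the winning label, is one place where the conflicting yet plausible readings discussed above remain visible.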


List of Participants


• Aagaard, Jesper (Aarhus University, Denmark) [mail: [email protected]]
• Allo, Patrick (University of Oxford, United Kingdom) [mail: [email protected]]
• Almarza Anwandter, Juan (Technische Universität Berlin, Germany) [mail: [email protected]]
• Ammon, Sabine (TU Berlin, Germany) [mail: [email protected]]
• Aravena-Reyes, Jose (UNIVERSIDADE FEDERAL DE JUIZ DE FORA, Brazil) [mail: [email protected]]
• Aydin, Ciano (University of Twente/Delft University of Technology, Netherlands) [mail: [email protected]]
• Barbaza, Remmon (Ateneo de Manila University, Philippines) [mail: [email protected]]
• Barrientos, Jose (Universidad de Sevila, Spain) [mail: [email protected]]
• Beinsteiner, Andreas (University of Innsbruck, Austria) [mail: [email protected]]
• Besmer, Kirk (Gonzaga University, USA) [mail: [email protected]]
• Blok, Vincent (School of Social Sciences, Wageningen University, Netherlands) [mail: [email protected]]
• Boella, Guido (Dept. Computer Science, University of Torino, Italy) [mail: [email protected]]
• Boeva, Yana (York University, Toronto, ON, Canada, Germany) [mail: [email protected]]
• Boon, Mieke (University of Twente, Netherlands) [mail: [email protected]]
• Borgo, Stefano (Laboratory for Applied Ontology, ISTC-CNR, Trento, Italy) [mail: [email protected]]
• Böschen, Stefan (ITAS / KIT, Germany) [mail: [email protected]]
• Bowe, Brian (Dublin Institute of Technology, Ireland) [mail: [email protected]]
• Breuer, Irene (Bergische Universität Wuppertal, Germany) [mail: [email protected]]
• Brey, Philip (University of Twente, Netherlands) [mail: [email protected]]
• Bristol, Terry (Portland State University, Institute for Science, USA) [mail: [email protected]]
• Budrevicius, Algirdas (Vilnius University, Lithuania) [mail: [email protected]]
• Buongiorno, Federica (Freie Universität Berlin, Germany) [mail: [email protected]]
• Bustamante, Javier (Universidad Complutense de Madrid, Spain) [mail: [email protected]]
• Calafiore, Alessia (Dept. Computer Science, University of Torino, Italy) [mail: [email protected]]
• Cano, Pablo (Universidad Nacional Autónoma de México, Mexico) [mail: [email protected]]
• Carvalho, Tiago (FCUL, Portugal) [mail: [email protected]]
• Castiglioni, Sara (ITBA - Instituto Tecnologico de Buenos Aires, Argentina) [mail: [email protected]]
• Chakrabarty, Manjari (VISVA BHARATI UNIVERSITY, India) [mail: [email protected]]
• Chandrasekharan, Sanjay (Homi Bhabha Centre for Science Education, TIFR, India) [mail: [email protected]]
• Checketts, Levi (Graduate Theological Union, USA) [mail: [email protected]]
• Chen, Fan (Northeastern University, China) [mail: [email protected]]
• Chen, Jia (Northeastern University, China) [mail: [email protected]]
• Chen, Ximeng (Zhejiang University, China) [mail: [email protected]]
• Chestnova, Elena (Istituto di storia e teoria dell’arte e dell’architettura, Accademia di Architettura, Università della Svizzera italiana, Switzerland) [mail: [email protected]]
• Claudia, Eckert (Open University, United Kingdom) [mail: [email protected]]
• Coeckelbergh, Mark (University of Vienna, Austria) [mail: [email protected]]
• Cong, Hangqing (Zhejiang University, China) [mail: [email protected]]
• Conlon, Eddie (Dublin Institute of Technology, Ireland) [mail: [email protected]]
• Conty, Arianne (American University of Sharjah, United Arab Emirates) [mail: [email protected]]
• Cressman, Darryl (Maastricht University, Netherlands) [mail: [email protected]]


• Cruz, Cristiano C. (University of Sao Paulo, Brazil) [mail: [email protected]]
• Cutler, Mark (University of Queensland, Australia) [mail: [email protected]]
• Dainow, Brandt (Maynooth University, Ireland) [mail: [email protected]]
• Date, Geetanjali (Homi Bhabha Centre for Science Education, T.I.F.R., India) [mail: [email protected]]
• Dazhou, Wang (University of Chinese Academy of Sciences, China) [mail: [email protected]]
• de Boer, Bas (University of Twente, Netherlands) [mail: [email protected]]
• De Keijser, Anais (Graduate School of Urban Studies (UrbanGRAD), Germany) [mail: [email protected]]
• de Melo-Martin, Inmaculada (Weill Cornell Medicine--Cornell University, USA) [mail: [email protected]]
• Dicks, Henry (University Jean Moulin Lyon 3, France) [mail: [email protected]]
• Dong, Xiaoju (Institute of Science, Technology and Society, School of Social Sciences, Tsinghua University, China) [mail: [email protected]]
• Doorn, Neelke (Delft University of Technology, Netherlands) [mail: [email protected]]
• Dorrestijn, Steven (Saxion, University of Applied Sciences, Netherlands) [mail: [email protected]]
• Dot, Anna (Universitat de Vic - Universitat Central de Catalunya, Spain) [mail: [email protected]]
• Earle, Joshua (Virginia Tech, USA) [mail: [email protected]]
• Eckert, Claudia (The Open University, United Kingdom) [mail: [email protected]]
• Edens, Sam (VU University, Netherlands) [mail: [email protected]]
• Elder, Alexis (University of Minnesota Duluth, USA) [mail: [email protected]]
• Eloy, Sara (ISTCE-IUL, ISTAR-IUL, Portugal) [mail: [email protected]]
• Epting, Shane (University of Nevada, Las Vegas, USA) [mail: [email protected]]
• Ercegovac, Ivana (Fujairah Women's College, Higher Colleges of Technology, United Arab Emirates) [mail: [email protected]]
• Erler, Alexandre (Anatolia College-ACT, Greece) [mail: [email protected]]
• Exner, Konrad (Fraunhofer IPK, Germany) [mail: [email protected]]
• Feenberg, Andrew (Simon Fraser University, Canada) [mail: [email protected]]
• Fleck, Claudia (Institute of Technology Berlin, Germany) [mail: [email protected]]
• François, Karen (Free University Brussels (Vrije Universiteit Brussel), Belgium) [mail: [email protected]]
• Franssen, Maarten (Delft University of Technology, Netherlands) [mail: [email protected]]
• Fried, Samantha (Virginia Tech, USA) [mail: [email protected]]
• Friedrich, Alexander (Technische Universität Darmstadt, Germany) [mail: [email protected]]
• Frigo, Giovanni (University of North Texas, USA) [mail: [email protected]]
• Fritzsche, Albrecht (University Erlangen-Nuremberg, Germany) [mail: [email protected]]
• Fuentes Palacios, Aníbal (Universal Projects, Chile) [mail: [email protected]]
• Fujiki, Atsushi (National Institute of Technology, Kurume College, Japan) [mail: [email protected]]
• Funk, Michael (University of Vienna, Austria) [mail: [email protected]]
• Furia, Paolo (University of Turin, Italy) [mail: [email protected]]
• Gaillard, Maxence (Rikkyo University, Japan) [mail: [email protected]]
• Galzacorta, Iñigo (University of the Basque Country UPV/EHU, Spain) [mail: [email protected]]
• Garagalza, Luis (University of the Basque Country UPV/EHU, Spain) [mail: [email protected]]
• Garcia, José Luís (ICS, Institute of Social Sciences, Universidade de Lisboa, Portugal) [mail: [email protected]]


• Gardoni, Paolo (University of Illinois at Urbana-Champaign, USA) [mail: ]
• Geerts, Robert-Jan (University of Twente, Netherlands) [mail: [email protected]]
• Gonzalez Woge, Margoh (University of Twente, Netherlands) [mail: [email protected]]
• Gransche, Bruno (Universität Siegen, Germany) [mail: [email protected]]
• Greif, Hajo (Technische Universitaet Muenchen, Germany) [mail: [email protected]]
• Guarino, Nicola (ISTC-CNR, Italy) [mail: [email protected]]
• Güneş, Serkan (Gazi University, Turkey) [mail: [email protected]]
• Hamilton, Edward (Capilano University, Canada) [mail: [email protected] / [email protected]]
• Hansson, Sven Ove (Royal Institute of Technology (KTH), Sweden) [mail: [email protected]]
• Hasse, Hans (Laboratory of Engineering Thermodynamics (LTD), University of Kaiserslautern, Germany) [mail: [email protected]]
• Hennig, Christian (UCL, United Kingdom) [mail: [email protected]]
• Henze, Andreas (Graduiertenkolleg Locating Media, University of Siegen, Germany) [mail: [email protected]]
• Hernández Vargas, José (Universal Projects, Chile) [mail: [email protected]]
• Herrera, Rayco (Universidad de La Laguna, Spain) [mail: [email protected]]
• Hillerbrand, Rafaela (KIT Karlsruhe University, Germany) [mail: [email protected]]
• Hoek, Jonne (University of Twente, Netherlands) [mail: [email protected]]
• Hung, Ching (University of Twente, Taiwan) [mail: [email protected]]
• Ihde, Don (Stony Brook University, USA) [mail: [email protected] / [email protected]]
• Imanaka, Jessica (Seattle University, USA) [mail: [email protected]]
• Jablonowski, Maximilian (University of Zurich, Switzerland) [mail: [email protected]]
• Jayanti, Ashwin (Jawaharlal Nehru University, India) [mail: [email protected]]
• Jenek, Julius (TU Berlin, Germany) [mail: [email protected]]
• Jenkins, Daniel (University of Maryland Baltimore County, USA) [mail: [email protected]]
• Jensen, Ole B. (University of Aalborg, Denmark) [mail: [email protected]]
• Jerónimo, Helena Mateus (ISEG, Lisbon School of Economics and Management, Universidade de Lisboa, and Advance/CSG, Portugal) [mail: [email protected]]
• Ji, Haiqing (Philosophy Institute, Shanghai Academy of Social Sciences, China) [mail: [email protected]]
• Jia, Lumeng (University of Twente, Northeastern University, Netherlands) [mail: [email protected]]
• Jiang, Xiaohui (Research Center for Philosophy of Science and Technology, Northeastern University, China) [mail: [email protected]]
• Kanemitsu, Hidekazu (Kanazawa Institute of Technology, Japan) [mail: [email protected]]
• Kanzaki, Nobutsugu (Nanzan University, Japan) [mail: [email protected]]
• Kapsali, Maria (University of Leeds, United Kingdom) [mail: [email protected]]
• Karafyllis, Nicole (TU Braunschweig, Philosophy Dept., Germany) [mail: [email protected]]
• Karakas, Alexandra (Moholy-Nagy University of Art and Design, Hungary) [mail: [email protected]]
• Kerr, Eric (National University of Singapore, Singapore) [mail: [email protected]]
• Kesdi, Hatice Server (Gazi University, Turkey) [mail: [email protected]]
• Kirkpatrick, Graeme (University of Manchester, United Kingdom) [mail: [email protected]]
• Klauser, Francisco (University of Neuchâtel, Switzerland) [mail: [email protected]]
• Kliemann, Ole (Philosophisches Seminar CAU, Germany) [mail: [email protected]]


• Kono, Tetsuya (Rikkyo University, Japan) [mail: [email protected]]
• Kornwachs, Klaus (Universität Ulm, Germany) [mail: [email protected]]
• Kranc, Stanley (University of South Florida, USA) [mail: [email protected]]
• Kratzer, Jan (Institute of Technology Berlin, Germany) [mail: [email protected]]
• Krenak, Ailton (UNIVERSIDADE FEDERAL DE JUIZ DE FORA, Brazil) [mail: [email protected]]
• Kroes, Peter (Delft University of Technology, Netherlands) [mail: [email protected]]
• Kudina, Olya (University of Twente, Netherlands) [mail: [email protected]]
• Kukita, Minao (Nagoya University, Japan) [mail: [email protected]]
• Lemmens, Pieter (Radboud Universiteit, Netherlands) [mail: [email protected]]
• Lenhard, Johannes (Department of Philosophy, University of Bielefeld, Germany) [mail: [email protected]]
• Leskanich, Alexandre (Royal Holloway, University of London, United Kingdom) [mail: [email protected]]
• Lewis, Richard (Vrije Universiteit Brussel, Belgium) [mail: [email protected]]
• Liberati, Nicola (University of Twente, Netherlands) [mail: [email protected]]
• Liu, Zheng (Department of Philosophy, Peking University, China) [mail: [email protected]]
• Loh, Janina (University of Vienna, Austria) [mail: [email protected]]
• Loh, Wulf (Universität Stuttgart, Germany) [mail: [email protected]]
• Luan, Scott (State University of New York (SUNY) at Buffalo, USA) [mail: [email protected]]
• Luxbacher, Guenther (Institute of Technology Berlin, Germany) [mail: [email protected]]
• MacLeod, Miles (University of Twente, Netherlands) [mail: [email protected]]
• Martin, Diana Adela (Dublin Institute of Technology, Ireland) [mail: [email protected]]
• Matzner, Tobias (New School for Social Research, USA) [mail: [email protected]]
• Maurer, Florian (Vorarlberg University of Applied Sciences, Austria) [mail: [email protected]]
• Mazurenko, Anna (Saint-Petersburg Polytechnic University, Russia) [mail: [email protected]]
• Mendonça, Pedro Xavier (ISCEM, School of Business Communication, Lisboa, Portugal) [mail: [email protected]]
• Meunier, Gabriel (Independent scholar, Canada) [mail: [email protected]]
• Meyer, Henning (TU Berlin, Germany) [mail: [email protected]]
• Michelfelder, Diane (Macalester College, USA) [mail: [email protected]]
• Milivojevic, Tatjana (Faculty of Culture and Media, John Naisbitt University, Serbia) [mail: [email protected]]
• Miller, Glen (Texas A&M University, USA) [mail: [email protected]]
• Mitcham, Carl (Renmin University of China, USA) [mail: [email protected]]
• Mollicchi, Silvia (University of Warwick, United Kingdom) [mail: [email protected]]
• Moloney, Cecilia (Memorial University of Newfoundland, Canada) [mail: [email protected]]
• Montminy, David (Université de Montréal, Canada) [mail: [email protected]]
• Müller, Marcel (Critical Infrastructures: Construction, function crises, and protection in cities, Germany) [mail: [email protected]]
• Müller, Vincent C. (University of Leeds/Anatolia College-ACT, United Kingdom) [mail: [email protected]]
• Müürsepp, Peeter (Tallinn University of Technology, Estonia) [mail: [email protected]]
• Murphy, Colleen (University of Illinois at Urbana-Champaign, USA) [mail: ]
• Nagenborg, Michael (University of Twente, Netherlands) [mail: [email protected]]
• Naoe, Kiyotaka (Tohoku University, Japan) [mail: [email protected]]
• Nickel, Philip (Eindhoven University of Technology, Netherlands) [mail: [email protected]]


• Niculescu-Dinca, Vlad (Erasmus University Rotterdam, Netherlands) [mail: [email protected]]
• Nikiforova, Natalia (Saint-Petersburg Polytechnic University, Russia) [mail: [email protected]]
• Nordmann, Alfred (TU Darmstadt, Germany) [mail: [email protected]]
• Novitzky, Peter (University of Twente, Netherlands) [mail: [email protected]]
• Obukhova, Yulia (Saint-Petersburg Polytechnic University, Russia) [mail: [email protected]]
• Offert, Fabian (University of California, Santa Barbara, USA) [mail: [email protected]]
• Parviainen, Jaana (University of Tampere, Finland) [mail: [email protected]]
• Peterson, Martin (Texas A&M University, USA) [mail: [email protected]]
• Pitt, Joseph (Virginia Tech, USA) [mail: [email protected]]
• Polli, Andrea (University of New Mexico, USA) [mail: [email protected]]
• Popov, Dmitry (Saint-Petersburg Polytechnic University, Russia) [mail: [email protected]]
• Powers, Thomas M (University of Delaware, USA) [mail: [email protected]]
• Poznic, Michael (Karlsruhe Institute of Technology, Germany) [mail: [email protected]]
• Preidel, Maurice (TU Berlin, Germany) [mail: [email protected]]
• Preston, Beth (University of Georgia, USA) [mail: [email protected]]
• Puech, Michel (Université Paris-Sorbonne, France) [mail: [email protected]]
• Puga Gonzalez, Cristian (Arizona State University, USA) [mail: [email protected]]
• Putnam, El (Dublin Institute of Technology, Ireland) [mail: [email protected]]
• Raffetseder, Eva-Maria (Technische Universität München / MCTS Post/Doc Lab Digital Media, Germany) [mail: [email protected]]
• Reijers, Wessel (Dublin City University, Ireland) [mail: [email protected]]
• Ridell, Seija (University of Tampere, Finland) [mail: [email protected]]
• Riis, Søren (Roskilde University, Denmark) [mail: [email protected]]
• Robbins, Holly (TU Delft, Netherlands) [mail: [email protected]]
• Robison, Wade (Rochester Institute of Technology, USA) [mail: [email protected]]
• Rodríguez, Hannot (University of the Basque Country UPV/EHU, Spain) [mail: [email protected]]
• Romele, Alberto (University of Porto, Portugal) [mail: [email protected]]
• Rosenberger, Robert (Georgia Institute of Technology / McGill University, USA) [mail: [email protected]]
• Sadowski, Jathan (Delft University of Technology, Netherlands) [mail: [email protected]]
• Saijo, Reina (Hokkaido University, Japan) [mail: [email protected]]
• Santos, Alexandra Dias (Universidade Europeia, Portugal) [mail: [email protected]]
• Schiaffonati, Viola (Politecnico di Milano, Italy) [mail: [email protected]]
• Severo, Marta (University of Paris Ouest, France) [mail: [email protected]]
• Shew, Ashley (Virginia Tech, USA) [mail: [email protected]]
• Sidorchuk, Ilya (Saint-Petersburg Polytechnic University, Russia) [mail: [email protected]]
• Simon, Jonathan (Université de Lorraine, France) [mail: [email protected]]
• Simon, Judith (IT University of Copenhagen, Denmark) [mail: [email protected]]
• Sjöstrand, Björn (Södertörn University, Stockholm, Sweden) [mail: [email protected]]
• Smit, Renee (University of Cape Town, South Africa) [mail: [email protected]]
• Son, Wha Chul (Handong Global University, South Korea) [mail: [email protected]]
• Stacey, Martin (De Montfort University, United Kingdom) [mail: [email protected]]
• Stark, Rainer (TU Berlin, Germany) [mail: [email protected]]
• Stojanovic, Milutin (Faculty of Philosophy, University of Belgrade, Serbia) [mail: [email protected]]
• Stollery, Pete (University of Aberdeen, United Kingdom) [mail: [email protected]]
• Stolzenberger, Steffen (Technische Universität Braunschweig, Germany) [mail: [email protected]]
• Stone, Taylor (Delft University of Technology, Netherlands) [mail: [email protected]]


• Stufano Melone, Maria Rosaria (Politecnico di Bari, Italy) [mail: [email protected]]
• Suñe Llinas, Emilio (Complutense University of Madrid, Spain) [mail: [email protected]]
• Suri, Anshika (TU Darmstadt, Germany) [mail: [email protected]]
• Suzuki, Toshihiro (Sophia University, Japan) [mail: [email protected]]
• Szerszynski, Bronislaw (Lancaster University, United Kingdom) [mail: [email protected]]
• Thompson, Paul (Michigan State University, USA) [mail: [email protected]]
• Thomson, Jol (Jol Thomson, Germany) [mail: [email protected]]
• Thuermel, Sabine (Munich Center of Technology in Society, TU Muenchen, Germany) [mail: [email protected]]
• Tjostheim, Ingvar (Norwegian Computing Center, Norway) [mail: [email protected]]
• Toon, Adam (University of Exeter, United Kingdom) [mail: [email protected]]
• Torpus, Jan (Fachhochschule Nordwestschweiz, Hochschule für Gestaltung und Kunst, Switzerland) [mail: [email protected]]
• Tromp, Hans (Radboud University, Netherlands) [mail: [email protected]]
• Unger-Büttner, Manja (TU Dresden, Germany) [mail: [email protected]]
• Unsworth, Kristene (Drexel University, USA) [mail: [email protected]]
• Van Bendegem, Jean Paul (Free University Brussels (Vrije Universiteit Brussel), Belgium) [mail: [email protected]]
• Van Den Eede, Yoni (Vrije Universiteit Brussel, Belgium) [mail: [email protected]]
• Verbeek, Peter-Paul (University of Twente, Netherlands) [mail: [email protected]]
• Vermaas, Pieter (Department of Philosophy, Delft University of Technology, Netherlands) [mail: [email protected]]
• Verrax, Fanny (INSA (Institut National des Sciences Appliquées), France) [mail: [email protected]]
• von Schomberg, Lucien (School of Social Sciences, Wageningen University, Belgium) [mail: [email protected]]
• von Schomberg, Rene (European Commission, Belgium) [mail: [email protected]]
• Wagner, Nils-Frederic (University of Duisburg-Essen, Department of Philosophy, and Competence Centre Personal Analytics, Germany) [mail: [email protected]]
• Wang, Hao (University of Amsterdam, China) [mail: [email protected]]
• Wang, Jian (Research Center for Philosophy of Science and Technology, Northeastern University, China) [mail: [email protected]]
• Wang, Qian (Dalian University of Technology, China) [mail: [email protected]]
• Wang, Wei Min (TU Berlin, Germany) [mail: [email protected]]
• Weber, Jutta (University of Paderborn, Germany) [mail: [email protected]]
• Weiss, Dennis (York College of Pennsylvania, USA) [mail: [email protected]]
• Wellner, Galit (Tel Aviv University, Israel) [mail: [email protected]]
• Wiltse, Heather (Umeå University, Sweden) [mail: [email protected]]
• Wittkower, Dylan (Old Dominion University, USA) [mail: [email protected]]
• Xia, Baohua (Southeast University, Department of Philosophy and Science, China) [mail: [email protected]]
• Young, Mark (University of Bergen, Norway) [mail: [email protected]]
• Yu, Xue (Dalian University of Technology, China) [mail: [email protected]]
• Zhou, Liyun (Shanghai University, China) [mail: [email protected]]
• Ziegler, Barbara (University of Vienna, Austria) [mail: [email protected]]
• Zoglauer, Thomas (BTU Cottbus-Senftenberg, Germany) [mail: [email protected]]
• Zwart, Hub (Radboud University - Faculty of Science - Institute for Science, Innovation and Society - Department of Philosophy, Netherlands) [mail: [email protected]]
• Zwart, Sjoerd (TU Delft, Netherlands) [mail: [email protected]]
• Zwier, Jochem (Radboud University Nijmegen, Netherlands) [mail: [email protected]]

The Conference in a Nutshell

Graphic: Geiger


Maps of the Area

