
Technology & Society

social, philosophical and ethical implications for the 21st century

Francis Heylighen

Lecture Notes 2020-2021 Center Leo Apostel (CLEA) Vrije Universiteit Brussel

Preface

The following are the lecture notes for a course given to Master’s students in , ethics and media at the Vrije Universiteit Brussel. They contain all the material that the students need to know for the examination. In addition, the students normally need to prepare a presentation on a self-chosen topic within the same broad subject domain. In that presentation, they normally introduce a technology or theory and discuss some of the pros and cons of this application or perspective, so as to open a wider debate with the rest of the group of students. The score on the presentation counts for one third of the points on this course, the score on the examination (this material) counts for the other two thirds.

This course tries to give the students a deeper insight into what technology is, and how it affects human life on this planet. Given how pervasive and dominant technological systems have become in this 21st century, it is important to understand the dynamics that propel their ever-faster development. It is especially important to understand, on the one hand, the negative effects and dangers of this development, so that we can mitigate or evade them, and, on the other hand, its benefits and promises, so that we can further promote and enhance them.

It is customary in some circles to see all technological innovation as a harbinger of a utopian society without poverty, disease, or conflict. On the other hand, other people, especially in the humanities and ecological movements, tend to see technology as intrinsically oppressive, threatening and demeaning of nature and humankind. The present approach steers a middle course, examining both promises and perils, albeit with a long-term perspective that is mostly optimistic. In practice, the situation is highly complex, with a variety of different trends, forces and desires that are pulling society in contradictory directions. It is not always obvious which of those are good and which are bad: well-meaning innovations, such as social media, sometimes bring about unexpected negative side effects, such as echo chambers or false news, while slow trends that few people pay attention to, such as vaccination, may result in spectacular progress, such as the near vanishing of child mortality, when considered over the longer term.

It is fashionable now to say that all technological developments should be scrutinized for their ethical implications, and that engineers should not just think about building a system that does what it is intended to do, but reflect on the wider implications of releasing such a system into society. While this is correct, we should be careful to avoid judging complex technologies through knee-jerk, "moralistic" reactions, where we reject some innovation just because it frightens us or offends our cultural traditions or personal sensibilities. Just like engineering needs to be informed by ethics, ethics needs to be informed by engineering, and more widely by a deeper understanding of the system formed by all humans, their technological tools, and the natural ecosystems of which they form part. It is only on this broadest, global scale that we can appraise whether some innovation is more likely to be good or bad for humanity.

That requires a systems approach, which looks at the complex interactions between humans, machines and nature from a long-term, evolutionary perspective. That approach should also go into real depth, not taking developments at face value, but critically examining them and asking difficult questions about meaning and implications. Such critical analysis typifies philosophy. The present course will therefore combine insights from the philosophy and ethics of technology with the more engineering-inspired concepts proposed by systems theorists, cyberneticians and computer scientists, in the hope of arriving at a balanced appraisal.


Table of Contents

Preface

WHAT IS TECHNOLOGY?
   Defining technology
   The systems view of technology
   Extending human agency
   The mediating role of technology

PHILOSOPHICAL ATTITUDES TOWARDS TECHNOLOGY
   1. Instrumentalism vs. technological autonomy
   2. Neutrality vs. bias
   3. Techno-optimism vs. Techno-pessimism

THE EVOLUTION OF TECHNOLOGY
   A brief history of technology
   Anti-technology movements
   Mechanisms of technological evolution
   Diffusion of innovations
   Utility and value
   Effectiveness and efficiency
   Ephemeralization: doing more with less
   Reduction of friction
   Extension of cause and effect chains
   Dangers of reduced friction
   Cascading failures
   Accelerating change
   The technological singularity
   Technological capabilities extrapolated to the limit
   Return to Eden: a techno-utopia

DANGERS AND NEGATIVE SIDE EFFECTS OF TECHNOLOGY
   Technology effects tend to be unpredictable
   Technologies can be used for immoral purposes
   Technologies can make us lose control
   Technology tends to marginalize human values
   Technology can make us lose touch with reality
   Technology tends to create psychological parasites
   Technology threatens health and well-being
   Technology tends to upset the ecosystem
   Technology may produce unemployment
   Technologies can be monopolized by special interests
   Technology can amplify or reduce inequalities
   Technologies evolve too quickly for us to cope

HUMAN-TECHNOLOGY CO-EVOLUTION
   Technological niches
   Actor-network theory
   Human-technology symbiosis
   Transparent user interfaces
   Technology as mediator
   Technology affects our choices
   Mobilizing the user

TOWARDS AN ETHICS OF TECHNOLOGY
   Normative ethics
   Virtue ethics
   Deontological ethics
   The precautionary principle
   Utilitarian ethics
   Pragmatic ethics
   Side effect and dangers of ethical evaluation
   Towards an integrated technoethics
   Clarifying the utility of technology
   Individual human needs
   Needs of the socio-technological system
   Coordination needs

RECENT ISSUES IN THE PHILOSOPHY OF TECHNOLOGY
   Becoming
   Transhumanism
   Are AI programs truly intelligent?
   Neural networks
   Mind Uploading
   The global superorganism

REFERENCES
FURTHER READING

What is technology?

Defining technology

While we can all easily come up with examples of typical technologies, such as planes, cellular phones, or computers, it is not always obvious what should be included in the category “technology” and what should not. So, let us try to analyze that category more precisely. The economist John Kenneth Galbraith defined technology as:

“the systematic application of scientific or other knowledge to practical tasks”

This fits within a wider view of technology as applied science. By “science” we here understand knowledge that was developed and tested using formal, systematic methods, including mathematical modelling, observation and experimentation. “Application” then means using such knowledge to solve the real-world problems that society is confronted with, such as distributing food, caring for the ill, or educating children.

But technology is more than the use of knowledge. Moreover, technologies such as windmills, water clocks or chariots were developed well before there was any formal scientific knowledge on how to do so. So let us propose a more precise definition:

Technology applies advanced knowledge to develop systems that support people in achieving some desired objective.

Let’s unpack the different elements of that definition, starting with the crucial notion of system.

The systems view of technology

A system is an organized whole consisting of coupled components (Heylighen, 2014). These components are called subsystems, because each component typically consists itself of smaller components that are coupled to form an organized whole. For example, a car includes components such as the engine, fuel tank, wheels, seats and the exterior body. But the engine itself consists of many interconnected parts.

A system interacts with the rest of the world by receiving an input of matter, energy and/or information. That input is normally converted or processed into an output, which is then given back to the world. For example, industrial plants may process oil into plastic, iron ore into steel, grain into bread, or coal into electricity. Processing can also work with information alone, as when a spreadsheet processes data into diagrams, or with energy, as when an electric motor transforms electrical current into movement.

Coupling between systems means that output from one system is used as input by another system. For example, the battery of a car converts chemical energy into electricity that is used by other parts of the car, such as the lights and the spark plugs. The spark plugs convert that electricity into sparks, which trigger the explosion of fuel in the pistons. The pistons convert fuel, air and sparks into back-and-forth movement, which is then converted into circular movement of the wheels. This is finally transformed into forward movement of the whole car, but also into electrical current that recharges the battery. Thus, as a whole, the system “car” can be said to process fuel (its input) into forward motion (its desired output) and exhaust (its undesired output), via many intermediate couplings between its components.
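To make the notion of coupling more tangible, here is a minimal sketch in Python (the component names and loss factors are invented for illustration; they are not taken from the text), in which each subsystem is a function that converts its input into an output, and coupling simply means feeding one subsystem's output into the next:

def engine(fuel_energy):
    # converts the chemical energy of the fuel into mechanical energy;
    # most of the input leaves as undesired output (exhaust and heat)
    return fuel_energy * 0.3

def transmission(mechanical_energy):
    # converts the back-and-forth movement of the pistons into rotation of the wheels
    return mechanical_energy * 0.9

def wheels(rotation_energy):
    # converts rotation into forward motion of the whole car
    return rotation_energy * 0.95

# Coupling: the output of each subsystem becomes the input of the next one.
fuel_in = 100.0                                         # input of the system "car"
forward_motion = wheels(transmission(engine(fuel_in)))  # desired output
print(forward_motion)                                   # about 25.6 units reach the road

The exact numbers do not matter; the point is that the behavior of the whole system emerges from the chain of couplings, and that each stage also produces some undesired output.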

Such a technological system was designed to achieve one or more objectives. These are goals, values, needs or desires of the system's users. Simply put, an objective is what the user wants to achieve by using the system. This is typically in order to solve a problem or satisfy a need. But what a person using the system wants is not necessarily good in se. For example, sophisticated industrial systems have been designed to produce cigarettes, because many people want to smoke—even though most of them know it is bad for their health. In economic terms, the tobacco industry uses its technologies to satisfy an existing demand: it is sufficient that people are willing to pay money for something, to make that something into an objective for some technological system.

The next concept we need to unpack is how a system supports the achievement of an objective. Such achievement means that some change in the situation must be produced. Typically, a support system will produce a desired condition, such as food, clothes, or protection. For this, it will need to process some required input, which we will call resources, into the desired output, which we will call the product. For example, the system may help process grain into bread, fabric into clothes, or concrete into buildings. A support system may also reduce an undesired condition, such as a disease, pollution, or cold, by producing something that neutralizes it, such as an antibiotic, a catalytic converter or heating. It is important to note that a support system may either produce the desired objective directly, or just facilitate its production by people. For example, an automatic bread machine may produce the bread on its own, while an oven will merely help people to prepare it.

The next concept we need to clarify is the "advanced knowledge" needed to implement the system. When the functioning of a support system is very simple and intuitive, we call it a tool, rather than a technology. For example, the functioning of a hammer, a pencil, or a bucket is so obvious that we do not need any special knowledge or complicated instructions to use them. The functioning of a tool is transparent; that of a technological system is not. This is what makes technology potentially problematic: we depend on others, with special expertise, to get it to function, and we are unlikely to understand all of its implications, or to foresee how things may go wrong with it.

Let us then examine the technological knowledge needed to build a support system. This knowledge consists of procedures that specify which components are to be coupled in what way in order to obtain a system that produces the desired output, and does so in a manner that is both effective and, as we will further specify, efficient. This knowledge is advanced in the sense that it is not common knowledge, such as the use of a hammer, but based on extensive research by many people building on each other's results. This research is typically rooted in scientific theories. However, it also incorporates plenty of practical experience based on trial and error, in which many versions of the system were tried out, until all the things that commonly go wrong ("bugs") were discovered and remedied.

Finally, we must explain how this abstract knowledge is realized, applied or implemented in a concrete system. Technological systems, such as machines, typically consist of material components (“hardware”). But technologies can also be social, psychological, organizational, or computational. For example, software systems consist purely of abstract instructions on how to process information. Similarly, procedures for managing a complex organization may be implemented as a system of rules that efficiently direct the activity of the employees. When these procedures become so complex as to lose their transparency, we can properly view them as advanced technological systems. This type of system is sometimes called “orgware”.


Such systems embody or exteriorize the knowledge in an explicit, dependable form. That means that there is no need to understand their detailed functioning in order to use the system. The knowledge is embedded in the system. That also means that the useful application of this knowledge can be passed on to other people without these people needing to learn all the procedures. In this case, the system functions as a "black box". The user may know what goes into the system (input) and what comes out in return (output), but does not know the intermediate processing steps or components of the system. The black box is opaque, not transparent: its internal workings are hidden from the user. That makes it much easier to use—but more difficult to remedy if something goes wrong.

Extending human agency

The objectives of technological systems derive from the goals of their human users. Therefore, it is worth examining how humans try to achieve their goals. Cybernetics is the science that studies goal-directed systems (Heylighen, 2014). It can thus help us to understand both people and the systems they use. Humans are autonomous living systems with intrinsic needs and desires, goals and values. This means that they intrinsically prefer certain situations to others. For example, people prefer to be warm rather than cold, well fed rather than hungry, and safe rather than in danger. These values derive most fundamentally from our biological needs for survival, growth and development (Heylighen, 2020). Achieving such goals happens via interaction with the environment. According to cybernetics, such goal-directed interaction requires the following functions:


• perception: what is my present situation? Establishing this requires sensory organs or sensors, such as eyes and ears, that can perceive the features of the environment

• goals or values: in what way does the perceived situation differ from my ideal or preferred situation? This requires the ability to evaluate a situation as better or worse

• information processing: what could I do to bring the present situation closer to the desired one? This requires the ability to store and make sense of information, received from perception or retrieved from memory, which typically happens in the brain.

• action: how do I effectively change the perceived situation? This requires organs, called "effectors", that convert intentions into physical actions. Examples are hands, vocal cords and legs.

• feedback: to what extent was the action effective? By accurately sensing how far the result deviates from the intended goal, I can make the necessary corrections to my actions so as to get a better result. This is the control function that suppresses the inevitable errors, deviations or disturbances.

• challenges: these are phenomena originating in the environment that may either help or hinder me in achieving my goals, and thus influence my planned course of action. They include opportunities or resources (positive) and problems, dangers or obstacles (negative).

This defines a human being as an intentional agent: a goal-directed system that acts on its situation in order to achieve more of its goals or values (Heylighen, 2020). People are not the only agents: animals, autonomous robots, and organizations, such as firms, teams or universities, can also be seen as agents (also called "actors" in the context of social systems).

We can now characterize technology as an external support for agency: a collection of tools that make action more effective. That may happen by making it easier to achieve existing goals or values, e.g. producing food, or by producing more value than we could achieve without technology. But these tools may also enable reaching goals that were as yet unreachable, such as putting a human on the moon. Technology can thus be seen as augmenting our abilities for action.

The media theorist Marshall McLuhan saw media as an extension of the self, that is, as technological systems that extend natural human capabilities (Logan, 2010; McLuhan, 1994). Technologies thus change the way humans perceive, reason, decide, and act. Therefore, new technologies often have major psychological, physical and social impacts. McLuhan summarized this in his famous slogan:

We shape our tools and our tools shape us.

McLuhan noted that the use of technology extends the reach of body and mind: a bicycle can be seen as an extension of our legs; a telescope as an extension of our eyes; a pincer as an extension of our hand; a telephone as an extension of our voice, and a computer as an extension of our brain. Thus, when investigating a new technology, a good question to ask is: what does the system enhance, make possible, or accelerate?

However, McLuhan also observed that not all capabilities are enhanced to the same degree, and therefore technologies have an in-built bias, promoting certain senses or skills at the expense of others. For example, computer screens enhance vision, while ignoring touch, taste or smell. As a result, we may lose some of our abilities to finely discriminate between smells or tastes by spending most of our time interacting via computers. This brings us to the key notion of technology as a mediator between humans and their environment.

The mediating role of technology

Technological systems increase human power or ability to achieve goals. Following the scheme of agency as a perception-action-feedback loop between human and environment, we can distinguish at least the following general functions for technology, each followed by examples of systems that perform that function:

• amplifying perception: e.g. telescopes, glasses, satellites, infrared cameras, microphones

• amplifying action: e.g. cars, cranes, robotic arms, factories…

• amplifying information processing and storage: e.g. computers, books, archives, databases

• amplifying control: e.g. automation, thermostats, robots

As illustrated in the picture above, technology functions as an interface between the environment with its challenges and humans with their goals. That means that many of the inputs of the system "human" first pass through a layer of technological systems before they reach the individual. This input layer neutralizes, absorbs or filters out certain undesired inputs. For example, light that is too strong is filtered out by sunglasses, and cold is kept out by walls, heating installations and clothes. On the other hand, the layer amplifies or facilitates the entry of other, desired inputs. For example, a hearing aid amplifies sound, while indoor plumbing provides an input of drinkable water.

Vice versa, most of the outputs of humans also pass through a layer of technological systems before they affect the environment. For example, human waste is collected and carried away by toilets and sewers, and processed in sewage treatment plants before it is released into the environment. Human actions are commonly amplified, made more powerful or effective by technology, as when we use an electric lawnmower to cut grass, or an elevator to climb heights. In other cases, the technological system produces the desired outputs or goods, such as toys, clothes or bread, and the only human action needed is to program the functioning of the system.

In yet other cases, the technological layer bypasses human decision making, since perception, information processing and control are fully automated, so that the system can itself determine what actions it should undertake and when. For example, thermostatic heating equipment will automatically switch on or off in order to attain the desired temperature. Here, the only role left for the human is to specify the goal (e.g. the desired temperature) that the system is supposed to autonomously achieve and maintain.
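As a minimal illustration of such an automated perception-evaluation-action loop, here is a sketch in Python; the simulated room stands in for real sensor and heater hardware, and all names and numbers are invented for the example:

class SimulatedRoom:
    # hypothetical stand-in for real temperature sensor and heater hardware
    def __init__(self, temp=17.0):
        self.temp = temp
        self.heating = False

    def read_temperature(self):
        # perception: the room warms up while heating, cools down otherwise
        self.temp += 0.5 if self.heating else -0.2
        return self.temp

    def switch_on(self):
        self.heating = True

    def switch_off(self):
        self.heating = False

def thermostat(room, goal_temp=20.0, margin=0.5, steps=30):
    # automated control loop: the human only specifies goal_temp
    for _ in range(steps):
        temp = room.read_temperature()   # perception
        if temp < goal_temp - margin:    # evaluation against the goal
            room.switch_on()             # action
        elif temp > goal_temp + margin:
            room.switch_off()            # action
        # feedback: the next reading reveals the effect of the action

thermostat(SimulatedRoom())

Once the goal is set, perception, evaluation and action all happen inside the loop, without any further human decision.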

As we will discuss further, this mediating or interface function of technology has very deep implications for the relations between humans and the world in which they live: on the one hand, it increases the power humans have over that world; on the other hand, it insulates or isolates humans from that world.

Philosophical attitudes towards technology

Thinkers throughout the centuries have adopted different positions towards technology. These can be roughly classified according to three dimensions:

1. Instrumentalism vs. technological autonomy

Instrumentalism is the position that technology is intrinsically subordinated to humans. This philosophy sees technological systems as mere instruments that perform the function for which they are designed. That means that people can choose whether or not to use a system, and for what purpose they use it. For example, you may or may not use a hammer to break a stone, and you may or may not use an electronic calculator to compute how much 115 x 32 is. The main idea is that humans are ultimately the ones who decide how a technology is used.

Technological autonomy is the opposite position, according to which humans are subordinated to technology, and technologies evolve according to their own dynamics. Here the idea is that people have to adapt their lifestyle and work to the prevailing technological conditions. They do not really have a choice in the matter. For example, a worker at a conveyor belt in a factory will have to adapt his movements to how the pieces are presented to him by the machine. Similarly, a bank employee entering data into an information system will have to strictly follow the procedures defined by the computer software. More generally, as Marxists have argued, the material circumstances in a society, which are strongly dependent on the existing technology, dictate how people behave. And not even the owners, designers or engineers of the technological systems have control over them, because their development is too fast and complex for anyone to be able to control. Technological determinism is the related position that a society's technology determines the development of its social structure and cultural values.

Our own position will be intermediate between these two extremes, seeing technology as neither autonomous nor subordinate, but in a relation of mutual dependence with humanity. Human society and technology co-evolve, the one constantly adapting to the other. That means that humans invent or adapt technologies to fit their needs, but also that technologies create new needs and inspire new uses for humans. These in turn inspire further technological innovations, which suggest new goals, and so on.

2. Neutrality vs. bias

The position of technological neutrality is that technology is an objective reflection of human knowledge and desire. The idea is that systems are designed according to formal principles, as expressed e.g. in blueprints, computer programs, or mathematical models. While these designs may be very complex, they are ultimately transparent: you can always open the black box and see what each component is doing and why. The assumption is that there is an underlying scientific rationality: the system is, for example, designed to maximize this particular output, using such and such known laws.

In this view, the values or biases of the system are imposed by its human creators. If the effects of a technology are negative (e.g. people getting addicted to social media), that is either because its designers wanted to achieve that effect (e.g. in order to get more advertising revenue), or because the design failed to take into account certain factors (e.g. the dopamine-inducing effect of receiving messages), which could be corrected after more knowledge about the domain is developed. This is the position most common among scientists, technologists and engineers.

The position of non-neutrality is that technology through its very nature imposes values and biases, independently of the desires of its creators. Social scientists and philosophers, such as Jacques Ellul, commonly argue that technology imposes a biased worldview. This technological way of thinking promotes instrumental values such as efficiency, rationality, and material output, while neglecting other, more “human” values, such as love, wisdom, serenity, or intimacy. We will discuss this criticism in more detail later.

Here, we can already observe that technology effectively “colors” our view of the world by its very nature as a mediator or filter that regulates the interaction between human and environment. This filtering means that it will let certain signals pass, while obscuring others, thus potentially withholding valuable information. Cyberneticians have noted that, paradoxically, achieving better control over certain factors may make us less prepared when these factors get out of their usual range, because we have become blind to their normal variation. For example, the better insulated your house is, the less you will notice the storm building up outside. But if the storm becomes so strong that it blows off the roof of your house, you will suddenly be fully exposed without warning.

The approach that we will develop integrates the assumption of technological neutrality or rationality with the existence of biases in the following way: the more we become aware of the biases imposed by certain technologies, the better we can counter these biases by redesigning the technologies so as to correct for them. For example, we could support certain neglected values by developing technologies such as meditation apps that promote serenity, or online courses by great thinkers that promote wisdom. We can also try to change the culture to redress imbalances, e.g. by encouraging people to walk more in nature. Thus, we may regain some rational control over the negative effects of technology.

3. Techno-optimism vs. Techno-pessimism

The optimistic position is that technology is basically a force for the good. There are a lot of arguments for this position. Indeed, over the past few centuries technology has spectacularly improved the human condition, by increasing the wealth, life expectancy, education level, safety, and comfort of the world population. This progress is not just physical: as material needs are more efficiently satisfied, people pay more attention to higher, moral and intellectual needs. It seems as if technology has the power to solve all remaining human problems, such as hunger, disease, poverty and illiteracy. An extrapolation of on-going advances leads techno-optimists to envisage a techno-utopia (Heylighen, 2015): an ideal world in which all human needs are fully satisfied.

The pessimistic position sees technology basically as a negative factor. This is based on arguments such as the following. Technology has alienated us from our natural, human condition and thus made us unhappy and stressed. Moreover, we become ever more dependent on technology, while understanding it ever less, thus losing the little control we still had. Worst of all, technology has the power to eradicate humankind. It has created so-called existential risks: dangers of a complete annihilation (Bostrom, 2013; Ord, 2020). These include a nuclear war that kills all life on Earth, the release of deadly viruses created by bioengineering, climate collapse, or a take-over by robots or computers that have become more intelligent than humans.

Our position will be more balanced, considering both promises and perils. Indeed, technology can be used positively or negatively, but we need to understand it well enough to be able to control its effects. We also need to develop effective regulations and guidelines to make it work for the better. Therefore, there is a need for an ethics of technology informed by a deep understanding of what technology is: a techno-ethics.

The evolution of technology

A brief history of technology

Humans are not the first species to develop sophisticated external support systems. Many animals use and even build tools. For example, certain fish use stones to crack open shells, termites build huge, air-conditioned termite hills with thousands of tunnels and chambers from hardened mud, and beavers build dams on rivers from trees and branches they cut. Some animals, such as apes and certain birds, even learn how to use and construct simple tools by imitating others.

The first human tools were probably sharpened sticks for digging, and wooden spears and clubs for hunting. These were later extended with stone knives and spear points. The technology of controlled fire allowed prehistoric humans to harness energy for cooking food, clearing land, lighting and heating. Innovations would spread through culture: the social transmission of technical knowledge via imitation, the exchange of artifacts, and language. This led to the accumulation of ever more sophisticated knowledge, where technological advances could build on each other, thus leading to accelerating progress. Here are some of the important stages in that development.

About 10 000 years ago, the domestication of plants and animals initiated agriculture—an essential technology for producing food. Next to the selective breeding of species to become ever more productive and easier to handle, some of the innovations that increased food production were plowing to clear land for sowing seeds, irrigation to provide water for the plants, fences to keep wild animals out and domesticated animals in, and fertilization to increase the harvest. This was accompanied by the development of techniques and utensils for the storage and processing of food, such as pottery, mills, salting and fermentation.

The development of architectural knowledge allowed people to build increasingly large, stable and comfortable accommodation and infrastructure, from huts and walls to standing stones, pyramids, palaces and cathedrals. Architecture relied not only on practical experience, but increasingly on the application of mathematics to the design and engineering of robust, geometrical structures. The development of writing and later the printing press greatly facilitated the preservation and dissemination of such knowledge, thus inaugurating first the scientific revolution (around the year 1600), and then the industrial revolution (around 1800).

The latter enabled the systematic exploitation of energy sources for the transport, processing and manufacturing of goods. Instead of relying on human or animal power, steam engines could now do much of the work while getting their energy from coal. The internal combustion engine (around 1900) enabled the use of mineral oils as a source of energy, making engines much more compact, so that they could be used e.g. for cars or planes. This was shortly followed by the spread of electricity as a universal energy carrier that could power any machine or apparatus.

Electricity also enabled the information technologies of the telegraph and telephone. These allowed the immediate transmission of information across large distances via wires. When these telephone networks were coupled to computers, tapes, disks and other systems for the storage and processing of information, we saw the birth of the Internet as a universal medium for the communication of information (around 1980). Most recently, the use of electromagnetic waves for information transmission initiated wireless technologies, giving us constant access to all the information on the Internet via the smartphones we carry on our bodies.

Agricultural techniques are now being extended via biotechnology: the manipulation of biological organisms and processes at the molecular level. Chemistry and medicine had already created very effective tools to combat diseases, including vaccines, antibiotics and various other drugs. Molecular medicine promises the ability to design drugs that target specific receptors in the body, so as to combat problems with a minimum of side effects. Genetic modification of plants and animals is a technique to promote desired features in agriculture, such as larger crops, disease-resistant plants, or grains with added vitamins. Even human reproduction can now be made more effective via techniques such as in-vitro fertilization (IVF or "test-tube babies") and pre-implantation genetic diagnosis (PGD).

Anti-technology movements

These developments were not always received positively, and technological innovations were sometimes violently rejected. Luddism is the general attitude that technology is to be feared and its spread is to be prevented (Jones, 2013). The name refers to a certain Ned Ludd, an Englishman who in the early 19th century organized a movement to protest the introduction of machines that would replace workers. These Luddites sometimes entered factories to destroy their machines.


A more recent example of a Luddite is Theodore Kaczynski, also known as the Unabomber. Kaczynski, initially an American mathematics professor, became notorious as a lone-wolf terrorist. Over a period of 17 years he sent mail bombs to academics, business executives and others involved in the promotion of technology, killing three people and injuring many others. He was arrested and imprisoned in 1996, shortly after his manifesto had been published in major newspapers. This manifesto, which explained his actions, is a remarkably well-thought-out and detailed criticism of technology (Kaczynski, 1995, 2010). While extreme in its conclusions, it is certainly worth reading. We will discuss some of these criticisms in later chapters.

A softer form of protest against technology can be found in various movements that promote a lifestyle that is less dependent on industrial products, under the motto of "back to nature". This attitude is common in ecological circles. A more radical version of this philosophy is anarcho-primitivism. This is a political ideology that advocates a return to non-"civilized" ways of life, characteristic of hunter-gatherers. This would be achieved through the abolition of state institutions and the division of labor, deindustrialization, and the abandonment of technologies.

Mechanisms of technological evolution

Like biological evolution, and in fact most forms of the evolution of complex systems, technology evolves through the mechanism proposed by Darwin: variation and selection (Heylighen, 2014).

Variation is generated when someone either proposes a new idea on how to build a support system or makes some major or minor change in such a system. This change does not need to be consciously planned. Perhaps someone misinterpreted the instructions, or did not have all the materials at hand, and built the system with some component that is a little larger or smaller than expected, or used a different material than intended. What counts is whether the system achieves the desired objective. If the new variation somehow manages to achieve the objective in a better way, or to achieve a new valuable objective, the variation will tend to be retained, and further systems will be built that incorporate that variation. On the other hand, the older version of the system, which did not work as well, will eventually be abandoned. Still, in most cases the variation is worse, or at least not better, than the original. In this case, people will stick with the old version, and forget about the variation.

This is the aspect of selection: of all the different variations that are available at a given time, only the best ones will be reproduced and spread, while the less good ones will be abandoned. This is what evolutionary theorists call "survival of the fittest". But these best ones will undergo further variation, as endless numbers of people try either to consciously improve them or merely to experiment haphazardly with a version that is different here or there. Eventually, a version is discovered that works even better. That better one will again overtake the older one, being reproduced and distributed more widely, until none of the old ones are still being used. This is the basic dynamic behind on-going technological innovation.

Note that there is a difference between creation or discovery, on the one hand, and innovation, on the other hand. Suppose that a scientist or engineer has an original idea for creating some novel technology. Suppose that that scientist not only demonstrates in principle that the idea would work, but builds a fully functioning prototype. This certainly exemplifies deep creativity. However, the creative idea only becomes an innovation when the system is built, distributed and used by many people.

That requires not just technological or scientific skills, but entrepreneurial, managerial and economic capabilities to produce and market the technology. Still, even great entrepreneurs or multinational companies commonly fail to get their novel product adopted. On the other hand, a design can sometimes spread through the sheer luck that the prototype is encountered by people sufficiently enthusiastic about it to imitate it or tell others about it. In the end, a technology has to spread and be adopted by the public. That depends on a multitude of factors that often have little relation to how creative the idea was. We speak of an innovation only when the new invention has spread through the population, so that it effectively changes the way people behave.

Diffusion of innovations

The spread of an innovation through the population over time (Meade & Islam, 2006) typically has the shape of a sigmoid curve or s-curve. (It is so called because it looks a bit like a skewed letter S, which is called "sigma" in Greek.) Such a curve can be generated by a "logistic" mathematical function, which describes the growth of a population that is initially exponential, but then slows down until it reaches its maximum ("carrying capacity"). The maximum spread for a technology is achieved when it is adopted by everybody who could possibly have some use for it.
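In a common mathematical form (standard notation, not taken from the course text), the logistic function can be written as:

N(t) = K / (1 + e^(-r (t - t0)))

where N(t) is the number of adopters at time t, K is the maximum number of potential adopters (the carrying capacity), r is the growth rate, and t0 is the inflection point at which half of the maximum has been reached. While N(t) is still small, the curve grows roughly exponentially; as it approaches K, growth levels off, producing the characteristic s-shape.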

Initially, just a few people are willing to try out a new, as yet unproven technology, which is probably still in an experimental phase, with plenty of bugs, while being expensive because it is not yet mass-produced. These "innovators" may be motivated because they have a particular need for this kind of system, because they are curious to try out new things, or perhaps just because they want to impress others with their expensive gadgets. If the experience of these innovators is positive, their example will convince a somewhat larger number of "early adopters" to try out the new system as well. These in turn convince further people. The larger the number of people using the technology, the larger the number of their friends, colleagues and acquaintances who see it being put to use, and therefore the larger the number of people likely to adopt the system themselves. Thus, the initial growth in the number of users is exponential. But as the number of people who are not yet using the system diminishes, the addition of new converts must slow down, until there are just a few "laggards" left who are intrinsically slow to accept such new developments. After this, the number of users stabilizes at its maximum value.

Thus, we see that technology evolves when some new variation is generated that is in some way better adapted to the desires of the population, after which it is adopted by a growing number of users, until no one is left that still sticks to the older, worse version. But what precisely is it that makes a technology “better”?

Utility and value

Most generally, we can say that a technology is better when it produces more value for its users. In philosophy and economics, the notion of value has been formulated more technically as "utility", which can be defined as follows:

The utility of a good or service is the degree to which it satisfies an existing desire.

Note that utility is not intrinsic to the good itself, but relative to its use. For example, a glass of tap water has great utility for someone dying of thirst, but zero utility for someone who has just drunk a big bottle of water. Utility can also be seen as the degree to which something produces satisfaction, happiness or pleasure. We will discuss this further in the section on utilitarian ethics. In economics, something is assumed to have utility when there is demand for it, i.e. when people are willing to pay to receive it.

Utility or value is to an important degree subjective: it depends on the situation, the individual, the culture and the social context. For example, people may desire a new technology because it is esthetically more pleasing than the older version. But we all know that what is considered "beautiful" depends on fashion, and that fashions change. Marketing is an attempt to manipulate such cultural values, by creating a desire or demand for a certain kind of product that people previously were not interested in. If enough people start to think that they should acquire some gadget, then it effectively becomes valuable in economic terms. But eventually the fashion may change, or people get bored with their new gadget, and then it loses its value. That opens the way for some further "innovation" to take over.

Such fluctuations in demand that depend on fashion or marketing are very difficult to predict. Moreover, they do not seem to have much effect on the long-term evolution of technology. Let us therefore consider some of the more "objective" aspects of value, i.e. those that are likely to prevail over the longer term.

Effectiveness and efficiency

We defined a technological system as one that helps people achieve value. While the value itself may be subjective, the success with which the system achieves it can in general be established objectively. However, there are two distinct measures of success:

1. Effectiveness is defined as the degree to which the desired goal is reached. For example, a vaccine is 95% effective if 95% of the people who receive it are protected against the virus that the vaccine is intended to combat, and 5% are not. Effectiveness is perhaps the most fundamental criterion by which technologies are selected: technologies that are not effective will be the first ones to be abandoned, and a more effective technology will generally be preferred over a less effective one.

2. Efficiency is defined as the ratio of the desired output of the system to its required input. As a formula:

Efficiency = useful output / required input

Efficiency is not the same as effectiveness. Indeed, two technologies can be equally effective, but one much more efficient than the other. Compare for example an old-fashioned incandescent light bulb of 60 Watts with a modern LED lamp of 6 Watts. Both lamps produce approximately the same amount of light. This is their desired or useful output, which they effectively achieve. However, the LED lamp consumes 10 times less electricity (required input). It is therefore 10 times more efficient.

The required input can be seen as the cost of operating the technology. It represents the resources that you need to provide to get the system to function. Therefore, it is better to reduce this cost by increasing efficiency. This can be understood as either:

• minimizing the consumption of resources for a given output, or

• maximizing the desired output for a given consumption of resources

Note that not all output is desired. In the case of the lamps, most of the electrical energy consumed by the incandescent light bulb is converted to heat, making the lamp hot to the touch. Because of physical conservation laws, the total amount of matter and energy in the output of a system must be the same as in the input. Matter and energy cannot disappear or appear out of nothing. But they can be processed into a form that has either more or less utility for the user. In the case of the lamp, the electrical energy in the input is converted into useful light, and useless—or even counterproductive—heat. Increasing efficiency therefore also means reducing the proportion of undesired output.
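As a worked illustration, using only the wattages mentioned above and writing L for the (roughly identical) amount of light both lamps produce:

Efficiency (incandescent) = L / 60 W
Efficiency (LED) = L / 6 W
Ratio = (L / 6) / (L / 60) = 10

By conservation of energy, the extra 54 W drawn by the incandescent bulb does not disappear: it leaves the lamp as waste heat, which is precisely the undesired output that a more efficient design reduces.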

Next to effectiveness, efficiency is another fundamental selection criterion for technological evolution: when two technologies with the same effectiveness compete, in general the more efficient one will win, because it requires fewer resources to achieve its objectives. It thus costs less, while achieving as much. Therefore, over the long term any innovation that increases efficiency will be preferentially retained. This produces the very general, long-term trend of ephemeralization.

Ephemeralization: doing more with less

Ephemeralization is a little-known, but very illuminating, concept that helps us to understand technological evolution. It was proposed by the visionary architect-engineer Buckminster Fuller, who is best known as the inventor of the geodesic dome. He defined ephemeralization as: doing ever more with ever less through technological innovation (Fuller, 2019; Heylighen, 2008).

For example, suppose that your objective is to build a very tall structure. With the technology available to the ancient Egyptians, the only way to do that is to assemble massive blocks of stone in the shape of a pyramid, i.e. a structure with a very broad, solid base that provides stable support for the increasingly narrow levels that are added on top. Such a pyramid requires a gigantic amount of resources: stone, manpower to cut, carry and lift the stones, and decades of time to build.

Fast-forward to the end of the 19th century, when the Eiffel Tower was built in Paris. Thanks to the technology of iron beams riveted together in a robust geometrical shape, in a few years a structure arose that is more than twice as tall as the Great Pyramid of Giza. Yet the Eiffel Tower contains only a fraction of the mass of the pyramid: about ten thousand tonnes for the tower vs. more than six million tonnes for the pyramid. While both are effective in terms of reaching an impressive height, the construction of the Eiffel Tower is orders of magnitude more efficient: a greater useful output (height) for a much, much smaller input of materials, energy, time and effort.

For a short video illustrating ephemeralization, see:

https://www.youtube.com/watch?v=X8lqnO7aYe0

More generally, ephemeralization is the tendency for technological systems to become ever more efficient. For a given useful output, newer technologies typically require:

• less material or resources (as components of the system, or to be consumed during its operation)

• less energy (to produce or to function)

• less time (to build or to achieve their aims)

• less human effort (to build or to use)

A practical illustration is the speed of travel. In the 16th century, Magellan's expedition needed about three years to sail around the globe. In the 19th century, a similar journey around the world would have taken 80 days, according to the calculations of Jules Verne. Today, planes can carry you around the globe in less than 48 hours.

We see similar trends in the increased productivity of agriculture. Thanks to better crops, irrigation and fertilization, better harvesting methods, less waste through pests or diseases, faster transport, refrigeration and other techniques, the food that reaches your plate needs only a fraction of the agricultural land for its production compared to a century ago. We also witness an on-going reduction in the fuel consumption for cars, and the energy consumption of lamps, apparatuses and household appliances.

Perhaps the most visible effect of ephemeralization is the miniaturization of equipment, which requires an ever-smaller volume, mass, and amount of material for the same functionality. That also reduces the time needed for signals to travel across these volumes. The progress has been most spectacular for electronics, and their use for information processing, storage and communication. Here the growth in efficiency tends to be exponential, meaning that it doubles after some fixed period of N years. That means that it becomes four times as large after 2N years, about a thousand times as large after 10N years, and about a million times as large after 20N years.
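Expressed as a formula (a standard calculation, not one given in the text): after t years with a doubling time of N years, the growth factor is 2^(t/N). Hence after 10N years it is 2^10 = 1,024, close to a thousand, and after 20N years 2^20 = 1,048,576, close to a million.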


The famous "law of Moore" (Moore's law) is the observation that, through on-going miniaturization, the number of transistors on a computer chip has been doubling about every 2 years over the past half century. As a result, computing power, measured as the number of calculations per second a computer can perform, has been growing a little more quickly: about a million times over the past thirty years (Kurzweil, 2005).

The memory storage capacity of chips and disks has been increasing even more quickly: doubling about every year. Information transmission too has become much more efficient since the early days of letters transported by couriers on horseback, the telegraph and the telephone. The first modems in the 1960s used existing telephone lines to transmit a meager 300 bits of data per second, about a million times slower than present high-speed Internet connections. Here the doubling time seems to be about 1.5 to 2 years.


Reduction of friction

In physics, friction is defined as the contact force that hinders and slows down a movement—for example when a heavy weight is pulled across a rough surface. Friction thus dissipates (diffuses, loses or wastes) the energy that sustains the movement, until no energy is left, and the movement comes to a standstill. More generally, the famous 2nd law of thermodynamics says that all physical processes are accompanied by a dissipation of energy (in the form of disorder or entropy). It implies that we will never be able to build a perpetuum mobile—an imaginary machine that would run forever without input of energy.

Still, the 2nd law does not specify how much energy is dissipated by a process. That makes it possible to design processes that are more efficient, because they dissipate their resources more slowly. Ephemeralization can therefore be understood as an on-going reduction of dissipation, losses, or friction in technologically supported processes (Heylighen, 2008).

Imagine a smooth, perfectly round billiard ball on a perfectly flat, smooth table. A single hit with the cue will make the ball move fast and keep it moving for quite a while until friction has slowed it down so much that it stops. Producing such balls and tables requires a sophisticated technology. Now imagine an irregular ball handmade by pressing clay, which rolls over a surface of soft soil. Obviously, this second ball, which does not require any advanced technology, will undergo so much friction that it will come to a halt almost immediately, because the energy of the push has been dissipated in the soil.

We can generalize the notions of friction and dissipation to other resources beyond energy. Consider food production. The initial inputs of the process are land, water, fertilizer and sunlight, i.e. the resources necessary to grow crops. The final output is the food consumed by people. In between there are several processing and transport stages, each accompanied by a loss of resources. For example, most of the water used for irrigation will be lost by evaporation and diffusion in the soil before it even reaches the plants. Of all the plant tissue produced, a large part will be lost because it is eaten by pests, succumbs to diseases or drought, rots away during humid episodes, etc. More will be lost because of damage during harvesting and transport. Further losses occur during storage because of decay, rodents, etc. Processing the fruits or leaves to make them tastier or edible, such as grinding, cooking, or mixing with other ingredients, will only lead to further losses. What the consumer finally eats constitutes only a tiny fraction of the resources that went into the process.

As we noted above, ephemeralization has led to a spectacular reduction in these losses. In primitive agricultural systems, such as are still being used in many African countries, the output per unit of area or of water is minimal, and in bad years hardly any produce will reach the population, leading to widespread famines. Modern techniques are much more efficient. For example, in modern greenhouses water with just the right amount of fertilizer is brought via tubes directly to the root of the plant, minimizing evaporation and dissipation. The gain compared to traditional irrigation systems, where water runs in ditches between the fields, can be a hundredfold. The plants are protected against bad weather and pests, and kept at exactly the right temperature and with the right amount of illumination, so that they can put all their energy into growing fruits or leaves. Similar gains are achieved during all stages of the collection, storage and distribution process, virtually eliminating losses due to pests, decay, oxidation, etc., with the help of refrigeration, pasteurization, airtight enclosures, various preserving agents, etc. That is why the high-tech greenhouses covering a small percentage of the land in the small country of Holland, with its poor climate, can provide vegetables for a major part of Europe. In fact, thanks to its very efficient technology, Holland is the second largest exporter of food in the world (after the much larger USA)!

As a final example, consider the reduction of "friction" in information transmission. Imagine giving your neighbor a detailed account of something that happened in your neighborhood, such as an accident or a police arrest. Your neighbor tells the story to his aunt, who passes it on to her friend, who tells it to her hairdresser, and so on. It is clear that after a few such oral, person-to-person transmissions, very few details of the original account will have been conserved, because of forgetting, omissions, simplifications, etc. In other words, the original information will have "dissipated". In the end, the story is likely to be forgotten and to stop spreading.

A simple way to reduce such “friction” is to write down the account and post it on your social media page, where your neighbor and others can see it. The neighbor can then simply repost the original message to his aunt, who forwards it to her friend, and so on. Digital technology ensures that the received text is exactly the same as the one that was sent. Unless someone actively manipulates the text, no information will be lost, and the transmission chain will extend for as long as people are willing to repost the message.
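
To make the contrast concrete, here is a toy sketch in Python (the 30% loss per retelling is an invented number, not a claim about actual memory research) comparing lossy oral transmission with exact digital copying:

```python
# Toy sketch: oral retelling loses a fraction of the details at every hop,
# while digital reposting copies the message bit-for-bit.
# The 30% loss per retelling is invented purely for illustration.

message = "A delivery van hit a lamppost on the corner at noon."  # hypothetical story

details_remaining = 100.0
for hop in range(1, 6):
    details_remaining *= 0.7          # oral chain: ~30% of the details lost per retelling
    digital_copy = message            # digital chain: an exact copy at every hop
    print(f"hop {hop}: ~{details_remaining:.0f}% of details survive orally; "
          f"digital copy identical: {digital_copy == message}")
```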

Extension of cause and effect chains

Reduction of friction extends the chains linking causes and their effects. Imagine a straight row of smooth billiard balls, each about 10 cm away from the previous one. Hitting the first ball makes it collide with the second one. The second one in turn starts to move until it hits the third one, which moves towards the fourth one. This “chain reaction” of balls bumping into balls continues until the last ball has slowed down so much because of friction that it stops before it has reached the next one in the row. That is when the causal chain is interrupted. Now, imagine performing the same trick with a row of soft clay balls on an irregular soil. Here the first ball will hardly manage to reach the second one, while the third one will never even start to move.

This is similar to the example of the story being transmitted either orally or digitally. The “low-friction” digital medium allows the story to propagate much farther, reaching more people, while requiring a shorter time to travel. This explains why technology-driven ephemeralization makes events in the world more connected: consequences spread much farther and faster, so that everything potentially interacts with everything.

One implication may be called the “real-time society”. We live in a situation where many of our desires can be fulfilled almost instantly. For example, suppose that you feel hungry. You just need to order a pizza on your smartphone, and you will get it delivered almost instantly. In the old days, you would have had to go collect the different ingredients and prepare the food yourself, an operation that might have taken the whole day. With communication technologies, desires can be satisfied even more directly: one click on a button and you can have a conversation with your friend living on another continent.

A concomitant effect has been called the “death of distance” or the “end of geography”. Physical distance becomes increasingly irrelevant. Using jet flight, we can travel almost anywhere within a day. Via the Internet, we can communicate or collaborate with anyone anywhere on earth. This is the basic force driving globalization: markets, cultures, food chains, patterns of migration, partnerships, diseases, ... nowadays extend across the whole planet. Unfortunately, that also has serious negative side effects.

Dangers of reduced friction

Reduction of friction makes all kinds of movements easier. However, that means that it can facilitate desired as well as undesired processes. By accelerating changes, it makes it easier for situations to get out of control. For example, ice is a surface with much lower friction than soil. Therefore, skating requires less energy than walking: you can move faster and longer on skates for the same effort. On the other hand, the risk of falling is also much greater on ice than on soil: once you start slipping, there is very little friction to hold you back.

The same applies to technology-supported processes. A problem that appears in one part of a low-friction system can propagate very quickly to other parts. Thus, problems may spread more rapidly than they can be contained, making us lose control. For example, as illustrated by the coronavirus, an infectious disease that appears in one city can become a pandemic within a few months because of the easy traveling of infected people. Problems such as infections do not just move from one part to another, they spread, because one part typically affects several parts. For example, one ill person can infect several more people, who in turn infect several more people. The same applies to information that propagates, such as computer viruses, rumors, or false news. The faster the transmission, the more quickly dangerous information spreads.

These are examples of a vicious cycle or chain reaction, i.e. a positive feedback (self-amplification). The result is an exponential, or “explosive” growth in the size of the problem. For example, if one infected person typically infects two other people within a week, then after ten weeks there will be about a thousand infections, and after twenty weeks a million.
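
The arithmetic behind this example can be checked with a few lines of Python; the weekly doubling is the text's simplified assumption, not an epidemiological model:

```python
# Minimal sketch of exponential growth: the number of infections doubles each week,
# following the simplified assumption in the text above.
infections = 1
for week in range(1, 21):
    infections *= 2
    if week in (10, 20):
        print(f"after {week} weeks: about {infections:,} infections")
# after 10 weeks: about 1,024 infections
# after 20 weeks: about 1,048,576 infections
```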

Cascading failures

A related type of self-amplifying problem is known in the theory of technological systems as a cascading failure (Dueñas-Osorio & Vemuru, 2009; Heylighen, 2015). In a network of connected systems, the breakdown or failure of one system may lead to the failure of the systems that are directly dependent on it. Their failure then leads to the failure of those systems that are dependent on them, with the result that ever more systems fail, until the whole network may collapse.

An example is an electricity blackout in which the power grid shuts down across a large region that can cover different countries. When there is a lot of demand for power, it can happen that one of the power lines transporting the electricity has to shut down because of overload. The electrical current is then distributed over other lines. But this increases the chance that these too would get overloaded and have to shut down as well, until the whole network is inactivated. This is again the same positive feedback: the more lines are shut down, the heavier the additional load on the remaining lines, and therefore the larger the number of those that will have to shut down as well.
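
A toy simulation (in Python, with invented line capacities and a naive equal-sharing rule, not a real grid model) shows how one overloaded line can bring down the rest:

```python
# Toy sketch of a cascading failure: when a line trips, its load is redistributed
# over the remaining lines, which may then exceed their own capacity in turn.
# All numbers and the equal-sharing rule are invented for illustration.

def cascade(initial_loads, capacity):
    active = {i: load for i, load in enumerate(initial_loads)}
    while True:
        overloaded = [i for i, load in active.items() if load > capacity]
        if not overloaded:
            return active                      # the cascade has stopped
        for i in overloaded:
            excess = active.pop(i)             # line i trips
            if not active:
                return {}                      # total blackout
            share = excess / len(active)       # its load shifts to the survivors
            for j in active:
                active[j] += share

surviving = cascade(initial_loads=[105, 80, 85, 95], capacity=100)
print(f"{len(surviving)} of 4 lines still running")   # here: 0, the whole grid is down
```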

Another example is a collapse of the stock exchange. Because of speculation, the price of stocks is vulnerable to positive feedbacks: buying triggers more buying (causing a “boom”), while selling triggers more selling (causing a “bust”). Thus, the monetary value of a stock, or of the whole market, can fluctuate wildly. Ephemeralization, e.g. through computer-controlled buying or selling, accelerates the process. Thanks to the Internet, it has become much easier to move huge amounts of money from one stock to another, sometimes within milliseconds. The “Black Monday” stock market crash happened on October 19, 1987, in spite of the fact that the overall economy was doing quite well. It is thought to have been precipitated by computers that were programmed to sell stocks whenever the price started to go down. But that selling made the price go down even faster, thus triggering even more selling, until the whole market collapsed.
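
A toy price-feedback loop (Python, with an invented selling rule; not a model of the actual 1987 trading programs) illustrates how automated selling can accelerate its own trigger:

```python
# Toy sketch of self-reinforcing selling: the further the price has fallen,
# the more sell orders the automated rule issues, pushing the price down faster.
# The rule and numbers are invented purely for illustration.

price = 100.0
for step in range(1, 8):
    fall_so_far = (100.0 - price) / 100.0      # fraction the price has dropped so far
    selling_pressure = 1 + 10 * fall_so_far    # more fall -> more automated selling
    price -= selling_pressure
    print(f"step {step}: price {price:.1f}")   # the decline accelerates each step
```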

An even worse example of a cascading failure was the 2008 financial crisis. The failure of one bank, which was not able to pay back its debts, threatened the failure of other banks, which were relying in part on the money owed by the first bank to pay back their debts. The banks were so strongly dependent on each other that this would have led to a collapse of the global financial system. This mutual dependency was created by so-called “financial technology” using extremely complex mathematical models that were trying to predict how the price of one thing would depend on the price of other things, thus offsetting potential losses in one domain by likely gains in another domain. But these models did not take into account that a sufficiently large failure would become amplified by a positive feedback so that there would be losses in all domains. A total meltdown of the global financial system was only avoided by a massive injection of money into the banks by governments worldwide. We are all still paying the economic price for that intervention.

Accelerating change

Ephemeralization not only accelerates production, travel and communication, it accelerates technological evolution itself. Indeed, faster and more efficient processing of matter and information makes further innovation easier. Because of ephemeralization, ever more money, energy, materials, components, information, knowledge, collaborators, processors, memories, and support systems in general become available for the research and development of new technologies. New technological systems can be assembled more easily from an ever-larger array of available components. Moreover, innovators can draw on the ever-growing knowledge base of all scientific publications available through the Internet. Thus, technology feeds on itself, in a virtuous cycle (positive feedback) of more innovation producing more innovation.

We saw that the spreading of a technological innovation is characterized by a sigmoid growth curve: initially a very fast, exponential growth, which eventually slows down because of lack of resources for further growth. For example, the spread of a technology will run out of steam as soon as the majority of potential users have already adopted it. However, sooner or later an innovation will appear that is sufficiently more powerful so that users of the present technology will be ready to acquire that one as well. The spread of the new innovation then creates a new sigmoid curve on top of the previous one. Thus, ongoing technological evolution appears like a staircase of subsequent s-curves or waves of innovation (see picture).
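
Such a sigmoid curve can be sketched with a logistic function; the parameters below are invented for illustration, and the two “waves” merely hint at how successive innovations stack on top of each other:

```python
import math

def adoption(t, midpoint, steepness, ceiling=100.0):
    """Logistic (S-shaped) adoption curve: slow start, rapid growth, saturation.
    All parameters are illustrative, not fitted to real adoption data."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Two hypothetical waves: a slowly spreading older technology and a faster newer one.
for year in range(0, 61, 10):
    older = adoption(year, midpoint=20, steepness=0.2)
    newer = adoption(year, midpoint=45, steepness=0.6)
    print(f"year {year:2d}: older technology {older:5.1f}%  newer technology {newer:5.1f}%")
```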


However, the interval between waves becomes shorter: because of ephemeralization, the time for the newer technology to spread and reach the whole population is typically shorter than the one for the older technology. For example, it took nearly a century for the traditional telephone to be adopted by the whole population, while it took only a decade or two for the same to happen with the cell phone, and probably even less with the smartphone (see picture).

There is a limit to ephemeralization, though. We saw that material systems are limited by conservation laws: you cannot create matter or energy out of nothing. For everything produced as output, an equivalent amount of matter must have been consumed as input. For example, to make steel you need iron ore. By reducing waste, you may reduce the amount of ore needed as input for a given output of steel. However, you can never produce a ton of steel with less than a ton of ore. Therefore, physical transformations are limited in the amount of ephemeralization they can undergo.

On the other hand, there is no such conservation law for information. Miniaturization allows you to store or process ever more information for a given amount of material. Therefore, informational capabilities can continue to grow exponentially: processing power, memory, bandwidth, resolution, ... Moreover, the exponential growth in one thing can accelerate the exponential growth in another thing. Therefore, efficiency or capability can grow even faster than exponentially. This has led a number of thinkers, such as Ray Kurzweil, to suggest that in the near future technological progress may reach a near infinite speed.

The technological singularity

In mathematics, a singularity is defined as a point in the curve of a function where continuity breaks down, i.e. where the speed of change in the value of the function becomes infinite. This means that the curve cannot be extrapolated beyond that singular point. If the growth of technological capabilities were characterized by such a singularity, this would point to an event so momentous that we cannot in any way imagine what would come beyond. The singularity would be a transition to a radically new regime for the way society, technology and mind function (Eden et al., 2013; Heylighen, 2015; Kurzweil, 2005).
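
As a hedged mathematical illustration (not Kurzweil's own model), growth that is faster than exponential can reach such a singular point in finite time. If the growth rate of a capability x increases with the square of x, then

$$\frac{dx}{dt} = k\,x^{2} \;\;\Rightarrow\;\; x(t) = \frac{x_{0}}{1 - k\,x_{0}\,t},$$

which diverges as t approaches the finite time t* = 1/(k x0); beyond that point the curve simply cannot be extrapolated.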

Based on his extrapolations of different curves of technological growth, Ray Kurzweil has estimated that such a singularity would happen about the year 2045. However, it must be noted that other authors before and after Kurzweil have made different estimates, the earliest of which (2010) has already passed. So, it is clear that these dates should not be taken too literally…

The most common interpretation of the technological singularity is what I. J. Good has called the “intelligence explosion”. The idea is that as information processing capabilities grow, computers will not only surpass humans in intelligence, they will become so smart that they can reprogram themselves to become even smarter: AI (artificial intelligence) would become self-improving. The reasoning is that a truly intelligent program should be able to write a program that would be even more intelligent. Thus, computer intelligence increases computer intelligence, in an explosive, positive feedback cycle. This explosion would produce an intelligence so much beyond human imagination that it is impossible to say what it would do.

However, this interpretation overlooks the fact that intelligence demands more than information processing: it requires an interaction with the outside world via sensors and effectors, so that information acquires a concrete meaning, and solutions can be tested out in reality. That requires at least a robotic “embodiment” of the intelligent program. But such physical realizations are much more limited and difficult to build than faster processors.

A more likely mechanism for self-improving intelligence that we will discuss later, the global brain, would include both humans and technological systems that support each other. To get an idea of a conceivable “post-singularity” future, let us here just assume that ephemeralization will continue without practical limit, and try to imagine what the resulting “superefficient” technology would be able to do.

Technological capabilities extrapolated to the limit

Innovations are likely to be developed and adopted if they are more useful, i.e. more effective and efficient in solving our problems or achieving our desires, whatever these desires may be. In general, we may state that a technology will be more useful if it satisfies the following criteria:

1. widely accessible, so that you can use it whenever you need it

2. intelligent and knowledgeable, so that it can deal with more complex and diverse problems

3. powerful and efficient in realizing its solutions

Accelerating innovations will boost these capabilities. Following singularity thinking, let us extrapolate these capabilities as far as we can. In reality, there are of course physical limitations, such as the law of energy conservation, the second law of thermodynamics, and certain theoretical limits on computability. Yet, these theoretical limits are so far removed from what we presently desire that we may as well ignore them for the time being. We then come to the limit of infinite capability. Interestingly, the three above usefulness criteria then turn into the “divine” attributes that characterize the God of monotheism (Heylighen, 2015):

1. omnipresence: available at any place and at any moment

2. omniscience: able to answer any question or solve any problem

3. omnipotence: able to produce any effect or achieve any goal

This suggests that a post-singularity technology would provide humanity with God-like capabilities! The idea is not as far-fetched as it may appear if we consider what is already available right now in terms of these three criteria.

Omnipresence means being present always and everywhere. In other words, technological support systems would be available at any location at any moment. This is actually already achieved to an important degree via the smartphones we carry with us, the global Internet, and the nearly ubiquitous wireless connections that connect the two. These allow us to call up (human or technological) support anywhere we are. The next step being developed is the “Internet of Things”. This is a system of wireless communication protocols that would be built into virtually any artificial object, including vehicles, tools and apparatuses. Thus, these artifacts would be able to communicate with each other and their human users, so as to coordinate their activity. They would also be controllable via remote interfaces, so that we can make them do things for us wherever they are.

Omniscience means knowing everything, or more practically, being able to answer any question or solve any problem. The World-Wide Web with its search engines already provides us with access to about all the knowledge ever collected by humanity. Initiatives such as Wikipedia and the Semantic Web are moreover organizing and synthesizing that knowledge, so that it becomes more immediately usable by both people and machines. That knowledge is also being distilled into freely available online courses and educational environments, which efficiently teach people anything they want to learn. Increasingly ubiquitous sensors, including cameras and satellites, collect information about everything happening on Earth. Increasingly powerful methods of machine learning and data mining extract new knowledge from these “Big Data”. Finally, artificial intelligence systems learn to use that knowledge to solve increasingly complex problems or answer complex questions.

Omnipotence means being all-powerful, or, in the more practical sense, being able to produce any effect or achieve any goal, and this with maximum efficiency or minimal waste. A first step in that direction is the technology of 3-D printers. These can in principle fabricate any object, given a blueprint or design. Such designs become increasingly available for free download over the Internet. Drones and other autonomous vehicles and robots become increasingly efficient in delivering or manipulating objects. The burgeoning research domain of nanotechnology promises that we will soon be able to build whatever we want at the microscopic level of cells and molecules. These developments lead futurists to envisage an era of abundance, also known as “the end of scarcity”, in which anything needed can be produced on the spot for a negligible cost, using materials that are made from inexhaustible resources such as sunlight, air, water and sand (Diamandis & Kotler, 2012; Drexler, 2013; Rifkin, 2014).

Return to Eden: a techno-utopia

Let us assume that these god-like capabilities would be used to optimally satisfy all of humanity’s needs. (We will later examine to what extent technological evolution is indeed likely to produce the greatest happiness for the greatest number, a development that is much less obvious than maximizing efficiency). That would produce an ultimate positive scenario, which in one of my papers I have called “Return to Eden” (Heylighen, 2015).

In this scenario, technological innovation would solve all global problems. For example, technologies for renewable energy production and carbon sequestration would solve the problem of climate change. Ultra-efficient technologies for food production would solve all problems of hunger, while restoring most land presently used for agriculture and industry back to natural ecosystems. Moreover, abundance would eliminate poverty, competition and conflict over scarce resources, while intelligent systems for communication, cooperation and education would lead to a general improvement in education levels, peace, sense of community and freedom.

In such a utopian society, people would no longer have to work, since all the necessary work would be done by machines. However, they would still be able to do whatever they enjoy, such as being creative, caring for people, animals and nature, gathering experience and wisdom, and finally transcending human limitations (an idea we will further discuss in the section on transhumanism). Thus, everyone on Earth would be able to live a long and happy, deeply fulfilling life.

Such a scenario may seem naïve given the many problems of our contemporary society, many of which seem to derive from technological developments. To be able to develop a more realistic, long-term perspective on the societal implications of technology, we will now review the most common problems associated with technology, and see whether they can be effectively tackled.

Dangers and negative side effects of technology

Intrinsically, the dynamics driving technological innovations are positive: increasing value, effectiveness and efficiency. On the other hand, many observers, both optimistic and pessimistic, have noted that technologies often have negative effects. Some of these are difficult-to-avoid side effects of desired positive effects. In other cases, things did not develop as intended and simply went wrong, or they created the risk of something going catastrophically wrong.

We will now try to understand to what extent these negative consequences, actual and potential, are intrinsic to technology itself or merely temporary shortcomings to be redressed by further advances. We will do so by systematically reviewing the main difficulties, issues and criticisms raised by technology, while each time proposing some recommendations that may help us to tackle these problems.

Technology effects tend to be unpredictable

A first general problem with any complex process, such as the interaction between a new technology and society, is that its evolution cannot really be predicted. Therefore, we just have to wait and see what it will lead to. Both positive and negative effects of technology take time to discover and to learn how to manage.

While innovators usually have a clear idea of what they want to achieve, they often misjudge the actual consequences. That is inevitable because, being human, their knowledge and capability to reason are intrinsically limited, a property known in economics as “bounded rationality”. Moreover, people tend to be led by emotional reactions, wishful thinking, prejudices, and fears that color their perception of the situation (Heylighen, 2015).

Next to human limitations, another reason for unpredictability is that the network of interactions between technologies and people is extremely complex. The theory of complex systems has shown that such systems exhibit non-linearities, i.e. consequences that can be much larger or much smaller than what caused them. That can produce chaotic, self-amplifying and uncontrollable effects, as we saw in the case of cascading failures (Heylighen, 2014).

The resulting misestimates do not necessarily have negative consequences: often a technology brings unintended benefits. A common phenomenon in (technological) evolution is exaptation (Bonifati, 2013; Heylighen, 2014). This means that some innovation eventually gets used for a function different from the one intended. An example is the technology used in microwave ovens. This application was discovered by accident. An engineer observed that the microwave radiation produced by a magnetron, which is a component of a radar used to detect planes, melted the piece of chocolate he carried in his pocket. That gave him the idea to use a magnetron to heat food, an application that by now has become much more widespread than the one of detecting planes.

Another example of a technology that became much more widespread than intended because of exaptation is the phonograph. This was the first system capable of recording and reproducing sound. Its inventor, Edison, expected it to be used for applications such as recording speeches and the last words of the dying. But he did not intend it to reproduce music, which he considered a frivolous use. Yet, that is exactly what the phonograph became most popular for.

In other cases, the unintended consequences are negative. Here are some examples of such unwanted side effects:

• While cars are very useful to transport people and goods, their popularity has led to severe problems such as traffic jams, air pollution, and an endless stream of lethal accidents.

• While social media help people to express themselves and keep informed about what is going on with friends and family, unexpectedly negative side effects include addiction to bits of news and “likes”, the spread of false news, the creation of “echo chambers” in which people reinforce each other’s opinions to such a degree that they become dangerously detached from reality, and people getting depressed because all the positive experiences posted by their friends make their own life seem miserable in comparison.

• Covering land with pavement or asphalt makes it cleaner and easier to walk or drive on. However, it also makes it impossible for rainwater to be absorbed by the soil, resulting in more flooding, and fewer nutrients, less space and less water available for plants and animals, thus endangering ecosystems.

While the unpredictability of consequences is inevitable, the overall effect of innovation tends to be mostly positive. Moreover, negative effects may be mitigated by following some simple recommendations:

Recommendations:

• try to imagine which non-obvious consequences may result before you implement an innovation

• closely monitor all effects of novel technologies, especially unexpected effects

• be ready to restrict or control technologies whose effects turn out to be more negative than positive

Technologies can be used for immoral purposes

We started by assuming that the intentions of the developers and users of technology are good: creating more value for themselves and others. However, that is not necessarily the case: reaching their personal objectives may damage those of others.

By amplifying human capabilities for action, technologies can also amplify the harm these actions do to others. For example, technologies can be used to support war, suppression, spying, crime, or terrorism. The more powerful the technology, the greater the potential harm. Extreme cases are so-called “weapons of mass destruction”: nuclear bombs, chemical warfare, and biological weapons.

But a technology does not need to be intrinsically destructive to be abused for destructive purposes. For example, the Internet is used for a variety of criminal purposes, such as selling drugs or guns, breaking into bank accounts, blackmailing people, organizing terror, or spreading hate. Drones are very useful for aerial photography or delivering packages, but can also be used for surveillance or dropping bombs.

From an instrumentalist perspective, such immoral uses are not a fault of the technology. In this view, the technology is merely a tool, and it is the user of the tool that is to blame for bad use. The idea is that you cannot blame hammers for the fact that some people have used hammers to bash someone’s head in. A similar argument used in the debate about gun control is that “guns do not kill people, people kill people”. However, we all know that this argument overlooks the fact that most people killed by guns would not have been killed if the killer did not have access to a gun, but had to use some less effective weapon, such as a hammer or a kitchen knife. The availability of tools that make it easy to do something—such as killing another person—simply increases the probability that people will do it, whether intentionally or by accident. Therefore, some technologies intrinsically raise the risk of criminal, violent or immoral use.

Depending on the tool, such use is merely a minor side effect (e.g. hammers, computers) or the main objective (e.g. chemical weapons, land mines). In the latter case, common ethical reasoning would recommend banning the technology altogether, not because of its technological features, but because of the underlying objective. In the case where both positive and negative uses exist, we may rely on the common-sense recommendations below.

On the positive side, technology can also promote moral behavior. For example, good locks, burglar alarms and unbreakable glass will reduce the occasion and temptation to steal from someone’s home or car. Electronic payment systems and media campaigns, on the other hand, provide occasion and stimulation to donate to charity. Most fundamentally, the economic, social and educational progress of society facilitated by technology also produces moral progress: people are less inclined to fight with or exploit others, and more inclined to help them, if their own needs are better satisfied (Pinker, 2011).

Recommendations

• keep intrinsically dangerous technologies, such as nuclear reactors or engineered viruses, under tight security, so that they cannot spread widely

• promote awareness of immoral uses in order to phase out intrinsically destructive technologies, e.g. campaign to ban landmines or nuclear weapons

Technologies can make us lose control

The more we depend on a technological system, the greater the risk if something would go wrong with that system. This is particularly dangerous with systems that are centralized, because then if the one central system fails, everything else that is dependent on it also fails. As a general rule, decentralized or “distributed” technologies, such as the Internet, are safer: if some computer hub in between you and your correspondent is offline, your message will find an alternative route via different hubs and still reach your correspondent.

The dangers of our dependency on technological systems are nicely illustrated by the Y2K (Year 2000) bug that created a great fright just before the turn of the century (Quiggin, 2005). It was assumed that millions of computers using old software, in which years were denoted by two numbers (e.g. 99 for 1999), would not be able to deal with the change from 1999 to 2000, and crash as a result. Some even predicted that the resulting simultaneous interruption of millions of interdependent services, from banking to the scheduling of flights, would lead to a collapse of our technology-dependent civilization. This led thousands of programmers worldwide to search through billions of lines of code for pieces of old programs that needed to be replaced. While much software was updated, it seems clear that quite a few bugs must have remained. Yet, none of the expected catastrophes materialized: no planes fell out of the sky, no bank accounts lost all their money…

This illustrates that seemingly small problems can potentially create large-scale disruptions in interdependent systems, but that our socio-technological system is probably robust enough to cope with them. We already discussed the dangers of interdependent systems through the notion of cascading failures: problems that propagate through a low-friction network can spread so quickly that they may run out of control.

Another potential for large-scale disruption is a so-called Black Swan event (Taleb, 2010). This is an event that is so unlikely that most people would never consider its occurrence, but that still may happen on occasion. (It is named after the fact that black swans do exist, although they are so rare that most people assume that all swans are white.) An example would be the Earth being hit by an asteroid. Engineers designing a technological system normally build it so that it can withstand problems that fall within a wide range of expectations, such as an earthquake or a flood up to a certain magnitude. But occasionally a Black Swan event happens with a magnitude so great that it goes beyond the foreseen safety margins. An example is the tsunami wave that overcame the walls protecting the nuclear power plant at Fukushima in Japan, reaching the normally very well insulated nuclear reactors. This catastrophe resulted in the radioactive contamination of a large region around the plant.

A further factor that may lead to a loss of control is the complexity of the interactions between different (parts of) technological systems. We saw that the 2008 banking crisis was in part caused by the complexity of the financial models that banks used to spread their investments and speculate on changes in values of stocks. The resulting system was so opaque that no one had any idea of what would happen if some major problem occurred, with the result that the system was wholly unprepared and would have collapsed without drastic intervention.

Unforeseen interactions between components that are in themselves simple are a common problem in software design. In rare cases, the results of one procedure may set in motion another procedure to do something that was never intended. This may result in the whole system being blocked or getting stuck in an infinite loop. The impossibility, in principle, of predicting such effects is known in computer science as the halting problem. This means that you cannot in general predict whether a computer program will come to some definite result (and thus stop or halt) or continue to run forever.
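
The standard diagonal argument for this impossibility can be sketched in a few hypothetical lines of Python; the function `halts` below is an assumed oracle that cannot actually be implemented:

```python
# Hypothetical sketch of the classic diagonal argument: suppose a perfect
# halting test existed, then construct a program that defeats it.

def halts(program, argument) -> bool:
    """Assumed oracle: returns True if program(argument) would eventually stop.
    No such general test can exist; this placeholder only serves the argument."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about self-application.
    if halts(program, program):
        while True:          # oracle says it halts, so loop forever instead
            pass
    return "halted"          # oracle says it loops, so halt immediately instead

# Asking whether paradox(paradox) halts contradicts the oracle's answer either way,
# so a general halting test cannot exist.
```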

In practice, this problem shows itself in the fact that bugs in computer software are unavoidable: any complex piece of software must be extensively tested by thousands of users before you can be reasonably confident that it will function as intended across different circumstances. And even then, a very unusual combination of circumstances may still crash the system.

Recommendations

• try to keep systems as simple and transparent as possible so as to be still understandable by their designers or users

• perform “stress tests” on potentially vulnerable systems to find out their weaknesses, and try to imagine what would happen under a wide range of unlikely, but not impossible, “Black Swan” conditions

• design systems to be robust or resilient, i.e. able to recover on their own in case something goes wrong (Beigi, 2019). Naturally evolved systems, such as ecosystems, organisms, or brains have this kind of resilience that allows them to self-repair and adapt to very diverse circumstances. Features that support resilience are

o decentralization, so that the system does not depend on the well-functioning of a central steering component

o redundancy: several components performing the same function, so that the one can take over if the other breaks down,

o diversity of designs, so that if one cannot handle the problem another one may,

o adaptivity: ability to adapt the functioning depending on the circumstances

• prevent cascading failures by installing, if necessary,

o boundaries or “firewalls” that separate the network into segments, so that failures cannot propagate from one segment to the other,

o artificial friction, so as to slow down the propagation of potential problems. An example is the proposed “Tobin tax” on short-term financial transactions, which discourages too rapid trading and thus reduces the risk of self-reinforcing financial speculation ending in the collapse of a stock market.


Technology tends to marginalize human values

The philosopher Jacques Ellul developed a detailed critique of technology, noting how its development tends to “desacralize” nature and human values, and to replace them by quantifiable, material values, such as total output or efficiency (Ellul, 2018; Greenman et al., 2012). We indeed saw that the quantitative notion of efficiency is a fundamental driving force for the evolution of technology, as systems are developed that are ever more efficient than their predecessors.

To decide whether some innovation or variation is more efficient than the existing system, you need to be able to objectively compare their performances. That assumes that you can accurately measure required input (costs) and desired output (utility). However, not all things of value are easy to measure. Therefore, technologies tend to focus on objectives that are quantifiable or measurable, such as emission of light vs. consumption of electricity, or amount of steel produced for a given amount of iron ore.

To optimize these measures, the whole design of the system should ideally be expressed in a quantitative, mathematical, rational form. That works best for mechanical, physical components and aspects, such as size, duration, energy, weight, amount of materials or concentration of chemicals. However, this approach tends to ignore difficult-to-measure, qualitative, non-material values, such as feeling, love, beauty, happiness or wisdom.

Moreover, problems for which you have the tools to tackle them are more likely to attract the attention and therefore to be addressed. Existing technological systems are better for certain types of problems than for others. These typically involve the manipulation of matter, such as moving people and goods, or building things out of physical components. This focuses the attention of society on material problems and values, such as producing and consuming goods, while tending to neglect social, cultural and psychological issues.

Ellul and others, such as Kaczynski, have therefore argued that technology tends to alienate us from our human feelings, from our interpersonal relations, and from nature. By reducing all issues to quantifiable and analyzable problems that can be solved using formal, rational procedures, the technological worldview also tends to dismiss the serendipitous learning, exploring and experiencing that turns us into full, self-actualizing human beings. But that does not have to remain the case, as suggested by the following recommendations.

Recommendations

• make designers, executives and politicians aware of this bias, so that they are reminded to also address values that are less obviously quantifiable or achievable

• develop more subtle, “social” technologies that enhance communities, positive attitudes and well-being. These could include educational technologies, mindfulness or meditation apps, and the more constructive uses of social media, e.g. for collaboration, friendship and the creation of mutually supportive communities

• develop “green technologies” and methods for ecosystem management that enhance our relationship with nature

Technology can make us lose touch with reality

We saw that technological systems, in their function as mediators between people and environment, intrinsically filter out many aspects of the real world, while making others more visible. Thus, they may make us blind to important phenomena. But what is even worse is that they can replace an accurate representation of reality with a representation that may appear realistic but that actually no longer has any connection with the real world. The philosopher Jean Baudrillard has analyzed such pseudo-representations as what he called simulacra (plural of simulacrum) (Baudrillard, 2000).

Technological systems help us to make representations of reality that are clearer, more informative and easier to interpret than our unaided perception. For example, photos, sound recordings and movies allow us to witness events that we otherwise would not have been able to experience. These include satellite images of developing weather systems, or movies made by drones flying over inaccessible places such as waterfalls. They thus help us to better understand the world in which we live. That support of perception becomes more effective with technologies for visualization, enhancement of images and sounds, and 3D modeling, which make the representations more vivid, clearer, more detailed and generally more compelling.

In a further stage, computer technology can be used to make simulations of real systems that cannot be directly observed. Examples are the structure of viruses, the movement of dinosaurs, or the trajectories of planes flying across the globe. While these simulations may look very realistic, they are no longer actual recordings, although they are still supposed to represent real phenomena.

As noted by Baudrillard, however, this evolution from direct perceptions of reality via images to increasingly sophisticated simulations eventually produces simulacra: things that appear like realistic representations or simulations, but that no longer refer to any underlying reality. For example, science fiction movies may use special effects to show aliens that only exist in the imagination of the movie creators. The better the technology of rendering, the more realistic the simulacrum will appear, and therefore the easier it will be to mistake it for a depiction of something real. That creates the obvious danger that people living in a world full of simulacra will no longer be able to distinguish reality from imagination.

A recent illustration of the problems this may generate are so-called “deep fakes”: computer-generated animations of some real person or event that look as if they are actual recordings. Thus, it is now possible to simulate a well-known politician or movie star saying or doing something that that person never did. This can obviously be abused for propaganda, spreading lies, or disinformation. But the danger of simulacra is more general than willful manipulation.

When simulacra become ubiquitous, as they have done with movies, TV-series and computer games, people may no longer care whether they are watching some version of reality or something purely imaginary. The simulacra start referring to one another, creating a shared culture that is a pure social construction. Thus, there is for example a whole “Marvel Universe” of superheroes that appear in movies, books and games, in which millions of people are fully immersed. The reason is that simulacra, in part because of the powerful underlying technologies, have become better at capturing the public’s attention than what happens in the real world.

Media like cinema and TV, instead of reporting on what happens in reality, tend to create their own reality. For example, soap series follow families of non-existing people every day for years and sometimes decades, so that everyone knows and empathizes with these imaginary characters. Virtual reality environments such as World of Warcraft or Second Life have attracted millions of participants, who sometimes spend hours every day fighting imaginary demons, building simulated cities, or collecting virtual presents. The reason is that these simulacra or virtual realities have become more attractive and sometimes even easier to access than the real world. Moreover, the fact that nearly everyone knows them, talks about them, and perhaps participates in them lends them an additional aura of reality.

However, the danger remains that when the actual, physical reality reasserts itself, e.g. in the form of a natural disaster, a political revolution, or a pandemic, people will be ill-prepared for it because they have lived most of their conscious life in a simulacrum where different rules apply. For example, they may think that a deadly virus cannot hurt them, or that the only reliable remedies for the virus should be avoided because they are the product of some imaginary conspiracy to turn people into robots.

Recommendations

• use technologies as much as possible to report accurately and attractively about the real world, not just about imaginary worlds

• actively remind people that simulacra are not real, and warn them about dangers such as deep fakes

• stimulate people to physically interact with the world, leaving behind their virtual interfaces, so that they experience, learn and remember what a real environment is like

Technology tends to create psychological parasites

While simulacra already distract human attention from real-world problems, psychological parasites do this even more, by using our mind for their own benefit.

Psychological parasites can be defined as self-reinforcing activities, thoughts or behaviors, which consume mental resources in order to perpetuate and spread themselves (Heylighen, 2015). Thus, they make people spend time and energy to perform an activity that has no purpose except to repeat itself. Examples of such parasitic activities are gambling, obsessive behaviors or superstitious rituals, where people feel driven to again and again perform the same action, even though it does not bring them any benefit, but rather makes their life more difficult. Another example is drug addiction, where the addict again and again feels compelled to consume the same substance, even when knowing that this is detrimental to health and social life.

Technologies play an essential role in supporting such addiction when the drug is artificially produced, like heroin, cocaine or ecstasy. But the role of technology in creating addiction extends beyond the manufacture of drugs. The fundamental reason is that technologies are designed and developed to please their users. With ongoing progress they become ever better at that. That means that technologies may become so pleasant to use that people use them more than is good for them.

The underlying mechanism is that the brain receives a boost of the neurotransmitter dopamine each time a pleasant stimulus is received. Dopamine functions as a reward signal. This makes the brain want to recreate the same kind of stimulus that produced that reward (Nutt et al., 2015). For example, a shot of cocaine and a win when you gamble both stimulate the production of dopamine. Even though a gambler will lose most of the time, the intermittent reinforcement on the relatively rare occasions when gambling pays off produces enough dopamine to create a drive to get more rewards of this kind.

Examples of potentially addictive technologies are computer games, smartphones and social media. Each time you score a point or reach a new level in your game, you get a little shot of dopamine. The same happens when you receive an interesting message, a “like” on your social media post, a cute photo of some playing cat, or a bit of news about something you care about.

The resulting addictiveness of the technology is to some degree planned by the designers, because it keeps you engaged with the system. And the more you use their system, the more money the suppliers of the technology can make. That is in part because you are more inclined to buy any additions, extensions or updates of the system, in part because the more time you spend using the system, the more paying advertisements you will see. For example, Facebook and YouTube function according to this business model. Therefore, they are driven to keep you maximally engaged with their system by providing you with the stimuli you are most likely to find rewarding.

But even when designers do not intentionally plan this, the evolution of technology tends to make systems more addictive. The reason is simply that more addictive technologies will be used more often than less addictive ones. Therefore, any variation that makes a system more addictive will make it more likely that people will use this system rather than its competitors.

Let us summarize some of the psychological mechanisms that support addiction:

Supernormal stimuli:

Our brain has evolved so that more intense stimuli attract more attention: louder sounds, stronger colors, more exaggerated features, faster movements, ... Supernormal stimuli are stimuli more intense than those that occur in the natural world (Barrett, 2010). In most cases, the brain pays attention not so much to the absolute, but to the relative intensity of a stimulus: how much stronger or weaker it is than other stimuli. In the competition between stimuli, the strongest one normally wins—even if it takes on absurd proportions. This can be illustrated by cartoon figures—such as Mickey Mouse or anime characters—which tend to have impossibly large heads and eyes supported by ridiculously short legs. That is because faces and eyes are intrinsically more interesting to the brain. Therefore, artists have learned to exaggerate those features in order to attract attention. Similar tricks are used in a variety of domains, including advertising, computer games, junk food (which contains unnaturally high concentrations of sugar, fat, salt, and calories), and movies (which show ever more extreme special effects and violence).

Flow is the pleasurable state achieved when a person is so engulfed in an activity that s/he only wants to continue that activity, ignoring all other concerns (Csikszentmihalyi, 1990; Nakamura & Csikszentmihalyi, 2002). This may happen e.g. while playing tennis, painting, climbing a mountain, ... The requirements for flow are:

• clear goals: no uncertainty about what to do next

• immediate feedback: clear indication whether the action was successful or not, thus showing ongoing progress and continuously stimulating the user to go further and further

• challenges in balance with skills: the task is difficult enough to require full concentration, but not so difficult as to make people afraid of failure

The addictive quality of computer games is commonly explained by their ability to produce flow (Chou & Ting, 2003; Heylighen et al., 2013). Indeed, such games fulfil the requirements (a small sketch of the third point follows the list below):

• the goals of the game are clear

• the player continuously gets points or other rewards to indicate progress toward the goal

• the difficulty level increases as the player becomes more skilled so that the game remains challenging
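
A minimal sketch of that third requirement, difficulty adjusting to skill (thresholds and step sizes invented; not taken from any real game engine):

```python
# Toy sketch of keeping challenge in balance with skill: raise the difficulty when
# the player succeeds easily, lower it after repeated failure, otherwise keep it.
# All thresholds are invented for illustration.

def adjust_difficulty(level, recent_success_rate):
    if recent_success_rate > 0.8:       # too easy: risk of boredom
        return level + 1
    if recent_success_rate < 0.4:       # too hard: risk of frustration and giving up
        return max(1, level - 1)
    return level                        # within the "flow channel": keep as is

level = 5
for success_rate in (0.9, 0.85, 0.5, 0.3, 0.6):
    level = adjust_difficulty(level, success_rate)
    print(f"success rate {success_rate:.2f} -> difficulty level {level}")
```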

Flow-producing technologies can mobilize people for worthwhile objectives, such as educating themselves, exercising, or collaboratively solving problems. However, their potential power on the human mind is such that they can lead to both addiction and exploitation by political or commercial organizations. This would turn their users into unwitting slaves of the system (Heylighen et al., 2013).

Another factor that may produce addiction to technological systems is that people want to get the same as other people they know. They want to feel involved in their peer group or community. If these peers all use a certain system, such as some social media platform, it will be very difficult to resist the pressure to become involved as well. And once they are involved in the system, they will constantly want to monitor what is going on there because of the so-called “FOMO” (Fear Of Missing Out), i.e. the fear that the others will engage in some fun activity in which they have not been involved.

Communication technologies such as social media or email also facilitate the spread of false news and other “mind viruses”. These are replicating ideas and stories (“memes”) that are propagated from person to person because of certain intrinsic characteristics of the message, like being sensational, dramatic or funny—and this independently of the message being true or in any way valuable (Brodie, 1996; Heylighen, 1998; Heylighen & Chielens, 2009). The SUCCES criteria provide a good characterization of the kind of messages that are likely to spread: Simple, Unexpected, Credible, Concrete and Emotional Stories (Heath & Heath, 2007). Note that even when the credibility is not very high, as in urban legends or conspiracy theories, the other criteria ensure that the message will be considered interesting enough to be passed on, thus reaching an increasingly large group of people. Because of the lack of friction characterizing communication on the Internet this can happen extremely fast. The problem is that some of these memes can be extremely destructive, by e.g. inciting violence against certain persons or groups, or by stopping people from doing necessary actions, such as protecting themselves against diseases.

They can also convince people to join hate groups, terrorist networks or religious cults. The creation of such closed communities that are dangerously detached from reality is reinforced by what has been called “echo chambers”. These are communication forums consisting of only like-minded people who reinforce each other's opinions, without being confronted by any counterargument or critical voice from outside the community. That allows extremist, unrealistic ideas to become amplified, a phenomenon known in sociology as “groupthink” (Janis, 1972). This can have very dangerous consequences, such as groups deciding to perform terrorist attacks or collective suicides. While groupthink does not need technology to arise, the algorithms used by social media to bring like-minded people together clearly increase the danger of it happening.

Recommendations

• make people aware of the danger of addiction to technologically supported systems

• create regulation to discourage or prohibit the development of systems that promote self-reinforcing, addictive, parasitic behaviors

Technology threatens health and well-being

People nowadays live in a highly unnatural, technology-based, enclosed environment, “protected” from natural influences, dangers and challenges, such as heat, cold, hunger, or rain. In a sense, they no longer live in their natural, “wild” condition, like hunter-gatherers in the savanna, but like animals in a zoo. They are being fed and cared for, but at the price of no longer moving freely outside of their buildings, cars and highly regulated city environment. In a sense, we have become “zoo humans” (Le Corre, 2019).

The effect is that our inborn human capabilities are weakened because we replace them with technological solutions. From a biological point of view, capabilities that are not actively used tend to deteriorate, a principle known as “use it or lose it”. For example, people who stay for a long time lying in a hospital bed lose muscle mass and eventually the ability to walk. The same applies for astronauts living in zero gravity, because their muscles no longer need to counteract gravity.

Technological systems are largely designed to reduce the challenges on our bodies, and thus the need to use our physical skills. For example:

• driving a car instead of walking,

• using elevators instead of stairs,

• using shoes instead of walking barefoot,

• relying on GPS instead of orientation skills,

• staying in buildings instead of open air

Another problem is that technology is used to produce artificial foods that

• demand fewer resources to produce (thus being more “efficient”),

• are easier to process, eat and digest (thus demanding less effort from producers and consumers), and

• are more attractive/addictive to eat (thus producing more “utility” in satisfying desires).

These foods tend to contain more calories, sugar, salt, additives and fat than is healthy, because these are the ingredients that are easiest to produce and that make the food more attractive. On the other hand, these foods typically contain fewer proteins, minerals, vitamins, antioxidants, fibers and other essential nutrients, both because these tend to be removed during processing to make the food easier to store, chew and digest, and because these tend to demand more time and effort for their agricultural production. The resulting “junk” foods include refined flours, bread, pasta, cookies, minced meat, French fries, sweets, and soft drinks.


More generally, technology tends to create unnatural environments and a sense of alienation from our surroundings. According to the theory of biophilia (Heerwagen, 2009), our brain has evolved for close contact with nature, i.e. for being surrounded by plants and animals. Therefore, purely artificial environments, such as geometric, concrete apartment blocks, tend to be intrinsically stressful. Evidence for this includes the observations that there is more crime and vandalism in neighborhoods without trees, that people in hospital rooms with a view on buildings recover more slowly than those with a view of nature, and that people living in areas of outstanding natural beauty tend to be happier.

Moreover, our body needs sunlight, exercise, fresh air, contact with earth and with bacteria... (Heylighen, 2020; Sisson, 2013). Sterile environments are unhealthy, because they lack the symbiotic bacteria that live on our skin and in our intestines and that we need to remain healthy. They also do not provide the opportunity for the body to encounter a variety of different bacteria and viruses, so that it learns to recognize the harmful ones and to develop the appropriate antibodies. Without such exposure to common microorganisms, the immune system tends to be ill-prepared for infections, while overreacting to innocuous stimuli, such as dust, nuts or strawberries, resulting in allergies and auto-immune diseases.

Our brain is also not made for the constant stimulation, interruption, and information input that accompanies electronic media (Carr, 2011; Compernolle, 2014). These make us fatigued, depressed and irritable, while reducing our capability to concentrate, memorize and see the forest for the trees. According to attention restoration theory, natural environments such as forests do not require such immediate, strong focus as artificial stimuli. Instead, they invite a soft, diffused, relaxed wandering of the mind that restores our capability to concentrate (Grinde & Patil, 2009).

All these factors together create a host of so-called “diseases of civilisation” or “diseases of modernity” (Carrera-Bastos et al., 2011). These include obesity, type 2 diabetes, metabolic syndrome, depression, cardiovascular disease, dementia, chronic inflammation, chronic fatigue, auto-immune diseases, allergies, anxiety... We know these are caused by our technological civilization, because hunter-gatherers who live in nature while lacking access to modern medicine rarely or never suffer from these diseases, even when they reach a ripe old age. Presently, more people in the world suffer from being overweight than from hunger.

After a long period of increasing life expectancy due to technological advances in medicine and pharmacy, food production, and accident prevention, these “civilizational” diseases may now have started to decrease our life expectancy. They also seem to be depressing our general mood, ability to concentrate and well-being, leading to an epidemic of burnouts, major depressions and suicides. Because of the resulting unease, together with the increased complexity of society due to globalization and technological networks, people are more inclined to accept simplistic solutions that do not require reflection, such as conspiracy theories, religious fundamentalism or populist ideologies.

Recommendations

• discourage the use of unhealthy foods or habits, e.g. by putting warning labels on food, imposing a tax on junk food, while subsidizing healthy natural foods

• stimulate natural exercise in natural surroundings, e.g. by creating more parks and forests in and around cities

Technology tends to upset the ecosystem

As technological systems become more widespread and powerful, their input and output become so large that they interfere with natural processes at the scale of the ecosystem or even the planet. It is important to distinguish the input and output stages, because the corresponding problems have a different dynamic and require a different type of solution.

Problems of output: here the difficulty is the production of materials that have a negative effect on the environment, which we may summarize as pollution. These include greenhouse gases, garbage and waste materials, plastic in oceans, dust in the air, poisonous effluents and sewage in rivers…

Problems of input: here the problem is the increased consumption of resources from nature that thus become scarce and may disappear altogether. These include rare minerals such as cobalt, wood, fish in the oceans, rare animals and animal products, water, and land. We will call this the problem of (potential) exhaustion.

However, unlike pollution, exhaustion is mitigated by a self-correcting mechanism rooted in the economic law of supply and demand and supported by ephemeralization. The principle is that as a resource becomes less abundant (reduced supply), it becomes more expensive, because an unchanged demand now chases a smaller supply. Therefore, people will invest more effort in exploiting it more efficiently, e.g. by reducing waste, developing better extraction methods or recycling. For example, when water is scarce, more effort will be put in minimizing leakages and evaporation, or in bringing in water from deeper wells or more distant reservoirs. They may also replace the resource by a more abundant (and therefore cheaper) one that can fulfill the same function. For example, when wood becomes too scarce to use for heating, it is replaced by coal, oil or electricity.
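
This self-correcting feedback can be made concrete with a toy simulation. The sketch below is my own illustration with invented numbers, not a model from the text: as the remaining stock shrinks, the price rises, and the higher price pushes yearly consumption down, so the resource is exploited ever more sparingly instead of being abruptly exhausted.

    # Toy model (illustrative assumptions only) of self-correcting exhaustion:
    # scarcer stock -> higher price -> lower consumption (efficiency, substitutes).
    stock = 1000.0           # remaining units of the resource (hypothetical)
    base_demand = 50.0       # units consumed per year at a negligible price
    price_sensitivity = 1.0  # how strongly consumption reacts to price (assumption)

    for year in range(1, 31):
        price = 100.0 / stock                  # scarcer resource -> higher price
        consumption = base_demand / (1 + price_sensitivity * price)
        consumption = min(consumption, stock)  # cannot consume more than remains
        stock -= consumption
        if year % 10 == 0:
            print(f"year {year:2d}: price={price:6.2f}, "
                  f"consumption={consumption:5.1f}, stock={stock:7.1f}")

Running the loop shows consumption falling as the stock dwindles, which is the qualitative pattern behind the argument above, not a quantitative prediction.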

The result is that predictions of exhaustion have typically not come true. This was illustrated by the famous Simon-Ehrlich wager on the price of mineral resources (Pooley & Tupy, 2020). The ecologist Paul Ehrlich bet that the prices for a number of scarce resources would increase over the next ten years as these resources were becoming ever scarcer because of increasing exploitation. The economist Julian Simon, on the other hand, predicted that all these resources would become cheaper, because of more efficient use. Even though Ehrlich was allowed to choose the resources he thought were nearest exhaustion, all of them had effectively decreased in price ten years later, and Simon won the bet.

This self-correction unfortunately does not work for pollution. To understand that, we need to introduce the economic concept of an externality (Brynjolfsson & McAfee, 2014). Externalities are costs and benefits that are not counted in (“external to”) a transaction. Therefore, they do not influence the people who make the transaction, even though they may affect others. Technological production sometimes has costs that are not borne by the users, owners or inventors of the technology. Therefore, these parties have no incentive to reduce these costs, even when everyone suffers from them. For example, the pollution produced by a car does not directly affect the buyer or the producer of the car. Yet, the pollution produced by all cars together affects everyone.


The exhaustion of scarce resources is not an externality. The consumers of that resource, such as the manufacturers that need a raw material such as cobalt for the production of their smartphones, are the first to pay the price of exhaustion. Therefore they are motivated to reduce their consumption by increasing efficiency or switching to another resource. On the other hand, the production of waste material, such as CO2, is an externality: the producers do not have to pay for releasing gases into the atmosphere. Therefore they are not motivated to reduce their production. In conclusion, the market can deal relatively well with scarcity, but not with pollution.
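
The logic of an externality can be made explicit with a small worked example. The numbers below are invented for illustration only: they show why a polluting transaction that is privately profitable but socially harmful still goes ahead, and how a tax equal to the external damage (the kind of carbon tax recommended below) changes the private calculation.

    # Hedged illustration (invented numbers) of an externality and its taxation.
    private_cost   = 100.0   # what the producer pays per unit (assumption)
    private_value  = 120.0   # what the buyer is willing to pay per unit (assumption)
    pollution_cost =  40.0   # damage per unit borne by everyone else (assumption)

    profit_without_tax = private_value - private_cost                      # +20 -> unit gets produced
    net_effect_on_society = private_value - private_cost - pollution_cost  # -20 -> society loses

    carbon_tax = pollution_cost               # a tax equal to the external damage
    profit_with_tax = private_value - private_cost - carbon_tax            # -20 -> unit is no longer produced

    print(profit_without_tax, net_effect_on_society, profit_with_tax)

Without the tax, the private parties see only the +20 and produce; with the tax, the external cost enters their own calculation and the socially harmful production stops.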

Recommendations

• tax or prohibit negative externalities: let the producers bear the cost to others of their undesired outputs, e.g. carbon taxes, taxes for non-recyclable packages

• subsidize technologies with positive externalities, such as systems that do not produce undesired outputs (e.g. solar panels) or that neutralize such outputs (e.g. cement that absorbs CO2)

• aim for sustainable development (Folke et al., 2002). Development can be defined as continuing economic and technological advances, addressing ever more needs of humanity. Development is “sustainable” when it can go on without limits, because it does not exhaust resources or produce unacceptable stress on social and ecological systems


• encourage generalized recycling or what is called a circular economy. That means coupling technological systems in cycles so that every output produced by one sys- tem can be consumed as input by one or more other systems, and nothing is wasted. This requires developing technologies that can process undesired outputs into some- thing useful.

For example, part of the CO2 in the atmosphere is absorbed by plants growing in the fields. But when the remains of these plants (e.g. corn stalks) are burned or left to decompose, the stored carbon is again released in the atmosphere. However, if the remains are converted into charcoal using special ovens, such charcoal will no longer decompose and return to the atmosphere. Moreover, the charcoal is useful because it can be buried to improve the texture of soils so that more nutrients remain for the next generation of plants.

Technology may produce unemployment

As technological systems become increasingly sophisticated, more and more jobs—not only manual but intellectual—are taken over by machines. For example, computer programs are nowadays replacing bank clerks, legal researchers, telephone operators and even news writers... (Brynjolfsson & McAfee, 2014) The loss of jobs was the original motivation of the Luddites in their fight against the introduction of new technologies. However, the loss of jobs to machines is not necessarily a problem, given that new jobs can be created without limit: there are always valuable things that a person can do for others. In the present era, more job openings tend to be advertised in Belgium than there are candidates available. In practice, new technologies tend to create at least as many jobs as they make disappear.

A general trend is that the emphasis shifts from production, buying and ownership to service. Service is more difficult to automate because it needs to take into account ever changing technological and social contexts, and involves dealing with people rather than with objects. For example, young people are less inclined to buy a car for themselves, but more inclined to pay for access to a car that is maintained or driven by others. Thus, while there will be fewer jobs for factory workers that produce cars because of automation and perhaps reduced demand, there are likely to be more jobs for chauffeurs or technicians. Such a service-based economy may also reduce the production and consumption of goods, since these are shared among several people. Another example is the philosophy of light as a service. Instead of a company buying huge amounts of lamps for all their offices and having to regularly replace them, it would pay a specialized firm for the service of always having good lighting. That service provider is motivated to install the most efficient lights available, because it pays for the electricity, and to have them last as long as possible, because it pays for the new lamps that must be installed when the old ones give out.

The disappearance of existing jobs because of innovation is a real problem only if the accompanying new jobs are not sufficiently well paid. Thus, many relatively well-paid manufacturing jobs have been replaced by so-called “hamburger jobs”, i.e. service jobs that demand very limited skills (such as preparing fries in a fast-food restaurant). These are accordingly paid so little that the people who have them are counted among the “working poor”: they are not unemployed, but they do not earn enough to lead a decent life.

Yet, technologies also create new, well-paid jobs in the maintenance and development of the novel systems. However, most workers lack the necessary skills to fill those jobs. Keeping up-to-date with complex technologies is intrinsically difficult. Therefore, innovation inevitably creates stress on employees whose present work may be automated or otherwise transformed by technology.

Recommendations

• provide computer-supported education using the latest technologies to help employees stay up-to-date or learn new skills for dealing with new systems and trends

• provide a universal basic income, i.e. a continuing income guaranteed even in case of illness or unemployment, so as to create a safety net for people who are stressed out, lack the necessary skills to take on a new job, or require the breathing space to reorient their career. Given the monetary wealth created by technology, there is certainly enough money available to pay everyone a decent income, even those who cannot work.

Technologies can be monopolized by special interests

The diffusion of a technology is subject to what in economics is known as “increasing returns”: the more a technology is used, the more valuable it becomes (Arthur, 2009). More traditional resources, such as food or paper, are subject to decreasing returns: once you have enough, producing more of it creates less value per item produced. For example, a bakery that produces more bread will initially make a profit, because more loaves are sold, but once everyone who might need bread has bought some, the additional loaves will not bring in any money, while still costing money to produce. Many technologies, on the other hand, profit from “network effects” (Brynjolfsson & McAfee, 2014; Katz & Shapiro, 1994): the more systems of that type have been sold, the more valuable it becomes to have such a system. For example, if there are only two telephones in the world, having one is not very useful, since you can only use it to call one other person. But the more people have telephones, the more useful it becomes to have a telephone yourself and thus become part of the network.

In general, for systems that can do more when they are networked with other systems of the same type, it is better to use the technologies that are compatible with those of most other people. This is particularly applicable to so-called “platform” technologies that interconnect different people and systems. Well-known examples are the Facebook social network, the Amazon or Ebay marketplaces, Apple computers, smartphones with an Android operating system, or Microsoft software.

Such platforms are not limited to information technologies, but also include transport networks, such as railways or metro lines. Here trains should ideally be able to reach any node in the network from any other node. This is impossible if the rails are not connected or if they obey different standards, such as width of the rails or voltage for power. Further examples of platform technologies are standard specifications, including different types of cables or connectors, such as USB, Ethernet, or 220 Volt AC electrical outlets, the PDF standard for electronic documents, and the GSM standard for cellular phones.

The more commonly used the platform, the more information, people, places or services you can reach with it. That means that if different platform technologies compete, the one that for whatever reason managed to get the most users initially will tend to attract the most additional users—even if it is not really the best. Thus, it grows exponentially until it dominates the market. People who initially used a different system, when they see that most of their friends or clients use another system, will tend to adopt the more popular one, and eventually abandon their initial platform. Thus, one system tends to become the dominant standard, outcompeting rival systems with fewer users, until (nearly) everyone uses the same technology.
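
This snowballing adoption can be illustrated with a very small simulation. The model below is a deliberately simplified sketch of my own (the platforms, numbers and adoption rule are invented): each newcomer asks a handful of existing users which platform they use and joins the majority, which is usually enough to turn a slight initial lead into near-total dominance.

    # Minimal sketch (illustrative assumptions) of the winner-takes-all dynamic:
    # each newcomer adopts the platform used by most of a few randomly met users,
    # so a small early lead tends to snowball.
    import random

    random.seed(42)
    users = ["A"] * 55 + ["B"] * 45        # platform A starts with a slight lead

    for _ in range(10000):                 # 10 000 newcomers join one by one
        sample = random.sample(users, 5)   # ask five random existing users
        majority = max(set(sample), key=sample.count)
        users.append(majority)             # adopt the platform most of them use

    print(users.count("A"), users.count("B"))  # A usually ends up dominating

The adoption rule is of course a caricature, but it captures the positive feedback described above: popularity feeds on itself, independently of which platform is technically better.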

Whichever organization owns that technology now has a near monopoly. This allows it to do more or less as it pleases, even if that is to the detriment of the people that use its technology. For example, it may make further innovation by others difficult, by not allowing these new technologies to use or connect to its ubiquitous network. Or it may use its system for surveillance, abuse of personal data, or the exclusion of particular individuals or groups.

Another reason that a system can achieve a near monopoly is that its technology is so expensive or complex as to be difficult to reproduce by others. Examples are nuclear reactors and nuclear weapons, or certain genetically modified crops (where the former Monsanto, now part of Bayer, had a near monopoly). In such cases, the owners of the technology try to maintain control over its use by keeping the procedures secret, so that others cannot reproduce or control it. This is a good precaution in the case of intrinsically dangerous technologies such as nuclear weapons or potentially lethal virus cultures.

But in other cases, the resulting lack of transparency means that there is a lack of democratic control over how the technology is used. That increases the risk of manipulation by special interests or closed groups that don't care about the general good. These include governments (e.g. China and the US, which have been spying on their citizens) and corporations (e.g. Apple, Facebook, Bayer), but also hackers, spammers, hate groups, crime syndicates or political propagandists that have somehow gotten access to these hidden systems.

Recommendations

• limit concentration of power through the equivalent of anti-monopoly laws. If a de facto monopoly is unavoidable because of efficiency reasons (e.g. ensuring a single standard), ensure that the organization that has the monopoly is democratically overseen and bound by transparent and fair regulation

• if necessary, force corporations to control abuse by imposing ethical rules (e.g. Facebook and Google being forced by the EU to use fact checkers so as to stop the spread of false news)

• protect whistleblowers that leak inside information in order to expose hidden abuse to the public

• promote open-access or open-source technologies (Heylighen, 2007b; Lerner & Tirole, 2002). These are technologies not protected by intellectual property, so that everyone can freely use, reproduce and examine them. By opening up the inner workings of the technology to anyone interested (e.g. by publishing software as open source), bugs, negative side effects and potential abuse can often be detected at an early stage, while improvements can be suggested.

• try to enforce public, open-domain standards that everyone can use, such as the HTTP and TCP/IP protocols and the HTML format that enable the World-Wide Web, rather than standards that are the intellectual property of a particular corporation, such as iOS or Windows.

Technology can amplify or reduce inequalities

Technology affects power relations in society. Those who get access to a new functionality obviously can do things that those without access cannot. For example, having a car in a society where everyone walks on foot allows you to go places and get things that others cannot reach. When a new technology appears, initially only the people sufficiently expert, informed or rich will be able to use it. For example, the first computers were very expensive while requiring advanced knowledge to operate. Therefore, very few people actually used them. Thus, after an important innovation, in a first stage the power of the wealthy and knowledgeable increases relative to the poor and non-expert. That is because only the wealthy can afford to buy the newest technologies, while the highly educated are better informed about the benefits and quicker to learn how to use the technology.

However, that changes once the technology starts to spread. We saw that because of ephemeralization it then also becomes cheaper, easier to use and more efficient. In fact, technology tends to empower the weak more than the already powerful, who usually could already obtain the benefits from the new technology via some other means. For example, you have less need for a wide-ranging communication medium such as the Internet, if you can pay people to collect and distribute information for you. That medium effectively distributed the power of communication much more widely. The revolutions in countries such as Tunisia and Egypt during the so-called Arab Spring illustrate how social media allowed the population to mobilize against a dictatorial regime.

Communication technologies also benefit the poor economically. For example, the introduction of cell phones in poor countries, such as in central Africa, allowed these countries to build up a communication infrastructure without first having to install expensive networks of telephone cables. Moreover, these countries have developed systems for people to make payments digitally via their phone without needing a bank account. In India, fishermen use their cell phones at sea in order to find out in which port there is most demand for the fish they caught, so that they get the best price for their catch (and relieve the most urgent need) (Schmidt & Cohen, 2013).

More generally, thanks to new technologies you no longer need to be an expert or have access to expensive tools to perform creative activities such as playing or composing music, making videos or movies, publishing news, designing objects and buildings, or following specialized courses. What used to be reserved for the elite becomes ubiquitous. Through ephemeralization, technology becomes inexpensive, universally accessible and easy to use. For example, former “luxuries”, such as plane travel, cars, smartphones, dishwashers, or microwave ovens, have become standard accessories. Thus, in the longer term technology seems to reduce rather than increase inequality.

However, the lack of friction in digital systems also accelerates non-linear amplification, which may again lead to growing inequality. The idea is that the rich not only get richer, but that they do so more quickly than the poor. The reason is that in a frictionless world the difference between the most successful one and the next one tends to increase. This is because of what we called network effects: the more people use your products/services, the more people are likely to start using these same services, but also the fewer people will use any competing services, even if these are in practice just as good. This produces a so-called winner-takes-all dynamic: the most successful ones eventually become dominant, getting most of the benefits, while all the others lose out (Brynjolfsson & McAfee, 2014).

This positive feedback mechanism has always existed in a capitalist economy, because the more money you have, the easier it is to invest that money in stocks or activities that will bring you even more money. But technology-supported network effects and the reduction of friction have made the resulting exponential growth of wealth even more pronounced, resulting in a small group of billionaires, such as Bill Gates, Jeff Bezos or Warren Buffett, who together earn more than the poorest billion people in the world.
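
A back-of-the-envelope calculation makes the divergence concrete. The figures below are invented for illustration only: one party starts with more capital and obtains a higher return, and the absolute gap between the two grows exponentially.

    # Toy calculation (assumed numbers) of the reinvestment feedback loop:
    # a head start plus a higher rate of return makes the gap widen ever faster.
    rich, poor = 1_000_000.0, 10_000.0       # starting capital (hypothetical)
    rich_return, poor_return = 0.08, 0.02    # yearly returns (hypothetical)

    for year in range(1, 31):
        rich *= 1 + rich_return
        poor *= 1 + poor_return
        if year % 10 == 0:
            print(f"after {year} years: rich={rich:,.0f}, poor={poor:,.0f}, "
                  f"ratio={rich / poor:,.0f}")

The exact numbers do not matter; what matters is that compound growth turns a modest difference in starting position and return into an ever larger absolute gap.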

Recommendations

• tax financial wealth to redistribute it more fairly and effectively

• facilitate the “trickle-down” of technologies to the less advantaged

• skip expensive stages of technology development when possible. For example

o don't invest in telephone lines when you can do with wireless technology

o don't invest in power plants when electricity can be locally produced with solar panels and batteries

Technologies evolve too quickly for us to cope

The dynamic of accelerating technological innovation affects both individual human psychology and the functioning of society. It creates a constant need to learn to use new tools, interfaces and systems. This puts a heavy burden on our mental capabilities to assimilate and remember information, and to understand the world around us.

Communication technologies, such as computers, social media, planes and smartphones, also affect the way social systems function. For example, they are used to create new platforms that supposedly facilitate and automate work in organizations, but in fact demand ever more inputs from employees. These now typically have to report on various types of activities, plans, aspects, risks and attitudes that previously were difficult to track because the paper and pencil systems used for doing so were too clumsy. Thus, while computer and network technology was intended to reduce the bureaucratic burden on employees by automating repetitive tasks, in practice it only seems to have increased it, by asking them to translate vaguely defined and ever changing situational factors into a form that can be processed by the computer system.

A more general effect of technological acceleration is that socio-cultural changes become ever faster. The problem is that too rapid change is stressful. In 1970, the futurologist Alvin Toffler wrote a prophetic book in which he warned of the negative consequences of such change (Toffler, 1970). As Toffler documented, the accumulation of too many changes experienced by a person may lead to symptoms such as stress, disorientation, anxiety and confusion. He called the resulting syndrome Future Shock, in reference to the “Shell Shock” (or what is now called “Post-Traumatic Stress Disorder” or PTSD) experienced by soldiers that have been confronted with the unpredictable and uncontrollable events of war. This on-going stress is a major contributor to the worldwide epidemic of burnouts and depressions.

Another danger of too fast change is that it may lead to a conservative backlash. Here people react to the changes by trying to reset the clock to an earlier, simpler period. On the political level, it leads to movements and ideologies such as fundamentalism and nationalism that create a picture of an idealized past, and then push policies to get back to that past. This can sometimes take the form of terrorism to destroy the supposed sources of the changes, or of totalitarian systems that suppress new ideas or movements.

Considering the limitations of human psychology and society, the singularity (infinite speed of technological advance) is unlikely to happen because humans cannot further accelerate their ability to innovate and adapt to innovation. Therefore, new technologies that demand much change are likely to take time to be adopted. Thus, technological innovation may well have reached its maximum speed.

A related limitation is information overload. Technologies both produce more information and make existing information more available. Given the brain’s limited capacity for processing information, the amount of information we are presently confronted with is simply too much for us to duly make sense of. Such overload is not only stressful, but it may also make people miss crucial information. This may produce serious failures or accidents. For example, in hindsight the FBI and other intelligence agencies did find some indications of the upcoming 9/11 terrorist attack in the data they had collected. But given the overwhelming amount of data they had to review, these indications were overlooked, with the result that the attack was not prevented.

More generally, technological acceleration promotes a so-called VUCA world (Beigi, 2015; Bennett & Lemoine, 2014), with the VUCA acronym standing for a situation that is intrinsically:

• Volatile: things change ever more quickly

• Uncertain: we cannot predict what will happen

• Complex: everything interacts with everything

• Ambiguous: we don't really know what things mean

This creates deep confusion and a sense of loss of control, at all levels of society.

Recommendations

• try to cushion overly fast changes in society

• minimize less important changes, such as changes in user interfaces

• do not push new technologies that offer only marginal improvements

• wait until a new technology has been shown to function really well before adopting it

o avoid being on the “bleeding edge” of innovation

o give applications time to stabilize

o and iron out bugs, confusions and side effects

• maintain “enclaves” of the past

o allow people not to adapt to new systems, by keeping old systems available

o maintain backward compatibility with older versions of technologies

Human-technology co-evolution

Different technological systems and their human users co-evolve: they adapt to one another, relying on each other’s outputs. They thus form an ecosystem of symbiotic, mutually dependent systems, which we will now investigate in more depth.

Technological niches

Each system on its own evolves through variation and selection (Heylighen, 2014). Variation here refers to new inventions, innovations, or simply small changes in design, typically building further on existing designs. Selection means that most variations do not survive: they are abandoned or forgotten, while only a few are widely adopted. The systems that survive and spread are those that are best adapted to their local environment, i.e. that have found a “niche” in which they fit. A niche is a specific way of extracting resources sufficient for the system to survive and grow. The main resource needed for a technology to spread is people who want to use it. That means that there must be a “demand” for the technology, i.e. people who are willing to pay or otherwise reward others for developing, producing and distributing that technology.

For example, there is a niche for drugs or systems that make people lose fat or build muscle. But there is no niche for drugs that would make people lose muscle, or accumulate fat. Such a niche depends on the desires of individuals, the socio-cultural system (which e.g. values a thin, athletic body, but not a fat body), and the market. Indeed, many people are overweight, want to lose fat, and are willing to pay for achieving that outcome.

If a technology satisfies the demand better than rival systems, then it will be selected, while its rivals will eventually be eliminated. “Better” means more effective and more efficient: achieving more of what people want and less of what people don't want, while consuming fewer resources. In this case, the preferred technology would be a drug that makes people lose more fat, with fewer negative side effects, while achieving that result at a smaller cost, e.g. because it is easier to manufacture so that it can be sold for less money.
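
This variation-and-selection dynamic can be mimicked with a toy evolutionary loop. The sketch below is my own illustration (the fitness function and all numbers are assumptions, not from the text): each “design” is scored on effectiveness minus resource use, slightly mutated copies are generated, and only the fittest variants survive each generation.

    # Toy variation-and-selection loop over competing "designs" (assumptions only).
    import random

    random.seed(0)

    def fitness(design):
        effectiveness, resource_use = design
        return effectiveness - resource_use   # "better" = more effect, less cost

    population = [(random.random(), random.random()) for _ in range(20)]

    for generation in range(50):
        # variation: every surviving design spawns a slightly mutated copy
        offspring = [(max(0.0, e + random.gauss(0, 0.05)),
                      max(0.0, r + random.gauss(0, 0.05))) for e, r in population]
        # selection: keep only the 20 fittest designs out of parents + offspring
        population = sorted(population + offspring, key=fitness, reverse=True)[:20]

    best = max(population, key=fitness)
    print(f"best design: effectiveness={best[0]:.2f}, resource use={best[1]:.2f}")

The loop is of course far simpler than real technological evolution, but it shows the core mechanism: blind variation plus selection by a demand-like criterion steadily produces more effective, less resource-hungry designs.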

We noted that a technology has more utility or value if it is more in tune with users’ desires or demands. That depends to some degree on cultural expectations, fashion, status and various subjective or esthetic factors. Ideally, we would like to define value as that which helps people live a happy and fulfilled life over the long term. In practice, however, it tends to mean what people find attractive here and now. Demand can emerge by discovering something that gives pleasure to people, even if it goes contrary to people’s real needs, as we saw with the examples of heroin, addictive computer games and junk food. Demand can be created artificially via marketing and publicity: convincing people they need this thing they never thought of using before, or simply by imitation: wanting to have the same as your neighbor. This explains the spread of fads, gadgets, fashions, brands...

Next to these socio-cultural conditions, the niche for a technology also depends on the presence of physical resources (means, opportunities) and constraints (limitations) in the environment. For example, steam engines depend on the availability of iron ore to make the steel for the engine, water to produce the steam from, coal to burn for heating the water, and plumbing to get the water to where it is needed. As an example of an environmental constraint, there will be no niche for skateboards in places without large smooth surfaces.

A niche is also determined by the availability of complementary technologies. For example, cars depend on fuel stations, rubber tires, garages, etc. Electric cars as yet have difficulty spreading because of a relative lack of charging stations, and the high cost of batteries. Smartphones could never have spread without the previous development of the Internet, wireless communication, processors powerful and efficient enough to run on relatively small batteries, and a whole library of apps making practical use of the capabilities of the hardware. Thus, popular technologies create their own demands for complementary technologies to support them and therefore niches for such technologies to occupy. For example, cars have created a massive demand for the distillation of fuel from petroleum, processing of rubber, production of windshields etc... The network formed by these mutually dependent niches and the technologies that have evolved to fill them defines the technological ecosystem.

Actor-network theory

The philosopher and sociologist of science Bruno Latour and his collaborators have investigated a number of large-scale innovations, such as the development of nuclear energy. They developed a conceptual framework called actor-network theory (ANT) to make sense of the complex interactions that give shape to such a society-wide development (Latour, 1996a, 1996b). Latour notes that any large-scale innovation depends on the interaction between very different systems, which he calls actors, because their active involvement is necessary for innovation to happen. Actors include:

• people, such as researchers, inventors, entrepreneurs, and political decision-makers

• institutions and organizations, such as universities, government, companies, and the market,

• physical resources, such as minerals, fuels, and agricultural products

• existing technological support systems, such as computers, roads, planes, or the Internet

These actors form complex, ad hoc networks of mutual support, cooperation, and sometimes competition. No single actor is in control: they all depend on each other and complement each other. If these actors manage to find a synergetic configuration, i.e. if they can achieve more together than they would working separately, then their joint product may take off and conquer society.

The picture illustrates a network of the different actors (here called “actants”) involved in the publishing of books, including human actors (such as the writer and publisher), technological actors (such as the Internet and e-readers), and organizational actors (such as governments).

These different human, physical, technological and social systems/actors co-evolve: each one is adapting to the environment formed by the others. The overall configuration is constantly changing, because the adaptation of the one changes the environment for the others, pushing them too to adapt, thus again changing the environment and creating a new pressure to adapt. No one is in control of this process or can predict what the overall system will do. About the only constant in this on-going evolution is that technologies become more efficient and effective in fulfilling whatever demands or niches this evolving ecosystem creates, because that is the only way to beat their competitors.

Human-technology symbiosis

The technology watcher Kevin Kelly in his book “What technology wants” has conceived the ecosystem formed by all technological systems as what he calls the technium (Kelly, 2010). This “technium” is seen as a new evolving realm, to some degree similar to biological life, but based on a very different substrate. This realm produces its own “organisms”: technological systems. Like biological organisms, these technological organisms “want” to survive and multiply, because they are subject to the same evolutionary principle of the survival of the fittest. But for that they are dependent on humans to invent, develop and build these systems.

Thus, the technium in some respects behaves like a parasite on humanity. It evolves by using human resources to grow. It cannot exist without people to support and produce it. Therefore, it pushes people to produce ever more of it, the way a virus subverts human cells to produce more copies of itself. In that sense, technology has an autonomous “will”, independent of the will of the people that use it.

On the other hand, humanity also profits from technology to grow itself. The human population explosion, from perhaps a million of our hunter-gatherer ancestors to billions of people in our industrialized global society, would not have happened without technology to produce and distribute food more efficiently, combat diseases, build houses and infrastructure, etc. Therefore, it would be more accurate to see the technium as a symbiotic, mutualist organism. In biology, symbiosis means that two or more organisms are living together in such an intimate way that the one depends on the other. Mutualism means that both partners equally profit from the symbiosis. This is in contrast to parasitism, where only the parasite profits, while the host suffers.

Because of this intimate connection, technology co-evolves with humanity, in a relationship of co-shaping. This idea goes back to McLuhan’s observation that we shape technology, while technology shapes us. Humanity and technology influence each other and benefit from each other. They become increasingly entwined and mutually dependent. In biology it has been observed that symbiotic organisms may eventually merge. For example, a lichen is in fact a merger of two very different types of living organisms: algae and fungi. The complex, “eukaryotic” cells that constitute the bodies of all larger living systems were created by the merger of simpler bacterial cells, some of which are still recognizable as “organelles” within the cell body. In the same way, our connection with technology becomes ever closer.

Transparent user interfaces

Technological systems will feel more natural to us when they have an effective user interface. The interface mediates between the human user and the underlying system, by presenting the capabilities that the system offers in such a way that the person can easily access, understand, and apply them. Basic characteristics of well-designed interfaces are (Heylighen et al., 2013):

• simplicity: the number of components or options that can be seen at once (e.g. on the screen or control window) is relatively small, and common actions can be performed with a small number of steps

• intuitiveness: user actions have as much as possible the effects that a naïve user would expect them to have

• transparency: functions are self-explanatory, or can be understood after a minimal investigation

• esthetics: color schemes and designs are calm, elegant and pleasant; images, sounds and movements are not coarse, grainy or jerky

• interactivity: the tool responds clearly, distinctly and immediately to different user actions. The user gets immediate feedback on the result of the action, thus confirming that the intended effect was achieved—or clearly indicating what needs to be done additionally.

• consistency: the same actions or interface elements always produce the same effects

A system that satisfies these characteristics will be easy to learn and pleasant to use. After a while it will start to feel like an extension of the self: no longer a foreign object obeying its own rules, but something that does exactly what you expect, without you needing to think or check on what it is doing. That is when you start forgetting that you are using a technological system. The next step is that the system effectively becomes part of who you are.

Technology as mediator

The Dutch philosopher of technology Peter-Paul Verbeek has developed an approach called mediation theory (Verbeek, 2015). This is rooted in the ‘post-phenomenological’ approach in philosophy of technology, which was founded by Don Ihde (Ihde, 2012). That means that it combines:

1. the philosophy of pragmatism, which is based on practical action and its effects in the real world,

2. the philosophy of phenomenology, which studies an individual’s subjective experience of that world, i.e. how that person senses and feels the phenomena in the world.

The basic idea of mediation theory is that technological systems function as mediators between people and their environment: technologies to a large extent determine how we perceive, know and act on the world.

For an illustration, see the video: https://vimeo.com/221545135

A first implication concerns epistemology, which is the philosophical theory investigating how we can acquire knowledge about the world. Technologies such as scientific instruments, satellites, telescopes, sensors, and data mining obviously have a great impact on what kind of knowledge we can acquire. An even more direct example of a technology that extends and changes our perception of the environment is so-called augmented reality—as implemented e.g. in Google glasses (Van Krevelen & Poelman, 2010). What we see of the world through such glasses is “augmented” with additional information about the things we are viewing. For example, when looking at a building, an AI system connected to the glasses that recognizes the building may add data, such as the architect, history or visiting hours, to the image we are seeing through the glasses.

Pragmatism examines how we act on the world. Here too, technologies, e.g. for surgery, make possible new interventions on the human body, while genetic manipulation allows us to change biological organisms. This has obvious implications for ethics: which kinds of actions should or should not be performed? The field of medical ethics, for example, depends heavily on what technology makes possible. For example, it did not make sense to formulate ethical rules on human cloning before cloning became technologically feasible. Another example of a newly raised issue is the ability to distinguish between conscious and vegetative states in coma patients. With new brain scanning technologies it becomes possible to estimate to what degree a coma patient still has some minimal consciousness. That obviously influences the decision about whether such a patient should be kept artificially alive or allowed to die.

The connection between humans, technology and world becomes ever more intimate. Technologies are being built into the human body, such as pacemakers or in some cases microchips. Thus, people increasingly become cyborgs: hybrids of biology and technology. A more spectacular example is an exoskeleton. This is a robotic extension of the human body that may allow paralyzed people to walk or healthy runners to run faster than they could with just their biological body, or simply to perform movements a human body cannot do. For a demonstration by the artist/performer Stelarc (see photo), you can check the following video: https://www.youtube.com/watch?v=R2MntBUwUxY&t=160s

Moreover, technologies are becoming built into the environment. For example, ambient intelligence (Verbeek, 2009) uses sensors and chips built into walls, objects, and even plants or animals to collect information and intelligently react to people's desires. The Internet of Things is a developing suite of protocols that would allow all these objects with in-built intelligence to communicate with us and other systems wirelessly via the internet. Thus, we would be able to directly affect things in the environment, even from a distance. For example, we could make sure to have coffee prepared at our home by the time we arrive, by simply giving a command via a smartphone.
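
As a purely hypothetical sketch of what such a remote command could look like in practice, the snippet below sends a “brew” command to an imagined smart-home hub over HTTP. The URL, device name and message format are invented for illustration; real Internet of Things devices each follow their own protocols (e.g. MQTT or a vendor-specific API).

    # Hypothetical example: asking a (fictional) home hub to start the coffee machine.
    import requests

    response = requests.post(
        "https://home.example.org/api/devices/coffee-machine/commands",  # invented URL
        json={"command": "brew", "cups": 2, "ready_at": "2021-10-04T08:30:00"},
        timeout=5,
    )
    print(response.status_code)  # e.g. 200 if the (hypothetical) hub accepted the command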

Technology affects our choices

Any choice made in designing a technology affects how humans and world interact. If we want these actions to have positive effects, these choices must be grounded in ethical considerations. We have to take into account deeper values that consider long-term, global, “external” effects, as well as “soft”, “ethical” values related to society and well-being. We should not let design choices be based purely on commercial interests, pleasure, and short-term thinking, or on “hard”, quantitative values, such as efficiency, effectiveness, profit, or material output.

A fundamental issue is that technology increases the number of possibilities we have for acting in the world. But an increase in available options also makes it more difficult to make good (ethical, wise, intelligent...) choices. Because of the intrinsic limitations of our brain, we cannot systematically consider thousands or millions of options in order to select the best one. In practice, therefore, technologies implicitly pre-structure the choice, by making certain options easier to choose.

This can be understood with the concept of choice architecture, proposed by Richard Thaler and Cass Sunstein (Thaler & Sunstein, 2008). The idea is that when there are many possibilities, people will not systematically consider each of them to the same degree. In practice, options tend to be presented in such a way that certain options are more obvious or easier to select, and thus more likely to be chosen. For example, in the table of contents of a handbook or catalog, the topics are presented in a certain order, from more “basic” to more “advanced”. The arrangement of items you can buy in a supermarket or in a cafeteria is not random, but organized. For example, items on promotion are positioned at eye height on a counter at the entrance of an aisle, while rarely bought items are placed on the difficult-to-reach top or bottom shelves. As a result, the most visible or easy-to-reach products tend to be preferentially picked up and put in the shopping basket or tray.

Such a structured arrangement applies in particular to technologies that offer many possible options. Thus, their user interface implicitly biases our actions towards certain uses rather than others. For example, the dashboard of a car or plane will normally be designed so that the most important or useful functions have big, easily visible buttons, levers or other controls. Software applications normally come with a number of defaults. These are the options preferred by most people. They are automatically installed unless the user explicitly overrides the default and selects some non-standard options.
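
A small sketch (with invented settings) shows how defaults quietly pre-structure the choice: whatever the user never consciously reconsiders is, in effect, decided by the designer.

    # Illustrative (made-up) defaults: they apply unless the user consciously overrides them.
    defaults = {
        "send_usage_statistics": True,   # convenient for the vendor
        "autoplay_videos": True,
        "notifications": "all",
        "theme": "light",
    }

    def effective_settings(user_choices):
        """Combine the designer's defaults with the user's explicit overrides."""
        settings = dict(defaults)
        settings.update(user_choices)
        return settings

    # Most users change little or nothing:
    print(effective_settings({}))                 # pure defaults
    print(effective_settings({"theme": "dark"}))  # one conscious override

Because overriding demands attention and effort, the default configuration is by far the most frequently “chosen” one, which is exactly the pre-structuring of choice described above.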

The predefined ordering of options is most obvious in the results of a search engine, such as Google, where the pages that are supposed to be most useful are listed first, while the options considered less relevant may only appear after several pages of clicking through. But perhaps Google’s estimate of relevance is very different from yours, and the pages most useful for you may be hidden behind masses of less good options…

When the number of possibilities is so large, some form of ordering or pre-structuring is unavoidable. But the pre-structuring can also be explicitly intended and designed to promote certain choices or uses, because this benefits the user, the producer of the technology, or some other involved party.

Mobilizing the user

Promoting certain choices is an example of what Thaler and Sunstein call nudging. Nudging means gently inciting people to do certain things, but without forcing them or explicitly rewarding them for doing so (Thaler & Sunstein, 2008). Instead, nudging uses a variety of psychological cues to make certain options more attractive. The principle is that people can still choose freely what to do. But for them to make a choice that deviates from the promoted options demands additional effort and conscious reflection. Therefore, they are more likely to follow the promoted option.

This principle has been developed most strongly in what are called persuasive technologies. These are systems that explicitly aim to influence people to act in a certain way. For example, many apps are nowadays available that motivate people to lead a healthier lifestyle, e.g. by helping them choose healthy options when buying food, or giving them feedback on how well they are doing in their program to do more physical exercise or lose weight.

Persuasive technologies will be most effective if they (Heylighen et al., 2013):

• tap into real needs (e.g., combating the dangers of obesity)

• present clear goals (e.g., realistic weight targets)

• make it easy to do what is needed (e.g. prepare healthy meals)

• give feedback about the progress made so far (e.g. compare your present weight with your initial and ideal weights)

• provide clear visualizations of potential means or ends, so that users can easily imagine the effect of their future actions (e.g. a computer-generated photo of how you would look after losing all that weight)

• make use of social pressure (e.g. by pointing out the achievements or expectations of others)

• provide timely triggers to stimulate their users to do something (e.g. alarms to remind you to exercise)

One of the most effective methods used to persuade people to act in a certain way is gamification (Deterding et al., 2011; Heylighen et al., 2013). The principle is to make the achievement of your objectives feel like a fun game. Gamification applies a variety of game mechanics (a minimal sketch of how a few of them fit together follows the list below). These are methods initially developed for computer games that help to produce a compelling, engaging experience for the game player or, more generally, the user of a system. They include:

• challenges, in which the player is incited to achieve some difficult objective,

• scores or points, to quantify all the small, step-by-step advances accumulated by the player, so that players get continuous feedback about how well they are doing

• levels, which represent larger, discontinuous transitions to a higher degree of game difficulty or status

• competitions, where players can compare their achievements with those of other players, so that they are incited to do better than they did up to now

• epic meaning, in which the impression is created that the player is working to achieve a goal that is particularly important or awe-inspiring

• trophies, in which players receive virtual presents as a reward for their achievement
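
As announced above, here is a minimal, purely illustrative sketch of how points, levels and trophies could be combined in an exercise app; the class, thresholds and rewards are invented assumptions, not taken from the text.

    # Toy gamification of an exercise app: points, levels and a trophy (all invented).
    class ExerciseGame:
        LEVEL_SIZE = 100                      # points needed per level (assumption)

        def __init__(self):
            self.points = 0
            self.trophies = []

        def log_activity(self, minutes):
            """Award points for every logged workout (continuous feedback)."""
            self.points += minutes            # 1 point per minute of exercise
            if self.points >= 500 and "500-point club" not in self.trophies:
                self.trophies.append("500-point club")   # trophy for a milestone
            return self.level

        @property
        def level(self):
            """Discontinuous transitions to a higher status."""
            return self.points // self.LEVEL_SIZE

    game = ExerciseGame()
    for workout in [30, 45, 60, 25, 90, 120, 150]:
        game.log_activity(workout)
    print(game.points, game.level, game.trophies)   # 520 points -> level 5, one trophy

Competitions and epic meaning would be layered on top of such a core loop, e.g. by comparing scores between users or framing the points as contributions to a larger cause.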

A combination of choice architecture, persuasion and gamification applied in a good user interface produces a technological system that strongly incites or motivates a person to use that system in order to make certain choices, perform certain actions, or achieve certain objectives. This is what I have called in previous research a mobilization system (Heylighen et al., 2013).

But are these options indeed the best ones? As we saw, computer games can be addictive, and the options people choose, such as watching particular commercials or visiting certain websites, can be manipulated so as to serve the objectives of a company or government. Therefore, the design of technological systems, even if they serve apparently innocent purposes such as playing games, getting news, or searching for information, necessarily involves values. Therefore, technology design cannot be separated from ethics. Let us then review basic ethical theories and see to what extent they can be applied to technology.

Towards an ethics of technology

Normative ethics

Normative ethics concerns the norms, values or criteria that we use to distinguish good from evil. These norms specify what we should do and what we should not do. An ethics of technology (“technoethics”) should formulate general norms that tell us which uses of technology should be avoided or prohibited, and which should be promoted or made obligatory. Implementing such norms is necessary to avoid the many negative effects we reviewed that technology can have on individuals, the ecosystem, and society, but also to ensure an effective deployment of the many positive effects.

One might hope that technology developers would spontaneously want to avoid such negative effects, and that therefore there would be no need to impose norms from the outside. For example, developers are intrinsically motivated to reduce consumption of resources and complexity of use, because they want to avoid unnecessary costs and to please their users.

Yet, in other cases, the developers may not be aware of negative effects, or find them too cumbersome to avoid. Another reason to impose norms on the developers may be that they do not care about the negative effects because they themselves are not affected by them. These are what we called externalities, such as pollution, waste, or noise. In some cases, the developers may actually profit from these negative effects, for example when they make their systems addictive, or too complex to repair, so that users are inclined to buy a new system rather than repair it. In all these cases, the developers and producers have little incentive to prevent these negative effects. Clearly formulated, socially accepted norms can produce such an incentive, by creating the danger of punishments, fines or reputation damage for those developers who ignore the norms, and rewards, such as subsidies or public approval, for those who apply them.

In traditional philosophy, we find different foundations for a normative ethics, proposed and elaborated by different philosophers. We will discuss those four that seem most directly applicable to technology:

• virtue ethics

• deontological ethics

• utilitarian ethics

• pragmatic ethics

Virtue ethics

Virtue ethics was formulated already in Antiquity, by Aristotle. Norms are here conceived as “virtues”. These are good qualities that we should strive to achieve for ourselves and others. These include e.g. wisdom, tolerance, altruism, serenity, non-violence, moderation... The opposite of virtues are “vices”. These are negative qualities or behaviors that we should discourage or avoid, such as envy, hate, ignorance, or selfishness.

Virtues and vices are not absolute duties but merely values to orient our behavior: nobody is perfectly wise or serene, and nobody manages never to be ignorant or envious. The norm stipulates that people should try to become more serene and less envious, not that they should always be serene and never envious.

Since these virtues were formulated for people, there seems to be no obvious application to technology. However, we could try to establish a list of “virtues” that characterize good technologies. These may include characteristics such as efficiency, robustness, transparency, adaptability, ubiquity, openness, and lack of waste products. This should be complemented by a list of technological “vices”, such as being addictive, overly complex, prone to cascading failures, or unsustainable.

Deontological ethics

Deontological ethics is probably the best-known approach to formulating norms. It was elaborated among others by the philosopher Immanuel Kant. Here, norms are formulated as absolute rules that specify what is obligatory (“duties”) and what is forbidden (“prohibitions”).

Ethical duties are obligations that apply to every person. For example, it is your duty to help people in need when you can, educate yourself and your children, and care for your family. Kant tried to generalize such concrete obligations by formulating a universal rule, known as the categorical imperative:

Act only according to that norm whereby you can, at the same time, will that it should become a universal law.

The underlying idea is to avoid different kinds of selfish behaviors that benefit only certain individuals or groups. For example, a norm such as “maximize profit for the firm” will benefit all those who have the economic means to make profit, but not those who are so poor that they have to accept any payment they can get. Another example of a supposedly universal norm is the Golden Rule, which says:

Act towards others the way you would like them to act towards you

Next to duties, deontological ethics formulates prohibitions. These are norms specifying what no one should do. E.g. you should not lie, murder, steal, or rape. A final type of deontological norm consists of rights. These specify what every individual is entitled to do or to receive. For example, the right of free speech states that an individual is entitled to express an opinion publicly. Rights imply a prohibition on others obstructing that right. For example, you should not prevent people from expressing their opinion. They also imply a duty on others to uphold that right. For example, you should protect the right of free speech of other people.

What distinguishes the deontological approach is that the value of an action depends on whether it obeys the normative rules or not. It does not depend on whether the consequences of the action are good or bad. For example, according to deontological ethics, if someone had murdered Adolf Hitler in 1940, that would have been bad, because it would have transgressed the prohibition on murder—even if this murder had prevented WWII and thus saved millions of innocent lives.

But such a priori rejection of certain actions makes deontological ethics less suited as a foundation for technoethics. The problem is that technological evolution constantly creates novel situations to which existing norms are not obviously applicable. For example, does it make sense to apply the prohibition “thou shalt not kill (a human being)” to a two-week old embryo produced in a test tube? Formulating a priori rules to specify which technologies are allowed and which are not is nearly impossible, because we are very bad at estimating the future positive or negative effects of technology, as we discussed previously. The danger is that we may prohibit a technology that eventually would have saved millions of lives on the basis of moral absolutes, such as the sanctity of human life, that were formulated in centuries when our understanding of life was very different.

On the basis of such principles, we may for example decide to prohibit the use of viruses that were engineered to remove defective genes from people with genetic diseases. Yet, at this moment it seems likely that such a technology (known as CRISPR-Cas9), when further developed, may relieve the suffering of untold millions without serious side effects. Should we stop research into this very promising treatment method on the basis of some absolute norm according to which the human genome is not to be tampered with?

Vice versa, we may give a technology free rein on the basis of a seemingly universal right, such as free speech or freedom of religion, and then discover that it creates deep problems. For example, the free, worldwide communication enabled by the Internet has been used for the recruitment of terrorists, the organization of pedophile rings exchanging child pornography via the web, incitements to violence, and the spread of sometimes dangerously false news via social media. Should the algorithms of Facebook or Twitter be adapted to filter out such noxious messages, or should they let anything pass because of the principle of free speech?

The precautionary principle

The precautionary principle is an example of a deontological rule that has been formulated specifically to apply to technology (Kriebel et al., 2001; Sunstein, 2003). Simply put, it says:

Do not develop or deploy a technology if you cannot exclude that it would create serious harm.

At first sight, this seems like a reasonable recommendation. However, in practice we can never be certain that something novel is not dangerous, because of the general unpredictability of large-scale or long-term effects. Therefore, the precautionary principle could potentially be used to stop any true innovation. The principle expects the developer to prove that the technology won’t have any harmful effects. But you cannot prove a negative, because you cannot consider the infinite number of circumstances in which the technology could potentially be deployed. Still, we know from experience that most novel technologies have more positive than negative effects. By prohibiting them a priori, because of lack of proof, we will never reap those benefits. The precautionary principle is in essence conservative, because, systematically applied, it would prevent all truly novel and thus uncertain technologies from being tested out in the field (Sunstein, 2003).

An example where the precautionary principle has been applied is genetic modification. The deployment of genetically modified organisms (GMO) is strongly restricted in Europe because of potential dangers. These include people getting ill by consuming genetically modified food, or modified genes “escaping” into the wild and disrupting natural ecosystems. Yet, after decades of use outside of Europe, no such negative effects have been observed. On the other hand, GMO technologies have proven benefits for increasing agricultural productivity, thus combating hunger, poverty and disease in poor countries dependent on agriculture. Increased productivity also helps in reducing unsustainable use of agricultural land, fertilizers and fossil fuels.

The problem is that in practice the “potential danger” of technologies tends to be estimated more on the basis of emotions, common narratives and cultural prejudices than on the basis of scientific knowledge. Any good scientist, when questioned about unknown dangers, will tell you “we cannot be certain that these do not exist”, while pointing out that, given the present state of knowledge, there is no specific reason to assume a great risk. But that inevitable uncertainty is often enough to feed fears incited by common associations.

For example, the association with atomic bombs and the “invisible killer” of radioactivity makes people particularly afraid of nuclear technology. Everybody remembers the Fukushima disaster in which a nuclear plant was destroyed by a tsunami wave. However, ten years after the disaster the official count is that some 20 000 people died by drowning caused by the wave (mostly outside of Fukushima), while one person died because of radioactive contamination in the plant. While it is likely that remaining radioactivity will kill people indirectly via an increase in cancer rates, present indications are that additional deaths would run in the hundreds rather than the thousands killed by the wave. But because waves are “natural” and easy to understand, while nuclear technology is not, there is a tendency to only focus on the dangers of the latter.

Similar fears of genetic modification, cloning or medical implants are inspired by the common “monster of Frankenstein” narrative, in which people take control of some biological system, but then their creation gets out of hand. Another popular narrative is the “takeover by the robots”, which we have seen played out in numerous science fiction movies and novels. This narrative makes people afraid of AI technologies and robots as potentially eradicating or enslaving humanity. In East Asian cultures, such as Japan, on the other hand, the more common narrative is that robots are friendly and cute, acting more like pets than like potential overlords. Therefore this technology is not met with fear, but with joy and excitement.

The subjectivity and culture dependence of such fears is further illustrated by the fact that people are not afraid of fossil fuel technologies, such as coal and gas burning energy plants, which demonstrably produce many deadly accidents (far more than nuclear technologies), people-killing smog and global warming. People are also typically not afraid of computer games that create addictions, cars that kill thousands of people every day, and the radiation from cell phones that may affect their brain. On the other hand, many people are afraid of vaccines, which have been proven to save millions of lives by eradicating most epidemic diseases, and which are undoubtedly one of the greatest technological successes ever.

Utilitarian ethics

Utilitarian ethics was developed in particular by the 19th century philosopher John Stuart Mill (West, 2004). It is a special case of consequentialist ethics, which judges actions on their practical consequences rather than on whether they obey a priori principles—the way deontological ethics does. For example, from a consequentialist point of view, killing Hitler would have been good because of the positive consequence of avoiding WWII. Such an ethics seems more immediately applicable to technology, where the consequences (attainable objectives and side effects) of an innovation are after all what is important, not the technological system (means) itself.

What distinguishes utilitarianism is that it proposes a single overall criterion to evaluate the “goodness” of a consequence: utility or happiness. Thus, its only norm or principle is:

Act so as to produce the greatest happiness for the greatest number of people.

Utility is greater when the people affected by your action become happier and/or when more people become happy. The utilitarian principle says that you should choose the action (e.g. adoption of a technology) that would produce the greatest utility. For example, a technological innovation may increase utility by reducing disease, poverty, stress, conflict or other sources of misery, at least as long as it does not produce too many negative side effects, such as pollution, that would decrease the overall happiness of humanity.

This assumes that we can determine overall utility in the form of some measurable or computable quantity. This quantity is often conceived as the sum of all the pleasures, minus the sum of all the pains, as produced by your action. However, calculating that quantity is essentially impossible for something as complex as a technological innovation: there simply are too many unknown factors, unpredictable effects, and complex interactions involved. Yet, in practice we rarely need to calculate overall utility, because we typically only need to decide which of a few options is the best. When choosing between two technologies, it is sufficient to establish which one has the highest utility. For example, the preferable one may have the same positive effects (e.g. producing light), but weaker negative effects (e.g. consuming energy).
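
To make this kind of comparison concrete, here is a minimal sketch in Python. The options, effects and numerical weights are purely hypothetical assumptions introduced for illustration; the point is only that ranking a few options by their net expected utility is feasible even when happiness cannot be computed in absolute terms.

    # Compare two hypothetical technologies by net expected utility.
    # All effect values are invented weights on a common scale.
    options = {
        "LED lamp":          {"light produced": +10, "energy consumed": -2},
        "incandescent lamp": {"light produced": +10, "energy consumed": -8},
    }

    def net_utility(effects):
        """Sum of expected positive effects minus the sum of expected negative ones."""
        return sum(effects.values())

    best = max(options, key=lambda name: net_utility(options[name]))
    print(best)   # "LED lamp": same benefit, weaker negative side effect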

Pragmatic ethics

Pragmatic ethics is most associated with the 20th century pragmatist philosopher John Dewey (Fesmire, 2003; LaFollette, 1997). Pragmatism is based on concrete actions and their practical effects, not on absolute moral or metaphysical principles. It assumes that we don't know a priori what is good or bad, and that there are no universally applicable rules, virtues, or values. The reason is that in practice the application of a rule always depends on the context, and for every rule there are exceptions.

For example, even the Golden Rule, which says you should treat others the way you want to be treated, is relative. Indeed, as a masochist who enjoys humiliation and pain you should not approach non-masochists the way you want to be approached. Kant’s categorical imperative, on the other hand, is highly abstract and not obviously applicable to unique situations. For example, how can your specific use of a new technological system—of which you don't yet know all the consequences it may have—be conceived as a universal principle that everyone should obey?

Yet, pragmatic ethical reasoning is not just relativistic. We do have values and preferences based on common sense, moral intuitions and past experience. An example of such a pragmatic norm could be:

Try not to harm other people if you can avoid it.

What distinguishes pragmatic ethics is its acceptance that norms and values can change as society evolves, and especially as our knowledge of the consequences of our actions becomes clearer.

For example, in our present Western culture homosexuality is perfectly acceptable, while in the past it was considered a deadly sin. That is because we have understood that homosexual behavior among consenting adults does not harm anybody, while it can contribute strongly to their happiness. Pedophilia, on the other hand, was largely ignored in the past, while now we know that it tends to be traumatic for the children who experience it. Therefore, according to our contemporary norms, pedophile behavior is strictly prohibited.

Pragmatic ethics assumes that any norm as to what is good or evil can only be provisional. We should be able to revise that norm when we understand its long-term social implications better. Ethical norms are best viewed as similar to scientific theories. That means that they should be based on observations, experiments and theoretical reflections, while always remaining open to revision as new data come in. But just like scientific knowledge, the result of that process is that ethical systems become increasingly reliable as more corrections are made. Therefore, there is ethical progress, just as there is scientific and technological progress.

In sum, a pragmatic ethics of technology would emphasize practical action: based on available knowledge, try out an interesting new technology and carefully monitor what the consequences are. Be ready to stop or correct the experiment if the consequences appear more negative than positive. Use the resulting experience to formulate more general norms for these kinds of technologies. But be aware that these norms may have to be reformulated later.

Side effects and dangers of ethical evaluation

The difference between the pragmatic and deontological (e.g. the precautionary principle) approaches towards technology points towards a more general problem of normative ethics applied to technology: the tendency to impose a priori norms without understanding the complexity of the situation.

People who lack the scientific or engineering background to understand advanced technological systems often feel frustrated about that. To compensate for that perceived weakness, they may be tempted to put themselves in a superior position, declaring themselves to be “ethicists” who care about more fundamental values than efficiency or robustness, and who therefore feel entitled to tell the engineers what to do. For example, they may declare that humans should not interfere with biological organisms, and therefore that genetic modification should be prohibited, without being aware that humans have interfered with biology since the early days of agriculture, and that inserting foreign genes into organisms is a natural process that has been performed by viruses for billions of years.

The pragmatic approach puts engineers and ethicists on the same level: neither of them is morally superior, or knows what the ultimate norms should be. Developing better norms is best done in collaboration and involves plenty of back-and-forth discussion. Yet, in the great majority of practical situations, you do not need a professional ethicist to tell you what to do. We all have our common-sense, intuitive understanding of ethical norms, including the engineers who design technological systems. It is only in particularly difficult, non-standard cases with potentially far-reaching implications that you may need to consult an ethical committee for advice on how to proceed with the system you are designing.

In the present culture, there is a tendency to a priori avoid any possible risk of anything going wrong, especially things about which someone afterwards could complain. To minimize such risks, each project needs to be justified, examined and evaluated so as to make sure it would not transgress any norm or create any risk. That creates a huge amount of bureaucracy, burdening researchers, designers, innovators and various review and ethical committees with endless proposals, reports, reviews, evaluations, risk mitigation strategies, ethical questionnaires etc.

For example, I recently submitted a research project on scientifically inspired art. The project said that we would interview selected artists to ask how they see their work. But a reviewer complained that I had not marked on the ethical questionnaire that the project would involve “human subjects”, and that I should submit the project to an ethical committee to make sure that the rights of these subjects would not in any way be restricted. From a pragmatic point of view, I, as a researcher, would of course not push these artists to say anything they would not be comfortable with, while I assume that the artists would be mature enough to know what they would and would not want to say to a researcher. But apparently a recorded conversation about art between two consenting adults is considered delicate enough that the way it is performed must first be approved by an ethical committee…

The problem remains that the essence of true innovation is creative exploration. That means that you do not a priori know where your research will lead, or which positive or negative consequences may come out of it. Having to justify your research strategy in every detail before you have actually started the project implies that you have to stick to safe, predefined objectives. Therefore, you won’t be able to pick up promising avenues that deviate from the plan, and thus miss the opportunity to discover something really novel (like the effect of microwave radiation on food).

In sum, not only technology but also ethical reviews can have negative side effects (such as stifling bureaucracy) and dangers (such as a priori excluding truly innovative approaches).

Towards an integrated technoethics

One way to develop an ethics of technology, or technoethics (Luppicini, 2009; Rocci, 2010), is to synthesize the principles of virtue-based, utilitarian and pragmatic ethics. This ethics would be based on values, recommendations and guidelines, rather than on absolute duties or prohibitions. A good start is to formulate values in the form of a list of technological “virtues” and “vices”. These would specify general aspects of technological systems that appear clearly positive or negative, as determined by our practical experience as well as our deeper, theoretical understanding of systems and their potential effects on human life and society. For each technology, we can then try to make these values more concrete in the form of guidelines on how to make better technologies, while preventing clear abuses, dangers or negative side effects.

The overall principle would be that developers should try to design the technology so that it maximizes the virtues and minimizes the vices. From a utilitarian perspective, that means that they should try, insofar as possible, to maximize the sum total of the expected positive consequences minus the sum of the expected negative ones. The precautionary principle would add that we should pay special attention to negatives that could potentially get out of hand and create great harm, without, however, requiring proof that such risks do not exist.

Pragmatism would add that such estimation of more or less likely positives and negatives is easier to achieve by first testing out the technology in small-scale, controllable experiments, a method similar to what is known in engineering as “rapid prototyping”. That means building a simple implementation or prototype used by real people in realistic situations, but so that the prototype is easy to modify depending on their experiences. As you get feedback on the system’s utility, enhance where possible the positives, while reducing the negatives. If this is successful, deploy the technology increasingly widely, while continuing to monitor for as yet unforeseen effects. Be ready to make corrections for unexpected abuses, or if necessary, to withdraw the system from operation. Also, make sure to check for interaction effects with other technologies or with psychological and social factors.

Ethical considerations should be an integral part of the design of technologies. That means that design criteria should not just take into account the stated intent or objective of the system, such as the transport of goods or the treatment of a certain disease, but also its likely effects on other values or people, such as pollution, dangers or potential abuse by criminals. Optimizing a system towards a single value, such as effectiveness or efficiency, is dangerous, because it is likely to make you blind to negative side effects.

Some guidelines

• clearly formulate universal values underlying the “utility” and “virtues” that you are trying to achieve

• strive for transparency about how the technology functions

o democratic ability to question any aspect of it

o ability to examine all the components of the system

• be aware of in-built biases and parasitic tendencies

o make sure these biases are positive rather than detrimental

o and that there is no tendency to create addictions

• anticipate and monitor non-obvious dangers and abuses, such as:

o potential cascading failures

o sensitivity to Black Swan effects

o fragility because of dependency on one central or critical component

o facilitation of violent or criminal activities

• control for side-effects and externalities

o make the polluter pay the costs of the pollution suffered by others

Where possible, it is good to specify who would be responsible when something specific goes wrong, because then the person or organization responsible will be motivated to avoid such problems. However, you should not insist on always finding a guilty party after something has gone wrong. The reason is that some consequences of the deployment of a system simply could not have been foreseen. Such effects emerge from a network of interacting factors that no one controls. For example, assume that systems A and B function perfectly well on their own, but go disastrously wrong when A is coupled to B. Who is responsible for that calamity?

• the designer of A

• the designer of B

• the person who coupled A to B

• or some emergent interaction that no one could have foreseen?

In such cases, there is no sense in designating a scapegoat who is blamed for the ill effect. That would not only make an innocent party suffer, but also lull the others into thinking that the problem can be reduced to this one party making a mistake, thus making them overlook the true complexity of socio-technological interactions.

Clarifying the utility of technology

According to the utilitarian principle, a technology should create more happiness for more people. But what are the most important factors that contribute to the overall happiness or well-being of people in society? Research on happiness by sociologists and psychologists has shown that while individual well-being is to some degree subjective, at the level of society there are universal requirements, such as health, safety and social participation, that are necessary to achieve happiness (Heylighen, 2020). We can conceptualize these fundamental conditions in terms of human needs: what do people need to be happy? And what does society need to be able to function so that it can satisfy these individual needs?

Thus, we can reformulate the utilitarian principle in the following, more pragmatic form:

a good technology should try to maximally satisfy the true needs of individuals and society

These include the needs of people individually, the overall needs of the social system formed by individuals and their technological extensions, and the need to coordinate the different individual and technological components. We will summarize each of these in turn.

Individual human needs

From an evolutionary, biological perspective, the most fundamental needs of an organism are survival, development and reproduction. For social animals, such as humans, that also requires being accepted and supported by the social group. These needs are realized as our biological drives and instincts. For example, to survive we need food. That need creates the feeling of hunger, which in turn drives us to eat. Abraham Maslow (Heylighen, 2020; Maslow, 1970) and other psychologists have proposed theories of human needs. Let us summarize these theories in a list of needs, while adding examples of the kind of technologies that may satisfy them (Heylighen, 2013):

Physical needs

• food: agriculture, food processing, cooking, fishing, …

• drink: plumbing, water purification, bottles and tins, …

• shelter: construction engineering, insulation, architecture, heating …

Health needs

• physical health

o protection against infectious diseases: vaccines, antibiotics, antivirals…

o prevention of degenerative diseases: drugs, training equipment, artificial organs…

o recovery from accidents: surgery, artificial limbs, blood transfusion …

• fitness (physical strength, endurance, flexibility, speed...): sports equipment, running shoes, training regimes, vibration plates…

• mental health (emotional well-being): psychotherapeutic techniques, meditation apps, antidepressants, anxiolytic drugs, …

• relaxation: beds, entertainment media, internet radio, …

Safety needs

• preventing accidents: airbags, self-driving cars, fire alarms, …

• protection against natural disasters: flood walls, earthquake-proof buildings, weather warnings, …

• preventing crime and violence: locks, computer security, burglar alarms, cameras, …

Social needs

• being able to communicate, meet, and be part of a community: telephone, email, social media, video conferencing …

• having friends and relationships: dating apps, social media…

• being respected (having reasons to feel proud and appreciated because of your achievements): websites and homepages, electronic portfolios, professional networks, …

Growth needs (developing your capabilities)

• personal expression and creativity: social media, blogs, art and music apps, photography…

• cognitive needs (education, training, information): Wikipedia, MOOCs, training apps, electronic books…

• achievement needs (feeling capable to achieve goals): various tools, project planners, productivity apps…

• spiritual development: meditation and mindfulness apps, motivational videos, …


Needs of the socio-technological system

Living systems theory was developed by the biologist and systems scientist James Grier Miller (Miller, 1965). It describes what any “living” system needs to survive and function effectively. Such systems not only comprise biological organisms, such as the human body, but also socio-technological systems, such as a city, a university or a factory. The theory is therefore applicable to the global system formed by individuals, society, and technology together.

It provides a list of critical subsystems or functions that every such “living system” must have. Therefore, the presence of these subsystems can be interpreted as needs of the socio- technological system. Let us briefly review these essential functions (Heylighen, 2007a).

Ingestor:

• general function: getting necessary resources into the system

• bodily functions: eating, drinking, inhaling

• technological functions: mining, harvesting, pumping oil, capturing sunlight...

Converter

• general function: processing resources to make them useable

• bodily functions: digestive system, lungs

• technological functions: refineries, processing plants

Distributor

• general function: transporting resources to where they are needed

• bodily functions: circulatory system, blood and lymph vessels

• technological functions: transport networks: roads, railways, electricity networks, pipelines, airports

Producer

• general function: producing new components

• bodily functions: stem cells

• technological functions: factories, machines, robots, …

Extruder

• general function: getting rid of waste products

• bodily functions: urine excretion, defecation, exhaling

• technological functions: sewers, waste disposal, smokestacks

Storage

• general function: maintaining reserves of resources/products for when they are needed

• bodily functions: fat, bones, muscles

• technological functions: warehouses, containers

Support

• general function: keeping components physically in place

• bodily functions: skeleton

• technological functions: buildings, bridges, dams, walls...

Motor

• general function: producing movement

• bodily functions: muscles

• technological functions: engines

Sensor

• general function: collecting incoming information about the environment

• bodily functions: sensory organs: eyes, ears, taste, smell

• technological functions: sensors, cameras, satellites, thermometers, ....

Decoder

• general function: interpreting incoming information

• bodily functions: perception in brain

• technological functions: data processing and data mining

Channel and Net

• general function: distributing information throughout the system

• bodily functions: nervous system

• technological functions: Internet, communication media

Associator

• general function: learning new information

• bodily functions: reinforcement of synaptic connections between neurons

• technological functions: machine learning...

Memory

• general function: storing information

• bodily functions: memory in the brain

• technological functions: the “Cloud”, databases, hard disks

Decider

• general function: choosing between different options

• bodily functions: “executive” function in the prefrontal cortex

• technological functions: recommendation algorithms, decision support systems

Effector

• general function: implementing decisions into physical actions

• bodily functions: nerves activating muscles

• technological functions: robots, remote and automatic control

Coordination needs

In addition to these specific functions, there is a need for the different parts of the socio-technological organism to function together smoothly, i.e. without misunderstandings, conflicts, frictions, delays, or confusions. Such global coordination is still far from being achieved. Yet, technological solutions are being conceived and developed.

One of the more ambitious ones is the Semantic Web. This is a developing suite of protocols to standardize categories and rules, just as the World-Wide Web proposed the HTML format to standardize documents so that every computer could read them (Berners-Lee & Fischetti, 1999). The Semantic Web uses an “ontology” of basic concepts. That is a formal classification of kinds of things, so that everybody agrees what belongs to which category. For example, such ontologies can list different chemical products depending on their molecular structure, different biological species according to biological taxonomies, diseases according to medical handbooks, or electronic system components, depending on their functions and connections. The categories are identified by distinct labels, in a format that can be understood by computers, as well as by people from different languages and backgrounds. Like that, information and requests can be shared without misunderstanding.

For example, when using a common ontology an order for a particular product will be understood in the same way by the different clients, suppliers, and website catalogs. An older example of such an ontological classification scheme is the ISBN numbering for books, which ensures that you get the right book in spite of variations in titles, printing runs, spelling of the author’s name, or publication details. A more recent example is the DOI (Digital Object Identifier), which is used to uniquely identify publications available on the web, such as scientific papers.

The Semantic Web is not only intended to unambiguously identify categories of things, but also to specify the properties and relations between different categories. For example, things in the category of books are connected to things in the categories of authors, publishers and years. Because the labeling is unambiguous, computer programs understand these properties and thus can apply formal rules to reason and answer questions, such as “Which book did author X publish in 1998, and what was its publisher?” or “Which molecules have a melting point of 143 degrees C?” or “Do penguins lay eggs?” Thus, they can search through immense repositories of data to find the exact things that satisfy certain required conditions, thus connecting any specific need or request with an offer that can fulfill that need.
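
To give a flavor of this kind of machine-readable knowledge, here is a minimal sketch in Python. It is not the actual Semantic Web technology (which uses standards such as RDF, OWL and SPARQL), and the example facts and labels are invented; it only illustrates how unambiguous (subject, relation, object) triples can be queried mechanically.

    # A toy "triple store": facts as (subject, relation, object) triples,
    # using shared, unambiguous labels, as an ontology would provide.
    facts = [
        ("Penguin", "is_a", "Bird"),
        ("Bird", "lays", "Egg"),
        ("BookX", "written_by", "AuthorX"),
        ("BookX", "published_in", "1998"),
        ("BookX", "published_by", "PublisherY"),
    ]

    def query(subject=None, relation=None, obj=None):
        """Return all triples matching the given (possibly partial) pattern."""
        return [(s, r, o) for (s, r, o) in facts
                if (subject is None or s == subject)
                and (relation is None or r == relation)
                and (obj is None or o == obj)]

    # "Which book did AuthorX publish in 1998, and who published it?"
    books_by_author = {s for (s, r, o) in query(relation="written_by", obj="AuthorX")}
    books_in_1998 = {s for (s, r, o) in query(relation="published_in", obj="1998")}
    for book in books_by_author & books_in_1998:
        print(book, query(subject=book, relation="published_by"))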

A mobilization system, as I have defined it in my research (Heylighen et al., 2013), facilitates the coordination of people who work together by inciting them to take on the most pressing tasks and efficiently dividing the labor. A simple example of such a technology is a shared list of “to do’s” that stimulates people to take on a task useful for the group. Another example is how Wikipedia articles often have a header noting what is lacking in the article, thus inviting readers to add missing information. A more elaborate version is a so-called issue queue in which members of a community or organization list problems, questions or objectives. Everyone in the group can see the list and select one or more problems they are willing and able to address. Their motivation can be enhanced by rewarding workers proportionately to the number of problems they solve.
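
As a minimal sketch of such an issue queue, consider the following Python fragment. The task names, member names and the reward rule are hypothetical; the point is only the mechanism of listing, claiming and resolving tasks.

    # A minimal issue queue: members list problems, anyone can claim and resolve them.
    issues = {"fix broken link": None, "translate page": None, "add references": None}
    solved = {}   # how many tasks each member has resolved

    def claim(task, member):
        if issues.get(task) is None:          # only unclaimed tasks can be claimed
            issues[task] = member

    def resolve(task):
        member = issues.pop(task, None)
        if member:
            solved[member] = solved.get(member, 0) + 1   # reward proportional to tasks solved

    claim("fix broken link", "alice")
    resolve("fix broken link")
    print(solved)   # {'alice': 1}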

Another very promising technological concept, developed in part in my research group, is a so-called “offer network” or “web of needs” (Heylighen, 2017b). This is a proposed Internet platform for exchanging products and services without the need for money. The idea is that everyone publicizes what they want (their needs), and what they are willing to give (their offers), using a shared ontology. The system uses intelligent algorithms to try to maximally match offers and demands across a large network of people, so that everyone in the network gives what s/he is willing to give, and receives what s/he would like to have. This is much more flexible than a barter economy, where one person may want to exchange A for B, but will only get what s/he wants if some other person is willing to exchange B for A. With an offer network, it is sufficient that someone in the network is willing to give B on some condition and someone else is willing to give something in return for A. The intelligent system then tries to find intermediate people to satisfy the needs and accept the offers, so that everyone is satisfied.
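
A toy version of such matching can be sketched in a few lines of Python. The participants, items and the simple chain-following search below are invented for illustration; the actual matching algorithms studied for offer networks are far more sophisticated.

    # Toy offer network: each person states what they offer and what they need.
    # A match is a cycle in which every participant gives their offer and receives their need.
    participants = {
        "ann":  {"offers": "A", "needs": "B"},
        "bob":  {"offers": "B", "needs": "C"},
        "cleo": {"offers": "C", "needs": "A"},
    }

    def find_cycle(start, people):
        """Follow the chain of needs until it returns to the start (or give up)."""
        chain, current = [start], start
        for _ in range(len(people)):
            needed = people[current]["needs"]
            giver = next((p for p, d in people.items() if d["offers"] == needed), None)
            if giver is None:
                return None
            if giver == start:
                return chain              # everyone in the chain can be satisfied
            chain.append(giver)
            current = giver
        return None

    print(find_cycle("ann", participants))   # ['ann', 'bob', 'cleo']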

Recent issues in the philosophy of technology

We will now briefly discuss a number of more “science fiction”-like extensions of contemporary technology that are controversial, because they seem to question our humanity, or announce radical and potentially catastrophic developments. Therefore, they are worth reflecting about, so as to better understand the underlying philosophical conceptions and ethical issues.

Becoming cyborgs

As we noted, a cyborg, or “cybernetic organism”, is a human body coupled directly to a technological extension, thus forming a hybrid between biology and technology (Clark, 2004). This technology is typically viewed as “implanted” in the body. But technological extensions are not really new. Technology has always extended our capabilities, and this often in close contact with the body. Examples of such extensions are glasses, walking sticks, shoes, hearing aids, and pacemakers. After a brief while, the users of such aids stop distinguishing between body and extension.

This happens in particular when the extension is perfectly controllable. That means that it reacts immediately and predictably, in perfect synchrony with the biological organ it connects to. For example, people don't notice their glasses or shoes, and would feel handicapped without them. Blind people using a cane “feel” obstacles before them through the tip of their cane, as if the cane extends their hand. More generally, there is an on-going evolution towards developing intimate interfaces, which minimize the physical and psychological distance between human and technological system.

A more radical example of this is a computer chip that is implanted in someone’s arm. The chip registers and reacts to electrical signals in the arm muscles. These signals are then wirelessly transmitted. That allows the person to remotely control some machine or robotic system. This technology does not add much to a more traditional interface activated by touch or voice, and is therefore unlikely to take off soon. But it provocatively illustrates the potential for becoming a cyborg, thus making people reflect more deeply about the implications. Such implants have been demonstrated in a rather sensational way by the cybernetics researcher Kevin Warwick (Warwick, 2003, 2014) and by the artist Stelarc.

(video about Warwick: https://www.youtube.com/watch?v=LW6tcuBJ6-w)

A more useful application is a system that allows blind people to “see” with their tongue or part of the skin on their back. A camera mounted on their forehead sends signals to a device placed on their tongue or back that stimulates particular patches of skin. Each “pixel” sensed by the camera stimulates a small zone of skin, and neighboring pixels stimulate neighboring zones. Using such a device, people eventually learn to “feel” the patterns of stimulation as if they were visual, thus being able to recognize objects in front of them.

Probably the most intimate connection being developed is a brain-computer interface (He et al., 2013). Here a device is used that senses a person’s neural pattern of activation, e.g. using an electro-encephalogram (EEG) to register brain waves on the skull, or, more radically, electronic sensors inserted into brain tissue. The sensed signals are interpreted by neural network software (see further), which has been trained to associate certain activation patterns with certain simple thoughts. These thoughts are converted into commands that control an external system. For example, the system may learn to distinguish between thoughts corresponding to “up”, “down”, “left” and “right”. These commands are enough to steer a cursor across a computer screen, and to click buttons or type on a virtual keyboard. Such technology is being developed for people who are paralyzed and cannot move any part of their body. That would allow them to e.g. direct a wheelchair or type text while just using their brain.
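
To illustrate the decoding step, here is a minimal Python sketch. The two-dimensional “EEG features” and the nearest-centroid classifier are invented stand-ins for the far richer signals and trained neural networks used in real brain-computer interfaces.

    # Toy brain-computer interface: map a feature vector extracted from brain signals
    # to one of four commands, using a nearest-centroid classifier trained on examples.
    import math

    training = {
        "up":    [[0.9, 0.1], [0.8, 0.2]],
        "down":  [[0.1, 0.9], [0.2, 0.8]],
        "left":  [[0.1, 0.1], [0.2, 0.2]],
        "right": [[0.9, 0.9], [0.8, 0.8]],
    }

    def centroid(vectors):
        return [sum(col) / len(col) for col in zip(*vectors)]

    centroids = {cmd: centroid(vecs) for cmd, vecs in training.items()}

    def decode(features):
        """Return the command whose training centroid is closest to the measured features."""
        return min(centroids, key=lambda cmd: math.dist(centroids[cmd], features))

    print(decode([0.85, 0.15]))   # 'up' -> e.g. move the cursor upward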

If such technologies become more sophisticated, eventually we may just have to think a command to have it executed, or we may even be able to communicate in a “telepathic” manner, directly brain to brain. However, the bandwidth of the signals produced and sensed by our bodily sensory-motor systems is still much higher than any measurements of brain activity. Therefore, interfaces operated by seeing, hearing, speaking, and moving hands are likely to remain dominant for the time being. Still, Neuralink, a neurotechnology company founded among others by Elon Musk, claims to be developing implantable, ultra-high bandwidth brain-machine interfaces to connect humans and computers.

Transhumanism

Transhumanism is a philosophy that says that we should use technology to amplify human capabilities as much as we can, even beyond what humans are biologically capable of (Bostrom, 2005; More, 2013). For example, we may extend our sensory and cognitive capabilities through implanted information technology, having the equivalent of a very powerful computer and memory in our skull. This would augment the information processing capabilities of our brain. On the biochemical level, we may develop and use drugs that increase physical capabilities and even intelligence. That is not so far-fetched given the many (legal and illegal) performance-enhancing drugs already used by athletes and students. There also exist “smart drugs” that increase blood flow in the brain or increase the availability of certain neurotransmitters needed for focus and memory, thus helping us to think more clearly. Exoskeletons, as we saw, may increase physical power.

Surgery or genetic modification may make the body function more efficiently, e.g. correcting genetic defects or protecting against diseases. Artificial or lab-grown organs may replace poorly functioning natural ones. We may soon learn to control stem cells so that they grow new tissues on command in the shape of a specific organ. Thus, any part of the body could in principle be replaced by a more powerful or efficient one. As body parts are replaced one by one by better versions, eventually nothing of the original body may remain, although the personal identity could still be maintained.

Another important theme in transhumanist thinking is that we should promote medical advances that would reduce or stop ageing. The ageing researcher Aubrey de Grey has formulated an ambitious long-term project that he calls “Strategies for Engineered Negligible Senescence” (SENS). That means developing technologies that would reduce the deterioration of the organism caused by ageing (senescence) to such a degree that it becomes negligible (De Grey & Rae, 2007). Other scientists are looking for techniques to undo the accumulated damage caused by ageing, thus rejuvenating the body. The more futuristic scenarios imagine nanobots injected into the blood stream that would repair damage at the cellular and even molecular level. A combination of such innovations may extend human life to centuries or perhaps millennia, eventually leading to immortality.

Such an “enhancement” scenario raises many fundamental questions. Would people with such capabilities still be human? Or should we rather see them, as transhumanists do, as a next stage of evolution, which is as different from Homo sapiens as we are from the apes? A related ethical question is: should we allow people to gain such capabilities? Will this not create a fundamental inequality? The danger seems real that such developments would result in an “underrace” of those who stay behind, because they do not have the means or the willingness to be technologically enhanced. Aren’t such interventions against “human nature”? Wouldn't we create a “Frankenstein” monster that gets out of control? And wouldn't drastic life extension create overpopulation because people are born but no longer die?

From a pragmatic perspective, the transhumanist scenario seems merely an (accelerated) continuation of technological evolution. Extension or enhancement is nothing really new: that is what technology has always done. For example, a spear or a rock extends our physical reach “superhumanly” far beyond the length of our arms. Writing “superhumanly” extends our memory, because it allows us to store knowledge potentially forever. The telephone “superhumanly” extends our voice, because it allows us to speak with people on the other side of the planet. Implanting such technologies inside the human body does not radically change their effect and, as noted above, is in practice rarely useful.

Reduction of ageing also merely continues an existing trend. Life expectancy has been increasing spectacularly, by about 3 years per decade over the past century, because of medical advances. But this does not in itself lead to a population explosion, because fertility drops as life expectancy increases. As a result, in the most technologically developed countries the fertility rate is below the level needed to sustain the population.

Transhumanism (sometimes abbreviated as H+) is a philosophy according to which we should strive to become better than the conventional human, using any technological means available to extend our life and capabilities. For example, the transhumanist thinker Natasha Vita-More (in collaboration with her husband, Max More) has stated that “Posthumans will be almost entirely augmented — human minds in artificial, eternally upgradable bodies.” That implies that humanistic values will have to be replaced by “transhumanistic” ones (More & Vita-More, 2013). For example, the humanists’ value of self-actualization (maximally developing or actualizing one’s human potentials) (Heylighen, 2020) should be replaced by one of self-transcendence, in the sense of developing new, superhuman potentials.

Video on transhumanism: https://www.youtube.com/watch?v=STsTUEOqP-g&t=796s

But, again, we cannot clearly distinguish between improving oneself with or without advanced technology. After all, the humanistic ideal of self-improvement through education is in practice unthinkable without the technologies of writing and the printing press. Thus, transhumanism can be seen as merely an updated version of humanism, i.e. a secular philosophy and system of values for the age of advanced technology.

Artificial intelligence

Artificial Intelligence (or AI) is a technology for building machines with a human-like intelligence. Intelligence here can be characterized as the ability to:

• solve problems and answer questions

• recognize phenomena, make sense of them, and decide about appropriate actions

• learn new patterns and rules

While these abilities are very general, in practice intelligence requires specialized knowledge and skills. AI applications therefore started with so-called “expert systems” or “knowledge-based systems”. These are programs that contain some of the knowledge of a human expert, using it to solve problems. Building such systems requires “knowledge engineering”. That means acquiring or extracting knowledge from humans, and then structuring and formalizing that knowledge so that a computer program can use it for automated reasoning. Formal knowledge is expressed in the form of propositions that specify features of things that belong to certain categories. Applying logic to these propositions allows the program to infer further propositions. For example, the given propositions that “penguins are birds” and that “birds lay eggs” allow the program to deduce that “penguins lay eggs”. That means that it can correctly answer the question “does a penguin lay eggs?” with the answer “yes”.
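
The following minimal Python sketch illustrates this kind of rule-based inference. The miniature knowledge base, the single rule and the naive forward-chaining loop are simplifications invented for illustration; real expert systems use far richer representations and inference engines.

    # Minimal sketch of an "inference engine" applying a logical rule to propositions.
    facts = {("penguin", "is_a", "bird")}
    rules = [
        # if X is a bird, then X lays eggs
        (("?x", "is_a", "bird"), ("?x", "lays", "eggs")),
    ]

    def infer(facts, rules):
        """Repeatedly apply the rules until no new propositions can be deduced."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for condition, conclusion in rules:
                _, cond_rel, cond_obj = condition
                _, concl_rel, concl_obj = conclusion
                for (s, r, o) in list(derived):
                    if r == cond_rel and o == cond_obj:       # the condition matches this fact
                        new_fact = (s, concl_rel, concl_obj)  # "?x" is bound to the subject s
                        if new_fact not in derived:
                            derived.add(new_fact)
                            changed = True
        return derived

    kb = infer(facts, rules)
    print(("penguin", "lays", "eggs") in kb)   # True: "does a penguin lay eggs?" -> yes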

Thus, it seems that in order to build an intelligent program, you merely need to provide it with a huge number of propositions describing all the relevant knowledge and an “inference engine” that can make logical deductions from these propositions. In practice, however, knowledge acquisition from humans is very difficult and time consuming. That is because people most of the time do not reason in terms of logical rules, but on the basis of their intuition. Such intuitive knowledge is very difficult to formulate in a manner sufficiently explicit for a machine to be able to perform logical inferences with it.

Therefore, contemporary AI mostly uses an alternative approach: machine learning. The idea is to let the computer learn its own knowledge by finding recurrent patterns in the data it is fed with. These data typically do not have logical structures, but they exhibit statistical regularities: some things frequently go together with other things. For example, by analyzing huge amounts of text available on the Internet, the AI program may note that phrases like “penguins are fish-eating birds” and “the bird lays its eggs” occur frequently, and thus learn to associate penguins with birds and birds with eggs, hopefully concluding at some stage that penguins do lay eggs.
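
A toy illustration of this statistical approach is sketched below in Python. The two-sentence “corpus” is invented and the method (simply counting which words occur together in a sentence) is a drastic simplification of real machine learning, but it shows how associations can be learned without any logical rules.

    # Learn word associations from co-occurrence in text.
    from collections import Counter
    from itertools import combinations

    corpus = [
        "penguins are fish eating birds",
        "the bird lays its eggs in a nest",
    ]

    cooccur = Counter()
    for sentence in corpus:
        words = set(sentence.split())
        for pair in combinations(sorted(words), 2):
            cooccur[pair] += 1            # count how often two words appear together

    # Words that frequently occur together become associated:
    print(cooccur[("birds", "penguins")], cooccur[("bird", "eggs")])   # 1 1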

Machine learning techniques are the basis of what is known as data mining: extracting potentially useful patterns from huge amounts of so-called “Big Data”. Thanks to the Internet, the ubiquitous use of computers for data storage, and input from an ever-growing number of sensors, we are provided with an exponentially growing quantity of data about everything going on in the world. The more data, the more patterns can potentially be extracted. This requires the use of complex mathematical methods. We will discuss the most common method, deep learning, in the section on neural networks.

The knowledge produced is fundamentally quantitative—expressed in terms of probabilities, preferences or associations—and not qualitative—expressed in terms of explicitly defined properties and relations. The disadvantage is that such knowledge learned from unstructured data typically lacks a logical structure. For example, the program is more likely to conclude that penguins and eggs are associated than that “all penguins lay eggs”. Therefore, true logical inferences cannot be made, and the answers you get to your questions are likely to be approximate, or not fully reliable.

Are AI programs truly intelligent?

A fundamental problem in AI is the following: when can you conclude that a computer program has reached a similar level of intelligence as a human? Rather than trying to answer this question theoretically, you could perform a test: can you distinguish a human from an artificial intelligence? In order to avoid all biases, the only fair test is to blindly have a conversation with some intelligent entity without knowing whether it is human or artificial. For example, you could type statements and questions into a computer terminal, and then decide on the basis of the replies you get whether there is a human on the other side of the line or an AI program. If the AI program succeeds in the test, i.e. if people cannot distinguish it from a human, then one might conclude that it exhibits true intelligence. This is known as the Turing test, after the computer science pioneer Alan Turing, who first proposed it.


In practice, however, it is not so difficult to fake common human responses to written questions without being very intelligent. Some existing AI programs commonly pass short, 10-minute Turing tests: most people cannot distinguish their answers from those of actual people. That has more to do with the fact that people are easily fooled by natural-sounding reactions than with any deep reasoning behind the program. For example, already in the 1960s there was the famous Eliza computer program that simulated a psychotherapist whose conversation largely consists of asking questions and restating what the person said before. For example, if you said, “My father does not understand me”, it would respond with “So, you think your father misunderstands you. Please, tell me more about that”. Most people assumed that Eliza was an actual person. Contemporary chatbots have a much larger command of language and common knowledge and are therefore even more likely to be taken for real people, although their understanding is still very limited.
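
The core trick behind Eliza-style programs can be sketched in a few lines of Python: shallow pattern matching and restating, with no understanding at all. The single pattern and the canned replies below are invented for illustration; the original Eliza had a larger, but similarly mechanical, set of rules.

    # A minimal Eliza-style responder: pattern matching and restating, nothing more.
    import re

    def eliza(statement):
        match = re.match(r"my (\w+) does not understand me", statement.lower())
        if match:
            return ("So, you think your " + match.group(1) +
                    " misunderstands you. Please, tell me more about that.")
        return "Please, tell me more."

    print(eliza("My father does not understand me"))
    # -> So, you think your father misunderstands you. Please, tell me more about that.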


The Chinese room thought experiment is a famous criticism of AI and the Turing test proposed by the philosopher John Searle (Searle, 2006). Imagine a person in a closed room with whom you can communicate by passing in messages written on pieces of paper and then receiving answers in the same way. The person has a very extensive book of instructions that tells him how to assemble Chinese characters in response to the sentences in Chinese characters that are passed into the room. Assuming the book contains appropriate rules on how to interpret sequences of Chinese characters, the output of the room may look like intelligent answers to questions asked in Chinese. However, the thought experiment assumes that the person in the room does not understand Chinese. According to Searle, AI programs that pass the Turing test are equivalent to Chinese rooms: while they may seem to give appropriate answers, they don't have any real understanding of what they are doing: they merely mechanically apply rules.

AI theorists reply that the intelligence of such a room would be neither in the book of rules nor in the person assembling the answers, but in the emergent system formed by both working together: it is the whole system (the room) that counts, not its components separately. Similarly, you cannot blame a computer processor or a list of computer instructions for not understanding a conversation, as long as together they can provide intelligent answers. But this thought experiment is of course purely hypothetical: in practice, a Chinese room cannot be built, because we could never codify all the rules necessary to hold an intelligent conversation in Chinese in a book of instructions.

Let us investigate the shortcomings of AI more concretely. Traditional AI programs use rules (e.g. mathematics, grammar and logic) to manipulate symbols (e.g. characters, words or numbers) in order to generate and interpret expressions (e.g. questions) that are made out of combinations of symbols. They use their logic to infer new expressions (e.g. answers) from the expressions they are given. However, they don't know what a symbol means or stands for: symbols for them are in essence just lists of 1’s and 0’s that need to be processed according to a particular procedure into some different list.

The underlying difficulty was analyzed by the cognitive scientist Stevan Harnad as the symbol grounding problem (Harnad, 2002). For computer programs, symbols are purely formal or abstract labels: they are not “grounded” in reality. The AI system has no “experience” of the real world. Therefore, it does not really know what a symbol means. For example, it may know that the symbol “blue” belongs to the symbolic category “color”, and that it is a property of another symbolic category called “sky”. Nevertheless, it has never actually seen a blue color, and therefore cannot imagine how a person would experience the sensation of blueness.

However, that does not imply that we can never create an artificial intelligence. To tackle this problem, we may build so-called situated and embodied agents (Heylighen, 2020). Such agents could be autonomous robots that can perceive their situation via their sensors (e.g. a camera registering colored scenes) and act upon it via effectors (e.g. robot hands manipulating colored objects), while pursuing their goals. Thus, thanks to their physical body, they could experience the real world by interacting with it. They would learn how it functions from the feedback loop between perception and action. Thus, they could discover the meaning of their internal, symbolic representations.
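
The perception-action feedback loop of such a situated agent can be sketched minimally in Python. The toy environment, percepts, actions and reward signal below are all invented; the sketch only shows how repeated interaction lets an agent learn, from feedback, which action fits which perceived situation.

    # A minimal situated agent: perceive, act, and learn from the consequences.
    import random

    actions = ["move_left", "move_right"]
    value = {}                     # learned value of (percept, action) pairs

    def sense():
        """Toy perception: an object appears on the left or on the right."""
        return random.choice(["object_left", "object_right"])

    def feedback(percept, action):
        """Moving towards the perceived object succeeds (1.0); moving away fails (0.0)."""
        return 1.0 if action.split("_")[1] == percept.split("_")[1] else 0.0

    for step in range(500):
        percept = sense()                                    # perceive the situation
        action = max(actions, key=lambda a: value.get((percept, a), random.random()))
        reward = feedback(percept, action)                   # act and sense the consequence
        old = value.get((percept, action), 0.0)
        value[(percept, action)] = old + 0.1 * (reward - old)   # learn from the feedback

    print(value)   # the agent has learned which action fits which perceived situation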

Nevertheless, great difficulties remain. Robotic sensors and effectors are still very clumsy compared to our human sensory organs and muscles. Therefore their experience and learning is much less refined than our human experience. Moreover, goals need to be preprogrammed into the robot. Therefore, such a robot is not really autonomous. It lacks the extremely sophisticated system of values of a human, which is the product of billions of years of biological evolution, millennia of social evolution, and decades of personal experience.

In conclusion, AI has made great advances in tackling various practical problems, producing impressive programs that can do a variety of things that until recently seemed impossible to achieve without human intelligence. Yet, for the time being there seems to be no clear prospect of building a truly human-like, autonomous intelligence.

Neural networks

In part because of the shortcomings of symbolic methods, present-day AI technologies mostly rely on a very different type of technique, commonly known as deep learning neural networks. Artificial neural networks are inspired by the functioning of the biological neural networks in our brain, which consist of neurons (nodes of the network) linked by synapses (connections between neurons/nodes). They learn in a non-symbolic manner by reinforcing successful connections (those that contributed to a correct answer), while weakening unsuccessful ones (those that contributed to a wrong answer). In this way, the network becomes increasingly effective in solving the problems posed to it.

“Deep” networks consist of many “layers” of neurons that successively process the input information (the problem posed to the network), with the last, output layer providing the proposed solution. The information in consecutive layers corresponds to increasingly abstract interpretations of the input. For example, a neural network for recognizing objects in an image may start from elementary data as input, such as the color of the pixels in the image, then find the edges separating zones with different colors so as to recognize lines, move on to recognize simple shapes, such as circles or triangles, then more complex types of objects, such as a table or a flower, up to more abstract categories, such as furniture or plants.
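
The layered structure itself can be sketched in a few lines of Python with NumPy. The layer sizes and the random weights below are invented for illustration; in a real deep network the weights are learned from training data, and the architecture is far more elaborate.

    # Minimal sketch of a layered ("deep") network: each layer transforms the output
    # of the previous one into a more abstract representation.
    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [784, 128, 32, 10]        # e.g. pixels -> edges -> shapes -> object classes
    weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x):
        """Pass the input through all layers, applying a non-linearity at each step."""
        for w in weights:
            x = np.maximum(0, x @ w)        # ReLU activation
        return x

    image = rng.random(784)                 # a fake 28x28 image, flattened
    print(forward(image).shape)             # (10,) -> one score per output category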

Such deep neural networks can learn to recognize patterns in very complex, fuzzy and ambiguous data. For example, they can be trained to recognize people from different photos of their face, classify paintings depending on the style (e.g. impressionist, abstract, baroque, …), or recognize words and interpret sentences in recordings of speech. However, they are not limited to patterns that people are good at recognizing. They can also discover hidden trends, for example that certain categories of people (e.g. gay, middle-aged men) tend to like certain types of movies or buy certain types of goods, or that people with particular combinations of symptoms and lifestyle factors are likely to develop certain rare diseases.

Once a network has been trained to effectively recognize a certain type of pattern, a complementary network can be trained to generate patterns of that type so that they are recognized by the first network. Such networks have succeeded in generating novel artworks that are virtually indistinguishable from works created by human artists. The danger is that this can be abused to produce fake photos, videos, or recordings that are nearly indistinguishable from real ones.

In spite of these impressive capabilities, such networks still cannot compare to human intelligence. The reason is that they are not autonomous: they need to be trained with given data and rewarded for learning particular responses. They cannot decide for themselves what is important. They cannot move about in order to collect interesting experiences on their own. They are still technological supports that automate some of the functions of our brain, but they cannot replace a human as a whole.

Such deep learning networks raise a number of ethical issues. One problem is that networks that learn from biased data will necessarily develop a biased view. For example, they may learn to associate being black with being criminal, or being Muslim with being a terrorist.

The problem is that we don't know what kind of rules these networks learn. The reason is that their processing mechanism is distributed across millions of artificial “neurons” and their connections. All these parts working together come to a certain conclusion, however, without us being able to single out the most important connections. Therefore, we cannot reconstruct how they made their decision. The advantage of the older, symbolic AI is that its assumptions, logic and rules are clear and explicit. Therefore, we can in principle find out why it came to the wrong conclusion, and remedy the error. But that is no longer the case with AI based on neural networks.

If such networks come to unethical conclusions or recommendations, then who is to blame? Does the fault lie with the designer, the available data, the selection of the data, the structure of the network, or perhaps the learning algorithms used? Or was it just a fluke, and will the network come to more acceptable conclusions in other situations? Here we recognize the general problem of the lack of transparency and predictability that characterizes the most advanced technologies.

Mind Uploading

The idea of “mind uploading” is to somehow extract the mental content of a particular person, including their knowledge, feelings and personality, from that person’s brain, and turn it into a computer program that would be equivalent to that person (Blackford & Broderick, 2014; Hauskeller, 2012; Wiley, 2014). That means that questions asked to that program would be answered exactly as if answered by the real person, or that the program would react in the same way as the person to any situation that it “experiences”. This program can therefore be seen as a “virtual copy” of that person.

Being separate from the body, this program can be “uploaded”. That means that the corresponding information structure is transferred to and stored in a very powerful, distributed computer memory, similar to what is commonly known as the “cloud”. Thus, the virtual person would “live” in a purely digital realm. There it may move about and do things in a virtual reality environment, using a virtual body. This virtual person may still be coupled to the real world via communication links, sensors and effectors, so that it can interact physically with the world without having a biological body. For example, the program may control a robotic body that can move about similarly to a real human being.

While this may seem like a purely speculative idea, proponents of mind uploading have been reflecting on concrete methods to achieve it. The main idea is to measure very precisely the organization of a person’s brain, and then design an equivalent neural network on a computer. In this artificial neural network, every actual neuron and synapse in the brain would have an equivalent virtual neuron or synapse in the simulation. Thus, the neural network would exactly duplicate the functioning of the person's brain. That means that it would react in the same way to the same input: the situations perceived by the brain, such as questions asked, visual patterns seen, bodily movements felt or voices heard.
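
As a purely illustrative sketch of this “one virtual neuron per real neuron” idea, the toy simulation below (in Python with NumPy) wires a hundred simplified leaky neurons together through a random synapse matrix. All numbers are invented; an actual emulation would need on the order of a hundred billion neurons, vastly more synapses, and far more realistic neuron models.

# Toy sketch of simulating neurons and synapses in software; sizes and constants
# are invented for illustration and bear no relation to a real brain map.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
synapses = rng.normal(0.0, 0.5, (n_neurons, n_neurons))  # virtual connection strengths
potential = np.zeros(n_neurons)                           # virtual membrane potentials
threshold, leak = 1.0, 0.9
external_input = rng.random(n_neurons) * 0.2              # stands in for perceived situations

for t in range(1000):
    spikes = (potential > threshold).astype(float)         # which virtual neurons fire this step
    # Reset the neurons that fired, let the rest decay, and add incoming signals.
    potential = leak * potential * (1.0 - spikes) + synapses @ spikes + external_input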

We must first note that such precise measurement of something as complex as the brain still lies far in the future. Still, there are already a number of scientific projects, such as the “Blue Brain” project, that have started mapping the human brain in ever more detail. But the complexity of the brain, with its roughly 100 billion neurons and about 10 000 times as many synapses, is so staggering that these maps for the time being remain far too coarse to simulate actual thinking processes.

Let us then reflect on simpler methods to build a copy of a human personality (Turchin, 2018). For example, assume that all the things you perceive and do throughout your life are monitored by your smartphone or personal computer. An AI learning program should be able to learn from those data what your habitual way of reacting and thinking is. After a while, it would know you so well that it could function as your “avatar” or “software agent”, i.e. your virtual representative. For example, your software agent could reduce your workload by answering email questions in your stead. The better the agent gets to know you, the less distinguishable its responses will become from yours, until the agent can fully replace you when you are not available.
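
A minimal sketch of such a “software agent” is a retrieval system that answers a new message by imitating your most similar past reply. The example emails below are invented, and a real agent would of course learn from years of data rather than three messages; the code only illustrates the principle.

# Toy retrieval-based "avatar" that reuses the owner's closest past reply.
# All messages are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_questions = [
    "Can we move our meeting to Friday?",
    "Where can I find the course slides?",
    "Will you attend the conference in June?",
]
past_replies = [
    "Friday works for me, same time.",
    "The slides are on the course web page.",
    "Yes, I plan to attend and give a talk.",
]

vectorizer = TfidfVectorizer().fit(past_questions)

def avatar_reply(new_question: str) -> str:
    # Answer in the owner's stead by imitating their most similar previous reply.
    sims = cosine_similarity(vectorizer.transform([new_question]),
                             vectorizer.transform(past_questions))
    return past_replies[sims.argmax()]

print(avatar_reply("Could we shift the meeting to Friday afternoon?"))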

However, perfectly learning your way of thinking just by monitoring what you do still seems far too difficult for present machine-learning technologies. Let us then make the problem easier still. An even simpler method for creating a virtual personality would start by programming a general, all-round AI agent with typically human knowledge and emotions. (This is a problem that AI researchers have already started to address, though they haven’t quite developed a convincing solution yet.) Given such a “generic” virtual person, you could then adjust its knowledge and personality traits to better match yours, for example making it more introverted than average, interested specifically in philosophical questions, while knowing a lot about chemistry in general, and about the job you are doing at this moment in particular. The resulting agent’s reactions may not be identical to yours, but similar enough to represent you to people who do not know you intimately. So it could, for example, function as a “chatbot” that answers common questions about your work and interests.
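
One could picture this “tuning of a generic agent” as nothing more than adjusting a set of parameters. The sketch below is purely hypothetical: the trait names, scales and values are invented for illustration and do not correspond to any existing system.

# Hypothetical sketch of personalizing a "generic" virtual person by adjusting traits.
from dataclasses import dataclass, replace

@dataclass
class VirtualPersona:
    introversion: float = 0.5            # 0 = very extraverted, 1 = very introverted
    interests: tuple = ("general knowledge",)
    expertise: tuple = ()

generic_agent = VirtualPersona()

# Shift the generic defaults toward one particular individual.
my_avatar = replace(
    generic_agent,
    introversion=0.8,
    interests=("philosophical questions", "technology and society"),
    expertise=("chemistry", "my current job"),
)
print(my_avatar)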

A major reason for the interest in mind uploading is that such a virtual copy of you could still survive after your biological death. For example, your grandchildren could still talk with the avatar of their long deceased grandparent, and thus profit from your wisdom.

But to what extent would an “uploaded” personality still be a human being? As we noted, in principle the upload could control sensors and effectors and thus interact with the real world by using a robotic body. However, the subtlety and sensitivity of the phenomena experienced by such an avatar are likely to be much smaller than those experienced by a biological person. Moreover, the link with the biochemical processes that make up human life would be lost.

From a philosophical point of view, mind uploading implicitly assumes Cartesian dualism, i.e. the idea that mind and body are completely different kinds of things, which can be cleanly separated (Heylighen, 2020). In our contemporary science and philosophy of mind and brain, such separability is considered highly unrealistic: it is becoming more and more clear that the biological body plays a crucial role in our thinking and feeling.

Another interesting speculation around mind uploading has been called the “rapture of the nerds”. Rapture denotes a mystical experience of being transported into a spiritual realm, commonly applied to the second coming of Jesus Christ, when true believers are expected to rise up to join him in heaven. Indeed, some theorists interpret these developments in a quasi-religious way, with uploaded minds as the equivalents of souls or spirits, which are immortal and independent of material limitations, and with the virtual world or cloud as the equivalent of Heaven. Their reasoning is that the digital realm could be programmed so that virtual personalities are intrinsically blissful. Thus, mind uploading would be a technological version of religious visions of perfection.

The physicist Frank Tipler has even literally reinterpreted the Christian idea of resurrection as an eventual reconstruction and uploading of the personalities of all dead people from the physical traces they left in the world (Tipler, 1994). The idea is that all events, including the thoughts and movements of people, leave some scattered, minuscule traces in the physical world. These are much too weak for our present technology to sense. Yet, Tipler claims that a future superpowerful AI should be able to do this, and thus recreate everybody who ever lived in an all-encompassing digital realm.

From a pragmatic point of view, it is extremely implausible that a long-dead person could be accurately “reconstructed” by any future technology. One reason is the second law of thermodynamics, which says that the inevitable increase in entropy will make any information left in physical traces dissipate, until nothing usable is left. Moreover, there does not seem to be a good reason for any future technological system to reconstruct all people who ever lived. On the other hand, it would be interesting to build a simulation of some well-known personalities, so that you could have a conversation with, for example, a virtual Einstein or Van Gogh.

The global superorganism

The last issue in the philosophy of technology I wish to present is one in which I have done a lot of my own research: the idea that humanity and its symbiotic technologies would eventually merge into a single organism. This organism has been called the global superorganism (where the biological concept of a superorganism refers to an organism whose components are organisms themselves) (Heylighen, 2007a). Related concepts focusing on the mental aspects of this superorganism are the global brain (the “brain of the Earth”) and the noosphere (the sphere of mind or thought that envelops the Earth).

This idea is older than it may seem. Similar concepts were already proposed in the 1930s by the following visionaries:

• the British science fiction author and political activist H. G. Wells, who named it the “World Brain” (Wells, 1937),

• the Belgian founder of information science, Paul Otlet, who anticipated a World-Wide-Web-like technology for accessing such a universal knowledge network (Otlet, 1935), and

• the French paleontologist and theologian Pierre Teilhard de Chardin, who called it the “Noosphere” (Shoshitaishvili, 2021; Teilhard de Chardin, 1959). Teilhard further predicted that humanity would unite and develop God-like capabilities in his version of the Singularity, which he called the “Omega Point”.

The idea was revived in the 1980s with the emergence of the Internet, and especially in the 1990s with the birth of the World-Wide Web: technologies that seemed capable of concretely realizing these speculations about world-spanning networks of communication, collaboration and thought (Heylighen, 2008, 2017a; Heylighen & Lenartowicz, 2017).

In addition to my own more technical papers, several easy-to-read books review these ideas, including:

• Joel de Rosnay: “Symbiotic Man” (De Rosnay, 2000)

• Gregory Stock: “Metaman” (Stock, 1993)

• Peter Russell: “The Global Brain Awakens” (Russell, 2008)

In 2012, my collaborators and I founded the Global Brain Institute at the VUB, thanks to a large grant financed by the Milner Foundation (Yuri Milner is a billionaire who acquired his fortune by investing in social media such as Facebook before these became so dominant). The aim of our project was to develop a mathematical model of distributed intelligence, as well as future scenarios for the long-term effects of human-technology symbiosis. The “offer network” system of intelligent distributed coordination mentioned earlier was one of our results (Heylighen, 2017b). Presently, our research group is supported by the Kacyra Foundation to develop a more scientific foundation for Teilhard’s speculative theory of the noosphere, and thus help humanity make sense of the “singular” transformations that are taking place in our technological society.

Without going into the technical details, let me summarize the main ideas of this vision. First, the symbiosis between people and their technological systems is becoming so intimate that these biological and artificial organisms can no longer live independently of each other. Together, they form an increasingly coherent and integrated “superorganism”. This superorganism is global, because globalization has made all the different countries and social systems interdependent to such a degree that none of them can still go it alone (Heylighen, 2007a). The resulting planetary superorganism is a “living system”, as defined by the theory of Miller that we reviewed. That means that it includes all the critical subsystems we listed, for processing matter, energy and information.


The information-processing functions of the global superorganism, such as sensor, memory, and decider, define the “global brain”. This plays the role of a nervous system for the superorganism. It consists of all people, computers and databases, together with the network links that interconnect them. This global brain learns as new knowledge is discovered and new connections are made, thus becoming ever smarter. The global brain also thinks, as problems are collectively analyzed and decisions are made.

Its intelligence is not centralized but distributed over all its components, human as well as technological (Heylighen, 2013). No single agent is in control. Information processing happens in parallel: billions of human and technological agents simultaneously add data, find patterns and make decisions, making most of their results available to others via the Internet, so that these others can build further on them.

This form of distributed processing is similar to what happens in a neural network, where billions of neurons are simultaneously active, passing signals to each other, while collectively “deciding” what a perceived situation means, or what should be done about it. On the other hand, it does not suffer from the lack of autonomy and embodiment that characterizes symbolic AI and artificial neural networks. Indeed, both the human components of the global superorganism and their technological extensions into the physical world are embodied: they can sense and act upon a real environment, which provides them with concrete feedback. Therefore, they can learn from experience and correct their mistakes.
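
The flavour of such distributed, parallel processing without central control can be conveyed by a toy sketch: many simple “agents” repeatedly consult a shared medium, and whoever finds an improvement leaves it there for the others to build on. The shared dictionary, the numerical “problem” and all values below are invented for illustration only.

# Toy sketch of distributed problem solving via a shared medium, with no agent in charge.
import random

random.seed(1)
shared_medium = {"best_guess": 0.0}   # stands in for the Internet / shared knowledge
TARGET = 42.0                         # the invented "problem" the collective is solving

def agent_step(medium):
    # Each agent proposes a small variation on the current collective result,
    # and keeps it only if it improves on what is already there.
    proposal = medium["best_guess"] + random.uniform(-1.0, 1.0)
    if abs(proposal - TARGET) < abs(medium["best_guess"] - TARGET):
        medium["best_guess"] = proposal   # leave the improvement for others to build on

for _ in range(100_000):                  # billions of agents acting in parallel in reality
    agent_step(shared_medium)

print(shared_medium["best_guess"])        # converges toward the target without central control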

That is why a global brain would be intrinsically much more intelligent than any intelligent computer program, even in the highly advanced versions conceived by Singularity thinkers. Indeed, such a computer program will in any case be wholly dependent on the surrounding socio-technological system to provide it with data, tell it what to do, and implement its decisions. Therefore, its local, specialized intelligence, however advanced, will merely perform a supporting role within the encompassing intelligence of the global superorganism. That is exactly what you would expect from an individual technological system: playing a supportive or symbiotic role within the larger socio-technological organism.

Critics of this conception fear that a global brain would function like a totalitarian world government: taking absolute control of individuals, suppressing their freedom and diversity, so that everyone would think and behave the same (Heylighen, 2007a; Heylighen & Lenartowicz, 2017). However, scientific theories of distributed systems and collective intelligence point in the opposite direction: the intelligence of such a system can be maximized only by increasing the diversity and intellectual autonomy of its components. Components that all think the same are collectively just as stupid as a single component.

A more relevant criticism is that global society is as yet far from integrated: there are too many conflicts and misunderstandings between individuals and countries to see it as a coherent organism. Yet, while political and cultural tensions are what attract most attention, most of the economic, social, technological and scientific activities across the globe are remarkably well coordinated. They perform all the critical functions for the superorganism, while dependably providing us with the food, shelter, health care, social support and education that we need, even in exceptionally difficult circumstances such as a global pandemic. In spite of the focus of the news media on conflict, even war and violence are gradually declining at the global level (Pinker, 2011).

Thus, we may hope that further social and technological advances will eventually resolve these remaining problems as well as the many novel side effects and dangers of technology that we reviewed. A deeper understanding of the complex relations between technology and society can only help us in achieving this ideal. And then the “Return to Eden” scenario (Heylighen, 2015) may no longer appear so utopian…

References

* = denotes general, non-technical works that are recommended reading for those wishing to further explore the main topics in this course

*Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. Simon and Schuster.

*Barrett, D. (2010). Supernormal stimuli: How primal urges overran their evolutionary purpose. W.W. Norton & Co.

Baudrillard, J. (2000). Simulacra and simulations. na.

Beigi, S. (2015). Mindfulness Engineering: A Unifying Theory of Resilience for Volatile, Uncertain, Complex and Ambiguous (VUCA) World [Ph.D., University of Bristol].

Beigi, S. (2019). A Road Map for Cross Operationalization of Resilience. In S. I. S. Rattan & M. Kyriazis (Eds.), The Science of Hormesis in Health and Longevity (pp. 235–242). Academic Press. https://doi.org/10.1016/B978-0-12-814253-0.00021-8

Bennett, N., & Lemoine, J. (2014). What VUCA Really Means for You (SSRN Scholarly Paper ID 2389563). Social Science Research Network. http://papers.ssrn.com/abstract=2389563

*Berners-Lee, T., & Fischetti, M. (1999). Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor. Harper San Francisco. http://portal.acm.org/citation.cfm?id=554813

Blackford, R., & Broderick, D. (2014). Intelligence Unbound: The Future of Uploaded and Machine Minds. John Wiley & Sons.

Bonifati, G. (2013). Exaptation and emerging degeneracy in innovation processes. Economics of Innovation and New Technology, 22(1), 1–21.

Bostrom, N. (2005). A history of transhumanist thought. Journal of Evolution and Technology, 14(1), 1–25.

Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15–31.

Brodie, R. (1996). Virus of the Mind: The New Science of the Meme. Integral Press.

*Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.

*Carr, N. G. (2011). The shallows: What the Internet is doing to our brains. W.W. Norton.

Carrera-Bastos, P., Fontes-Villalba, M., O’Keefe, J. H., Lindeberg, S., & Cordain, L. (2011). The western diet and lifestyle and diseases of civilization. Research Reports in Clinical Cardiology, 15. https://doi.org/10.2147/RRCC.S16919

Chou, T.-J., & Ting, C.-C. (2003). The role of flow experience in cyber-game addiction. CyberPsychology & Behavior, 6(6), 663–675.

*Clark, A. (2004). Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence (1 edition). Oxford University Press.

Compernolle, T. (2014). Brain Chains: Discover Your Brain, to Unleash Its Full Potential in a Hyperconnected, Multitasking World (1st ed.). Compublications.

Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. Harper & Row.

De Grey, A., & Rae, M. (2007). Ending aging: The rejuvenation breakthroughs that could reverse human aging in our lifetime. St. Martin’s Press.

*De Rosnay, J. (2000). The Symbiotic Man: A new understanding of the organization of life and a vision of the future. McGraw-Hill. http://pespmc1.vub.ac.be/books/DeRosnay.TheSymbioticMan.pdf

Deterding, S., Sicart, M., Nacke, L., O’Hara, K., & Dixon, D. (2011). Gamification. Using game-design elements in non-gaming contexts. Proceedings of the 2011 Annual Conference Extended Abstracts on Human Factors in Computing Systems, 2425–2428.

*Diamandis, P. H., & Kotler, S. (2012). Abundance: The Future Is Better Than You Think. Free Press.

Drexler, E. K. (2013). Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization. PublicAffairs.

Dueñas-Osorio, L., & Vemuru, S. M. (2009). Cascading failures in complex infrastructure systems. Structural Safety, 31(2), 157–167. https://doi.org/10.1016/j.strusafe.2008.06.007

Eden, A. H., Søraker, J. H., Moor, J. H., & Steinhart, E. (2013). Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer-Verlag New York.

Ellul, J. (2018). The technological system. Wipf and Stock Publishers.

Fesmire, S. (2003). John Dewey and moral imagination: Pragmatism in ethics. Indiana University Press.

Folke, C., Carpenter, S., Elmqvist, T., Gunderson, L., Holling, C. S., & Walker, B. (2002). Resilience and sustainable development: Building adaptive capacity in a world of transformations. AMBIO: A Journal of the Human Environment, 31(5), 437–440.

Fuller, R. B. (2019). 33. Ephemeralization. In Nine Chains to the Moon (pp. 267–270). Birkhäuser. https://www.degruyter.com/document/doi/10.1515/9783035617764-035/html

Greenman, J. P., Schuchardt, R. M., & Toly, N. J. (2012). Understanding Jacques Ellul. Wipf and Stock Publishers.

Grinde, B., & Patil, G. G. (2009). Biophilia: Does Visual Contact with Nature Impact on Health and Well-Being?

Harnad, S. (2002). Symbol grounding and the origin of language. Computationalism: New Directions, 143–158.

Hauskeller, M. (2012). My brain, my mind, and i: Some philosophical assumptions of mind-uploading. International Journal of Machine Consciousness, 04(01), 187–200. https://doi.org/10.1142/S1793843012400100

He, B., Gao, S., Yuan, H., & Wolpaw, J. R. (2013). Brain–Computer Interfaces. In B. He (Ed.), Neural Engineering (pp. 87–151). Springer US. http://www.springerlink.com/index/10.1007/978-1-4614-5227-0_2

Heath, C., & Heath, D. (2007). Made to Stick: Why Some Ideas Survive and Others Die. Random House.

Heerwagen, J. (2009). Biophilia, health, and well-being. Notes.

*Heylighen, F. (2007a). The Global Superorganism: An evolutionary-cybernetic model of the emerging network society. Social Evolution & History, 6(1), 58–119.

Heylighen, F. (2007b). Why is Open Access Development so Successful? Stigmergic organization and the economics of information. In Bernd Lutterbeck, Matthias Bärwolff, & Robert A. Gehring (Eds.), Open Source Jahrbuch 2007 (pp. 165–180). Lehmanns Media. http://pespmc1.vub.ac.be/Papers/OpenSourceStigmergy.pdf

*Heylighen, F. (2008). Accelerating socio-technological evolution: From ephemeralization and stigmergy to the global brain. In Globalization as evolutionary process: Modeling global change (p. 284). Routledge. http://pcp.vub.ac.be/papers/AcceleratingEvolution.pdf

Heylighen, F. (2013). Distributed Intelligence Technologies: A survey of present and future applications of the Global Brain (No. 2013–02; GBI Working Paper). http://pespmc1.vub.ac.be/Papers/GB-applications-survey.pdf

*Heylighen, F. (2014). Complexity and Evolution: Fundamental concepts of a new scientific worldview. ECCO, Vrije Universiteit Brussel. http://pespmc1.vub.ac.be/Books/Complexity-Evolution.pdf

*Heylighen, F. (2015). Return to Eden? Promises and Perils on the Road to a Global Superintelligence. In B. Goertzel & T. Goertzel (Eds.), The End of the Beginning: Life, Society and Economy on the Brink of the Singularity (pp. 243–305). Humanity+ Press. http://pespmc1.vub.ac.be/Papers/BrinkofSingularity.pdf

Heylighen, F. (2017a). Conceptions of a Global Brain: An historical review. In B. Rodrigue, L. Grinin, & A. Korotayev (Eds.), The Way that Big History Works: Vol. III (pp. 341–356). Primus Books.

Heylighen, F. (2017b). Towards an intelligent network for matching offer and demand: From the sharing economy to the global brain. Technological Forecasting and Social Change, 114, 74–85. https://doi.org/10.1016/j.techfore.2016.02.004

*Heylighen, F. (2020). Mind, Brain and Body. An evolutionary perspective on the human condition (ECCO Working Papers No. 2020–01). http://134.184.131.111/Papers/Mind-Brain-Body-lecturenotes.pdf

Heylighen, F. (1998). What makes a meme successful? Selection criteria for cultural evolution. Proc. 15th Int. Congress on Cybernetics, 413–418. http://pcp.vub.ac.be/Papers/Memetics-Namur.pdf

Heylighen, F., & Chielens, K. (2009). Evolution of Culture, Memetics. In R. Meyers (Ed.), Encyclopedia of Complexity and Systems Science (pp. 3205–3220). Springer. https://doi.org/10.1007/978-0-387-30440-3_189

*Heylighen, F., Kostov, I., & Kiemen, M. (2013). Mobilization Systems: Technologies for motivating and coordinating human action. In M. A. Peters, T. Besley, & D. Araya (Eds.), The New Development Paradigm: Education, Knowledge Economy and Digital Futures (pp. 115–144). Peter Lang. http://pcp.vub.ac.be/Papers/MobilizationSystems.pdf

Heylighen, F., & Lenartowicz, M. (2017). The Global Brain as a model of the future information society: An introduction to the special issue. Technological Forecasting and Social Change, 114, 1–6. https://doi.org/10.1016/j.techfore.2016.10.063

Ihde, D. (2012). Technics and praxis: A philosophy of technology (Vol. 24). Springer Science & Business Media.

Janis, I. L. (1972). Victims of groupthink: A psychological study of foreign-policy decisions and fiascoes. Houghton Mifflin Boston.

Jones, S. E. (2013). Against Technology: From the Luddites to Neo-Luddism. Routledge.

Kaczynski, T. J. (2010). Technological Slavery: The Collected Writings of Theodore J. Kaczynski, aka “The Unabomber”. Feral House.

Kaczynski, T. J. (1995). Industrial society and its future. Washington Post.

Katz, M. L., & Shapiro, C. (1994). Systems Competition and Network Effects. Journal of Economic Perspectives, 8(2), 93–115. https://doi.org/10.1257/jep.8.2.93

*Kelly, K. (2010). What Technology Wants. Penguin.

Kriebel, D., Tickner, J., Epstein, P., Lemons, J., Levins, R., Loechler, E. L., Quinn, M., Rudel, R., Schettler, T., & Stoto, M. (2001). The precautionary principle in environmental science. Environmental Health Perspectives, 109(9), 871–876. https://doi.org/10.1289/ehp.01109871

*Kurzweil, R. (2005). The singularity is near. Penguin books.

LaFollette, H. (1997). Pragmatic Ethics. In H. LaFollette (Ed.), Blackwell Guide to Ethical Theory (pp. 400–419). Blackwell.

Latour, B. (1996a). On actor-network theory: A few clarifications. Soziale Welt, 369–381.

Latour, B. (1996b). Aramis, or the Love of Technology. Harvard University Press.

Le Corre, E. (2019). The Practice of Natural Movement: Reclaim Power, Health, and Freedom (1 edition). Victory Belt Publishing.

Lerner, J., & Tirole, J. (2002). Some simple economics of open source. The Journal of Industrial Economics, 50(2), 197–234.

*Logan, R. K. (2010). Understanding new media: Extending Marshall McLuhan. Peter Lang.

Luppicini, R. (2009). The emerging field of technoethics. In Handbook of research on technoethics (pp. 1–19). IGI Global.

Maslow, A. H. (1970). Motivation and personality (2nd ed.). Harper & Row.

McLuhan, M. (1994). Understanding media: The extensions of man. MIT press.

Meade, N., & Islam, T. (2006). Modelling and forecasting the diffusion of innovation – A 25-year review. International Journal of Forecasting, 22(3), 519–545. https://doi.org/10.1016/j.ijforecast.2006.01.005

Miller, J. G. (1965). Living systems: Basic concepts. Behavioral Science, 10(3), 193–237.

*More, M. (2013). The philosophy of transhumanism. The Transhumanist Reader, 8.

More, M., & Vita-More, N. (2013). The transhumanist reader: Classical and contemporary essays on the science, technology, and philosophy of the human future. John Wiley & Sons.

Nakamura, J., & Csikszentmihalyi, M. (2002). The concept of flow. In C. R. Snyder (Ed.), Handbook of positive psychology (pp. 89–105). Oxford University Press.

Nutt, D. J., Lingford-Hughes, A., Erritzoe, D., & Stokes, P. R. A. (2015). The dopamine theory of addiction: 40 years of highs and lows. Nature Reviews Neuroscience, 16(5), 305–312. https://doi.org/10.1038/nrn3939

Ord, T. (2020). The precipice: Existential risk and the future of humanity. Hachette Books.

Otlet, P. (1935). Monde: Essai d’Universalisme. Mundaneum.

Pinker, S. (2011). The better angels of our nature: The decline of violence in history and its causes. Penguin.

Pooley, G., & Tupy, M. (2020). Luck or insight? The Simon–Ehrlich bet re-examined. Economic Affairs, 40(2), 277–280. https://doi.org/10.1111/ecaf.12398

Quiggin, J. (2005). The Y2K scare: Causes, Costs and Cures. Australian Journal of Public Administration, 64(3), 46–55. https://doi.org/10.1111/j.1467-8500.2005.00451.x

*Rifkin, J. (2014). The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism. Palgrave Macmillan. http://www.bookdepository.com/Zero-Marginal-Cost-Society-Jeremy-Rifkin/9781137278463

Luppicini, R. (2010). Technoethics and the Evolving Knowledge Society: Ethical Issues in Technological Design, Research, Development, and Innovation. IGI Global.

Russell, P. (2008). The Global Brain: The Awakening Earth in a New Century. Floris Books.

Schmidt, E., & Cohen, J. (2013). The New Digital Age: Reshaping the Future of People, Nations and Business. Random House Digital, Inc.

Searle, J. (2006). The Chinese Room Argument. In Encyclopedia of Cognitive Science. Wiley. https://doi.org/10.1002/0470018860.s00159

Shoshitaishvili, B. (2021). From Anthropocene to Noosphere: The Great Acceleration. Earth’s Future, 9(2), e2020EF001917. https://doi.org/10.1029/2020EF001917

Sisson, M. (2013). The Primal Connection: Follow Your Genetic Blueprint to Health and Happiness. Primal Nutrition, Inc.

*Stock, G. (1993). Metaman: The merging of humans and machines into a global superorganism. Simon & Schuster.

Sunstein, C. R. (2003). Beyond the Precautionary Principle. University of Pennsylvania Law Review, 151(3), 1003–1058. https://doi.org/10.2307/3312884

Taleb, N. N. (2010). The Black Swan: Second Edition: The Impact of the Highly Improbable: With a new section: “On Robustness and Fragility” (2nd ed.). Random House Trade Paperbacks.

Teilhard de Chardin, P. (1959). The phenomenon of man. Collins London.

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale Univ Pr.

Tipler, F. J. (1994). The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead. Anchor.

Toffler, A. (1970). Future shock. Random House.

Turchin, A. (2018). Digital immortality: Theory and protocol for indirect mind uploading.

Van Krevelen, D. W. F., & Poelman, R. (2010). A survey of augmented reality technologies, applications and limitations. International Journal of Virtual Reality, 9(2), 1.

Verbeek, P.-P. (2009). Ambient Intelligence and Persuasive Technology: The Blurring Boundaries Between Human and Technology. NanoEthics, 3(3), 231–242. https://doi.org/10.1007/s11569-009-0077-8

Verbeek, P.-P. (2015). Toward a theory of technological mediation. Technoscience and Postphenomenology: The Manhattan Papers, 189.

Warwick, K. (2003). Cyborg morals, cyborg values, cyborg ethics. Ethics and Information Technology, 5(3), 131–137.

Warwick, K. (2014). The cyborg revolution. Nanoethics, 8(3), 263–273.

Wells, H. (1937). World Brain. Ayer Co Pub.

West, H. R. (2004). An introduction to Mill’s utilitarian ethics. Cambridge University Press.

Wiley, K. (2014). A Taxonomy and Metaphysics of Mind-Uploading. Humanity+ Press and Alautun Press.

Further reading

Allen, C., Wallach, W., & Smit, I. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.

Bell, D. (2006). Cyberculture Theorists: Manuel Castells and Donna Haraway. Routledge.

Berdichevsky, D., & Neuenschwander, E. (1999). Toward an ethics of persuasive technology. Communications of the ACM, 42(5), 51–58.

Bonifati, G. (2013). Exaptation and emerging degeneracy in innovation processes. Economics of Innovation and New Technology, 22(1), 1–21.

Bostrom, N. (2005). A history of transhumanist thought. Journal of Evolution and Technology, 14(1), 1–25.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. OUP Oxford.

Bostrom, N., & Roache, R. (2008). Ethical issues in human enhancement. New Waves in Applied Ethics, 120–152.

Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 316, 334.

Brey, P. (2010). Philosophy of technology after the empirical turn. Techné: Research in Philosophy and Technology, 14(1), 36–48.

Cohen, D., DeGeorge, R., Dreyfus, H., Edgar, S., Hollander, R., Mayo, D., … Mitcham, C. (1997). Technology and values. Rowman & Littlefield Publishers.

Doede, B. (2009). Transhumanism, technology, and the future: posthumanity emerging or sub-humanity descending? Appraisal, 7(3).

Dreyfus, H. L. (1997). Heidegger on gaining a free relation to technology. Technology and Values, Rowman & Littlefield Publishers, 41–53.

Dusek, V. (2006). Philosophy of technology: An introduction (Vol. 90). Blackwell, Malden/Oxford/Carlton.

Ellul, J. (1962). The technological order. Technology and Culture, 3(4), 394–421.

Ellul, J. (1992). Technology and democracy. In Democracy in a technological society (pp. 35–50). Springer.

Ellul, J. (2003). The ‘autonomy’ of the technological phenomenon. In Philosophy of Technology: The Technological Condition. An Anthology. Malden, MA: Blackwell.

Fogg, B. J. (2003). Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann.

Glenn, J. C. (1989). Future Mind: Artificial Intelligence: The Merging of the Mystical and the Technological in the 21st Century. Acropolis Books, Incorporated.

Hanks, C. (2009). Technology and values: Essential readings. John Wiley & Sons.

Harari, Y. N. (2016). Homo Deus: A brief history of tomorrow. Random House.

Harari, Y. N. (2018). 21 Lessons for the 21st Century. Random House.

Haraway, D. (2006). A cyborg manifesto: Science, technology, and socialist-feminism in the late 20th century. In The international handbook of virtual learning environments (pp. 117–158). Springer.

Heidegger, M. (1954). The question concerning technology. Technology and Values: Essential Readings, 99, 113.

Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.

Ihde, D. (2002). Bodies in technology (Vol. 5). U of Minnesota Press.

Ihde, D. (2012). Technics and praxis: A philosophy of technology (Vol. 24). Springer Science & Business Media.

Jones, S. E. (2013). Against technology: From the Luddites to neo-Luddism. Routledge.

Kaplan, D. M. (2009). Readings in the Philosophy of Technology. Rowman & Littlefield Publishers.

Kellner, D. (1999). Virilio, war and technology: Some critical reflections. Theory, Culture & Society, 16(5–6), 103–125.

Kellner, D. (2003). Jean Baudrillard. The Blackwell Companion to Major Contemporary Social Theorists, 310–331.

Kramarae, C. (2004). Technology and women’s voices: Keeping in touch. Routledge.

Lane, R. J. (2008). Jean Baudrillard. Routledge.

Lanier, J. (2000). One half of a manifesto. Edge Journal.

Lanier, J. (2010). You are not a gadget: A manifesto. Vintage.

Lanier, J. (2014). Who owns the future? Simon and Schuster.

Latour, B. (2007). Reassembling the social: An introduction to Actor-Network Theory. Oxford University Press.

Latour, B. (1990). Technology is society made durable. The Sociological Review, 38(1_suppl), 103–131.

Latour, B., & Venn, C. (2002). Morality and technology. Theory, Culture & Society, 19(5–6), 247–260.

Marcuse, H. (2004). Technology, War and Fascism: Collected Papers of Herbert Marcuse (Vol. 1). Routledge.

McCarthy, J., & Wright, P. (2004). Technology as experience. Interactions, 11(5), 43.

McLuhan, E., & Zingrone, F. (1997). Essential McLuhan. Routledge.

McLuhan, M. (2014). Media Research: Technology, Art and Communication. Routledge.

Misa, T. J., Brey, P., & Feenberg, A. (2004). Modernity and technology. MIT Press.

Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.

Moor, J. H. (2017). What is computer ethics? In Computer Ethics (pp. 31–40). Routledge.

Redhead, S. (2004). Paul Virilio: Theorist for an accelerated culture. University of Toronto Press.

Rosen, L. D. (2012). iDisorder: Understanding Our Obsession with Technology and Overcoming Its Hold on Us (First Edition). Palgrave Macmillan.

Scharff, R. C., & Dusek, V. (2013). Philosophy of technology: The technological condition: An anthology. John Wiley & Sons.

Schroeder, R. (1994). Cyberculture, cyborg post-modernism and the Sociology of Virtual Reality Technologies: Surfing the Soul in the Information Age. Futures, 26(5), 519–528.

Turkle, S. (2011). The Tethered Self: Technology Reinvents Intimacy and Solitude. Continuing Higher Education Review, 75, 28–31.

Turkle, S. (2017). Alone together: Why we expect more from technology and less from each other. Hachette UK.

Van de Poel, I., & Royakkers, L. (2011). Ethics, technology, and engineering: An introduction. John Wiley & Sons.

Van Den Hoven, J., & Weckert, J. (2008). Information technology and moral philosophy. Cambridge University Press.

*Verbeek, P.-P. (2011). De grens van de mens: Over techniek, ethiek en de menselijke natuur [The limit of the human: On technology, ethics and human nature]. Lemniscaat.

Verbeek, P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. Penn State Press.

Verbeek, P.-P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technolo- gy, & Human Values, 31(3), 361–380.

Verbeek, P.-P. (2008). Cyborg intentionality: Rethinking the phenomenology of human–technology relations. Phenomenology and the Cognitive Sciences, 7(3), 387–395.

Verbeek, P.-P. (2009). Ambient Intelligence and Persuasive Technology: The Blurring Boundaries Between Human and Technology. NanoEthics, 3(3), 231–242. https://doi.org/10.1007/s11569-009-0077-8

Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. University of Chicago Press.

Vinge, V. (1993). The coming technological singularity. Whole Earth Review, 88–95.

Wallach, W. (2015). A dangerous master: How to keep technology from slipping beyond our control. Basic Books.

Winner, L. (1978). Autonomous technology: Technics-out-of-control as a theme in political thought. Mit Press.

Winner, L. (2010). The whale and the reactor: A search for limits in an age of high technology. University of Chicago Press.

Zurbrugg, N. (1997). Jean Baudrillard, art and artefact. Sage.
