
The Ethical Dilemmas of Robots

Eric Lafferty

The rapid advancements in technology that the world has witnessed over the past century have made a reality of many of mankind's wildest dreams. From being able to cross the earth, air, and sea at extreme speeds to being able to send and receive information instantly via the Internet, the technological advancements of recent years have become cornerstones of modern society. One dream that has yet to be fulfilled by advancements in technology is the development of human-like and self-aware robots, often referred to as androids. While robotic technology has come a long way since its initial attempts, an android which is largely indistinguishable from a human is still far from a reality. However, as technology continues to develop and evolve exponentially, many people believe it is only a matter of time. If and when truly "living" robots were to come about, one can foresee a slew of ethical dilemmas developing.

A complete consensus on the definition of the word "robot" has yet to be reached. However, it is commonly accepted that robots possess some combination of the following attributes: mobility, intelligent behavior, and the ability to sense and manipulate their environment ("Robot"). This being the case, the term "robot" truly extends to more than just androids. However, for the purposes of this paper I will focus for the most part on androids and their ethical implications.

The History of Robots

Using the term "robot" to refer specifically to androids is actually how the term was first applied. The commonly accepted first use of the word was in 1920, in a play written by Karel Capek. The play was entitled R.U.R. (Rossum's Universal Robots) and involves the development of artificial people. These people are referred to as robots, and while they are given the ability to think, they are designed to be happy as servants. The use of the word "robot" in Capek's play comes from the Slavic word for "work," which is robota ("R.U.R. (Rossum's Universal Robots)").

While the word "robot" was not used until 1920, the idea of mechanical humans has been around as far back as Greek mythology. One example that closely relates to the servant robots seen in Capek's play is the servants of the Greek god Hephaestus, the god of fire and the forge. It is recorded that Hephaestus built robots out of gold which were "his helpers, including a complete set of life-size golden handmaidens who helped around the house" ("Hephaestus: Greek God of the Forge and Fire"). Another example of robots in Greek mythology comes from the story of Pygmalion, who is said to have crafted a statue, Galatea, that came to life ("Timeline of Robotics").

Beyond the ancient myths which speak of humanoid robots, one of the milestones in the design and development of such robots came with the discovery of Leonardo Da Vinci's journals, which contained detailed plans for the construction of a humanoid robot. Inspired by the ancient myths, the robot was designed in the form of an armored knight and was to possess the ability to sit up, wave its arms, move its head, and open its mouth. The journals in which the plans were found date back to 1495 ("Timeline of Robotics"). It is unknown whether this robot was ever built by Da Vinci, but merely conceiving it was a milestone in the timeline of robotic history.

The Modern State of Robots

From Da Vinci to the current day, the development of humanoid robots has continued to approach the goal of a robot that is indistinguishable from a human. However, despite the massive recent advancements in technology and even the exponential growth of computing power over the past decades, this dream is still far from a reality. In a comprehensive article in the New York Times, Robin Marantz Henig discusses her experiences with what are often labeled "social robots." These robots are by no means what the servant robots of Greek mythology have led many people to hope for; rather, they are infant versions, at best, of the long-hoped-for androids. Henig comments that these machines are "not the docile companions of our collective dreams, robots designed to flawlessly serve our dinners, fold our clothes and do the dull or dangerous jobs that we don't want to do. Nor are they the villains of our collective nightmares, poised for robotic rebellion against humans whose machine creations have become smarter than the humans themselves. They are, instead, hunks of metal tethered to computers, which need their human designers to get them going and to smooth the hiccups along the way" (Henig 1).

Despite the disappointment that many people feel when they are given the chance to interact with the latest robots, some major players in the robotics industry are quite optimistic. Rodney Brooks is an expert in robotics and artificial intelligence. In an article written in 2008, Brooks explains that it is no longer a question of whether human-level artificial intelligence will be developed, but rather how and when (Brooks). Brooks adds, "I'm far from alone in my conviction that one day we will create a human-level artificial intelligence, often called an artificial general intelligence, or AGI. But how and when we will get there, and what will happen after we do, are now the subjects of fierce debate in my circles" (Brooks).

While it is true that androids are not the only robots which have a great impact on our lives, their development introduces a set of unique ethical issues which industrial robots do not evoke. Working under the assumption that it is only a matter of time until androids are an everyday reality, it is proper to begin thinking about what these ethical issues are and how they may be dealt with in the coming years. The overarching question that results is what exactly these robots are. Are they simply piles of electronics running advanced algorithms, or are they a new form of life?

What Is Life?

The question of what constitutes life is one on which the world may never come to a consensus. From the ancient philosophers to the common man on the street, it seems that everyone has an opinion on what a living organism consists of. One of the more prevailing views throughout history has been that of Aristotle. The basic tenet of Aristotle's view is that an organism has both "matter" and "form." This differs from the philosophical position known as materialism, which has become popular in modern times and finds its roots among the ancient Indians ("Materialism"). Materialism does not entertain any notion of organisms having a "form" or "soul"; rather, organisms are made simply of various types of "matter." These two views are at odds with one another, and the philosophical position society adopts will inevitably have a huge impact on how humans interact with robots.

Aristotle

The view articulated by Aristotle and his modern-day followers describes life in terms of unity, a composite of both "matter" and "form." One type of "matter" which Aristotle speaks of could be biological material such as what plants, animals, and humans consist of. Another type of "matter" could be the mechanical and electronic components which make up modern-day robots. Clearly it is not the "matter" alone which distinguishes whether an object is a living organism, for if it were, Aristotle's view would differ little from materialism. The distinguishing characteristic of Aristotle's view is his inclusion of "form." The term simply means whatever it is that makes a human a human, a plant a plant, and an animal an animal. Each of these has a specific "form" which is not the same as its "matter," but is a functioning unity essential to each living organism in order for it to be just that, living. The word used to describe the "form" of a living organism is "psyche" or "soul." The contemporary philosopher Dr. Robert Greene explains Aristotle's teaching that "the self-organization of living matter is based on the presence of a substantial unity called the psyche or 'soul,'" which functions in this way (Greene 142).

Materialism

Opposed to Aristotle's view of what exactly life is comprised of, materialism is another philosophical theory contending to answer this question. The basic tenet of materialism is that "matter" is the only thing which exists. In short, according to Wikipedia, materialism teaches "that all things are composed of material and all phenomena (including consciousness) are the result of material interactions. In other words, matter is the only substance" ("Materialism"). This view of the world is shared by many and is even the view of Rodney Brooks, who is quoted above.

Unlike Aristotle's philosophical view, which was embraced by various religions, perhaps most notably by the Roman Catholic Church and more specifically by St. Thomas Aquinas, materialism often finds itself at odds with most religious views in the world. Catholicism being a prime example of this, one will not find a favorable description of materialism in the opening lines of its definition in the Catholic Encyclopedia. The encyclopedia's entry begins by defining materialism as "a philosophical system which regards matter as the only reality in the world, which undertakes to explain every event in the universe as resulting from the conditions and activity of matter, and which thus denies the existence of God and the soul" ("Catholic Encyclopedia (1913)/Materialism").

Why does it matter that materialism is at odds with Catholicism and most other religions? More specifically, what does this have to do with robots and androids? I would argue that it is relevant because, if materialism is correct, then humans should have the power to develop new forms of life. If it is true that everything in the universe is simply material and the result of material interactions, then nothing should be stopping us from creating androids and recognizing them as just as valid a life form as humans.

Robotic Life

While I am personally opposed to the materialist view of life, the ethical questions that arise as a result of assuming its accuracy are nevertheless of great interest. This being so, I will discuss what might happen if we are actually able to develop and build living androids.

If we accept the idea that androids should be considered a new form of life, albeit one made up of machinery rather than biological components, to what form of life are we to equate them? For the sake of simplicity, there are three primary forms of life accepted by the modern world. The first is plant life. While plants are living organisms, they have no mind, and for this reason they should not be the form of life to which androids are compared. The second is animal life. This covers a wide array of forms, from insects to dogs to dolphins. If we consider androids to be animals, they would have to be of the highest sort. The androids that the future promises would no doubt quickly surpass even the smartest of mere wild animals. Moreover, since androids would likely be intertwined with humans, it would be more intuitive to equate them directly with the third form of life, humans. But would the fact that the androids were developed by humans prove that the androids are inferior to humans? Since there is no previous case of humans creating new life forms to reference, it is difficult to answer this question.

The Three Laws of Robotics

The decision of what level of life robots are to be considered is an essential one. If they are less than human, then perhaps science fiction has some valuable advice for us. In 1942 Isaac Asimov introduced to the world of science fiction what are known as the Three Laws of Robotics, which were published in his short story "Runaround." The laws Asimov formulated are defined as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law ("Three Laws of Robotics").

While these laws are part of science fiction history, the current state of robotic technology demands that they be considered in a new light. As with many ideas once confined to the world of science fiction, Asimov's laws are now able to make the transition into reality.

At first glance these three laws seem to be an excellent way to ensure the safe development of this supposed new life form. However, Asimov's laws presuppose that human life is of greater value than that of the androids being developed. If we work under the assumption that androids should be considered just below humans, Asimov's laws may hold true. But what if we hold to the conclusion materialism reaches, that androids should be placed at or above the level of humans? If this is the case, Asimov's laws cannot be applied. The main reason is that we could not see androids as equal forms of life and still implement Asimov's laws, which place androids in direct submission to humans. How can it be that an android should give its life for a human if an android has a right to life equal to that of a human? Imagine an army made up of both androids and humans. Should the android always give its life to save a human's life? Would human soldiers be willing to die for an android? As much as people may believe in materialism and come to the conclusion that robots will one day be a life form equal to humans, I find it hard to believe that many people would actually die for a robot.

Robot Code of Ethics

While it remains true that robotics technology is not at a place where ethical codes for robots are necessary, that is not stopping some countries from being proactive and taking the first steps in the development of a robot code of ethics. South Korea is considered one of the most high-tech countries in the world, and it is leading the way in the development of such a code. Known officially as the Robot Ethics Charter, it is being drawn up "to prevent human abuse of robots—and vice versa" (Lovgren). The main focus of the charter is said to be on the social problems that the mass integration of robots into society is bound to create. In particular, it aims to define how people are to properly interact with robots, addressing, in Stefan Lovgren's words, "human control over robots and humans becoming addicted to robot interaction" (Lovgren). Beyond the social problems robots may bring with them, there is also an array of legal issues, the primary one in the charter being what information is collected by robots and how it is distributed (Lovgren).

To many it seems as though South Korea's Robot Ethics Charter is the beginning of a modern-day implementation of Asimov's Three Laws of Robotics. However, many robot designers, such as Mark Tilden, think this is all a bit premature. Tilden claims that we are simply not at a point where robots can be given morals and compares the effort to "teaching an ant to yodel" (qtd. in Lovgren). Tilden goes on to claim that when we do reach that point, the interactions will be less than pleasant, stating that "as many of Asimov's stories show, the conundrums robots and humans would face would result in more tragedy than utility" (qtd. in Lovgren). Despite Tilden's and others' pessimistic view of what the future holds for the human-robot relationship, technology will slow down for no one. It is only a matter of time before other countries follow in South Korea's footsteps and create their own codes of ethics for robots and their interactions with humans.

Sex and Robots

While general relations between robots and humans are important, there is one issue that could easily be at the forefront of the robot ethics discussion: sex. Henrik Christensen, a member of the European Robotics Research Network, stated in 2006 that he expects that "people are going to be having sex with robots within five years" (qtd. in Habershon). Expectations regarding a robot's ability to provide sexual pleasure for humans could change the sexual tendencies of the world. It will be no surprise if the adult entertainment industry seizes the opportunity robotics will soon provide, as it has with past technological advancements, by producing robots designed specifically for pleasure. In fact, this is exactly what has begun to happen, confirming that Christensen's prediction was correct. In January 2010, at the Adult Entertainment Expo in Las Vegas, Nevada, Douglas Hines introduced Roxxxy to the world. CNN journalist Brandon Griggs comments that "to some men, she might seem like the perfect woman: She's a willowy 5 feet 7 and 120 pounds. She'll chat with you endlessly about your interests. And she'll have sex whenever you please -- as long as her battery doesn't run out" (Griggs). Roxxxy is scheduled to ship later this year, and while a price tag of $7,000 may deter many potential customers, Hines claims that pre-orders have been rolling in (Griggs). Moreover, if the product reaches mass production, the price will surely drop.

Inevitably, the days will soon be upon us when people seek out robots for sexual pleasure on a larger scale, and why not? The robots will be designed to be completely customizable to satisfy the tastes of every customer. The greater adoption of sex robots could very likely lead to a drop in both prostitution and sexually transmitted diseases, which many would see as a positive. However, there are also negatives to the adoption of robots for sexual pleasure which must be carefully considered. As a result of on-demand sexual intercourse, sexual addictions are likely to skyrocket. Furthermore, it could lead to the degradation of the traditional view that sexual intercourse holds a place of sanctity within marriage. Some of these issues hinge on whether the robots are mere machines or a new mechanical organism. When laws are passed in regard to human-robot sexual relations, what should the legislation contain? Clearly, if the robots are a form of life, then it would be wrong for humans to have free rein with them. On the one hand, it could be seen as a form of rape, and on the other hand, interspecies intercourse is frowned upon by most societies, if not forbidden by their laws. While many people will abstain from the use of sex robots because of objections arising from religious, moral, and philosophical beliefs, many others would find pleasure in them. For these reasons, it is essential that the ethical issues regarding robotic sex begin to be discussed in society before it becomes a widespread reality.

Conclusion

The root question around which all ethical issues involving human-robot relations revolve is whether humans can peacefully coexist with another intelligent species. If we look back into history, it seems doubtful that humans could accomplish this. An excellent example came with the discovery of the Americas and the exploitation and slaughter of the Native Americans who lived there. In that case, where both sides were human, it took hundreds of years before peace could be reached, and even then at the cost of countless lives.

While it remains a matter of debate, I personally do not believe that society will ever accept the idea that androids are an equal or greater form of life than humans. Human nature is prideful, and I do not believe human society as a whole could handle not being on top. No matter what happens, upcoming technological advancements will lead us to consider closely just what constitutes life.


Bibliography

"Aristotle." Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 11 May. 2010. Web. 11

May. 2010. .

Brooks, Rodney. "I, Rodney Brooks, Am a Robot." IEEE Spectrum Online. June 2008. Web. 11

May 2010. .

"Catholic Encyclopedia (1913)/Materialism." Wikisource, The Free Library. 16 Dec 2009, 13:29

UTC. Wikimedia Foundation. 19 Apr 2010.

&oldid=1696403>.

Greene, Robert. The Death and Life of Philosophy. South Bend, Ind.: St. Augustine's Press,

1999.

Griggs, Brandon. "Inventor Unveils $7,000 Talking Sex Robot." CNN.com. Cable News

Network, 1 Feb. 2010. Web. 11 May 2010.

.

Habershon, Ed, and Richard Woods. "No Sex Please, Robot, Just Clean the Floor." Times Online.

18 June 2006. Web. 18 Apr. 2010.

.

Henig, Robin M. "The Real Transformers." The New York Times. 29 July 2007. Web. 19 Apr.

2010. .

"Hephaestus: Greek God of the Forge and Fire." The Myths of the Greek Gods: Greek Mythology

and Archetypes. 2003. Web. 19 Apr. 2010.

minds.com/Hephaestus-greek-god.html>.


Lovgren, Stefan. "Robot Code of Ethics to Prevent Android Abuse, Protect Humans." National Geographic. 16 Mar. 2007. Web. 19 Apr. 2010.

"Materialism." Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 13 Apr. 2010. Web. 19 Apr. 2010.

"Robot." Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 13 Apr. 2010. Web. 19 Apr. 2010.

"R.U.R. (Rossum's Universal Robots)." Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 3 Apr. 2010. Web. 20 Apr. 2010. <http://en.wikipedia.org/wiki/R.U.R._(Rossum's_Universal_Robots)>.

"Three Laws of Robotics." Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 19 Apr. 2010. Web. 20 Apr. 2010.

"Timeline of Robotics." The History of Computing Project. 19 Nov. 2007. Web. 19 Apr. 2010.
