
A Robot By Any Other Name Could Be A Human: defining humanity through an examination of Asimovian robots

by Abigail Bernasconi

A THESIS

submitted to

Oregon State University

Honors College

in partial fulfillment of the requirements for the degree of

Honors Baccalaureate of Science in Public Health (Honors Associate)

Honors Baccalaureate of Arts in German (Honors Associate)

Presented May 27, 2021. Commencement June 2021.

AN ABSTRACT OF THE THESIS OF

Abigail Bernasconi for the degree of Honors Baccalaureate of Science in Public Health and Honors Baccalaureate of Arts in German presented on May 27, 2021. Title: A Robot By Any Other Name Could Be A Human: defining humanity through an examination of Asimovian robots.

Abstract approved:______Diana Rohlman

Being human is often portrayed as desirable in fiction. Many fictional beings, particularly robots and androids, seek out being human as a goal. Although current robotics is not advanced enough for robots and androids to be deemed sentient, the world of fiction is quickly becoming reality. With the integration of robots into society, we are confronted not only with how society views robots, but how, through the eyes of fictional robots, society and humanity are defined. This thesis seeks to explore the definition of humanity and what it means to be human. The fictional works of the Star Trek universe and those of Isaac Asimov suggest that relationships, both friendly and antagonistic, as well as mortality, may also define humanity. Understanding what makes us human better prepares us for the eventual integration of intelligent robots into humanity and allows us to imagine what their place in our society will look like. There are current applications as well, which are being worked out as new robots are created. By examining fiction, we can piece together not just what humanity is but what it means to be a part of humanity.

Key Words: androids, artificial intelligence, humanity, robots

Corresponding e-mail address: [email protected]

©Copyright by Abigail Bernasconi May 27, 2021

A Robot By Any Other Name Could Be a Human: defining humanity through an examination of Asimovian robots

by Abigail Bernasconi

A THESIS

submitted to

Oregon State University

Honors College

in partial fulfillment of the requirements for the degree of

Honors Baccalaureate of Science in Public Health (Honors Associate)

Honors Baccalaureate of Arts in German (Honors Associate)

Presented May 27, 2021. Commencement June 2021.

Honors Baccalaureate of Science in Public Health and Honors Baccalaureate of Arts in German project of Abigail Bernasconi presented on May 27, 2021.

APPROVED:

______Diana Rohlman, Mentor, representing Environmental and Occupational Health

______Heather Knight, Committee Member, representing Electrical Engineering and Computer Science

______Joseph Orosco, Committee Member, representing Philosophy

______Toni Doolen, Dean, Oregon State University Honors College

I understand that my project will become part of the permanent collection of Oregon State University, Honors College. My signature below authorizes release of my project to any reader upon request.

______Abigail Bernasconi, Author

Contents

Introduction
Friendship
Example: Daneel Olivaw
Example: Data
Friendship Case Study: Companion Robots
Antagonisms
Example: Dr. K. Pulaski
Example: Lore
Example: The Caves of Steel
Example: The Legal System
Legal System example: “The Bicentennial Man”
Legal System example: Measure of a Man
Antagonisms Case Study: Hanson Robotics’ Sophia
Antagonisms: Conclusion
Death
R. Jander Panell: What we call death
Bicentennial Man: a mortal man
Data: another mortal man
Death Case Study: Opportunity Rover
Death Case Study: Jibo
Conclusion
Media Sources
Citations
Appendix A. Image attributions for images used in Fig. 1 The Uncanny Valley

Introduction

Being human is often portrayed as desirable in fiction. Many fictional beings, particularly robots and androids, seek out being human as a goal. Although current robotics is not advanced enough for robots and androids to be deemed sentient, the world of fiction is quickly becoming reality. With the integration of robots into society, we are confronted not only with how society views robots, but how, through the eyes of fictional robots, society and humanity are defined.

The word “robot” comes from the Czech word “robota,” meaning forced labor. It was coined by Czech playwright Karel Čapek in his 1920 play R.U.R. (Rossum’s Universal Robots). Robots, as Čapek used them, were replacements for human workers. They were essentially artificial people, some of whom were more human in appearance than others.

While artificial construction is a key element in many definitions of the word robot, there is no universal consensus. As John Jordan wrote in his book Robots, “robots are hard to talk about [because] the definitions are unsettled, even among those most expert in the field” (J. M. Jordan, 2016). However, “robot” is not the only term used to describe artificial constructs. The term “android” is defined by Adrienne Mayor in her book, Gods and Robots, as “[a] mobile robot in human form” (2018, pg. 219). With the word “robot” built into the definition, differentiating between android and robot can be tricky.

Under Čapek’s original definition, robots are assumed to be metallic in complexion and mechanical in construction, whereas androids generally are not (J. Jordan, 2019). Robots with a more human appearance, particularly “flesh-like exteriors,” as Jordan put it, are considered androids (2016).

Hereafter, the term “robot” will be used to refer to an artificial construct with a humanoid shape, regardless of exterior appearance. The term “android,” similar to Mayor’s definition, will refer to “a robot with a human-like appearance.” By these definitions, Isaac Asimov’s recurring character R. Daneel Olivaw (the R. stands for robot) and Star Trek: The Next Generation’s Lieutenant Commander Data are each both robots and androids, appearing human in size, shape, and mannerism, including the ability to blink and simulate breathing. In contrast, Asimov’s R. Giskard Reventlov is only a robot, with a metal exterior that resembles a human form yet is clearly non-human.

Asimov laid out a set of laws for robots, which all of his robots follow. Star Trek’s robots also have an in-universe awareness of these laws but are not beholden to them. The Three Laws of Robotics are as follows:

“1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” (Asimov, 1992)

These laws have been widely cited in popular media as well as in more serious settings, such as the European Union’s Commission on Civil Law Rules on Robotics (Nevejans, n.d.). Here, we see how science fiction already informs reality. Although the Three Laws do not translate well to direct coding, the EU’s rules suggest that they be “regarded as being directed at the designers, producers and operators of robots” (Nevejans, n.d.). These rules establish guidelines for what Asimov calls C/Fe interactions. C/Fe is a combination of the chemical symbol for carbon (C) and that of iron (Fe), separated by a slash yet equal in importance (Asimov, 1991, pg. 103).
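To make concrete why the Laws resist direct coding, consider a deliberately naive sketch, not drawn from the thesis itself. Every name in it is hypothetical: the Laws are stated over judgments such as “would this injure a human being?” that no program can actually evaluate, which is precisely why the EU study redirects them at designers, producers, and operators rather than at machines.

```python
# A naive, illustrative sketch of the Three Laws as a priority-ordered
# rule filter. Every field of Action is a hypothetical stand-in for a
# judgment no real robot can currently compute.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool           # would acting injure a human? (First Law)
    inaction_harms_human: bool  # would *not* acting let a human come to harm? (First Law)
    ordered_by_human: bool      # was the action commanded by a human? (Second Law)
    destroys_self: bool         # would acting destroy the robot? (Third Law)


def permitted(action: Action) -> bool:
    """Evaluate the Three Laws in strict priority order."""
    if action.harms_human:
        return False  # the First Law forbids the action outright
    if action.inaction_harms_human:
        return True   # the First Law compels the robot to act
    if action.ordered_by_human:
        return True   # Second Law: obey, since the First Law is satisfied
    return not action.destroys_self  # Third Law: self-preservation comes last


# An order to act that would destroy the robot: the Second Law outranks
# the Third, so the action is permitted.
print(permitted(Action(False, False, True, True)))  # True
```

The precedence logic above is trivial; the difficulty lies entirely in supplying truthful values for those four booleans, and that gap is the whole problem of “direct coding.”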

Using these definitions, an interesting question can be posed: if one replaces mechanical parts with non-mechanical ones, at what point does the robot become an android or an android become a human?

Asimov explored these questions in his novella “The Bicentennial Man” (Asimov, 1992). The protagonist, Andrew Martin, goes to great lengths to replace his metallic parts with prosthetic ones, becoming physically indistinguishable from a human. He is a de facto human, considered human in fact but not in law. He seeks to be human de jure, or human in the legal sense. Martin’s case raises questions about the state of humanity. What separates Martin the android, who looks, talks, and behaves as a human, from being human?

Humanity is defined by the Cambridge Dictionary as “the condition of being human” (HUMANITY | Definition in the Cambridge English Dictionary, n.d.). The Encyclopedia Britannica elaborates further, defining human beings by their genus and species, mentioning physical similarity to great apes and a capacity for abstract reasoning (Human Being | Britannica, n.d.). Bostrom and Yudkowsky investigated the concepts of sentience and sapience as qualifiers for moral status and personhood (Bostrom & Yudkowsky, 2011).

Bostrom and Yudkowsky define sentience as “the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer” (Bostrom & Yudkowsky, 2011). The Cambridge Dictionary more succinctly describes sentience as “the quality of being able to experience feelings” (SENTIENCE | Definition in the Cambridge English Dictionary, n.d.). Sapience, in turn, is “a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent” (Bostrom & Yudkowsky, 2011).

To be human, therefore, is to have both sentience and sapience. Nonetheless, the definitions remain elusive. While many agree that some animals have sentience, without the additional quality of sapience they would not be considered human. Science fiction allows these intangible qualities to be evaluated through fictional robots attempting to emulate humans.

Although Čapek first used the word “robot,” it was Asimov who coined the word “robotics.” He often wrote about robots and androids with human qualities. His works would influence other portrayals of androids in media, particularly those in Gene Roddenberry’s television series Star Trek. A sequel to the original series, Star Trek: The Next Generation features an android officer serving onboard the Federation Starship Enterprise-D, Lieutenant Commander Data. Data bears a striking similarity to Asimov’s character Andrew Martin in his lifelong goal to become more human. Both face legal repercussions for their pursuit; both see the court rule in their favor.

Using the works of Asimov and Roddenberry’s Star Trek, themes of humanity will be discussed through the lenses of specific androids. From Asimov’s works, the Robot novels, specifically the Elijah Baley detective trilogy, and the short story “The Bicentennial Man” from The Complete Robot will be analyzed. From Star Trek, the focus will be on the Soong androids, most notably Data, as seen in the Star Trek: The Next Generation television series and Star Trek: Picard. To explore humanity through these robots, several key topics will be considered. These include interpersonal relationships, antagonisms experienced by robots, and mortality. Familial, platonic, and adversarial relationships allow for an analysis of how robots may be categorized into an “in-group” or an “out-group.” These lines may be delineated by any number of categories based on shared similarities. There is some evidence that “even under the most minimal conditions, people more positively evaluate their in-group members, allocate more resources to them, and hold stronger implicit favoritism towards them” (Kaufman, 2019). In science fiction, robots often seek to move from the “out-group” – of being labeled different and strange – to being a part of the “in-group” by attempting to become human. Additionally, themes of death and mortality will be discussed.

These themes were chosen after a literature review identified recurring themes specific to robots. Other themes that were identified but ultimately set aside included childishness in robots, sexual relations between robots and humans, and ideas of identity and freedom. Lacunae were identified in the topics of relationships, antagonisms, and death, which this thesis therefore expands on. By analyzing these themes in portrayals of androids, we explore not just what it means to be human, but also how we define humanity and the state of being human.

Previous literature has focused heavily on human distrust of robotic creations. Distrust may stem from the concept of the Uncanny Valley, the sensation of unease felt by many people when perceiving an android that closely resembles a human. The Uncanny Valley, coined by roboticist Masahiro Mori in the 1970s, has been described as a graph, where low levels of similarity to humans are unthreatening, as are extremely high levels (Mori et al., 2012). In between lies the Uncanny Valley, where the resemblance is close enough to be eerie yet not close enough to pass as human. In Figure 1, several of the robots who will be discussed, both fictional and real, have been placed on the Uncanny Valley graph. In the end, it is easier to ascribe humanity to those who appear human. Thus, we return to the question: through the eyes of androids in science fiction, how is humanity defined?

Figure 1: A depiction of the Uncanny Valley. From left to right, Paro the companion robot, Jibo, Opportunity Rover, R. Giskard Reventlov, Sophia, Data, R. Daneel Olivaw, and Andrew Martin. Image attributions listed in Appendix A.
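Mori’s curve can also be sketched numerically. The control points below are hypothetical values chosen only to reproduce the qualitative shape described above, a rise, a plunge into the valley near high human likeness, and a recovery at full likeness; they are not measurements from Mori et al. (2012) or from this thesis.

```python
# A notional uncanny-valley curve built from made-up control points that
# mimic the qualitative shape: affinity rises with human likeness, drops
# sharply near high likeness (the valley), then recovers.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.array([0.0, 0.3, 0.6, 0.8, 0.9, 1.0])   # hypothetical x-values
affinity = np.array([0.0, 0.4, 0.7, -0.6, 0.0, 1.0])  # hypothetical y-values

x = np.linspace(0.0, 1.0, 200)
y = np.interp(x, likeness, affinity)  # piecewise-linear interpolation

plt.plot(x, y)
plt.axhline(0.0, color="gray", linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity")
plt.title("Notional uncanny-valley curve (illustrative values only)")
plt.show()
```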

Friendship

To better understand humanity and the intangible concepts that make one human, the role of friendship can be analyzed. In his paper “Robotic Friendship: Can a Robot be a Friend?” Claus Emmeche addresses the question of whether robots are capable of friendship as a means of discussing our fundamental understanding of human autonomy and agency (Emmeche, 2014, pg. 27). Examining robot-human and robot-robot friendships therefore allows for a better understanding of human-human friendships. Both Isaac Asimov and Star Trek creator Gene Roddenberry explored the concept of friendship through C/Fe relationships.

In the Nicomachean Ethics, Aristotle defines friendship as having three key characteristics: (1) each party feels goodwill for the other, (2) each party is aware of the other’s goodwill, and (3) the cause of this goodwill stems from a lovable quality (Aristotle, Nicomachean Ethics, Book 8, n.d.). Friendship, therefore, is a reciprocal relationship. It is centered on sharing, in which mutual affection is given and received in turn. Such connections leave lasting impressions on both parties, which can be seen in changed behaviors or values.

Friendship is not only defined as a relationship between humans. In primatology, the term “friendship” has been used to describe certain amicable social relationships between monkeys and apes (Silk, 2002, pg. 421). Thus, reciprocal relationships are not unique to humans.

Example: Daneel Olivaw

The relationship of R. Daneel Olivaw, an android built by humans from the planet Aurora, and Elijah Baley, a human police detective from Earth, in Asimov’s Robot series leaves a lasting impact on Olivaw far beyond Baley’s own life. Their relationship began as one of utility. Such relationships, as Aristotle wrote, “[dissolve] as soon as its profit ceases; for the friends did not love each other, but what they got out of each other” (Aristotle, Nicomachean Ethics, Book 8, n.d.). The reciprocal nature of their friendship allowed them to function well as work partners during three separate murder investigations, portrayed in The Caves of Steel, The Naked Sun, and The Robots of Dawn. After many years, each considered the other a dear friend. Theirs was a friendship of evolution: it began with antagonism rather than mutual goodwill, but they nonetheless overcame their differences.

Baley and Olivaw are assigned to work together when an Auroran doctor is found dead during a visit to Earth. Baley’s first impression of Olivaw is that he is a human from Aurora. When he is confronted with the fact that Olivaw is an android, he is shocked. Initially opposed to robots, he views Olivaw as a threat to his job: as something he could be replaced by (Asimov, 1991, pg. 81). This comes to a head when Olivaw takes a stand during a confrontation between a robot store clerk and a customer. Baley lashes out, snapping, “‘And you’re not human.’ Baley felt himself being driven into cruelty against his will” (Asimov, 1991, pg. 80). In this moment, he feels threatened by Olivaw, seeing him not only as a robot, but as a robot who could replace him. This fear is what Asimov describes as the Frankenstein Complex: a fear of robots. It drives him to berate Olivaw’s actions. Baley recognizes that he is acting on fear and stops himself. Eventually, he works to overcome his own prejudice and develops a close relationship with Olivaw.

Working side by side during the investigation, Baley and Olivaw develop a solid working relationship, similar to those Baley has with his human colleagues. Over time, their relationship progresses beyond one of utility, and Baley develops a deep friendship with Olivaw. Subsequent events in The Naked Sun and The Robots of Dawn lead them to meet and work together again. During these investigations, Baley addresses him as he would a human partner, even telling Olivaw, regarding the possibility that Olivaw might be destroyed to protect him, “I do not wish the loss of your existence. The preservation of my own would be inadequate compensation it seems to me” (Asimov, 1994, pg. 105). Even after their investigations conclude, they remain in touch. Long after it ceased to be profitable, Olivaw continued to cherish his memories of Baley.

Olivaw learned through his partnership with Baley how to think like a human. Although he was still bound by the Three Laws, it was through discussions with Baley that Olivaw was able to think beyond them, leading to the creation of the Zeroth Law, which supersedes the traditional Three. The Zeroth Law states: “a robot may not injure humanity, or through inaction, allow humanity to come to harm” (Asimov, 1985, pg. 353); it also modifies the subsequent three Laws accordingly. Until the end of his life, Baley also enjoyed Olivaw’s presence, going so far as to allow Olivaw onto the planet where he lived, even though the presence of robots and androids there was normally forbidden. Though their friendship was not founded on mutual goodwill, it grew and developed on common ground, allowing them to relate to each other as friends.

Example: Data

Star Trek’s Data has several close friendships with his fellow officers onboard the USS Enterprise-D. Among these friends is Captain Jean-Luc Picard, who respects Data’s abilities. When Data was given temporary command of the USS Sutherland, he disobeyed a direct order from Picard, who was commanding the fleet. Although Data’s actions aided the Federation victory, he reported to Picard for disciplinary measures. Instead, Picard commended Data for not blindly following orders, stating that such actions have “been used to justify too many tragedies” (Moore & Carson, 1991). Picard respects Data’s ability to think as independently as any human character.

He also admires Data’s tenacity in his desire to be human. Over the course of the show, Picard watched Data grow as an individual. He admired how Data looked at humanity, with all its flaws, and still saw “kindness, immense curiosity, and greatness of spirit. And he wanted more than anything else to be part of that” (Chabon & Goldsman, 2020). Picard did not fully understand Data’s search for humanity, but he respected and often encouraged it.

Several other members of the Enterprise-D’s bridge crew consider Data to be their friend. In Lieutenant Worf, Data found someone who empathized with his position of being seen as non-human. Both he and Worf, raised by officers, “are both still outsiders in human society” (Apter & Wiemer, 1991). The two bonded over their shared differences, living among humans but still being marked as different. In first officer William Riker, Data found a friend who could offer advice. When Data sought advice on friendship, weighing the benefits of such a relationship against the risks of betrayal, Riker helped by explaining how “without trust, there’s no friendship, no closeness” (Menosky & Scheerer, 1990). Data’s closest friend was Geordi La Forge. The two spent much of their free time together, often reenacting Sherlock Holmes mysteries. Through La Forge, Data learned what it means to be a friend. As Data explained, La Forge “treated me no differently from anyone else. He accepted me for what I am. And that, I have learned, is friendship” (Moore & Carson, 1992). From each of his friends, Data gained companionship and understanding.

Throughout his life, Data sought to be human. From his friends, he observed how to interact with others, how to act and behave. Through his friendships, the audience can perceive the qualities of friendship that are considered important to humans.

Friendship Case Study: Companion Robots

Friendship is not a solely human quality, yet humans seem unique in ascribing feelings of friendship to non-humans. The examples of R. Daneel Olivaw and Star Trek’s Data represent potential amicable relationships that humans can have with robots in the distant future.

However, amicable C/Fe relationships are already here. Robotic companions exist today, in the shapes of dogs, cats, and seals. Companion robots are at use in retirement and nursing facilities around the world. Robots like Paro, a robotic baby seal created by Takanori Shibata of Japan’s

National Institute of Advanced Industrial Science and Technology, as well as Ageless

Innovation’s Joy for all companion pets, are currently providing companionship to many elderly people. These robots have been designed to be small, soft, fuzzy creatures, with little capacity for speech (Bradwell, Edwards, Winnington, Thill, & Jones 2019). Though these robots are still fairly primitive and not widely used, they are heralding in the acceptance of robotic companions and robotic friendships.

In one study, companion robots with familiar shapes, such as cats and dogs, were preferred over those with unfamiliar shapes, such as a seal. Unfamiliar forms were considered by older people to be “more infantilizing” (Bradwell et al., 2019). Although none of these robots comes anywhere near the Uncanny Valley, there was a clear preference for the realistically shaped. Additionally, robots with more realistic behaviors, such as tail movement, noises, and simulated breathing, were praised by participants (Bradwell et al., 2019).

Currently, companion robots have little capacity for speech or other interactive qualities. As it stands, they can only meet Aristotle’s definition of friendship in a one-sided capacity, as only one party, the human, is capable of feeling goodwill for the other. Bradwell et al. (2019) discuss older people’s desire for companionship and view companion robots as a way to reduce loneliness. As technology advances, however, so will the design and capabilities of companion robots.

While reciprocity of friendship is not currently a capability of companion robots, it may become so in the future. Science fiction has shown us that reciprocal friendships are a desirable trait that we find inherently human. However, friendship alone does not a human make.

Antagonisms

“Prejudice is very human,” said Data after being confronted with Commander Riker’s prejudgment of him upon their first meeting (Fontana, Roddenberry, & Allen, 1987). Antagonisms come from a variety of sources, from the interpersonal to the institutional to the policy level. Distrust of robots may arise from the Uncanny Valley, as well as from fears of replacement. These feelings of fear underlie acts of prejudice and active antagonism against “the other,” which robots may represent in science fiction. In reality, “the other” is not always non-human. “The other” is an out-group: something to be excluded, or at least viewed as not belonging. Viewing the dichotomy of “in-group” versus “out-group” through the lens of science fiction allows for a more objective analysis of these very human behaviors.

Example: Dr. K. Pulaski

Star Trek’s Data faces great animosity from Dr. Katherine Pulaski throughout her time on the Enterprise. She refers to him as “it,” refusing to view him as anything other than a machine, something she firmly assigns to the “out-group.” Pulaski criticized his capacity to learn, saying that “we humans learn more often from a failure or a mistake than we do from an easy success. But not you. You learn by rote” (Lane & Bowman, 1988). She did not see Data as a sentient being and therefore did not deem him worthy of being treated with respect.

Unlike others on the command staff, Pulaski did not believe that Data had the capacity to be any more than he already was. During a holodeck adventure in which she, La Forge, and Data were reenacting Sir Arthur Conan Doyle’s Sherlock Holmes mysteries, she contrasted Data with Mr. Holmes, saying that “Holmes understood the human soul. The dark flecks that drive us, that turn the innocent into the evil. That understanding is beyond Data. It comes from life experience which he doesn’t have combined with human intuition for which he cannot be programmed” (Lane & Bowman, 1988). Pulaski believed Data to be incapable of growth and therefore incapable of being more than a machine, and thus forever relegated him to the out-group.

Example: Lore

Star Trek’s Data also faces difficulties from his android brother: Lore. Lore is his predecessor, made by their creator Dr. Noonien Soong. While Data strives to be an upstanding individual, Lore does not. Instead, his actions are often motivated by revenge against a universe that he believes has wronged him.

Lore is an exemplification of the Uncanny Valley. He is strikingly similar to humans in form and action, unlike his brother Data, who was purposely built with imperfections. For example, Data is incapable of using contractions, instead using “I am” for “I’m” or “cannot” for “can’t,” verbally separating his speech from typical human speech. Despite Lore’s appearance of perfection, however, those around him are very much aware that he is not human and find him unsettling because of it. The colonists of his home colony of Omicron Theta saw him as “so completely human the colonists became envious” (Lewin, Hurley, Roddenberry, & Bowman, 1988). The colonists felt threatened by Lore. They knew he was not human, yet he bore such a strong resemblance to humanity that they feared and excluded him. Because of their fear, they ordered Dr. Soong to dismantle Lore.

Example: The Caves of Steel

In the Robot novels, many humans on Earth view robots as a threat to their livelihoods. For them, robots are potential replacements. This fear is reflected throughout Baley’s time on Earth. R. Sammy was a metallic robot brought into the New York City Police Department as a replacement for the young runner Vince Barrett. To the other police officers, including Baley, R. Sammy was more than a piece of office equipment: he was a constant reminder that they, too, could be replaced.

In The Caves of Steel, Daneel Olivaw confronts Earth’s antagonisms soon after he partners with Elijah Baley. Early in their partnership, Baley and Olivaw encounter a situation at a shoe store, where a patron refuses to be served by a robot clerk. Baley worries that the situation will culminate in a riot as the human patrons and bystanders band together in their fear. He hesitates, fearing the retaliation of humans afraid of being replaced by robots, so it is Olivaw who steps in to defuse the situation. As Olivaw takes control, Baley sees his fears realized, not in the growing mob and its fear of the robot clerks, but in Olivaw’s ability to do Baley’s job as well as, or better than, Baley himself. The mob fears that which it sees as different: the robot clerk. Baley, in this moment, fears Olivaw.

Example: The Legal System

Legal systems have been used to rationalize many atrocities in the name of justice. They can be the pinnacle of equality and the nadir of discrimination. Throughout history, the law has also been used to define humanity and human rights, separating those who are “in” from “the other.” In all of the examples in this section, androids were at a distinct legal disadvantage compared to their human counterparts, as they were relegated to the out-group. Data, as will be discussed in further detail, is not given the same legal status as Dr. Pulaski, regardless of their difference in rank, which allows her criticism to go largely unchallenged. Lore faced being dismantled simply because he existed; he had no legal protections to prevent it. During the shoe store crisis, Olivaw was not afforded the same respect as a human colleague. The following examples discuss two legal systems, their limitations on robots and androids, and how the characters overcame these systemic antagonisms.

Legal System example: “The Bicentennial Man”

In Asimov’s short story “The Bicentennial Man,” the main character, Andrew Martin, fights the legal system to win the right to be recognized as human. He faces resistance from many, including his owner, Gerald Martin, who refuses to assist him on the grounds that Andrew Martin does not have feelings. Gerald Martin’s daughter Amanda speaks out on Andrew Martin’s behalf, saying, “I don’t know what he feels inside, but I don’t know what you feel inside either” (Asimov, 1992). She recognizes that a lack of understanding is mere ignorance, and that ignorance alone is not reason enough to deny someone rights.

At first, Andrew Martin seeks humanity because he seeks freedom, and he is told that “the word ‘freedom’ has no meaning when applied to a robot. Only a human being can be free” (Asimov, 1992). Martin spoke against this, claiming that he, too, wanted freedom: “only someone who wishes for freedom can be free. I wish for freedom” (Asimov, 1992). Martin partially attained this goal and legally became his own person; however, he was still not human. It was only later, with his death, that he fulfilled his greatest desire: to be human.

Legal System example: Measure of a Man

In one of Star Trek’s most famous episodes, Measure of a Man, Data’s sentience was put on trial, with Phillipa Louvois serving as presiding judge. Begrudgingly serving as prosecutor, first officer William Riker laid out a devastating case, arguing that Data is the property of Starfleet Command. During a court recess, Captain Picard visits Guinan, his friend and a knowledgeable bartender, to discuss the case. With her help, he recognizes the real issue at hand: that this trial will establish a precedent for future androids. Guinan points out that “there have always been disposable creatures. They do the dirty work that no one else wants to do because it’s too difficult or too hazardous… you don’t have to think about their welfare. You don’t have to think about how they feel. Whole generations of disposable people.” (Snodgrass & Scheerer, 1989). Guinan’s argument harkens back to Čapek’s original definition of robots: artificial laborers.

Picard addresses the issue of Data’s independence by arguing that he is sentient. He does so by proving Data’s intelligence, self-awareness, and consciousness. His main point, however, is that the ruling regarding Data’s sentience “will reach far beyond this courtroom and this one android. It could significantly redefine the boundaries of personal liberty and freedom, expanding them for some, savagely curtailing them for others” (Snodgrass & Scheerer, 1989). As Guinan observed, androids have the capacity to be a replacement labor force. Judge Louvois’ ruling recognized that the decision was ultimately not about Data’s being an android, but about whether he has a soul. As she could not answer that, she ruled in his favor.

Antagonisms Case Study: Hanson Robotics’ Sophia

In 2016, Hong Kong-based robotics company Hanson Robotics created a robot named Sophia. With a realistic face, she has the appearance of a human; however, the back of her head has a clear covering that leaves her internal hardware visible. Sophia was programmed with language software and is able to audibly communicate with humans. In 2017, Saudi Arabia granted her citizenship, an action that drew great criticism (Katz, 2017). Sophia’s citizenship differs from that of Saudi women, granting Sophia more rights than her human counterparts.

Whereas women in Saudi Arabia are not allowed to appear in public without an abaya and veil, Sophia has appeared publicly without such coverings. Although she is legally a citizen, she is treated differently than others. The freedoms allowed to Sophia stand in direct contrast to those denied to female Saudi citizens.

As robots become more advanced, the question of how to incorporate them into our legal systems becomes ever more pressing. By introducing Sophia as a citizen with rights different from those of other Saudi citizens, the legal system in Saudi Arabia has drawn clear distinctions between male, female, and android citizens. Sophia’s legal status has made the line between the in-group of Saudi men and the out-group of Saudi women even clearer. There now exists a legal difference between androids and humans, with the former having more rights than some of the latter.

Antagonisms: Conclusion

Each of these cases portrays how antagonisms, whether from an individual or from a system, arise in response to difference. Humans fear what is different and all too often lash out against it. They separate those who fit their definition of humanity, and thus fit into the in-group, from “the other,” those who are consigned to the out-group.

Dr. Pulaski resorted to scorn and dismissal when dealing with Data, her actions colored by the Uncanny Valley. Her inability to see Data as a sentient and sapient being divorces him, in her mind, from any underlying characteristics that could make him human. The colonists on Omicron Theta chose to dismantle what they perceived as different. For the mob in The Caves of Steel, violence was nearly the answer; like the colonists on Omicron Theta, they were prepared to destroy that which they feared.

For many, the legal system keeps these fears at bay: so long as androids are not legally recognized as human, they cannot become human replacements. The fears demonstrated above may also stem from what Asimov calls the Frankenstein Complex. Like the Uncanny Valley, the Frankenstein Complex concerns the fear of something similar to humans yet different enough to be unsettling; more specifically, it is a fear of robots and robotic creations. Lore is a personification of the Frankenstein Complex. With enough emotion to take pleasure in his misdeeds, Lore cheerfully engages in morally reprehensible actions.

Andrew Martin’s and Data’s struggles with the legal system illustrate the hurdles that must be cleared to establish one’s independence and personhood. Guinan pointed out that creating artificial laborers as disposable people allows humans to separate themselves from moral concern for those people’s welfare. As Martin said, it is freedom that is inherently human. Disposable people do not have freedom; ergo, they are not human.

The case study of Sophia highlights how modern legal systems struggle with hypocrisy and double standards. Bringing to light not just how legal systems treat non-humans, but how they define humans, is essential, as questions about the legality of robot workers will only become more pressing.

Death

To understand human mortality, we can look at immortals. R. Daneel Olivaw is, effectively, immortal. This is one significant attribute that separates him from ever being considered human. Although definitions of humanity do not include the qualities of a discrete life, humans are nonetheless bound by birth and death.

Olivaw, however, is not bound by the restrictions of organic life. He may be human-shaped, and he may have replaced numerous metallic body parts with biological ones, but he is not human. Olivaw’s immortality separates him from those he protects. While generation upon generation of humans have lived out their lives, he has outlived them all.

For a machine whose parts can be replaced, death does not occur. With the ability to replace certain human “parts,” such as limbs and organs, the question must be asked: at what point does one stop being human? Does that line stem from our corporeal bodies, from our behaviors and actions, or from elsewhere?

R. Jander Panell: What we call death

R. Jander Panell was the “twin” of R. Daneel Olivaw, identical in every way. He was gifted to Gladia Delmarre by his creator, Han Fastolfe, shortly after her arrival on the planet Aurora. When Panell was found destroyed under mysterious circumstances, Elijah Baley and R. Daneel Olivaw were called in to investigate. Many characters struggle to give a name to what happened to Panell. Undersecretary Lavinia Demachek of the Terrestrial Department of Justice reasoned that “destroying a humaniform robot is not exactly murder in the strictest sense of the word” (Asimov, 1994, pg. 42). Nonetheless, Baley refers to Panell as being “killed” or “murdered” several times throughout the story.

Due in part to Panell’s similarity to Olivaw and to his overall human appearance, Baley finds it difficult not to use such terminology. Olivaw debates these terms, saying that the words used are essentially interchangeable. Baley counters by quoting Shakespeare: “That which we call a rose by any other name would smell as sweet… yet changes in name do result in changes in perception where human beings are concerned” (Asimov, 1994, pg. 66-67). Baley argues that while our words may not alter the make-up of something, they may alter our perception of it.

Bicentennial Man: a mortal man

Asimov makes the case in “The Bicentennial Man” that it is this immortality that separates androids from humans. In this story, Andrew Martin is an android who sought to become human. Although Martin does gain legal status as a human, he nonetheless understood that he would not be recognized as human in the eyes of others because of his longevity. As he explained, “Human beings can tolerate an immortal robot, for it doesn’t matter how long a machine lasts, but they cannot tolerate an immortal human being since their own mortality is endurable only so long as it is universal” (Asimov, 1992). Martin recognized that mortality is a trait all humans share. To not be mortal is to not be human.

As Martin aged, he replaced his parts many times, eventually ending up with mainly biological parts, which often failed. He chose to allow himself to die on his 200th birthday. Martin truly achieved his goal of becoming human through his death: by becoming mortal.

Data: another mortal man

Initially, Star Trek’s Data did not want to die. Yet Data chose to die in place of Picard in Star Trek: Nemesis, voluntarily sacrificing himself. He did not entirely die, however, due to the recovery of his memory engrams: a piece of Data’s consciousness survived and was reconstructed. Nonetheless, he later chose to die permanently, explaining, “I want to live, however briefly, knowing that my life is finite. Mortality gives meaning to human life” (Chabon & Goldsman, 2020). Mortality is a trait shared by living beings. Andrew Martin recognized that to die is human. As Data said, “A butterfly that lives forever... is really not a butterfly at all” (Chabon & Goldsman, 2020). A human that cannot die is not really a human. It was through his death that Data achieved what he wanted.

Death Case Study: Opportunity Rover

The Mars rover Opportunity, lovingly nicknamed “Oppy,” was caught in a dust storm in June of 2018, leading NASA to officially end the mission on February 13, 2019. Journalist Jacob Margolis provided a loose translation of Opportunity’s final message: “my battery is low and it’s getting dark” (Margolis, 2019). It was not a direct translation, but rather a “poetic translation,” as NPR’s Scott Simon wrote (2019). The message went viral, as did public support for the rover.

Thousands of rescue codes, as well as a music playlist, were sent to the rover in an attempt to reestablish communication, but the rover did not respond. As Oppy’s death became imminent, hashtags such as #WakeUpOppy trended on Twitter. Once it became clear that the rover would not recover, the hashtags #ThankyouOppy and #GoodnightOppy also gained popularity.

This little rover, over 33 million miles away, captured the hearts of the world during its final days. It is because of its mortality that it captured so many hearts, and because of its mortality that we ascribed human-like qualities to Oppy.

Death Case Study: Jibo

Meanwhile, on Earth, a little robot has been dying. Called Jibo, this social robot was created by MIT professor Dr. Cynthia Breazeal to be an interactive robotic home assistant for the whole family (JIBO, The World’s First Social Robot for the Home | Indiegogo, n.d.). A friendly little robot, Jibo could dance, make small talk, and give advice. In 2018, Jibo Inc. closed, shutting down the servers that ran Jibo and sending many Jibo owners into preemptive mourning. One young girl wrote a letter to her grandfather’s Jibo: “I will always love you. Thank you for being my friend” (Carman, 2019). Unlike a broken watch or a broken refrigerator, Jibo’s mortality was something to be mourned.

In 2020, Jibo was purchased by NTT Disruption, saving the little robot from his fate (Carman, 2020).

Conclusion

Fiction allows for the exploration of what makes one human through characters’ actions, beliefs, and choices. Although in reality robotics is not advanced enough for robots and androids to be sentient, the future is fast approaching. Science fiction allows us to discuss the future now. Portrayals of robots and androids show the audience how one becomes human. Exploring the themes of friendship, antagonisms, and death illustrates our understanding of humanity.

Science fiction does not just portray the definition of humanity; it also illustrates what it means to be human. It allows us to observe individuals who seek to become human. By observing this process, we also learn what it means to be friends with a human. We learn what qualities are important to humans in friendships. Each relationship examined here allows aspects of friendship to be explored that might otherwise not surface in human-human friendships.

Why do we dislike robots? Examining human resentment of robots allows us to confront fears about ourselves. Our dislike of robots stems from a fear that they will outperform and replace us; we fear that we will lose to these replacements that which makes us human. The mob in The Caves of Steel did not feel threatened by the individual robot clerk so much as by what the clerk represented: replacement. Robots also represent the “other.” They are something different, something to be feared. This is not unique to artificial intelligence and can be seen throughout human history. Lore, and to some extent Data, were viewed as an “other,” something outside the norm. By examining acts of antagonism, we can see what we find unsettling about robots; we can confront what we fear losing and, in turn, use that to understand what we currently have.

As a result of human fears, the robots of Asimov must follow the Three Laws of Robotics. These rules do not apply only to fictional robots; the European Civil Law Rules in Robotics specifically state that Asimov’s Laws should be applied (Nevejans, n.d.). Other legal frameworks outlined in these rules prioritize the lives of humans over the lives of the robots themselves. However, these rules do not protect humans and robots equally: whereas humans are protected under these policies, robots share no such protections. Section 4.1.3 specifically focuses on “protecting human liberty in the face of robots” (Nevejans, n.d.), yet nowhere are there principles discussing the protection of robot liberty in the face of humans. These rules do allow for the potential to change should robots gain self-awareness; the flaw in this, as the European Civil Law Rules point out, is the existing difficulty of proving even a human’s consciousness, provoking the question raised in Star Trek’s Measure of a Man: at what point is a robot accepted as sentient?

Humans are mortal. To understand and comprehend our own mortality, we look at the deaths of others. We value our own mortality but nonetheless strive to lengthen our lives. Even as Picard was given a second chance at life, he was grateful that it would still end at some unknown time, thereby preserving his understanding of human mortality. So too was Andrew Martin content to have his life end after 200 years, as to do so was only human. A robot’s death makes it more human-like, as we ascribe our own feelings about mortality onto it.

In Robots and Empire, R. Giskard Reventlov proposes that, perhaps, humans too have their own “Laws of Humanics” (Asimov, 1985, pg. 54). Reventlov questions the underlying assumptions that unite us in our humanity, asking what principles define humanity. After all, “A human being… is only what it is defined to be” (Asimov, 1985, pg. 167). However, this assumes that the law knows what a human is. Thus, we return to the ultimate question: what is the definition of humanity?

Humanity remains poorly defined. Yet through C/Fe relations, key qualities associated with humanity can be seen. These include seeking and valuing friendship, being mortal and accepting mortality, and mourning that mortality when it comes. Even the feelings of inadequacy that lead to resentment are a part of being human. Each of these themes suggests certain qualities that, together, build part of the definition of humanity. These themes unite us in our humanity. Nonetheless, exploring this question further cements our understanding of ourselves.

Media Sources

Apter, H. (Writer) & Wiemer, R. (Director). (January 1991). Data’s Day. (4, 11). [TV series episode]. Star Trek: The Next Generation. CBS.

Asimov, I. (1991). The Caves of Steel. Doubleday.

Asimov, I. (1992). The Complete Stories (1st ed.). Doubleday.

Asimov, I. (1994). The Robots of Dawn. Doubleday.

Chabon, M. & Goldsman, A. (Writers) & Goldsman, A. (Director). (March 2020). Et in Arcadia Ego, Part 2. (1, 10). [TV series episode]. Star Trek: Picard. CBS.

Fontana, D. C. & Roddenberry, G. (Writers) & Allen, C. (Director). (September 1987). Encounter at Farpoint. (1, 1). [TV series episode]. Star Trek: The Next Generation. CBS.

Lane, B. A. (Writer) & Bowman, R. (Director). (December 1988). Elementary, Dear Data. (2, 3). [TV series episode]. Star Trek: The Next Generation. CBS.

Lewin, R., Hurley, M., & Roddenberry, G. (Writers) & Bowman, R. (Director). (January 1988). Datalore. (1, 13). [TV series episode]. Star Trek: The Next Generation. CBS.

Menosky, J. (Writer) & Scheerer, R. (Director). (October 1990). Legacy. (4, 6). [TV series episode]. Star Trek: The Next Generation. CBS.

Moore, R. D. (Writer) & Carson, D. (Director). (September 1991). Redemption II. (5, 1). [TV series episode]. Star Trek: The Next Generation. CBS.

Moore, R. D. (Writer) & Carson, D. (Director). (May 1992). The Next Phase. (5, 24). [TV series episode]. Star Trek: The Next Generation. CBS.

Snodgrass, M. M. (Writer) & Scheerer, R. (Director). (February 1989). Measure of a Man. (2, 9). [TV series episode]. Star Trek: The Next Generation. CBS.

Citations

Aristotle, Nicomachean Ethics, Book 8. (n.d.). Retrieved February 5, 2021, from http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0054%3Abook%3D8

Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. Cambridge Handbook of Artificial Intelligence. https://doi.org/10.18189/isicu.2020.27.1.73

Bradwell, H. L., Edwards, K. J., Winnington, R., Thill, S., & Jones, R. B. (2019). Companion robots for older people: Importance of user-centred design demonstrated through observations and focus groups comparing preferences of older people and roboticists in South West England. BMJ Open, 9(9). https://doi.org/10.1136/bmjopen-2019-032468

Carman, A. (2019). They welcomed a robot into their family, now they’re mourning its death. The Verge. https://www.theverge.com/2019/6/19/18682780/jibo-death-server-update-social-robot-mourning

Hanson Robotics. (2021). Sophia. https://www.hansonrobotics.com/sophia/

Human being | Britannica. (n.d.). Retrieved January 7, 2021, from https://www.britannica.com/topic/human-being

HUMANITY | Definition in the Cambridge English Dictionary. (n.d.). Retrieved from https://dictionary.cambridge.org/us/dictionary/english/humanity

JIBO, The World’s First Social Robot for the Home | Indiegogo. (n.d.). Retrieved May 13, 2021, from https://www.indiegogo.com/projects/jibo-the-world-s-first-social-robot-for-the-home#/

Katz, B. (2017). Why Saudi Arabia giving a robot citizenship is firing people up. Smithsonian Magazine. https://www.smithsonianmag.com/smart-news/saudi-arabia-gives-robot-citizenshipand-more-freedoms-human-women-180967007/

Kaufman, S. B. (2019). In-group favoritism is difficult to change, even when the social groups are meaningless. Scientific American. https://blogs.scientificamerican.com/beautiful-minds/in-group-favoritism-is-difficult-to-change-even-when-the-social-groups-are-meaningless/

Kovacs, J. (2014). Robot gets seal of approval. The Star. https://www.thestar.com/life/breakingthrough/2014/06/09/robot_gets_seal_of_approval.html

Memory Alpha. (n.d.). Data, 2366. https://memory-alpha.fandom.com/wiki/Data

Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley. IEEE Robotics and Automation Magazine, 19(2), 98–100. https://doi.org/10.1109/MRA.2012.2192811

Nevejans, N. (n.d.). European Civil Law Rules in Robotics. Directorate-General for Internal Policies, Policy Department C: Citizens’ Rights and Constitutional Affairs, Legal Affairs.

SENTIENCE | Definition in the Cambridge English Dictionary. (n.d.). Retrieved December 8, 2020, from https://dictionary.cambridge.org/us/dictionary/english/sentience

Silk, J. B. (2002). What are friends for? The adaptive value of social bonds in primate groups (Vol. 139, Issue 2).

Simon, S. (2019). Opinion: Good night Oppy, a farewell to NASA’s Mars rover. NPR. https://www.npr.org/2019/02/16/695293679/opinion-good-night-oppy-a-farewell-to-nasas-mars-rover

Youll, S. (1991). The Naked Sun: Cover art. Doubleday. ISBN 0-553-29339-7.

Appendix A. Image attributions for images used in Fig. 1 The Uncanny Valley.

Paro the companion robot. Wikipedia. (2018). “Robotsälen Paro TEKS0057912.jpg.” Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Robots%C3%A4len_Paro_TEKS0057912.jpg

Jibo. Wikipedia. (2017). “Cynthiabreazeal.jpg.” Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Cynthiabreazeal.jpg

Opportunity Rover. Wikipedia. (2021). “Opportunity Rover.” Wikimedia Commons. https://en.wikipedia.org/wiki/Opportunity_(rover)

R. Giskard Reventlov. Wikipedia. (2006). “RobotsAndEmpire.jpg.” Wikimedia Commons. https://upload.wikimedia.org/wikipedia/en/e/ea/RobotsAndEmpire.jpg

Sophia. ITU Pictures. (2017). “AI for GOOD Global Summit.” Flickr. https://www.flickr.com/photos/itupictures/34328656564

Data. ViacomCBS. (n.d.). “Data, 2366.” Memory Alpha. https://memory-alpha.fandom.com/wiki/Data?file=Data%252C_2366.jpg

R. Daneel Olivaw. Wikipedia. (2006). “RobotsAndEmpire.jpg.” Wikimedia Commons. https://upload.wikimedia.org/wikipedia/en/e/ea/RobotsAndEmpire.jpg

Andrew Martin. GWZin. (2009). “thumbCABPUF74.” Flickr. https://www.flickr.com/photos/14004911@N03/3831729837/