
Enabling Value Sensitive AI Systems through Participatory Design Fictions

Q. Vera Liao and Michael Muller
IBM Research
[email protected], [email protected]

Abstract

Two general routes have been followed to develop artificial agents that are sensitive to human values: a top-down approach to encode values into the agents, and a bottom-up approach to learn from human actions, whether from real-world interactions or stories. Although both approaches have made exciting scientific progress, they may face challenges when applied to the current development practices of AI systems, which require an understanding of the specific domains and specific stakeholders involved. In this work, we bring together perspectives from the human-computer interaction (HCI) community, where designing technologies sensitive to user values has been a longstanding focus. We highlight several well-established areas focused on developing empirical methods for inquiring into user values. Based on these methods, we propose participatory design fictions to study the user values involved in AI systems, and present preliminary results from a case study. With this paper, we invite the consideration of user-centered value inquiry and value learning.

Introduction

With the rapid development of AI technologies, there is growing interest in both the scientific community and the public in embedding ethics and human values in autonomous agents (Russell et al. 2017). A main thread of research has explored adapting decision frameworks by embedding values as rules, constraints or preference structures (Rossi 2015; Greene et al. 2016). Most AI researchers have focused on the computational implementation of these models and delegated the specification of values to classic ethical theories such as utilitarianism and deontology. However, to allow AI systems to work in practice, one may need to specify values or constraints that are consistent with the values of actual users and their communities (Cheon and Su 2016). Considering the complexities of human value systems (Schwartz 2012), even for people in a shared context (Draper et al. 2014), there are many arguments why being bounded by, or optimizing for, some utilitarian values such as happiness or welfare may fail, or rapidly become obsolete, for near-term AI technologies working in narrow domains.

To complement the top-down approach, which may fail to account for nuanced choices with rigid specifications, some researchers have explored bottom-up approaches to learn tacit values from human actions, whether observed or interacted with (Hadfield-Menell et al. 2016; Riedl and Harrison 2016). Naturally, such approaches require large amounts of human choice data. A promising direction that AI researchers have started to explore is learning from "crowdsourced" choices in story scenarios, where the story setup constrains the action space. For instance, the "trolley problem" is a widely used example of an ethical dilemma, for which large crowdsourced datasets have been created in the hope of informing the development of "moral machines" (Bonnefon et al. 2016; Shariff et al. 2017). However, before we can map out a complete picture of human activities, such approaches are not ready to relieve us of the need to understand values in the specific domains, and for the specific stakeholders, that are directly or indirectly impacted by the technology (Friedman et al. 2013).

Understanding the values of stakeholders, task domains and different user communities is, in fact, a core undertaking of the HCI community, with the dedicated research area of value sensitive design (VSD) (Friedman 1996; Friedman et al. 2013). The tension between the top-down approach of seeking normative, utilitarian values and the bottom-up approach of empirically inquiring into user values is also a familiar one to the HCI community (Borning and Muller 2012). In the following sections, we review this line of work and methods in participatory design (PD) for studying the diverse values of stakeholders. We also introduce design fictions (DF) as a speculative design method to understand user needs for technologies that do not yet exist, i.e., emerging or near-future technologies. Based on these approaches, we propose a set of user engagements with participatory design fictions (PDF), soliciting stakeholder input with strategically incomplete fictions, to analyze value issues, including but not limited to the types of values and their priority orders, for AI applications.

We consider PDF to potentially complement both the top-down and bottom-up approaches for computationally embedding human values in AI models. While it is straightforward to encode the value issues identified from analysis of PDF as constraints or preference structures, we also see a path to data-driven approaches by iteratively creating better scenarios for collecting human decision data from which agents can learn. That is, instead of attempting to generalize machine moral decisions from a single scenario such as the trolley problem, we seek to create more application-specific scenarios to make the value learning task more tractable, and more applicable. We would like to draw a parallel with user-centered design in the conventional technology development process, and to invite the consideration of user-centered model development to make AI technologies more sensitive to the values of their diverse stakeholders. Just like user-centered design, this process can be iterative, moving from open-ended discursive fictional spaces for a formative understanding of the scope of values, to more well-defined scenarios (like the trolley problem) for quantitatively validating and identifying nuanced value issues.
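To make the first of these paths concrete, below is a minimal sketch, under invented assumptions, of how value issues elicited through PDF might be encoded as a preference structure with hard constraints over an agent's candidate actions. Every value name, weight, score and action here is a hypothetical placeholder rather than an output of any study; only the shape of the encoding is the point.

```python
# A hypothetical encoding of elicited values as a weighted preference
# structure plus hard constraints. All names and numbers are invented
# placeholders; a real system would obtain them from a value inquiry
# with the relevant stakeholders.

# Elicited value priorities for one stakeholder group (sum to 1).
VALUE_WEIGHTS = {"safety": 0.5, "privacy": 0.3, "autonomy": 0.2}

# Hard constraints: actions that violate a non-negotiable value.
FORBIDDEN = {"share_data_without_consent"}

# Candidate actions, scored on how well each supports each value.
CANDIDATE_ACTIONS = {
    "notify_caregiver":           {"safety": 0.9, "privacy": 0.4, "autonomy": 0.6},
    "share_data_without_consent": {"safety": 0.9, "privacy": 0.0, "autonomy": 0.2},
    "ask_user_first":             {"safety": 0.6, "privacy": 0.9, "autonomy": 0.9},
}

def choose_action(actions, weights, forbidden):
    """Return the admissible action with the highest weighted value score."""
    admissible = {a: s for a, s in actions.items() if a not in forbidden}
    return max(admissible, key=lambda a: sum(weights[v] * admissible[a][v]
                                             for v in weights))

print(choose_action(CANDIDATE_ACTIONS, VALUE_WEIGHTS, FORBIDDEN))
# prints: ask_user_first
```

The weights and the forbidden set are exactly the quantities that, in our view, must be supplied per domain and per stakeholder group rather than assumed from a general ethical theory.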

HCI Approaches to Values

We start by introducing three established HCI areas that are concerned with studying and designing for user values. We note that each scholarly tradition has long threads of theories and methods that go beyond what is reviewed here; our primary focus is on value inquiry and its pragmatic use for AI technologies.

Value Sensitive Design

Originally introduced by Batya Friedman (1996), value sensitive design (VSD) is "a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process". VSD was developed in an effort to consider human values with ethical import as a central technology design criterion, alongside traditional design criteria such as performance and usability. Friedman et al. developed a "tripartite methodology" consisting of iterations of conceptual, empirical and technical investigations. Specifically, conceptual investigations are concerned with the careful conceptualization of the fundamental value issues raised by the technology at hand, asking questions such as "what values and whose values should be supported by the design process?" Empirical investigations further complement this understanding through investigations of the human activities in which the technical artifact is situated, asking more fine-grained questions such as "how do stakeholders apprehend values in the context? How do they prioritize different values?" Lastly, technical investigations consider how technological properties and design alternatives support or hinder the human values identified in the conceptual and empirical analysis. For the scope of these investigations, VSD emphasizes the involvement of both direct stakeholders (direct users) and indirect stakeholders (those who may be indirectly impacted).

In its essence, VSD centers upon the explication of value issues impacting the diverse stakeholders of a technology, and upon embedding those values as goals and constraints in the design process. This shares much of the same spirit with the currently pursued approaches for embedding human values in AI models. In an early paper, VSD recommended 12 human values that hinge on ethical theories, including human welfare, ownership, autonomy, privacy, etc. (Friedman and Kahn 2003). But critique ensued, including a lack of grounds to proclaim normative values amid complex technologies and ever-changing social contexts (Manders-Huits 2011), and biases towards western, liberal and civic values (Borning and Muller 2012). More recently, a set of empirical innovations has emerged within the VSD framework for soliciting values-oriented considerations from stakeholders using interviews, card-based exercises, etc. (e.g., Friedman and Hendry 2012; Woelfer et al. 2011). Our work in what we call value sensitive inquiry is built on this foundation.
Participatory Design

Participatory design (PD) is a set of theories and practices for bringing end-users as full participants into activities leading to technological products (Bødker et al. 2014). A main focus of PD research is the development of diverse tools and techniques to bridge participants' current practices and the abstract design space of new or near-future technologies, and to enable fruitful dialogues between developers/researchers and end users (Kensing and Blomberg 1998; Muller and Druin 2010). Narrative structures play a key role in serving these goals by avoiding abstract representation, and by allowing end users and designers to experiment with various design possibilities (Kensing and Blomberg 1998). Narratives in PD have taken various forms, including scenarios, story-telling (e.g., co-creation of photographs and drama), game playing (e.g., card games) and co-creation of prototypes (Muller and Druin 2010).

In the next section, we will consider adapting tools and techniques from PD researchers' repertoire for value inquiry or value learning. The bottom-up approaches of learning values from human choices in scenarios of ethical dilemma may be considered a distributed form of participatory design, although PD has traditionally adopted more qualitative analysis of the narratives. We note that PD has its roots in value issues, with the political movement in the 1970s to entitle workers, along with management, to determine what technologies to bring into the workplace (Winograd and Kuhn 1996). PD thus embodies moral commitments to diverse human values, while taking a more descriptive, bottom-up approach to integrating them in technology design. PD potentially offers solutions to the recent warnings raised by researchers about a techno-centric view of AI system design, which puts users and society in a passive role of accepting and adapting to these technologies (Šabanović 2010; Cheon and Su 2016).
Design Fictions

A unique issue in considering the values involved in AI systems, at least at present, is their future orientation. This is not only because many progressing AI technologies are not yet a part of ordinary people's everyday life (e.g., self-driving cars, humanoid robots), but also because of the rich depiction of AI technologies in popular culture's imagined futures (e.g., science fiction books and movies). Although some argue that this pre-exposure to fictional experience may pose challenges for the design of AI technologies, it may in fact provide an experimental space for value issues, which are considered more abstract and relatively stable constructs.

This idea converges with the recent development of Design Fictions (DF), a genre of speculative design first introduced by Sterling's definition of the "deliberate use of diegetic prototypes to suspend disbelief about change" (2012). In particular, DF has been embraced for revealing values associated with new technologies (Dourish and Bell 2014; Tanenbaum et al. 2016); for opening a space of diverse and polyvocal speculations about future technologies (Blythe 2014); and for engaging with and inquiring about specifications of these values (Draper et al. 2014). Compared to conventionally used scenario-based design methods, DF focuses more on constructing an engaging fictional space for speculating and provoking, with some calling it "fieldwork of the future" (Odom et al. 2012).

We introduce fictional narratives to explore the value issues of AI technologies as a natural extension of the fictional worlds people have already been exposed to through popular culture. Importantly, we argue for carefully constructed fictions that put the targeted technology and its personal and social contexts at the focal point. It is necessary to construct technology-specific fictions because machine ethics may be a complicated issue, and it may not be sufficient to generalize from human ethical standards. For example, in a recent Science paper, researchers found through a survey study that the trolley problem is further complicated when a self-driving car replaces the human controlling the lever (Bonnefon et al. 2016).
Participatory Design Fictions (PDF): Towards Value Inquiry for AI Technologies

In summary, we introduce perspectives from the HCI communities to begin to think about empirical methods for enabling value inquiry and value learning for AI applications. We employ the stakeholder concepts, and the iterative combination of conceptual and empirical investigations of values, from Value Sensitive Design. We borrow tools from Design Fictions to elicit the value issues involved in future-oriented AI technologies. Finally, we use Participatory Design methods to de-center the inquiry from technology experts, and to re-center the inquiry on the users.

In this section, we consider some examples of participatory design techniques and adapt them for introducing fictional experience spaces where potential stakeholders can individually or collectively envision, design, and critique emerging or future AI technologies. We expect value issues to be articulated in the discursive story world in a structured way, as enforced by carefully constructed story boundaries. Ultimately, this research area should strive to provide a set of guidelines for constructing such boundaries and enabling effective value inquiry. Meanwhile, we also advocate an iterative process: start with more formative inquiry in open-ended discursive spaces (e.g., providing just the beginning of a story and its main characters), asking questions such as "what values and whose values should be considered?". Then one can refine the storyline to narrow down to more analytical inquiry about the key value-sensitive system actions identified, asking fine-grained questions such as "how to resolve the diverse or conflicting values in this situation?" Innovative setups that allow the social emergence of critical value issues, such as collective deliberation platforms and social games, can be adopted. It is also possible to work iteratively in parallel with the AI model development process, in which the fictional experience and scenario-based model testing can be engaged at the same time.

Fictions as Probes. A main usage of stories in PD is to elicit users' evaluations, as probes, in user interviews or focus groups (e.g., Sato and Salvador 1999). In this way, we could present fictions regarding the targeted technology as starting points for a conversation. Questions addressed to users could focus on the support or hindrance of values that they perceive in the stories. For example, Draper et al. (2014) used fictional probes to elicit ethical evaluations of eldercare robots in focus groups. Cheon and Su (2016) used short fictions as evaluation cases to solicit roboticists' considerations of value issues in their development work. This approach can be considered a more close-ended employment of fictions for value inquiry.

Fictions as Conceptual or Literal Guerrilla Theatre. Soliciting tacit value issues is a challenging task, and PD researchers have pursued more critical approaches by inviting users to change the stories. Boal's Theatre of the Oppressed employed stories (enacted as brief dramas, portrayed in the street) as a critique of power (1992). Surprised passers-by were encouraged to rewrite the story, or to participate in its changed enactment, so that it would have an outcome that they preferred. PD researchers derived similar theatre approaches as a means to critique and change a proposed user experience (Muller et al. 1994). For work with future-oriented AI technologies, plot devices, whether in written or performative forms, can be adopted to invite users to engage in value (re)configurations by rewriting characters, plots and outcomes.

Fictions as Participatory Constructions. A more ambitious approach in PD is to solicit stories written by users. For instance, Beeson and Miskelly (2000) advocate user-created stories through hypermedia technologies such that "plurality, dissent, and moral space can be preserved." Druin engages children to construct their own narratives in a playful technology environment (2002). Prost et al. described a structured, participatory workshop process to teach students to create design fictions for sustainability technologies (2015). We envision engaging users in completing their own versions of stories by posing value-oriented questions as starting points. For example, a fictional character facing a complex choice would require explicit consideration and articulation of values, and would also encourage fictional simulation of the outcomes of different "value configurations". In the next section, we present a preliminary case study experimenting with this approach.

Fictions as Group Co-Creations. There are two reasons to consider the collective creation of fictions. One is the possibility of social emergence of critical value issues, by preserving and building upon the social traces of others. In a recent large-scale online experiment to collect moral decision data for the trolley problem, MIT researchers introduced a mechanism of crowd-created scenarios to be judged by others, who could be inspired to produce variations centered around core value issues (Shariff et al. 2017). By making articulated values visible and actionable to other participants, one can thus enable a collaborative process that makes critical value issues emergent. Another reason to pursue group co-creation is that stakeholders may also interact with AI technologies as groups or in social contexts, so we need ways to collectively speculate about these technologies. In PD, photodocumentaries have been co-created by communities. For example, to address the problem that "rural women are often neither seen nor heard," Wang et al. (1996) invited Chinese village women to articulate their lives through photo novellas, with the goal of influencing policy-makers. We could leverage digital technologies to enable group co-creation of values-oriented stories about AI technologies. For example, online games have recently been considered as collective venues for design fictions (Tanenbaum et al. 2012).

Data from PDF for AI Models: Possible Directions

The user narratives collected from PDF should be considered data. Analyses of these data can generate high-level insights into the value issues around a targeted AI technology, and can also be used as direct input for learning-based models. Following the tradition of inductive content analysis in social science research, Grounded Theory based analysis can be used to identify value issues in the collected narratives. Grounded Theory provides a set of rigorous manual coding procedures leading to the emergence of conceptual categories (Charmaz 2016). Recently, researchers have started exploring the parallel between unsupervised learning for pattern discovery and Grounded Theory based manual analysis of relatively "small data" (Muller et al. 2016; Baumer et al. 2017). We consider the complementarity of qualitative value inquiry and data-driven value learning to be a promising future research area for analyzing narratives collected from PDF.
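To illustrate the machine side of this parallel, here is a minimal sketch, with an invented four-sentence corpus, of using topic modeling to surface candidate value themes from collected narratives for a human coder to inspect and refine. This is one plausible instantiation using scikit-learn, not a description of the analyses in the studies cited above.

```python
# A toy sketch of unsupervised pattern discovery over PDF narratives:
# topic modeling proposes candidate "value themes" that a grounded-
# theory coder could inspect, rename, merge, or reject. The corpus is
# invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

narratives = [
    "The nanny bot must never leave the baby unattended; safety comes first.",
    "I want control over what the bot records inside our home.",
    "The bot should speak our language and respect our family's habits.",
    "Recording video of the children worries me; privacy matters most.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(narratives)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Show the top terms per topic as candidate themes for manual coding.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"candidate theme {i}: {', '.join(top_terms)}")
```

On narratives of this scale, such themes would at best be prompts for the coder; the rigor still comes from the manual grounded-theory procedure.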
Another area to explore is to complement reinforcement learning based frameworks for AI ethics, for example, to fill gaps in specifying utility or reward functions based on empirical investigation. The idea of inverse reinforcement learning (IRL), which models the reward function as a probabilistic inference problem from observed human actions or feedback, is particularly interesting. If one can map the agent's actions, and even the learned reward functions, to a fictional space, we can envision many possibilities for eliciting feedback on these "real stories" of AI agents under ethical development. This is in line with the recent proposal of cooperative inverse reinforcement learning, which leverages active learning, active teaching and communicative actions to achieve effective value alignment of AI systems (Hadfield-Menell et al. 2016).
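As a toy illustration of the inference idea behind such IRL-style value learning, the sketch below assumes participants choose among scenario options approximately rationally under an unknown weighting of value features, and recovers the weighting that makes their observed choices most likely. The features, options and choices are invented; real data would come from decision points in the fictions.

```python
# A minimal sketch of inferring value weights from observed choices,
# in the spirit of IRL: a linear reward over value features and a
# softmax (Boltzmann-rational) choice model. All data are invented.
import numpy as np

# Each scenario offers options described by value features
# (here: safety, privacy); each observation records the chosen index.
observations = [
    (np.array([[0.9, 0.2], [0.5, 0.9]]), 1),
    (np.array([[0.8, 0.1], [0.3, 0.8]]), 1),
    (np.array([[0.9, 0.5], [0.2, 0.6]]), 0),
]

def log_likelihood(w):
    """Log-probability of the observed choices under weight vector w."""
    total = 0.0
    for options, chosen in observations:
        utilities = options @ w
        probs = np.exp(utilities - utilities.max())
        probs /= probs.sum()
        total += np.log(probs[chosen])
    return total

# With two values, a grid search over the weight simplex suffices.
grid = np.linspace(0.0, 1.0, 101)
best = max(grid, key=lambda a: log_likelihood(np.array([a, 1.0 - a])))
print(f"inferred weights: safety={best:.2f}, privacy={1 - best:.2f}")
```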
PDF: A Case Study

As a first step, we have conducted multiple experiments using incomplete design fictions to engage users in writing their own stories about future AI applications. These experiments aimed to explore how future users react to the probe of strategically incomplete stories, and the scope of the generated narratives. By design, each fiction begins with a setting, one or more characters, and a plot. The fiction contains brief diegetic aspects, such as suggesting the concerns and motivations of the protagonist(s). The stories center around the use of the targeted AI technology and end prematurely at an unresolved decision point. We invite the participants to complete the story, or to provide details that are implied by the story (e.g., "what choice do you think the character should make?" or "how would you design the robot?"). By posing value-oriented questions within a rich but bounded choice space, we hope to elicit narratives that articulate the values explicitly or implicitly involved in using the technology. Below, we present one example of such an incomplete story that we used:

The Nanny Bot

We had both taken parental leave with Ariel, our new baby, but now we needed to return to full-time work. Because our work schedules were unpredictable, we needed a part-time, on-request nanny to cover an extra 1-2 hours at the end of the day.

So of course we went to the Bot Store website, where we could order and personalize our nanny bot. There were so many choices! Our close friends had made their nanny specs viewable. We copied their nanny specs as our starting point, and then we began to personalize those specs to suit our family. We examined some of the attributes that we could choose, like personality, skin color, gender……

[complete the story here]

When we’d finished the choices that were important to Results collected from the small-scale studies are encourag- us, we accepted the other personalization from our ing, demonstrating the potentials of using strategically de- friends' records. And then we were done! We were ex- signed incomplete fictions to elicit rich and complex value cited to be bringing the nanny-bot into our lives, and into issues, especially ones that might be ignored if we took, in- Ariel's young life. stead, a purely normative view on values. Meanwhile, the This incomplete story involves a targeted AI technology in observation suggests that informants’ reactions to PDF are a specific domain (childcare). The setup is a more distant highly diverse and individualized. On the one hand, this im- future than the current technological capabilities to allow plies challenges for developing principled methods to con- broader value inquiries. The story is left blank at the shop- struct the fictions, as well as for data-driven approaches to ping decision point, a familiar decision context that both discover values from narratives. On the other hand, the ob- serves to reduce abstraction of the technological space and servations further highlight the needs to start from more imposes a need to explicate values. In the last paragraph, we open-ended formative inquiry without presuming values or mention the consideration of “friends’ records” to nudge for even the narrative frameworks. In some of our encounters, social considerations (i.e., “what I value in relation to what participants questioned our conceptualization of opposing others value”). The brief story ends at a prospective point values to begin with. For example, in another incomplete that makes further exploration and simulating the outcomes fiction we experimented with inquiring about labor issues, of the decision configuration possible. by asking participants how the owner of an apple orchard In our preliminary exploration, we experimented with dif- should make choices between robot apple pickers and mi- grant workers. Participants pushed back the binary choice ferent formats of inquiries. To date, we engaged six individ- ual volunteers for interviews and writing tasks, and a short and suggested creative solutions that imply more complex group design workshop at the Human Computer Interaction value configurations (e.g., “employ human workers for the Consortium (HCIC) 2017. With the interview format, the re- most expensive apple varieties…and robots for the others”). searcher could actively engage in interactive inquiries, and Lastly, we consider the possibilities of extending the fic- we found it effective in soliciting the reasoning behind the tional setup beyond a written or conversational format, and choices, which in itself provides value relevant information, beyond individual creations. For example, the experience of including priority orders and conflicts of values. In the writ- “bot shopping” and “bot store” can be readily developed into ing task format, participants were more willing to engage in performative actions and spaces for more immersive user exploring the outcomes of value choices by writing lengthier engagement that can be participated by groups (similar to the User Enactments, physical objects and spaces used by Cheon, E., and Su, N.M. 2016. Integrating roboticist values into a Odom et al. to investigate designs of radical technologies value sensitive design framework for humanoid robots. Proceed- (2012)). 
Results collected from these small-scale studies are encouraging, demonstrating the potential of strategically designed incomplete fictions to elicit rich and complex value issues, especially ones that might be ignored if we took, instead, a purely normative view on values. Meanwhile, our observations suggest that informants' reactions to PDF are highly diverse and individualized. On the one hand, this implies challenges for developing principled methods to construct the fictions, as well as for data-driven approaches to discover values from the narratives. On the other hand, the observations further highlight the need to start from more open-ended formative inquiry, without presuming values or even the narrative frameworks. In some of our encounters, participants questioned our conceptualization of opposing values to begin with. For example, in another incomplete fiction we experimented with inquiring about labor issues, asking participants how the owner of an apple orchard should choose between robot apple pickers and migrant workers. Participants pushed back on the binary choice and suggested creative solutions that imply more complex value configurations (e.g., "employ human workers for the most expensive apple varieties…and robots for the others").

Lastly, we consider the possibilities of extending the fictional setup beyond a written or conversational format, and beyond individual creations. For example, the experience of "bot shopping" and the "bot store" can readily be developed into performative actions and spaces for more immersive user engagement in which groups can participate (similar to the User Enactments, physical objects and spaces used by Odom et al. to investigate designs of radical technologies (2012)). The possibilities of digital games are particularly promising for gathering large-scale narrative data and for the social emergence of value issues. We note that popular simulation games such as The Sims already embody some of the key elements, such as the diegetic aspects, decision points and bounded spaces. By employing game plots with carefully constructed value prompts, not only can one elicit more natural values-oriented narratives, but it is also possible to enable large-scale collective co-creation of "possible, probable and preferred futures", providing empirically and socially validated rich values data for AI systems to evolve their moral status.

Conclusion

The goal of embedding human values in AI systems is critical and inevitable. As the scientific community develops computational solutions, so should we make progress in innovating empirical methods to elicit and understand the values of different user communities and of different tasks (IEEE 2016). This work represents an effort in that direction by proposing Participatory Design Fictions. We build our foundation on three established HCI areas studying user value issues, and consider adapting existing user research methods to: 1) enable the explication of broad and tacit value issues around future-oriented technologies by leveraging fictional spaces and diegetic elements; 2) more closely serve value inquiry and value learning by leveraging strategically incomplete stories that center upon the targeted technology and critical decision points. We present preliminary insights from our case studies and invite joint effort across scientific communities toward user-centered approaches to value inquiry and value learning for AI systems.

References

Baumer, E. P. S., Mimno, D., Guha, S., Quan, E., and Gay, G. K. (2017). Comparing grounded theory and topic modeling: Extreme divergence or unlikely convergence? Journal of the Association for Information Science and Technology, 68(6), 1397-1410.

Beeson, I., & Miskelly, C. (2000, January). Dialogue and dissent in stories of community. In PDC (pp. 1-10).

Boal, A. (1992). Games for actors and non-actors (A. Jackson, trans.). London: Routledge.

Bødker, K., Kensing, F., and Simonsen, J. (2014). Participatory IT design: Designing for business and workplace realities. MIT Press.

Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.

Borning, A., & Muller, M. (2012, May). Next steps for value sensitive design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1125-1134). ACM.

Cheon, E., and Su, N. M. (2016). Integrating roboticist values into a value sensitive design framework for humanoid robots. In Proceedings of HRI 2016, Conference on Human-Robot Interaction (pp. 375-382). ACM Press.

Dourish, P., & Bell, G. (2014). "Resistance is futile": Reading science fiction alongside ubiquitous computing. Personal and Ubiquitous Computing, 18(4), 769-778.

Draper, H., Sorell, T., et al. (2014, October). Ethical dimensions of human-robot interactions in the care of older people: Insights from 21 focus groups convened in the UK, France and the Netherlands. In International Conference on Social Robotics (pp. 135-145). Springer, Cham.

Friedman, B. (1996). Value-sensitive design. Interactions, 3(6), 16-23.

Friedman, B., & Kahn Jr., P. H. (2003). Human values, ethics, and design. In The Human-Computer Interaction Handbook (pp. 1177-1201).

Friedman, B., Kahn Jr., P. H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In Early Engagement and New Technologies: Opening Up the Laboratory (pp. 55-95). Springer Netherlands.

Greene, J., Rossi, F., Tasioulas, J., Venable, K. B., & Williams, B. C. (2016, February). Embedding ethical principles in collective decision support systems. In AAAI (pp. 4147-4151).

Hadfield-Menell, D., Russell, S. J., Abbeel, P., & Dragan, A. (2016). Cooperative inverse reinforcement learning. In Advances in Neural Information Processing Systems (pp. 3909-3917).

IEEE Standards Association. (2016). The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.

Kensing, F., & Blomberg, J. (1998). Participatory design: Issues and concerns. Computer Supported Cooperative Work (CSCW), 7(3-4), 167-185.

Manders-Huits, N. (2011). What values in design? The challenge of incorporating moral values into design. Science and Engineering Ethics, 17(2), 271-287.
Muller, M. J., Wildman, D. M., & White, E. A. (1994, April). Participatory design through games and other group exercises. In Conference Companion on Human Factors in Computing Systems (pp. 411-412). ACM.

Muller, M. J., and Druin, A. (2010). Participatory design: The third space in HCI. In Human-Computer Interaction: Development Process (3rd ed., pp. 165-185).

Muller, M., Guha, S., Baumer, E. P., Mimno, D., & Shami, N. S. (2016, November). Machine learning and grounded theory method: Convergence, divergence, and combination. In Proceedings of the 19th International Conference on Supporting Group Work (pp. 3-8). ACM.

Odom, W., Zimmerman, J., Davidoff, S., Forlizzi, J., Dey, A. K., & Lee, M. K. (2012, June). A fieldwork of the future with user enactments. In Proceedings of the Designing Interactive Systems Conference (pp. 338-347). ACM.

Riedl, M. O., & Harrison, B. (2016, February). Using stories to teach human values to artificial agents. In AAAI Workshop: AI, Ethics, and Society.

Rossi, F. (2015, September). Safety constraints and ethical principles in collective decision making systems. In Joint German/Austrian Conference on Artificial Intelligence (pp. 3-15). Springer International Publishing.

Russell, S., Dietterich, T., Horvitz, E., Selman, B., Rossi, F., Hassabis, D., Legg, S., Suleyman, M., George, D., and Phoenix, S. (2017). Letter to the editor: Research priorities for robust and beneficial artificial intelligence: An open letter. AI Magazine, 36(4).

Šabanović, S. (2010). Robots in society, society in robots. International Journal of Social Robotics, 2(4), 439-450.

Sato, S., & Salvador, T. (1999). Playacting and focus troupes: Theater techniques for creating quick, intense, immersive, and engaging focus group sessions. Interactions, 35-41.

Schwartz, S. H. (2012). An overview of the Schwartz theory of basic values. Online Readings in Psychology and Culture, 2(1). https://doi.org/10.9707/2307-0919.1116

Shariff, A., Bonnefon, J. F., & Rahwan, I. (2017). Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour, 1.

Sterling, B. (2012). In Torie Bosch, "Sci-fi writer Bruce Sterling explains the intriguing new concept of design fiction." Slate, 2 March 2012.

Tanenbaum, J., Tanenbaum, K., & Wakkary, R. (2012, May). Steampunk as design fiction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1583-1592). ACM.

Tanenbaum, J. (2014). Design fictional interactions: Why HCI should care about stories. Interactions, 21(5), 22-23.

Winograd, T., & Kuhn, S. (1996). Participatory design. In Bringing Design to Software. Addison-Wesley.

Woelfer, J. P., Iverson, A., Hendry, D. G., Friedman, B., & Gill, B. T. (2011, May). Improving the safety of homeless young people with mobile phones: Values, form and function. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1707-1716). ACM.