Contents

Welcome
Maps
Schedule Overview
- Sun, 3/3
- Mon, 3/4
- Tue, 3/5
- Wed, 3/6
Visits
Tutorials and Workshops
Plenary Talk
Panel Session
Session
Map - Demo & Poster
Late-Breaking Reports & Poster Session
Video Session
Demo Session
Exhibition
Sponsorship
Organizers
Reviewers

Welcome

Hideaki Kuzuoka, HRI'13 General Co-Chair, University of Tsukuba, Japan
Vanessa Evers, HRI'13 General Co-Chair, University of Twente, Netherlands
Michita Imai, HRI'13 Program Co-Chair, Keio University, Japan

Welcome to Tokyo! The Eighth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013) is a highly selective conference that aims to showcase the very best interdisciplinary and multidisciplinary research in human-robot interaction, with roots in robotics, social psychology, cognitive science, HCI, human factors, artificial intelligence, design, engineering, and many more. We invite broad participation and encourage discussion and sharing of ideas across a diverse audience.

Robotics is growing increasingly multidisciplinary as it moves towards realizing capable and collaborative systems that are studied in both laboratory and real-world settings.

Accompanying the full papers are the Late-Breaking Reports, Videos, and Demos. For the LBRs, 95 out of 100 (95%) two-page papers were accepted and will be presented as posters at the conference. For the Videos, 16 of 22 (72%) short videos were accepted and will be presented during the video session. The Demo session is new to our conference: 22 robot systems will be available for all participants to interact with.

Rounding out the program are two keynote speakers who will discuss topics relevant to HRI, Dr. Yuichiro Anzai and Dr. Tomotaka Takahashi. We also have a panel session on Revisioning HRI Given Exponential Technological Growth.
Jodi Forlizzi, HRI'13 Program Co-Chair, Carnegie Mellon University, USA

Concurrent development of the technical, social, and designed aspects of systems, with a concern for how they will improve the world, is needed. Therefore, this year's theme is dedicated to Robots as Holistic Systems, which highlights the importance of an interdisciplinary approach to all of HRI. HRI 2013 focuses on a wide variety of robotic systems that operate in, collaborate with, learn from, and meet the needs of human users in real-world environments.

The conference could not have occurred without the extensive volunteer effort put forth by the organizing committee, program committee, and reviewers. We would also like to thank the keynote speakers and panelists for their participation and attendance. The sponsors of the conference are ACM SIGCHI, ACM SIGART, and the IEEE Robotics and Automation Society. The conference is in cooperation with AAAI and HFES.

Full Papers submitted to the conference were thoroughly reviewed and discussed. The process utilized a rebuttal phase and a worldwide team of dedicated, interdisciplinary reviewers. This year's conference continues the tradition of selectivity, with 26 out of 107 (24%) submissions accepted. Due to the joint sponsorship of ACM and IEEE, papers are archived in both the ACM Digital Library and IEEE Xplore. This year's conference introduces journal special sessions as a new category of technical session: seven papers accepted to the Journal of Human-Robot Interaction will be presented at the conference to showcase their journal-level work.

Finally, we are especially thankful for the hard work of the authors who submitted papers, videos, and demos. HRI is a vibrant community, and there was a large volume of high-quality submissions. This makes reviewing difficult, but ensures a high-quality conference. We hope you will be inspired by this high-quality content and enjoy your stay in Tokyo.

Map - Odaiba

[Area map of Odaiba showing the HRI 2013 venue (Miraikan) and the surrounding area: stations, hotels, parks, restaurants, convenience stores, and clinics. Restaurant, convenience store, and clinic details are listed on the maps that follow.]

Map - Restaurant & Convenience Store

Restaurant

MIRAIKAN:
❶ Restaurant
❷ Cafe
❸ Fastfood Shop

Ship Museum (Fune-no-kagakukan):
❹ Bayside Terrace "Cabin"
❺ Seaside Restaurant "Kaiou"

Time 24 Bldg:
❻ Sky Restaurant "Seagull"
❼ Cafe "La mer"

Aomi Frontier Bldg:
❽ Green's Chef

Telecom Center:
❾ Shinshu Soba Noodle "Soji-boh"
❿ Italian Market "Vario"
⓫ Bar "Roku Uemon"
⓬ Telecom Center Artium "Espresso Americano"
⓭ Cafeteria "Ai House"
⓮ China Restaurant "Touen"
⓯ Meatball & Bowl "Shoya"
⓰ Restaurant & Bar "Precious Tokyo Bay"
⓱ Lunchbox Food "Yatai Deli"

Convenience Store

1 Daily Yamazaki (Tokyo Intl. Exchange Center)
2 Sunkus (Time 24 Bldg)
3 am-pm (Aomi Frontier Bldg)

Map - Clinic

Clinic

Tokyo Seaside Clinic (in Hotel Grand Pacific)
www.ts-clinic.jp
03-5579-0355

Sunfield Clinic (in Time 24 Building)
www.sunfield-c.com
03-3599-3311

Telecom Center Dental Clinic (in Telecom Center)
03-5500-0418

Map - HRI 2013 Miraikan

[Floor plans of the HRI 2013 venue (Miraikan).]

1F: Main Entrance (10:00~18:00), Gate (9:00~10:00, 18:00~20:00), Entrance Hall, Foyer with Registration Desk, OR 1, OR 2, Free Space/Lobby, Cloak Room, Museum Shop, Disaster Prevention Center.

7F: Miraikan Hall, CR 1, CR 2, CR 3, LNG, INH, Garden, Demo Facilities, Storage, Waiting Space (for Demo), Reception Room, Meeting Room, Elevator Lobby.

Map - Coffee Break & Lunch

[7F floor map marking the coffee break and lunch locations.]

Sun, 3/3

For more information, see page 23.

Time | Place | Title | Authors

08:00 | | Registration Open |

09:00~17:00 (Full day) | 1F OR1 | Workshop 1: HRI Face-to-Face: Gaze and Speech Communication | Frank Broz (University of Plymouth), Hagen Lehmann (University of Hertfordshire), Bilge Mutlu (University of Wisconsin-Madison), Yukiko Nakano (Seikei University)

09:00~17:00 (Full day) | 7F LNG | Workshop 2: Design of Human Likeness in HRI from Uncanny Valley to Minimal Design | Hidenobu Sumioka, Takashi Minato (ATR), Yoshio Matsumoto (AIST), Pericle Salvini (BioRobotics Institute, Scuola Superiore Sant'Anna), Hiroshi Ishiguro (Osaka University)

09:00~17:00 (Full day) | 7F CR2 | Workshop 3: Collaborative Manipulation: New Challenges for Robotics and HRI | Anca Dragan (Carnegie Mellon University), Andrea Thomaz (Georgia Institute of Technology), Siddhartha Srinivasa (Carnegie Mellon University)

09:00~12:45 (Half day) | 7F CR3 | Workshop 4: Applications for Emotional Robots | Oliver Damm, Frank Hegel, Karoline Malchus, Britta Wrede (Bielefeld University), Manja Lohse (University of Twente)

09:00~18:00 (Full day) | 7F CR1 | Workshop 5: Human-Robot Interaction Pioneers Workshop | Solace Shen (University of Washington), Astrid Rosenthal-von der Pütten (University of Duisburg-Essen)

Mon, 3/4

Time | Place | Title | Authors

08:00 | | Registration Open |
09:00 | 7F Miraikan Hall | HRI 2013 Welcome |

09:10~10:10 | 7F Miraikan Hall | Session: How Do We Perceive Robots? | Chair: Sara Kiesler (Carnegie Mellon University)
09:10 | | The Influence of Height on Robotic Communication Products | Irene Rae (University of Wisconsin–Madison), Leila Takayama (Willow Garage), Bilge Mutlu (University of Wisconsin–Madison)
09:30 | | Evaluating the Effects of Limited Perception on Interactive Decisions in Mixed Robotic Environments | Aris Valtazanos, Subramanian Ramamoorthy (University of Edinburgh)
09:50 | | Supervisory Control of Multiple Social Robots for Navigation | Kuanhao Zheng, Dylan F. Glas, Takayuki Kanda (ATR), Hiroshi Ishiguro (Osaka University), Norihiro Hagita (ATR)

10:10~10:25 | 7F CR 2 | Break |

10:25~12:05 | 7F Miraikan Hall | Session: Groups and Public Places | Chair: Bilge Mutlu (University of Wisconsin–Madison)
10:25 | | Eyewitnesses Are Misled By Human But Not Robot Interviewers | Cindy L. Bethel, Deborah K. Eakin, Sujan Anreddy, James Kaleb Stuart, Daniel Carruth (Mississippi State University)
10:45 | | Human-Robot Cross-Training: Computational Formulation, Modeling and Evaluation of a Human Team Training Strategy | Stefanos Nikolaidis, Julie Shah (Massachusetts Institute of Technology)
11:05 | | Sensors in the Wild: Exploring Electrodermal Activity in Child-Robot Interaction | Iolanda Leite (INESC-ID and IST, Technical University of Lisbon), Rui Henriques (IST, Technical University of Lisbon), Carlos Martinho, Ana Paiva (INESC-ID and IST, Technical University of Lisbon)
11:25 | | Identifying People with Soft-Biometrics at Fleet Week | Eric Martinson, Wallace Lawson, Greg Trafton (US Naval Research Laboratory)
11:45 | | Understanding Suitable Locations for Waiting | Takuya Kitade (ATR/Keio University), Satoru Satake, Takayuki Kanda (ATR), Michita Imai (ATR/Keio University)

12:05~13:40 | | Lunch |
13:40~15:00 | 7F Miraikan Hall | HRI 2013 Madness |
15:00~15:30 | 7F CR 2 | Break |
15:30~16:30 | 7F Miraikan Hall | Plenary Talk 1: Human-Robot Interaction by Information Sharing | Yuichiro Anzai
16:30~18:00 | 7F CR1, CR2 | HRI 2013 Demo Session |
18:00~20:00 | 7F CR1, CR2, CR3, Free Space/Lobby | HRI 2013 Late Breaking Reports and Poster Session |

Tue, 3/5

Time | Place | Title | Authors

08:00 | | Registration Open |

09:00~10:40 | 7F Miraikan Hall | Session: Trust, Help, and Influence | Chair: Cindy Bethel (Mississippi State University)
09:00 | | Impact of Robot Failures and Feedback on Real-Time Trust | Munjal Desai (University of Massachusetts Lowell), Poornima Kaniarasu (Carnegie Mellon University), Mikhail Medvedev (University of Massachusetts Lowell), Aaron Steinfeld (Carnegie Mellon University), Holly Yanco (University of Massachusetts Lowell)
09:20 | | Will I Bother Here? – A Robot Anticipating its Influence on Pedestrian Walking Comfort | Hiroyuki Kidokoro, Takayuki Kanda, Drazen Brscic, Masahiro Shiomi (ATR)
09:40 | | It's Not Polite to Point: Generating Socially-Appropriate Deictic Behaviors towards People | Phoebe Liu, Dylan F. Glas, Takayuki Kanda (ATR), Hiroshi Ishiguro (Osaka University), Norihiro Hagita (ATR)
10:00 | | How a Robot Should Give Advice | Cristen Torrey (Adobe Systems), Susan Fussell (Cornell University), Sara Kiesler (Carnegie Mellon University)
10:20 | | Older Adults' Medication Management in the Home: How can Robots Help? | Akanksha Prakash, Jenay M. Beer, Travis Deyle, Cory-Ann Smarr, Tiffany L. Chen, Tracy L. Mitzner, Charles C. Kemp, Wendy A. Rogers (Georgia Institute of Technology)

10:40~11:00 | 7F CR2 | Break |
11:00~12:00 | 7F Miraikan Hall | Panel Session |
12:00~13:40 | | Lunch |

13:40~14:40 | 7F Miraikan Hall | Journal Session 1 | Chair: Takayuki Kanda (ATR)
13:40 | | Towards Seamless Human-Robot Handovers | Kyle Strabala, Min Kyung Lee, Anca Dragan, Jodi Forlizzi, Siddhartha S. Srinivasa (Carnegie Mellon University), Maya Cakmak (Willow Garage), Vincenzo Micelli (Università degli Studi di Parma)
14:00 | | Meal-Time with a Socially Assistive Robot and Older Adults at a Long-term Care Facility | Derek McColl (University of Toronto), Goldie Nejat (University of Toronto, Toronto Rehabilitation Institute)
14:20 | | A Gesture-Centric System for Multi-Party Human-Robot Interaction | Yutaka Kondo, Kentaro Takemura, Jun Takamatsu, Tsukasa Ogasawara (Nara Institute of Science and Technology)

14:40~14:50 | 7F CR 2 | Break |

14:50~16:10 | 7F Miraikan Hall | Journal Session 2 | Chair: Mike Goodrich (Brigham Young University)
14:50 | | Controlling Social Dynamics with a Parametrized Model of Floor Regulation | Crystal Chao, Andrea L. Thomaz (Georgia Institute of Technology)
15:10 | | ACT-R/E: An Embodied Cognitive Architecture for Human-Robot Interaction | J. Gregory Trafton, Laura M. Hiatt, Anthony M. Harrison, Franklin P. Tamborello, II, Sangeet S. Khemlani, Alan C. Schultz (Naval Research Laboratory)
15:30 | | A User Study on Kinesthetic Teaching of Redundant Robots in Task and Configuration Space | Sebastian Wrede, Christian Emmerich, Ricarda Grünberg, Arne Nordmann, Agnes Swadzba, Jochen Steil (Bielefeld University)
15:50 | | Crowdsourcing Human-Robot Interaction: New Methods and System Evaluation in a Public Environment | Cynthia Breazeal, Nick DePalma, Jeff Orkin (Massachusetts Institute of Technology), Sonia Chernova (Worcester Polytechnic Institute), Malte Jung (Stanford University)

16:10~16:30 | 7F CR 2 | Break |

16:30~ | | Bus visit to the University of Tokyo: Intelligent Systems and Informatics Lab, Jouhou System Kougaku Lab, and Nakamura Lab |
16:30~ | | Lab visit to AIST (Advanced Industrial Science and Technology) Tokyo Waterfront, Digital Human Research Center |

Wed, 3/6

Time | Place | Title | Authors

08:00 | | Registration Open |

09:00~10:20 | 7F Miraikan Hall | Session: Is The Robot like Me? | Chair: Fumihide Tanaka (Tsukuba Univ.)
09:00 | | Expressing Ethnicity through Behaviors of a Robot Character | Maxim Makatchev, Reid Simmons (Carnegie Mellon University), Majd Sakr, Micheline Ziadee (CMU Qatar)
09:20 | | Legibility and Predictability of Robot Motion | Anca D. Dragan (Carnegie Mellon University), Kenton C.T. Lee (University of Pennsylvania), Siddhartha S. Srinivasa (Carnegie Mellon University)
09:40 | | Taking Your Robot For a Walk: Force-Guiding a Mobile Robot Using Compliant Arms | François Ferland, Arnaud Aumont, Dominic Letourneau, François Michaud (Universite de Sherbrooke)
10:00 | | Effects of Robotic Companionship on Music Enjoyment and Agent Perception | Guy Hoffman, Keinan Vanunu (IDC Herzliya)

10:20~10:40 | 7F CR2 | Break |

10:40~12:00 | 7F Miraikan Hall | Session: Verbal and Non-Verbal Behavior | Chair: Leila Takayama (Willow Garage)
10:40 | | A Model for Synthesizing a Combined Verbal and NonVerbal Behavior Based on Personality Traits in Human-Robot Interaction | Amir Aly, Adriana Tapus (ENSTA-ParisTech)
11:00 | | Automatic Processing of Irrelevant Co-Speech Gestures with Human but not Robot Actors | Cory J. Hayes, Charles R. Crowell, Laurel D. Riek (University of Notre Dame)
11:20 | | Rhetorical Robots: Making Robots More Effective Speakers Using Linguistic Cues of Expertise | Sean Andrist, Erin Spannan, Bilge Mutlu (University of Wisconsin–Madison)
11:40 | | Gestures for Industry: Intuitive Human-Robot Communication from Human Observation | Brian Gleeson, Karon MacLean, Amir Haddadi, Elizabeth Croft (University of British Columbia), Javier Alcazar (General Motors)

12:00~13:40 | | Lunch |

13:40~15:20 | 7F Miraikan Hall | Session: Companions, Collaboration, and Control | Chair: Greg Trafton (Naval Research Laboratory)
13:40 | | Communicating Affect via Flight Path: Exploring Use of the Laban Effort System for Designing Affective Locomotion Paths | Megha Sharma, Dale Hildebrandt, Gem Newman, James E. Young, Rasit Eskicioglu (University of Manitoba)
14:00 | | The Inversion Effect in HRI: Are Robots Perceived More Like Humans or Objects? | Jakub Zlotowski, Christoph Bartneck (University of Canterbury)
14:20 | | A Transition Model for Cognitions about Agency | Daniel T. Levin, Julie A. Adams, Megan M. Saylor, Gautam Biswas (Vanderbilt University)
14:40 | | Presentation of (Telepresent) Self: On the Double-Edged Effects of Mirrors | Leila Takayama (Willow Garage), Helen Harris (Stanford University)
15:00 | | Are You Looking At Me?: Perception of Robot Attention is Mediated by Gaze Duration and Group Size | Henny Admoni, Bradley Hayes, David Feil-Seifer, Daniel Ullman, Brian Scassellati (Yale University)

15:20~15:40 | 7F CR 2 | Break |
15:40~16:40 | | Video Session |
16:40~17:40 | 7F Miraikan Hall | Plenary Talk 2: The Creation of a New Robot Era | Tomotaka Takahashi
17:40~18:00 | 7F Miraikan Hall | Closing and Award |

Visits / Tutorials & Workshops

Workshop coffee breaks are from 11:00~11:20 and 15:00~15:30, and the lunch box shop will be open from 12:00~13:30.

Visits

1. Sun, 3/3, Miraikan (13:00~15:00)
Guided visit of the National Museum of Emerging Science and Innovation (Miraikan): demonstrations of Honda ASIMO, Honda UNI-CUB, GEOCOSMOS, etc.
(No extra cost, max 20 persons)

2. Tue, 3/5, University of Tokyo (16:30~20:30)
Bus visit to the University of Tokyo: Intelligent Systems and Informatics Lab, Jouhou System Kougaku Lab, and Nakamura Lab
(Extra 38 USD, max 40 persons; canceled if there are fewer than 10 attendees)

3. Tue, 3/5, AIST (16:30~19:00)
Lab visit to AIST (Advanced Industrial Science and Technology) Tokyo Waterfront, Digital Human Research Center
(No extra cost, max 60 persons)

Workshop 1 (Full day): HRI Face-to-Face: Gaze and Speech Communication
Sun, 3/3 (09:00~17:00)
OR1 (Orientation Room 1), 1F
Frank Broz (University of Plymouth), Hagen Lehmann (University of Hertfordshire), Bilge Mutlu (University of Wisconsin-Madison), Yukiko Nakano (Seikei University)

The purpose of this workshop is to explore the relationship between gaze and speech during face-to-face human-robot interaction. As advances in speech recognition have made speech-based interaction with robots possible, it has become increasingly apparent that robots need to exhibit nonverbal social cues in order to disambiguate and structure their spoken communication with humans. Gaze behavior is one of the most powerful and fundamental sources of supplementary information to spoken communication. Gaze structures turn-taking, indicates attention, and implicitly communicates information about social roles and relationships. There is a growing body of work on gaze- and speech-based interaction in HRI, involving both the measurement and evaluation of human speech and gaze during interaction with robots, and the design and implementation of robot speech and accompanying gaze behavior for interaction with humans.

Workshop 2 (Full day): Design of Human Likeness in HRI from Uncanny Valley to Minimal Design
Sun, 3/3 (09:00~17:00)
LNG (Lounge Room), 7F
Hidenobu Sumioka, Takashi Minato (ATR), Yoshio Matsumoto (AIST), Pericle Salvini (BioRobotics Institute, Scuola Superiore Sant'Anna), Hiroshi Ishiguro (Osaka University)

Human likeness of social agents is crucial for human partners to interact with the agents intuitively, since it makes the partners unconsciously respond to the agents in the same manner as they respond to other people. Although many studies suggest that an agent's human likeness plays an important role in human-robot interaction, it remains unclear how to design a humanlike form that evokes interpersonal behavior from human partners. One approach is to make a copy of an existing person. Although this extreme helps us explore how we recognize another person, the Uncanny Valley effect must be taken into account. Basic questions, including why we experience the uncanny valley and how we overcome it, should be addressed to give new insights into the underlying mechanism of our perception of human likeness. Another approach is to extract the crucial elements that represent human appearance and behavior, as addressed in the design of computer-animated human characters. Exploring the minimal requirements for evoking interpersonal behavior from human partners provides a more effective and simpler way to design social agents that facilitate communication with humans. This full-day workshop aims to bring together prominent researchers from different backgrounds to present and discuss the most recent achievements in the design of human likeness, across a wide range of research topics from uncanny valley effects to the minimal design of human-robot communication.

Workshop 3 (Full day): Collaborative Manipulation: New Challenges for Robotics and HRI
Sun, 3/3 (09:00~17:00)
CR2 (Conference Room 2), 7F
Anca Dragan (Carnegie Mellon University), Andrea Thomaz (Georgia Institute of Technology), Siddhartha Srinivasa (Carnegie Mellon University)

Autonomous manipulation has made tremendous progress in recent years, leveraging new algorithms and capabilities of mobile manipulators to address complex human environments. However, most current systems inadequately address one key feature of human environments: that they are populated with humans. What would it take for a human and a robot to prepare a meal together in a kitchen, or to assemble a part together in a manufacturing workcell? Collaboration with humans is the next frontier in robotics, be it shared-workspace collaboration, assistive teleoperation and sliding autonomy, or teacher-learner collaboration, and it raises new challenges for both robotics and HRI. A collaborative robot must engage in a delicate dance of prediction and action, where it must understand its collaborator's intentions, act to make its own intentions clear, decide when to assist and when to back off, and continuously adapt its behavior and enable customization. Addressing these challenges demands a joint effort from the HRI and robotics communities. We hope that this workshop will not only serve to attract more roboticists into the HRI community under this unifying theme, but will also create valuable collaborations to explore this rich, interdisciplinary area. We welcome high-quality work in all areas related to collaborative manipulation.

Workshop 4 (Half day): Applications for Emotional Robots
Sun, 3/3 (09:00~12:45)
CR3 (Conference Room 3), 7F
Oliver Damm, Frank Hegel (Bielefeld University), Manja Lohse (University of Twente), Karoline Malchus, Britta Wrede (Bielefeld University)

In social interaction between humans, expressing, recognizing, and understanding emotions is essential. Artificial emotions are therefore being exploited to improve human-robot interaction (HRI) and to build robots that interact in a more human-like and intuitive manner. Characteristic capabilities of such robots include expressing and perceiving emotions, communicating with (high-level) dialogue, learning and recognizing models of other agents, establishing and maintaining social relationships, and developing social competencies. These socially interactive robots are used for different purposes, e.g. as toys, as educational tools, or as research platforms. Thus, for the engineering side of robotics research it is necessary to create robots for specific contexts, requirements, and expectations. With this half-day workshop we open up a platform to discuss different interdisciplinary perspectives on the application of robots that are able to display and perceive emotions. We want to develop an idea of how context influences the characteristics that an emotional robot needs to have, and to gain new insights into the role of emotions in HRI.

Workshop 5 (Full day): Human-Robot Interaction Pioneers Workshop
Sun, 3/3 (09:00~18:00)
CR1 (Conference Room 1), 7F
Solace Shen (University of Washington), Astrid Rosenthal-von der Pütten (University of Duisburg-Essen)

The eighth annual Human-Robot Interaction Pioneers Workshop will be held in Tokyo, Japan on Sunday, March 3rd, 2013. Exploring human-robot interaction in a welcoming and interactive forum, HRI Pioneers is the premier venue for up-and-coming student researchers in the field. This highly selective workshop is designed to empower innovators early in their careers, and assembles a cohort of the world's top student researchers in order to foster creativity, communication, and collaboration across the incredibly diverse field of human-robot interaction. Workshop participants will have the opportunity to learn about the current state of HRI, to present their research, and to network with one another and with select senior HRI researchers. We warmly welcome and invite all conference participants to join us for the Pioneers Poster Session from 14:00 to 15:00! Participation in the Pioneers Workshop is determined through an independent competitive application process (deadline December 3, 2012). For more information, and for details on how to apply, please visit the workshop website at: http://www.hripioneers.info/

Plenary Talk 1

Mon, 3/4 (15:30~16:30) 7F Miraikan Hall

Human-Robot Interaction by Information Sharing

Yuichiro Anzai

Abstract

In the plenary talk at ROMAN92, considered the first scientific meeting for human-robot interaction [1], I noted that 'one of the noticeable features of the current work on human-robot communication is that it generally lacks attention to computer science, particularly to human-computer interaction' [2]. Until that time the field of HRI had been covered by industrial robotics, remote control and augmented reality, and much less attention had been paid to interaction per se, or to the design of interactive systems supported by computers and computer network technology. Twenty years later, we are in a very fruitful period for the development of HRI, with plentiful contributions from many researchers and developers with challenging minds. We are now fully aware that HRI is a growing and prospering field of research, deeply related to the neighboring field of human-computer interaction. A notably wide spectrum of work has been developed, from theories and models to the design and implementation of interactive systems, and further to a variety of applications such as control, assistance, rescue and entertainment. At least two points are essential as background to the research of both HRI as a whole and our own. The first point is that human-centric analysis and design is indispensable for HRI; however, it needs to be based upon, or augmented by, solid foresight and the implementation of advanced technology, from robotics and control engineering to computer and communication sciences. The development of HRI shall be on the dynamic balance between humans and technology; that is, it must be on the trajectory of an ellipse with two foci – humans and technology. HRI can metaphorically be a point on an ellipse's trajectory whose sum of distances from the human focus and the technology focus is constant. The second point is that we need some central or transversal concepts that play a leading role in the design and management of HRI. Though there are several strong candidates, I chose the concept of information sharing, referring to the state of interaction where interacting systems, or agents, share information. This notion of interaction is closely related to the concept of phatic communion proposed by B.K. Malinowski as early as the 1920s [3] in the context of linguistic communication, as well as to the notions of knowing and believing that have been dealt with in cognitive science and artificial intelligence.

With these contexts and backgrounds in hand, the talk is three-fold. First, I provide a brief survey of the past and present status, as well as some future prospects, of HRI research, particularly based on the meta-model of an ellipse with the human focus and the technology focus. Second, I talk about the results from our HRI project, named PRIME (Physically-grounded human-Robot-computer Interaction in Multi-agent Environment), started in April 1991, as a particular example of development upon the ellipse model. Within the PRIME project, my colleagues, students and I developed and implemented a variety of hardware, software, interface systems and artificial intelligence systems, such as module-based hardware, real-time basic software, middleware for sensor networks, and human-robot interactive systems including multiple-robot systems. The third topic in the talk is to present, from the viewpoint of information sharing, the more recent development of HRI research that has been done since 2001 at my lab, now the Imai lab led by Michita Imai, at Keio University, with colleagues at ATR laboratories and other institutions. It includes examples of research on sharing social protocols, attention, eye gaze, word meanings, cognitive spaces, gestures and empathy in human-robot interaction. HRI is a growing and enlarging field that is open to anyone interested in the interaction of humans and robots: the talk touches only bits of its enchantment, especially from my own experience and viewpoints. Yet I bet that it contains some important bits for the prospect of HRI research to contribute to the future of humans and their society.

References
[1] M.A. Goodrich and A.C. Schultz, Human-robot Interaction: A Survey, Foundations and Trends in Human-Computer Interaction, Vol. 1, No. 3, 203-275, 2007.
[2] Y. Anzai, Towards a new paradigm of human-robot-computer interaction, Proc. of the IEEE International Workshop on Robot and Human Communication, 11-17, 1992.
[3] B.K. Malinowski, The problem of meaning in primitive languages, in C.K. Ogden and I.A. Richards (eds.), The Meaning of Meaning: A Study of the Influence of Language upon Thought and of The Science of Symbolism, Routledge & Kegan Paul, 1923.

Biography
Yuichiro Anzai started research on human cognitive processes and machine learning in the mid-1970s, and spent 1976-78 and 1981-82 at Carnegie-Mellon University as a post-doc and a visiting assistant professor, respectively. After returning to Japan, he has kept working on learning and problem solving, as well as doing extensive research on human-robot interaction since 1991, at Keio University and Hokkaido University. He has published numerous academic papers and books, including Pattern Recognition and Machine Learning, Concepts and Characteristics of Knowledge-based Systems (co-ed.), and Symbiosis of Human and Artifact, Vols. 1 and 2 (co-ed.). He spent eight years as Dean of the Faculty of Science and Technology, Keio University (1993-2001), then eight more years as President of the same university (2001-09), while he led with his colleagues a large-scale commemorative project for Keio's 150th anniversary. In October 2011 he accepted the present position of President, Japan Society for the Promotion of Science, the representative research funding agency in Japan, keeping the position of Executive Academic Advisor for Keio University. Anzai has been Chairperson of the University Subcouncil, Central Council for Education, Ministry of Education, Culture, Sports, Science and Technology (MEXT), as well as many others. His past public contributions include Advisor to MEXT (2010-11), Chairperson of the Association of Pacific Rim Universities (2008-09), Member of the Science Council of Japan (2005-11), and President of the Information Processing Society of Japan (2005-07) and the Japanese Cognitive Science Society (1993-94). The awards received by Anzai include the Medal with Purple Ribbon from the Japanese Government, Commandeur de l'Ordre des Palmes Académiques from the French Government, and honorary doctoral degrees from Yonsei University in Seoul and École Centrale de Nantes. He received his M.S. in 1971 and Ph.D. in 1974 from Keio University.

Plenary Talk 2

Wed, 3/6 (16:40~17:40)
7F Miraikan Hall

The Creation of a New Robot Era

Tomotaka Takahashi

Abstract
In recent years, it may seem that the technology trend is in a regression process, from technology itself toward humanlike aspects. Intuitive and comfortable operating environments, and the technology that enables them, have come to draw attention, rather than the "high performance" and "high specifications" that advertising messages simply speak of today. The sense of distance between humans and mechatronics products has become much smaller accordingly, and information available in an interactive manner has drastically swelled out into our daily life. A compact humanoid that can communicate with us stands at the very leading edge of the technology roadmap. It might be like a smartphone with arms and legs, or a state-of-the-art "Tinker Bell", and everyone will be able to take part in information exchange through casual conversation with such humanoids. We can enjoy innocent conversation with a humanoid thanks to a kind of personification. As a result, it allows us to gather information from our daily lives and to use it for command and control of any mechatronics products, services and information. I would like to discuss such a future life with intelligent humanoids, which could be realized in the next 15 years, together with a demonstration of one of the latest humanoids.

Biography
Founder and CEO of Robo Garage, research associate professor at The University of Tokyo, and visiting professor at Fukuyama University and Osaka Electro-Communication University. He single-handedly researches, develops, designs, and manufactures humanoid robots from scratch. His masterpieces include Ropid, Chroino, FT, Evolta, Tachikoma, and Vision. Awards include TIME magazine's 'Coolest Inventions 2004', Popular Science magazine's '33 persons changing the future', a Guinness world record for a long-distance remote-controlled robot car, and RoboCup world championships 2004-2008.

Panel Session
Tue, 3/5 (11:00~12:00)
7F Miraikan Hall

Panel: Revisioning HRI Given Exponential Technological Growth
Session Chair: Peter H. Kahn, Jr., University of Washington

It is sometimes said that the technical problems in robotics are harder and more intransigent than the field expected decades ago. That is often the preamble to a statement of the sort: "And those of us in HRI need to be realistic about what robots actually will be able to do in the near future." This panel explores the idea that that view, of slow technological growth, is fundamentally wrong. Our springboard is Ray Kurzweil's idea from his book The Singularity is Near. He argues that our minds think in linear terms while technological change is increasing exponentially. To illustrate exponential growth, take a dollar and double it every day. After a week, you have $64, which is hardly much to shout about. After a month, you have over a billion dollars. Kurzweil shows that we are at the "knee" of that exponential curve, where technological growth has begun to accelerate at an increasingly astonishing rate. Given this proposition, the panelists discuss how we should be revisioning the field of HRI.

Panelists:

Gerhard Sagerer
Bielefeld University (Germany)

Gerhard Sagerer is Rektor of Bielefeld University, and Professor of Applied Informatics at Bielefeld University's Faculty of Technology. He studied Informatics at the University of Erlangen-Nürnberg, where he took his doctorate in 1985. His interests include cognitive and social robotics, human–robot interaction, speech and dialogue systems, and the architecture of intelligent systems. He has served as coordinator and member of the executive board for numerous EU projects and Collaborative Research Centres. He is also on the Scientific Board of the German Section of Computer Scientists for Peace and Social Responsibility.

Andrea L. Thomaz
Georgia Institute of Technology (United States)

Andrea Thomaz is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. She joined the faculty in 2007. She earned a B.S. in Electrical and Computer Engineering from the University of Texas at Austin in 1999, and Sc.M. and Ph.D. degrees from MIT in 2002 and 2006. She has published in the areas of Artificial Intelligence, Robotics, Human-Robot Interaction, and Human-Computer Interaction. She directs the Socially Intelligent Machines lab, which is affiliated with the Robotics and Intelligent Machines (RIM) Center and with the Graphics Visualization and Usability (GVU) Center.

Takayuki Kanda ATR (Japan)

Takayuki Kanda is Senior Research Scientist at ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan. He received his B. Eng., M. Eng., and Ph.D. degrees in computer science from Kyoto University, Kyoto, Japan, in 1998, 2000, and 2003, respectively. He is one of the starting members of the Communication Robots project at ATR. He has developed a communication robot, Robovie, and applied it in daily situations, such as in peer-tutoring at an elementary school and as a museum exhibit guide. His research focuses on human-robot interaction with interactive humanoid robots, often conducted through field studies.

Peter H. Kahn, Jr. University of Washington (United States)

Peter Kahn is Professor in the Department of Psychology and Director of the Human Interaction with Nature and Technological Systems (HINTS) Lab at the University of Washington. The HINTS Lab seeks to address, from an ethical stance, two world trends that are powerfully reshaping human existence: (1) the degradation if not destruction of large parts of the natural world, and (2) unprecedented technological development, both in terms of its computational sophistication and pervasiveness. He has published five books with MIT Press, including Technological Nature: Adaptation and the Future of Human Life.

Mon, 3/4
Miraikan Hall

• 09:10~10:10
Groups and Public Places
Chair: Bilge Mutlu (University of Wisconsin–Madison)

The Influence of Height on Robotic Communication Products (09:10)
Irene Rae (University of Wisconsin–Madison), Leila Takayama (Willow Garage), Bilge Mutlu (University of Wisconsin–Madison)

A large body of research in human communication has shown that a person's height plays a key role in how persuasive, attractive, and dominant others judge the person to be. Robotic systems (systems that combine video-conferencing capabilities with robotic navigation to allow geographically dispersed people to maneuver in remote locations) represent remote users, "operators," to local users, "locals," through the use of an alternate physical representation. In this representation, physical characteristics such as height are dictated by the manufacturer of the system. We conducted a two-by-two (relative system height: shorter vs. taller; team role: leader vs. follower), between-participants study (n = 40) to investigate how the system's height affects the local's perceptions of the operator and subsequent interactions. Our findings show that, when the system was shorter than the local and the operator was in a leadership role, the local found the operator to be less persuasive. Furthermore, having a leadership role significantly affected the local's feelings of dominance with regard to being in control of the conversation.

Evaluating the Effects of Limited Perception on Interactive Decisions in Mixed Robotic Environments (09:30)
Aris Valtazanos, Subramanian Ramamoorthy (University of Edinburgh)

Many robotic applications feature a mixture of interacting teleoperated and autonomous robots. In several such domains, human operators must make decisions using very limited perceptual information, e.g. by viewing only the noisy camera feed of their robot. There are many interaction scenarios where such restricted visibility impacts teleoperation performance, and where the role of autonomous robots needs to be reinforced. In this paper, we report on an experimental study assessing the effects of limited perception on human decision making, in interactions between autonomous and teleoperated robots, where subjects do not have prior knowledge of how other agents will respond to their decisions. We evaluate the performance of several subjects under varying perceptual constraints in two scenarios: a simple cooperative task requiring collaboration with an autonomous robot, and a more demanding adversarial task, where an autonomous robot is actively trying to outperform the human. Our results indicate that limited perception has minimal impact on user performance when the task is simple. By contrast, when the other agent becomes more strategic, restricted visibility has an adverse effect on most subjects, with the performance level even falling below that achieved by an autonomous robot with identical restrictions. Our results could inform decisions about the division of control between humans and robots in mixed-initiative systems, and help in determining when autonomous robots should intervene to assist operators.

Supervisory Control of Multiple Social Robots for Navigation (09:50)
Kuanhao Zheng, Dylan F. Glas, Takayuki Kanda (ATR), Hiroshi Ishiguro (Osaka University), Norihiro Hagita (ATR)

This paper presents a human study and system implementation for the supervisory control of multiple social robots for navigational tasks. We studied the acceptable range of speed for robots interacting with people through navigation, and we discovered that entertaining people by speaking during navigation can increase people's tolerance toward robots' slow locomotion speed. Based on these results, and using a robot safety model developed to ensure the safety of robots during navigation, we implemented an algorithm which can proactively adjust robot behaviors during navigation to improve the performance of a human-robot team consisting of a single operator and multiple mobile social robots. Finally, we implemented a semi-autonomous robot system and conducted experiments in a shopping mall to verify the effectiveness of our proposed methods in a real-world environment.

• 10:25~12:05
How Do We Perceive Robots?
Chair: Sara Kiesler (Carnegie Mellon University)

Eyewitnesses Are Misled By Human But Not Robot Interviewers (10:25)
Cindy L. Bethel, Deborah K. Eakin, Sujan Anreddy, James Kaleb Stuart, Daniel Carruth (Mississippi State University)

This paper presents research results from a study to determine whether eyewitness memory was impacted by a human interviewer versus a robot interviewer when presented with misleading post-event information. The study was conducted with 101 participants who viewed a slideshow depicting the events of a crime. All of the participants interacted with the humanoid robot, NAO, by playing a trivia game. Participants were then interviewed by either a human or a robot interviewer that presented either control or misleading information about the events depicted in the slideshow. This was followed by another filler interval task of trivia with the robot. Following the interview and robot interactions, participants completed a paper-and-pencil post-event memory test to determine their recall of the events of the slideshow. The results indicated that eyewitnesses were misled by a human interviewer (t(46) = 2.79, p < 0.01, d = 0.83) but not by a robot interviewer (t(46) = 0.34, p > 0.05). The results of this research could have strong implications for the gathering of sensitive information from an eyewitness about the events of a crime.

Human-Robot Cross-Training: Computational Formulation, Modeling and Evaluation of a Human Team Training Strategy (10:45)
Stefanos Nikolaidis, Julie Shah (Massachusetts Institute of Technology)

We design and evaluate human-robot cross-training, a strategy widely used and validated for effective human team training. Cross-training is an interactive planning method in which a human and a robot iteratively switch roles to learn a shared plan for a collaborative task. We first present a computational formulation of the robot's interrole knowledge and show that it is quantitatively comparable to the human mental model. Based on this encoding, we formulate human-robot cross-training and evaluate it in human subject experiments (n = 36). We compare human-robot cross-training to standard reinforcement learning techniques, and show that cross-training provides statistically significant improvements in quantitative team performance measures. Additionally, significant differences emerge in the perceived robot performance and human trust. These results support the hypothesis that effective and fluent human-robot teaming may be best achieved by modeling effective practices for human teamwork.

Sensors in the Wild: Exploring Electrodermal Activity in Child-Robot Interaction (11:05)
Iolanda Leite (INESC-ID and IST, Technical University of Lisbon), Rui Henriques (IST, Technical University of Lisbon), Carlos Martinho, Ana Paiva (INESC-ID and IST, Technical University of Lisbon)

Recent advances in biosensor technology have enabled the appearance of commercial wireless sensors that can measure electrodermal activity (EDA) in users' everyday settings. In this paper, we investigate the potential benefits of measuring EDA to better understand child-robot interaction in two distinct directions: to characterize and evaluate the interaction, and to dynamically recognize users' affective states. To do so, we present a study in which 38 children interacted with an iCat robot while wearing a wireless sensor that measured their electrodermal activity. We found that different patterns of electrodermal variation emerge for different supportive behaviours elicited by the robot and for different affective states of the children. The results also yield significant correlations between statistical features extracted from the signal and surveyed parameters regarding how children perceived the interaction and their affective state.

Identifying People with Soft-Biometrics at Fleet Week (11:25)
Martinson, Wallace Lawson, Greg Trafton (US Naval Research Laboratory)

Person identification is a fundamental robotic capability for long-term interactions with people. It is important to know with whom the robot is interacting for social reasons, as well as to remember user preferences and interaction histories. There exist, however, a number of different features by which people can be identified. This work describes three alternative soft biometrics (clothing, complexion, and height) that can be learned in real-time and utilized by a humanoid robot in a social setting for person identification. The use of these biometrics is then evaluated as part of a novel experiment in robotic person identification carried out at Fleet Week, New York City, in May 2012. In this experiment, Octavia employed soft biometrics to discriminate between groups of 3 people. 202 volunteers interacted with Octavia as part of the study, interacting with the robot from multiple locations in a challenging environment.

Understanding Suitable Locations for Waiting (11:45)
Takuya Kitade (ATR/Keio University), Satoru Satake, Takayuki Kanda (ATR), Michita Imai (ATR/Keio University)

This study addresses a robot that waits for users while they shop. In order to wait, the robot needs to understand which locations are appropriate for waiting. We investigated how people choose locations for waiting, and revealed that they are concerned with disturbing pedestrians and disturbing shop activities. Using these criteria, we developed a classifier of waiting locations. Disturbance to pedestrians is estimated from statistics of pedestrian trajectories, observed with a human-tracking system based on laser range finders. Disturbance to shop activities is estimated based on shop visibility. We evaluated this autonomous waiting behavior in a shopping-assist scenario. The experimental results revealed that users found the autonomous waiting robot chose appropriate locations for waiting more than a robot with random choice or one controlled manually by the users themselves.

Tue, 3/5
Miraikan Hall

• 09:00~10:40
Trust, Help, and Influence
Chair: Cindy Bethel (Mississippi State University)

Impact of Robot Failures and Feedback on Real-Time Trust (09:00)
Munjal Desai (University of Massachusetts Lowell), Poornima Kaniarasu (Carnegie Mellon University), Mikhail Medvedev (University of Massachusetts Lowell), Aaron Steinfeld (Carnegie Mellon University), Holly Yanco (University of Massachusetts Lowell)

Prior work in human trust of autonomous robots suggests that the timing of reliability drops impacts trust and control allocation strategies. However, trust is traditionally measured post-run, thereby masking the real-time changes in trust, reducing sensitivity to factors like inertia, and subjecting the measure to biases like the primacy-recency effect. Likewise, little is known about how feedback of robot confidence interacts in real time with trust and control allocation strategies. An experiment to examine these issues showed that trust loss due to early reliability drops is masked in traditional post-run measures, that trust demonstrates inertia, and that feedback alters allocation strategies independent of trust. The implications of specific findings for the development of trust models and robot design are also discussed.

Will I Bother Here? – A Robot Anticipating its Influence on Pedestrian Walking Comfort (09:20)
Hiroyuki Kidokoro, Takayuki Kanda, Drazen Brscic, Masahiro Shiomi (ATR)

A robot working among pedestrians can attract crowds of people around it, and consequently become a bothersome entity causing congestion in narrow spaces. To address this problem, our idea is to endow the robot with the capability to understand human crowding phenomena. The proposed mechanism consists of three underlying models: a model of pedestrian flow, a model of pedestrian interaction, and a model of walking comfort. Combining these models, a robot is able to simulate hypothetical situations in which it navigates between pedestrians, and anticipate the degree to which this would affect the pedestrians' walking comfort. This idea is implemented in a friendly-patrolling scenario. During planning, the robot simulates the interaction with the pedestrian crowd and determines the best path to roam. The results of a field experiment demonstrated that with the proposed method the pedestrians around the robot perceived better walking comfort than pedestrians around a robot that only maximized its exposure.

It's Not Polite to Point: Generating Socially-Appropriate Deictic Behaviors towards People (09:40)
Phoebe Liu, Dylan F. Glas, Takayuki Kanda (ATR), Hiroshi Ishiguro (Osaka University), Norihiro Hagita (ATR)

Pointing behaviors are used for referring to objects and people in everyday interactions, but the behaviors used for referring to objects are not necessarily polite or socially appropriate for referring to humans. In this study, we confirm that although people would point precisely to an object to indicate where it is, they were hesitant to do so when pointing to another person. We propose a model for generating socially-appropriate deictic behaviors in a robot. The model is based on balancing two factors: understandability and social appropriateness. In an experiment with a robot in a shopping mall, we found that the robot's deictic behavior was perceived as more polite, more natural, and better overall when using our model, compared with a model considering understandability alone.

How a Robot Should Give Advice (10:00)
Cristen Torrey (Adobe Systems), Susan Fussell (Cornell University), Sara Kiesler (Carnegie Mellon University)

With advances in robotics, robots can give advice and help using natural language. The field of HRI, however, has not yet developed a communication strategy for giving advice effectively. Drawing on the literature on politeness and informal speech, we propose options for a robot's help-giving speech: using hedges or discourse markers, both of which can mitigate the commanding tone implied in direct statements of advice. To test these options, we experimentally compared two help-giving strategies depicted in videos of human and robot helpers. We found that when robot and human helpers used a hedge or discourse markers, they seemed more considerate and likeable, and less controlling. The robot that used discourse markers had even more impact than the human helper. The findings suggest that communication strategies derived from the speech people use when helping each other in natural settings can be effective for planning the help dialogues of robotic assistants.

Older Adults' Medication Management in the Home: How Can Robots Help? (10:20)
Akanksha Prakash, Jenay M. Beer, Travis Deyle, Cory-Ann Smarr, Tiffany L. Chen, Tracy L. Mitzner, Charles C. Kemp, Wendy A. Rogers (Georgia Institute of Technology)

Successful management of medications is critical to maintaining healthy and independent living for older adults. However, medication non-adherence is a common problem with a high risk of severe consequences, which can jeopardize older adults' chances to age in place. Well-designed robots assisting with medication management tasks could support older adults' independence. The design of successful robots will be enhanced through understanding concerns, attitudes, and preferences regarding medication assistance tasks. We assessed older adults' reactions to medication hand-off from a mobile manipulator with 12 participants (68-79 years). We identified factors that affected their attitudes toward a mobile manipulator for supporting general medication management tasks in the home. The older adults were open to robot assistance; however, their preferences varied depending on the nature of the medication management task. For instance, they preferred a robot (over a human) to remind them to take medications, but preferred human assistance for deciding what medication to take and for administering the medication. Factors such as perceptions of one's own capability and robot reliability influenced their attitudes.

• 13:40~14:40
Journal Session 1
Chair: Takayuki Kanda (ATR)

Toward Seamless Human–Robot Handovers (13:40)
Kyle Strabala, Min Kyung Lee, Anca Dragan, Jodi Forlizzi, Siddhartha S. Srinivasa (Carnegie Mellon University), Maya Cakmak (Willow Garage), Vincenzo Micelli (Università degli Studi di Parma)

A handover is a complex collaboration, where actors coordinate in time and space to transfer control of an object. This coordination comprises two processes: the physical process of moving to get close enough to transfer the object, and the cognitive process of exchanging information to guide the transfer. Despite this complexity, humans are capable of performing handovers seamlessly in a wide variety of situations, even when unexpected. This suggests a common procedure that guides all handover interactions. Our goal is to codify that procedure. To that end, we first study how people hand over objects to each other in order to understand their coordination process and the signals and cues that they use and observe with their partners. Based on these studies, we propose a coordination structure for human–robot handovers that considers the physical and social-cognitive aspects of the interaction separately. This handover structure describes how people approach, reach out their hands, and transfer objects while simultaneously coordinating the what, when, and where of handovers: to agree that the handover will happen (and with what object), to establish the timing of the handover, and to decide the configuration at which the handover will occur. We experimentally evaluate human–robot handover behaviors that exploit this structure and offer design implications for seamless human–robot handover interactions.

Meal-Time with a Socially Assistive Robot and Older Adults at a Long-term Care Facility (14:00)
Derek McColl (University of Toronto), Goldie Nejat (University of Toronto, Toronto Rehabilitation Institute)

As people get older, their ability to perform basic self-maintenance activities can be diminished due to the prevalence of cognitive and physical impairments or as a result of social isolation. The objective of our work is to design socially assistive robots capable of providing cognitive assistance, targeted engagement, and motivation to elderly individuals, in order to promote participation in self-maintenance activities of daily living. In this paper, we present the design and implementation of the expressive human-like robot, Brian 2.1, as a social motivator for the important activity of eating meals. An exploratory study was conducted at an elderly care facility with the robot and eight individuals, ages 82-93, to investigate user engagement and compliance during meal-time interactions with the robot, along with overall acceptance and attitudes toward the robot. Results of the study show that the individuals were both engaged in the interactions and complied with the robot during two different meal-eating scenarios. A post-study robot acceptance questionnaire also determined that, in general, the participants enjoyed interacting with Brian 2.1 and had positive attitudes toward the robot for the intended activity.

A Gesture-Centric Android System for Multi-Party Human-Robot Interaction (14:20)
Yutaka Kondo, Kentaro Takemura, Jun Takamatsu, Tsukasa Ogasawara (Nara Institute of Science and Technology)

Natural body gesturing and speech dialogue are crucial for human-robot interaction (HRI) and human-robot symbiosis. Real interaction is not only one-to-one communication but also communication among multiple people. We have therefore developed a system that can adjust gestures and facial expressions based on a speaker's location or situation for multi-party communication. By extending our previously developed real-time gesture planning method, we propose a gesture adjustment suitable for human demand through motion parameterization, and gaze motion planning, which allows communication through eye-to-eye contact. We implemented the proposed motion planning method on the android Actroid-SIT, and we proposed the use of a Key-Value Store to connect the components of our system. The Key-Value Store is a high-speed, lightweight dictionary database with parallelism and scalability. We conducted multi-party HRI experiments with 1,662 subjects in total. In our HRI system, over 60 percent of subjects started speaking to the Actroid, and the residence time of their communication also became longer. In addition, we confirmed our system gave humans a more sophisticated impression of the Actroid.

• 14:50~16:10
Journal Session 2
Chair: Mike Goodrich (Brigham Young University)

Controlling Social Dynamics with a Parametrized Model of Floor Regulation (14:50)
Crystal Chao, Andrea L. Thomaz (Georgia Institute of Technology)

Turn-taking is ubiquitous in human communication, yet turn-taking between humans and robots continues to be stilted and awkward for human users. The goal of our work is to build autonomous robot controllers for successfully engaging in human-like turn-taking interactions. Towards this end, we present CADENCE, a novel computational model and architecture that explicitly reasons about the four components of floor regulation: seizing the floor, yielding the floor, holding the floor, and auditing the owner of the floor. The model is parametrized to enable the robot to achieve a range of social dynamics for the human-robot dyad. In a between-groups experiment with 30 participants, our humanoid robot uses this turn-taking system at two contrasting parametrizations to engage users in autonomous object play interactions. Our results from the study show that: (1) manipulating these turn-taking parameters results in significantly different robot behavior; (2) people perceive the robot's behavioral differences and consequently attribute different personalities to the robot; and (3) changing the robot's personality results in

different behavior from the human, manipulating the social dynamics of the dyad. We discuss the implications of this work for various contextual applications as well as the key limitations of the system to be addressed in future work.

ACT-R/E: An Embodied Cognitive Architecture for Human-Robot Interaction (15:10)
J. Gregory Trafton, Laura M. Hiatt, Anthony M. Harrison, Franklin P. Tamborello, II, Sangeet S. Khemlani, and Alan C. Schultz (Naval Research Laboratory)

We present ACT-R/E (Adaptive Character of Thought-Rational / Embodied), a cognitive architecture for human-robot interaction. Our reason for using ACT-R/E is two-fold. First, ACT-R/E enables researchers to build good embodied models of people to understand how and why people think the way they do. Then, we leverage that knowledge of people by using it to predict what a person will do in different situations; e.g., that a person may forget something and may need to be reminded, or that a person cannot see everything the robot sees. We also discuss methods of how to evaluate a cognitive architecture and show numerous, empirically validated examples of ACT-R/E models.

A User Study on Kinesthetic Teaching of Redundant Robots in Task and Configuration Space (15:30)
Sebastian Wrede, Christian Emmerich, Ricarda Grünberg, Arne Nordmann, Agnes Swadzba, Jochen Steil (Bielefeld University)

The recent advent of compliant and kinematically redundant robots poses new research challenges for human-robot interaction. While these robots provide a great degree of flexibility for the realization of complex applications, the flexibility gained generates the need for additional modeling steps and the definition of criteria for redundancy resolution constraining the robot's movement generation. The explicit modeling of such criteria usually requires experts to adapt the robot's movement generation subsystem. A typical way of dealing with this configuration challenge is to utilize kinesthetic teaching by guiding the robot to implicitly model the specific constraints in task and configuration space. We argue that current programming-by-demonstration approaches are not efficient for kinesthetic teaching of redundant robots and show that typical teach-in procedures are too complex for novice users. In order to enable non-experts to master the configuration and programming of a redundant robot in the presence of non-trivial constraints such as confined spaces, we propose a new interaction scheme combining kinesthetic teaching and learning within an integrated system architecture. We evaluated this approach in a user study with 49 industrial workers at HARTING, a medium-sized manufacturing company. The results show that the interaction concepts implemented on a KUKA Lightweight Robot IV are easy to handle for novice users, demonstrate the feasibility of kinesthetic teaching for implicit constraint modeling in configuration space, and yield significantly improved performance for the teach-in of trajectories in task space.

Crowdsourcing Human-Robot Interaction: New Methods and System Evaluation in a Public Environment (15:50)
Cynthia Breazeal, Nick DePalma, Jeff Orkin (Massachusetts Institute of Technology), Sonia Chernova (Worcester Polytechnic Institute), Malte Jung (Stanford University)

Supporting a wide variety of interaction styles across a diverse set of people is a significant challenge in human-robot interaction (HRI). In this work, we explore a data-driven approach that relies on crowdsourcing as a rich source of interactions that cover a wide repertoire of human behavior. We first develop an online game that requires two players to collaborate to solve a task. One player takes the role of a robot avatar and the other a human avatar, each with a different set of capabilities that must be coordinated to overcome challenges and complete the task. Leveraging the interaction data recorded in the online game, we present a novel technique for data-driven behavior generation using case-based planning for a real robot. We compare the resulting autonomous robot behavior against a Wizard of Oz base case condition in a real-world reproduction of the online game that was conducted at the Boston Museum of Science. Results of a post-study survey of participants indicate that the autonomous robot behavior matched the performance of the human-operated robot in several important measures. We examined video recordings of the real-world game to draw additional insights as to how the novice participants attempted to interact with the robot in a loosely structured collaborative task. We discovered that many of the collaborative interactions were generated in the moment and were driven by interpersonal dynamics, not necessarily by the task design. We explored using bids analysis as a meaningful construct to tap into affective qualities of HRI. An important lesson from this work is that in loosely structured collaborative tasks, robots need to be skillful in handling these in-the-moment interpersonal dynamics, as these dynamics have an important impact on the affective quality of the interaction for people. How such interactions dovetail with more task-oriented policies is an important area for future work, as we anticipate such interactions becoming commonplace in situations where personal robots perform loosely structured tasks in interaction with people in human living spaces.

38 39

Wed, 3/6
Miraikan Hall

• 09:00~10:20
Companions, Collaboration, and Control
Chair: Greg Trafton (Naval Research Laboratory)

Communicating Affect via Flight Path: Exploring Use of the Laban Effort System for Designing Affective Locomotion Paths (09:00)
Megha Sharma, Dale Hildebrandt, Gem Newman, James E. Young, Rasit Eskicioglu (University of Manitoba)

People and animals use various kinds of motion in a multitude of ways to communicate their ideas and affective state, such as their moods or emotions. Further, people attribute affect and personalities to the movements of even non-life-like entities based solely on the style of their motions; e.g., the locomotion style of a geometric shape (how it moves about) can be interpreted as being shy, aggressive, etc. We investigate how robots can leverage this locomotion-style communication channel for communication with people. Specifically, our work deals with designing stylistic flying-robot locomotion paths for communicating affective state. To author and unpack the parameters of affect-oriented flying-robot locomotion styles, we employ the Laban Effort System, a standard method for interpreting human motion commonly used in the performing arts. This paper describes our adaptation of the Laban Effort System to author motions for flying robots, and the results of a formal experiment that investigated how various Laban Effort System parameters influence people’s perception of the resulting robotic motions. We summarize with a set of guidelines for aiding designers in using the Laban Effort System to author flying-robot motions that elicit desired affective responses.

Legibility and Predictability of Robot Motion (09:20)
Anca D. Dragan (Carnegie Mellon University), Kenton C.T. Lee (University of Pennsylvania), Siddhartha S. Srinivasa (Carnegie Mellon University)

A key requirement for seamless human-robot collaboration is for the robot to make its intentions clear to its human collaborator. A collaborative robot’s motion must be legible, or intent-expressive. Legibility is often described in the literature as an effect of predictable, unsurprising, or expected motion. Our central insight is that predictability and legibility are fundamentally different and often contradictory properties of motion. We develop a formalism to mathematically define and distinguish predictability and legibility of motion. We formalize the two based on inferences between trajectories and goals in opposing directions, drawing the analogy to action interpretation in psychology. We then propose mathematical models for these inferences based on optimizing cost, drawing the analogy to the principle of rational action. Our experiments validate our formalism’s prediction that predictability and legibility can contradict, and provide support for our models. Our findings indicate that for robots to seamlessly collaborate with humans, they must change the way they plan their motion.

Taking Your Robot For a Walk: Force-Guiding a Mobile Robot Using Compliant Arms (09:40)
François Ferland, Arnaud Aumont, Dominic Letourneau, François Michaud (Université de Sherbrooke)

Guiding a mobile robot by the hand would make a simple and natural interface. This requires the ability to sense forces applied on the robot through direct physical contact, and to translate these forces into motion commands. This paper presents a joint-space impedance control approach that does so by perceiving forces applied on compliant arms, making the robot react as a real-life physical object to a user pulling and pushing on one or both of its arms. By independently controlling stiffness in specific degrees of freedom, our approach allows the general position of the arms to change to the preferences of the person interacting with the robot, a capability that is not possible using a strictly position-based control approach. A test case with 15 volunteers was conducted on IRL-1, an omnidirectional, non-holonomic mobile robot, to study and fine-tune our approach in an unconstrained guiding task: making IRL-1 go in and out of a room through a doorway.

Effects of Robotic Companionship on Music Enjoyment and Agent Perception (10:00)
Guy Hoffman, Keinan Vanunu (IDC Herzliya)

We evaluate the effects of robotic listening companionship on people’s enjoyment of music, and on their perception of the robot. We present a robotic speaker device designed for joint listening and embodied performance of the music played on it. The robot generates smoothed real-time beat-synchronized dance moves, uses nonverbal gestures for common ground, and can make and maintain eye contact. In an experimental between-subject study (n=67), participants listened to songs played on the speaker device, with the robot either moving in sync with the beat, moving off-beat, or not moving at all. We found that while the robot’s beat precision was not consciously detected by participants, an on-beat robot positively affected song liking. There was no effect on overall experience enjoyment. In addition, the robot’s response caused participants to attribute more positive human-like traits to the robot, as well as to rate the robot as more similar to themselves. Notably, personal listening habits (solitary vs. social) affected agent attributions. This work points to a larger question, namely how a robot’s perceived response to an event might affect a human’s perception of the same event.

• 10:40~12:00
Verbal and Non-Verbal Behavior
Chair: Leila Takayama (Willow Garage)

A Model for Synthesizing a Combined Verbal and NonVerbal Behavior Based on Personality Traits in Human-Robot Interaction (10:40)
Amir Aly, Adriana Tapus (ENSTA-ParisTech)

Robots are more and more present in our daily life; they have to move into human-centered environments, to interact with humans, and to obey some social rules so as to produce appropriate social behavior in accordance with a human’s profile (i.e., personality, state of mood, and preferences). Recent research has discussed the effect of personality traits on verbal and nonverbal production, which plays a major role in transferring and understanding messages in a social interaction between a human and a robot. The characteristics of the generated gestures (e.g., amplitude, direction, rate, and speed) during nonverbal communication can differ according to personality traits, which similarly influence the verbal content of human speech in terms of verbosity, repetitions, etc. Therefore, our research tries to map a human’s verbal behavior to a corresponding combined verbal-nonverbal robot behavior based on the personality dimensions of the interacting human. The system first estimates the interacting human’s personality traits through a psycholinguistic analysis of the spoken language, then uses the PERSONAGE natural language generator to produce verbal language corresponding to the estimated personality traits. Gestures are generated using the BEAT toolkit, which performs a linguistic and contextual analysis of the generated language, relying on rules derived from extensive research into human conversational behavior. We explored the human-robot personality matching aspect and the differences between the adapted mixed robot behavior (gesture and speech) and the adapted speech-only robot behavior in an interaction. Our model validated that individuals preferred to interact with a robot that had the same personality as theirs, and that an adapted mixed robot behavior (gesture and speech) was more

40 41

engaging and effective than a speech-only robot behavior. Our experiments were done with the Nao robot.

Automatic Processing of Irrelevant Co-Speech Gestures with Human but not Robot Actors (11:00)
Cory J. Hayes, Charles R. Crowell, Laurel D. Riek (University of Notre Dame)

Non-verbal, or visual, communication is an important factor of daily human-to-human interaction. Gestures make up one mode of visual communication, where movement of the body is used to convey a message either alone or in conjunction with speech. The purpose of this experiment is to explore how humans perceive gestures made by a humanoid robot compared to the same gestures made by a human. We do this by adapting and replicating a human perceptual experiment by Kelly et al., where a Stroop-like task was used to demonstrate the automatic processing of gesture and speech together. 59 college students participated in our experiment. Our results support the notion that automatic gesture processing occurs when interacting with human actors, but not robot actors. We discuss the implications of these findings for the HRI community.

Rhetorical Robots: Making Robots More Effective Speakers Using Linguistic Cues of Expertise (11:20)
Sean Andrist, Erin Spannan, Bilge Mutlu (University of Wisconsin–Madison)

Robots hold great promise as informational assistants such as museum guides, information booth attendants, concierges, shopkeepers, and more. In such positions, people will expect them to be experts on their area of specialty. Not only will robots need to be experts, but they will also need to communicate their expertise effectively in order to raise trust and compliance with the information that they provide. This paper draws upon literature in psychology and linguistics to examine cues in speech that would help robots not only to provide expert knowledge, but also to deliver this knowledge effectively. To test the effectiveness of these cues, we conducted an experiment in which participants created a plan to tour a fictional city based on suggestions by two robots. We manipulated the landmark descriptions along two dimensions of expertise: practical knowledge and rhetorical ability. We then measured which locations the participants chose to include in the tour based on their descriptions. Our results showed that participants were strongly influenced by both practical knowledge and rhetorical ability; they included more landmarks described using expert linguistic cues than those described using simple facts. Even when the overall level of practical knowledge was high, an increase in rhetorical ability resulted in significant improvements. These results have implications for the development of effective dialogue strategies for informational robots.

Gestures for Industry: Intuitive Human-Robot Communication from Human Observation (11:40)
Brian Gleeson, Karon MacLean, Amir Haddadi, Elizabeth Croft (University of British Columbia), Javier Alcazar (General Motors)

Human-robot collaborative work has the potential to advance quality, efficiency and safety in manufacturing. In this paper we present a gestural communication lexicon for human-robot collaboration in industrial assembly tasks and establish a methodology for producing such a lexicon. Our user experiments are grounded in a study of industry needs, providing potential real-world applicability to our results. Actions required for industrial assembly tasks are abstracted into three classes: part acquisition, part manipulation, and part operations. We analyzed the communication between human pairs performing these subtasks and derived a set of communication terms and gestures. We found that participant-provided gestures are intuitive and well suited to robotic implementation, but that interpretation is highly dependent on task context. We then implemented these gestures on a robot arm in a human-robot interaction context, and found the gestures to be easily interpreted by observers. We found that observation of human-human interaction can be effective in determining what should be communicated in a given human-robot task, how communication gestures should be executed, and priorities for robotic system implementation based on frequency of use.

• 13:40~15:20
Is The Robot like Me?
Chair: Fumihide Tanaka (Tsukuba Univ.)

Expressing Ethnicity through Behaviors of a Robot Character (13:40)
Maxim Makatchev, Reid Simmons (Carnegie Mellon University), Majd Sakr, Micheline Ziadee (CMU Qatar)

Achieving homophily, or association based on similarity, between a human user and a robot holds a promise of improved perception and task performance. However, no previous studies address homophily via ethnic similarity with robots. In this paper, we discuss the difficulties of evoking ethnic cues in a robot, as opposed to a virtual agent, and an approach to overcome those difficulties based on using ethnically salient behaviors. We outline our methodology for selecting and evaluating such behaviors, and culminate with a study that evaluates our hypotheses about the possibility of ethnic attribution of a robot character through verbal and nonverbal behaviors and about achieving the homophily effect.

The Inversion Effect in HRI: Are Robots Perceived More Like Humans or Objects? (14:00)
Jakub Zlotowski, Christoph Bartneck (University of Canterbury)

The inversion effect describes a phenomenon in which certain types of images are harder to recognize when they are presented upside down compared to when they are shown upright. Images of human faces and bodies suffer from the inversion effect whereas images of objects do not. The effect may be caused by the configural processing of faces and body postures, which is dependent on the perception of spatial relations between different parts of the stimuli. We investigated whether the inversion effect applies to images of robots, in the hope of using it as a measurement tool for a robot’s anthropomorphism. The results suggest that robots, similarly to humans, are subject to the inversion effect. Furthermore, there is a significant but weak linear relationship between recognition accuracy and perceived anthropomorphism. The small variance explained by the inversion effect renders this test inferior to the questionnaire-based Godspeed Anthropomorphism Scale.

A Transition Model for Cognitions about Agency (14:20)
Daniel T. Levin, Julie A. Adams, Megan M. Saylor, Gautam Biswas (Vanderbilt University)

Recent research in a range of fields has explored people’s concepts about agency, and this issue is clearly important for understanding the conceptual basis of human-robot interaction. This research takes a wide range of approaches, but no systematic model of reasoning about agency has combined the concepts and processes involved in agency-reasoning comprehensively enough to support research exploring issues such as conceptual change in reasoning about agents, and the interaction between concepts about agents and visual attention. Our goal in this paper is to develop a transition model of reasoning about agency that achieves three important goals. First, we aim to specify the different kinds of knowledge that are likely to be accessed when people reason about agents. Second, we specify the circumstances under which these different kinds of knowledge might be accessed and changed. Finally, we discuss how this knowledge might affect basic psychological processes of attention and memory. Our approach will be to first describe the transition model, then to discuss how it might be applied in two specific domains: computer interfaces that allow a single operator to track multiple robots, and a teachable agent system currently in use assisting primary and

42 43

middle school students in learning natural science concepts.

Presentation of (Telepresent) Self: On the Double-Edged Effects of Mirrors (14:40)
Leila Takayama (Willow Garage), Helen Harris (Stanford University)

Mobile remote presence systems present new opportunities and challenges for physically distributed people to meet and work together. One of the challenges observed from a couple of years of using Texai, a mobile remote presence (MRP) system, is that remote operators are often unaware of how they present themselves through the MRP. Problems arise when remote operators are not clearly visible through the MRP video display; this mistake makes the MRP operators look like anonymous intruders into the local space rather than approachable colleagues. To address this problem, this study explores the effects of visual feedback for remote teleoperators, using a controlled experiment in which mirrors were either present or absent in the local room with the MRP system (N=24). Participants engaged in a warm-up remote communication task followed by a remote driving task. Compared to mirrors-absent participants, mirrors-present participants were more visible on the MRP screens and practiced navigating longer. However, the mirrors-present participants also reported experiencing more frustration and having less fun. Implications for theory and design are discussed.

Are You Looking At Me?: Perception of Robot Attention is Mediated by Gaze Duration and Group Size (15:00)
Henny Admoni, Bradley Hayes, David Feil-Seifer, Daniel Ullman, Brian Scassellati (Yale University)

Studies in HRI have shown that people follow and understand robot gaze. However, only a few studies to date have examined the time-course of a meaningful robot gaze, and none have directly investigated what type of gaze is best for eliciting the perception of attention. This paper investigates two types of gaze behaviors---short, frequent glances and long, less frequent stares---to find which behavior is better at conveying a robot’s visual attention. We describe the development of a programmable research platform from MyKeepon toys, and the use of these programmable robots to examine the effects of gaze type and group size on the perception of attention. In our experiment, participants viewed a group of MyKeepon robots executing random motions, occasionally fixating on various points in the room or directly on the participant. We varied the type of gaze fixations within participants and group size between participants. Results show that people are more accurate at recognizing shorter, more frequent fixations than longer, less frequent ones, and that their performance improves as group size decreases. From these results, we conclude that multiple short gazes are preferable for indicating attention over one long gaze, and that the visual search for robot attention is susceptible to group size effects.

44 45

Map - Demo & Poster

Demo Session: 3/4 (16:30~18:00)
Late Breaking Reports and Poster Session: 3/4 (18:00~20:00)
Human-Robot Interaction Pioneers Workshop: 3/4 (18:00~20:00)
Exhibition: 3/3~3/6

[Floor map: LBR poster positions L01–L93, demo positions D01–D22, exhibition booths E01–E07, and Pioneers Workshop poster positions Y01–Y22, arranged around the Reception Room, CR 1–3, Miraikan Hall, Garden, Foyer, Lobby, Cloak Room, Registration Desk, Voting Box, Free Space, and Food area.]

46 47

Map - Late-Breaking Reports

Mon, 3/4 (18:00~20:00)
CR3 / Free Space / Lobby

[Floor map of the LBR poster layout: poster positions 01–93 arranged through CR 1–3, the Reception Room, Garden, Foyer, Free Space, and Lobby, together with the Demo and Exhibition areas, Miraikan Hall, Registration Desk, Voting Box, Cloak Room, and the Pioneers Workshop poster area.]

48 49

Late-Breaking Reports
Mon, 3/4 (18:00~20:00)
CR3/Free Space/Lobby

01. Experiencing the Familiar, Understanding the Interaction and Responding to a Robot Proactive Partner
Gentiane Venture (Tokyo University of Agriculture and Technology), Ritta Baddoura (Universite de Montpellier), Tianxiang Zhang (Tokyo University of Agriculture and Technology)

02. The Affect of Collaboratively Programming Robots in a 3D Virtual Simulation
Michael Vallance (Future University Hakodate)

03. ASAHI: OK for Failure
Yutaka Hiroi (Osaka Institute of Technology), Akinori Ito (Tohoku University)

04. The Vernissage Corpus: A Conversational Human-Robot-Interaction Dataset
Dinesh Babu Jayagopi (Idiap Research Institute), Samira Sheiki (Idiap Research Institute and EPFL), David Klotz, Johannes Wienke (Bielefeld University), Jean-Marc Odobez (Idiap Research Institute and EPFL), Sebastien Wrede (Bielefeld University), Vasil Khalidov, Laurent Nyugen (Idiap Research Institute), Britta Wrede (Bielefeld University), Daniel Gatica-Perez (Idiap Research Institute and EPFL)

05. The Effects of Familiarity and Robot Gesture on User Acceptance of Information
Aelee Kim (Sungkyunkwan University), Younbo Jung (Nanyang Technological University), Kwanmin Lee (University of Southern California), Jooyun Han (Sungkyunkwan University)

06. Balance-Arm Tablet Computer Stand for Robotic Camera Control
Peter Turpel, Bing Xia, Xinyi Ge, Shuda Mo, Steve Vozar (University of Michigan)

07. Embedded Multimodal Nonverbal and Verbal Interactions between a Mobile Toy Robot and Autistic Children
Irini Giannopulu (UCP & UPMC)

08. Recognition for Psychological Boundary of Robot
Chyon Hae Kim (Honda Research Institute Japan), Yumiko Yamazaki, Shunsuke Nagahama, Shigeki Sugano (Waseda University)

09. Interaction with an Agent in Blended Reality
Yusuke Kanai, Hirotaka Osawa, Michita Imai (Keio University)

10. Movement Synchronization Fails during Non-Adaptive Human-Robot Interaction
Tamara Lorenz (Ludwig-Maximilians-Universität), Alexander Mörtl, Sandra Hirche (Technische Universität München)

11. Design of Robot Eyes Suitable for Gaze Communication
Tomomi Onuki, Takafumi Ishinoda (Saitama University), Yoshinori Kobayashi (Saitama University, JST PRESTO), Yoshinori Kuno (Saitama University)

12. Ultra-fast Multimodal and Online Transfer Learning on Humanoid Robots
Daiki Kimura, Ryutaro Nishimura, Akihiro Oguro, Osamu Hasegawa (Tokyo Institute of Technology)

13. Individually Specialized Feedback Interface for Assistance Robots in Standing-up Motion
Asuka Takai, Chihiro Nakagawa, Atsuhiko Shintani, Tomohiro Ito (Osaka Prefecture University)

14. Gamification of a Recycle Bin with Emoticons
Jose Berengueres, Fatma Alsuwairi (United Arab Emirates University), Nazar Zaki, Salama Al Helli (College of Information Technology), Tony Ng (United Arab Emirates University)

15. Legible User Input for Intent Prediction
Kenton C. T. Lee (University of Pennsylvania), Anca D. Dragan, Siddhartha S. Srinivasa (Carnegie Mellon University)

16. Development of a Glove-Based Optical Fiber Sensor for Applications in Human-Robot Interaction
Eric Fujiwara, Danilo Yugo Miyatake, Murilo Ferreira Marques Santos, Carlos Kenichi Suzuki (State University of Campinas)

17. Unified Environment-Adaptive Control of Accompanying Robots Using Artificial Potential Field
Kazushi Nakazawa, Keita Takahashi, Masahide Kaneko (The University of Electro-Communications)

18. Directly or on Detours? How Should Industrial Robots Approximate Humans?
Dino Bortot, Maximilian Born, Klaus Bengler (Technische Universität München)

19. Enabling Clinicians to Rapidly Animate Robots
John Alan Atherton, Michael A Goodrich (Brigham Young University)

20. Be a Robot! Robot Navigation Patterns in a Path Crossing Scenario
Christina Lichtenthaeler (Technische Universität München), Annika Peters (Bielefeld University), Sascha Griffiths (Technische Universität München), Alexandra Kirsch (Tübingen University)

21. Perceptions of Affective Expression in a Minimalist Robotic Face
Casey C Bennett, Selma Šabanović (Indiana University)

22. Improving Teleoperated Robot Speed Using Optimization Techniques
Steve Vozar, Dawn Tilbury (University of Michigan)

23. Have You Ever Lied?: The Impacts of Gaze Avoidance on People’s Perception of a Robot
Jung Ju Choi (Ewha Womans University), Yunkyung Kim (KAIST), Sonya S. Kwak (Ewha Womans University)

24. Measurement of Rapport-Expectation with a Robot
Tatsuya Nomura (Ryukoku University), Takayuki Kanda (ATR Intelligent Robotics and Communication Laboratories)

25. Directing Robot Motions with Paralinguistic Information
Takanori Komatsu, Yuuki Seki (Shinshu University)

26. Quadrotor or Blimp? Noise and Appearance Considerations in Designing Social Aerial Robot
Chun Fui Liew, Takehisa Yairi (The University of Tokyo)

27. The Impacts of Intergroup Relations and Body Zones on People’s Acceptance of a Robot
Jung Ju Choi (Ewha Womans University), Yunkyung Kim (KAIST), Sonya S. Kwak (Ewha Womans University)

28. Perception during Interaction is Not Based on Statistical Context
Alessandra Sciutti, Andrea Del Prete, Lorenzo Natale (Istituto Italiano di Tecnologia), David Burr (Università degli Studi di Firenze), Giulio Sandini, Monica Gori (Istituto Italiano di Tecnologia)

29. Given that, Should I Respond? Contextual Addressee Estimation in Multi-Party Human-Robot Interactions
Dinesh Babu Jayagopi (Idiap Research Institute), Jean-Marc Odobez (Idiap Research Institute and EPFL)

30. Interactive Display Robot: Projector Robot with Natural User Interface
Sun-Wook Choi, Woong-Ji Kim, Chong Ho Lee (INHA University)

50 51

31. Robot Confidence and Trust Alignment
Poornima Kaniarasu, Aaron Steinfeld (Carnegie Mellon University), Munjal Desai, Holly Yanco (University of Massachusetts Lowell)

32. Integration of Work Sequence and Embodied Interaction for Collaborative Work Based Human-Robot Interaction
Jeffrey Too Chuan Tan, Tetsunari Inamura (National Institute of Informatics)

33. Providing Tablets as Collaborative-Task Workspace for Human-Robot Interaction
Hae Won Park, Ayanna Howard (Georgia Institute of Technology)

34. Influence of Robot-Issued Joint Attention Cues on Gaze and Preference
Sonja Caraian, Nathan Kirchner (University of Technology, Sydney)

35. Input Modality and Task Complexity: Do they Relate?
Gerald Stollnberger, Astrid Weiss, Manfred Tscheligi (CDL on Contextual Interfaces)

36. Neural Correlates of Empathy Towards Robots
Astrid Marieke Rosenthal-von der Pütten (University of Duisburg-Essen), Frank P. Schulte, Sabrina C Eimler, Laura Hoffmann, Sabrina Sobieraj, Stefan Maderwald, Nicole C Krämer, Matthias Brand (University of Duisburg-Essen)

37. Tell Me Your Story, Robot
Martina Mara (Ars Electronica Futurelab), Markus Appel (Johannes Kepler University of Linz), Hideaki Ogawa, Christopher Lindinger, Emiko Ogawa (Ars Electronica Futurelab), Hiroshi Ishiguro, Kohei Ogawa (Osaka University)

38. Integrating a Robot in a Tabletop Reservoir Engineering Application
Sowmya Somanath, Ehud Sharlin, Mario Costa Sousa (University of Calgary)

39. Adopt-a-Robot: A Story of Attachment
Damith C. Herath, Christian Kroos, Catherine Stevens, Denis Burnham (MARCS Institute)

40. Personal Service: A Robot that Greets People Individually Based on Observed Behavior Patterns
Dylan F Glas, Kanae Wada (ATR & Osaka University), Masahiro Shiomi, Takayuki Kanda (ATR), Hiroshi Ishiguro (ATR & Osaka University), Norihiro Hagita (ATR)

41. A Study of Effective Social Cues within Ubiquitous Robotics
Anara Sandygulova, David Swords, Sameh Abdel-Naby, G.M.P. O’Hare, Mauro Dragone (University College Dublin)

42. Designing Robotic Avatars
Angie Lorena Marin Mejia, Doori Jo, Sukhan Lee (Sungkyunkwan University)

43. Eliciting Ideal Tutor Trait Perception in Robots
Jonathan S Herberg, Dev C Behera, Martin Saerbeck (Institute of High Performance Computing)

44. Potential Use of Robots in Taiwanese Nursing Homes
Wan-Ling Chang, Selma Šabanović (Indiana University)

45. Elementary Science Lesson Delivered by Robot
Takuya Hashimoto, Hiroshi Kobayashi (Tokyo University of Science), Alex Polishuk, Igor Verner (Technion)

46. Question Strategy and Interculturality in Human-Robot Interaction
Mihoko Fukushima, Rio Fujita, Miyuki Kurihara, Tomoyuki Suzuki, Keiichi Yamazaki (Saitama University), Akiko Yamazaki (Tokyo University of Technology), Keiko Ikeda (Kansai University), Yoshinori Kuno, Yoshinori Kobayashi, Takaya Ohyama, Eri Yoshida (Saitama University)

47. Robots that Can Feel the Mood
Akira Imayoshi, Nagisa Munekata, Tetsuo Ono (Hokkaido University)

48. Swimoid: Interacting with an Underwater Buddy Robot
Yu Ukai (The University of Tokyo), Jun Rekimoto (The University of Tokyo / Sony CSL)

49. The Influence of Robot Appearance on Assessment
Kerstin Sophie Haring, Katsumi Watanabe (The University of Tokyo), Celine Mougenot (Tokyo Institute of Technology)

50. Development of RoboCup @Home Simulator
Tetsunari Inamura, Jeffrey Too Chuan Tan (National Institute of Informatics)

51. A Spatial Augmented Reality System for Intuitive Display of Robotic Data
Florian Leutert, Christian Herrmann, Klaus Schilling (Julius-Maximilians-Universität Würzburg)

52. Personalized Robotic Service using N-gram Affective Event Model
Gi Hyun Lim (Technische Universität München), Seung Woo Hong, Inhee Lee, Il Hong Suh (Hanyang University), Michael Beetz (University of Bremen)

53. Attention Control System Considering the Target Person’s Attention Level
Dipankar Das, Mohammed Moshiul Hoque (Saitama University), Yoshinori Kobayashi (Saitama University & JST PRESTO), Yoshinori Kuno (Saitama University)

54. Loneliness Makes The Heart Grow Fonder (Of Robots)
Friederike Eyssel (Bielefeld University), Natalia Reich (Technical University of Dortmund)

55. Robot Embodiment, Operator Modality, and Social Interaction in Tele-Existence
Christian Becker-Asano, Severin Gustorff, Kai Oliver Arras (Albert-Ludwigs-Universität Freiburg), Kohei Ogawa (Osaka University), Shuichi Nishio (Advanced Telecommunications Research Institute Intl.), Hiroshi Ishiguro (Osaka University)

56. Towards Empathic Artificial Tutors
Amol Deshmukh (Heriot-Watt University), Ginevra Castellano (University of Birmingham), Arvid Kappas (Jacobs University Bremen), Wolmet Barendregt (University of Gothenburg), Fernando Nabais (YDreams Robotics S.A), Ana Paiva, Tiago Ribeiro, Iolanda Leite (GAIPS, INESC-ID and Instituto Superior Tecnico), Ruth Aylett (Heriot-Watt University)

57. 3D Auto-Calibration Method for Head-Mounted Binocular Gaze Tracker as Human-Robot Interface
Su Hyun Kwon, Min Young Kim (Kyungpook National University)

58. Empathy between Human and Robot?
Doori Jo, Jooyun Han, Kyungmi Chung, Sukhan Lee (Sungkyunkwan University)

59. Single Assembly Robot in Search of Human Partner
Ross A. Knepper, Stefanie Tellex (Massachusetts Institute of Technology), Adrian Li (University of Cambridge), Nicholas Roy, Daniela Rus (Massachusetts Institute of Technology)

60. iProgram: Intuitive Programming of an Industrial HRI Cell
Jürgen Blume, Alexander Bannat, Gerhard Rigoll (Technische Universität München)

61. Robot-Human Hand-Overs in Non-Anthropomorphic Robots
Prasanna Kumar Sivakumar, Chittaranjan S Srinivas (SASTRA University), Andrey Kiselev, Amy Loutfi (Orebro University)

52 53

62. Emergence of Turn-taking in Unstructured Child-Robot Social Interactions
Paul Baxter (Plymouth University), Rachel Wood (University of Malta), Ilaria Baroni (Fondazione Centro San Raffaele), James Kennedy (Plymouth University), Marco Nalin (Fondazione Centro San Raffaele), Tony Belpaeme (Plymouth University)

63. Human-Agent Teaming for Robot Management in Multitasking Environments
Jessie Y.C. Chen, Stephanie Quinn, Julia Wright (U.S. Army Research Laboratory), Daniel Barber, David Adams (University of Central Florida), Michael Barnes (U.S. Army Research Laboratory)

64. The Role of Emotional Congruence in Human-Robot Interaction
Karoline Malchus, Petra Jaecks, Oliver Damm, Prisca Stenneken, Carolin Meyer, Britta Wrede (Bielefeld University)

65. Goal Inferences about Robot Behavior
Hedwig Anna Theresia Broers, Jaap Ham, Ron Broeders (Eindhoven University of Technology), P. Ravindra de Silva, Michio Okada (Toyohashi University of Technology)

66. Using the AffectButton to Measure Affect in Child and Adult-Robot Interaction
Robin Read, Tony Belpaeme (Plymouth University)

67. People Interpret Robotic Non-Linguistic Utterances Categorically
Robin Read, Tony Belpaeme (Plymouth University)

68. Is That Me? Sensorimotor Learning and Self-Other Distinction in Robotics
Guido Schillaci, Verena Vanessa Hafner (Humboldt-Universität zu Berlin), Bruno Lara (Universidad Autónoma del Estado de Morelos), Marc Grosjean (Leibniz Research Centre for Working Environment and Human Factors)

69. Survey of Metrics for Human-Robot Interaction
Robin Murphy (Texas A&M University), Debra Schreckenghost (TRACLabs, Inc.)

70. Designing for Sociality in HRI by Means of Multiple Personas in Robots
Jolina H. Ruckert, Peter H. Kahn Jr. (University of Washington), Takayuki Kanda (ATR), Hiroshi Ishiguro (ATR & Osaka University), Solace Shen, Heather E. Gary (University of Washington)

71. What Happens When a Robot Favors Someone?
Daphne E. Karreman, Gilberto U. Sepúlveda Bradford, Betsy E.M.A.G. van Dijk, Manja Lohse, Vanessa Evers (University of Twente)

72. Anthropomorphism in the Factory - A Paradigm Change?
Susanne Stadler, Astrid Weiss, Nicole Mirnig, Manfred Tscheligi (University of Salzburg)

73. Position-Invariant, Real-Time Gesture Recognition Based on Dynamic Time Warping
Saša Bodiroža (Humboldt-Universität zu Berlin), Guillaume Doisy (Ben-Gurion University of the Negev), Verena Vanessa Hafner (Humboldt-Universität zu Berlin)

74. Generating Finely Synchronized Gesture and Speech for Humanoid Robots: A Closed-Loop Approach
Maha Salem, Stefan Kopp (Bielefeld University), Frank Joublin (Honda Research Institute Europe)

75. The NAO Models for the Elderly
David López Recio (Mobile Life @ KTH), Elena Márquez Segura (Mobile Life @ SICS), Luis Márquez Segura (Fonserrana S.C.A de interés social), Annika Waern (Mobile Life @ Stockholm University)

76. A Robotic Therapy for Children with TBI
Alex Barco, Jordi Albo-Canals, Miguel Kaouk Ng, Carles Garriga (LIFAELS La Salle), Laura Callejón, Marc Turón, Claudia Gómez, Anna López-Sala (Servei de Neurologia - Hospital Sant Joan de Déu (HSJD))

77. Where to Look and Who to Be
Lorin D Dole, David M Sirkin, Rebecca M Currano (Stanford University), Robin R Murphy (Texas A&M University), Clifford I Nass (Stanford University)

78. BioSleeve: a Natural EMG-Based Interface for HRI
Christopher Assad, Michael Wolf (Jet Propulsion Laboratory, California Institute of Technology), Theodoros Theodoridis (University of Essex), Kyrre Glette (University of Oslo), Adrian Stoica (Jet Propulsion Laboratory, California Institute of Technology)

79. Execution Memory for Grounding and Coordination
Stephanie Rosenthal, Sarjoun Skaff (Bossa Nova Robotics), Manuela Veloso (Carnegie Mellon University), Dan Bohus, Eric Horvitz (Microsoft Research)

80. Spatially Unconstrained, Gesture-Based Human-Robot Interaction
Guillaume Doisy (Ben-Gurion University of the Negev), Aleksandar Jevtic (Robosoft), Saša Bodiroža (Humboldt-Universität zu Berlin)

81. A Wearable Visuo-Inertial Interface for Humanoid Robot Control
Junichi Sugiyama, Jun Miura (Toyohashi University of Technology)

82. Listening to vs Overhearing Robots in a Hotel Public Space
Yadong Pan, Haruka Okada, Toshiaki Uchiyama, Kenji Suzuki (University of Tsukuba)

83. Using Human Approach Paths to Improve Social Navigation
Eleanor Avrunin, Reid Simmons (Carnegie Mellon University)

84. Learning from the Web: Recognition Method Based on Object Appearance from Internet Images
Enrique Hidalgo-Peña, Luis Felipe Marin-Urias, Fernando Montes-González, Antonio Marín-Hernández, Homero Vladimir Ríos-Figueroa (Universidad Veracruzana)

85. Towards a Comprehensive Chore List for Domestic Robots
Maya Cakmak, Leila Takayama (Willow Garage, Inc.)

86. Human Pointing as a Robot Directive
Syed Shaukat Raza Abidi, MaryAnne Williams, Benjamin Johnston (University of Technology, Sydney)

87. LMA based Emotional Motion Representation using RGB-D Camera
Woo Hyun Kim, Jeong Woo Park, Won Hyong Lee (Korea Advanced Institute of Science and Technology), Hui Sung Lee (Hyundai-KIA MOTORS), Myung Jin Chung (Korea Advanced Institute of Science and Technology)

88. Effects of Robot Capability on User Acceptance
Elizabeth Cha, Anca D Dragan, Siddhartha S Srinivasa (Carnegie Mellon University)

89. Interactive Facial Robot System on a Smart Device
Won Hyong Lee, Jeong Woo Park, Woo Hyun Kim, Myung Jin Chung (Korea Advanced Institute of Science and Technology)

90. Use of Seal-like Robot PARO in Sensory Group Therapy for Older Adults with Dementia
Wan-Ling Chang, Selma Šabanović, Lesa Huber (Indiana University)

91. Developing Therapeutic Robot for Children with Autism: A Study on Exploring Colour Feedback
Jaeryoung Lee, Goro Obinata (Nagoya University)

Video Session
Wed, 3/6 (15:40~16:40)
7F Miraikan Hall

I Sing the Body Electric: An Experimental Theatre Play with Robots
Jakub Zlotowski, Timo Bleeker, Christoph Bartneck, Ryan Reynolds (University of Canterbury)

Swimoid: Interacting with an Underwater Buddy Robot
Yu Ukai (The University of Tokyo), Jun Rekimoto (The University of Tokyo / Sony CSL)
The methodology of presenting information from robots to humans in underwater environments has become an important topic because of the rapid technological advancement in underwater vehicles and underwater applications. However, this topic has not yet been fully investigated in the research field of Underwater Human-Robot Interaction (UHRI). We propose a new concept of an underwater robot called the "Buddy Robot": a category of underwater robot with two abilities, recognizing and following the user, and giving visual information to the user through display devices. As one specific example of the concept, we developed a swim support system called "Swimoid". Swimoid consists of three parts: the hardware, the control software, and functions that support swimmers in three ways: self-awareness, coaching, and game. The self-awareness function enables swimmers to see their own swimming form in real time. The coaching function enables coaches on the poolside to give instructions to swimmers by drawing shapes. The game function helps novice swimmers get familiar with the water in a fun way. User tests confirmed that the system works properly, based on the test users' comments.

Robot - Interactive Continuous Learning of Visual Concepts
Michael Zillich, Kai Zhou (Vienna University of Technology), Danijel Skocaj, Matej Kristan, Alen Vrecko (University of Ljubljana), Miroslav Janicek, Geert-Jan M. Kruijff (DFKI), Thomas Keller (Albert-Ludwigs-Univ.), Marc Hanheide, Nick Hawes (University of Birmingham), Marko Mahnic (University of Ljubljana)
The video presents the robot George learning visual concepts in dialogue with a tutor.

Natural Interaction for Object Hand-Over
Mamoun Gharbi, Séverin Lemaignan, Jim Mainprice, Rachid Alami (CNRS, LAAS, Univ de Toulouse)
The video presents in a didactic way several abilities and algorithms required to achieve interactive "pick and give" tasks in a human environment. Communication between the human and the robot relies on unconstrained verbal dialogue; the robot uses multi-modal perception to track the human and its environment, and implements real-time 3D motion planning algorithms to achieve collision-free and human-aware interactive manipulation.

New Clay for Digital Natives' HRI: Create Your Own Interactions with SiCi
Jae-Hyun Kim, Jae-Hoon Jung, Jin-Sung Kim, Yong-Gyu Jin, Jung-Yun Sung, Se-Min Oh, Jae-Sung Ryu, Hyo-Yong Kim (Hansung University), Soo-Hee Han (Konkuk University), Hye-Kyung Cho (Hansung University)
This work-in-progress video introduces SiCi (smart ideas for creative interplay), an authoring tool to create a new type of robot content by combining interactions among multimedia entities in the virtual world with robots in the real world.

Talking to my Robot: From Knowledge Grounding to Dialogue Processing
Séverin Lemaignan, Rachid Alami (CNRS, LAAS, Univ de Toulouse)
The video presents in a didactic way the tools developed at LAAS-CNRS for symbol grounding and natural language processing for companion robots. It mainly focuses on two of them: the ORO-server knowledge base and the Dialogs natural language processor. These two tools enable three cognitive functions that allow for better natural interaction between humans and robots: a 'theory of mind' built upon 'perspective taking'; 'multi-modal' communication that combines verbal input with gestures; and a limited 'symbol grounding' capability with a disambiguation mechanism supported by the first two cognitive abilities.

The Oriboos Going to Nepal: A Story of Playful Encounters
Elena Márquez Segura (Mobile Life @ SICS), Jin Moen (Movinto Fun), Annika Waern (Mobile Life @ Stockholm University), Adrián Onco Orduna (None)
We created a fictional story about a bunch of interactive robot toys, the Oriboos, which travel to different schools where children interact and play with them. The story is based on two workshops done in Sweden and Nepal.

LSInvaders
Anna Fusté, Judith Amores, Sergi Perdices, Santi Ortega, David Miralles (La Salle - Universitat Ramon Llull)
A cross-reality environment inspired by the arcade game Space Invaders. LSInvaders was born from the willingness to explore collaboration between humans and robots. We have developed a project with the aim of unifying different types of interactions in a cooperative environment. The difference between Space Invaders and LSInvaders is that the user receives the help of a physical robot with artificial intelligence to get past the level. Although the system itself is a basic game, and it seems like it would make no difference whether the robot is embodied or a typical virtual AI agent, its participation as a real element improves the gameplay.

iRIS: A Remote Surrogate for Mutual Reference
Hiroaki Kawanobe, Yoshifumi Aosaki, Hideaki Kuzuoka (University of Tsukuba), Yusuke Suzuki (Oki Electric Industry Co., Ltd.)
In this video, we introduce iRIS, a remote surrogate robot that facilitates mutual reference to a physical object over a distance. The robot has a display that shows the remote participant's head, mounted on a 3-DOF neck. The robot also has a built-in projector that enables the remote participant to show his/her actual hand gestures on a physical object in the local environment.

Talking-Ally: Towards Persuasive Communication
Yuki Odahara, Youhei Kurata, Naoki Ohshima, P. Ravindra S De Silva, Michio Okada (Toyohashi University of Technology)
We develop a social robot (Talking-Ally) that is capable of tracking the state of the person (addressee) through an utterance generation mechanism (addressivity) that refers to the hearer's resources (hearership) in order to persuade the user through dynamic interactions.

Emo-bin: How to Recycle more by using Emoticons
Jose Berengueres, Fatma Alsuwari, Nazar Zaki, Salama Alhilli (UAE University), Tony Ng (College of Information Technology)
In the UAE only 10% of PET bottles are recycled. We introduce an emoticon bin, a recycling bin that rewards users with smiles and sounds: when a user recycles, the bin smiles. We show that by exploiting human responsiveness to emoticons, recycling rates increase by a factor of 3.

A Dog Tail for Communicating Robotic States
Ashish Singh, James E. Young (University of Manitoba)
We present a dog-tail interface for communicating abstract affective robotic states. We believe that people have a passing knowledge of basic dog tail language (e.g., tail wagging means happy), and that this knowledge can be leveraged to understand the affective states of a robot. For example, by appearing energetic, a robot can suggest that it has a full battery and does not need charging. To investigate this, we built a robotic tail interface to communicate the affective states of a robot and conducted an exploratory user study to explore how low-level tail parameters such as speed influence people's perceptions of affect. In this video, we briefly describe our study design and the results obtained.

A Model of Handing Interaction Towards a Pedestrian
Chao Shi, Masahiro Shiomi (ATR), Christian Smith (Royal Institute of Technology), Takayuki Kanda (ATR), Hiroshi Ishiguro (Osaka University)
This video reports our research on developing a model for a robot handing flyers to pedestrians. The difficulty is that potential receivers are pedestrians who are not necessarily cooperative; thus, the robot needs to plan its motion appropriately, making it easy and non-obstructive for potential receivers to take the flyers. In order to establish a model, we analyzed human interaction and found that (1) a giver approaches a pedestrian from the frontal right or left but not the frontal center, and (2) the giver simultaneously stops walking and extending the arm at the moment of handing out the flyer. Using these findings, we established a model for a robot to perform natural proactive handing. The proposed model was implemented in a humanoid robot and confirmed to be effective in a field experiment.

Interactive Object Modeling & Labeling for Service Robots
Alexander J. B. Trevor, John G. Rogers III, Akansel Cosgun, Henrik I. Christensen (Georgia Institute of Technology)
We present an interactive object modeling and labeling system for service robots. The system enables a user to interactively create object models for a set of objects. Users also provide a label for each object, allowing it to be referenced later. Interaction with the robot occurs via a combination of a smartphone UI and pointing gestures.

CULOT: Sociable Creature for Child's Playground
Nozomi Kina, Daiki Tanaka, Naoki Ohshima, P. Ravindra S De Silva, Michio Okada (Toyohashi University of Technology)
The video shows CULOT, a sociable robot and playground character that establishes play routines with children and develops a playground language through inarticulate sounds, synchronizing its moving behaviors and body gestures.

Coaching Robots with Biosignals based on Human Affective Social Behaviors
Kenji Suzuki, Anna Gruebler, Vincent Berenz (University of Tsukuba)
We introduce a novel paradigm of social interaction between humans and robots: a style of coaching humanoid robots through interaction with a human instructor, who provides reinforcement via affective/social behaviors and biological signals. In particular, facial electromyography (EMG), captured with a personal wearable device to measure affective human responses, is used as guidance or feedback to shape robot behavior. Through real-time pattern classification, facial expressions can be identified from these signals and interpreted as positive or negative responses from the human. We also developed a behavior-based architecture for testing this approach in the context of complex reactive robot behaviors.

Map - Demo & Poster
Mon, 3/4 (16:30~18:00)
CR1/CR2/Free Space/Lobby

[Floor map: demo booths D01-D22 are distributed across CR1, CR2, the Free Space, the Foyer, and the Lobby. Also marked are the LBR poster and Pioneers Workshop poster areas, Miraikan Hall, the Exhibition space (CR2/CR3), the Garden, Waiting Space, Meeting Room, Reception Room, Voting Box, Registration Desk, and Cloak Room.]

Demo Session
Mon, 3/4 (16:30~18:00)
CR1/CR2/Free Space/Lobby

D01. Robotic Wheelchair Moving Alongside a Companion
Ryota Suzuki, Yoshihisa Sato, Yoshinori Kobayashi, Yoshinori Kuno, Keiichi Yamazaki (Saitama University), Masaya Arai, Akiko Yamazaki (Tokyo University of Technology)

D02. Strengthening Social Telepresence and Social Bonding by a Remote Handshake
Yuya Wada, Kazuaki Tanaka, Hideyuki Nakanishi (Osaka University)

D03. TEROOS: A Wearable Avatar to Enhance Joint Activities
Tadakazu Kashiwabara, Tatsuki Ohnishi, Hirotaka Osawa (Keio University), Kazuhiko Shinozawa (ATR), Michita Imai (Keio University)

D04. Why Do the People Interact with the Shadow Circles?
Takafumi Sakamoto, Yugo Takeuchi (Shizuoka University)

D05. An Open Hardware and Software Microphone Array System for Robotic Applications
François Grondin, Dominic Létourneau, François Ferland, François Michaud (Université de Sherbrooke)

D06. The Design of Social Robot
Seita Koike (Tokyo City University)

D07. Evoking an Affection for Communication Partner by a Robotic Communication Medium
Takashi Minato, Shuichi Nishio (ATR), Hiroshi Ishiguro (Osaka University)

D08. PINOKY: A Ring That Animates Your Plush Toys
Yuta Sugiura, Yasutoshi Makino (Keio University), Daisuke Sakamoto (The University of Tokyo), Masahiko Inami (Keio University), Takeo Igarashi (The University of Tokyo)

D09. A Surrogate Robot with Deictic Pointing Capability
Hiroaki Kawanobe, Yoshifumi Aosaki, Hideaki Kuzuoka (University of Tsukuba), Yusuke Suzuki (Oki Electric Industry Co., Ltd.)

D10. Interaction with Blended Reality Agent "BReA"
Yusuke Kanai, Hirotaka Osawa, Michita Imai (Keio University)

D11. Sociable Creatures for Child-Robot Interaction Studies
Yasutaka Takeda, Shohei Sawada, Tatsuya Mori, Kohei Yoshida, Yu Arita, Takahiro Asano, Naoki Ohshima, Ravindra De Silva, Michio Okada (Toyohashi University of Technology)

D12. Physiotherapy demonstration with NAO
David López Recio, Elena Márquez Segura (Mobile Life), Luis Márquez Segura (Fonserrana S.C.A de interés social), Annika Waern (Mobile Life)

D13. Human-Agent Interaction by Transformative Agential Triggers
Hirotaka Osawa, Michita Imai (Keio University)

D14. Superimposed Self-Character Mediated Video Chat System for Face-to-Face Interaction Based on the Detection of Talkers' Face Angles
Tomohiro Takada, Shiho Nakayama, Yutaka Ishii, Tomio Watanabe (Okayama Prefectural University)

D15. Shape Shifting Information Notification based on Peripheral Cognition Technology
Kazuki Kobayashi (Shinshu University), Seiji Yamada (National Institute of Informatics)

D16. Perceiving the Active Chair: Agent or Machine?
Peng Li, Kazunori Terada, Akira Ito (Gifu University)

D17. Building Table-Talk Agents that Create a Pleasant Atmosphere
Masahide Yuasa (Tokyo Denki Univ.)

D18. Robot Gangnam Style!
Eduardo Sandoval, Christoph Bartneck (University of Canterbury)

D19. Attitude-Aware Communication Behaviors of a Partner Robot: Politeness for the Master
Tomoko Yonezawa (Kansai University), Hirotake Yamazoe (Osaka University), Yuichi Koyama (Nagoya University), Naoto Yoshida (Kansai University), Shinji Abe (Hiroshima Institute of Technology), Kenji Mase (Nagoya University)

D20. Ikitomical Model: Extended Body Sensation through a Cardiovascular Robot
Yuka Nagata, Yuka Niimoto, Naoto Yoshida, Tomoko Yonezawa (Kansai University)

D21. Exploring Anti-Social Behavior as a Method to Understand Aspects of Social Behavior
Takao Watanabe, Masakatsu Fujie (Waseda University), Angelika Mader, Edwin Dertien (University of Twente)

D22. Natural 3D Controls for Space Robotic Systems
Garrett Johnson, Victor Luo, Alex Menzies, David Mittman, Jeffrey Norris, Scott Davidoff (NASA Jet Propulsion Laboratory)

Exhibition
Sun, 3/3~Wed, 3/6
CR2

E01: Aldebaran Robotics
E02: Physio-Tech Co., Ltd
E03: Disney Research
E04: Bossa Nova Robotics
E05: ATR Creative
E06: Barrett Technology
E07: The MIT Press



Sponsorship

Platinum Level Supporters


Silver Level Supporters

Technical Co-Sponsors

Bronze Level Supporters

9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014)
March 3-6, 2014 in Bielefeld (Germany)
http://www.humanrobotinteraction.org/2014/

(E)Merging Perspectives

Progress in research on human-robot interaction is commonly driven by either a user’s or a system’s perspective. However, what does it mean to interweave both system- driven approaches and empirical research in order to advance new and possibly unorthodox methodologies? To extend current approaches, we strongly call for papers that demonstrate the usage of novel empirical methods, the integration of empirical findings into complex robot systems, and systemic approaches to evaluate systems.

Conference Venue. Intelligent Systems research building at Bielefeld University (Germany): http://www.campus-bielefeld.de/en/campus-bielefeld-2025/

Important Dates. September 10, 2013: Submission of full papers, videos, and tutorial / workshop proposals (tentative).

General Co-Chairs: Gerhard Sagerer (Bielefeld University, Germany), Michita Imai (Keio University, Japan)
Program Co-Chairs: Tony Belpaeme (Plymouth University, United Kingdom), Andrea Thomaz (Georgia Tech, USA)
Financial Co-Chairs: Franz Kummert (Bielefeld University, Germany), Masahiro Shiomi (ATR, Japan)
Local Arrangement Co-Chairs: Britta Wrede (Bielefeld University, Germany), Friederike Eyssel (Bielefeld University, Germany)

HRI14 Human-Robot Interaction @ Bielefeld University

Organizers

Organizing Committee:

General Co-Chairs
Hideaki Kuzuoka, University of Tsukuba
Vanessa Evers, University of Twente

Program Co-Chairs
Michita Imai, Keio University
Jodi Forlizzi, Carnegie Mellon University

Local Arrangement Co-Chairs
Fumihide Tanaka, University of Tsukuba
Hirotaka Osawa, Keio University

Finance Co-Chairs
Masahiro Shiomi, ATR
Leila Takayama, Willow Garage

Workshop and Tutorials Co-Chairs
Astrid Weiss, University of Salzburg
David Feil-Seifer, Yale University

Late-breaking Reports Co-Chairs
Andreas Kolling, Carnegie Mellon University
Cindy Bethel, Mississippi State University

Video Session Co-Chairs
Wendy Ju, California College of the Arts
Tony Belpaeme, University of Plymouth

Panels Chair
Peter Kahn, University of Washington

Publications Co-Chairs
Jacob Crandall, Masdar Institute of Science and Technology
Selma Sabanovic, Indiana University

Fundraising & Exhibitions Co-Chairs
Jennifer Burke, The Boeing Company
Daisuke Sakamoto, The University of Tokyo

Design Chair
Sonya S. Kwak, Ewha Womans University

Web Chair
Komei Sugiura, NICT

Publicity Co-Chairs
Aude Billard, EPFL
Dong-Soo Kwon, KAIST

Registration Co-Chairs
Joshua M. Peschel, University of Illinois at Urbana-Champaign
Satoru Satake, ATR

Demo Co-Chairs
Kazunori Terada, Gifu University
Hirotaka Osawa, Keio University

HRI Steering Committee:

Co-chairs
Julie A. Adams, Vanderbilt University, USA
Takayuki Kanda, ATR, Japan
Vanessa Evers, University of Twente, The Netherlands (Co-Chair elect)

Past Chairs
Michael A. Goodrich, Brigham Young University, USA

Members (in alphabetical order)
Tony Belpaeme, University of Plymouth, UK
Jodi Forlizzi, Carnegie Mellon University, USA
Michita Imai, Keio University, Japan
Odest Chadwicke Jenkins, Brown University, USA
Peter Kahn, University of Washington, USA
Sara Kiesler, Carnegie Mellon University, USA
Franz Kummert, Bielefeld University, Germany
Hideaki Kuzuoka, University of Tsukuba, Japan
Dong-Soo Kwon, KAIST, South Korea
Bilge Mutlu, University of Wisconsin–Madison, USA
Gerhard Sagerer, Bielefeld University, Germany
Brian Scassellati, Yale University, USA
Masahiro Shiomi, ATR, Japan
Aaron Steinfeld, Carnegie Mellon University, USA
Leila Takayama, Willow Garage, USA
Fumihide Tanaka, University of Tsukuba, Japan
Andrea Thomaz, Georgia Institute of Technology, USA
Holly Yanco, University of Massachusetts Lowell, USA

Program Committee
Kai Oliver Arras, Freiburg University, Germany
Tony Belpaeme, University of Plymouth, UK
Sylvain Calinon, EPFL, Switzerland
Elizabeth Croft, University of British Columbia, Canada
Terry Fong, NASA Ames, USA
Mike Goodrich, Brigham Young University, USA
Jeonghye Han, Cheongju National University of Education, Korea
Odest Chadwicke Jenkins, Brown University, USA
Takayuki Kanda, ATR, Japan
Bilge Mutlu, University of Wisconsin, USA
Yukie Nagai, Osaka University, Japan
Tatsuya Nomura, Ryukoku University, Japan
Paul Rybski, Carnegie Mellon University, USA
Gerhard Sagerer, Bielefeld University, Germany
Brian Scassellati, Yale University, USA
Candace Sidner, Worcester Polytechnic Institute, USA
Sidd Srinivasa, Carnegie Mellon University, USA
Aaron Steinfeld, Carnegie Mellon University, USA
Leila Takayama, Willow Garage, Inc., USA
Adriana Tapus, ENSTA-ParisTech, France
Andrea Thomaz, Georgia Institute of Technology, USA
Greg Trafton, Naval Research Laboratory, USA
Manfred Tscheligi, University of Salzburg, Austria
Holly Yanco, University of Massachusetts Lowell, USA

Reviewers

Adams, Julie; Admoni, Henny; Al Mahmud, Abdullah; Alissandrakis, Aris; Aly, Amir; Ammi, Mehdi; Arras, Kai Oliver; Asfour, Tamim;
Bailenson, Jeremy; Becker-Asano, Christian; Beer, Jenay; Behnke, Sven; Bellotto, Nicola; Belpaeme, Tony; Berlin, Matt; Bethel, Cindy; Blisard, Sam; Bodiroža, Saša; Boedecker, Joschka; Breslow, Leonard; Bugajska, Magdalena;
Cabibihan, John-John; Cakmak, Maya; Calhoun, Gloria; Calisgan, Ergun; Carlson, Tom; Castellano, Ginevra; Chan, Wesley; Chang, Chen Ya; Chao, Crystal; Chen, Jessie; Chen, Tiffany; Cheron, Guy; Chetouani, Mohamed; Chien, Shih-Yi; Crandall, Jacob; Craven, Patrick; Crick, Christopher; Croft, Elizabeth; Cross, E. Vincent; Cuayáhuitl, Heriberto; Culbertson, Heather; Çürüklü, Baran;
Damm, Oliver; Dang, Thi-Hai-Ha; De Silva, Ravindra; DePalma, Nick; Desai, Munjal; Dias, Jorge; Dragan, Anca; Duh, Been-Lirn;
Ellis, Stephen;
Fasola, Juan; Feil-Seifer, David; Ferrane, Isabelle; Fong, Terry; Freier, Nathan; Fukuda, Haruaki; Fussell, Susan;
Gartenberg, Dan; Geyer, Hartmut; Gielniak, Michael; Glas, Dylan; Gleeson, Brian; Goodrich, Michael; Green, Keith; Greenspan, Michael; Griffiths, Sascha; Grindle, Garrett; Grollman, Daniel;
Haddadi, Amir; Hagita, Norihiro; Han, Jeonghye; Harrison, Anthony; Hart; Hayes, Brad; Hegel, Frank; Hemion, Nikolas; Herath, Damith C; Hoffman, Guy; Howard, Ayanna; Hsiao, Kaijen; Huang, Chien-Ming;
Ikemoto, Shuhei; Ishii, Kentaro;
Jansen, Bart; Jenkins, Odest Chadwicke; Jevtic, Aleksandar; Johnson, David;
Kahn, Peter; Kanda, Takayuki; Kay, Jennifer; Kennedy, James; Kennedy, William; Khansari-Zadeh, Seyed Mohammad; Kidd, Cory; Kim, Chyon Hae; Kim, Hyoung-Rock; Kim, Yunkyung; Kirchner, Nathan; Kirsch, Alexandra; Knox, W. Bradley; Koay, Kheng Lee; Kollar, Thomas; Komatsu, Takanori; Kopp, Stefan; Kozima, Hideki; Kruijff, Geert-Jan; Kruijff, Ivana; Kuchenbecker, Katherine; Kulic, Dana; Kulkarni, Kaustubh; Kwak, Jun-young;
Lammer, Lara; Lee, Min Kyung; Lewis, Michael; Leyzberg, Dan; Lohan, Katrin Solveig; Lohse, Manja; Looije, Rosemarijn; Lu, David; Luber, Matthias; Lütkebohle, Ingo;
Makatchev, Maxim; Manohar, Vimitha; Marin, Luis; Marquez, Jessica; Martinson, Eric; Mavridis, Nikolaos; McDermott, Patricia; Menegatti, Emanuele; Michaud, Francois; Micire, Mark; Miller, Christopher; Moshkina, Lilia; Mower Provost, Emily; Mubin, Omar; Muelling, Katharina; Murray, John; Mutlu, Bilge;
Nagai, Yukie; Nakanishi, Hideyuki; Nakano, Yukiko; Nandakumar, Karthik; Nevatia, Yashodhan; Nguyen, Hai; Niekum, Scott; Nielsen, Curtis; Noma, Haruo; Nomura, Tatsuya; Nowaczyk, Slawomir;
Ogata, Tetsuya; Ono, Tetsuo; Osawa, Hirotaka;
Pais, Ana Lucia; Pandey, Anshul Vikram; Park, Chung Hyuk; Peschel, Joshua; Pitsch, Karola; Pollick, Frank; Powers, Aaron; Prakash, Akanksha;
Randelli, Gabriele; Read, Robin; Rich, Charles; Riek, Laurel D.; Robert, David; Rolf, Matthias; Rosenthal, Stephanie; Rufli, Martin; Rybski, Paul;
Sabanovic, Selma; Sagerer, Gerhard; Sakamoto, Daisuke; Scassellati, Brian; Severinson-Eklundh, Kerstin; Shah, Julie; Sharkey, Amanda; Shei, Irene; Shic, Frederick; Shiomi, Masahiro; Sidner, Candace; Siepmann, Frederic; Simmons, Reid; Sinapov, Jivko; Sisbot, Emrah Akin; Sklar, Elizabeth; Srinivasa, Siddhartha; St. Clair, Aaron; Steinfeld, Aaron; Sung, Ja-Young; Susnea, Ioan; Suutala, Jaakko; Syrdal, Dag Sverre; Szafir, Daniel;
Taha, Tarek; Takayama, Leila; Takeuchi, Yugo; Tamura, Yusuke; Tanaka, Fumihide; Tapus, Adriana; Teller, Seth; Tellex, Stefanie; Terada, Kazunori; Tessier, Catherine; Thomaz, Andrea; Topp, Elin Anna; Trafton, Greg; Tsui, Katherine;
Ugur, Emre;
Vallance, Michael; Varakin, Alex; Vatavu, Radu-Daniel; Vollmer, Anna-Lisa;
Walters, Michael; Weiss, Astrid; Wrede, Britta; Würtz, Rolf;
Yanco, Holly; Yano, Hiroaki; Yim, Ji-Dong; Yoshikawa, Yuichiro; Youngquist, Jim;
Zeglin, Garth; Zhang, Kai; Zhu, Chun

8th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013)
March 3-6, 2013, Tokyo, Japan
http://humanrobotinteraction.org/2013/