Use of Low Cost Personal Robots for the Encouragement, Demonstration and Assessment of Physical Activities

Thitipong Nandhabiwat
M.Sc. (Information Science), University of Pittsburgh, USA
B.Sc. (Computer Science), University of Waikato, NZ

This thesis is presented for the degree of Doctor of Information Technology of Murdoch University

January 2011


DECLARATION

I declare that this thesis is my own account of my research and contains as its main content work which has not previously been submitted for a degree at any tertiary education institution. To the best of my knowledge, the thesis contains no material previously published or written by another person, except where due reference has been made in the text.

______Thitipong Nandhabiwat


ABSTRACT

One of the biggest challenges today is the issue of an overweight population, which is expected to impose tremendous pressure on the economic and financial health of a nation. The problem now affects developed and developing countries alike. In countries such as the USA and Australia, it has been reported that over 50% of the adult population are now either overweight or obese, and the situation is expected to get worse. In developing countries such as China, India and Thailand, a similar problem is emerging, in particular among the younger generation, which has been exposed to a wide variety of processed or “junk” food. Apart from dietary and genetic reasons, the root cause of the weight problem is a lack of physical activity. With the increasing choices of indoor entertainment and transport facilities, in particular private cars, adults and children now on average burn fewer calories each day while consuming more food loaded with high levels of sugar and fat. One way to combat the issue is to encourage more physical activity through games or exercises. However, this can be a double-edged sword, as injuries have been reported due to inappropriate practice of exercises such as yoga without the advice of formal trainers or instructors. Hence, the aim of this study is to investigate and develop low cost personal robots for the encouragement, demonstration and assessment of physical activities, with the objective of improving the overall health of the population. Most robotics research so far has yet to focus on low-cost off-the-shelf robots; this research is therefore one of the few that focuses on this category of robots. The study also investigates the opportunities and potential of low-cost off-the-shelf robots and how to develop them beyond their default capability.

This research has illustrated the feasibility of using low cost personal robots as a supplement to help or assist people of all ages in various kinds of physical exercise during their leisure time in a home environment. In this study, Thai school children have been motivated to learn and practise traditional Thai dances with the robots. This is not just for the sake of physical activity; it also helps to preserve the culture and traditions among future generations in Thailand.

The initial phase of the research was a literature review on the background of robots, robotic applications, and human-robot interaction (HRI) as an emerging discipline. The thesis also describes the two low cost robots used in this research, the WowWee Robosapien Media and the Speecys SPC-101C, and demonstrates their use as trainers for the purposes of education, entertainment and training. Four categories of routines, namely Thai dance, yoga, cheerleading dance, and exercises for the elderly, have been used as examples of physical exercises. Dance robots, entertainment robots and Thai dances are also discussed in this thesis.

Another outcome of the research is a demonstration of the assessment of a human subject’s movements by a robot. While most people are trained by, or learn from, human trainers, this approach has its limitations due to cost and the number of trainees that can be handled at any one time. While it is possible to learn from media such as television, the Internet, CD or DVD, the trainees will not be able to get any feedback or assistance, and this has been the main cause of injuries. The study shows that a robot is capable of providing demonstrations in three dimensions as well as providing feedback; it can also be used to assess the movements of the human subject through image and video processing and measurement. In this project, customised software has been developed to control the robot, and two web cameras have been used to monitor and capture the human movements for assessment and feedback.

The outcome of this study has demonstrated that the two low-cost off-the-shelf humanoid robots used in this research are capable of assisting and enhancing the performance, enjoyment, and motivation of the participants through interactive activities with the robots. The performance of the participants who attended the training sessions improved through training, and the improvement is demonstrated through pre-test and post-test assessments. In this thesis, detailed results of the study are reported and future research work is also discussed.


ACKNOWLEDGEMENT

First and foremost, I would like to humbly give my thanks and gratitude to the King of Thailand, His Majesty King Bhumibhol Adulyadej, for always bestowing on the Thai people countless precious lifelong thoughts and life lessons. It is my utmost honor and privilege to serve under His Majesty’s throne and the country. I pledge myself to give all my devotion to serve society for the betterment of the country.

Secondly, I have to thank my dad and mom for their greatest support and understanding in every aspect of my life. Also, thanks for the invaluable support from my wife, Napapan Nandhabiwat, and my son, Dhubdheb Nandhabiwat. Without the inspiration from all of you, I would not have made it.

Very special thanks and gratitude are wholeheartedly dedicated to my principal supervisor, Associate Professor Dr. Lance C.C. Fung, for his magnificent supervision, advice, support and guidance. To me, you are the greatest supervisor I have known. You are more than just a supervisor; you are the world of giving. You have taught me so much, not only through academic lessons, but also through life lessons. Saying thank you a million times will never suffice. Once again, thank you from the bottom of my heart. Thank you also to my co-supervisor, Associate Professor Dr. Kevin Wong, for your kind supervision and suggestions.


I wish to express my sincere thanks and gratitude to the President of Rangsit University, Dr. Arthit Ourairat, for his encouragement and support given to me in pursuit of this doctorate degree.

Last but not least, I would like to thank the faculty members at the University of Pittsburgh and the University of Waikato for preparing me for lifelong learning in the real world, and for providing me with invaluable academic experience and training.


LIST OF PUBLICATIONS

[1] C. C. Fung, T. Nandhabiwat, A. Depickere, and K. Malaivongs. “Thai Dance Training-Assist Robot,” in the Proceedings of the 8th Electrical Engineering and Computing Symposium (PEECS 2007), Western Australia: Curtin University of Technology, 2007, pp. 108-111.

[2] C. C. Fung, T. Nandhabiwat, and A. Depickere. “iThaiSTAR – a Low-Cost Humanoid Robot for Entertainment and Teaching Thai Dances,” in the Lecture Notes in Computer Science, Technologies for E-Learning and Digital Entertainment, Springer Berlin / Heidelberg, Vol. 5093/2008, pp. 99-106.

[3] C. C. Fung and T. Nandhabiwat. “Comparing Two Off-the-shelf Robots for the Demonstration of Thai Folk Dances – an Initial Study,” in the Proceedings of the International Conference on Advances in Computer Entertainment Technology 2008 (ACE 2008), Yokohama, Japan, 2008, pp. 309-312.

[4] T. Nandhabiwat and C. C. Fung. “Future E-Business Opportunities with Companion Robots,” in the Proceedings of the 7th International Conference on e-Business 2008 (INCEB 2008), Thailand: Rangsit University, 2008, pp. 192-198.

[5] T. Nandhabiwat and C. C. Fung. “An Analysis of Two Low Cost Humanoid Robots for the Demonstration of Thai Folk Dances,” in the Proceedings of the 9th Electrical Engineering and Computing Symposium (PEECS 2008), Western Australia: University of Western Australia, 2008, pp. 171-175.


TABLE OF CONTENTS

DECLARATION ii

ABSTRACT iii

ACKNOWLEDGEMENT vi

LIST OF PUBLICATIONS viii

TABLE OF CONTENTS ix

LIST OF TABLES xiv

LIST OF FIGURES xv

LIST OF ACRONYMS xix

CHAPTER 1 INTRODUCTION 1

1.1 Background of the Research 1

1.2 Research Problems and Questions 7

1.3 Research Objectives 8

1.4 Organization of the Thesis 8

CHAPTER 2 LITERATURE REVIEW 10

2.1 Robot 10

2.1.1 History and Evolution of Robots 10

2.1.2 Robot Applications 18

2.1.2.1 Exploration 18

2.1.2.2 Industry 19


2.1.2.3 Medicine 20

2.1.2.4 Military and Police 21

2.1.2.5 Entertainment and Service 23

2.1.2.6 Toys 27

2.1.3 Human-Robot Interaction (HRI) 29

2.1.4 Human-Robot Social Interaction (HRSI) 34

2.2 Dance 42

2.2.1 Dance Robots 42

2.2.2 Thai Dance 46

2.3 Entertainment Robots 49

2.4 WowWee and Robosapien Media 50

2.4.1 History of WowWee 50

2.4.2 Robosapien: Why Robosapien Media? 52

2.5 Speecys and SPC-101C 55

2.5.1 History of Speecys 55

2.5.2 Why Speecys SPC-101C? 56

2.6 Comparison of SPC-101C and RS Media 59

2.7 Physical Movements 61

2.8 Image Processing 63

2.9 Learning Theories 67

CHAPTER 3 RESEARCH STUDY 72

3.1 The Scope of Research 72

3.2 Research Planning and Methodology 74

3.2.1 Research Phases 75


3.2.2 Procedure 75

3.2.3 Assessment 76

3.3 The Application of Robosapien Media 77

3.4 The Application of Speecys 78

CHAPTER 4 DEVELOPMENT AND MEASUREMENTS 80

4.1 Software Design and Development 80

4.1.1 Structure Computation 81

4.1.1.1 Epipolar Geometry 81

4.1.1.2 Stereo Calibration 84

4.1.2 Color Detection 88

4.1.3 Correlation Between Robot Joints and Human Joints 93

4.1.4 Software Development 96

4.2 Implementation 97

4.2.1 Development of Thai Dance by iThaiSTAR 97

4.2.2 Development of Cheerleading Dance by iCHEER 100

4.2.3 Development of Yoga Exercises by iCHEER 102

4.2.3.1 Sitting Pose 102

4.2.3.2 Plane Pose 103

4.2.3.3 One Leg Pose 103

4.2.3.4 Mountain Pose 104

4.2.3.5 Forward Bend Pose 105

4.2.3.6 Dog and Cat Pose 105

4.2.3.7 Cobra Pose 106

4.2.3.8 Corpse Pose 107


4.2.3.9 Chaturanga Pose 108

4.2.3.10 Chair Pose 109

4.2.4 Development of Elderly Exercises by iCHEER 109

4.2.4.1 Arm Raise 110

4.2.4.2 Chair Stand 110

4.2.4.3 Bicep Curl 111

4.2.4.4 Shoulder Flexion 111

4.2.4.5 Knee Flexion 112

4.2.4.6 Hip Flexion 112

4.2.4.7 Hip Extension 113

4.2.4.8 Alternative Hamstring Stretch 114

4.2.4.9 Wrist Stretch 114

4.2.5 Camera Calibration Matrix 115

4.2.6 Projection Matrix 116

4.2.7 Camera Rotation and Translation 116

4.2.8 Linear Triangulation Method 117

4.3 Outcome and Evaluation 118

4.3.1 Initial Indication of Response 118

4.3.2 Evaluation 120

CHAPTER 5 RESULTS AND DISCUSSIONS 121

5.1 Results and Discussions 121

5.2 Problems and Limitations 122

5.3 Current Achievement 123

5.4 Plans for Future Work 123


CHAPTER 6 CONCLUSION 125

REFERENCES 127

APPENDICES 140

Appendix A: Codes of iCHEER Control Center 140

Appendix B: Codes of iCHEER Color Tracking 158

Appendix C: Codes for Camera Calibration 174

Appendix D: Codes for iThaiSTAR Thai Dance Routine 180

Appendix E: Public Exposure of iThaiSTAR and iCHEER 202


LIST OF TABLES

Table 2.1: Examples of roles and proximity patterns that arise in major application domains.

Table 4.1: Age distribution of respondents who returned the forms.

Table 4.2: Gender Distribution.

Table 4.3: Answers on “Is this the first time you see a dance robot?”

Table 4.4: Impression on the Dance Robot.

Table 4.5: Preference of Robot versus a Dog.

Table 4.6: Answer to question, “Would you like to have a robot in your house?”

Table 5.1: Main Movement Comparison: RS Media vs SPC-101C.


LIST OF FIGURES

Figure 2.1: Bristol tortoise.

Figure 2.2: Shakey.

Figure 2.3: General Electric Walking Truck.

Figure 2.4: Unimate Robot.

Figure 2.5: Isaac Asimov and Joe Engelberger.

Figure 2.6: “QRIO”, small biped entertainment robot, formerly known as SDR (Sony Dream Robot, SDR-4X II).

Figure 2.7: “HOAP-3” in action.

Figure 2.8: “ASIMO” in action.

Figure 2.9: Toyota Trumpet Playing Humanoid Robot.

Figure 2.10: “Odyssey IIb” and “Sojourner” exploration robots.

Figure 2.11: Industrial robots spot weld automobile bodies on an assembly line.

Figure 2.12: “ROBO DOC” used in medical operation.

Figure 2.13: Campbell Aird, Scottish hotel owner, fitted with the world's first bionic arm.

Figure 2.14: Miniature Emergency Response Vehicle (MERV), police remote control bomb-disposal robot.

Figure 2.15: “Cypher”, remote control helicopter for military surveillance, 6 feet in diameter with 50 mile range.

Figure 2.16: “BEAR”, a life-size humanoid robot that will search for and rescue people.

Figure 2.17: “Gort” from the movie “The Day The Earth Stood Still”.


Figure 2.18: Keepon dancing with a child.

Figure 2.19: Augmented Robosapiens playing soccer at RoboCup German Open 2005.

Figure 2.20: Cleanmate 365 QQ-2, a robotic vacuum cleaner.

Figure 2.21: MIT Postdoctoral associate Aaron Edsinger puts objects in a box held out to him by robot Domo.

Figure 2.22: Furby in different colours.

Figure 2.23: LEGO Mindstorms NXT.

Figure 2.24: Cobalt Blue “Aibo”.

Figure 2.25: Framework for classifying social robots.

Figure 2.26: Comparison between the robot dance motions developed by researchers and those developed by the dancer.

Figure 2.27: A ballroom dancing robot “MS DanceR”.

Figure 2.28: The 1.5-meter-tall robot “HRP-2 Promet”.

Figure 2.29: QRIO danced with kids.

Figure 2.30: Examples of Thai classical dance forms.

Figure 2.31: Examples of Thai folk-dance forms.

Figure 2.32: WowWee’s mass produced toys.

Figure 2.33: posed with Robosapien Media and .

Figure 2.34: Futaba RBT-1, a flexible 1-foot robot that does all the moves.

Figure 2.35: Robosapien Media with lying down sequences.

Figure 2.36: Speecys SPC-101C.

Figure 2.37: Control of SPC-101C.

Figure 2.38: RS Media and SPC-101C demonstrate an arm movement.

Figure 2.39: Image Processing.


Figure 3.1: Phases of this research project.

Figure 3.2: Conceptualization of the proposed research approach.

Figure 3.3: RS Media dances with a school kid.

Figure 4.1: Real-time Image Processing of iCHEER.

Figure 4.2: Epipolar Geometry.

Figure 4.3: Overview Program work flow.

Figure 4.4: Two images are captured from left and right cameras at the same time.

Figure 4.5: Current status of 3D reconstruction and the program for image capture from two webcams.

Figure 4.6: Color Object Tracking.

Figure 4.7: MS Visual Studio Software on Colour Tracking.

Figure 4.8: Simulation of Real-time Image Processing of Human to Point Cloud.

Figure 4.9: Moving Horizontal Axis to the Left.

Figure 4.10: Horizontal Axis is on the Left.

Figure 4.11: Simulation Movement of the Human’s Right Foot.

Figure 4.12: Simulation Movement of Human’s Right Hand.

Figure 4.13: Mean-Shift Algorithm.

Figure 4.14: Human Joints.

Figure 4.15: Human Joints Structure.

Figure 4.16: Robot Joints.

Figure 4.17: Main movements of “Tar Sod Soi Ma La”.

Figure 4.18: Main movements of “Tar Ram Sai”.

Figure 4.19: iCHEER Control Center.

Figure 4.20: iCHEER Demonstrated “Sitting Pose”.

Figure 4.21: iCHEER Demonstrated “Plane Pose”.


Figure 4.22: iCHEER Demonstrated “One Leg Pose”.

Figure 4.23: iCHEER Demonstrated “Mountain Pose”.

Figure 4.24: iCHEER Demonstrated “Forward Bend Pose”.

Figure 4.25: iCHEER Demonstrated “Dog and Cat Pose”.

Figure 4.26: iCHEER Demonstrated “Cobra Pose”.

Figure 4.27: iCHEER Demonstrated “Corpse Pose”.

Figure 4.28: iCHEER Demonstrated “Chaturanga Pose”.

Figure 4.29: iCHEER Demonstrated “Chair Pose”.

Figure 4.30: iCHEER Demonstrated Arm Raise.

Figure 4.31: iCHEER Demonstrated Chair Stand.

Figure 4.32: iCHEER Demonstrated Bicep Curl.

Figure 4.33: iCHEER Demonstrated Shoulder Flexion.

Figure 4.34: iCHEER Demonstrated Knee Flexion.

Figure 4.35: iCHEER Demonstrated Hip Flexion.

Figure 4.36: iCHEER Demonstrated Hip Extension.

Figure 4.37: iCHEER Demonstrated Alternative Hamstring Stretch.

Figure 4.38: iCHEER Demonstrated Wrist Stretch.

Figure 4.39: Perspective.


LIST OF ACRONYMS

2D 2 Dimensional

3D 3 Dimensional

AC/DC Alternating Current / Direct Current

ASIMO Advanced Step in Innovative Mobility

BEAR Battlefield Extraction-Assist Robot

CLDC Connected Limited Device Configuration

CCD Charge Coupled Device

COM Center of Mass

COP Center of Projection

CT Computed Tomography

DOF Degrees of Freedom

GB Gigabyte

HCI Human-Computer Interaction

HOAP-3 Humanoid Open Architecture Platform-3

HRI Human-Robot Interaction

HRP-2 Humanoid Project-2

IR Infrared

iThaiSTAR intelligent Thai Sanook Training-Assist Robot

iCHEER intelligent Companion Humanoid Entertainment and Education Robot

J2ME Java 2 Micro Edition

JRE Java Runtime Environment


JSR Java Specification Request

JTWI Java Technology for the Wireless Industry

KMUTT King Mongkut’s University of Technology Thonburi

LCD Liquid Crystal Display

MB Megabyte

MERV Miniature Emergency Response Vehicle

MIDP Mobile Information Device Profile

MIT Massachusetts Institute of Technology

MP3 MPEG-1 Audio Layer 3

MP4 Moving Picture Experts Group 4

MS DanceR Mobile Smart Dance Robot

NASA National Aeronautics and Space Administration

QRIO Quest for Curiosity

RS Robosapien

RSU Rangsit University

R.U.R. Rossum’s Universal Robots

SD Secure Digital

SDK Software Development Kit

SDR Sony Dream Robot

SE Standard Edition

SPECT Single-Photon Emission Computed Tomography

UV Ultraviolet


CHAPTER 1

INTRODUCTION

1.1 Background of the Research

The global rise of overweight and obesity has been attributed mainly to decreased physical activity as a consequence of changes in lifestyles, the nature of work, modes of transportation, and urbanisation (Wen & Rissel, 2008; Frank et al., 2004; Larsen et al., 2005; Popkin, 2009). Overweight and obesity have been found to be major risk factors for chronic diseases, including cardiovascular disease, diabetes, and some forms of cancer (World Health Organization, 2009).

While high levels of overweight and obesity have generally characterised developed countries such as the United States and Australia, the rates of overweight and obesity are increasing much faster in the developing world (Edwards & Roberts, 2009; World Health Organization, 2010; Popkin, 2004). In the case of a developing country like Thailand, the prevalence of obesity has doubled in the past two decades, and recent research indicates that the problem is no longer restricted to the higher socioeconomic group (Aekplakorn & Mo-Suwan, 2009; Lim et al., 2009; Bell et al., 2001). Moreover, while obese and overweight students in Thailand are more likely to consume a higher proportion of Western-style food, other factors such as a reduction in exercise frequency also affect the prevalence of obesity (Newman et al., 2008; Liu et al., 2007; Ma et al., 2006; Popkin, 2009).

This research is a study of the design, development and evaluation of digital content for low cost personal humanoid robots for social purposes such as entertainment, education and training. In addition, specific application domains in physical activities such as Thai dance, cheerleading dance, yoga and exercises for the elderly are demonstrated and assessed using two low cost personal robots.

During the past few years, advances in robotic technology and the availability of low cost off-the-shelf robots have created the opportunity for this investigation. While a wide range of robots in various forms is now readily available, the challenge is how to design and develop content that allows these robots to be used for specific purposes.

This research aims to develop physical movements as content for low-cost off-the-shelf robots acting as a supplement or assistant in training in various areas, and to develop software tools to evaluate their effectiveness. This will lead to a better understanding of, and contribution of knowledge to, the disciplines of Human-Robot Interaction (HRI), Human-Robot Social Interaction (HRSI), and socially interactive humanoid robots. The outcome of the project will benefit the robotics community by empowering and assisting it towards the implementation of companion humanoid entertainment and education robots.

Currently, robots are used in settings ranging from factory floors to dedicated tasks involving direct human interaction. The evolution of robots is expected to lead to the development of “personal robots”, similar to the development of personal computers during the past decades. However, most current industrial robots are heavy and bulky; in a way, this is no different from the situation with the early generations of computers. Today, computers are no longer perceived as machines accessible only to a privileged few. The reduction in physical size, increased computational power, and diversity of applications have led to a rapid pace of development in computer technology. For robots to achieve a similar level of accessibility or usability, safety and dependability issues must be resolved (Santis et al., 2008).

Nevertheless, robots are already available to provide physical assistance to humans in order to reduce fatigue and stress, increase human capabilities in terms of force, speed, and precision, and improve the quality of life in general. On the other hand, in robot-assisted tasks, the human provides the experience, common sense knowledge, understanding and decisions needed for the correct execution of tasks (Khatib et al., 1999). Hence, it is not expected that robots will replace humans in all tasks; rather, they will continue to evolve in their roles as assistants and companions.

It is expected that robots will increase their presence in human-centered environments in the near future (Gates, 2007). John Intini (2007) predicted that by 2017, robots will be capable of taking care of the elderly, cars will be self-driving, and houses will organise themselves and even be able to talk to humans directly. In 2004, a survey by the United Nations reported that “robots are set to become increasingly familiar companions in homes by 2007 and there will be almost 2.5 million entertainment and ‘leisure’ robots in homes which compares to 137,000 in 2004” (BBC News, 2004). A robotics survey conducted in 2002 by the United Nations (U.N. and I.F.R.R., 2002) grouped robotic technologies into three major categories, defined primarily through their application domains:

• Industrial robots have historically represented the vast majority of robotic development, with the majority of them deployed in the automotive industry, beginning with the automation of an entire Nissan plant in the 1990s. The average cost of an industrial robot decreased by 88.8% between 1990 and 2001; at the same time, U.S. labour costs increased by 50.8% (U.N. and I.F.R.R., 2002). These opposing trends continue to open up new opportunities for robotic devices to perform jobs that were previously done by humans. However, industrial robots tend not to interact directly with people. Research in this field focuses on techniques for rapid and optimal configuration of these robots.

• Professional service robots are the subject of a much less established field, but one growing at a much faster pace than industrial robotics. Just like industrial robots, professional service robots manipulate and navigate their physical environments. However, professional service robots assist people in the pursuit of their professional goals, largely outside industrial settings. Some of these robots operate in environments that are inaccessible or unsuitable for humans, such as cleaning up nuclear waste (Blackmon et al., 1999; Brady et al., 1998) or navigating abandoned mines (Thrun et al., 2003). Other robots have been used to assist in hospitals, such as the HelpMate® robot (King & Weiman, 1990), which transports food and medication, or surgical robotic systems used for assisting physicians in surgical procedures. Robot manipulators are also routinely used in chemical and biological labs, where they handle and manipulate substances (e.g., blood samples) with speeds and precisions that people cannot match; other work has investigated the feasibility of inserting needles into human veins through robotic manipulators (Zivanovic & Davies, 2000). Most professional service applications have emerged in the past decade and, according to the U.N. report (U.N. and I.F.R.R., 2002), 27% of all operational professional service robots operate underwater, 20% perform demolitions, 15% offer medical services, and 6% serve people in agriculture, for example milking cows (Reinemann & Smith, 2000). Military applications such as bomb defusal or disposal, search and rescue (Casper, 2002), and support of SWAT teams (Jones et al., 2002) are of increasing relevance (U.N. and I.F.R.R., 2002). According to the same report, the total operational stock of professional service robots in 2001 was 12,400, with 100% growth expected by 2005. The amount of direct interaction with people is much larger than in the industrial robotics field, because service robots often share the same physical space with people.

• Personal service robots have the highest expected growth rate. They were estimated to grow from 176,500 units in 2001 to 2,021,000 in 2005 – an increase of approximately 1,045%, or 11.45 times the 2001 stock. Personal service robots assist or entertain people directly in domestic settings or in recreational activities (Thrun, 2004). Examples include robotic vacuum cleaners, lawn mowers, receptionists, robot assistants for the elderly (Hirsch et al., 2000) and people with disabilities, wheelchairs, and toys (cf. Thrun, 2004). Many of these robots interact with people who have no special skills or training to operate a robot. These robots support humans in the house (NEC, 2008), improve communication between distant partners (Gemperle, DiSalvo, Forlizzi, & Yonkers, 2003) and serve as research vehicles for the study of human-robot communication (Breazeal, 2003b; Okada, 2001).
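As a check on that growth figure, the increase implied by the two U.N. estimates can be computed directly (a worked calculation added here for clarity, not part of the original report):

\[
\frac{2{,}021{,}000 - 176{,}500}{176{,}500} \times 100\% \approx 1{,}045\%,
\qquad
\frac{2{,}021{,}000}{176{,}500} \approx 11.45 .
\]

The often-quoted figure of 1,145% corresponds to the 2005 stock expressed as a percentage of the 2001 stock, rather than the percentage increase itself.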

It has been almost a decade since a number of studies utilised robots to train human users to exercise in the therapy area. In the study by Lum et al. (2002), training assisted by a robot was compared with conventional treatment of equal intensity and duration. The findings gave clear evidence of the potential of using robots for training purposes: the robot group had significantly greater improvements in performance relative to the control group. Moreover, findings from other studies that used a robot to train a human have also shown positive results (Lum et al., 2002; Patton & Mussa-Ivaldi, 2002; Kahn et al., 2006; Volpe et al., 2008). This is clearly a positive direction for research into areas other than therapy. This study therefore explores the opportunity to use low cost robots as a supplement for training human users.

At present, robotic toys and video games have become part of modern culture. Some years ago there was already evidence that video gamers were better trained in concentrating on several tasks at a time (Satyen and Ohtsuka, 2002). The reason can be found in the demands of games to perceive several different objects on screen and transform that perception into the right motor commands. Rosenberg, Landsittel, and Averch (2005) found concrete results indicating that test persons who achieved better results in video game training also acted more successfully in real life tasks demanding good eye-hand coordination. This conclusion was underlined by a study at Iowa State University, which revealed that surgeons who regularly spent time on video games made on average about one third fewer mistakes in laparoscopic surgeries. In other words, specific training on a video game console is appropriate for improving motor abilities.

On the other hand, entertainment or companion robots that interact with humans cannot entertain them in a repetitive way. Observations from the initial research show that people are entertained by a robot only for a short period of time. This is mostly due to the first impression of the robot’s actions being new to the human; once people start to notice the repeated pattern of actions, they lose interest in the robot. Another challenging problem for this research is therefore how to make the robot’s interactions more intelligent, with a continuous ability to adapt its own actions to a human partner in order to create a truly entertaining interaction regardless of time. Thus, this research will also mark a significant milestone in the development of entertainment robots.

1.2 Research Problems and Questions

This research poses the following research questions:

1: Is it feasible to use low cost off-the-shelf robots for education, entertainment and training?

2: Can low cost off-the-shelf robots be developed to facilitate training in physical exercises as a supplement to training by humans or other means?

3: Can a robot enhance and encourage physical activities?

4: How can the effectiveness of using a robot to facilitate participation in physical activities be assessed?

The research is based on the following hypotheses:

1: A low cost off-the-shelf robot can be programmed for applications such as training, companionship, edutainment and entertainment.

2: A low cost off-the-shelf robot facilitates training in physical exercises.

3: A robot can enhance physical activities.

4: The performance of participants who have attended a training session will be improved through the use of the off-the-shelf robot developed in this project.
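Hypothesis 4 implies a simple pre-test/post-test comparison, of the kind reported in the assessments later in the thesis. The following is a minimal illustrative sketch in Python (an assumption for illustration only; the improvement function, the 10-point scale and the scores are hypothetical, not the thesis’s actual data or analysis):

from statistics import mean


def improvement(pre: list[float], post: list[float]) -> list[float]:
    """Per-participant score change from pre-test to post-test."""
    if len(pre) != len(post):
        raise ValueError("pre and post must be paired per participant")
    return [after - before for before, after in zip(pre, post)]


# Hypothetical scores out of 10 for five participants (illustrative only).
pre_scores = [4.0, 5.5, 3.0, 6.0, 4.5]
post_scores = [6.5, 7.0, 5.0, 7.5, 6.0]

gains = improvement(pre_scores, post_scores)
print(f"mean gain: {mean(gains):.2f}")  # a positive mean gain supports hypothesis 4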

1.3 Research Objectives

The objectives of the research are as follows:

. To study the use of low-cost off-the-shelf humanoid robots in various application domains of education, entertainment and training. Two ready-made robots – the WowWee RS-Media and the Speecys SPC-101C – have been selected and used as research tools in this study.

. To study and assess the effectiveness of using robots to achieve the training objectives.

. To assess the enhancement of users’ performance, level of enjoyment, and motivation towards the interactive activities with the robot.

. To assess the effectiveness of the developed system.

1.4 Organisation of the Thesis

This thesis is organised into six chapters. Chapter One is the introduction; it provides the background of the research, the research problems and questions, and the hypotheses targeting the research objectives. The chapter also discusses the need for an integrated model and the context within which this study took place.

Chapter Two reviews learning theories, robot literature, human-robot interaction, physical movements, and image processing. In particular, it explores the history and evolution of robots, applications of robots, the development of dance robots and entertainment robots, the background of the Robosapien and Speecys SPC-101C robots, physical movements, and image processing.

Chapter Three focuses on the scope of this research following the objectives defined, including research planning and methodology. It explains the detailed features, specifications and applications of the Robosapien Media and Speecys SPC-101C used in this project, along with their limitations and the software packages bundled with the robots.

Chapter Four describes the development process, outcome, and evaluation. It also details how video processing is carried out when comparing the robot’s movements to the user’s movements for assessment purposes. The implementation of the physical movements for the various training routines is also covered.

Chapter Five reports the results and current achievements, and discusses problems and limitations faced during the development. Plans for future research are also discussed.

Chapter Six presents the conclusions of this study. In addition, contributions to academia and further applications are discussed.


CHAPTER 2

LITERATURE REVIEW

2.1 Robot

2.1.1 History and Evolution of Robots

The word “robot” comes from the 1921 play “R.U.R.” (Rossum’s Universal Robots) by the Czech writer Karel Capek; it derives from the Czech word “robota”, meaning forced labour (Capek, 1923). The word “robotics” also comes from science fiction: it first appeared in the short story “Runaround”, published in 1942 by Isaac Asimov. This story was later included in his famous book “I, Robot”. Asimov’s robot stories also introduced the three “Laws of Robotics” that he used throughout his robot-related works:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm, unless this would violate a higher order law.

2. A robot must obey orders given to it by human beings, except where such orders would conflict with a higher order law.

3. A robot must protect its own existence as long as such protection does not conflict with a higher order law.

Asimov (1950) later added one more law, the so-called “zeroth” law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

The Robot Institute of America (1979) defined a robot as: “A reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or specialised devices through various programmed motions for the performance of a variety of tasks.”

According to the Webster dictionary, a robot is “A machine in the form of a human being that performs the mechanical functions of a human being but lacks sensitivity; an automatic apparatus or device that performs functions ordinarily ascribed to human beings or operates with what appears to be almost human intelligence; a mechanism guided by automatic controls.”

One of the first robots in the real world was the “Clepsydra” or water clock, which was made in 250 B.C. It was created by Ctesibius of Alexandria, a Greek physicist and inventor. The earliest remote control boat was built by Nikola Tesla in 1898. Tesla is best known as the inventor of AC electric power, radio, induction motors, Tesla coils, and other electrical devices.

In 1948, Grey Walter’s “Elsie”, or “Bristol tortoise”, built at Bristol University, was the first electronic autonomous robot that could sense light and contact with external objects, and use these stimuli to navigate. It was recently restored by Owen Holland and is now fully operational (Holland, 2007; Walter, 1953; Arkin, 1998).


Fig. 2.1 Bristol tortoise (Davidbuckley.net, 2008).

From 1966 through 1972, “Shakey”, the first mobile robot that could think independently and interact with its surroundings, was developed by the Stanford Research Institute (SRI) in the US. Shakey could perform tasks that required planning, route-finding and the rearranging of simple objects. Life magazine referred to Shakey as the “first electronic person” in 1970. It is now on display at the Computer History Museum in Mountain View, California (Wickelgren, 1996; Darrach, 1970).

Fig. 2.2 Shakey (Robot Magazine, 2007).

Also during the same period, Ralph Moser at General Electric Corporation developed the “General Electric Walking Truck”, the first legged vehicle with a computer brain, for the US Army. It was a large (3,000 pound) four-legged robot that could walk at up to four miles an hour. The complicated coordination of movements within a leg, and between different legs during stepping, was controlled by a computer (cf. Wickelgren, 1996).

Fig. 2.3 General Electric Walking Truck (cf. Davidbuckley.net, 2008).

“Unimate” was the first industrial robot; it worked on a General Motors assembly line in 1961 (Nof, 1999). It was created by Devol and Joseph F. Engelberger in the 1950s. Together they started the world’s first robot manufacturing company, called “Unimation” (Mickle, 1961). Engelberger has been known as the “Father of Robotics”.


Fig. 2.4 Unimate Robot (Boots & Sabers, 2008).

Fig. 2.5 Isaac Asimov and Joe Engelberger (Thompson, 1997).

Among the most advanced humanoid robots developed so far is the 58 cm tall “QRIO” (Quest for Curiosity), developed by Sony in 2003 (Fig. 2.6). It contains three CPUs, has 38 degrees of freedom (DOF), and has plentiful talents, such as rich dancing and singing performances, smooth biped walking and memory-based dialogues with humans (Behnke et al., 2006). Unfortunately, Sony discontinued the development of QRIO and it was never sold as a product. Nonetheless, it was recorded in the 2005 Guinness World Records book as the fastest running biped humanoid robot, with a speed of 23 cm per second (Scienceinaction.info, 2007).


Fig. 2.6 “QRIO”, small biped entertainment robot, formerly known as SDR (Sony Dream Robot, SDR-4X II) (SONY, 2003).

Unlike QRIO, “HOAP-3” (Humanoid Open Architecture Platform-3), which has 28 DOF and was developed by Fujitsu (JTSC, 2007), has been sold to laboratories at a cost comparable to the price of a car (Fig. 2.7). Since the sale of the first HOAP-1 in 2001, the HOAP series of humanoid robots has targeted the mechanical engineering departments of universities as well as robotics research areas, particularly the controls engineering specialty, for research and development. HOAP-3 provides portability, as it measures only 60 cm in height and weighs a mere 8.8 kg, the same as its predecessor HOAP-2, which has been on the market since 2003. HOAP-3 was developed with strengthened cooperation functions, such as a communication function, an image recognition function and a new exterior. HOAP-3 can be used for research on communication with robots, such as dialogue with people. Furthermore, the actuators of the head and arms include a distance sensor, a grip sensor and more powerful motion expression. The robot now comes with standardised vocal recognition, voice synthesis, image recognition and wireless communication.


Fig. 2.7 “HOAP-3” in action (cf. JTSC, 2007).

A taller humanoid, “ASIMO” (Advanced Step in Innovative Mobility), has been developed by Honda (Honda, 2007). Its most recent research version has 34 DOF and a height and weight of 130 cm and 54 kg, respectively (Fig. 2.8).

In 1986, Honda commenced its humanoid robot research and development program. Keys to the development of the robot included “intelligence” and “mobility.” Honda began with the basic concept that the robot “should coexist and cooperate with human beings, by doing what a person cannot do and by cultivating a new dimension in mobility to ultimately benefit society.” This provided a guideline for developing a new type of robot that would be used in daily life, rather than a robot purpose-built for special operations.

As exemplified by P2 and P3, the two-legged walking technology developed by Honda represents a unique approach to the challenge of autonomous locomotion. Using the know-how gained from these prototypes, research and development began on new technology for actual use. ASIMO represents the fruition of this pursuit. ASIMO was conceived to function in an actual human living environment in the near future. It is easy to operate, has a convenient size and weight and can move freely within the human living environment, all with a people-friendly design.

Fig. 2.8 “ASIMO” in action (cf. Honda, 2007).

Yet another humanoid robot, of the same size as ASIMO and with the ability to play the trumpet, has been developed by Toyota Motor Corporation (Toyota, 2007) to function as a personal assistant for humans. Toyota wants its partner robots to have human characteristics, such as being agile, warm and kind, and also to be intelligent enough to skilfully operate a variety of devices in the areas of personal assistance, care for the elderly, manufacturing, and mobility. Furthermore, Toyota developed artificial lips that move with the same finesse as human lips, which, together with the robots’ hands, enable the robots to play trumpets as humans do (Fig. 2.9).

Through the expanded development of its driving control technologies for automobiles, Toyota came up with new stabilising technologies for robots. Small, lightweight, low-cost, high-precision sensors, developed based upon automotive sensor technology, are used as attitude sensors that detect the tilt of a robot.


Fig. 2.9 Toyota Trumpet Playing Humanoid Robot (cf. Toyota, 2007).

2.1.2 Robot Applications

Adam Currie (2007) has classified the applications of robots into the following areas:

2.1.2.1 Exploration

Some people are required to work in places or environments that are dangerous or inaccessible, such as outer space, deep forest, or the deep ocean. When people cannot go there themselves, they can send remotely controlled robots. The robots are able to carry cameras and other instruments, collect information and send it back to their human operators. In Fig. 2.10, the “Odyssey IIb” submersible robot is shown suspended in a tank. It was developed by research scientists at M.I.T. for ocean exploration. The inset of Fig. 2.10 shows the “Sojourner” microrover robot being repaired at the Jet Propulsion Labs. Sojourner landed on the surface of Mars on July 4, 1997 (Suplee, 1997).


Fig. 2.10 “Odyssey IIb” and “Sojourner” exploration robots (cf. Suplee, 1997).

2.1.2.2 Industry

For certain kinds of jobs, robots can accomplish tasks faster and more consistently than humans. Robots do not need to be paid, eat, drink, take breaks, or go to the bathroom like people. They can do repetitive work that is absolutely boring to people, and they will not stop, slow down, or fall asleep like a human (Fig. 2.11).

Fig. 2.11 Industrial robots spot weld automobile bodies on an assembly line (cf. Suplee, 1997).

2.1.2.3 Medicine

In some complicated surgeries, doctors have to use a robot. A human would not be able to drill a hole exactly one hundredth of an inch wide and deep; in such tasks, robots can do the job much faster and more accurately than a human can. A robot can also be more delicate than a human. Moreover, doctors use tiny robots to traverse a human’s blood vessels for certain kinds of heart surgery. Fig. 2.12 shows “ROBO DOC”, a modified industrial robot that drills a precise hole in the thigh bone of a skeleton (cf. Suplee, 1997).

Fig. 2.12 “ROBO DOC” used in medical operation (Visual Journal, 2007).

Some doctors and engineers are also developing prosthetic (bionic) limbs that use robotic mechanisms. Dr. David Gow, of the Prosthetics Research and Development Team at Princess Margaret Rose Orthopaedic Hospital, made the first bionic arm, called the Edinburgh Modular Arm System (EMAS), in 1998 (Fig. 2.13) (Electronics Times, 1998; BBC News, 1998).


Fig. 2.13 Campbell Aird, fitted with the world's first bionic arm (The Irish Times, 1998).

2.1.2.4 Military and Police

Police need certain types of robots for bomb disposal and for bringing video cameras into dangerous areas where a human police officer might get hurt or killed (Fig. 2.14).

Fig. 2.14 Miniature Emergency Response Vehicle (MERV), police remote control bomb-disposal robot (Currie, 2007).

The military also uses robots for locating and destroying mines on land and in water, entering enemy bases to gather information, spying on enemy troops, and rescuing injured soldiers on missions (Fig. 2.15).

Fig. 2.15 “Cypher”, remote control helicopter for military surveillance, 6 feet in diameter with 50 mile range (cf. Suplee, 1997).

Vecna Technologies’ Cambridge Research Laboratory developed “BEAR” (Battlefield Extraction-Assist Robot), an extremely strong and agile robot the size of a human. It is built to rescue humans from dangerous areas and carry them back to safety. Moreover, in situations where speed over level terrain is of maximum importance, the BEAR can balance upright and move quickly on wheels (Fig. 2.16) (Atwood & Klein, 2007).

Fig. 2.16 “BEAR”, a life-size humanoid robot for search and rescue (cf. Atwood & Klein, 2007).

2.1.2.5 Entertainment and Service

At first, robots were just for entertainment, but as better technology became available, real robots were created. Many robots are still seen on television, for example in Star Trek, and in movies such as The Day the Earth Stood Still (Fig. 2.17), Forbidden Planet, Lost in Space, Blade Runner, and Star Wars. These imaginary robots do a lot of things that real robots are not yet capable of doing. Some robots in movies and in video and computer games are made to attack people, but in real life they cannot hurt people unless they are programmed to do so.

Fig. 2.17 “Gort” from the movie “The Day The Earth Stood Still” (cf. Currie, 2007).

The concept of an entertainment robot can also be thought of as one that entertains people, and not just as one seen entertaining on television. A good example can be found in research on the dancing robot “Keepon” (Fig. 2.18), developed by Marek Michalowski, a Ph.D. student in the Robotics Institute at Carnegie Mellon University, and Hideki Kozima, a researcher at Japan’s National Institute of Information and Communications Technology. They are studying dance-oriented nonverbal play between children and Keepon, a small creature-like robot developed for emotional and attentional interaction with children. Also being developed is an architecture that allows the robot to perceive, model, and generate social rhythms, synchronising the robot’s behaviours with a human’s. Keepon is not being used explicitly for entertainment, but to develop technologies and methodologies for human-robot interaction incorporating the rhythmic properties of human interactive behaviour. Their study demonstrated that the rhythmic synchrony of the robot’s movements with the music and the children had an effect on the quality of interactions and the rhythmic behaviours of children. Furthermore, the analysis and observations suggested that music provided a powerful environmental cue for the negotiation of rhythmic behaviour relative to that of the robot, and that the robot’s responsiveness to people’s behaviours positively affected their engagement with the robot (LERN, 2007; Michalowski et al., 2006; Michalowski et al., 2007).

Fig. 2.18 Keepon dancing with a child (cf. LERN, 2007).

RoboCup, a soccer league for humanoid robots, can be thought of as a kind of entertainment for people. Behnke et al. (2006) demonstrated the use of multiple augmented Robosapiens to obtain a team of soccer-playing humanoid robots; each robot was fitted with a Pocket PC and a colour camera to make it autonomous (Fig. 2.19). Because humanoid robots have not yet been ready for full soccer games, the robots had to demonstrate their capabilities by solving a number of subtasks. In the Humanoid Walk they had to walk towards a pole, turn around it, and come back to the start; scoring was based on walking speed and stability. In the penalty kick competition two robots faced each other: while one robot tried to score a goal, the other defended. The teams which participated in the Humanoid League chose very different robot platforms. Most teams constructed their own robots. A few teams used expensive humanoid robots developed by Japanese industry, for example the Fujitsu HOAP-2 or Honda ASIMO. Some teams purchased servo-driven commercial robots or robot kits.

Fig. 2.19 Augmented Robosapiens playing soccer at RoboCup German Open 2005 (cf. Behnke et al., 2006).

Service robots are increasingly becoming popular for households. One of many popular robotic products is the autonomous vacuum cleaner, such as the Cleanmate 365 QQ-2 from Infinuvo (Fig. 2.20). For busy people, the QQ-2 can definitely assist with cleaning floors. The QQ-2 has numerous sensors that allow it to navigate around rooms and keep it from falling down stairs. It can also find its own way back to its charging station. A UV lamp attached to disinfect areas, and a compartment for distributing fragrance as it vacuums a room, are its unique features (Mitsuoka, 2007).

Fig. 2.20 Cleanmate 365 QQ-2, a robotic vacuum cleaner (cf. Mitsuoka, 2007).

Researchers at the Massachusetts Institute of Technology are working on a new generation of robot assistants that can adapt to new surroundings and learn on the job. To this end, they have created a prototype household helper named “Domo” (Fig. 2.21). Domo is equipped with face recognition software, as well as skin that feels when it is touched. This allows its human operators to guide it by touch, an important feature when a robot interacts with homeowners instead of laboratory scientists. One task Domo is learning is to take an object from a human and place it on a shelf; this simulates its role as a household helper for an elderly or infirm person. The original work on Domo was funded by NASA, and the project is now supported by Toyota, which is interested in developing partner robots for the home (Trafton, 2007).


Fig. 2.21 MIT Postdoctoral associate Aaron Edsinger puts objects in a box held out to him by robot Domo (cf. Trafton, 2007).

2.1.2.6 Toys

New robot technology is producing interesting types of toys that children like to play with. In 1998, the “Furby” toy (Fig. 2.22) phenomenon was fueled by cutting-edge technology that had never been used previously in a toy. It was a very popular robotic toy, especially as a Christmas present, and over 40 million Furby creatures were sold worldwide. Furby is a creature that people can talk to, and it will respond by showing its feelings and talking back in many languages, including Furbish (Hasbro, 2007).

Fig. 2.22 Furby in different colours (cf. Currie, 2007).


Also in the same year the “LEGO MINDSTORMS” robot construction kit was launched. It has become the best-selling product in the LEGO Group’s history. The kit, which was developed by the LEGO Company with M.I.T. scientists, provides endless opportunities for robotics fanatics and LEGO builders to build and program robots that do what they want (Fig. 2.23) (LEGO, 2007).

Fig. 2.23 LEGO Mindstorms NXT (cf. LEGO, 2007).

“Aibo” is a robotic dog developed by Sony Corporation (Fig. 2.24). On its debut, Aibo was both an early use of Sony technologies, such as the Memory Stick and its proprietary embedded operating system, and a showcase of advanced robotics technology from the company’s research and development labs. Over time, the dog became more sophisticated: the latest version was able to speak 1,000 words; react (in theory) appropriately to an owner’s commands and motions; keep blogs, complete with pictures taken by cameras behind its eyes; and play music. According to a company representative, more than 150,000 had been sold since they went on the market in 1999 (Borland, 2006).


Fig. 2.24 Cobalt Blue “Aibo” (SONY, 2007).

2.1.3 Human-Robot Interaction (HRI)

The emerging field of human-robot interaction brings together research and methodologies from Robotics, Human Factors, Human-Computer Interaction (HCI), Interaction Design, Cognitive Psychology, Education and other disciplines to enable robots to have more natural and more rewarding interactions with humans throughout their areas of operation. Although much of the research conducted in HCI and Human Factors offers rich resources for the study of human-robot interaction, and much of that work is also applicable to robot development, according to Kiesler and Hinds (2004) autonomous robots are a distinctive case for three main reasons.

First, autonomous robots are perceived by people differently from most other computer technologies. Friedman et al. (2003) stated that people’s mental models of autonomous robots are often more anthropomorphic than their models of other systems. Scholl and Tremoulet (2000) stated that the tendency for people to anthropomorphise may be fed, in part, by science fiction and, in part, by the powerful impact of autonomous movement on perception. When autonomous robots are built to look human, anthropomorphic mental models of these systems may be encouraged. As Hinds et al. explained, some robotic scientists argue that humanoid robots provide a more natural interface than more mechanistic robots. Thus, humanoid robotics is the focus of most research and development work.

The second reason autonomous robots are a distinctive case in HCI is that robots are more likely to be completely mobile, bringing them into physical proximity with other robots, people, and objects. Mobile robots have to negotiate their interactions in a dynamic, sometimes physically challenging, environment.

The last reason is that these robots are capable of making decisions. In other words, they ‘learn’ about themselves and their world, and they exert at least some control over the information they process and the actions they emit. Certainly, many computer agents in desktop, automotive, and other computer systems can also make decisions. Autonomous robotic systems add even more complexity because they must adjust their decisions sensibly and safely to the robot’s abilities and to the options available to the robot in a given environment. Moreover, the system needs to detect and respond to changes in the environment and its users. For instance, suppose a humanoid robot that assists an elderly person in a house detects that the person is ill or that the house is on fire: what action should the robot take, and how certain of itself should it be? As these questions suggest, designing an appropriate interaction plan and interface for such a system requires an understanding of the people who will use the system, and of their world. Furthermore, Hofmann and Williams (2007) also clarified that effective human-robot cooperation requires robotic devices that understand human goals and intentions. These devices must have the capability to track and predict human intention and motion within the context of overall plan task sequences, based on a variety of sensor inputs.

Hence, the aforementioned issues should be taken into consideration and carefully addressed in HRI research.

Many researchers in robotics now recognise that the rapidly developing field of “human-robot interaction” (HRI) plays a pivotal role in personal service robotics (Rogers & Murphy, 2001). In addition, HRI has become a primary concern in the majority of robotic applications (Thrun, 2004). There are many definitions of HRI, some broad and some comprehensive. For example, while Fong et al. (2002) defined HRI as the study of humans, robots, and the ways they influence each other, Goodrich and Schultz (2007) defined it as a field of study dedicated to understanding, designing, and evaluating robotic systems for use by or with humans.

Interaction, by definition, requires communication between robots and humans. Communication between a human and a robot may take several forms, but these forms are largely influenced by whether the human and the robot are in close proximity to each other or not. Thus, according to Goodrich and Schultz (2007), communication and interaction can be separated into two general categories:

• Remote interaction: The human and the robot are not co-located and are separated spatially or even temporally. For example, the Mars Rovers are separated from Earth both in space and time.

• Proximate interaction: The humans and the robots are co-located. For example, service robots may be in the same room as humans.

For example, Table 2.1 classifies the most frequent types of interactions for the home and edutainment application domains, among other major ones. For many of these domains, current research patterns exhibit a trend away from remote interactions toward proximate interactions, and away from operator roles toward peer or mentor roles (cf. Goodrich and Schultz, 2007).

Application Domain | Interaction Type | Role | Example
Search and Rescue | Remote | Human is supervisor or operator | Remotely operated search robots
Search and Rescue | Proximate | Human and robot are peers | Robot supports unstable structures
Assistive Robotics | Proximate | Human and robot are peers, or robot is tool | Assistance for the blind, and therapy for the elderly
Assistive Robotics | Proximate | Robot is mentor | Social interaction for autistic children
Military and Police | Remote | Human is supervisor | Reconnaissance, de-mining
Military and Police | Remote or Proximate | Human and robot are peers | Patrol support
Military and Police | Remote | Human is information consumer | Commander using reconnaissance information
Edutainment | Proximate | Robot is mentor | Robotic classroom assistant
Edutainment | Proximate | Robot is mentor | Robotic museum tour guide
Edutainment | Proximate | Robot is peer | Social companion
Space | Remote | Human is supervisor or operator | Remote science and exploration
Space | Proximate | Human and robot are peers | Robotic astronaut assistant
Home | Proximate | Human and robot are peers | Robotic companion
Home | Proximate | Human is supervisor | Robotic vacuum
Industry | Remote | Human is supervisor | Robot construction

Table 2.1 Examples of roles and proximity patterns that arise in major application domains.

Moreover, Nakajima et al. (2003, 2004) considered personality to be another key factor in HRI, arguing that the robot’s personality should match that of the human user (cf. Nakajima et al., 2003). Later, Tapus et al. (2008) defined personality as “the pattern of collective character, behavioral, temperamental, emotional and mental traits of an individual that have consistency over time and situations.”

Also, Lund (2003) pointed out that many robotic toy systems invest less in artificial intelligence (AI) capabilities than in their entertainment functionality. This remark motivated the proposed research to address the issue that most off-the-shelf robots lack the AI capabilities needed to serve as personal service robots.

Pransky (2004) has provided an interesting perspective on the different profiles a future robot companion could take, which in turn highlights the advantages and weaknesses of such a companion. The ‘Robotic Nanny’ would on the one hand play with children and feed them, but on the other hand could lead to a child not having any human interaction and viewing robot interaction as the ‘norm’. A ‘Robotic Assistant/homework companion’ would be able to organise meetings and research, and track documents, but could lead to the feeling that robot interaction is easier than human interaction. Finally, the ‘Robotic Butler/Maid’ could do all the housework, but may well cause relationship difficulties at home by being too efficient and making one feel redundant.

2.1.4 Human-Robot Social Interaction (HRSI)

This research aims at the development and evaluation of digital content for humanoid robots for social purposes; it is therefore necessary to understand Human-Robot Social Interaction (HRSI), or Socially Interactive Robotics (SIR), also known as “Social Robotics” (Breazeal, 2004), a subfield of HRI. In recent years, HRSI has attracted considerable attention from academic and research communities (Salichs, 2006). Fong et al. (2003) described socially interactive robots as ones “for which social interaction plays a key role . . . [in order] to distinguish these robots from other robots that involve ‘conventional’ human-robot interaction, such as those used in teleoperation scenarios”, characterising them as follows: “A socially interactive robot may express and/or perceive emotions, communicate with high-level dialogue, learn and/or recognise models of other agents, establish and maintain social relationships, use natural cues (gaze, gestures, etc.), exhibit distinctive personality and character, and learn or develop social competencies.”

In addition to defining HRSI or SIR, Feil-Seifer and Matarić (2005) further expanded it to include Socially Assistive Robotics (SAR), to better capture the key challenges of this growing field. SAR focuses on aiding through social rather than physical interaction between the human user and the robot.

SAR is the intersection of SIR and Assistive Robotics (AR): it shares with AR the goal of providing assistance to human users, but specifies that the assistance is given through social interaction. Because of this emphasis on social interaction, SAR has a similar focus to SIR. In SIR, the robot’s goal is to develop close and effective interactions with the human for the sake of the interaction itself. In contrast, in SAR, the robot’s goal is to create close and effective interaction with a human user for the purpose of giving assistance and achieving measurable progress in convalescence, rehabilitation, learning, and so on.

In addition, Syrdal et al. (2007) noted that for a robot to operate successfully in human-centred environments, it needs to be able to behave in a socially appropriate manner. Furthermore, Fong et al. (2002) emphasised that human interaction and the robot’s autonomy are the key functions that can spread the use of social robots in human daily life. Nowadays most available robots can interact only with their creators or with a small group of specially trained individuals. The long-term goal of most robotic research is to develop a social robot that can interact with humans and participate in human society. In this case, the human role during interaction will evolve from operator or supervisor to peer or teammate. Such a robot must have effective and natural interfaces, together with a high level of autonomy that allows it to cope with different situations. The interaction can be social if the robots are able to interact with humans as partners, if not peers; this requires providing humans and robots with models of each other. Sheridan (1992) argues that the ideal would be analogous to two people who know each other well and who can pick up subtle cues from one another (e.g., musicians playing a duet). In addition, an adaptive personalised robot companion (Dautenhahn, 2004) must also be capable of adapting to the individual needs and preferences of its users.

Salichs (2006) viewed a social robot as “having attitudes or behaviours that take the interests, intentions or needs of the humans into account.” In addition, Bartneck & Forlizzi (2004) proposed the following definition: “A social robot is an autonomous or semi-autonomous robot that interacts and communicates with humans by following the behavioural norms expected by the people with whom the robot is intended to interact.” This definition does not say what humans’ normal expectations of socially interactive robots would actually be. Moreover, they stated that autonomy is a requirement for a social robot: a semi-autonomous robot can be defined as social if it communicates an acceptable set of social norms, whereas a completely remote-controlled robot cannot be considered social, since it does not make decisions by itself; it is merely an extension of another human.

Communication and interaction with humans is a critical point in this definition. Therefore, robots that only interact and communicate with other robots would not be considered social robots. The interaction is likely to be cooperative, but need not be; uncooperative behaviour can also be considered social in certain situations. The robot could, for example, exhibit competitive behaviour within the framework of a game. The robot could also interact with minimal or no communication; it could, for example, hand tools to an astronaut working on a space station (Goza et al., 2004).

The term sociable robot was coined by Breazeal (2003b) to distinguish an anthropomorphic style of HRI from insect-inspired interaction behaviours. In this context, sociable robots can be considered a distinct subclass of social robots. She defines sociable robots as socially participative creatures with their own internal goals and motivations. To develop a social or sociable robot, many considerations must be taken into account. The required key features consist of form, modality, social norms, autonomy, and interactivity (Bartneck & Forlizzi, 2004; Breazeal, 2003b), as shown in Fig. 2.25.

Fig. 2.25 Framework for classifying social robots along five dimensions: form (abstract, biomorphic, anthropomorphic); modality (unimodal, multimodal); social norms (no knowledge, minimal knowledge, full knowledge); autonomy (no autonomy, some autonomy, fully autonomous); and interactivity (no causal behaviour, some causal behaviour, full causal behaviour).

As an interdisciplinary field, HRSI synergistically integrates robotics, artificial intelligence, cognitive science, psychology and other fields such as linguistics and ergonomics, in order to improve the naturalness of human-robot interaction (cf. Salichs, 2006). Many robotic platforms have been built with different design considerations and capabilities to study HRSI. Kismet, for example, is an expressive anthropomorphic robot head developed at MIT with perceptual and motor modalities tailored to natural human communication channels (Breazeal, 2003b); it has been designed to support research into social interactions between robots and humans. Like Kismet, Sparky is a social robot that uses both facial expression and movement to interact with humans (Scheeff, 2002). RUBI is another anthropomorphic robot, with a head and arms, designed for research on real-time social interaction between robots and humans (Fortenberry et al., 2004). Robota is a sophisticated educational toy robot designed to build human-robot social interactions with children with motor and cognitive disabilities (Billard, 2003). In the Lino project, a robot head with a nice, cute appearance and emotional feedback can be configured in such a way that the human user enjoys the interaction and will more easily accept possible misunderstandings (van Breemen et al., 2003). Instead of using mechanical actuation, other social robot projects rely on computer graphics and animation techniques. Vikia, for example, has a 3D-rendered face of a woman, which permits many degrees of freedom for generating expressions (Bruce et al., 2002). Valerie the Roboceptionist (Gockley et al., 2005), GRACE (Graduate Robot Attending a ConferencE) and George (Gockley et al., 2004) are other computer-graphics-based social robots, with expressive faces on panning platforms as well as large arrays of sensors. Valerie has been developed to investigate long-term human-robot relationships and is able to continually attract and engage many visitors on a daily basis over a long period (cf. Gockley et al., 2005). All these projects aim to develop robots that function more naturally and can be considered partners for humans rather than mere tools. These robots need to interact with humans (and perhaps with each other) in ways similar to those in which humans interact with each other. To achieve this goal, many novel interfaces are currently being developed to allow humans to move seamlessly between different modes of interaction, from visual to voice to touch, according to changes in context or user preference (cf. Salichs, 2006).

Robots currently commercially available for use in a domestic environment with human-like interaction features are often oriented towards toy or entertainment functions (Walters, 2008). In the future, a robot companion that is to find a more generally useful place within a human-oriented domestic environment (e.g. sharing a private home with a person or family) must satisfy two main criteria (Dautenhahn et al., 2005; Syrdal et al., 2006; Woods et al., 2007):

Technical capabilities: it must be able to perform a range of useful tasks or functions.

Social abilities: it must carry out these tasks or functions in a manner that is socially acceptable, comfortable and effective for the people it shares the environment with and interacts with.

The technical challenges in getting a robot to perform useful tasks are formidable, and many researchers (cf. Walters, 2008) are currently investigating the technical capabilities required to perform useful functions in a human-inhabited environment (e.g. navigation, manipulation, vision, speech, sensing, safety, integration, planning, etc.). The second criterion is arguably equally important, because people may reject a robot whose behaviour is annoying, irritating, unsettling or even frightening.

Goetz et al. (2003) investigated issues of robot appearance, behaviour and task domains, and Severinson-Eklundh et al. (2003) documented an HRI trial on the human perspective of a robotic assistant over several weeks. Scopelliti et al. (2005) analysed people’s views of domestic robots in order to develop an initial design specification for servant robots. Kanda et al. (2004) presented results from a longitudinal HRI trial with a robot as a social partner and peer tutor aiding children learning English.

It is possible to conceive of a robot that is very sociable but not very effective or useful (or vice versa). For example, Kanda et al. (2007) studied a robot that exhibits social cues so that people have the impression it listens and understands them as they ask for route directions; however, the robot did not comprehend speech, so the human users did not actually gain any useful help from their questions. Breazeal & Scassellati (1999) presented Kismet, a socially interactive robot head with a cartoon-like appearance, which exhibits social cues and behaviour in order to gain interaction and attention from humans. This engagement was an end in itself, and there was no direct reward for the human partner beyond entertainment or diversion value.

The study of socially interactive robots is relatively new and experimenters in the field commonly use existing research into human-human social interactions as a starting point (Walters, 2008). In a recent HRI study, Walters (2008) believed that

“Robots are perceived by humans in a social way and therefore that humans will respond to robots in a social way. There may be some similarities with the ways that humans respond socially to a pet, another human, or a child or infant. However, while the aim of many robot designers is to create robots that will interact socially with humans, it is probable that humans will not react socially to robots in exactly the same way that they would react to another human. It is probable that a number of factors including robot appearance and behaviour, proximity, task context and the human user's personality will all potentially affect humans' social perceptions of robots. However, as a practical necessity for the first stage in the experimental research, the number of factors under investigation were limited.”

Furthermore, Walters (2008) indicated that proxemics is also an important aspect of HRSI, primarily due to the physical embodiment of robots, and also because evidence from virtual environments and existing HRI research indicated that “humans respected personal space with regard to both IVE animated agents and robots.” The term “proxemics” was introduced by Edward Hall. In his book The Hidden Dimension (Hall, 1966), Hall primarily viewed proxemics as the study of differences between cultures. As a first step in investigating how humans perceive and respond to social robots, some of the early experiments into comfortable human approach distances were replicated (Horowitz et al., 1964; Sommer, 1969; Stratton et al., 1973) with a robot in place of a human interactor. Humans were invited to approach a robot to a distance they perceived as comfortable, in order to see whether people respected the interpersonal spaces of robots; the same experiment was also performed with robots approaching humans. The broad categories of generally recognised interpersonal social space zones between humans are derived from Hall (1968) and Hall (1974). The research hypothesis advanced for empirical testing in these initial HRI trials was that human-robot interpersonal distances would be comparable to those found for human-human interpersonal distances (cf. Hall, 1968).
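To make these zone categories concrete, the sketch below (in Java, the language used elsewhere in this research for robot programming) classifies a measured human-robot approach distance into Hall's interpersonal zones. The metre thresholds are the values commonly quoted for Hall's zones; the thresholds actually used in any particular HRI trial are an assumption here.

// Minimal sketch: classifying a measured human-robot approach distance
// into Hall's (1966) interpersonal zones. Zone boundaries follow the
// commonly cited values (in metres); trial-specific thresholds may differ.
public class ProxemicZones {

    enum Zone { INTIMATE, PERSONAL, SOCIAL, PUBLIC }

    // Returns the Hall zone for a distance given in metres.
    static Zone classify(double metres) {
        if (metres < 0.45) return Zone.INTIMATE;
        if (metres < 1.2)  return Zone.PERSONAL;
        if (metres < 3.6)  return Zone.SOCIAL;
        return Zone.PUBLIC;
    }

    public static void main(String[] args) {
        // e.g. a participant stops 0.8 m from the robot
        System.out.println(classify(0.8)); // prints PERSONAL
    }
}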

Currently, HRI has many challenge problem areas, including assistive robotics, where the key attributes are the proximity and vulnerability of the human in the interaction, plus the potential for interactions to evolve in unanticipated patterns. Humanoid robotics is another challenge area, both in terms of engineering human-like movements and expressions, and in terms of the challenges that arise when a robot takes a human form; with such a form, social and emotional aspects of interaction become paramount. Likewise, natural language interaction is a challenge problem, not only because it requires sophisticated speech recognition and language understanding, but also because it inevitably includes issues of mixed-initiative interaction, multi-modal interaction, and cognitive modelling (cf. Goodrich and Schultz, 2007).

2.2 Dance

Dance is defined in this thesis as a series of actions involving movement of the whole or part of the body, synchronised or unsynchronised, with or without a musical rhythm. According to Webster’s dictionary, dance is “an artistic form of nonverbal communication” or “taking a series of rhythmical steps (and movements) in time to music.” Similarly, Shinozaki et al. described dance as “one of the most sophisticated nonverbal communication methods” and regarded it as a real-time type of entertainment. For humans, dance activates the body and brain, providing exercise and relaxation, and thus contributes to mental health. For a robot, on the other hand, dance offers no relaxation, but instead creates a strong motivation for humans to exercise and move along.

2.2.1 Dance Robots

Conducting research on a dance robot introduces a new type of communication between humans and robots. Researchers must therefore acknowledge and understand the relationships among human, robot, and entertainment in a dance robot system. The role of robots in dance entertainment is to allow humans to become both entertainers and spectators: spectators when watching a robot dance with its own autonomous movements and interactive capabilities (classified as real-time entertainment), or entertainers when showing the audience a robot built with a pre-programmed sequence of dance motions (classified as non-real-time entertainment). Scenarios for an interactive dance robot in real-time entertainment include changing the dance in response to audience requests, or sensing the audience’s mood and adapting its dancing behaviours to reflect the sensed results. An ideal dance robot would provide flexible entertainment ranging between real-time and non-real-time entertainment (Shinozaki et al., 2007).

Moreover, Michalowski and Kozima (2007) noted that general ideas of turn-taking in conversation are widely implemented, but fine-grained rhythmic perception and synchrony by a robot have been difficult to develop; a dance robot should be able to dance not only to the music, but also with a human dance partner in a coordinated way. They demonstrated this with their robot “Keepon” and a software architecture that extracted rhythm from auditory and visual sensor data. This system was novel both in abstracting rhythmic perception from any particular modality and in the agility and quality of its movement. In existing research projects, rhythmic interaction is generally based on auditory cues and limited perception of embodied movement. However, dance robots should be able to “tune in” to the bodily rhythms of their interaction partners as well as generate such nonverbal behaviours (e.g. nodding, swaying, and gesturing) themselves.
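As a rough illustration of rhythm extraction from an auditory signal, and much simpler than the Keepon architecture described above, the following sketch treats local peaks in short-time energy as beat candidates. The frame size, the 1.5x threshold factor and the use of a single energy feature are all simplifying assumptions.

// Minimal sketch of rhythm extraction from audio: frames whose energy is
// a local maximum well above the mean are treated as beat candidates.
import java.util.ArrayList;
import java.util.List;

public class BeatSketch {

    // samples: mono PCM in [-1, 1]; frameSize: samples per analysis frame.
    static List<Integer> beatFrames(double[] samples, int frameSize) {
        int nFrames = samples.length / frameSize;
        double[] energy = new double[nFrames];
        double mean = 0;
        for (int f = 0; f < nFrames; f++) {
            double e = 0;
            for (int i = 0; i < frameSize; i++) {
                double s = samples[f * frameSize + i];
                e += s * s;                 // accumulate frame energy
            }
            energy[f] = e;
            mean += e / nFrames;
        }
        List<Integer> beats = new ArrayList<>();
        for (int f = 1; f + 1 < nFrames; f++) {
            // beat candidate: local energy maximum well above the mean
            if (energy[f] > 1.5 * mean
                    && energy[f] > energy[f - 1]
                    && energy[f] >= energy[f + 1]) {
                beats.add(f);
            }
        }
        return beats;
    }
}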

The research and development of various kinds of dance robots is being actively pursued, especially in Japan. For example, the “Hip Hop Dance Robot” of Kwansei Gakuin University is a humanoid robot that achieves various dance performances by concatenating a set of short dance motions called “dance units” (Fig. 2.26) (cf. Shinozaki et al., 2007). Another example comes from Tohoku University, which proposed a dance partner robot referred to as “MS DanceR” (Mobile Smart Dance Robot), developed as a platform for realising effective human-robot coordination with physical interaction. It can lead a human partner or be led by one, keeping a waltz pattern in its memory and predicting the next step of the dance; it also senses the pressure on its arms and back and tries to stay in step with the human partner (Fig. 2.27) (Kosuge & Wang Laboratory, 2007). In addition, the dance robot “HRP-2 Promet”, developed by the University of Tokyo, can perform a traditional Japanese dance: human dance movements were captured using video-capture techniques, converted into a sequence of robotic limb movements, and fed into its processors (Fig. 2.28) (Randerson, 2007). Moreover, Tanaka et al. (2007) created non-interactive and interactive (posture-mirroring) dance modes for a Sony QRIO robot in a playroom with children (Fig. 2.29; see also Tanaka et al., 2005).

Fig. 2.26 Comparison between the robot dance motions developed by researchers and those developed by the dancer (cf. Shinozaki et al., 2007).


Fig. 2.27 A ballroom dancing robot “MS DanceR” (cf. Kosuge & Wang Laboratory, 2007).

Fig. 2.28 The 1.5-meter-tall robot “HRP-2 Promet” (cf. Randerson, 2007).

Fig. 2.29 QRIO dancing with children (cf. Tanaka et al., 2005).

Since both self-synchrony (rhythmic matching of movements and vocalisations) and interactional synchrony are important for normal human-human interaction (Kendon, 1970; Condon et al., 1974), these capabilities should be considered in designing robots that interact socially with humans.

2.2.2 Thai Dance

Building on the dance-robot concepts described above, a good grasp of Thai dance itself had to be achieved before a Thai dance robot could be developed into an effective system.

Thai dance, or “Ram Thai” in the Thai language, is the main dramatic art form of Thailand and one of countless dance types worldwide. Like many forms of traditional Asian dance, Thai dance can be divided into two major categories corresponding to the distinction between high art (classical dance) (Fig. 2.30) and low art (folk dance) (Fig. 2.31) (Wikipedia, 2007; Mahidol University, 2007).

Fig. 2.30 Examples of Thai classical dance forms (TAT, 2007).

Thai classical dance includes dance forms such as “Fawn Thai”, accompanied by the folk music and style of the region; “Khon”, the most stylised form of Thai dance, performed by a group of non-speaking dancers with the story told by a chorus at the side of the stage; and “Lakhon”, whose costumes are identical to Khon but whose movements are more graceful, sensual, and fluid, the upper torso and hands being particularly expressive, with conventionalised movements portraying specific emotions.

Fig. 2.31 Examples of Thai folk-dance forms (Cultural Office of Samutprakan Province, 2007; Phon’s Thai Martial Art Center, 2007).

Thai folk dance includes forms such as “Likay”, containing elements of pantomime, comic folk opera, and social satire, generally performed against a simply painted backdrop during temple fairs; “Ram”, numerous regional dances; “Ram Muay”, the ritualised dance that takes place before Southeast Asian kickboxing matches such as Muay Thai; and “Wai Khru”, a ritualised form of dance meant to pay respect or homage to the “khru”, or teacher, performed annually by Thai classical dance institutions.

One of the most popular Thai folk-dance performances, “Ram Wong”, was originally adapted from “Ram Tone”, which uniquely specifies that dancers must follow the rhythm of the tone drum made especially for the dance. It had long been popular among Thai people in some regions of Thailand. The dance was traditionally accompanied by Thai musical instruments: the “Ching” (an important Thai percussion instrument made of metal), the “Krab” (an important Thai percussion instrument made of wood), and the “Tone” (a Thai drum made of carved wood or baked clay). In 1940 the Ram Wong dance pattern spread to the other regions of Thailand, becoming very popular among people in every region of the country, and rhythmic dialogues were created to be sung with the Thai music. Basically, the dialogues are about persuasion, teasing, praising, and parting. Ram Wong was very popular among the people of the central region of Thailand during World War II (1941-1945). With the support of the government, Ram Wong was reformed by the Fine Arts Department of Thailand in 1944. At that time four new rhythmic dialogues were created, and the songs and musical instruments were adapted to be more contemporary. Movements such as “Tar Sod Soi Ma La” (imitating the actions of local people making a flower garland: one hand holds a cotton string while the other pulls a flower along the string outwards from the body towards the side) and “Tar Ram Sai” (imitating people persuading one another: both arms are stretched almost parallel to the ground while the hands twist up and down opposite each other) were settled as standard patterns of Ram Wong. The name Ram Tone (the tone dance) was changed to Ram Wong (the circle dance) because the dancers move around in a circular pattern. Later, Premier Piboonsongkram created six more rhythmic dialogues, introducing Ram Wong as a modern Thai dance. Ram Wong thus has ten songs with specific movement patterns in which the dancers move round in a circle; the song lyrics refer to the goodness of Thai culture and the ability and daring of Thai warriors. Since World War II, Ram Wong has remained active among the people. It is widely performed not only by Thai people but also by foreigners at dance balls, and many Ram Wong songs have been created following the ten specific standard movement patterns (Fine Arts Department, 1971).

2.3 Entertainment Robot

Entertainment robots have existed for over a decade; an early example is the self-playing piano that can be programmed to play music automatically without a human present. Over the past five years, entertainment robots have become more and more popular, as can be seen from toy products that incorporate robotic technology similar to that used in other areas of robotic development. According to Robotics Trend (2007), somewhere in the intersection between toys and domestic service robots lies the realm of entertainment robots, or, for the more seriously minded, companion or personal robots. These are robotic products, typically humanoid or animal in form, which cost between US$200 and US$5,000 and offer a number of interactive features designed to entertain, educate and serve humans. In the next few years, entertainment robots are expected to emerge as a new class of high-margin consumer electronics; for consumers to adopt these products when they launch, developers must focus on creating robots with high functionality at a reasonable price.

Crick et al. (2006) regarded the characteristic of an entertainment robot as “a social dimension, demanding intimate interaction with humans and the environment. The robot learns and adjusts its behaviour by observing and reasoning about itself and its human partners in real time”.

2.4 WowWee and Robosapien Media

2.4.1 History of WowWee

WowWee Limited, a privately held Hong Kong-based company, is a leading designer, developer, marketer and distributor of technology-based consumer robotic, toy and electronic products. WowWee is best known for its line of Robo-branded consumer robotic products. WowWee’s Robosapien, introduced in 2004, was the first in a line of robotic products that use biomorphic motion technology and is uniquely programmable to perform a variety of functions. Since its launch, Robosapien has been honoured with numerous awards in markets around the world. Today, the Robo line includes Roboraptor, a robotic dinosaur; Roboreptile, a robotic reptile; Robopet, a robotic dog; Robopanda, a robotic panda; Roboquad, a robotic quadruped; Roboboa, a robotic snake; and RS Media, a technologically advanced version of Robosapien (Fig. 2.32).

Fig. 2.32 WowWee’s mass-produced toys (Whitlock, 2006).

WowWee also develops and markets products in its “Alive” series, based upon highly sophisticated and realistic animatronic technology. In early 2007, WowWee launched the award-winning Flytech Dragonfly, adding the radio-controlled flight segment to its product base.

WowWee's core competency and key competitive advantage has been its ability to identify cutting-edge technologies and through its unique, creative, and flexible design process, to incorporate these technologies in the development of innovative consumer-focused products. WowWee has demonstrated the ability to be at the forefront of consumer trends with emphasis placed on evolving consumer preferences for technology-based electronic products. WowWee has continued to expand its award-winning portfolio of entertainment robots to incorporate smart, task-based features, setting a standard for affordable, high-end, consumer-robotic and electronic devices. WowWee's products are sold through its direct sales channel as well as through international distributors. Approximately 65% of its products are sold in North America, with the balance being sold internationally.

In 2006 and 2005, WowWee’s revenues were approximately US$117 million and US$131 million, respectively, and earnings before interest, taxes, depreciation and amortisation were approximately US$5 million and US$27.5 million, respectively (WowWee Ltd., 2007).

On September 27, 2007, Optimal Group, an umbrella organisation for a number of technology-related companies, entered into an agreement to purchase WowWee for US$65 million in cash and class A stock. Optimal Group planned to keep WowWee focused on its already successful business model and product line, while implementing “specific strategies to enhance the growth and financial performance of the WowWee business” (Community Headlines, 2007).

2.4.2 Robosapien: Why Robosapien Media?

Robosapien (RS) is a low-cost humanoid robot designed and created by Mark Tilden (Fig. 2.33), Head of Research and Development at WowWee. RS has been marketed with great success in the toy market, with over 50 million units sold since the release of Robosapien in 2004. The original Robosapien, a biomorphic robot, was a remote-controlled machine that could be programmed to run through different sequences of commands in response to questions and external stimuli. The Robosapien V2 is a larger, more robust robot with a greatly expanded list of English verbalisations; it also introduced basic colour recognition sensors, grip sensors, sonic sensors and a wider variety of possible movements.

The RS Media has a body very similar to RS V2, but an entirely new brain based on a Linux kernel. RS Media is equipped to be a media centre, with the ability to record and play back audio, pictures and video.

Fig. 2.33 Mark Tilden posed with Robosapien Media and Roboraptor (Excell, 2006).

While the humanoid robots developed by large companies are impressive, they are either not available to researchers outside industry labs or too expensive for academic research. Some universities have built their own robots, but due to limited resources usually only one prototype has been constructed. Hence, multi-robot experiments with humanoid robots are currently not feasible in academic environments and are likely to remain difficult in the near future.

While many mass-produced humanoid robot kits and humanoid robots (Figs. 2.34 and 2.35) developed by robotics companies are inarguably flexible, with human-like movements, many of them are small in size, cannot produce audio independently, or are otherwise very expensive, as mentioned above. It would be more convenient for people who already own, or are looking to buy, the kind of toy-market robot used in this research to be able to take the robot anywhere with them and have it run the Thai dance autonomously, without connecting through a computer or external speakers. This limited the robot selection to the Robosapien series, in particular RS Media, which is the most frequently sold humanoid robot today. In contrast to RS V2, RS Media has a more powerful brain and can be customised using the Java programming language. In addition, at the 2007 JavaOne Conference, WowWee sponsored a set of Robosapien programming contests for Java developers; to make control of the robot possible, a special Robot Extension SDK was written by Sun Microsystems and bundled with only 200 RS Media units at the conference.

Hence, to take full programming advantage of this, it was crucial to proceed with the RS Media Java SDK capability for this research.
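A hypothetical sketch of what a Java-scripted routine built on such an SDK might look like is given below. The RsMedia class and its playSound, playMotion and waitUntilIdle methods are illustrative stand-ins defined as stubs here, not the actual Robot Extension SDK API; only the control pattern, cueing the music and then running choreographed motions in sequence, is the point.

// Hypothetical sketch only: RsMedia and its methods stand in for whatever
// the bundled SDK actually exposes; they are defined as stubs so the
// control flow is concrete and the example runs.
public class ThaiDanceSketch {

    // Stub standing in for the (unknown) SDK robot class.
    static class RsMedia {
        void playSound(String clip)  { System.out.println("audio: " + clip); }
        void playMotion(String name) { System.out.println("motion: " + name); }
        void waitUntilIdle()         { /* would block until motion completes */ }
    }

    public static void main(String[] args) {
        RsMedia robot = new RsMedia();
        robot.playSound("ram_wong_song");                    // cue the music
        for (String step : new String[] { "sod_soi_mala", "ram_sai" }) {
            robot.playMotion(step);   // one pre-built Ram Wong dance step
            robot.waitUntilIdle();    // keep the steps strictly in sequence
        }
    }
}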

Fig. 2.34 Futaba RBT-1, a flexible 1-foot robot that does all the moves (Futaba, 2007).

Fig. 2.35 Robosapien Media with lying down sequences (Atwood, 2007).

2.5 Speecys and SPC-101C

2.5.1 History of Speecys

Speecys Corp. has been developing networked robots powered by the NetBSD operating system since the company was founded in 2001. SPC-101C is considered the first Internet robot powered by the NetBSD operating system. It was invented by Tomoaki Kasuga, founder and CEO of Speecys Corp. Since its inception, the company has focused mainly on the development of a humanoid robot and the surrounding system that enable the robot to download content via the Internet so that it can provide entertainment and perform various tasks (Speecys Corp., 2008).

The company has discovered that people easily get bored with robots that always run on the same software. Through this experience, it has come to recognise that the most important aspect of providing a truly entertaining robot is the content download function.

The company defined the Internet Robot (IR) and RTML (Robot Transaction Markup Language) for supplying fresh content to the robot, and also developed the Internet Content Server that readily provides the necessary data to the robot for each purpose.

Speecys supplied its customers with the first “Internet Robot” for application development purposes, called “SPC-101C”. The idea is simple: SPC-101C can download content via the Internet so that it can provide entertainment and perform various tasks. Similar to other popular humanoid robots on the market, the Speecys SPC-101C is 33 cm tall and weighs 1.5 kg. Where other robots typically have 16 or 17 degrees of freedom (DOF), SPC-101C has a total of 22; the additional servos provide greater flexibility and potentially more realistic motion, and the robot is intended to mimic human body language closely.

2.5.2 Why Speecys SPC-101C?

Compared to RS Media, the Speecys SPC-101C is smaller, but its degrees of freedom are doubled, giving SPC-101C a higher degree of mobility. A 270,000-pixel video camera mounted in the head can capture video images and send them to a receiver; the camera can be panned using the head servo and moved up and down by tilting the torso at the waist. Dual stereo speakers are built into the sides of the torso, and LED arrays in the hands and chest can display characters, text, or robotic emoticons (block graphics). The robot uses Futaba Corp. RS301CR servos and an RPU-50 CPU robot controller, powered by a 7.4 V 780 mAh Futaba Corp. LiPo battery. It also has a miniSD slot for extended programming, in addition to 64 MB of RAM and 64 MB of flash memory. The robot can be operated while plugged into the charger (in the case of RS Media, the lower body is disabled if the AC adapter is connected). SPC-101C runs the NetBSD operating system. Its operations and communications are carried out through a WiFi connection, allowing it to extend its operations over the Internet, and it can support its own IP address. Hence, SPC-101C is IP-enabled and can be operated remotely via the Internet, an important advancement over the RS Media.

Fig. 2.36 Speecys SPC-101C.

This robot allows users to customise series of movements using the bundled software “Motion Editor”. In addition, advanced users may build their applications in Visual Basic, C#, Java, or other development languages. Speecys Corp., the designer and manufacturer of SPC-101C, has released an open-source SDK named “Open Roads”, which facilitates the use of a wealth of well-established, proven application libraries. For example, the Microsoft .NET 3.0 System.Speech libraries can be used to add voice recognition and synthesised speech to the robot, and video capture, object recognition and tracking, and other advanced functionality can be added in the same way. SPC-101C thus offers the potential to utilise a large amount of existing library resources.

In terms of communication, SPC-101C is a networked robot: it can be controlled wirelessly over IEEE 802.11g wireless LAN from any computer, handheld device or smartphone with Internet access. Moreover, robot programs can be downloaded via Internet servers, as shown in Fig. 2.37. The Robot Transaction Markup Language (RTML) makes content handling easier by allowing the transfer of service requests or responses over the HTTP protocol; using simple HTTP, SPC-101C can easily communicate with servers independently of network characteristics. SPC-101C uses NetBSD as its base OS for network access and debugging. The TCP/IP and UDP/IP protocols allow the transfer of motion or audio data using FTP or Telnet, and SOAP is used as an inter-object communication protocol that transfers service requests or responses over HTTP. As for sound sensing, the robot uses the sound input of a computer as a microphone, with the audio uploaded through Wi-Fi; there is no physical microphone embedded in the robot. This lack of sound sensors means the robot cannot detect sound in the environment by itself, and an external sound device is required. However, this could be an interesting feature, as the robot might be able to “hear” from a distance by having a remote sound recorder upload the file to it. Unlike the RS Media, SPC-101C also lacks a touch sensor; from the developer’s viewpoint this may be unnecessary, since SPC-101C’s hands are very small and holding or detecting objects is not feasible. Furthermore, SPC-101C has facial and motion detection abilities similar to the RS Media’s, the difference being that SPC-101C’s detection is more accurate.

Fig. 2.37 Control of SPC-101C.
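As a concrete illustration of this HTTP-based control path, the sketch below issues a single request to the robot and prints the response, using only standard Java networking classes. The host address, port, request path and query parameter are hypothetical placeholders; the actual RTML message format is defined by Speecys and is not reproduced here.

// Minimal sketch: one HTTP GET to an IP-enabled robot, printing whatever
// it returns. The URL below is a hypothetical placeholder endpoint.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RobotHttpSketch {
    public static void main(String[] args) throws Exception {
        // hypothetical endpoint on the robot's own IP address
        URL url = new URL("http://192.168.1.42/rtml?motion=wave");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // the robot's response body
            }
        } finally {
            conn.disconnect();
        }
    }
}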

2.6 Comparison of SPC-101C and RS Media

As SPC-101C possesses more servo motors than RS Media, it has greater flexibility and more realistic motion, and can mimic human body language closely; for example, it can tilt its torso backwards and forwards at the hips. Although SPC-101C has higher degrees of freedom, this does not guarantee superior performance on every task: in this research, SPC-101C proved limited in the arm motions required for Thai dance. They are not as realistic as RS Media’s, as shown in Fig. 2.38, owing to the lack of a servo at the elbow that would otherwise allow movement in other directions. However, the lower body of SPC-101C is definitely more flexible than that of RS Media.

Fig. 2.38 RS Media and SPC-101C demonstrate an arm movement.

Although SPC-101C uses a 7.4 V 780 mAh rechargeable battery pack, it can only operate for 10 to 15 minutes of regular, continuous operation. If the robot is left on and standing still, the battery lasts approximately 30 minutes. However, unlike RS Media, when the adaptor is connected SPC-101C operates without any restriction, although one has to be careful of the adaptor wire.

Even though RS Media’s humanoid body is able to perform bending, sitting, standing, lying down, getting up, waving, and dancing, there are still limitations in its movement, as it has a total of only 11 DOF; in particular, each leg has only one DOF. This makes many movements, especially of the legs, highly restricted or even impossible. For example, the robot cannot turn left or right the way a human does: it has to walk backwards while twisting its waist in order to turn. Moreover, RS Media’s walk is still quite awkward, as it waddles extensively; the servos in the legs move it from side to side, which gradually carries it forwards or backwards. Therefore, even the slightest obstacle in front of it, such as a slightly raised doorframe, will cause it to struggle; most likely, the robot will simply stop and turn around. In addition, the surface on which RS Media walks also affects its movements, and many dance sequences or steps can be a challenge: the robot has to move on a smooth surface and cannot dance effectively on “sticky” floors such as carpet.

This research used and developed an off-the-shelf toy robot, the WowWee RS Media, under the name “iThaiSTAR” (intelligent Thai Sanook Training-Assist Robot), where “sanook” is a Thai word meaning fun and entertaining. Later, in a second phase of the project, another off-the-shelf robot, the Speecys SPC-101C, was obtained and used under the name “iCHEER” (intelligent Humanoid Entertainment and Education Robot). Initially, iCHEER served as another low-cost robot with which to compare movements against iThaiSTAR. It was then utilised for research in the area of companion, entertainment and edutainment robots, which led to the development of the final stage.

2.7 Physical Movement

According to Whitehouse (1968), movement originates in what Laban calls an inner effort, that is, a specific inner impulse having the quality of sensation. This impulse leads outward into space so that movement becomes visible as physical action. Following the inner sensation, allowing the impulse to take the form of physical action, is active imagination in movement, just as following the visual image is active imagination in fantasy.

Whitehouse (1968) further explained that, for most people, the tempo and pattern of all physical movements is habit-formed, automatic, unconscious and, above all, organised toward a utilitarian end, an objective or goal. It is when the goal or the form of a purposeful action (reaching to get something, running to catch a bus, all the thousand-and-one acts packed into the day) is suspended in favour of movement as it happens that one can perceive the movement as an actual substance and begin to ask what it reveals, what it says about the person moving. The awakening awareness of how one moves, in what manner (slow or fast, heavy or light, restricted or easy), leads to a perception that carries over into a recognition of the character of daily or habitual body usage.

Human physical movement involves joints and muscles controlling the gestures of the body. This research attempts to develop programs that control a robot to move like an actual human gesture. However, the human structure is much more complicated than a robot’s, especially a low-cost robot’s. Looking at the applications on which this research focuses (Thai dance, cheerleading dance, yoga, and exercises for the elderly), the difficult part is clearly the detailed movement of human joints; such robots can perform only simple movements, not complex ones.

As there are many varieties of Thai dance, the one used in this research had to be selected carefully, in consultation with a professional in the area of Thai dance. Thai dances range from simple to complex movements, and if an inappropriate one were chosen, the robot might not be able to perform it due to its physical limitations.

Cheerleading dance, on the other hand, requires more flexibility and faster movement than Thai dance. Moreover, a cheerleading routine is always performed by a team of more than two members, and can involve up to a hundred depending on the objective. Each team can be divided into smaller groups, with each group performing differently while synchronising its gestures with the music. In 2002, a research project investigated how a team of robots could work together in an activity usually carried out by humans: seven LEGO Mindstorms robots were used to carry out a cheerleading dance routine (Mastuname et al., 2002). Since then, no other published research on cheerleading robots is known. The present research has a different focus, as it investigated and developed a cheerleading robot that performs live collaboratively with human cheerleading team members.

A cheerleading robot closely follows what is required of a dance robot, with one exception regarding the role of robots in dance entertainment: a cheerleading robot allows humans to become only entertainers, not spectators. In other words, humans act as entertainers by creating built-in, pre-programmed sequences of cheerleading dance routines for the robot to entertain the audience. This is a form of non-real-time entertainment for the designer, just as human cheerleaders need a choreographer to design a dance routine that they can practise and memorise before the actual live performance.

From the aforementioned, it can be said that a cheerleading sequence can be appropriately choreographed for any robot so that it can perform together with human cheerleading team members. In other words, the dance sequence has to be designed specifically for the particular robot before it can be applied to human cheerleaders, because a human is still far more flexible than any robot in existence. Therefore, it makes more sense to start with the robot.

2.8 Image Processing

Image processing is any form of signal processing in which the input is an image, such as a photograph or video frame; the output may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a 2D signal and applying standard signal-processing techniques to it. Image processing can also be viewed as a set of computational techniques for analysing, enhancing, compressing, and reconstructing images, with extensive applications in many areas, including astronomy, medicine, robotics, and remote sensing by satellites (Britannica Online Encyclopedia, 2010).

According to the McGraw-Hill Encyclopaedia of Science and Technology (2004), image processing is, in other words, the manipulation of data in the form of an image through several possible techniques. An image is usually interpreted as a two-dimensional array of brightness values, and is most familiarly represented by such patterns as those of a photographic print, slide, television screen, or movie screen. An image can be processed optically, or digitally with a computer.

To digitally process an image, it is first necessary to reduce the image to a series of numbers that can be manipulated by the computer. Each number representing the brightness value of the image at a particular location is called a picture element, or pixel. A typical digitised image may have 512 × 512 or roughly 250,000 pixels, although much larger images are becoming common. Once the image has been digitised, there are three basic operations that can be performed on it in the computer. For a point operation, a pixel value in the output image depends on a single pixel value in the input image. For local operations, several neighbouring pixels in the input image determine the value of an output image pixel. In a global operation, all of the input image pixels contribute to an output image pixel value.

These operations, taken singly or in combination, are the means by which the image is enhanced, restored, or compressed (McGraw-Hill, 2004).

An image is enhanced when it is modified so that the information it contains is more clearly evident, but enhancement can also include making the image more visually appealing. An example is noise smoothing. To smooth a noisy image, median filtering can be applied with a 3 × 3 pixel window. This means that the value of every pixel in the noisy image is recorded, along with the values of its nearest eight neighbours. These nine numbers are then ordered according to size, and the median is selected as the value for the pixel in the new image. As the 3 × 3 window is moved one pixel at a time across the noisy image, the filtered image is formed (cf. McGraw-Hill, 2004).
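The procedure just described maps directly to code. Below is a minimal sketch of the 3 × 3 median filter on a greyscale image stored as a 2D integer array; copying the border pixels unchanged is a simplifying assumption.

// 3x3 median filter: each interior output pixel is the median of the
// pixel and its eight neighbours, exactly as described above.
import java.util.Arrays;

public class MedianFilter {

    static int[][] filter(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) out[y] = img[y].clone(); // keep borders
        int[] window = new int[9];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int k = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        window[k++] = img[y + dy][x + dx];
                Arrays.sort(window);      // order the nine values...
                out[y][x] = window[4];    // ...and take the median
            }
        }
        return out;
    }
}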

Fig. 2.39 Image Processing (LetsGoDigital, 2008).

Another example of enhancement is contrast manipulation, where each pixel’s value in the new image depends solely on that pixel’s value in the old image; in other words, this is a point operation. Contrast manipulation is commonly performed by adjusting the brightness and contrast controls on a television set, or by controlling the exposure and development time in printmaking. Another point operation is pseudocolouring a black-and-white image by assigning arbitrary colours to the gray levels. This technique is popular in thermography (the imaging of heat), where hotter objects (with high pixel values) are assigned one colour (for example, red) and cool objects (with low pixel values) are assigned another colour (for example, blue), with other colours assigned to intermediate values (cf. McGraw-Hill, 2004).
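A point operation such as pseudocolouring reduces to a per-pixel lookup, as in the sketch below; the three grey-level bands and the colours assigned to them are arbitrary illustrative choices.

// Pseudocolouring as a point operation: each output value depends only
// on the corresponding input pixel, via a fixed grey-to-colour mapping.
public class PseudoColour {

    // grey in 0..255 -> packed 0xRRGGBB colour
    static int toColour(int grey) {
        if (grey < 85)  return 0x0000FF;   // low values: blue (cold)
        if (grey < 170) return 0x00FF00;   // mid values: green
        return 0xFF0000;                   // high values: red (hot)
    }

    static int[][] pseudoColour(int[][] grey) {
        int h = grey.length, w = grey[0].length;
        int[][] rgb = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                rgb[y][x] = toColour(grey[y][x]); // per-pixel lookup only
        return rgb;
    }
}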

The aim of restoration is also to improve the image, but unlike enhancement, knowledge of how the image was formed is used in an attempt to retrieve the ideal (uncorrupted) image. No image-forming system is perfect; each introduces artifacts (for example, blurring or aberrations) into the final image that would not be present in an ideal image. A point spread function, called a filter, can be constructed that undoes the blurring: by imaging the blurred image with the filter point spread function, the restored image results. The filter point spread function is spread out more than the blurring point spread function, bringing more pixels into the averaging process. This is an example of a global operation, since potentially all of the pixels of the blurred image can contribute to the value of a single pixel in the restored image. This type of deblurring is called inverse filtering, and is sensitive to the presence of noise in the blurred image; by modifying the deblurring filter according to the properties of the noise, performance can be improved. An example of the need to deblur images from an optical system is the Hubble Space Telescope before its spherical aberration was corrected with new optics (cf. McGraw-Hill, 2004).

Compression is a way of representing an image by fewer numbers, while minimising the degradation of the information contained in the image. Compression is important because of the large quantities of digital imagery that are sent electronically and stored. Digital high-definition television relies heavily on image compression to enable the transmission and display of large-format colour images. Once the image is compressed for storage or transmission, it must be uncompressed for use, by the inverse of the compression operations. There is a trade-off between the amount of compression and the quality of the uncompressed image. High compression rates are acceptable with television images, for example; however, where high image quality must be preserved (as in diagnostic medical images), only compression rates as low as three or four may be acceptable (cf. McGraw-Hill, 2004).

Image processing is an active area of research in such diverse fields as medicine, astronomy, microscopy, seismology, defence, industrial quality control, and the publication and entertainment industries. The concept of an image has expanded to include three-dimensional data sets (volume images), and even four-dimensional volume-time data sets; an example of the latter is a volume image of a beating heart obtainable with x-ray computed tomography (CT). CT, PET, single-photon emission computed tomography (SPECT), MRI, ultrasound, SAR, confocal microscopy, scanning tunnelling microscopy, atomic force microscopy, and other modalities have been developed to provide digitised images directly. Digital images are widely available from the Internet, CD-ROMs, and inexpensive charge-coupled-device (CCD) cameras, scanners, and frame grabbers. Software for manipulating images is also widely available (cf. McGraw-Hill, 2004).

2.9 Learning Theories

Behaviorism, cognitivism, and constructivism are the three broad learning theories most often utilised in the creation of instructional environments. These theories, however, were developed at a time when learning was not impacted by technology. Over the last twenty years, technology has reorganised how people live, how people communicate, and how people learn. Learning needs, and theories that describe learning principles and processes, should be reflective of underlying social environments (Siemens, 2004).

Driscoll (2000) defines learning as a persisting change in human performance or performance potential which must come about as a result of the learner’s experience and interaction with the world. This definition encompasses many of the attributes commonly associated with behaviorism, cognitivism, and constructivism, namely learning as a lasting changed state brought about as a result of experiences and interactions with content or other people.

Informal learning is a significant aspect of our learning experience. Formal education no longer comprises the majority of our learning; learning now occurs in a variety of ways, through communities of practice, personal networks, and the completion of work-related tasks. Learning is a continual process, lasting a lifetime. Learning and work-related activities are no longer separate; in many situations they are the same. Moreover, technology is altering people’s brains: the tools people use define and shape their thinking. Many of the processes previously handled by learning theories can now be off-loaded to, or supported by, technology (cf. Siemens, 2004).

All of these learning theories hold the similar notion that knowledge is an objective, or a state, attainable through either reasoning or experience. In addition, learning is viewed as a process of inputs, managed in short-term memory and coded for long-term recall. Furthermore, Driscoll (2000) stated that learners create knowledge as they attempt to understand their experiences. Learning theories are concerned with the actual process of learning, not with the value of what is being learned. Additional concerns arise from the rapid increase in information: in today’s environment, action is often needed without personal learning, that is, people need to act by drawing on information outside their primary knowledge (cf. Siemens, 2004).

This research utilised the Taxonomy of Educational Objectives, commonly known as “Bloom’s Taxonomy”, to assess the improvement, enjoyment, motivation, and participation of participants. Bloom’s Taxonomy is a method of classification for learning, teaching and assessing (Anderson and Krathwohl, 2001). This taxonomy of learning behaviours can be thought of as the goals of the training process: after the training session, the learners should have acquired new skills, knowledge, and/or attitudes (Bloom, 1956). A search of the World Wide Web yields clear evidence that Bloom’s Taxonomy has been applied to a broad spectrum of situations, with articles and websites describing everything from corrosion training to medical preparation.

Although this taxonomy covers three domains (cognitive, affective, and psychomotor), only the latter two domains will be applied for the assessment. The affective domain is an attitudinal-based domain, which deals with growth in feelings or emotions, including the manner in which we deal with things emotionally, such as feelings, values, appreciation, enthusiasms, motivation, and attitudes. There are five levels in this domain, ranging from mere awareness through to being able to distinguish implicit values through analysis, as described in detail below (Krathwohl et al., 1956; Krathwohl et al., 1973).

• Receiving Phenomena: the lowest level; awareness, willingness to hear, or selected attention. The learner passively pays attention. Without this level no learning can occur. In other words, the learner is aware of, or attending to, something in the environment.

• Responding to Phenomena: active participation on the part of the learners. The learner attends and reacts to a particular phenomenon. Learning outcomes may emphasise compliance in responding, willingness to respond, or satisfaction in responding (motivation). In other words, the learner shows some new behaviours as a result of experience.

• Valuing: the worth or value a learner attaches to a particular object, phenomenon, or behaviour. This ranges from simple acceptance to the more complex state of commitment. Valuing is based on the internalisation of a set of specified values, while clues to these values are expressed in the learner's overt behaviour and are often identifiable. In other words, the learner shows some definite involvement or commitment.

• Organising: the learner can put together different values, information, and ideas into priorities and accommodate them within his/her own schema by comparing, relating and elaborating on what has been learned. The emphasis is on comparing, relating, and synthesising values.

• Characterising by Value or Internalising Values: the learner holds a particular value or belief that now exerts influence on his/her behaviour so that it becomes a characteristic. In other words, the behaviour is pervasive, consistent, predictable, and most importantly, characteristic of the learner.


On the other hand, the psychomotor domain is a skills-based domain, which deals with changes in physical skills, including physical movement, coordination, and use of the motor-skill areas. Development of these skills requires practice and is measured in terms of speed, precision, distance, procedures, or techniques in execution. When learning psychomotor skills, individuals progress through the cognitive stage, the associative stage, and the autonomic stage. The cognitive stage is marked by awkward, slow, and choppy movements that the learner tries to control.

The learner has to think about each movement before attempting it. In the associative stage, the learner spends less time thinking about every detail; however, the movements are still not a permanent part of the brain. In the autonomic stage, the learner can refine the skill through practice, but no longer needs to think about the movement. There are five levels in this domain, as described in detail below (Dave, 1975).

• Imitation: the lowest level; the learner observes and patterns behaviour after someone else. Performance may be of low quality.

• Manipulation: the learner is able to perform certain actions by following instructions and practising.

• Precision: the learner becomes more exact. Few errors are apparent.

• Articulation: the learner coordinates a series of actions, achieving harmony and internal consistency.

• Naturalisation: the learner possesses high-level performance, which becomes natural without needing to think much about it.

Having completed the background literature research for this study, the following chapter outlines the scope and methodology of the research.

CHAPTER 3

RESEARCH STUDY

3.1 Scope of Research

Applications of robots are found in a wide variety of areas such as research, industry, medicine and toys, and the objectives in these areas are widely distinct. For example, research robots are mostly expensive and flexible enough to be applied to different areas, since their goals always reach beyond present situations.

Industrial robots are widely used and their prices vary greatly, since the areas of application in different industries vary a lot; the price may range from a couple of hundred dollars to millions of dollars. In medical applications, robots have to be precise and accurate, and they are associated with high initial costs. More and more toy robots have also been launched into the consumer market, and their performance is getting closer to human-like movement in all directions. These are also known as humanoid robots.

The concepts of using humanoid robots for entertainment and services have captured many people's attention and imagination for some time. Robot characters have been created and popularized in mass media such as fiction, movies, TV and radio shows.

Although these imaginary robot characters were assumed to be able to do many complex and interactive tasks, successful implementation of these types of functions in real life is still an ongoing development. Nevertheless, robot technologies are improving and progressing steadily towards the ultimate goal of practical service robots.


Apart from providing useful or practical services, robots could also be considered for the provision of companionship and entertainment. Successful development of social robots for day-to-day interaction with humans in society has long been the ultimate dream of robot designers and developers. While there have been rapid and substantial advancements over the years, the cost of these specialized robots remains beyond the reach of most people. On the other hand, low cost off-the-shelf toy robots have been improving in terms of mobility and computational capability.

Most robotics research so far has not focused on low-cost off-the-shelf robots. This research is therefore one of the few studies that focuses on this category of robots. This study also investigates the opportunities and potential of low-cost off-the-shelf robots and how to develop them beyond their default capability. The research proposes the feasible use of humanoid robots in training, entertainment and edutainment by using low-cost off-the-shelf robots as a tool. The off-the-shelf humanoid robot is intended to be interactive and user-friendly, and it can be controlled by voice, thereby eliminating the need for remote control devices. In order to assess the effectiveness of the approach, a conceptual model is proposed to evaluate the performance, level of enjoyment, motivation, and participation in the interactive activities with the robot.

Moreover, this research utilized two low-cost off-the-shelf robots and customized their software for training. It serves to demonstrate that off-the-shelf robots can be developed to facilitate training in physical exercises such as yoga, Thai dance, cheerleading dance, and exercises for the elderly, which serve as the case studies here. It is believed that a robot can encourage participants to cooperate and to enjoy their activities more. At this point, the researcher can assess the effectiveness of using a robot to facilitate participation in physical activities.

This research will also contribute to scholarly knowledge in the Human-Robot Interaction (HRI) field in various ways. Firstly, the proposed study will provide robotics software developers with a greater degree of understanding of what is required in a companion humanoid robot that can entertain a human partner as well as provide an edutainment experience. Secondly, other robotics industries, such as the service robot industry, will also benefit from the outcome of this study. Moreover, this research project will implicitly help to enhance and stimulate the interest of programmers and researchers in IT and related fields in the discipline of robotic software development.

3.2 Research Planning and Methodology

This research project is based on a mixture of developmental and qualitative research methods. The approach in this study firstly involves the development of digital content for a low-cost off-the-shelf robot that can carry out tasks or physical activities in addition to the built-in, pre-programmed repetitive activities. For different types of physical activities, there are different criteria and expectations.

Regarding the evaluation of the effectiveness of the robot as a trainer and of the improvement of the users’ performance, observations and qualitative approaches will be used.

3.2.1 Research Phases

This research project was conducted in the following three phases:

Phase 1 (Preliminary Investigation): review of background literature; survey of relevant equipment.

Phase 2 (Design and Development): design; validate knowledge; development.

Phase 3 (Deployment and Evaluation): pre-performance test; conduct training sessions and observations; post-performance test.

Fig. 3.1 Phases of this research project

3.2.2 Procedure

The experiment was arranged into three phases: pre-testing, training and post-testing. In the beginning, a pre-test of the participant's performance in the area of training was carried out, and the outcome was observed and recorded. Next, a training session with the robot was carried out; for each type of physical training with the robot, the participant was given a two-hour session. Finally, a post-test of the participant's performance in the area of training was carried out, and the outcome was observed and recorded. The outcome was assessed by professionals in the areas of training and compared with the result of the pre-test.

[Figure 3.2 content: Participant, Training Session and Robot, leading to assessment of the participant on each of the four criteria: Improvement, Enjoyment, Motivation, Participation]

Fig. 3.2 Conceptualization of the proposed research approach

3.2.3 Assessment

The pre-test and post-test assessments took the form of performance demonstrations by the participant (the researcher), judged by experts in the training areas; the participant's movements were video recorded for observation.

During the two-hour training session with the iCHEER robot, two web cameras were set up to capture live video footage of the participant. The footage was fed in real time to a computer for image processing, in order to compare the movements of the robot and the participant. In the event of wrong or misaligned movements performed by the participant, the robot would inform the participant with a voice message and allow the participant to issue a voice command to instruct the robot what to do next. Besides voice activation commands, the robot could also be interrupted with voice commands such as "stop" or "repeat" in the middle of the training session. Furthermore, the robot could be fully controlled by a computer connected to the wireless network. With the ability to control this trainer robot by voice, the trainee could instruct it to demonstrate any routine of the four types of training; the only requirement is to learn the commands, including the names of all the routines.
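To make this feedback loop concrete, the following C++ sketch outlines one way a single training step could be structured. The helper functions, joint ordering and tolerance are illustrative assumptions for this sketch only, not the project's actual implementation.

    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical stand-ins for the systems described above (stubbed so the
    // sketch compiles): joint angles from the two webcams, the robot's
    // reference pose, voice output, and voice-command input.
    std::vector<double> captureParticipantJointAngles() { return std::vector<double>(11, 0.0); }
    std::vector<double> currentRobotJointAngles()       { return std::vector<double>(11, 0.0); }
    void speak(const std::string& msg)                  { std::cout << "[robot] " << msg << "\n"; }
    std::string listenForVoiceCommand()                 { return ""; }

    // One iteration of the training loop: compare the participant's joint
    // angles against the robot's reference pose and respond by voice.
    bool trainingStep(double toleranceDegrees)
    {
        std::vector<double> participant = captureParticipantJointAngles();
        std::vector<double> reference   = currentRobotJointAngles();
        for (std::size_t i = 0; i < reference.size(); ++i) {
            if (std::abs(participant[i] - reference[i]) > toleranceDegrees) {
                speak("Please check the position of joint " + std::to_string(i));
                break;   // report the first misaligned joint only
            }
        }
        std::string cmd = listenForVoiceCommand();
        if (cmd == "stop")   return false;                  // end the session
        if (cmd == "repeat") speak("Repeating the last routine");
        return true;                                        // continue training
    }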

3.3 The Application of Robosapien Media

The RS Media has been developed and applied to teach the Thai dance called Ram Wong to primary school children, as shown in Figure 3.3. The development is implemented in a number of ways, similar to what a human teacher would do, and would eventually evolve into a game according to the following phases.

1. Demo only. The robot performs the dance sequence. The robot has the advantage that it can be viewed from different angles, front and back.

2. Demo plus verbal instruction. This incorporates verbal instructions and explanation of the finer points of the dance. The dialog could be a source of encouragement for the user.

3. Demo plus recording of the student's own dance. This takes things one step further by challenging the audience to follow the dance, which is then recorded by the robot's camera for playback.

4. Demo, record, replay and comments. Phase 3 is further enhanced with witty dialogs incorporated during the playback of the video recording. The dialogs are just random funny one-liners to arouse the interest of the audience and are by no means judging or assessing the performance; popular comments such as "awesome" or "touchdown" are used to encourage the students.

Fig. 3.3 RS Media dances with a school kid (Fung & Nandhabiwat, 2008).

3.4 The Application of Speecys

One of the first applications developed and tested for the SPC-101C sends the robot a voice command over the Internet, using a computer or an Internet-enabled mobile device such as the Apple iPhone used in this project. A user issued a voice command via the iPhone; the voice command was then sent to the robot within seconds, where it activated a set of instructions and movements.

For example, using an iPhone from any location, the robot could be activated to monitor a house and capture video upon detection of motion. The captured video is then transferred over the Internet and stored on a designated computer.

The next application developed on iCHEER was a cheerleading dance with an actual cheerleading team as a prop at the beginning of the performance. The dance sequence and music were stored in the robot, and the performance was activated by voice command using the iPhone. The last two applications developed had the robot teach yoga and exercises for elderly people.

The following chapter describes the development and assessment of the proposed project.

CHAPTER 4

DEVELOPMENT AND MEASUREMENTS

4.1 Software Design and Development

By applying image processing theory and implementing it on the robot, a participant who attends the training is able to receive assessment and different forms of feedback. With the customized software, the researcher can compare the physical movements of a human with those of the robot, as shown in Figure 4.1 below.

Fig. 4.1 Real-time Image Processing of iCHEER

4.1.1 Structure Computation

There are many proposed algorithms for recovering 3D structure and camera position. Some algorithms require camera calibration to help in camera matrix estimation; an algorithm that does not need prior estimation of the intrinsic camera parameters is called an auto-calibration algorithm.

In this research, two-view geometry with the epipolar constraint and a linear triangulation method are used to determine a 3D point cloud of tracked points.

4.1.1.1 Epipolar Geometry

The application of projective geometry techniques in computer vision is most notable in the stereo calibration problem, which is very closely related to structure-from-motion. Unlike general motion, stereo calibration assumes that there are only two shots of the scene. In principle, stereo calibration algorithms could be applied to a structure-from-motion task (Jebara et al., 1999; Cyganek & Siebert, 2009).

Fig. 4.2 Epipolar Geometry (cf. Jebara et al., 1999).

Figure 4.2 depicts the imaging situation for stereo calibration. The application of projective geometry to this situation results in the now popular epipolar geometry approach. The three points [COP1, COP2, P] form what is called an epipolar plane, and the intersections of this plane with the two image planes form the epipolar lines. The line connecting the two centres of projection [COP1, COP2] intersects the image planes at the conjugate points e1 and e2, which are called epipoles. Assume that the 3D point P projects into the two image planes as the points p1 and p2, which are expressed in homogeneous coordinates (u1, v1, 1) and (u2, v2, 1) respectively. After some manipulation, the main result of the epipolar geometry is the following linear relationship (cf. Jebara et al., 1999):

p1^T F p2 = 0   (Equation 4.1)

Here, F is the so-called fundamental matrix, a 3 x 3 entity with 9 parameters. However, it is constrained to have rank 2 (i.e. det(F) = 0) and is defined only up to an arbitrary scale factor; thus, there are only 7 degrees of freedom in F. It defines the geometry of the correspondences between two views in a compact way, encoding the intrinsic camera geometry as well as the extrinsic relative motion between the two cameras. Due to the linearity of the above equation, the epipolar geometry approach maintains a clean elegance in its manipulations. In addition, the structure of the scene is eliminated from the estimation of F and can be recovered in a separate step.

Given the matrix F, identifying a point in one image identifies a corresponding epipolar line in the other image (cf. Jebara et al., 1999).
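As an illustration of this relationship, the short C++/OpenCV sketch below computes the epipolar line for a given point. The entries of F are arbitrary placeholder values for the sketch; in practice F would be estimated from point correspondences.

    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        // Placeholder fundamental matrix (rank-2 in principle; values are
        // illustrative only).
        cv::Matx33d F( 0.0,    -0.0010,  0.10,
                       0.0012,  0.0,    -0.15,
                      -0.11,    0.16,    1.0);

        // A point p2 = (u2, v2, 1) in the second image, in homogeneous coordinates.
        cv::Vec3d p2(320.0, 240.0, 1.0);

        // With the convention p1^T F p2 = 0 (Equation 4.1), the epipolar line
        // in the first image corresponding to p2 is l1 = F p2: any candidate
        // match p1 must satisfy p1 . l1 = 0.
        cv::Vec3d l1 = F * p2;
        std::cout << "Epipolar line in image 1: " << l1 << std::endl;
        return 0;
    }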

The fundamental matrix F is recovered independently of the structure and can be useful on its own, for example in a robotics application (Faugeras, 1992). At this point it is worthwhile to study the stability of such techniques. Consider the case where the centres of projection of the two images (COP1 and COP2) are close to each other. Note the degeneracy when the centres overlap, which is the case when there is no translation and only rotation: a point in one image does not project to an epipolar line in the other in these cases. Degeneracy also occurs when all 3D points in the scene are coplanar. The result is that the epipolar geometry between close consecutive frames cannot be determined from image correspondences alone. The linearization in epipolar geometry creates these degeneracies and numerical ill-conditioning near them.

Therefore, one requires a large baseline or translation between the image planes to keep errors small (cf. Jebara et al., 1999). One way to overcome these degeneracies is provided by a technique that switches from epipolar feature matching to a homography approach, which can automatically detect and handle degenerate cases such as pure camera rotation (Torr et al., 1998).

The linear epipolar geometry formulation also exhibits sensitivity to noise (i.e. in the 2D image measurements) when compared to nonlinear modelling approaches. One reason is that each point can correspond to any point along the epipolar line in the other image; thus, the noise properties in the image are not isotropic, with noise along the epipolar line remaining completely unpenalised. Solutions therefore tend to produce high residual errors along the epipolar lines and poor reconstruction (Azarbayejani, 1997; cf. Jebara et al., 1999).

To estimate a 3D point cloud, the cameras need to be calibrated to determine the intrinsic and extrinsic camera parameters. Once these parameters are known, the camera calibration matrix and projection matrix of each camera can be obtained. Finally, a linear triangulation method is used to determine the 3D position of the same point appearing in the left and right camera images.

4.1.1.2 Stereo Calibration

The epipolar constraint is one of the most fundamentally useful pieces of information which can be exploited during stereo matching. It can be shown by elementary geometry (Ballard and Brown, 1982) that 3D feature points are constrained to lie along epipolar lines in each image. Knowledge of the epipolar lines reduces the correspondence problem to a 1D search. This constraint is best utilised through the process of image rectification. The principle is quite simple: any pair of images can be transformed to a "parallel camera geometry" so that corresponding point features in 3D will lie on the same horizontal line in the two images. Unfortunately such a process requires some knowledge of the left-to-right camera transformation, which will generally require a calibration procedure (Faugeras and Toscani, 1986). The freedom for camera model specification, cost function definition and numerical implementation is immense. It is perhaps somewhat unfortunate that the subject must continue to be a drain on research efforts during the development of new systems.

The camera calibration literature comprises quite a large number of publications and methods, but a few guidelines can be provided as to what constitutes good practice (Lane and Thacker, 1994).

The camera model must be specified with a minimum number of parameters which describe the important degrees of freedom. There are at least three ways of representing the left-to-right camera coordinate rotation matrix: the quaternion, the screw (Rodrigues method), and the polar coordinate triple-axis rotation; there is probably very little to distinguish between the performances of these methods. If the cameras display radial distortion effects, these may well not be visible to the casual observer but will weaken the accuracy of the epipolar constraint and completely bias any resulting 3D measurement. Image centres and aspect ratios may also need to be free parameters and can generally not be expected to take default values (cf. Lane and Thacker, 1994).

Lane and Thacker (1994) also indicated that the cost function must be defined in the image plane, as the errors between back-projected and measured point positions. This is the only domain in which measurement errors can be expected to be uniform, so that systematic errors are not introduced during the calibration process. If the data for calibration is to be obtained from an automatic matching process, then the cost function must take the form of a robust statistic. Any automatic calibration procedure based on least squares will be susceptible to calibration matching failures and must be viewed with suspicion.

A practical calibration system must give some indication of resulting calibration accuracy, either in terms of resulting back projected error or as covariance estimates on the estimated parameters. Ideally any subsequent stereo algorithm should be capable of interpreting these measures and adjusting output depth accuracy estimates accordingly (cf. Lane and Thacker, 1994).

A recent focus of research has been the area of "self-calibration" (Faugeras et al., 1992), the rationale being that it is possible to calibrate cameras from the data available during use rather than having to rely on special-purpose calibration data. This is an admirable endeavour, but all of these techniques must be evaluated on the basis of the above criteria if they are to form the basis of a reliable automatic system. Unfortunately, numerical robustness is often sacrificed for mathematical sophistication.

Optimal techniques for robust combination of fixed camera calibration have been used to fuse data from robot motion, matched stereo corner correspondences and a calibration tile. The accuracy of estimated epipolar geometry was found to improve with the inclusion of new data as expected. This system supports online recalibration of a variable verge stereo camera system using data from the fixed verge calibration method over a relatively large range of rotations (cf. Lane and Thacker, 1994).

Fig. 4.3 Overview of the Program Workflow.


Fig. 4.4 Two Images Captured from the Left and Right Cameras at the Same Time.

The images in Figure 4.4 are then used to determine the projection matrix of each camera (i.e. to determine the rotation and translation of the second camera relative to the first camera). This process is also used to find the principal point, principal axis, and focal length of each camera. The programming code can be seen in Appendix C.
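A minimal C++/OpenCV sketch of this determination step is given below, assuming calibration-pattern corner detections have already been collected from both webcams; the project's actual code is in Appendix C, so this only illustrates the general approach.

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <vector>

    void calibrateStereoPair(
        const std::vector<std::vector<cv::Point3f>>& objectPoints,  // known 3D pattern points
        const std::vector<std::vector<cv::Point2f>>& imagePoints1,  // detections, left camera
        const std::vector<std::vector<cv::Point2f>>& imagePoints2,  // detections, right camera
        cv::Size imageSize)
    {
        cv::Mat K1, D1, K2, D2;            // intrinsics and distortion per camera
        std::vector<cv::Mat> rvecs, tvecs;
        // Calibrate each camera on its own: this recovers the focal length
        // and principal point (intrinsics) plus lens distortion.
        cv::calibrateCamera(objectPoints, imagePoints1, imageSize, K1, D1, rvecs, tvecs);
        cv::calibrateCamera(objectPoints, imagePoints2, imageSize, K2, D2, rvecs, tvecs);
        // Recover the rotation R and translation T of the second camera
        // relative to the first, plus the essential and fundamental matrices.
        cv::Mat R, T, E, F;
        double rms = cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                                         K1, D1, K2, D2, imageSize, R, T, E, F);
        // rms is the back-projection error -- the accuracy indicator that a
        // practical calibration system should report (Lane and Thacker, 1994).
        (void)rms;
    }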

Fig. 4.5 Current Status: 3D Reconstruction and Image Capture from Two Webcams Achieved.

4.1.2 Color Detection

Fig. 4.6 Color Object Tracking.

Using Microsoft Visual Studio and two cameras, a training session can be set up with colour tracking of the various colour-coded markers attached to the white body suit, as illustrated step by step in Figures 4.7 to 4.12 below. The programming code can be seen in Appendix B.
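As an illustration of this colour-coding idea, the following C++/OpenCV sketch locates one colour-coded marker in a frame by thresholding in HSV space; the HSV bounds are illustrative values, not those used in the project (whose code is in Appendix B).

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // Returns the image position of one marker, or (-1, -1) if not visible.
    cv::Point2f findMarker(const cv::Mat& frameBgr)
    {
        cv::Mat hsv, mask;
        cv::cvtColor(frameBgr, hsv, cv::COLOR_BGR2HSV);
        // Keep only the pixels inside the marker's HSV range (placeholder
        // bounds for a blue marker; real bounds would be measured).
        cv::inRange(hsv, cv::Scalar(100, 120, 70), cv::Scalar(130, 255, 255), mask);
        // The centroid of the binary mask approximates the marker position.
        cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
        if (m.m00 < 1.0) return cv::Point2f(-1.f, -1.f);
        return cv::Point2f(static_cast<float>(m.m10 / m.m00),
                           static_cast<float>(m.m01 / m.m00));
    }

The 2D marker positions found in the left and right images can then be passed to the triangulation step described in Section 4.1.1 to recover 3D positions.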


Fig. 4.7 MS Visual Studio Software on Colour Tracking.

Fig. 4.8 Simulation of Real-time Image Processing of Human to Point Cloud.


Fig. 4.9 Moving Horizontal Axis to the Left.

Fig. 4.10 Horizontal Axis is on the Left.


Fig. 4.11 Simulation Movement of the Human’s Right Foot.

Fig. 4.12 Simulation Movement of Human’s Right Hand.

OpenCV (Open Source Computer Vision) is a library of programming functions for real-time computer vision. Its CamShift tracker is based on mean shift. The mean-shift algorithm is a robust method of finding local extrema in the density distribution of a data set. This is easy for continuously distributed data, but more difficult for discrete data sets.

Fig. 4.13 Mean-Shift Algorithm

The mean-shift algorithm proceeds as follows:

1. Choose a search window with: a starting location; a type (uniform, exponential, Gaussian); a shape (symmetric, skewed); and a size.

2. Compute the Center of Mass (CoM).

3. Center the window on the CoM.

4. Return to step 2 until the window stops moving.

The algorithm terminates when a local maximum is found. The size of the window matters, as it changes which local maximum is found. Applied to successive images showing the same feature, the mean-shift algorithm will find the new peak/mode of the feature as it moves across the screen.
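The following C++/OpenCV sketch shows how CamShift can be applied frame by frame; it assumes a hue histogram hist of the tracked marker has been computed once from an initial region (e.g. with cv::calcHist), and is an illustrative sketch rather than the project's actual tracking code.

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/video/tracking.hpp>

    cv::RotatedRect trackMarker(const cv::Mat& frameBgr, const cv::Mat& hist,
                                cv::Rect& window)   // window is updated in place
    {
        cv::Mat hsv, hue, backProj;
        cv::cvtColor(frameBgr, hsv, cv::COLOR_BGR2HSV);
        // Extract the hue channel, which the histogram describes.
        int fromTo[] = {0, 0};
        hue.create(hsv.size(), CV_8U);
        cv::mixChannels(&hsv, 1, &hue, 1, fromTo, 1);
        // Back-projection: each pixel becomes the likelihood of the tracked
        // colour -- the density surface that mean shift climbs.
        int channels[] = {0};
        float hueRange[] = {0.f, 180.f};
        const float* ranges[] = {hueRange};
        cv::calcBackProject(&hue, 1, channels, hist, backProj, ranges);
        // CamShift = mean shift plus an adaptively sized and oriented window.
        return cv::CamShift(backProj, window,
            cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1.0));
    }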

4.1.3 Correlation between Robot Joints and Human Joints

Fig. 4.14 Human Joints

There are a total of 11 human joints used in this research: right wrist (5), right elbow (3), right shoulder (1), left shoulder (0), left elbow (2), left wrist (4), waist (6), right knee (8), right ankle (10), left knee (7) and left ankle (9), as shown in Figure 4.15 below.


Fig. 4.15 Human Joints Structure.

After obtaining the 3D coordinates of each joint, these points are used to calculate the angle at each joint using three planes. The angle at elbow joint 2 can be found from the angle between forearm 2-4 and upper arm 0-2; the angle at joint 3 can be derived from segments 3-5 and 1-3 in the same way. The angles at the shoulder joints 0 and 1 are the angles between segments 1-0 and 0-2, and between 0-1 and 1-3, respectively. The angles at joints 7 and 8, the knee joints, are the angles between thigh and shin; they can be calculated from the angle between segments 6-7 and 7-9, and between 6-8 and 8-10, respectively. The angle at joint 6 indicates whether the robot bends down or does a back bend; it can be found using the y-z plane, as the angle between segment 1-6 and the y axis. These can be compared to the robot joints shown in Figure 4.16 below.
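A minimal C++ sketch of this angle computation is given below, using the elbow as the example; it assumes the triangulated 3D joint positions are available as cv::Point3d values and is illustrative rather than the project's actual code.

    #include <opencv2/core.hpp>
    #include <algorithm>
    #include <cmath>

    // Angle (in degrees) at `joint` between the segments joint->a and
    // joint->b, e.g. a = shoulder (0), joint = elbow (2), b = wrist (4).
    double jointAngleDegrees(const cv::Point3d& a,
                             const cv::Point3d& joint,
                             const cv::Point3d& b)
    {
        cv::Point3d u = a - joint;   // first segment, e.g. upper arm 0-2
        cv::Point3d v = b - joint;   // second segment, e.g. forearm 2-4
        double c = u.dot(v) / (std::sqrt(u.dot(u)) * std::sqrt(v.dot(v)));
        c = std::max(-1.0, std::min(1.0, c));   // guard against rounding error
        return std::acos(c) * 180.0 / CV_PI;
    }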


Fig. 4.16 Robot joints.

There are a total of 17 joints used in the robot. The left and right arms of the robot each have 3 joints, as in a human. However, there are additional joints on the neck and on the back; these compensate for the angle that is usually measured between joints 0-6 or 1-6 on a human when bending. Instead of one joint at the hip, the robot has two, and twice as many joints on each leg. This gives more flexibility and detail in the point cloud.

4.1.4 Software Development

For programming the two robots used in this research, the RS Media was programmed using its Java SDK extension capability. It is important that the following freeware packages are also installed on a developer's computer.

NetBeans IDE 5.5.1

The NetBeans IDE is a free, open-source Integrated Development Environment for RS Media software developers. It provides developers with all the tools needed to create professional applications to run on the RS Media.

NetBeans Mobility Pack 5.5

The NetBeans Mobility Pack for CLDC/MIDP adds everything needed to the IDE to create, test and debug applications for the Mobile Information Device Profile (MIDP) 2.0 and the Connected, Limited Device Configuration (CLDC) 1.1.

Java SE Development Kit (JDK) 6

JDK 6 is fully supported by NetBeans IDE 5.5.1. It includes the Java Runtime Environment (JRE) and command-line development tools that are useful for developing applets and applications.

J2ME Wireless Toolkit 2.2

The J2ME Wireless Toolkit is a set of tools for creating Java applications that run on devices compliant with the Java Technology for the Wireless Industry (JTWI, JSR 185) specification, such as the RS Media. It consists of build tools, utilities, and a device emulator.

As for programming the Speecys 101C, Visual Basic and Visual C++ have been used to develop the customized software for assessment and feedback.

4.2 Implementation

4.2.1 Development of Thai Dance by iThaiSTAR

The iThaiSTAR's Thai dance movements were programmed using the Java programming language (see Appendix D). The implementation of the Thai dance involved a dance pattern process. In consultation with a Thai dance professional, the dance pattern was studied and developed in the following sequence:

(1) A variety of typical Ram Wong dance motions was gathered.

(2) Observations were then made and the dance sequence discussed with the professional, and a suitable dance pattern was selected that included almost all of the representative standard Ram Wong dance motions.

(3) Next, the selected dance motions were given a preliminary test by controlling the movements from the remote control or software, to see whether there would be any unforeseen problems.

(4) The robots were then programmed and debugged.

After a thorough study, two Ram Wong songs that iThaiSTAR could perform were selected, each drawing on the ten specific nationally preserved standard Thai movement patterns. The first song is "Ngam Sang Duen", which uses the standard movement called "Tar Sod Soi Ma La" (imitating the actions of local people making a flower garland, with one hand holding a cotton string and the other pulling a flower on the string outwards from the body towards the side).

The dance movement in the "Ngam Sang Duen" Ram Wong song can be broken down into 4 major steps, as illustrated in Figure 4.17.

Step 1. The right arm moves up to eyebrow level and outward from the body with the palm facing to the side (NS1-RightArmUp), in synchronization with the left arm positioned at belt level (NS1-LeftArmDown), and with the left leg initiating the move forward followed by the right leg (NS1-LeftLegRightLeg). The body (NS1-BodyTiltLeft) and head (NS1-HeadTiltLeft) tilt a little to the left.

Step 2. The left arm moves outward from the body with the palm facing up (NS2-LeftArmPalmUp) and the right palm facing down (NS2-RightArmPalmDown).

Step 3. The left arm moves up to eyebrow level and outward from the body with the palm facing to the side (NS3-LeftArmUp), in synchronization with the right arm moving down to belt level (NS3-RightArmDown), and with the right leg initiating the move forward followed by the left leg (NS3-RightLegLeftLeg). The body (NS3-BodyTiltRight) and head (NS3-HeadTiltRight) tilt a little to the right.

Step 4. The right arm moves outward from the body with the palm facing up (NS4-RightArmPalmUp) and the left palm facing down (NS4-LeftArmPalmDown).

Fig. 4.17 Main Movements of “Tar Sod Soi Ma La” (Banramthai.com[1], 2007).

The song "Ram Si Ma Ram" synchronizes with the standard movement called "Tar Ram Sai" (imitating the actions of people trying to persuade one another by stretching both arms almost parallel to the ground and twisting both hands up and down opposite one another).

As for the dance movement in the "Ram Si Ma Ram" Ram Wong song, it can be broken down into 2 major steps, as illustrated in Figure 4.18.

Step 1. Stretch both the left and right arms outward from the body, positioning the left arm at shoulder level with the palm facing outward and fingers pointing down (RS1-LeftArmFingerDown), and the right arm at waist level with the palm facing outward and fingers pointing upward (RS1-RightArmFingerUp). The left leg initiates the move forward followed by the right leg (RS1-LeftLegRightLeg).

Step 2. Position the right arm at shoulder level with the palm facing outward and fingers pointing down (RS2-RightArmFingerDown), and the left arm at waist level with the palm facing outward and fingers pointing upward (RS2-LeftArmFingerUp). The right leg initiates the move forward followed by the left leg (RS2-RightLegLeftLeg).

Fig. 4.18 Main Movements of “Tar Ram Sai” (Banramthai.com[2], 2007).

4.2.2 Development of Cheerleading Dance by iCHEER

The iCHEER’s control centre software has been customized by using Visual Basic and Java (see Appendix A). While a series of its cheerleading routines was done by using the Speecys’ bundle software called “Motion Editor”.


Fig. 4.19 iCHEER Control Centre.

One of the first applications developed and tested for the iCHEER has it perform a 3-minute cheerleading dance routine together with a group from Rangsit University's Cheerleading Team as a prop at the beginning of the performance. The dance sequence and music are pre-stored in the robot, which can correctly synchronize the movements and music while producing the music from its built-in speakers. After this application was completed, the next phase was promptly carried out: an application whereby a voice command is sent to the robot over the Internet using a computer or a web-enabled mobile device, such as the Apple iPhone used in this project. The voice-activated feature was developed using Microsoft Speech SDK 5.1. A user issues a voice command such as "dance" by talking into the iPhone; the voice command is then sent to the robot within a few seconds, where it activates a set of instructions and movements.
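Once a phrase has been recognised, mapping it onto a stored routine is straightforward. The C++ sketch below illustrates one possible dispatch scheme; the command names, motion file names and sendToRobot() transport are hypothetical placeholders, not the project's actual command set.

    #include <map>
    #include <string>

    void sendToRobot(const std::string& motionFile);   // assumed wireless transport

    void dispatchCommand(const std::string& recognised)
    {
        // Map recognised phrases to pre-stored motion sequences
        // (placeholder names for illustration).
        static const std::map<std::string, std::string> routines = {
            {"dance",  "cheerleading_routine.mtn"},
            {"yoga",   "yoga_sequence.mtn"},
            {"stop",   "stop_all.mtn"},
            {"repeat", "repeat_last.mtn"},
        };
        auto it = routines.find(recognised);
        if (it != routines.end())
            sendToRobot(it->second);   // trigger the stored motion on the robot
    }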

4.2.3 Development of Yoga Exercises by iCHEER

With the extraordinary body balance of the Speecys SPC-101C (iCHEER) and the flexibility of its 22 DOFs, the poses of yoga can be programmed on it. The iCHEER's yoga exercises were programmed using the Motion Editor software and the Visual Basic programming language. The iCHEER was developed to perform 10 yoga exercises, in the steps shown in Figures 4.20 to 4.29 below.

4.2.3.1 Sitting Pose

Fig. 4.20 iCHEER Demonstrated “Sitting Pose”.

Step 1: Sit with a straight back, both feet stretched out forwards.
Step 2: Breathe out, then extend the arms forwards, parallel and facing down to the floor, or with both hands touching the tiptoes.
Step 3: While breathing in, lift both hands above the head, leaning forward as far as possible, knees fully straightened at all times.
Step 4: Stay for 1-3 minutes.
Step 5: Repeat for 2-3 repetitions.

4.2.3.2 Plane Pose

Fig. 4.21 iCHEER Demonstrated “Plane Pose”.

Step 1: Breathe in.
Step 2: Stretch the right leg backwards.
Step 3: Put the right arm down.
Step 4: Move both arms up into the plane-like pose, leaning forward a little, parallel to the floor.
Step 5: Breathe out.

4.2.3.3 One Leg Pose

Fig. 4.22 iCHEER Demonstrated “One Leg Pose”.

Step 1: Begin at the mountain pose.
Step 2: Look forwards.
Step 3: Breathe in; extend both arms towards the front, hands pushing against each other.
Step 4: Breathe out; lift the right leg and stretch it backwards, the head leaning forward to maintain balance, with head and right leg parallel to the floor.
Step 5: Stay for 30 seconds for each leg.

4.2.3.4 Mountain Pose

Fig. 4.23 iCHEER Demonstrated “Mountain Pose”.

Step 1: Stand up straight, both feet together, heels and tiptoes parallel.
Step 2: Fully extend the knees, tensing the upper leg muscles with appropriate pressure.
Step 3: Tense the abdomen, stretch out the chest and back, neck straight, eyes looking forwards.
Step 4: Maintain the body's balance from head to shoulders, hips, knees, then both feet.
Step 5: Optionally, the arms stay either by the body, above the head, or at the chest (pushing towards one another).

4.2.3.5 Forward Bend Pose

Fig. 4.24 iCHEER Demonstrated “Forward Bend Pose”.

Step 1: Start with the mountain pose.
Step 2: Bend forward until the upper body is halfway parallel to the ground. Place the hands on the ground right under the shoulders, ensuring the lower back is straight and still supports the position.
Step 3: Come back up and stand straight again.

4.2.3.6 Dog and Cat Pose

Fig. 4.25 iCHEER Demonstrated “Dog and Cat Pose”.

Step 1: Kneel on the floor with the knees at shoulder width.
Step 2: The hands stay at shoulder width, arms fully extended.
Step 3: Look upwards, push the body forwards and hold.
Step 4: Look downwards until the chin touches the chest, push the body backwards and hold.

4.2.3.7 Cobra Pose

Fig. 4.26 iCHEER Demonstrated “Cobra Pose”.

Step 1: Lie down facing the floor, legs fully stretched out, hands on the floor at shoulder position.
Step 2: Apply smooth pressure on the feet, upper leg muscles, and hips.
Step 3: Breathe in facing the front, chin on the floor, and extend the elbows, lifting the head, chest, and waist upwards. The knees, chin, and feet are pushed downwards meanwhile, without feeling any back pain.
Step 4: Stay in this pose for 15-30 seconds, breathing smoothly and slowly. Repeat for 3 minutes.

4.2.3.8 Corpse Pose

Fig. 4.27 iCHEER Demonstrated “Corpse Pose”.

Step 1: Begin by sitting on the floor with the legs drawn up; both arms push against the floor at the back, move the hips backwards, and stretch each foot.
Step 2: With both hands supporting the neck, lie down on the floor. A small pillow is helpful to support the head; check that the head and ears are balanced.
Step 3: Lift the hands perpendicular to the floor; relax the arms, hands facing upwards, arms and legs stretched out, hands resting near the hips.
Step 4: Close the eyes and breathe smoothly in and out.
Step 5: Tense each muscle in the body, starting from the tiptoes, for 5-6 seconds, then relax. Follow with the leg muscles, knees, hips, abdomen, chest, arms, hands, fingers, shoulders, neck, head, and brain! Then relax.
Step 6: Relax all parts of the body, both physically and mentally, concentrating on the breathing at all times.
Step 7: Lie in this pose for 5 minutes in every 30 minutes.

4.2.3.9 Chaturanga Pose

Fig. 4.28 iCHEER Demonstrated “Chaturanga Pose”.

Step 1: Lie down facing the floor, both feet together, standing on the tips of the toes.
Step 2: Bend the elbows, hands beside the ribs.
Step 3: Breathe in and apply light pressure at the ribs, lifting the body about 3-5 inches, straight and parallel to the floor, knees fully extended.
Step 4: The hands and feet hold the body weight.
Step 5: Breathe out and relax.

4.2.3.10 Chair Pose

Fig. 4.29 iCHEER Demonstrated “Chair Pose”.

Step 1: Stand up in the "mountain pose", breathing slowly, lifting the hands up perpendicular to the floor and then pushing them towards each other.
Step 2: Breathe out; bend the knees as far as possible, parallel to the floor.
Step 3: Stretch out the chest, keeping the body upright. Do not lean forward.
Step 4: Stay for 30-60 seconds, then relax with the hands moving down in a mountain (or triangle) shape.

4.2.4 Development of Elderly Exercises by iCHEER

The iCHEER's elderly exercises were also programmed using the Motion Editor software and the Visual Basic programming language. The iCHEER was developed to perform 9 exercises for the elderly, in the steps shown in Figures 4.30 to 4.38 below.

4.2.4.1 Arm Raise

Fig. 4.30 iCHEER Demonstrated Arm Raise.

Step 1: Sit on the chair with a straight back, feet apart, hands holding the weights and moving them up and down.
Step 2: Take three seconds to lift the weights, then hold for 1 second.
Step 3: Take three seconds to lower them, then hold for 1 second.
Step 4: Repeat 8-15 times, rest, then repeat another 8-15 times.

4.2.4.2 Chair Stand

Fig. 4.31 iCHEER Demonstrated Chair Stand.

Step 1: Sit on the chair; take 3 seconds to lean back on the chair and cross the arms at the chest.
Step 2: Take three seconds to stand up.
Step 3: Repeat 8-15 times, rest, then repeat another 8-15 times.

4.2.4.3 Bicep Curl

Fig. 4.32 iCHEER Demonstrated Bicep Curl.

Step 1: Sit on the chair, hands holding the weights in front, arms fully extended, wrists turned inwards.
Step 2: Over 30 seconds, bend one arm up and lift the weight towards the chest, then hold for 1 second.
Step 3: Slowly move the hand back to the first position.
Step 4: Repeat 8-15 times, rest, then repeat another 8-15 times.

4.2.4.4 Shoulder Flexion

Fig. 4.33 iCHEER Demonstrated Shoulder Flexion.

Step 1: Sit on the chair, both hands holding the weights straight down.
Step 2: Take 3 seconds to lift the arms up, fully extended, then hold for 1 second.
Step 3: Take 3 seconds to lower both arms back to the first position.
Step 4: Repeat 8-15 times, rest, then repeat another 8-15 times.

4.2.4.5 Knee Flexion

Fig. 4.34 iCHEER Demonstrated Knee Flexion.

Step 1: Stand up straight, holding a table or chair for balance.
Step 2: Take 3 seconds to slowly bend the knees as far as possible, then hold for 1 second.
Step 3: Take 3 seconds to slowly move back to the first position.
Step 4: Repeat 8-15 times, rest, then repeat another 8-15 times.

4.2.4.6 Hip Flexion

Fig. 4.35 iCHEER Demonstrated Hip Flexion.

Step 1: Stand up straight, holding a table or chair for balance.
Step 2: Take 3 seconds to bend one knee up towards the chest, then hold for 1 second.
Step 3: Take 3 seconds to slowly lower the knee back down.
Step 4: Repeat 8-15 times, rest, then repeat another 8-15 times.

4.2.4.7 Hip Extension

Fig. 4.36 iCHEER Demonstrated Hip Extension.

Step 1: Stand behind the chair, holding it with both hands.
Step 2: Lean forward from the hips, keeping the back and shoulders straight, until the top part of the body is parallel to the floor.
Step 3: Stay in that position for 10-30 seconds.
Step 4: Repeat 3-5 times.

4.2.4.8 Alternative Hamstring Stretch

Fig. 4.37 iCHEER Demonstrated Alternative Hamstring Stretch.

Step 1: Stand behind the chair, holding it with both hands.
Step 2: Slowly lift one leg backwards, keeping it straight.
Step 3: Hold that position, then slowly move back down.

4.2.4.9 Wrist Stretch

Fig. 4.38 iCHEER Demonstrated Wrist Stretch.

Step 1: Push both hands against each other, with both elbows pointing downwards.
Step 2: Lift both elbows until the arms are as close to parallel with the floor as possible.
Step 3: With both hands still in this position, hold for 10-30 seconds.
Step 4: Repeat 3-5 times.

4.2.5 Camera Calibration Matrix

Projecting points in space onto an image plane requires knowledge of the focal length of the camera and the coordinates of the principal point, as shown in Figure 4.39 (cf. Jebara et al., 1999).

Fig. 4.39 Perspective (cf. Jebara et al., 1999).

The image plane is represented by ∏. The principal point on the image plane ∏ is (u2, v2), and the focal length f is the distance between the centre of projection and the image plane (cf. Jebara et al., 1999).

A camera matrix K can be represented in matrix form as follows (written row by row):

K = [f 0 px; 0 f py; 0 0 1]   (Equation 4.2)

where

• f is the focal length;

• (px, py) are the coordinates of the principal point.

4.2.6 Projection Matrix

Let X be a world point represented by the homogeneous 4-vector (X, Y, Z, 1)^T and x an image point represented by the homogeneous 3-vector (x, y, 1)^T. The projection matrix P is a 3x4 homogeneous camera projection matrix used to map a world point to an image point according to the following equation (cf. Jebara et al., 1999):

x = PX   (Equation 4.3)

where P is represented as

P = K[I|0]   (Equation 4.4)

or, written out row by row,

P = [f 0 px 0; 0 f py 0; 0 0 1 0]   (Equation 4.5)
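As a small worked example of Equations 4.3 to 4.5, the C++/OpenCV sketch below projects a world point through P = K[I|0], using illustrative values for the focal length and principal point.

    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        // P = K[I|0]: the left 3x3 block is the camera matrix K of
        // Equation 4.2, here with illustrative values f = 800 pixels and
        // principal point (px, py) = (320, 240).
        cv::Matx34d P(800,   0, 320, 0,
                        0, 800, 240, 0,
                        0,   0,   1, 0);
        // A world point X = (X, Y, Z, 1)^T, mapped by x = PX (Equation 4.3).
        cv::Vec4d X(0.1, -0.2, 2.0, 1.0);
        cv::Vec3d x = P * X;
        // De-homogenise to obtain pixel coordinates.
        std::cout << "pixel: (" << x[0] / x[2] << ", " << x[1] / x[2] << ")\n";
        return 0;
    }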

4.2.7 Camera Rotation and Translation

Assuming that the first camera is located at the origin of the world coordinate system, the projection matrix of the first camera, P1, can be represented by (cf. Jebara et al., 1999):

P1 = K[I|0]   (Equation 4.6)

where

• K is the camera matrix;

• I is the 3x3 identity matrix.

The position of the second camera is expressed as a rotation and translation relative to the first camera. The projection matrix of the second camera, P2, is defined by the following equation:

P2 = K[R|t]   (Equation 4.7)

where

• K is the camera matrix;

• R is the 3x3 rotation matrix representing the rotation from the first camera position;

• t is the 3x1 translation vector representing the translation from the first camera position.

4.2.8 Linear Triangulation Method

The linear triangulation method is the direct analogue of the DLT method (Jinjun et al., 2008). Let x = PX and x' = P'X denote the equations mapping world points to image points for the first and second cameras respectively. These two equations can be combined into the form AX = 0, where A is composed (row by row) as

A = [x p3^T − p1^T; y p3^T − p2^T; x' p'3^T − p'1^T; y' p'3^T − p'2^T]   (Equation 4.8)

where

• pi^T are the rows of P (and p'i^T the rows of P');

• x and y are the x- and y-positions of the image point in the image from the first camera;

• x' and y' are the x- and y-positions of the corresponding image point in the image from the second camera.

To solve AX = 0, the homogeneous (DLT) method can be used.
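A minimal C++/OpenCV sketch of this step is given below: it builds the A matrix of Equation 4.8 row by row and extracts the null vector via SVD, which is the standard homogeneous DLT solution. It is an illustration under these definitions, not the project's actual code.

    #include <opencv2/core.hpp>

    // Triangulate one 3D point from its image positions (x, y) in the first
    // camera and (xp, yp) in the second, given projection matrices P1, P2.
    cv::Vec3d triangulate(const cv::Matx34d& P1, const cv::Matx34d& P2,
                          double x, double y, double xp, double yp)
    {
        // Stack the four constraint rows of Equation 4.8 into A.
        cv::Mat A(4, 4, CV_64F);
        for (int j = 0; j < 4; ++j) {
            A.at<double>(0, j) = x  * P1(2, j) - P1(0, j);  // x  p3^T  - p1^T
            A.at<double>(1, j) = y  * P1(2, j) - P1(1, j);  // y  p3^T  - p2^T
            A.at<double>(2, j) = xp * P2(2, j) - P2(0, j);  // x' p'3^T - p'1^T
            A.at<double>(3, j) = yp * P2(2, j) - P2(1, j);  // y' p'3^T - p'2^T
        }
        // The homogeneous (DLT) solution of AX = 0 is the right singular
        // vector of A with the smallest singular value: the last row of vt.
        cv::Mat w, u, vt;
        cv::SVD::compute(A, w, u, vt);
        double s = vt.at<double>(3, 3);   // homogeneous scale
        return cv::Vec3d(vt.at<double>(3, 0) / s,
                         vt.at<double>(3, 1) / s,
                         vt.at<double>(3, 2) / s);
    }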

4.3 Outcome and Evaluation

4.3.1 Initial Indication of Response

Initially, assessment of the performance of iThaiSTAR was based on comments from the professionals. Informal feedback from the public was noted by the organiser during the public exhibition at the Bangkok International ICT Expo 2007. The feedback collected cannot be considered a comprehensive or objective assessment of the use of iThaiSTAR; it does, however, give an indication of the feasibility of using low cost robots as hypothesized in the proposal. About 100 feedback forms were collected, and while not all the fields in the returned forms were filled, the results listed in the following tables do provide an indication of the public's interest in the project.

Age group       < 21     21-25    26-30    31-35    > 36     Not specified
Percentage      33.3%    27.8%    13.0%    9.3%     13.0%    3.7%

Table 4.1: Age distribution of respondents who returned the forms.

Gender          Male     Female
Percentage      38.2%    61.8%

Table 4.2: Gender Distribution.

Answer          Yes      No
Percentage      75.0%    25.0%

Table 4.3: Answers to "Is this the first time you see a dance robot?"

Preference      Dog      Robot    Not specified
Percentage      18.2%    80.0%    1.8%

Table 4.4: Preference for a robot versus a dog.

                                             Not Impressed   No      So So    Good     Great
Usefulness of the robot in our daily life        0.0%        2.2%    15.6%    53.3%    28.9%
Cuteness and Attractiveness of iThaiSTAR          1.8%        0.0%     0.0%    36.4%    61.8%
The dance performed by iThaiSTAR                  0.0%        0.0%    18.2%    41.8%    40.0%
Will you dance with the robot?                    1.8%        9.1%    36.4%    32.7%    20.0%

Table 4.5: Impression of the Dance Robot.

Answer          Yes      No      Not Specified
Percentage      92.7%    5.5%    1.8%

Table 4.6: Answers to the question, "Would you like to have a robot in your house?"

The answers from the public clearly indicated a positive impression and a general acceptance of the robot in their houses or lives. It is also interesting to see that the ratio of female to male respondents returning the forms was nearly 2:1, and it is surprising that close to 80% of the returns preferred the robot to a dog! The cuteness and attractiveness of iThaiSTAR also received overwhelmingly favourable responses, and respondents perceived that the robot could be useful for other services in daily life.

Since the iCHEER's public appearance in 2008, it has caught the media's and the public's attention and has attracted subsequent interviews on TV and radio channels, as well as coverage in magazines, newspapers and talk shows in Thailand (see Appendix E).

The initial assessment of iCHEER was based on observations by a professional choreographer of cheerleading dance and by the audience. Informal feedback was gathered from the applause and facial expressions of the audience, as well as from audience members who approached the researcher for further information. While this initial indication of response may not be considered a comprehensive or objective assessment of the use of iCHEER, the general feedback provided an indication of the public's interest in the project.

4.3.2 Evaluation

The performance of the participant (researcher) who attended a training session improved through the training by iCHEER, as shown by comparing the pre-test and post-test performance. This was observed and evaluated by a panel of professionals in each area of training. The interaction also demonstrated synchronization of rhythmic behaviour between the robot and the human, and the responses from the robot in turn prompted positive interaction and engagement between human and robot. It was thus deduced that such activities could be considered an appropriate means of facilitating robot-human interaction. As in a yoga training class, a master robot with a pre-programmed sequence of yoga motions could lead the yoga gestures step by step through its own autonomous movements and interactive capabilities, while the students, as spectators, enjoy watching the robot perform and at the same time learn the yoga sequence, serving the purposes of entertainment and learning interaction.

The next chapter provides a more comprehensive and objective assessment of the results, with discussion of the achievements of this study.

CHAPTER 5

RESULTS AND DISCUSSIONS

5.1 Results and Discussions

In its earlier stage, this research reported an investigation comparing two off-the-shelf humanoid robots for the implementation and demonstration of Thai folk dances. The Speecys SPC-101C is a highly flexible robot with 22 degrees of freedom and is very compact in size and weight. The RS Media is heavier and taller, with fewer degrees of freedom. In terms of hand movements, however, it was found that the Speecys has limitations due to the design of its elbow; nevertheless, it is more flexible overall and has a faster response time. While the slow movements of Thai dance seem more appropriate for the RS Media, it was obvious that the Speecys is better adapted to replicating the hip and torso movements, and when the movement is faster, the Speecys shows greater performance than the RS Media. However, one of the main challenges observed from this research so far is the urgent need to improve the robots' power management and lengthen their battery life. In terms of software development, it is also essential that new and appropriate content can be added to the robot in order to maintain ongoing interest and meaningful purposes.

5.2 Problems and Limitations

Although both the RS Media and the Speecys SPC-101C have a powerful brain and the ability to be customized, they still possess certain limitations, as mentioned in previous chapters.

Furthermore, robots with higher degrees of freedom do not guarantee a superior ability to perform all tasks better than robots with fewer: it was noted that the SPC-101C has limitations in the arm motions for Thai dance, and is not as realistic as the RS Media (Table 5.1). This is due to the lack of a servo at the elbow that would otherwise allow movement in other directions. However, the lower body is definitely more flexible on the SPC-101C.

Movement                        RS Media    SPC-101C
Ngam Sang Duen Dance Movements
NS1-RightArmUp                  Yes         Limited
NS1-LeftArmDown                 Yes         Limited
NS1-LeftLegRightLeg             Yes         Yes
NS1-BodyTiltLeft                Yes         Yes
NS1-HeadTiltLeft                Yes         Yes
NS2-LeftArmPalmUp               Yes         Limited
NS2-RightArmPalmDown            Yes         Limited
NS3-LeftArmUp                   Yes         Limited
NS3-RightArmDown                Yes         Limited
NS3-RightLegLeftLeg             Yes         Yes
NS3-BodyTiltRight               Yes         Yes
NS3-HeadTiltRight               Yes         Yes
NS4-RightArmPalmUp              Yes         Limited
NS4-LeftArmPalmDown             Yes         Limited
Ram Si Ma Ram Dance Movements
RS1-LeftArmFingerDown           Yes         Yes
RS1-RightArmFingerUp            Yes         Yes
RS1-LeftLegRightLeg             Yes         Yes
RS2-RightArmFingerDown          Yes         Yes
RS2-LeftArmFingerUp             Yes         Yes
RS2-RightLegLeftLeg             Yes         Yes

Table 5.1 Main Movement Comparison: RS Media vs SPC-101C.

5.3 Current Achievement

The research so far has achieved its objective of demonstrating that low-cost off-the-shelf humanoid robots can be used in various application domains of education, entertainment and training, even with certain physical limitations of the robots. Low-cost off-the-shelf humanoid robots also make training in physical movements more affordable, with the use of two web cameras, a computer, a wireless network and customized software for assessment and feedback. The two robots proved capable of assisting and enhancing the performance, enjoyment, and motivation of the trainees in the interactive activities with the robot. Nevertheless, robots do not at present have the capability to replace humans in physical training, but at the present level of development they can be used as supplementary trainers.

5.4 Plans for Future Work

Having explored the advancement of robotic technology, the experiment carried out in this research, and other relevant research work that has been done and reported, this research has demonstrated the feasibility of developing future projects.

The ongoing objective of further research is to develop a framework for the development of content and applications for off-the-shelf robots to be used in different applications, including entertainment and training. Robot technologies have advanced rapidly in terms of movement, hardware/software architecture, communication and network support; robots are moving closer to becoming entertainers and companions. However, one of the main challenges observed from this research so far is the obstacle of software development. It is essential that new and appropriate content can be added to the robot in order to maintain ongoing interest and meaningful purposes. Additional study in HCI, HRI and software engineering will further enhance the capability of the robots and facilitate the interface for software and content development.

CHAPTER 6

CONCLUSIONS

At this stage, the research has found that existing low cost humanoid robots are not fully capable of performing every task assigned, but they can be good at performing specific tasks. Moreover, the present research revealed that a robot which possesses more joints (servo motors) does not appear to be more effective than a robot with fewer joints. It is interesting to note that in certain situations, the robot with the lower number of joints was able to do a much better job. Therefore, it can be said that some tasks only require robots with a small number of degrees of freedom and, thus, it could be a waste of resources to employ a robot with additional servo motors that consume more power than necessary.

As the robotics industry continues to evolve, it is expected that companion robots will become more popular and user-friendly. In times to come, it will be easy enough for users to generate their own content, and the robots will be flexible enough to perform the functions required by their human partners. Learning from the history of PCs, the Internet, and the music and media industries, it can be expected that new business models associated with companion robots will emerge. However, the content generation aspect currently remains a challenge. Most of the existing companion robots still fall short of the requirements for ease of programming and flexibility of functions. Even the existing high-cost companion robots still require specialized skills to develop content and software applications. This research has used iThaiSTAR and iCHEER as examples to illustrate how a companion robot could download content and software applications from a server through the Internet, similar to the way the public can produce videos for YouTube. The content will have commercial and intellectual value, and could adopt a subscription concept similar to the one used by Apple iTunes: users would be able to download content and applications from the Internet for a fee.

Other possibilities are specialized consultancy services for the customization or adaptation of robot hardware and software. These are just some possibilities for future e-business models in the robotics industry, following similar models that exist at present. While the current research concentrates on overcoming the shortcomings of current companion robots, iThaiSTAR and iCHEER are only used as examples for illustration. It is believed that the next generation of commercial companion robots will be more cost-effective, flexible and user-friendly in their programming and communication. The long-term value will lie in generating useful applications that meet day-to-day requirements and make the robot a "true" companion. It can be anticipated that such robots might appear in many homes before the end of the next decade. This may fulfill the saying that, "We are not alone!"

REFERENCES

W. Aekplakorn and L. Mo-Suwan, “Prevalence of Obesity in Thailand,” Obesity Reviews, vol. 10, no. 6, pp. 589-592, 2009.

L. W. Anderson and D. R. Krathwohl, A Taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of Educational Objectives: Complete Edition. New York: Longman, 2001.

Answer.com, “Image Processing,” Image Processing: Definition from Answer.com. [Online]. Available: Answer.com, http://www.answer.com/topic/image-processing. [Accessed July. 18, 2010].

R. C. Arkin, Behavior-Based Robotics. Cambridge, MA, USA: MIT Press, 1998.

I. Asimov, I, Robot. Greenwich, CT, USA: Fawcett Publishing, 1950.

T. Atwood, “WowWee RS Media: Customize your own interactive personality,” Robot, issue 8, pp. 28-31, Fall 2007.

T. Atwood and J. Klein, “VECNA’s Revolutionary Battlefield Extraction-Assist Robot: A life-size humanoid robot that will search for and rescue people,” Robot, issue 7, pp. 24-28, Summer 2007.

A. J. Azarbayejani, “Nonlinear Probabilistic Estimation of 3-D Geometry from Images,” Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1997.

D. H. Ballard and C. Brown, Computer Vision. Prentice Hall, 1982.

Banramthai.com[1], “Ngam Sang Duen,” Banramthai. [Online]. Available: http://www.banramthai.com/html/rw_kham.html. [Accessed: Nov. 3, 2007].

Banramthai.com[2], “Ram Si Ma Ram,” Banramthai. [Online]. Available: http://www.banramthai.com/html/rw_ramsi.html. [Accessed: Nov. 3, 2007].

C. Bartneck and J. Forlizzi, “A Design-Centred Framework for Social Human-Robot Interaction,” in Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2004), Kurashiki, Okayama, Japan, 2004, pp. 591-594.

BBC News, “Robots set to get homely by 2007,” BBC News, October 2004. [Online]. Available: http://news.bbc.co.uk/go/pr/fr/-/2/hi/technology/3764142.stm. [Accessed: Sept. 18, 2008].

BBC News, “World's first ‘bionic arm’ for Scot,” BBC News, August 25, 1998. [Online]. Available: http://news.bbc.co.uk/2/hi/health/154545.stm [Accessed Sept. 18, 2009].

S. Behnke, J. Müller, and M. Schreiber, “Playing Soccer with Robosapien,” in I. Noda, A. Jacoff, A. Bredenfeld, and Y. Takahashi, editors, RoboCup-2005: Robot Soccer World Cup IX, Lecture Notes in Artificial Intelligence, LNAI 4020, Springer, 2006, pp. 36-48.

A. C. Bell, K. Ge, and B. M. Popkin, “Weight Gain and Its Predictors in Chinese Adults,” International Journal of Obesity and Related Metabolic Disorders, vol. 25, no. 7, pp. 1079-1086, 2001.

A. Billard, “Robota: Clever Toy and Educational Tool,” Robotics & Autonomous Systems, vol. 42, pp. 259-269, 2003.

T. T. Blackmon, S. Thayer, J. Teza, V. Broz, J. Osborn, M. Hebert, et al., “Virtual Reality Mapping System for Chernobyl Accident Site Assessment,” in Proceedings of the SPIE, vol. 3644, Feb. 1999, pp. 338-345.

B. S. Bloom, Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. New York: David McKay Co., Inc., 1956.

Boots & Sabers, “Unimate,” August 15, 2008. [Online]. Available: http://www.bootsandsabers.com/index.php/weblog/permalink/unimate/. [Accessed: Sept. 16, 2008].

K. Brady, T. Tarn, N. Xi, L. Love, P. Lloyd, H. Davis, et al., “Remote Systems for Waste Retrieval from the Oak Ridge National Laboratory Gunite Tanks,” International Journal of Robotics Research, vol. 17, pp. 450–460, 1998.

C. Breazeal, Designing Sociable Robots. Cambridge: MIT Press, 2003a.

C. Breazeal, “Towards Sociable Robots,” Special Issue on Socially Interactive Robots, Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 167, 2003b.

C. Breazeal, “Designing Socially Intelligent Robots,” Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2004 NAE Symposium on Frontiers of Engineering, National Academy of Engineering, National Academies Press, pp. 123-130, 2004.

C. Breazeal and B. Scassellati, “A Context-Dependent Attention System for a Social Robot,” in Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI 99), Stockholm, Sweden, 1999, pp. 1146-1151.

C. Breazeal, A. Brooks, D. Chilongo, J. Gray, G. Hoffman, C. Kidd, H. Lee, J. Lieberman, and A. Lockerd, “Working Collaboratively with Humanoid Robots,” in IEEE-RAS/RSJ International Conference on Humanoid Robots (Humanoids 2004), Los Angeles, California, USA, November 2004.

A. J. N. van Breemen, K. Crucq, B. J. A. Kröse, M. Nuttin, J. M. Porta, and E. Demeester, “A User-Interface Robot for Ambient Intelligent Environments,” in Proceeding of the 1st International Workshop on Advances in Service Robotics (ASER03), Bardolino, Italy, 2003.

Encyclopaedia Britannica, “Image Processing,” Encyclopaedia Britannica Online. [Online]. Available: http://www.britannica.com/EBchecked/topic/283261/image-processing. [Accessed: Nov. 5, 2010].

A. Bruce, I. Nourbakhsh, and R. Simmons, “The Role of Expressiveness and Attention in Human-Robot Interaction,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'02), Washington, D.C., USA, 2002, pp. 4138-4142.

K. Capek, R.U.R. Oxford, UK: Oxford University Press, 1923.

J. Casper, “Human–Robot Interactions during the Robot-Assisted Urban Search and Rescue Response at the World Trade Center,” Unpublished master’s thesis, University of South Florida, Tampa, USA, 2002.

Community Headlines, “WowWee Purchased by Optimal Group,” RoboCommunity, Sep. 27, 2007. [Online]. Available: http://www.robocommunity.com/blog/5/11861/WowWee- Purchased-by-Optimal-Group/. [Accessed: Oct. 1, 2009].

W. S. Condon and L. W. Sander, “Synchrony demonstrated between movements of the neonate and adult speech.” Child Development, vol. 45, no. 2, pp. 456–462, June 1974.

C. Crick, M. Munz, and B. Scassellati, “Synchronization in Social Tasks: Robotic Drumming,” in Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006), United Kingdom: University of Hertfordshire, 2006, pp. 97-102.

Cultural Office of Samutprakan Province, “Lakhon Chatri,” Cultural Office of Samutprakan Province. [Online]. Available: http://www.m-culture.go.th/samutprakan/index.php?module=ContentExpress&func=display&ceid=22. [Accessed: Sept. 23, 2007].

A. Currie, “The History of Robotics,” Adam’s Robot Page. [Online]. Available: http://www.faculty.ucr.edu/~currie/roboadam.htm. [Accessed: Sept. 18, 2007].

B. Cyganek and J. P. Siebert, Basics of the Projective Geometry, in An Introduction to 3D Computer Vision Techniques and Algorithms. Chichester, UK: John Wiley & Sons, Ltd., 2009.

B. Darrach, “Meet Shakey, the First Electronic Person,” Life, p. 58B, Nov. 20, 1970.

Davidbuckley.net, “History Making Robots.” [Online]. Available: http://davidbuckley.net/DB/HistoryMakers.htm. [Accessed: Sep. 21, 2008].

K. Dautenhahn, “Robots We Like to Live With? - A Developmental Perspective on a Personalized, Life-Long Robot Companion,” in Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2004), 2004, pp. 17-22.

K. Dautenhahn, S. N. Woods, C. Kaouri, M. L. Walters, K. L. Koay, and I. Werry, “What is a Robot Companion - Friend, Assistant or Butler?,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robot Systems (IROS'05), Edmonton, Canada, 2005, pp. 1488-1493.

R. H. Dave, Developing and Writing Behavioral Objectives (R. J. Armstrong, ed.), Educational Innovators Press, 1975.

A. De Santis, B. Siciliano, A. De Luca, and A. Bicchi, “An Atlas of Physical Human-Robot Interaction,” Mechanism and Machine Theory, vol. 43, no. 3, pp. 253-270, 2008.

A. De Santis, “Modelling and Control for Human-Robot Interaction,” Ph.D. dissertation, The University of Naples Federico II, Naples, Italy, 2007.

M. Driscoll, Psychology of Learning for Instruction. Needham Heights, MA, Allyn & Bacon, 2000.

P. Edwards and I. Roberts, “Population Adiposity and Climate Change,” International Journal of Epidemiology, vol. 38, pp. 1137-1140, 2009.

Electronics Times, “Bionic arm with skin is being tested in Scotland,” Electronics Times, June 1, 1998. [Online]. Available: FindArticles.com, http://findarticles.com/p/articles/mi_m0WVI/is_1998_June_1/ai_50074743. [Accessed Sept. 18, 2007].

J. Excell, “Interview: Toy Story,” The Engineer, pp. 28-29, November 2006.

O. D. Faugeras, “What can be seen in three dimensions from an uncalibrated stereo rig?,” in Proceeding of the Second European Conference on Computer Vision (ECCV’92), Santa Margherita Ligure, Italy, Springer-Verlag, 1992, pp. 563-578.

O. D. Faugeras, Q. T. Luong, and S. J. Maybank, “Camera Self Calibration: Theory and Experiments,” in Proceeding of the Second European Conference on Computer Vision (ECCV’92), Springer-Verlag, 1992, pp. 321-334.

O. D. Faugeras and G. Toscani, “The Calibration Problem for Stereo,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR’86), 1986, pp. 15-20.

D. Feil-Seifer and M. J. Matarić, “Defining socially assistive robotics,” in Proceedings of the IEEE International Conference on Rehabilitation Robotics (ICORR’05), Chicago, IL, USA, June 2005, pp. 465–468.

T. Fong, I. Nourbakhsh, and K. Dautenhahn, “A Survey of Socially Interactive Robots: Concepts, Design, and Applications,” The Robotics Institute Carnegie Mellon University, Report CMU-RI-TR-02-29, 2002.

T. Fong, I. Nourbakhsh, and K. Dautenhahn, “A Survey of Socially Interactive Robots,” Robotics & Autonomous Systems, Special issue on Socially Interactive Robots, vol. 42, no. 3-4, pp. 143-166, 2003.

B. Fortenberry, J. Chenu, and J. R. Movellan, “RUBI: A Robotic Platform for Real-time Social Interaction,” in Proceeding of the Third International Conference on Development and Learning (ICDL'04), La Jolla, California, USA, Oct. 2004.

Fujitsu Automation Limited, “Humanoid Robot HOAP-2,” Fujitsu Automation. [Online]. Available: http://jp.fujitsu.com/group/automation/en/services/humanoid-robot/hoap2/. [Accessed: Sept. 21, 2007].

L. D. Frank, M. A. Andresen, and T. L. Schmid, “Obesity Relationships with Community Design, Physical Activity, and Time Spent in Cars,” American Journal of Preventive Medicine, vol. 27, no. 2, pp. 87–96, 2004.

B. Friedman, P. H. Kahn, and J. Hagman, “Hardware companions? - What online AIBO discussion forums reveal about the human-robotic relationship,” in Proceedings of the CHI 2003 Conference on Human Factors in Computing Systems, New York: ACM, 2003, pp. 273-280.

Futaba, “Masterpiece of Motion,” Robot, issue 8, pp. 2-3, Fall 2007.

B. Gates, “A Robot in Every Home,” Scientific American, vol. 296, no. 1, pp. 58-65, Jan. 2007.

F. Gemperle, C. DiSalvo, J. Forlizzi, and W. Yonkers, “The Hug: A New Form for Communication,” presented at the Designing the User Experience (DUX2003), New York, USA, 2003.

R. Gockley, A. Bruce, J. Forlizzi, M. Michalowski, A. Mundell, S. Rosenthal, B. P. Sellner, R. Simmons, K. Snipes, A. Schultz, and J. Wang, “Designing Robots for Long-Term Social Interaction,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), August 2005, pp. 1338-1343.

R. Gockley, R. Simmons, J. Wang, D. Busquets, C. DiSalvo, Ke. Caffrey, S. Rosenthal, J. Mink, S. Thomas, W. Adams, T. Lauducci, M. Bugajska, D. Perzanowski, and A. Schultz, “Grace and George: Social Robots at AAAI,” in Proceedings of AAAI 2004 Mobile Robot Competition Workshop (AAAI'04). Tech. Rep. WS-04-11, August 2004, pp. 15-20.

J. Goetz, S. Kiesler, and A. Powers, “Matching Robot Appearance and Behaviour to Tasks to Improve Human-Robot Cooperation,” in Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2003), Berkeley, CA, USA, 2003, pp. 55-60.

C. Gonzalez, “The Role of Blended Learning in the World of Technology,” Benchmarks Online, University of North Texas, 2004. [Online]. Available: http://www.unt.edu/benchmarks/archives/2004/september04/eis.htm. [Accessed: May 14, 2010].

M. A. Goodrich and A. Schultz, “Human-Robot Interaction: A Survey,” Foundations and Trends in Human-Computer Interaction, vol. 1, no. 3, pp. 203-275, 2007.

S. M. Goza, R. O. Ambrose, M. A. Diftler, and I. M. Spain, “Telepresence Control of the NASA/DARPA Robonaut on a Mobility Platform,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2004), Vienna, Austria, 2004, pp. 623-629.

M. E. Gredler, Learning and Instruction: Theory into Practice. 5th Edition, Upper Saddle River, NJ, Pearson Education, 2005.

E. T. Hall, Handbook for Proxemics Research. Washington DC: Society for the Anthropology of Visual Communication, 1974.

E. T. Hall, “Proxemics,” Current Anthropology, vol. 9, no. 2-3, pp. 83-108, 1968.

E. T. Hall, The Hidden Dimension. New York: Doubleday, 1966.

P. J. Hinds, T. L. Roberts, and H. Jones, “Whose Job is it Anyway? A Study of Human-Robot Interaction in a Collaborative Task,” Human-Computer Interaction, vol. 19, no. 1-2, pp. 151-181, 2004.

T. Hirsch, J. Forlizzi, E. Hyder, J. Goetz, J. Stroback, and C. Kurtz, “The ELDer Project: Social and Emotional Factors in the Design of Eldercare Technologies,” in Proceeding of the 2000 Conference on Universal Usability, Arlington, Virginia, USA, 2000, pp. 72-79.

A. G. Hofmann and B. C. Williams, “Intent Recognition for Human-Robot Interaction,” in Technical Report of the 2007 Association for the Advancement of Artificial Intelligence (AAAI 2007) Spring Symposium on Interaction Challenges for Intelligent Assistants, Menlo Park, CA: AAAI Press, 2007, pp. 60-61.

O. Holland, “The Grey Walter Online Archive,” Intelligent Autonomous Systems - People, University of the West of England. [Online]. Available: http://www.ias.uwe.ac.uk/Robots/gwonline/gwonline.html. [Accessed: Sept. 19, 2007].

Honda, “The Honda Humanoid Robot,” Honda Worldwide | ASIMO. [Online]. Available: http://world.honda.com/ASIMO. [Accessed: Sept. 21, 2007].

M. J. Horowitz, D. F. Duff, and L. O. Stratton, “Body Buffer Zone,” Archives of General Psychiatry, vol. 11, no. 6, pp. 651-656, 1964.

K. J. Hosein, “Review of the RS Media,” RoboCommunity, Jan. 20, 2007. [Online]. Available: http://www.robocommunity.com/article/10627/Review-of-the-RS-Media/. [Accessed: Sept. 22, 2007].

J. Intini, “In 2017, robots will care for the elderly, your car will drive itself, and your house will talk. A short history of the near future,” Maclean's, vol. 120, no. 39, pp. 58-59, 62-64, Oct. 2007.

T. Jebara, A. Azarbayejani, and A. Pentland, “3D structure from 2D motion,” IEEE Signal Processing Magazine, vol. 16, no. 3, pp. 66–84, 1999.

L. Jinjun, Z. Hong, J. Tao, and Z. Xiang, “Development of a 3D High-Precise Positioning System Based on a Planar Target and Two CCD Cameras,” in Proceeding of the First International Conference on Intelligent Robotics and Applications (ICIRA’08), Wuhan, China, Oct. 15-17, 2008, pp. 475-484.

H. L. Jones, S. M. Rock, D. Burns, and S. Morris, “Autonomous Robots in SWAT Applications: Research, Design, and Operations Challenges,” in Proceedings of the 2002 International Symposium for the Association of Unmanned Vehicle Systems (AUVSI ’02), Orlando, FL, USA, 2002.

JTSC, “New Released – HOAP-3,” HOAP-3 Press Release English, August 29, 2007. [Online]. Available: http://home.comcast.net/~jtechsc/HOAP-3_Press_Release_English.pdf. [Accessed: Sept. 21, 2007].

L. E. Kahn, P. S. Lum, W. Z. Rymer, and D. J. Reinkensmeyer, “Robot-assisted movement training for the stroke-impaired arm: Does it matter what the robot does?,” Journal of Rehabilitation Research and Development, vol. 43, no. 5, pp. 619-630, 2006.

T. Kanda, M. Kamasima, M. Imai, T. Ono, D. Sakamoto, H. Ishiguro, and Y. Anzai, “A Humanoid Robot that Pretends to Listen to Route Guidance from a Human,” Autonomous Robots, vol. 22, no. 1, pp. 87-100, 2007.

T. Kanda, T. Hirano, D. Eaton, and H. Ishiguro, “Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial,” Human-Computer Interaction, vol. 19, no. 1-2, pp. 61-84, June 2004.

A. Kendon, “Movement coordination in social interaction: Some examples described.” Acta Psychologica, vol. 32, pp. 100–125, Jan. 1970.

O. Khatib, K. Yokoi, O. Brock, K. Chang, and A. Casal, “Robots in Human Environments: Basic Autonomous Capabilities,” International Journal of Robotics Research, vol. 18, no. 7, pp. 684–696, 1999.

S. Kiesler and P. Hinds, “Introduction to This Special Issue on Human-Robot Interaction,” Human-Computer Interaction, vol. 19, no. 1-2, pp. 1-8, 2004.

S. J. King and C. F. R. Weiman, “Helpmate Autonomous Mobile Robot Navigation System,” in Proceedings of the SPIE 1990 Conference on Mobile Robots, Boston, MA, USA, Nov. 1990, vol. 1388, pp. 190-198.

Kosuge & Wang Laboratory, “MS DanceR,” Research. [Online]. Available: http://www.irs.mech.tohoku.ac.jp/research/RobotPartner/dance.html. [Accessed: Oct. 29, 2007].

D. R. Krathwohl, B. S. Bloom, and B. B. Masia, Taxonomy of Educational Objectives, Handbook II: Affective Domain. New York: David McKay Co., Inc., 1956.

D. R. Krathwohl, B. S. Bloom, and B. B. Masia, Taxonomy of Educational Objectives, the Classification of Educational Goals, Handbook II: Affective Domain. New York: David McKay Co., Inc., 1973.

R. A. Lane and N. A. Thacker, Stereo Vision Research: An Algorithm Survey, 1996.

P. Gordon-Larsen, M. C. Nelson, and K. Beam, “Associations Among Active Transportation, Physical Activity, and Weight Status in Young Adults,” Obesity Research, vol. 13, no. 5, pp. 868–875, 2005.

J. C. Lee, “Hacking the Nintendo Wii Remote,” IEEE Pervasive Computing, vol. 7, no. 3, pp. 39-45, 2008.

LERN: Leading Edge Robotics News, “Keepon Keeps Dancing On,” Robot, issue 8, p. 15, Fall 2007.

LetsGoDigital, “Image Processing Technique.” [Online]. Available: http://www.letsgodigital.org/en/index.html. [Accessed: Aug. 8, 2008].

S. Li and B. Wrede, “Why and How to Model Multi-Modal Interaction for a Mobile Robot Companion,” The AAAI Spring Symposium on Interaction Challenges for Intelligent Assistants, CA, USA, Tech. Rep. 72-79, 2007.

L. L. Y. Lim, S. A. Seubsman, and A. Sleigh, “Validity of Self-Reported Weight, Height, and Body Mass Index Among University Students in Thailand: Implications for Population Studies of Obesity in Developing Countries,” Population Health Metrics, vol. 7, pp. 15, 2009.

M. Lindström, “Means of Transportation to Work and Overweight and Obesity: a Population-based Study in Southern Sweden,” Preventive Medicine, vol. 46, pp. 22-28, 2008.

G. C. Liu, J. S. Wilson, R. Qi, and J. Ying, “Green Neighborhoods, Food Retail and Childhood Overweight: Differences by Population Density,” American Journal of Health Promotion, vol. 21, no. 4S, pp. 317–325, 2007.

P. S. Lum, C. G. Burgar, P. C. Shor, M. Majmundar, and M. Van der Loos, “Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper-limb motor function after stroke,” Archives of Physical Medicine and Rehabilitation, vol. 83, pp. 952–959, 2002.

H. H. Lund, “Adaptive Robotics in the Entertainment Industry,” in Proceedings IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2003), Kobe, Japan, July 16-20, 2003, pp. 595-602.

Y. Ma, B. C. Olendzki, W. Li, A. R. Hafner, D. Chiriboga, J. R. Hebert, et al., “Seasonal Variation in Food Intake, Physical Activity, and Body Weight in a Predominantly Overweight Population,” European Journal of Clinical Nutrition, vol. 60, pp. 519–528, 2006.

N. Mastunami, K. Tanaka-Ishii, I. Frank, and H. Matsubara, “LEGO Mindstorms Cheerleading Robots,” in Proceedings of the IFIP First International Workshop on Entertainment Computing 2002 (IWEC 2002), Makuhari, Japan, 2002, pp. 199-206.

McGraw-Hill, McGraw-Hill Encyclopedia of Science and Technology, 5th edition, McGraw-Hill Professional, 2004.

M. P. Michalowski and H. Kozima, “Methodological Issues in Facilitating Rhythmic Play with Robots,” in Proceedings of the 16th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2007), Jeju, Korea, 2007, pp. 95-100.

M. P. Michalowski, S. Sabanovic, and H. Kozima, “A Dancing Robot for Rhythmic Social Interaction,” in Proceedings of the 2nd Annual Conference on Human-Robot Interaction (HRI 2007), ACM, 2007, pp. 89-96.

M. P. Michalowski, S. Sabanovic, and P. Michel, “Roillo: Creating a social robot for playrooms,” in Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006), United Kingdom: University of Hertfordshire, 2006, pp. 587-592.

P. Mickle, “1961: A peep into the automated future,” 1961: The First Robot. [Online]. Available: http://www.capitalcentury.com/1961.html. [Accessed: Sept. 17, 2007].

G. Mitsuoka, “INFINUVO Cleanmate 365 QQ-2 Robotic Vacuum Cleaner: Watch out Roomba, here comes the competition!,” Robot, issue 8, pp. 74-75, Fall 2007.

H. Nakajima, S. Brave, C. Nass, R. Yamada, Y. Morishima, and S. Kawaji, “The Functionality of Human-Machine Collaboration Systems: Mind Model and Social Behavior,” in Proceedings of the IEEE Conference on Systems, Man, and Cybernetics, Washington, USA, October 2003, pp. 2381–2387.

H. Nakajima, Y. Morishima, R. Yamada, S. Brave, H. Maldonado, C. Nass, and S. Kawaji, “Social Intelligence in a Human-Machine Collaboration System-Social Responses to Agents with Mind Model and Personality,” Journal of the Japanese Society for Artificial Intelligence, vol. 19, no. 3, pp. 184–196, 2004.

NEC, “PaPeRo,” NEC Personal Robot Research Center. [Online]. Available: http://www.nec.co.jp/robot/english/robotcenter_e.html. [Accessed: Aug. 31, 2008].

S. Newman, R. Clemmer-Smith, and J. Yhoung-Aree, “Food, Lifestyle and Fitness: Obesity in Central Thailand,” The Journal of the Federation of American Societies for Experimental Biology, vol. 22, no. 3, pp. 866.11, 2008.

S. Y. Nof, Handbook of Industrial Robotics, 2nd ed. John Wiley & Sons, 1999, pp. 3-5.

K. R. Norum, “World Health Organization’s Global Strategy on Diet, Physical Activity and Health: the Process Behind the Scenes,” Scandinavian Journal of Nutrition, vol. 49, no. 2, pp. 83-88, 2005.

H. Ogawa and T. Watanabe, “Interrobot: Speech-Driven Embodied Interaction Robot,” in Proceedings of the 9th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2000), Osaka, Japan, 2000, pp. 322-327.

M. Okada, “Muu: Artificial Creatures as an Embodied Interface,” in Proceeding of the ACM Siggraph 2001, New Orleans, USA, 2001, pp. 91.

J. L. Patton and F. A. Mussa-Ivaldi, “Robot-assisted adaptive training: custom force fields for teaching movement patterns,” IEEE Transactions on Biomedical Engineering, vol. 51, pp. 636–646, 2004.

Phon’s Thai Martial Art Center, “Ram Muay,” Phon’s Thai Martial Art Center: Muay Thai Promotions. [Online]. Available: http://www.muaythaipromotions.com. [Accessed: Sept. 23, 2007].

B. M. Popkin, “Global Changes in Diet and Activity Patterns as Drivers of the Nutrition Transition,” Nestle Nutrition Workshop Series: Pediatric Program, vol. 63, pp. 1–14, 2009.

B. M. Popkin, “The nutrition transition: an overview of world patterns of change,” Nutrition Reviews, vol. 62, no. 7, pp. S140–S143, 2004.

J. Pransky, “Social Adjustments to a Robotic Future,” Dr. Joanne Pransky World's First Robotic Psychiatrist – Publications, pp. 1-10, 2004. [Online]. Available: http://www.robot.md/publications/sfra.html. [Accessed: Sept. 18, 2008].

J. Randerson, “Japanese teach robot to dance,” Guardian Unlimited, Aug. 8, 2007. [Online]. Available: http://www.guardian.co.uk/technology/2007/aug/08/robots.japan. [Accessed: Nov. 28, 2007].

D. Reinemann and D. Smith, “Evaluation of Automatic Milking Systems for the United States,” in H. Hogeveen and A. Meijering (Eds.), Robotic Milking, Proceedings of the International Symposium, Lelystad, The Netherlands, 2000.

RoboGuide, “RS Media,” RS Media RoboGuide, Feb. 5, 2007. [Online]. Available: http://www.evosapien.com/robosapien-hack/nocturnal2/RSMedia. [Accessed: Sept. 22, 2007].

Robotics Trends, “Will entertainment robots become a new class of high margin consumer electronics or will they be relegated to the status of expensive novelties?,” Robotics Trends. [Online]. Available: http://www.roboticstrends.com/displayarticle688.html?POSTNUKESID=11b957626a4b0d20ae3d6e600bf12bf8. [Accessed: Nov. 2, 2007].

Robot Magazine, “Name That Bot!,” Robot, issue 8, p. 9, Fall 2007.

E. Rogers and R. Murphy, “Final Report for DARPA/NSF Study on Human–Robot Interaction,” 2001. [Online]. Available: http://www.aic.nrl.navy.mil/hri/nsfdarpa/index.html. [Accessed: Aug. 8, 2007].

B. H. Rosenberg, D. Landsittel, and T. D. Averch, “Can video games be used to predict or improve laparoscopic skills?,” Journal of Endourology, vol. 19, pp. 372-376, 2005.

S. Sabanovic, M. P. Michalowski, and R. Simmons, “Robots in the Wild: Observing Human-Robot Social Interaction Outside the Lab,” in Proceedings of the 9th IEEE International Workshop on Advanced Motion Control (AMC 06), Istanbul, Turkey, 2006, pp. 596-601.

M. A. Salichs, R. Barber, et al., “Maggie: A Robotic Platform for Human-Robot Social Interaction,” in 2006 IEEE International Conferences on Cybernetics & Intelligent Systems (CIS) and Robotics, Automation & Mechatronics, Bangkok, Thailand, 2006.

L. Satyen and K. Ohtsuka, “Strategies to develop dual attention skills through video game training,” International Journal for Numerical Methods in Engineering, vol. 42, pp. 561- 578, 2002.

M. Scheeff, J. Pinto, K. Rahardja, S. Snibbe, and R. Tow, “Experience with Sparky, A Social Robot,” Socially Intelligent Agents, vol. 3, pp. 173-180, 2002.

B. J. Scholl and P. D. Tremoulet, “Perceptual causality and animacy,” Trends in Cognitive Sciences, vol. 4, no. 8, pp. 299-309, August 2000.

Scienceinaction.info, “Science in Action,” Ability | Robot | Technology | Scienceinaction.info. [Online]. Available: http://www.scienceinaction.info/technology/robot/ability/index.html. [Accessed: Sept. 21, 2007].

M. Scopelliti, M. V. Giuliani, and F. Fornara, “Robots in a Domestic Setting: A Psychological Approach,” Universal Access to the Information Society, vol. 4, pp. 146–155, 2005.

K. Severinson-Eklundh, A. Green, and H. Hüttenrauch, “Social and Collaborative Aspects of Interaction with a Service Robot,” Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 223-234, March 2003.

T. Sheridan, Telerobotics, Automation, and Human Supervisory Control, Cambridge, MA: MIT Press, 1992, p. 6.

K. Shinozaki, A. Iwatani, and R. Nakatsu, “Concept and Construction of a Dance Robot System,” presented at 2nd International Conference on Digital Interactive Media in Entertainment and Arts (DIMEA 2007), Perth, Australia, 2007.

K. Shinozaki, Y. Oda, S. Tsuda, R. Nakatsu, and A. Iwatani, “Study of Dance Entertainment Using Robots,” in Lecture Notes in Computer Science, Technologies for E-Learning and Digital Entertainment, Springer Berlin / Heidelberg, vol. 3942/2006, pp. 473-483.

S. Siegel, Non-Parametric Statistics for the Behavioural Sciences, New York: McGraw-Hill, 1956, pp. 75-83.

G. Siemens, “Connectivism: A Learning Theory for the Digital Age,” International Journal of Instructional Technology and Distance Learning, 2004. [Online]. Available: http://www.itdl.org/Journal/Jan_05/article01.htm. [Accessed: May 15, 2010].

R. Sommer, Personal Space: The Behavioral Basis of Design. New Jersey: Prentice-Hall, 1969.

SONY, “Small Biped Entertainment Robot ‘QRIO’ Appointed as Sony Group's Corporate Ambassador,” SONY Global – Press Release, October 1, 2003. [Online]. Available: http://www.sony.net/SonyInfo/News/Press_Archive/200310/03-1001E/. [Accessed: Sept. 21, 2007].

Speecys Corp., “SPC-101C,” Speecys. [Online]. Available: http://www.speecys.com. [Accessed: Jul. 20, 2008].

L. O. Stratton, D. J. Tekippe, and G. L. Flick, “Personal Space and Self Concept,” Sociometry, vol. 36, pp. 424-429, 1973.

C. Suplee, “Robot Revolution,” National Geographic, vol. 192, no. 1, pp. 76-95, July 1997.

D. S. Syrdal, K. Dautenhahn, S. N. Woods, M. L. Walters, and K. L. Koay, “‘Doing the Right Thing Wrong’ - Personality and Tolerance to Uncomfortable Robot Approaches,” in Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006), University of Hertfordshire, UK, 2006, pp. 183-188.

D. S. Syrdal, K. L. Koay, M. L. Walters, and K. Dautenhahn, “A Personalized Robot Companion? - The Role of Individual Differences on Spatial Preferences in HRI Scenarios,” in Proceedings of the 16th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2007), Jeju, Korea, 2007, pp. 1143-1148.

F. Tanaka, A. Cicourel, and J. R. Movellan, “Socialization between Toddlers and Robots at an Early Childhood Education Center,” Proceedings of the National Academy of Sciences of the USA (PNAS), vol. 104, no. 46, pp. 17954-17958, Nov. 2007.

F. Tanaka, B. Fortenberry, K. Aisaka, and J. R. Movellan, “Developing Dance Interaction between QRIO and Toddlers in a Classroom Environment: Plans for the First Steps,” in Proceedings of the 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2005), 2005, pp. 223-228.

A. Tapus, C. Ţăpuş, and M. J. Matarić, “User-Robot Personality Matching and Assistive Robot Behavior Adaptation for Post-Stroke Rehabilitation Therapy,” Journal of Intelligent Service Robotics, vol. 1, no. 2, pp. 1861-2776, 2008.

A. Tapus, M. J. Matarić, and B. Scassellati, “The Grand Challenges in Socially Assistive Robotics,” IEEE Robotics and Automation Magazine, vol. 14, no. 1, pp. 35-42, 2007.

TAT, Germany, “Multimedia,” Tourism Authority of Thailand, Germany. [Online]. Available: http://www.thailandtourismus.de/multimedia/index.html. [Accessed: Sept. 23, 2007].

The Irish Times, “Science fiction becomes a reality with first bionic arm,” The Irish Times, p. 37, September 4, 1998. [Online]. Available: http://www.ireland.com. [Accessed Sept. 18, 2007].

C. Thompson, “Proceedings of the San Francisco Robotics Society of America,” Robotics Society of America, April 2, 1997. [Online]. Available: http://www.robots.org/newslttr/news0497/met0497a.htm. [Accessed: Sept. 18, 2007].

S. Thrun, D. Hähnel, D. Ferguson, M. Montemerlo, R. Triebel, W. Burgard, et al., “A System for Volumetric Robotic Mapping of Abandoned Mines,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA’03), Sept. 2003, pp. 4270-4275.

S. Thrun, “Toward a Framework for Human-Robot Interaction,” Human-Computer Interaction, vol. 19, no. 1, pp. 9-24, 2004.

P. Torr, W. Fitzgibbon, and A. Zisserman, “Maintaining Multiple Motion Model Hypotheses Over Many Views to Recover Matching and Structure,” in Proceeding of the Sixth International Conference on Computer Vision, Jan. 1998, pp. 485-491.

Toyota, “Partner Robot.” [Online]. Available: http://www.toyota.co.jp/en/special/robot. [Accessed: Sept. 21, 2007].

A. Trafton, “Assistive robot responds to faces, new places,” MIT Tech Talk, vol. 51, no. 23, p. 3, April 11, 2007. [Online]. Available: http://web.mit.edu/newsoffice/2007/techtalk51-23.pdf. [Accessed: Sept. 20, 2007].

United Nations and the International Federation of Robotics, World Robotics 2002, New York, 2002.

P. B. Vaill, Learning as a Way of Being. San Francisco, CA, USA: Jossey-Bass Inc., 1996.

Visual Journal, “History of Robotics,” Visual Journal – Robot History. [Online]. Available: http://pages.cpsc.ucalgary.ca/~jaeger/visualMedia/robotHistory.html. [Accessed: Sep. 18, 2007].

B. T. Volpe, D. Lynch, A. Rykman-Berland, M. Ferraro, M. Galgano, N. Hogan, and H. I. Krebs, “Intensive sensorimotor arm training mediated by therapist or robot improves hemiparesis in patients with chronic stroke,” Neurorehabilitation and Neural Repair, vol. 22, pp. 305–310, 2008.

G. W. Walter, The Living Brain. New York, NY, USA: Norton and Company, 1953.

M. L. Walters, “The Design Space for Robot Appearance and Behaviour for Social Robot Companions,” Ph.D. thesis, University of Hertfordshire, Hatfield, UK, 2008.

L. M. Wen and C. Rissel, “Inverse Associations Between Cycling to Work, Public Transport, and Overweight and Obesity: Findings from a Population Based Study in Australia,” Preventive Medicine, vol. 46, pp. 29-32, 2008.

M. S. Whitehouse, “Physical Movement and Personality,” Paper presented at the Analytical Psychology Club of Los Angeles, USA, 1968.

M. Whitlock, “Evolution of Robosapien V2,” RoboCommunity, Nov. 3, 2006. [Online]. Available: http://www.robocommunity.com/article/10264. [Accessed: Nov. 2, 2007].

I. Wickelgren, Ramblin’ Robots. New York, NY, USA: Franklin Watts, 1996.

F. Wilcoxon, “Individual Comparisons by Ranking Methods,” Biometrics Bulletin, vol. 1, pp. 80-83, 1945.

S. N. Woods, K. Dautenhahn, C. Kaouri, R. te Boekhorst, K. L. Koay, and M. L. Walters, “Are Robots Like People? - Relationships between Participant and Robot Personality Traits in Human-Robot Interaction,” Interaction Studies, vol. 8, no. 3, pp. 281-305, 2007.

World Health Organization, “Childhood Overweight and Obesity,” World Health Organization. [Online]. Available: http://www.who.int/dietphysicalactivity/childhood/en/. [Accessed: Jan. 4, 2010].

World Health Organization, “Obesity and Overweight,” World Health Organization. [Online]. Available: http://www.who.int/mediacentre/factsheets/fs311/en/index.html. [Accessed: Dec. 29, 2009].

World-Information.Org, “1961: Installation of the First Industrial Robot,” World-Information.Org. [Online]. Available: http://world-information.org/wio/infostructure/100437611663/100438659325. [Accessed: Sept. 17, 2007].

WowWee Ltd., “History,” WowWee. [Online]. Available: http://www.wowwee.com. [Accessed Sept. 22, 2007].

A. Zivanovic and B. L. Davies, “A Robotic System for Blood Sampling,” IEEE Transactions on Information Technology in Biomedicine, vol. 4, no. 1, pp. 8-14, March 2000.

APPENDICES

Note: The computer codes in Appendices A to D (Pages 140‐201) of this thesis are not available for public access.

Appendix E: Public Exposure of iThaiSTAR and iCHEER

E.1 Bangkok International ICT Expo 2007


E.2 “Sanook.com” Campus News, Nov. 16, 2007.

(Source: http://campus.sanook.com/campusnews/detail.php?id=5316)


E.3 BangkokBizNews Online News, Nov. 19, 2007.

(Source: http://www.bangkokbiznews.com/2007/11/19/WW17_1702_news.php? newsid=203544)


E.4 “Newswit.com” Online News, Nov. 19, 2007.

(Source: http://www.newswit.com/news/2007-11-19/0744-c6b49a749c1ba2c7d83a7

96a530400d3/)


E.5 “ThaiPR.NET” Online News, Nov. 19, 2007.

(Source: http://www.thaipr.net/nc/readnews.aspx?newsid=A8247B90476B77488334

141E8438A578)


E.6 “KLIN: KMUTT Library Network” News, Vol. 47, Nov. 20, 2007.

(Source: http://www.lib.kmutt.ac.th/news/digital.jsp)


E.7 KomChadLuek Newspaper (Thailand), pg. 12, Nov. 21, 2007.


E.8 Rangsit University’s website, Nov. 22, 2007.

(Source: http://www2.rsu.ac.th/web/insidersu/news.aspx?newsID=1068)


E.9 BanMuang Newspaper (Thailand), pg. 13, Nov. 25, 2007.

(Online version: http://www.banmuang.co.th/Education.asp?id=127314)


E.10 Live TV Talk Show “Poo Ying Theung Poo Ying”.

(The show's name in English is “Woman to Woman”.)


E.11 Interviewed by ASTV (Asian Satellite Television) for News Hours.

(Source: http://webmunism.com/vids/of/astv)

(Source: http://www.dailymotion.com/tag/ithaistar+news/video/x3lic9_campus-news-14_news)


E.12 Interviewed by TITV (Thai Independent Television) for Technology News.


E.13 Sarn Rangsit News, No. 534, 5-7 Dec. 2007.


E.14 City News, CityVariety.com, Dec. 12, 2007. (Source: http://www.cityvariety.com/cv4/modules/news/index.php?mode=detail&id=2809)


E.15 Sarn Rangsit Magazine, Dec. 2007.


E.16 Insight Education Consulting, Dec. 15, 2007.

(Source: http://www.insighteducate.com/scholarship_murdoch.html)


E.17 Manager Online News, Dec. 20, 2007

(Source: http://www.manager.co.th/Campus/ViewNews.aspx?NewsID=9500000150840)


E.18 KomChadLuek Newspaper (Thailand), Jan. 2, 2008.

(Online version: http://www.komchadluek.net/2008/01/04/h001_183959.php?news_id=183959)


E.19 Sanook.com, Campus News: Show room, Jan. 3, 2008.

(Source: http://campus.sanook.com/u_life/showroom_03315.php)


E.20 Explore Article, Mar. 2008.


E.21 Click Zone TV Show, Thai PBS (Public Broadcasting Service) TV


E.22 Live TV Talk Show “Tee Nee… H Plus”, MV Television

(The show's name in English is “Here at... H Plus”.)
