University of Pennsylvania Institute for Research in Cognitive Science Technical Report No. IRCS-99-06.

A MOTION CONTROL SCHEME FOR ANIMATING

EXPRESSIVE ARM MOVEMENTS

DIANE M. CHI

A DISSERTATION

in

COMPUTER AND INFORMATION SCIENCE

Presented to the Faculties of the University of Pennsylvania in Partial Fulfillment of the

Requirements for the Degree of Doctor of Philosophy.

1999

Norman I. Badler

Supervisor

Jean Gallier

Graduate Group Chair

COPYRIGHT

Diane M. Chi 1999

ABSTRACT

A MOTION CONTROL SCHEME FOR ANIMATING

EXPRESSIVE ARM MOVEMENTS

Diane M. Chi

Supervisor: Norman I. Badler

Current methods for figure animation involve a tradeoff between the level of realism captured in the movements and the ease of generating the animations. We introduce a motion control paradigm that circumvents this tradeoff: it provides the ability to generate a wide range of natural-looking movements with minimal user labor.

Effort, which is one part of Rudolf Laban's system for observing and analyzing movement, describes the qualitative aspects of movement. Our motion control paradigm simplifies the generation of expressive movements by proceduralizing these qualitative aspects to hide the non-intuitive, quantitative aspects of movement. We build a model of Effort using a set of kinematic movement parameters that defines how a figure moves between goal keypoints. Our motion control scheme provides control through Effort's four-dimensional system of textual descriptors, providing a level of control thus far missing from behavioral animation systems and offering novel specification and editing capabilities on top of traditional keyframing and inverse kinematics methods. Since our Effort model is inexpensive computationally, Effort-based motion control systems can work in real-time.

We demonstrate our motion control scheme by implementing EMOTE (Expressive MOTion Engine), a character animation module for expressive arm movements. EMOTE works with inverse kinematics to control the qualitative aspects of end-effector specified movements. The user specifies general movements by entering a sequence of goal positions for each hand. The user then expresses the essence of the movement by adjusting sliders for the Effort motion factors: Space, Weight, Time, and Flow. EMOTE produces a wide range of expressive movements, provides an easy-to-use interface (that is more intuitive than joint angle interpolation curves or physical parameters), and features interactive editing and real-time motion generation.

Acknowledgements

When I reflect on these past few years of graduate study, there are numerous people to whom I am grateful. Above all, I am deeply indebted to my advisor, Professor Norm Badler. Without his vision and inspiration, this work would never have been imagined, much less demonstrated. Norm has been an ideal advisor, sensing when I needed direction or motivation, yet trusting me with independence and responsibility. I admire his intelligence, managerial abilities, and selfless devotion to his students, and am grateful for his years of advice, guidance, and support.

I am also grateful to Janis Pforsich, who was instrumental in providing the LMA expertise for the project. Her enthusiasm and willingness to explore the potential of computer technology were true assets, and I appreciate her role as a teacher and a friend.

My committee, Dr. Armin Bruderlin, Dr. Martha Palmer, Professor CJ Taylor, and Professor Dimitri Metaxas, deserves special thanks for reading my papers and providing insightful comments. Their varied perspectives brought truly useful suggestions that strengthened the work and its presentation, and their interest in the project was encouraging.

Others that deserve thanks include: Deepak Tolani for providing the inverse kinematics code; Mike Pan for his modeling and video assistance, along with Mark Palatucci, for being guinea pigs for my user tests; Amy Matthews, Connie Cook, and Janet Hamburg for their Effort model evaluations; Karen Carter for making the lab a pleasant place to work; Rama Bindiganavale and John Granieri for their lab software support; Harold Sun and Christian Vogler for setting up the system; Professor Bonnie Webber, Dr. John Clarke, and Dr. Yumi Iwasaki for their guidance in my earlier research work; Professors Mark Steedman and Jean Gallier for their work as graduate student advocates; and Mike Felker for keeping the department running smoothly.

I am also grateful to the National Physical Science Consortium and the Department of Defense at Fort Meade for supporting my graduate studies and encouraging women and minorities to enter underrepresented fields of study.

Thanks to my "dissertation support group", Omolola Ijeoma Ogunyemi and Sonu Chopra, for offering a listening ear, providing sound advice, and joining me on all those "stress-relieving" shopping trips and candy runs. I appreciate the many past and present members of the graphics lab who were valuable colleagues and friends, including Vangelis Kokkevis, Ken Noble, Roxana Cantarovici, Charles Erignac, Bond-Jay Ting, Jeff Nimeroff, Barry Reich, and Jonathan Kaye. I'm also grateful for my friends David Brogan, Chrissy Benson, Alan Kuo, Jerome Strach, and Margie Wendling.

I would like to thank David Jelinek for his input in various technical discussions, for laboring through my papers and providing sometimes helpful comments, and for being my best friend during the last few years.

I am constantly reminded of how lucky I am to have such a wonderful family. Marie and Mike have been the best siblings a little sister could ask for; from the childhood antics through the transition into adulthood, they have shared and shaped the important moments of my life. Their unconditional support means a lot to me. I also appreciate my brother-in-law Darrell for being my financial, hardware, and home repair advisor. My nephews Jeffrey and Kevin are constant sources of fun, humor, and entertainment, reminding me of the truly important things in life. Thanks are due to Auntie Alice, my California relatives, and the close family friends who "adopted" me while I was away from home.

I want to express my utmost respect and admiration for my father and my mother. They are my role models. My father built a successful career, starting out with little support and a lot of determination. He has also shown me that age is of no matter: one can always be young at heart. My mother is the most dedicated and generous person I know. I admire her uncomplaining sacrifices and her ability to provide practical advice even in the face of crisis. I thank them for their unflagging support and guidance, and I dedicate this work to them.

Contents

Acknowledgements iv

1 Introduction 1
  1.1 Our Approach 2
  1.2 Motivation 3

2 Related Work 5
  2.1 Motion Control 5
  2.2 Expressive Movement 9
  2.3 Biomechanics 12
  2.4 Computers and Dance Notation 13

3 Background 15
  3.1 Nonverbal Communication 16
  3.2 Laban Movement Analysis 18

4 Effort Model 22
  4.1 Translating Effort into Movement Parameters 22
  4.2 Low-level Movement Parameter Definitions 24
    4.2.1 Trajectory Definition 24
    4.2.2 Parameterized Timing Control 25
    4.2.3 Flourishes 30
  4.3 Parameter Settings 33
    4.3.1 Parameter Settings for Individual Effort Elements 33
    4.3.2 Generating Effort Ranges and Combinations 35

5 Implementation: EMOTE 39
  5.1 User Interaction 39
  5.2 Arm Model 42
  5.3 Method for Using Effort Model 43
  5.4 Examples 43
    5.4.1 Individual Effort Elements 43
    5.4.2 Gestures Accompanying Speech 47

6 Conclusions 52
  6.1 Evaluation 52
    6.1.1 Effort Model Evaluation 52
    6.1.2 User Evaluations 56
  6.2 Extensions 57
  6.3 Contributions 58

Bibliography 58

List of Tables

3.1 Motion Factors and Effort Elements 21
4.1 Low-level Parameter Settings for Effort Elements 34
6.1 Overall Percentages for Effort Model Evaluation 54
6.2 Percentage Correct for Individual Effort Elements 55

List of Figures

4.1 Stick Figure Formed by Motion Capture Sensors 24
4.2 Velocity Function 27
4.3 Varying Inflection Point to Obtain Acceleration and Deceleration (Inflection Point Value Given in Parentheses) 28
4.4 Varying Time Exponent to Magnify Acceleration and Deceleration 29
4.5 Varying Initial and Final Velocities to Obtain Anticipation and Overshoot (Initial and Final Velocities Given in Parentheses) 29
4.6 Sine Factor in Squash Equation 30
4.7 Sine Factor in Equation for Breath 31
4.8 Multiplier in Wrist Bend Equation 32
5.1 Interface for Adjusting End-Effector Positions 40
5.2 Key Editor 41
5.3 Effort Editor 41
5.4 Effort Graph Editor 42
5.5 Arm Model 43
5.6 End-Effector Keys for an Example Movement Sequence 44
5.7 Every Fifth Frame from Animation of Indirect and Direct Efforts (Ordered Left to Right, Top to Bottom) 45
5.8 Every Fifth Frame from Animation of Light and Strong Efforts (Ordered Left to Right, Top to Bottom) 46
5.9 Every Tenth Frame from Animation of Sustained and Sudden Efforts (Ordered Left to Right, Top to Bottom) 48
5.10 Every Tenth Frame from Animation of Free and Bound Efforts (Ordered Left to Right, Top to Bottom) 49
5.11 Every Fourth Frame from Animation of a Denial (Ordered Left to Right, Top to Bottom) 50
5.12 Every Fourth Frame from Animation of a Gleeful Exclamation (Ordered Left to Right, Top to Bottom) 51

Chapter 1

Introduction

As a society, we are surrounded by other people. Consciously and subconsciously, we are constantly observing how people move. We often recognize others solely by catching a glimpse of them walking or moving. When people limp or make subtle compensations for an injury, we immediately recognize something unnatural about their movements. Through constant observation of everyday life, we become subconsciously aware of the subtleties of human movement. Animations of virtual characters must capture these subtleties in order to appear life-like and believable.

Current methods for human or character animation involve a tradeoff between the level of realism captured in the movements and the ease of generating the animations. At one end of the spectrum, computer-animated features and movie special effects display extremely realistic individuals with personalized movement characteristics; however, they require teams of animators using labor-intensive systems where every movement, from the flick of a finger to a full-body leap, must be explicitly specified. At the other end of the spectrum, behavioral animation systems can automatically generate multiple characters interacting with each other and their environments by merely specifying a set of initial conditions; however, the various characters often move in a fairly mechanical manner and share very similar motions. A method that combines the advantages found at these two extremes, namely the ability to generate a wide range of natural-looking movements with minimal user labor, can play an important role in circumventing this tradeoff.

1.1 Our Approach

We introduce a motion control paradigm that simplifies the generation of expressive movements by proceduralizing the qualitative aspects of movement to hide the non-intuitive quantitative aspects. To do this, we needed (1) a language for specifying expressive movements, and (2) a translation of this language into quantitative movement parameters.

We sought a language that would cover the complex description space of expressive movement while still providing an intuitive interface. We examined the nonverbal communication and dance notation literature for methods of describing, notating, and recording human movement. We found that the Effort component of Laban Movement Analysis [50, 51, 27, 7, 59] met our requirements, has a solid foundation based on observation and analysis of humans performing a wide range of movements, and is being used as a research tool in a growing number of disciplines. Effort is the part of Rudolf Laban's theories on movement that describes the qualitative aspects of movement using textual descriptors along four motion factors: Space, Weight, Time, and Flow (Chapter 3). The extremes of each motion factor give the eight Effort Elements: Indirect, Direct, Light, Strong, Sustained, Sudden, Free, and Bound.

We derived quantitative structures to model Effort. Through much trial and error, we established an empirical model of each Effort Element and designed techniques to generate ranges and combinations of Effort (Chapter 4).

We demonstrate the use of our motion control scheme by implementing EMOTE (Expressive MOTion Engine), a 3D animation control module for expressive arm movements. EMOTE lets users specify movements using end-effector positions and Effort settings. A sequence of end-effector positions provides a general spatial description of a movement, while the Effort settings define the desired qualitative nature of the movement. EMOTE uses inverse kinematics (IK) to compute postures for an articulated figure from the specification of end-effector locations [89, 76]. Inverse kinematics, however, does not specify how a figure achieves a computed posture or changes between a series of postures. Thus, IK output is usually used as input to a separate animation process, such as an end-effector linear interpolator or figure control equations. EMOTE lets a user express the essence of the movement by setting sliders for each of the four qualitative Effort motion factors: Space, Weight, Time, and Flow. EMOTE uses these Effort settings to compute the values for a set of low-level motion parameters that specify an animation that follows the defined position sequence and displays the selected Effort qualities. Since our translation process is computationally inexpensive, EMOTE provides interactive editing and real-time motion generation.
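To make this specification concrete, the sketch below shows one plausible representation of EMOTE's input: a sequence of end-effector goal keys per hand, each colored by Effort slider settings on the four motion factors. The class names, the [-1, +1] slider encoding, and the sample values are illustrative assumptions, not EMOTE's actual data structures.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EffortSettings:
    """Slider values on Effort's four motion factors, each in [-1, +1].

    Here -1 stands for the indulging extreme (Indirect, Light, Sustained,
    Free) and +1 for the fighting extreme (Direct, Strong, Sudden, Bound).
    This encoding is an assumption for illustration.
    """
    space: float = 0.0
    weight: float = 0.0
    time: float = 0.0
    flow: float = 0.0

@dataclass
class GoalKey:
    """One end-effector goal: a hand position plus the Effort used to reach it."""
    position: Tuple[float, float, float]  # goal position of the hand
    effort: EffortSettings                # qualitative coloring of the movement

# A movement is an ordered list of goal keys per hand; inverse kinematics
# supplies an arm posture at each goal, and the Effort model shapes how the
# arm travels between them.
left_hand: List[GoalKey] = [
    GoalKey((0.2, 1.1, 0.3), EffortSettings(space=-0.8, time=-0.5)),  # Indirect, Sustained
    GoalKey((0.5, 1.4, 0.1), EffortSettings(weight=0.9, time=0.7)),   # Strong, Sudden
]
```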

1.2 Motivation

We seek to provide a useful tool that enables further automation of expressive character animations. We believe such a system can play an important role in keyframe animation systems, virtual environments, games, and behavioral animation systems.

For novice keyframe animators, an Effort-based motion control system provides the ability to generate a broad range of motions with a short learning curve and an easy-to-use interface. For skilled animators, our system provides a blocking tool for quickly sketching out movement, using a language that directly supports the desired intent of the animated character in a process that is repeatable and provides interactive editing at the parameter level. Traditional low-level editing methods can be provided to allow animators to further refine their animations.

Users of virtual environment systems and computer game players often interact with other individuals, both user-controlled actors (avatars) and autonomous synthetic characters. With current systems, the synthetic characters either have to be completely specified with a library of reactive movements, or the characters all have extremely similar actions and reactions. For realistic interactions, virtual characters must appear to be individuals: they must move naturally and with subtle "personality" traits. In fact, having them act independently is more important than having them look different physically, because we commonly use actions and other non-verbal communication to try to infer emotional state, attitude, and ultimately intent. Behavioral animation uses a hierarchical structure to define crowds, herds, and schools of characters [13, 65, 69, 77]. Such systems define low-level movements, such as a character's basic means of locomotion, as well as higher level behaviors, such as schooling or flocking, obstacle avoidance, pursuit and evasion, and aggressiveness. However, these systems frequently omit the "middle" layer: variations on low-level movements to create different expressions, personalities, and intents. Further, behavioral systems use characters represented by simple models, which offer a very small range of movements and expression. Recent feature films have used crowd simulations to create armies in Disney's Mulan [36], worker ants in Pacific Data Images' Antz [67], and Tippett Studios' bugs in Starship Troopers [38]. Casts of characters were created by varying body shapes, props, and clothing; however, the movements of these characters were drawn from a small pre-defined library of motions generated by keyframing, motion capture, stop motion, and procedural animation.

Games, virtual environments, and computer-generated animations have obvious applications in entertainment; however, synthetic characters also enable a participant to experience a wide variety of scenarios with cognitive and decision-making challenges without the physical consequences or harm that could result in the real world. For instance, prototype systems already exist for training battlefield medics [2, 19, 20], firefighters [12], and surgeons [58, 28].

A system that allows users to customize basic movements based on a character's personality, mood, and attitudes is the first step towards simplifying the development of a repertoire of characters with a wide range of expressiveness. By selecting a general human movement description language to customize motions, such a system can also lead to the generation of virtual characters from different cultures. Although certain gestures are culture-specific, the descriptions of the movements of individuals with the same emotions and intent are often similar. A playful child moves with free, indirect abandon; an aggressor makes strong, direct, and sudden attacks; and a soldier marches in bound, sustained strides. Our motion control paradigm allows the user to customize movements through general qualitative descriptors. This is the first step towards enabling a system where a user creates a character by specifying attitudes and intentions, which in turn may eventually lead to the automatic generation of appropriate movements from speech text, a storyboard script, or a behavioral simulation.

Chapter 2

Related Work

In this chapter, we examine the current approaches to motion control used in computer animation. Then, we discuss techniques that have been developed to specifically address emotion and expression in animated movement. Next, we briefly overview the biomechanics research and related work on human movement. We conclude with a survey of previous efforts to combine dance notation and computers.

2.1 Motion Control

The basic approaches to motion control for articulated figures include: keyframing, dynamic simulation, motion capture, and procedural methods. Each differs in the amount of user specification, the skill required to generate good-looking animations, the ease of editing, and the generality of its application.

Keyframing articulated figures uses kinematics, requiring the user to specify the positions and orientations of a figure and its limbs at major points in a movement. The computer automatically generates the frames in-between the specified keyframes. With keyframing, the animator has a lot of control over the final animation. To change the generated animation, the animator can add, remove, or change keyframes. The disadvantage of keyframing is that it requires a certain artistry and skill in order to capture life-like movements and personality. Current 3D keyframing systems provide tools so users can view and edit curves representing the changing parameter values over an animation. For instance, users are often able to edit joint angle interpolation curves, a motion's velocity curve, or an object's trajectory in space. Another technique, inverse kinematics, reduces the amount of user input by allowing users to specify end-effector locations to determine the posture of a figure [89]. These tools aid the tedious specification tasks of the animator but fail to offer assistance in capturing expression. Some systems also provide shape blending tools that morph a source object into a target object over a specified amount of time. This technique is typically used for facial or soft object animation and not for movements of articulated figures.
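As a minimal sketch of the in-betweening step described above (a fixed linear rule; real keyframing systems expose editable interpolation curves instead), consider:

```python
def inbetween(keys, frame):
    """Interpolate a joint-angle value at `frame` from sparse keyframes.

    keys: list of (frame_number, angle) pairs, sorted by frame number.
    Values are clamped outside the keyed range.
    """
    if frame <= keys[0][0]:
        return keys[0][1]
    for (f0, a0), (f1, a1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # normalized time between the two keys
            return a0 + t * (a1 - a0)     # straight-line in-between
    return keys[-1][1]

# Elbow angle keyed at frames 0, 10, and 25; the computer fills frames 1-24.
angles = [inbetween([(0, 0.0), (10, 90.0), (25, 45.0)], f) for f in range(26)]
```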

Dynamic simulation (sometimes called physics-based modeling) uses the physics of bodies in motion to produce animations with a realistic impression of weight, friction, inertia, and other physical properties. However, dynamic simulation requires solving the equations of motion, which is computationally expensive and may result in unstable solutions. Also, users must provide a detailed physical description for all objects in the scene [84, 68]. This description must include object properties, such as mass (along with its distribution over the object), moments of inertia, segment lengths, and joint limits, as well as any forces or torques acting on the body, any frictional or damping effects that might be triggered through collisions or other movements, and any effects due to energy expenditure and transfer. Since dynamic simulation generates motions entirely from the physical description, users have limited and indirect control over the final animation, and editing proves non-intuitive. Efforts to simplify control of dynamic systems while maintaining physically correct animations include blending kinematic and dynamic techniques [42, 8, 84, 14, 48], customizing controllers for specific tasks [68, 39], and automatically generating controllers using genetic algorithms or control theory [73, 72, 35, 62, 54]. While these methods capture the physical realism of bodies in motion, they do not address how to change the movements to display different expressions or intentions. Spacetime constraints methods attempt to provide the user with better control over generated animations and editing capabilities [85, 21, 55]. However, thus far, editing of how a motion is performed using spacetime constraint systems is limited to qualities that directly translate to physical criteria. For instance, "don't waste energy" minimizes kinetic energy, while "land hard at the end of a jump" maximizes the contact force on landing. On the other hand, achieving expressive qualities such as "careful" or "meandering" is not addressed. Also, spacetime constraints methods are non-interactive and have been limited to movements of simple creatures.

Motion capture uses electro-magnetic or optical technologies to collect position and orientation data of real human movement, which can then be used to animate articulated figures. Motion capture techniques can capture the small nuances of an individual's movement, enabling an animated character to display the intended expression of the original human performer; however, motion capture requires expensive equipment and quickly becomes impractical when an animation involves a large number of characters and/or movements. Further, motion capture offers only indirect control over produced animations: by requesting changes from the performer and re-capturing the subsequent motions. Gleicher introduced a spacetime constraints technique for adapting motions from one character to produce motion specifications for other, differently sized characters [33, 34]. For instance, using motion capture data of two people swing dancing, he changed the sizes of the dancers, but was able to maintain their foot contact with the floor and their hand contact with each other using his retargetting technique. Bindiganavale and Badler present a method that automatically recognizes and maintains spatial, as well as visual, constraints while mapping them to characters of different sizes [10]. For instance, the data for an adult grabbing a mug on a table and drinking from it is modified to animate a child drinking from the mug; they ensure that the child grasps the mug at the correct location and brings it to his lips. These methods, however, provide no direct means of editing the expressive qualities of captured movement.

Procedural animation methods define how a figure moves over time using a model (often mathematical) that responds to some external input, either from a human user, other procedures, or some means of sensing the current state.[1] Often procedural methods are geared towards specific applications. For instance, procedural methods have been used in animating a number of physical phenomena, such as cloth movements, waves, and particle phenomena [29]. Bruderlin and Calvert present a procedural system for animating the running movements of an articulated human-like figure [15]. Their system, RUNNER, provides a user with high-level parameters and attributes to interactively alter an animation to display a wide variety of human running styles. Parameters determine the running stride and include velocity, step length, step frequency, flight height, and level of running expertise. Attributes allow the user to individualize a run by setting movement variables for the arms, torso, pelvis, and legs. RUNNER modifies a default running animation in real-time according to changes specified by its user.

[1] We note that the broad definition given for procedural animation gives a disjoint set of techniques, some of which, by our categorization, use other fundamental approaches to animation, notably physically-based methods.

Behavioral animation, a subset of procedural methods, uses rules to define virtual creatures that can sense and react to the environment. Various researchers have developed behavioral animation systems to generate animations of multiple creatures with varying personalities and/or goals [69, 77, 5, 60]. Tu and Terzopoulos create a "virtual marine world" filled with different fishes [77]. They define three types of fish (predators, prey, and pacifists), each with a different set of intentions and behaviors. Different fish types react to their environments in different ways; however, since they are all physically modeled using the same spring-mass model, their low-level movements are all the same. We note, though, that the basic movement of real fish is fairly unexpressive (at least to this casual observer!). Blumberg and Galyean present a method of directing autonomous creatures at multiple levels [13]. Their hierarchical organization of behaviors allows commands that reflect emotional state to initiate lower-level behaviors. In their example, a dog told to display a happy state induces behaviors to set appropriate positions for his ears, tail, and mouth, and may also issue a meta-command for the dog to use a more jovial gait. Although their system allows users to change the expression of characters, each expression and its manifestations must be defined separately. Badler and his collaborators have implemented behavioral systems with a more complex model: an articulated human figure. In the Hide and Seek project, characters are assigned different roles as hiders or as the seeker in on-the-fly animations of the children's game of the same name [60]. The role of the character influences its goals and subsequent behavior towards other characters in the scene, but the low-level movement (locomotion) of all characters is the same and fairly expressionless.

Procedural methods ease the work required of the animator by encoding some information on how things move. Our motion control paradigm takes this approach and applies it to a general-purpose application: the expressivity of movements. We parameterize the non-intuitive, quantitative aspects of movement and provide the user with more meaningful, interactive control through textual descriptors (Effort Elements). Since our motion control paradigm is based on a comprehensive language for describing movements, it can be applied to any type of movements and objects.

2.2 Expressive Movement

Several researchers have specifically addressed the generation of expressiveness in movement. Their approaches involve adding expressiveness to neutral motions [64, 79, 1] or providing editing tools to modify expression or fit different constraints [16, 86, 70]. Most of these techniques are not specific to any particular motion generation method and are valuable tools for making existing motions more usable; however, they may prove costly or difficult to use in generating the range of human expressivity. Other researchers have developed systems that generate animations from high-level specifications [61, 47].

Ken Perlin gives the "visual impression of personality" to animated puppets using stochastic noise functions [64]. The user controls the puppet through a panel of buttons representing a set of primitive actions. The system smoothly blends the selected actions into a coherent animation. The user can vary expression by adding a random component to joints, modifying the bias on joints, and varying the transition times for different primitive actions. These methods give characters a dynamic presence and a random sense of attentiveness, which play a role in responsive virtual agents, but do not necessarily present a natural, human-like demeanor. Also, varying expression by modifying joint angle frequency and amplitude functions is non-intuitive.
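A rough sketch of the "random component on joints" idea follows; note that smoothed white noise stands in here for Perlin's actual noise functions, and the function and parameter names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_joint(base_angles, amplitude=2.0, smoothness=8):
    """Add a smoothed random component to a joint-angle curve.

    Simplification: filtered white noise replaces true Perlin noise;
    amplitude and smoothness are invented knobs.
    """
    noise = rng.normal(0.0, 1.0, len(base_angles))
    kernel = np.ones(smoothness) / smoothness       # crude low-pass filter
    return base_angles + amplitude * np.convolve(noise, kernel, mode="same")

held_pose = np.full(120, 30.0)   # an arm held at 30 degrees for 120 frames
lively = noisy_joint(held_pose)  # same pose with a subtle "dynamic presence"
```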

Unuma, Anjyo, and Takeuchi capture a wide variety of expression in human locomotion [80, 79]. They use motion capture to collect joint angle data of a human subject performing neutral and various emotion-influenced locomotion. From the discrete data, they approximate the original movement with a continuous rescaled Fourier function model. Their Fourier function model allows them to smoothly transition between two captured motions using interpolation, as well as to generate exaggerated motions using extrapolation. For instance, they can interpolate between a "normal" walk and a "tired" walk to get various degrees of "tiredness"; more importantly, they can extrapolate to get a "brisk" and an exaggerated "tired" walk. Further, they generate Fourier characteristic functions for different emotions by taking the difference between the Fourier coefficients of a functional model for an emotion-influenced locomotion and those for a neutral locomotion. This allows them to superimpose emotions onto models for other motions. For instance, they can superimpose "briskness" or "tiredness" onto a "run" model, even if the characteristic functions for "brisk" and "tired" were defined from brisk and tired walks. In addition, they provide interactive controls for step, speed, and hip position. In [80], they show examples of walking and running with the following emotions: hollow, vacant, graceful, cold, brisk, happy, vivid, and hot. They seem to have captured a wide variation in movements; however, their methods work only on cyclic motions. Also, although users can interactively adjust values for the modeled emotions, the addition of other emotions requires repeating the lengthy process of motion capture on a human subject and generating its functional model and characteristic function.
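The coefficient arithmetic behind this interpolation and extrapolation can be sketched as follows (our notation and invented coefficient values, not Unuma et al.'s data): an emotion's characteristic function is the coefficient difference from the neutral gait, and a scale factor below 1 interpolates while a factor above 1 exaggerates.

```python
import numpy as np

def synthesize(coeffs, t, period=1.0):
    """Evaluate a cyclic joint-angle signal from Fourier coefficients.

    coeffs: array of (a_k, b_k) rows for harmonics k = 0, 1, 2, ...
    """
    w = 2.0 * np.pi / period
    return sum(a * np.cos(k * w * t) + b * np.sin(k * w * t)
               for k, (a, b) in enumerate(coeffs))

# Invented coefficients for a neutral walk and a "tired" walk (one joint).
neutral = np.array([[0.0, 0.0], [30.0, 5.0], [3.0, 1.0]])
tired   = np.array([[-5.0, 0.0], [22.0, 4.0], [2.0, 0.5]])

tired_char = tired - neutral   # characteristic function of "tiredness"

s = 1.5                        # 0..1 interpolates; > 1 extrapolates (exaggerates)
exaggerated = neutral + s * tired_char

# The same characteristic function could be added to a "run" model instead,
# superimposing tiredness on a different motion.
angle_at = [synthesize(exaggerated, t) for t in np.linspace(0.0, 1.0, 100)]
```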

Amaya, Bruderlin, and Calvert present a more general method for adding emotion to motions. They derive emotional transforms from motion capture data by quantifying the differences between neutral and emotion-driven actions using the speed of the end-effector and the spatial amplitude of joint angle signals [1]. They then use the emotional transforms to add emotion to neutral actions. Further, by dividing the joints of an articulated human figure into joint categories, they are able to apply emotional transforms derived from one part of the body to another part of the body. For instance, an emotional transform derived from angry and sad arm movements (drinking motions) was applied to the legs to generate angry and sad kicking motions. Further, their emotional transforms can be applied to simulated, keyframed, and procedurally generated motions. The authors use the technique to capture ten emotions or moods: neutral, angry, sad, happy, fearful, tired, strong, weak, excited, and relaxed. By separating the definition of basic movements from the transforms required to generate emotions, the authors are able to generate a broad range of movements with different types of expressivity. They note, however, that individuals differ in the ways they express themselves, requiring different transforms to represent different personalities, genders, cultures, and ages, which could result in a large database of movement transforms.

In [16], Bruderlin and Williams introduce a set of techniques to modify existing motion data (generated from motion capture, keyframing, or procedural animation). By treating motion parameters such as joint angles or coordinates as sampled signals, they can apply techniques from image and signal processing to modify the animated motions. Multiresolution motion filtering passes a motion parameter signal through a series of filters, decomposing the signal into a set of bandpass filter bands. An animator can adjust the amplitudes of high, middle, or low frequency bands to add a nervous twitch, exaggerate the movement, or constrain joint ranges, respectively. Multitarget motion interpolation blends two different motions into a single motion. Dynamic timewarping resolves timing differences between two motions to be blended. Waveshaping modifies input motions using shaping functions and is useful for maintaining joint limits or adding subtle effects. Motion displacement mapping permits local shaping of a signal while maintaining the global shape of the signal. This allows an animator to change select keyframes (for instance, to satisfy certain constraints or change an end-effector position) while maintaining the overall character of the original motion. The system fits a spline curve through the displacements in each degree of freedom and adds it to the original curve signal.
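A hedged sketch of the displacement-mapping idea (our simplification, using a generic cubic spline and pinning the displacement to zero at the clip boundaries, one simple boundary choice among several):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def displace(signal, edits):
    """Motion displacement mapping for one degree of freedom.

    signal: original parameter curve, one value per frame.
    edits:  dict {frame: new_value} of animator-adjusted poses;
            assumed to lie strictly inside the clip.
    """
    frames = np.array(sorted(edits))
    disp = np.array([edits[f] - signal[f] for f in frames])  # displacement at each edit
    n = len(signal)
    # Pin the displacement curve to zero at both ends so untouched
    # regions stay put, preserving the global shape of the motion.
    xs = np.concatenate(([0], frames, [n - 1]))
    ys = np.concatenate(([0.0], disp, [0.0]))
    return signal + CubicSpline(xs, ys)(np.arange(n))

curve = 45.0 * np.sin(np.linspace(0, 2 * np.pi, 120))  # a joint-angle curve
edited = displace(curve, {60: 80.0})                   # raise the pose at frame 60 only
```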

Witkin and Popović describe a technique, similar to motion displacement mapping, for editing captured or keyframed animation by warping motion parameter curves [86]. The animator modifies the pose at particular frames, which are used as constraints on a smooth deformation to be applied to the captured motion curves. The deformation satisfies the constraints while maintaining the fine details of the captured motion. A large number of realistic motions may be created from a single prototype motion sequence using just a few keyframes to define the motion warp.

Rose, Cohen, and Bodenheimer present a method for leveraging existing motions: they interpolate between structurally similar example motions along multiple dimensions to create new motion [70]. Using an off-line authoring system, they parameterize the motion "verbs" with "adverbs" and create a verb graph to specify transitions between verbs. At runtime, these structures allow the user to modify animations in real-time by changing adverb settings.

Koga, Kondo, Kuffner, and Latombe present a task-level animation system that generates arm motions of a human figure moving an object to a goal location [47]. They introduce a planner to compute collision-free paths and an inverse kinematics algorithm for human arms based on neurophysiological studies. Though their system generates a complete animation with minimal user specification, their focus is on the "intention" of moving an object from one location to another and not on the underlying movement qualities of the character and their expressive manifestations.

Morawetz and Calvert introduce a framework for an animation system, which perhaps most closely matches our high-level goals, but takes a very different approach [61]. They implement a portion of this framework as the GESTURE system, which uses a mock expert system to add secondary motion to walking characters based on their user-specified "personality" and "mood". Secondary movements are the subtle movements that don't serve to satisfy a particular goal, but often reflect one's subconscious, inner attitudes and play an important role in nonverbal communication. For instance, reaching for a cup or opening a door are goal-directed primary movements, while tapping one's foot or scratching one's head are examples of secondary motion. Personality is specified through slider values for extrovert/introvert, cheerful/gloomy, assertive/passive, and domineering/submissive. Moods include boredom, nervousness, fatigue, impatience, and fear. For motion specification, GESTURE uses a graph to facilitate the sequencing of movements and to allow for one movement to interrupt another. GESTURE takes as input a high-level script, which chronologically lists actor movements and start times using textual descriptions such as "walk forward" or "scratch head with left hand", and frame numbers. Each high-level movement must be defined using a pre-defined gesture specification function. The system generates a detailed animation script, specifying all joint angles for all characters in an animation.

2.3 Biomechanics

There has been a significant amount of research on the biomechanics of human motion; however, the field is still fairly young, with no definitive models and several working hypotheses. For instance, the minimum-jerk hypothesis suggests that people performing skilled tasks move in a maximally smooth manner, minimizing the time rate of change of acceleration [31]. Another model, the minimum-torque-change hypothesis, suggests that motions display minimized torque derivatives [78].
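In one standard formulation (our notation, for a hand trajectory x(t) of duration T; not necessarily the notation of [31]), the minimum-jerk criterion selects the trajectory minimizing the integrated squared third derivative of position:

```latex
\min_{x(\cdot)} \; \int_{0}^{T} \left\| \dddot{x}(t) \right\|^{2} \, dt
```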

Zatsiorsky provides a good introductory text on elementary movement concepts as well as the biomechanical research literature [88]. As none of the biomechanical models can explain all the complexities of human movement, we take the artistic approach in developing our basic movement model (Chapter 4). We further justify this approach by the fact that actors (human as well as animated) tend to move differently from humans in everyday situations: they must exaggerate their movements, emotions, and reactions to capture an audience's attention.

2.4 Computers and Dance Notation

For the past twenty years, people have been exploring ways to combine computers and dance notation, both to simplify the process of notating dance and to facilitate computer animation by using notations already established to describe human movement. Several software programs exist for editing and printing dance notation scores. For instance, there is LabanWriter [52] and LED [40] for Labanotation, and MacBenesh [56] for Benesh Movement Notation. Others have used notation, Labanotation in particular, to animate human figures [3, 18]. These projects have focused on the structural aspects of movement, which are specified by Labanotation, but have not addressed the more qualitative aspects of movement provided by Effort. Bishko suggests analogies between the "Twelve Principles of Animation" [75] and Laban Movement Analysis (LMA) [11]. She shows that there is an abstract relationship between LMA and animation techniques, but does not provide a practical means of exploiting this relationship. Badler was the first to propose the use of Effort as a higher level of control for human figure animation [4]. Our work here provides a method of implementing just such a system.

Since Effort provides a universal theory of human expression and a vocabulary for movement dynamics, our motion control paradigm eliminates the need to build a database of specific expressions and behaviors. Also, since Effort is expressed by combining and varying the magnitude along just four motion factors, we are able to re-use our Effort model to specify all types of expressive movements without requiring off-line modeling for new expressions. Further, our Effort model is computationally efficient, performing in real-time and allowing for interactive motion generation and editing.

Chapter 3

Background

Describing human motion is a formidable task. In dance alone, there are over a dozen significant notations, each varying in approach, aims, strengths, and weaknesses [37]. The Random House Word Menu has over five hundred verbs of motion [32]. Laban describes movement as "one of man's languages" [49]; unfortunately, translating movement into a non-visual, spoken or textual language is not straightforward.

To achieve our goals for a means of specifying expressive movements of computer-animated characters, we sought a language for describing the qualitative aspects of movement that was:

- systematic, with a limited number of terms (the set of adverbs in the English language is unwieldy),
- intuitive, so that users do not have to learn a new notation or express how they want a movement to be performed in abstract, quantitative terms,
- objective enough that different people can agree on the meanings of the terms, and
- not limiting on the range of represented movements.

We examined the nonverbal communication and dance notation literature for methods of describing, notating, and recording human movement. We found that the Effort component of Laban Movement Analysis was most appropriate for our purposes. This chapter surveys the research in nonverbal communication and describes how other notations used to study human behavior are inadequate for our purposes. Then, we provide justifications for the use of Laban's notation as a research tool, give a brief history of Laban's work, and describe Effort as a method for qualitative movement description.

3.1 Nonverbal Communication

Nonverbal communication encompasses research in a variety of disciplines and seeks to answer a wide range of questions. Significant advances have been made in our understanding of nonverbal communication; however, the field still seems to suffer from overlapping terminology, categorizations, and ill-defined boundaries. Researchers in nonverbal behavior include specialists from psychology, psychiatry, anthropology, sociology, dance notation, ethology, education, and the performing arts. They seek theories on the expression of emotion and personality, psycho-pathological diagnoses and treatments, cultural characteristics of movement, developmental motor processes, psychological implications of gesture and posture, influences of affect and attitude, comparison to animal behavior, and a slew of other issues. Body movement is a key area of study, but facial expression, visual behavior, paralanguage, proxemics (use of space and distance), tactile behavior, and multichannel communication are also considered major topics in nonverbal communication. We do not attempt to integrate or categorize the large body of research in nonverbal communication and body movement, since that is beyond the scope of this work. However, we point the interested reader to the readings and commentaries in [83], which offer some insight into the study of body movement and gesture, and several extensive bibliographies on nonverbal communication [23, 43, 44].

Our goal differs from much work in nonverbal communication in that we are not seeking universal theories on the communicative implications of movements or any underlying psychological meaning; nor are we seeking comparisons or generalizations between individuals of different mental states, ages, cultures, or even species. Instead, we seek a comprehensive description system of human movements that may provide a useful interface for controlling figure animation. Thus, we are most interested in description and notation systems for coding body movements. Wallbott surveys nonverbal behavior measurement and observation systems, noting that they tend to form a continuum between objective systems making physical measurements and subjective notations that require human observers making inferences and interpretations [81, 82]. We seek a notation somewhere between these two extremes. We desire a system that describes qualitative (not quantitative) aspects of movement, yet is objective enough to prove intuitive to laypersons and to ensure inter-observer agreement. Many of the systems for measuring body movement do not meet our needs, because they are cumbersome and non-intuitive, or focus solely on spatial aspects of movement. Similarly, many dance notations are either particular to a specific style of dance, lack methods of describing movements not used in dance, or have no means of describing the expressive nature of movement. There are three major dance notation systems currently in use: Benesh Movement Notation [9], Eshkol-Wachmann [30], and Labanotation [41]. Benesh notation has been used primarily to notate ballets and was adopted by the Royal Ballet in London. Eshkol-Wachmann notation uses a numerical system for recording movement, focusing on joint angles and spatial patterns. Only notations based on the work of Rudolf Laban seem to have the potential to satisfy our needs.

The use of Labanotation and its derivative notations has extended beyond the dance community to become a valuable tool for nonverbal communication research. Davis evaluates the "logic and consistency" of Effort-Shape Analysis (an offshoot of Labanotation),[1] promoting its use as a research tool [22]. In [6], Bartenieff and Davis justify the use of Laban's notation for the study of behavior. Their justifications match our requirements for a language to describe and control computer figure animation. They point out that many behavioral studies use subjective, detailed descriptions of movement and postures, and argue for the need for a systematic, objective notation with a limited number of descriptive terms. They demonstrate that Effort-Shape provides such a system, and further, they discuss the hypothesis that there may be neurophysiological support for the selection of its basic variables. In [24], Davis further details the evolution of Laban's work, describing his students' extension and application of his ideas, and surveys the early use of Laban analysis in behavior research.

[1] Effort-Shape later evolved into Laban Movement Analysis.

3.2 Laban Movement Analysis

Rudolf Laban (1879-1958) made significant contributions to the study of movement, bringing together his experiences as a dancer, choreographer, architect, painter, scientist, notator, philosopher, and educator. He observed the movement of people performing all types of tasks: from dancers to factory workers, fencers to people performing cultural ceremonies, mental patients to managers and company executives. His theories on movement and its extensions by his students and colleagues have resulted in a rich vocabulary for describing and analyzing movement. He also developed a movement notation system, which has evolved and expanded into a number of related and overlapping variations, including Labanotation, Kinetography Laban, Effort-Shape, and Laban Movement Analysis. The International Council on Kinetography Laban (ICKL) was established in 1959 to standardize the notation and to unify some of the differences between the various forms. Labanotation, which was adopted by the Dance Notation Bureau [17], focuses on the structural aspects of movement and provides a very exact notation that allows dancers to reproduce a dance solely from its score [41]. Kinetography Laban is essentially the same as Labanotation except for minor differences in notation usage and rules [45]. Effort-Shape developed somewhat independently and with an emphasis on the qualitative, dynamic aspects of movement [6]. Effort-Shape spawned the development of Laban Movement Analysis (LMA) [7, 59, 27, 57], which is promoted by the Laban/Bartenieff Institute for Movement Studies [63]. LMA has evolved into a more comprehensive system and has been used in dance, drama, nonverbal research, psychology, anthropology, ergonomics, physical therapy, and many movement-related fields [22, 6, 24]. The variance between the systems is partially due to historical reasons, and all remain consistent with Laban's original philosophies.

Moore and Yamamoto enumerate five principles that are fundamental to Laban's theories ([59], Chapter 9):

1. Movement is a process of change.
2. The change is patterned and orderly.
3. Human movement is intentional.
4. The basic elements of human movement may be articulated and studied.
5. Movement must be approached at multiple levels if it is to be properly understood.

These basic principles form the theoretical foundation of LMA.

LMA is divided into four major components: Body, Space, Shape, and Effort.[2] Together these components constitute a language for describing movement. Body deals with the parts of the body that are used and the initiation and sequencing of a motion. Space describes the locale, directions, and paths of a movement. Shape involves the changing forms that the body makes in space. Effort describes the qualitative aspects of movement and is often compared to dynamic musical terms such as legato, staccato, forte, dolce, etc., which give information on how a piece of music should be performed.

Movement is often described in terms of actions, or what one does. However, we are interested in how one moves, which is precisely what is provided by Effort. Moore explains that, "While the uses of [S]pace and of the [B]ody reveal the mover's purposes, Laban believed that the uses of energy, or the dynamics of an action, [Effort] were particularly evocative of intentions" ([59], p. 185). Maletic goes further to say that "Laban sees Effort as the inner impulse (a movement sensation, a thought, a feeling or emotion) from which movement originates; it constitutes the link between mental and physical components of movement" ([57], p. 179). She continues, saying that "one may conclude that the concept of Effort unifies the actual, physical, quantitative and measurable properties of movement with the virtual, perceivable, qualitative, and classifiable qualities of movement and dance" ([57], p. 101). We note that neither Laban nor anyone else has ever specified these "quantitative ... properties of movement"; this is precisely the contribution of our Effort model (Chapter 4).

[2] Throughout this document, we capitalize key terms defined by LMA to distinguish them from their common English language usage.

E ort mo del Chapter 4.

Effort comprises four motion factors: Space, Weight, Time, and Flow. Each motion factor is a continuum between two extremes: (1) indulging in the quality and (2) fighting against the quality.[3] These extreme Effort Elements are seen as basic, "irreducible" qualities; they are the smallest units of change in an observed movement. The eight Effort Elements are: Indirect, Direct, Light, Strong, Sustained, Sudden, Free, and Bound. Table 3.1 illustrates the motion factors, listing their opposing Effort Elements with textual descriptions and examples.

[3] Some meaning is lost in the translation of Laban's original German texts into English. The German word "antrieb" has been translated into "effort". "Antrieb" has an implication of the existence of a force that drives or propels something. The German term "ballung" has markedly different connotations from its translation of "fighting". Ballung connotes a coming together or clustering; it is a condensing or agglomeration. For instance, the term ballung might be used to describe the coming together and condensing of particles to form storm clouds [66].

We note that we are not trying to provide a tool to facilitate the recording of Effort; rather, we use the language provided by Effort as an interface for controlling expressive figure movements. Aside from handling the individual Effort Elements, there are several other issues that must be handled in order to capture the full extent of expressivity afforded by Effort. First, human movements span the range along each motion factor continuum. Also, movements rarely involve only one motion factor; more often they display combinations of Effort Elements from different motion factors and at varying intensities. Another issue is phrasing. Certified Movement Analysts (CMAs) trained in Laban Movement Analysis observe movements as a subtle sequence of Effort changes or phrases. For instance, in general terms one might say that a movement was "light". However, a skilled Effort observer might observe that, "The movement began with a quickness, then became light, and ended with a sustained indirectness." To exploit the richness of Effort descriptions, one must allow users to use both general Effort descriptions as well as detailed Effort phrasing. Finally, we note that individuals tend to display particular Effort patterns, although they may consciously learn to expand their Effort repertoire. An Effort-based system should support individualized expression by allowing users to customize Effort Element settings for different characters. We address these issues in Chapter 4, where we describe our Effort model.

We believe that the Effort descriptions of Laban Movement Analysis provide an adequate interface to controlling the expressiveness of computer animated figures. Effort seems to generate the space of human movements, with its use justified by numerous researchers in a variety of disciplines. Further, Effort proves intuitive by providing a small number of textual descriptors, as compared to detailed, cumbersome notations or mathematical and physics-based parameters for describing expression.


Space – attention to the surroundings
    Indirect: flexible, meandering, wandering, multi-focus
        examples: waving away bugs, slashing through plant growth, surveying a crowd of people, scanning a room for misplaced keys
    Direct: single focus, channeled, undeviating
        examples: pointing to a particular spot, threading a needle, describing the exact outline of an object

Weight – attitude towards the impact of one's movement
    Light: buoyant, delicate, easily overcoming gravity, marked by decreasing pressure
        examples: dabbing paint on a canvas, pulling out a splinter, describing the movement of a feather
    Strong: powerful, having an impact, increasing pressure into the movement
        examples: punching, pushing a heavy object, wringing a towel, expressing a firmly held opinion

Time – lack or sense of urgency
    Sustained: lingering, leisurely, indulging in time
        examples: stretching to yawn, stroking a pet
    Sudden: hurried, urgent
        examples: swatting a fly, lunging to catch a ball, grabbing a child from the path of danger, making a snap decision

Flow – amount of control and bodily tension
    Free: uncontrolled, abandoned, unable to stop in the course of the movement
        examples: waving wildly, shaking off water, flinging a rock into a pond
    Bound: controlled, restrained
        examples: moving in slow motion, tai chi, fighting back tears, carefully carrying a cup of hot liquid

Table 3.1: Motion Factors and Effort Elements

Chapter 4

Effort Model

In order to use Effort for animation, we needed a quantitative Effort model. This chapter discusses the method we used to build an empirical model of Effort using quantitative, low-level movement parameters. We describe the set of low-level movement parameters and show how Effort settings are used to compute movement parameter values.

Our current Effort model has been developed with the intent of creating expressive arm movements, although many aspects of our model are applicable to expressive movements displayed in other body parts. We elected to focus on arm movements because the arms are a primary means of bodily expression and communication. Also, many task-oriented movements involve the use of the arms. Further, the extensive reach and joint angle ranges in the arms allow for a wide variation in movements as compared, for instance, to head nods or torso bends. Finally, although a number of researchers have addressed locomotion and how it is influenced by various emotions, expressive arm movements have been essentially neglected.

4.1 Translating Effort into Movement Parameters

The translation of the qualitative Effort Elements into quantitative, low-level movement parameters was the key task in implementing a system using Effort for motion control. Initially, we tried to deduce movement characteristics from motion capture data. We collected 3D motion capture data and made a video recording of a Certified Movement Analyst (CMA) trained in Laban Movement Analysis performing several examples of each combination of two and three Effort Elements.¹ We used 12 electromagnetic sensors: one each on the head, sternum, stomach, and back; and one on each shoulder, elbow, wrist, and knee. Analysis of the motion capture data led to only the most obvious conclusions, e.g.: Sudden is short in duration, Sustained is longer in duration, and Strong tends to have large accelerations. The inability to deduce the more subtle characteristic qualities of Effort arose from several factors. First, Effort reflects complex inner physiological processes that are related to a being's inner drive to respond to the physical forces in nature. Thus, Effort is embodied in the whole person and manifested in all body parts, whereas we were interested solely in the physical embodiment and visual result of inner attitudes on movement, particularly that of the arms. Furthermore, numerous other movements such as visual attention, changes in muscular tension, facial expressions, and breath patterns are not adequately captured by current motion capture technology.

As a result, we turned to other methods for developing an empirical model of Effort. Initially, we used visual analysis of the playback of the motion capture data of a CMA performing Effort combinations as a stick figure formed by connecting the blocks representing the sensors (Fig. 4.1). This allowed us to focus our attention solely on the influence of Effort on one's gross body movement by factoring out other, more subtle manifestations of Effort (facial expression, visual attention, etc.). We then made repeated and careful analyses of the video of the Effort performance and used descriptions of Effort from the literature [7, 27, 57, 59] to deduce some of the tangible qualities of Effort that we needed to capture. In determining the underlying motion parameters to use to model the Effort Elements, we experimented with and extended computer animation methods such as velocity curves and interpolation, as well as traditional animation principles such as anticipation, overshoot, and squash and stretch [53, 75]. Finally, we performed numerous experiments by having a CMA [66] mark the Efforts present in video segments and having her experiment with low-level parameters in our system, EMOTE (Chapter 5). This also elicited further suggestions on qualities of Effort that would improve our portrayal of the Effort Elements.

¹Movements displaying a single Effort Element or a combination of four Effort Elements are rare, and thus were not included in the recording.

Figure 4.1: Stick Figure Formed By Motion Capture Sensors

4.2 Low-level Movement Parameter Definitions

In selecting the set of low-level movement parameters, we chose a kinematic model over a dynamics-based or hybrid implementation of Effort. Dynamics-based techniques require computationally costly calculations, conflicting with our goal of interactivity. Also, although dynamic simulations generate physically accurate motions, these motions often lack the nuances and flourishes of expressive human movements. Further, kinematic models have proven reasonable at portraying physics-based models [87], and our models for Flow and Weight combinations adequately generate the impression of mass, force, inertia, and other physical phenomena.

In the following section, we define the set of low-level movement parameters in our model. These are divided into three categories: those that affect the arm trajectory, those that affect timing, and flourishes that add to the expressiveness of the movement.

4.2.1 Trajectory Definition

We define the arm trajectory for a given animation with two parameters:

• Path curvature determines the straightness or roundness of the path segments between keypoints. We control the path curvature using the tension parameter introduced by Kochanek and Bartels for interpolating splines [46]. The tension parameter ranges from −1 to +1. Decreasing the tension value gives rounder bends at the keypoints, while increasing the value results in a tighter curve with straighter segments between points.

• The interpolation space defines the space in which the interpolation is performed: end-effector position, joint angle, or elbow position.

For end-effector interpolation, we use the end-effector position and swivel angle stored for each keypoint. We define an interpolating spline between the positions at keypoints, using the tension parameter to determine the curvature of the path. We also interpolate between swivel angle values with an interpolating spline. For joint angle interpolation, we compute and store the shoulder and elbow rotations at keypoints. We then generate an interpolating spline between the elbow angle values at keypoints and perform spherical linear interpolation to determine the shoulder rotations. For interpolation in elbow position space, we compute and store the elbow position at keypoints using the posture defined by the end-effector position and swivel angle. We then define an interpolating spline between these positions, which are later used to set the shoulder rotations. The elbow rotations for elbow position interpolation are the same as those computed for end-effector interpolation. Interpolation in elbow position space gives smooth elbow motions with a less path-driven movement than interpolation in end-effector position space.

The Effort settings determine which interpolation space is used. The default interpolation space is end-effector position. Free movements use angular interpolation to achieve a less path-driven and less controlled movement. Our empirical studies show that Indirect movements tend to be driven by the elbow, and thus we interpolate them in elbow position space.
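To make the tension parameter concrete, here is a minimal sketch (ours, not code from EMOTE) of Kochanek-Bartels interpolation with bias and continuity fixed at zero, so that tension is the only free control; the function names are illustrative:

    import numpy as np

    def kb_tangents(points, tension):
        """Kochanek-Bartels tangents with bias and continuity fixed at 0 [46].
        tension = +1 zeroes the tangents (straight segments); tension = -1
        doubles them, giving rounder bends at the keypoints."""
        pts = np.asarray(points, dtype=float)
        tans = np.zeros_like(pts)
        for i in range(len(pts)):
            prev_p = pts[max(i - 1, 0)]
            next_p = pts[min(i + 1, len(pts) - 1)]
            tans[i] = 0.5 * (1.0 - tension) * (next_p - prev_p)
        return tans

    def eval_segment(points, tension, i, s):
        """Cubic Hermite evaluation between keypoints i and i+1 at s in [0, 1]."""
        pts = np.asarray(points, dtype=float)
        tans = kb_tangents(pts, tension)
        h00 = 2*s**3 - 3*s**2 + 1          # Hermite basis functions
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return h00*pts[i] + h10*tans[i] + h01*pts[i+1] + h11*tans[i+1]

With tension = +1 the tangents vanish and the evaluated points lie on the straight segment between keypoints; as tension decreases toward −1 the tangents grow and the bends at the keypoints become rounder, which is exactly the behavior the path curvature parameter relies on.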

4.2.2 Parameterized Timing Control

We separate timing control from trajectory definition by using a variation of the double interpolant method introduced by Steketee and Badler [74]. The interpolating splines that define the trajectory described in the preceding section compute values between keypoints using an interpolation parameter s that varies from 0 to 1 over the interval from keypoint i to keypoint i + 1 [46]. Let the trajectory be defined by some function P(s, i). To obtain points on the trajectory, we need a method for translating in-between frame numbers into s and i. At each keypoint, s = 0 and i is the number of the current keypoint. For in-between frames, we define a variable t′ ∈ [0, 1] to represent a frame's relative time between the previous and following keypoints. Let prev equal the frame number of the previous keypoint, next equal the frame number of the next keypoint, and curr equal the current frame number. Then,

    t′ = (curr − prev) / (next − prev).    (4.1)

We define a timing control function

    Q(t′, Ī) = s,    (4.2)

where Ī is a four-dimensional vector whose components specify various timing effects described further below.

For each in-between frame, we

1. compute t′, the frame's normalized time between keypoints, using Equation 4.1,

2. compute the interpolation parameter s using function Q (Equation 4.2), and then

3. input s and the corresponding keypoint number i into function P to compute the position values (or joint angle values for angular interpolation) for the given frame.
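As a sketch of how these three steps chain together each frame (ours, for illustration; timing_Q and trajectory_P stand in for the functions Q and P above, and key_frames holds each keypoint's frame number):

    def normalized_time(curr, prev, next_):
        """Equation 4.1: a frame's relative time between two keypoints."""
        return (curr - prev) / (next_ - prev)

    def frame_to_value(frame, key_frames, timing_Q, trajectory_P, I):
        """Steps 1-3: map a frame number to a point on the trajectory."""
        # Find the keypoint interval containing this frame
        # (assumes frame >= key_frames[0]).
        i = max(k for k, f in enumerate(key_frames) if f <= frame)
        if i == len(key_frames) - 1:
            return trajectory_P(0.0, i)          # at the final keypoint
        t_prime = normalized_time(frame, key_frames[i], key_frames[i + 1])
        s = timing_Q(t_prime, I)                 # Equation 4.2
        return trajectory_P(s, i)                # position or joint angles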

We provide several parameters for timing control:

• The number of frames between keypoints is initially set according to the user's specified key times, but these values get adjusted according to the Effort settings. The nfmult parameter is a multiplier that increases (nfmult > 1) or decreases (0 < nfmult < 1) the number of frames between two keypoints.

• The components of Ī include inflection time t_i, time exponent texp, start velocity v_0, and end velocity v_1.

Our parameterized timing control function Q assumes every movement from one goal keypoint to the next starts and ends at rest. Also, every movement has a constant acceleration until time t_i, followed by a constant deceleration.² We introduce velocities v_0 at time t_0 and v_1 at time t_1 to achieve the traditional animation effects of anticipation and overshoot. This model gives us the following velocity function (Fig. 4.2):

    v(t″) = −(v_0 / t_0) · t″                                    for t″ ∈ [0, t_0)
    v(t″) = [v_i (t″ − t_0) − v_0 (t_i − t″)] / (t_i − t_0)      for t″ ∈ [t_0, t_i)
    v(t″) = [v_i (t_1 − t″) − v_1 (t″ − t_i)] / (t_1 − t_i)      for t″ ∈ [t_i, t_1)
    v(t″) = −v_1 (1 − t″) / (1 − t_1)                            for t″ ∈ [t_1, 1]      (4.3)

where

    t″ = (t′)^texp.    (4.4)

The function Q is the integral of Equation 4.3; v_i, the peak velocity reached at the inflection point t_i, is determined by the requirement that Q reach 1 at t″ = 1.

²As none of the existing models for human motor control explain all the characteristics of human movement, we use an ease-in/ease-out function, popular in current animation technology, as an approximation for an expressionless movement profile.

Figure 4.2: Velocity Function (velocity over warped time t″, with −v_0 at t_0, peak v_i at t_i, and −v_1 at t_1)
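The profile and its running integral can be sketched as follows (our reconstruction, for illustration only): the peak velocity v_i is solved from the constraint that Q reach 1 at t″ = 1, and numerical integration stands in for the closed-form integral.

    import numpy as np

    def velocity(u, t0, ti, t1, v0, v1, v_peak):
        """Piecewise-linear velocity of Equation 4.3 at warped time u = t'**texp."""
        if u < t0:
            return -v0 * u / t0                                  # anticipation dip
        if u < ti:
            return (v_peak*(u - t0) - v0*(ti - u)) / (ti - t0)   # constant accel
        if u < t1:
            return (v_peak*(t1 - u) - v1*(u - ti)) / (t1 - ti)   # constant decel
        return -v1 * (1.0 - u) / (1.0 - t1)                      # overshoot return

    def solve_peak(t0, ti, t1, v0, v1, n=2000):
        """Choose v_peak so the velocity integrates to exactly 1 over [0, 1].
        The integral is affine in v_peak, so two probes determine it."""
        us = np.linspace(0.0, 1.0, n)
        I0 = np.trapz([velocity(u, t0, ti, t1, v0, v1, 0.0) for u in us], us)
        I1 = np.trapz([velocity(u, t0, ti, t1, v0, v1, 1.0) for u in us], us)
        return (1.0 - I0) / (I1 - I0)

    def Q(t_prime, I, t0=0.01, t1=0.99, n=2000):
        """Equation 4.2: integrate the velocity up to the warped time (4.4)."""
        ti, texp, v0, v1 = I
        v_peak = solve_peak(t0, ti, t1, v0, v1)
        upper = t_prime ** texp                                  # Equation 4.4
        us = np.linspace(0.0, upper, n)
        return np.trapz([velocity(u, t0, ti, t1, v0, v1, v_peak) for u in us], us)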

The components of Ī supplied to the timing control function Q provide control over the acceleration/deceleration pattern of the movement, as well as allowing for anticipation and overshoot. The inflection point t_i ∈ [0, 1] represents the point between two goal keypoints where the movement changes from accelerating to decelerating. A value of 0.5 gives a basic ease-in/ease-out curve. A value greater than 0.5 corresponds to a primarily accelerating motion, while a value less than 0.5 gives a decelerating motion (Fig. 4.3). The default time exponent (texp) value of 1.0 does not affect the velocity curve; however, values greater than 1.0 magnify an acceleration, while values less than 1.0 exaggerate a deceleration (Fig. 4.4).

Figure 4.3: Varying Inflection Point to Obtain Acceleration and Deceleration (Inflection Point Value Given in Parentheses)

The start (v_0) and end (v_1) velocities default to 0.³ Increasing v_0 generates movements with anticipation, where the hand pulls back before extending, such as in preparation for a Strong movement. Increasing v_1 generates movements with overshoot, such as in Free movements where an indulgence in flow causes one to swing out past a target before hitting it (Fig. 4.5). We set t_0 to 0.01 and t_1 to 0.99, which gives us natural-looking anticipation and overshoot effects; however, these values can easily be included in Ī as variable low-level movement parameters.

³As mentioned, each movement begins and ends at rest. The start and end velocities represent times shortly after the beginning or shortly before the end of a movement, respectively. They are so named to emphasize that they are not initial and final velocities, which remain 0.

Figure 4.4: Varying Time Exponent to Magnify Acceleration and Deceleration

Figure 4.5: Varying Initial and Final Velocities to Obtain Anticipation and Overshoot (Initial and Final Velocities Given in Parentheses)

Figure 4.6: Sine Factor in Squash Equation

4.2.3 Flourishes

Flourishes are miscellaneous parameters that add to the expressiveness of the movements. They are listed below (a code sketch of several of these computations follows the list):

• Squash and stretch is a traditional animation technique where non-rigid objects deform, giving a dynamic, life-like quality [53, 75]. We employ squash and stretch by scaling the body of simple characters to simulate the expansion and contraction of the human torso. We parameterize squash and stretch with the variable squashmag. The squash is computed as:

    squash = 1 + squashmag · sin(π t′^1.6),    (4.5)

where t′ is the normalized time between two keypoints. The squash value is then used to scale the body by 1.0/√squash in x (width) and z (depth), and by squash in y (height), thus maintaining the original body volume. Fig. 4.6 illustrates the motivation for the sine factor: the 'o' line illustrates the normal sine curve; the '+' line shows that our sine factor skews the curve slightly towards 1. This squash computation gives a normal-sized torso which gradually elongates, then more quickly returns to its normal size between each keypoint.

Figure 4.7: Sine Factor in Equation for Breath

• Breath refers to the noticeable exhale one makes when executing powerful movements. To simulate breath, we re-use the squash variable, setting it with a computation based on t′, a frame's relative time between the previous and following keypoints. An exhale begins with an unsquashed torso, where squash is 1.0. When t′ is greater than 0.4, we set the squash using the sinusoidal function (Fig. 4.7):

    squash = 1 − ((t′ − 0.4) · sin(π t′ / 2))² / 7.    (4.6)

This results in a normal-sized torso that squashes (shortens and widens) and returns to normal towards the end of each movement.

• Wrist bend is determined by the wrist bend multiplier wbmag and the wrist extension magnitude wxmag. The wbmag parameter is a multiplier that represents the magnitude of the wrist bend. If wbmag is set for a flexed wrist, the wrist bend is set to 0.6 radians about the y-axis. Otherwise, the wrist bend is set using

    bend_wrist = wbmag · (sin(2π(t′ + 0.75)) + 1 − wxmag),    (4.7)

where t′ ∈ [0, 1] represents the normalized time between two keypoints. The shifted sine factor (Fig. 4.8) results in a wrist that gradually bends inwards and back out. The value of wxmag shifts the sinusoidal graph up or down, setting the beginning wrist extension to be positive (outward) or negative (inward).

Figure 4.8: Multiplier in Wrist Bend Equation

• Arm twist is parameterized by wrist twist magnitude wtmag, wrist frequency wfmag, elbow twist magnitude etmag, and elbow frequency efmag. The wrist twist is measured in radians about the z-axis and is determined by:

    twist_wrist = wtmag · sin(wfmag · π · t′).    (4.8)

Elbow twist is set using a similar equation, replacing wtmag and wfmag with etmag and efmag, respectively.

• Displacement magnitude is a multiplier dmag that adds a sinusoidal displacement to the elbow angle:

    elbow_angle′ = elbow_angle · (1 + dmag · sin(2π t′)),    (4.9)

where t′ is the normalized time between two keypoints.

• Limb volume refers to the swelling of the biceps muscles that is particularly noticeable in powerful movements. We simulate biceps flexion by scaling the width and depth of the upper arm by 1 + 0.7 · elbow_angle · awmag, where elbow_angle is the current elbow angle and awmag is a low-level movement parameter determined by the Effort settings.

4.3 Parameter Settings

To determine the mapping of the four Effort motion factors into our low-level movement parameters, we first determined the default settings for each of the eight Effort Elements. Next, we devised a scheme for generating the range between opposing Effort Elements and for combining Elements from different motion factors. Finally, we expressed our Effort model as a set of mathematical equations.

4.3.1 Parameter Settings for Individual Effort Elements

Table 4.1 presents the parameter settings for the individual Effort Elements. We started by defining a generic, expressionless motion similar to what would be generated using traditional keyframe animation systems. The generic motion has normal path curvature. There is no squash and stretch of the body, and no wrist or elbow twists. Also, there is no wrist bend, so the wrist stays aligned to the forearm throughout the motion, and the arm maintains normal limb volume. The movement is interpolated in end-effector space using a standard ease-in/ease-out velocity curve with no anticipation or overshoot. We now explain the reasoning behind the settings for the individual Effort Elements, noting that the exact values were determined by a significant amount of trial and error using visual analysis and testing by a CMA.

A video of a CMA performing various Effort combinations shows that Indirect arm movements are often led by the elbow and are characterized by arm twists and curved paths. To capture these qualities, we interpolate in elbow position space, adding a sinusoidal displacement for extra variation in space. Also, we add elbow and wrist twists and use a significant amount of wrist bend.

Direct motions are focused and path-oriented, so we interpolate in end-effector space. The default motion path for Direct motions generates straight rather than curved line segments between keypoints.

Movements associated with weight (Strong and Light) usually have marked breath patterns; we approximate these by varying the squash and stretch of the body. Strong movements contract slightly near the end, much like a person exerting a forceful exhale.

Table 4.1: Low-level Parameter Settings for Effort Elements
(Acceleration is given as (t_i, texp) with the resulting velocity-curve shape; Antic./Oversh. as (v_0, v_1); Wrist Bend as (wbmag, wxmag); twists as (magnitude, frequency).)

    Element    Frames          Curvature      Interp. Space  Acceleration       Antic./Oversh.  squashmag  Wrist Bend   Wrist/Elbow Twist   dmag  Limb Volume
    Neutral    normal          0              end-effector   (0.5, 1), ease     (0, 0)          none       none         none                0     normal
    Indirect   normal          -1 (curved)    elbow posn     (0.5, 1), ease     (0, 0)          none       (0.6, 0.0)   (0.4, 2); (0.4, 2)  0.4   normal
    Direct     normal          +1 (straight)  end-effector   (0.5, 1), ease     (0, 0)          none       none         none                0     normal
    Light      normal          0              end-effector   (0.1, 1), decel    (0, 0.03)       0.20       (0.5, -0.3)  none                0     normal
    Strong     normal          0              end-effector   (0.9, 0.8), accel  (0.1, 0)        breath     flexed       none                0     magnified (0.25)
    Sustained  lengthen (3)    0              end-effector   (0.1, 1), decel    (0, 0.03)       none       none         none                0     normal
    Sudden     shorten (0.33)  0              end-effector   (0.9, 3), accel    (0, 0)          none       none         none                0     normal
    Free       normal          0              angular        (0.5, 0.6), ease   (0, 0.20)       0.15       (1.0, 0.2)   none                0     normal
    Bound      lengthen (3)    0              end-effector   (0.5, 1), ease     (0, 0)          none       flexed       none                0     normal

For Light movements, the squash and stretch effects vary depending on the end-effector locations. When the hand is high above the figure, the body is stretched vertically; for positions horizontally distant from the figure, we widen the body.

Light movements use a significant amount of wrist bend to achieve an airy quality and enhance the lightness. To achieve the buoyant quality of light movements, we use a slight overshoot and a decelerating velocity curve.

Strong motions tend to display a marked acceleration corresponding to the force exerted. Strong movements also use anticipation to emphasize the preparation required to exert that force. The limb volume is magnified to simulate muscle flexion, and the wrist remains in a flexed position throughout the movement.

Sustained movements indulge in time, displaying a marked deceleration [66]. A small amount of overshoot captures the slight rebound at the end of a sustained "sigh". Sustained movements usually have a longer duration than other movements, and thus, we increase the number of frames between keypoints. On the other hand, Sudden movements are short in duration and tend to accelerate.

Free motions are unstoppable and tend to overshoot their goal points. To achieve an abandoned, uncontrolled quality, we use wrist bend values which give significant inward and outward bends. Since end-effector interpolation produces movements that are too rigid and controlled for Free movements, we use angular interpolation of shoulder and elbow angles. Squash and stretch of the body adds to the feeling of freedom and flexibility.

Bound motions tend to occur in slow motion, so we extend the number of frames between keypoints. Also, since Bound movements tend to be stiff and rigid, we keep the wrist in a flexed position.

4.3.2 Generating Effort Ranges and Combinations

Motion factor ranges are represented by a sliding value between −1 (for the indulging extreme) and +1 (for the fighting extreme). To generate the range of each motion factor, we interpolate between the settings at the two extremes. For discrete variables (interpolation space and whether to display breath), we use the value of the nearest extreme. For instance, a Flow slider value of −0.3 generates an animation using angular interpolation because it is nearer to the Free extreme value than to that of Bound. We note that this may lead to discontinuities in an animation generated in phrase mode when Space, Weight, or Flow cross zero at points that are not goal keypoints. Such occurrences are fairly uncommon, as it is rare to change from one Effort extreme to its opposite within a movement. Our Effort model is designed such that the posture computed at any goal keypoint is the same regardless of Effort settings; in other words, editing of the movements based on Effort settings occurs only between goal keypoints, and thus discontinuities do not occur at zero crossings at goal keypoints.

In general, combinations of Effort Elements are achieved in a straightforward manner. The magnitude of an Effort Element is used to weight its contribution to a parameter setting. If more than one Effort Element contributes to a parameter setting (such as for parameters squashmag and wbmag), we take the maximum value of the weighted contributions. Several parameters are tweaked when combining Effort Elements from different motion factors; the specific cases are described below.

Finally, we express our Effort model as a set of equations. Let the variables ind, dir, lgt, str, sus, sud, fre, and bnd represent the magnitudes for Indirect, Direct, Light, Strong, Sustained, Sudden, Free, and Bound, respectively. Each of these variables is in the range [0, 1]. Variables within the same motion factor are related as such: if one Effort Element variable is positive, then its opposing Effort Element variable is zero. To tweak parameters for combined Effort settings, we use the function f (the smaller of its two arguments), which is defined as follows:

    f(a, b) = a,  if a ≤ b
              b,  if a > b.    (4.10)

Our model for translating Effort into low-level movement parameters is given by the following equations:

    Tval      = −1·ind + 1·f(fre, ind) + dir                                   (4.11)
    t_i       = 0.5 + 0.4·max(str, sud) − 0.4·max(lgt, sus) + 0.8·f(bnd, lgt)  (4.12)
    v_0       = 0.1·str − max(0.06·f(sus, str), 0.1·f(fre, str))               (4.13)
    v_1       = max(0.03·max(lgt, sus), 0.2·fre) − 0.1·f(ind, fre)             (4.14)
    texp      = 1 + 2·sud + 0.2·f(str, sud) − 0.6·f(fre, sud)
                − 0.2·max(str, 0.5·(dir + sus)) − 0.4·fre + 0.1·f(ind, fre)    (4.15)
    squashmag = max(0.20·lgt, 0.15·fre)                                        (4.16)
    wbmag     = max(0.6·ind, 0.5·lgt, 1.0·fre)                                 (4.17)
    wxmag     = −0.3·lgt + 0.2·fre − 0.8·f(str, fre)                           (4.18)
    wtmag     = 0.4·ind                                                        (4.19)
    wfmag     = 2·ind                                                          (4.20)
    etmag     = 0.4·ind                                                        (4.21)
    efmag     = 2·ind                                                          (4.22)
    dmag      = 0.4·ind                                                        (4.23)
    awmag     = 0.25·str                                                       (4.24)
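These equations translate directly into code. In the sketch below (ours, with illustrative names), each slider value in [−1, +1] is first split into the opposing element magnitudes described in Section 4.3.2, and Equations 4.11-4.24 are then evaluated:

    def f(a, b):
        """Equation 4.10: the smaller of the two weighted contributions."""
        return a if a <= b else b

    def slider_magnitudes(space, weight, time, flow):
        """Split each slider in [-1, +1] into (indulging, fighting) magnitudes
        in [0, 1]; one of each opposing pair is always zero."""
        split = lambda v: (-v, 0.0) if v < 0 else (0.0, v)
        ind, dir_ = split(space)
        lgt, str_ = split(weight)
        sus, sud = split(time)
        fre, bnd = split(flow)
        return ind, dir_, lgt, str_, sus, sud, fre, bnd

    def effort_to_parameters(space, weight, time, flow):
        ind, dir_, lgt, str_, sus, sud, fre, bnd = slider_magnitudes(
            space, weight, time, flow)
        return {
            "Tval": -ind + f(fre, ind) + dir_,                               # 4.11
            "ti": 0.5 + 0.4*max(str_, sud) - 0.4*max(lgt, sus)
                  + 0.8*f(bnd, lgt),                                         # 4.12
            "v0": 0.1*str_ - max(0.06*f(sus, str_), 0.1*f(fre, str_)),       # 4.13
            "v1": max(0.03*max(lgt, sus), 0.2*fre) - 0.1*f(ind, fre),        # 4.14
            "texp": 1 + 2*sud + 0.2*f(str_, sud) - 0.6*f(fre, sud)
                    - 0.2*max(str_, 0.5*(dir_ + sus)) - 0.4*fre
                    + 0.1*f(ind, fre),                                       # 4.15
            "squashmag": max(0.20*lgt, 0.15*fre),                            # 4.16
            "wbmag": max(0.6*ind, 0.5*lgt, 1.0*fre),                         # 4.17
            "wxmag": -0.3*lgt + 0.2*fre - 0.8*f(str_, fre),                  # 4.18
            "wtmag": 0.4*ind, "wfmag": 2*ind,                                # 4.19-4.20
            "etmag": 0.4*ind, "efmag": 2*ind,                                # 4.21-4.22
            "dmag": 0.4*ind, "awmag": 0.25*str_,                             # 4.23-4.24
        }

For example, effort_to_parameters(0, 0, 0, -1.0) (fully Free flow) yields t_i = 0.5, texp = 0.6, v_1 = 0.2, squashmag = 0.15, and wbmag = 1.0, matching the Free column of Table 4.1.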

When Effort Elements from different motion factors are combined, characteristics of one Effort Element may mask or conflict with characteristics of another. We defined our Effort model to adjust for these situations. When a movement is both Indirect and Free, we reduce the path curvature (Tval) to avoid overly uncontrolled movements. When a movement is Free and Strong, we reduce the wrist extension (wxmag) so the wrist bends are more inward, giving a greater impression of strength. By default, Light movements are defined to decelerate (t_i value of 0.1), while Bound movements use a standard ease-in/ease-out velocity curve (t_i value of 0.5). Since the primary characteristic of Bound movements is their evenness, movements that are both Light and Bound use a standard ease-in/ease-out velocity curve. Strong movements show anticipation (v_0 value of 0.1); since this contradicts certain characteristics of the Sustained and Free elements, we reduce this value when Strong movements are combined with Sustained or Free. We reduce the overshoot (v_1) value for Free and Indirect movements to avoid uncontrollably wild movements. Sudden movements use a large texp value to create a magnified acceleration. The 0.2·f(str, sud) − 0.6·f(fre, sud) factor in the time exponent parameter (texp) calculation offsets the factors that represent the individual contributions of Strong and Free. Strong movements use a slightly decreased texp value, as does the combination of Direct and Sustained. Free also uses a decreased texp value, which is increased slightly when displayed in combination with Indirect.

Our Effort-based motion control paradigm works directly with end-effector specified movements; however, it could be used with keyframed, motion captured, and procedurally generated movements as well. End-effector specification easily facilitates interpolation in end-effector, joint angle, and elbow position space. With motions specified using other methods, one could select keypoints in the movement, compute their corresponding end-effector positions, and use those as input into an Effort-based system for qualitative editing. Also, our Effort model is not specific to a particular character, nor to characters with particular arm and body dimensions, although we do assume human-like arm structures.

The next chapter discusses an implementation of our Effort model and shows how the methods and equations defined in this chapter are applied to generate animations in real time.

Chapter 5

Implementation: EMOTE

We have implemented our motion control scheme in an animation module called EMOTE (Expressive MOTion Engine). EMOTE uses inverse kinematics, where spatial movement requirements are specified through end-effector positions. This chapter describes the generation of animations in EMOTE, the arm model, the use of the Effort model, and example animations.

5.1 User Interaction

EMOTE users generate animations by (1) specifying general movement sequences as a series of keypoints, and then (2) interactively editing qualitative parameters until the desired animation is achieved.

EMOTE uses two types of keypoints: goal keypoints and via keypoints. Goal keypoints define a general movement path; the hand follows a path which stops at each goal keypoint. Via keypoints direct the motion between goal keypoints without pausing. For instance, a via keypoint might be used to generate a semi-circular path between two goal keypoints.

EMOTE provides two methods for users to specify a series of keypoints. One method allows users to specify a keypoint by using sliders (Fig. 5.1) to interactively set the figure in the desired position. After specifying the frame number and selecting the keypoint type, the user can save the keypoint into the Key Editor (Fig. 5.2). The other method reads in an ASCII text file containing the frame number; x, y, z position values; swivel angle; and type of each keypoint.

Figure 5.1: Interface for Adjusting End-Effector Positions

EMOTE has two modes for specifying Effort settings: single mode and phrase mode. In single mode, the user sets Effort values using four sliders (Space, Weight, Time, and Flow) on the Effort Editor (Fig. 5.3). These automatically set a number of low-level parameters, which the user may also edit. Pressing the Play button generates an animation following the path specified by the end-effector keypoints and displaying the specified Effort qualities. In phrase mode, the user can specify changes in Effort over time. An Effort Graph Editor (Fig. 5.4) displays the Effort settings over an animation, allowing the user to interactively edit the points and play the corresponding animation. Both modes provide a radio button that allows the user to lock or unlock the frame numbers for keypoints. Locking the frame numbers prevents the Effort settings from changing the frame numbers of keypoints.

Figure 5.2: Key Editor

Figure 5.3: Effort Editor

Figure 5.4: Effort Graph Editor

5.2 Arm Model

EMOTE models the arm as a kinematic chain with two spherical joints, the shoulder and wrist, connected by a flexion joint at the elbow. The shoulder and wrist have three degrees-of-freedom (DOF), which represent rotations about the x, y, and z axes. The elbow has 1 DOF, rotation about a single axis. An analytical inverse kinematics algorithm computes the shoulder and elbow rotations, given a goal specified by three-dimensional position coordinates and an elbow swivel angle [76]. The base of the wrist acts as the end-effector indicating the goal position. Since the determination of arm posture given a 3D position is under-specified, Tolani uses the swivel angle, which specifies the location along the circular arc swept out by the elbow swiveling around the axis between the shoulder and the wrist (n̂) and lying on the plane normal to n̂ (Fig. 5.5). Wrist rotations are determined according to Effort settings as described in Chapter 4.
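The analytical algorithm itself is Tolani's [76]; the following is only a minimal geometric sketch (ours) of how a nonzero wrist goal and a swivel angle determine the elbow position, assuming the shoulder at the origin, known upper-arm and forearm lengths L1 and L2, and a goal direction not parallel to the reference up vector:

    import numpy as np

    def elbow_position(goal, swivel, L1, L2, up=np.array([0.0, 1.0, 0.0])):
        """Place the elbow on the circle swept around the shoulder-wrist axis."""
        goal = np.asarray(goal, dtype=float)
        n = goal / np.linalg.norm(goal)         # swivel axis n-hat
        d = np.linalg.norm(goal)                # shoulder-to-wrist distance
        d = min(max(d, abs(L1 - L2) + 1e-6), L1 + L2 - 1e-6)   # clamp reachable
        # Law of cosines: angle at the shoulder between n-hat and the upper arm.
        cos_a = (L1*L1 + d*d - L2*L2) / (2.0 * L1 * d)
        center = cos_a * L1 * n                 # center of the elbow circle
        radius = L1 * np.sqrt(max(0.0, 1.0 - cos_a*cos_a))
        # Orthonormal basis (u, v) spanning the plane normal to n-hat.
        u = up - np.dot(up, n) * n
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        return center + radius * (np.cos(swivel) * u + np.sin(swivel) * v)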

For demonstration, we have created the character Mo. Mo has a spherical body and articulated arms. The current version of Mo does not have an articulated hand, although EMOTE provides a selection of hand shapes: open, fist, pointing, closed, fingers together with thumb out, cupped, and claw.

Figure 5.5: Arm Model (shoulder joint: 3 DOF; elbow joint: 1 DOF; wrist joint: 3 DOF)

5.3 Method for Using the Effort Model

When a user requests an animation, we plug the Effort settings into Equations 4.11 to 4.24 to compute the low-level movement parameters. For each frame, we use the method described in Section 4.2.2 to compute end-effector positions for the given frame. Then, we call the IK module to compute and set the matrices defining the arm postures. Mo is re-drawn using the computed arm matrices, and computation begins on the next frame.
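Putting the pieces together, the per-frame loop looks roughly like the following sketch (ours; interpolated_goal, compute_posture, and draw are illustrative stand-ins for the Section 4.2.2 interpolation, the analytical IK module [76], and the renderer, and effort_to_parameters is the mapping of Equations 4.11-4.24 sketched earlier):

    def play_animation(keypoints, key_frames, effort_sliders,
                       interpolated_goal, compute_posture, draw):
        """Generate and display one animation from goal keypoints and Effort."""
        params = effort_to_parameters(*effort_sliders)   # Equations 4.11-4.24
        for frame in range(key_frames[-1] + 1):
            # Section 4.2.2: frame -> t' -> s -> end-effector position + swivel.
            position, swivel = interpolated_goal(frame, keypoints,
                                                 key_frames, params)
            arm_matrices = compute_posture(position, swivel)  # inverse kinematics
            draw(arm_matrices)               # re-draw Mo, then the next frame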

5.4 Examples

5.4.1 Individual Effort Elements

To demonstrate the utility of our motion control scheme and the wide range of possible movements generated, we include thumbnail images from a series of animations that were generated from the same five goal keypoints (Fig. 5.6). We generated the animations using the default settings for each individual Effort Element. Opposing Effort Elements for each motion factor are shown together for easy comparison. Frames are displayed in order from left to right, top to bottom. The Space and Weight animations show every fifth frame of the animation, while the Time and Flow animations show every tenth frame.

Figure 5.6: End-Effector Keys for an Example Movement Sequence

Fig. 5.7 shows frames from the animations of Indirect and Direct. The Direct animation is more focused on the path of the hands, displaying stiffer, more angular arm angles than the rounded, all-encompassing arm postures of the Indirect animation, as seen in frames A7, C3, and E7. The Indirect animation also displays more arm twists and a flexible wrist, whereas Mo maintains a stiff, rigid wrist position in the Direct animation (A7, C1, C5, F2).

Figure 5.7: Every Fifth Frame from Animations of Indirect and Direct Efforts (ordered left to right, top to bottom)

Fig. 5.8 shows several notable differences between the Strong and Light animations. The Strong animation displays marked acceleration, apparent in the large movement changes observed just before each keypoint (A6-B3, C1-C6, D4-E2, F1-F7), while the beginning of each movement remains relatively unchanged (A2-A5, B4-B7, C7-D3, E3-E7). In contrast, the Light animation uses a decelerating velocity curve, which starts with significant amounts of movement (A2-A6, B6-C2, D2-D5, E6-F1) and ends with slight movements (A7-B5, C3-D1, D6-E5, F2-F7). To emphasize the impact (or lack thereof) of the movements, we selected a fist shape for the Strong movement and an open hand for the Light movement. The Strong animation has a stiff but flexed wrist, while the Light animation uses a significant amount of wrist bend (A3-A7, B6-C2, E6-F2). Although the Strong movements show anticipation and the Light movements overshoot the keypoints, these effects are subtle and difficult to observe from the selected thumbnail images. The Light animation displays squash and stretch of the torso, while the Strong animation shows a slight torso contraction (breath) and increased limb volume (biceps flexion) before reaching each keypoint.

Figure 5.8: Every Fifth Frame from Animations of Light and Strong Efforts (ordered left to right, top to bottom)

The most notable difference between the Sudden and Sustained animations (Fig. 5.9) is the difference in duration. The Sudden animation is completed by frame B1, while the Sustained animation continues through to frame I7. The Sudden animation displays marked acceleration, but this element is not evident in the figure since only every 10th frame is displayed. The Sustained movements are decelerating, as displayed by the large movements (A2-B2, C5-C7, E7-F4, G7-H5) followed by the insignificant movements (B3-C4, D1-E6, F5-G6, H6-I7) before each keypoint.

The duration of the Bound animation is significantly longer than that of the Free animation (Fig. 5.10). The Free animation is complete by frame C6, while the Bound animation lasts until I7. Free movements show a significant amount of wrist bend (A4, B6, C4), while Bound movements maintain a flexed wrist. The Free movement also displays squash and stretch of the torso.

5.4.2 Gestures Accompanying Speech

We have found that our motion control scheme is useful for generating the hand gestures accompanying speech, because the Effort settings for the animation can be set to match the intent of the speaker. For instance, a powerful denial is accompanied by Strong and Direct hand gestures (Fig. 5.11). The speech that accompanies the animation states, "I did not have sexual relations with that woman." Fig. 5.12 displays every fourth frame from an animation of Mo saying, "Hey, look! I made it. I made it." The animation shows a

gleeful exclamation that ends with Mo throwing up his arms in a Free movement.

Figure 5.9: Every Tenth Frame from Animations of Sustained and Sudden Efforts (ordered left to right, top to bottom)

Figure 5.10: Every Tenth Frame from Animations of Free and Bound Efforts (ordered left to right, top to bottom)

Figure 5.11: Every Fourth Frame from Animation of a Denial (ordered left to right, top to bottom)

Figure 5.12: Every Fourth Frame from Animation of a Gleeful Exclamation (ordered left to right, top to bottom)

Chapter 6

Conclusions

This chapter discusses an evaluation of our Effort model and the usability of EMOTE. We conclude with extensions and contributions of the work.

6.1 Evaluation

6.1.1 Effort Model Evaluation

To evaluate our motion control scheme, we used EMOTE to create a 16-minute video containing a series of animation segments. The animation segments displayed randomly selected Effort Elements, both individual and in combination. The video was divided into two parts. The first part showed a neutral movement (no Effort Elements present) before displaying the animation segment of the movement to be coded. This part consisted of 16 short animations (2 keypoints), followed by 16 long animation segments (5 keypoints). The second part showed only animations of the movement to be coded, and consisted of 30 long animation segments (5 keypoints). The video was given to 3 CMAs and our consultant CMA. They were asked to view the video once to get a feel for the movements of the character Mo, and then to watch it again while marking a coding sheet. For each animation segment, they were asked to "mark the main, overall Efforts present in the segment" on a chart as follows:

    Indirect    -1  0  +1   Direct
    Light       -1  0  +1   Strong
    Sustained   -1  0  +1   Quick
    Free        -1  0  +1   Bound

The −1 value represents the indulging extreme, while the +1 value represents the fighting extreme. The 0 value indicates neutrality along the given motion factor.

Table 6.1 summarizes the overall results of the evaluation of our Effort model. The first row indicates the percentage of correct responses, where the CMA either marked the Effort that we were trying to display in the animation or marked neutral when we were trying to display neutrality along a given motion factor. The second row indicates the percentage of neutral responses, where the CMA marked neutral when we were trying to display an Effort, or where the CMA marked an Effort when we were trying to display neutral along a given motion factor range. The third row indicates the percentage of opposite responses, where the CMA marked the Effort opposite from the one we were trying to portray. Fortunately, the correct responses display significantly greater than chance agreement, while the percentage of opposite responses is low. The low but significant percentage of neutral responses is partially attributed to the fact that most of the animation segments on our video showed combinations of the Effort Elements; thus, a more prominent Effort may have masked other displayed Effort Elements.

There are several reasons that we did not achieve perfect agreement between the Efforts we were trying to portray and the qualities marked by the CMA observers. First of all, many of the manifestations of Effort are subtle and ephemeral; notators watching the same live human performers must work together through repeated viewings and discussions in order to achieve inter-observer agreement. Even CMAs notating video several months after their initial observation don't agree completely with their previous notations [26]. Another factor affecting the scores is that the manifestations of Effort on our character Mo affected only his arm movements (with limited limb volume and torso support); with a human performer, facial expression, eye gaze, muscle flexion, environmental context, sounds, and spoken words all give additional clues to the Effort being portrayed. Further, observing and analyzing a stylized cartoon character was a novel experience for the CMA observers.

                Consultant   CMA 1   CMA 2   CMA 3
    Correct       76.6%      55.6%   53.2%   60.1%
    Neutral       22.6%      38.7%   39.1%   37.1%
    Opposite       0.81%      5.6%    7.7%    2.8%

Table 6.1: Overall Percentages for Effort Model Evaluation

Also, we asked the CMAs to make judgments based on only two viewings in one sitting, rather than allowing for repeated, careful analyses over a period of time. The evaluations generally represent only their initial impressions. Finally, even though CMAs are extensively trained in observing the qualitative aspects of movements, the subjective nature of Effort inherently leads to somewhat personalized notions of Effort.

The LIMS Reliability Project is the most comprehensive study of observer agreement on LMA thus far [26]. The LIMS project involved CMAs watching 45-second videotape segments of dance solos and segments of conversation (with one speaker off camera). Observers were asked to press keys on a computer when they observed certain features; the computer recorded the time in seconds of the key presses. Effort Elements of the same motion factor were observed simultaneously. For various reasons, it is difficult to conclusively assess the results of the project. The low number of occurrences of certain Effort Elements (Light, Sustained, and Indirect) on the videotape segments meant that statistical analysis was infeasible for those features. Also, assessments of Flow were unreliable because observers were not used to specifying instances of Free and Bound; rather, they view Flow as a continuous fluctuation between the two extremes. In general terms, agreement for Strong and Light was "relatively high, if spotty". Observers "could agree on the perception of [S]udden" for the dance segments, though agreement was "low but notable" on the conversation segments. Direct gave "consistently positive if sometimes low" agreement. The results of the project suggest that Indirect, Light, Bound, and Free "require more rigorous training, practice and, in some cases, more dynamic videotape examples" in order to achieve high levels of agreement. Because the nature of their experiments was fundamentally different from ours, it was not possible to quantitatively compare their results with our results.

                Consultant   CMA 1   CMA 2   CMA 3   Average (non-consultant CMAs)
    Indirect      84.6        92.3    76.9    69.2    79.5
    Direct        60.0        80.0    40.0    26.7    48.9
    Light         90.5        38.1    23.8    42.9    34.9
    Strong        82.3         5.9    29.4     0.0    11.8
    Sustained     79.0        47.4   100      42.1    63.2
    Sudden       100         100     100     100     100
    Free          76.2        57.1    33.3    38.1    42.9
    Bound         75.0        25.0    75.0    37.5    45.8

Table 6.2: Percentage Correct for Individual Effort Elements

Our experiments had "right" answers, where marking the Effort we were trying to display constituted a correct response. Their experiment had no right answer, and instead measured the level of agreement between two viewers. Further, their analysis measured agreement in the total number of Effort Elements viewed in a segment, rather than the overall Effort displayed.

Table 6.2 shows the percentage of correct responses in our Effort Model Evaluation for each Effort Element. The results are interesting compared to the results from the LIMS Reliability Project, which gave high levels of inter-observer agreement for Strong, Direct, and Sudden [26]. Our results indicate a high percentage of correct responses for Sudden, Indirect, and Sustained, while Direct, Bound, and Free gave moderate results. The percentages for Strong and Light are surprisingly low compared to the performance of our consultant on those qualities. There are several possible reasons for this mismatch. First of all, our consultant is familiar with the subtle cues we used to portray Weight characteristics, and thus could probably more readily identify the Weight Efforts. Also, Strong qualities are often revealed by body and limb tension, and the throwing of one's weight into a movement, qualities that are difficult to display in our simple Mo character. Light qualities are often revealed by delicate hand and finger gestures, which are limited by EMOTE's requirement that movements must use a preset hand shape.

More details on the LIMS Reliability Project, along with several other articles on observer agreement using Laban-based notations, are provided in [25]. Overall, we believe that our Effort model gives more than adequate results given the subjectivity of Effort, the nature and novelty of Effort analysis in our experiment, and the results of previous studies on observer agreement.

6.1.2 User Evaluations

The EMOTE interface was not the focus of this work; however, we performed some preliminary user evaluations to demonstrate the ease of generating expressive movements using our Effort model. We had two users (User A and User B) evaluate EMOTE and compare it to a commercial animation product (3D Studio MAX®). User A had no experience with EMOTE or LMA, but did have some experience with various commercial animation packages. User B had some experience in both. We had both users select two Basic Effort Actions¹: float, punch, glide, slash, wring, dab, flick, or press. Each user was told to animate the selected actions using EMOTE and 3DS MAX, filling out a brief questionnaire comparing the two. Both users took less time generating their animations while using EMOTE (10-20 minutes for User A, and 5-10 minutes for User B) compared to 3DS MAX (20-30 minutes for both). Also, both rated the quality of their animations as the same or better using EMOTE. We also asked users what percentage of time they spent setting up the animation and what percentage of time they spent tweaking to get the desired animation. User A spent 40-50% of the time tweaking the animation when using EMOTE and 70-80% of the time tweaking the animation when using 3DS MAX. The user explained that once he "got a rough approximation in EMOTE, [he] was able to get the motion to look great with a small amount of time". He did find that setting up an animation in 3DS MAX was easier because it provided a more extensive set of tools. We note that this portion of the animation task was not the one we were seeking to improve. User B spent 85% of the time tweaking animations in EMOTE and only 50% tweaking animations in 3D Studio MAX. He explained that he was "trying too hard in [EMOTE] to get the gestures just right" and appreciated that one doesn't have to worry as much about timing issues in EMOTE. Overall, we believe the results suggest that our motion control paradigm simplifies and speeds up the task of generating expressive movements, and provides a capability not available in current animation packages.

¹The Basic Effort Actions were defined by Laban as the eight combinations of Effort Elements for Space, Weight, and Time. He used a textual "action word" to describe each.

6.2 Extensions

In order to extend our motion control paradigm to handle leg movements, one would need to account for some physical factors that don't affect the arms, such as balance and bearing weight. On the other hand, the legs are used significantly less than the arms in human expression. The trajectory definition and timing control aspects of our Effort model are directly applicable to leg movements. Some of the flourishes (namely, torso support and limb volume) would also be manifested in the legs depending on Effort settings, although the equations defining them may have to be altered slightly. The other flourishes we defined, wrist bend and arm twist, are specific to the arms, although the legs would probably have similar, and possibly additional, flourishes (e.g., ankle bend and leg twist) that would need to be defined.

We believe that an important area of the body that has essentially been ignored in animating human-like characters is the torso. The torso plays an important role in shaping. For instance, a tired or discouraged person might reveal these inner turmoils with a concave shape and slouched shoulders. A proud, boastful person might strut with chest out and head high. Also, characteristics of one's breathing provide a number of clues into the situation or one's attitude towards it. Someone who is expending a lot of energy with powerful gestures probably feels strongly about the ideas they are conveying. A person who is fearful or anxious will likely display quickened breathing. We have simulated some torso changes using squash and stretch on Mo's body, but a more detailed model is needed in order to capture all the subtleties of human torso shaping. Francois Delsarte (1811-1871) dedicated his life to discovering the laws of expression [71]. One portion of his work is presented as a set of charts showing basic positions of various body parts and their respective meanings. Zhao and Badler implemented a system that uses Delsarte's charts for the head, arm, and hand as a baseline for gestures accompanying speech [90]. The first step towards expressive torso movements would be to extend the work of Zhao and Badler to include Delsarte's interpretations of torso postures and movements.

We suggest a plan for utilizing an Effort-based motion control scheme in a behavioral animation system. Using natural language for the high-level control (i.e., in the form of speech text or a storyboard script), one could develop a translator that would take adverbs and convert them into Effort settings. "Carefully" might translate into Light and slightly Sustained (a Weight value of −1.0 and a Time value of −0.3); "haphazardly" might translate into Indirect and somewhat Free (a Space value of −1.0 and a Flow value of −0.8). Then, one could tweak these settings based on a character's personality and/or mood. An introverted person might tend towards Bound and Indirect movements, which could be achieved by shifting the Flow and Space settings slightly toward those extremes. Finally, one could add a small degree of randomness to the Effort settings to account for individual differences among characters with matching personalities and moods.
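Such a translator could be as simple as a lookup table plus a personality offset and a little jitter. In the sketch below (ours), the two adverb entries use the slider values suggested above; everything else is illustrative:

    import random

    # Adverb -> (Space, Weight, Time, Flow) sliders; None leaves a factor neutral.
    ADVERBS = {
        "carefully":   (None, -1.0, -0.3, None),   # Light, slightly Sustained
        "haphazardly": (-1.0, None, None, -0.8),   # Indirect, somewhat Free
    }

    # Personality offset, e.g. an introvert tending toward Indirect and Bound.
    INTROVERT = (-0.2, 0.0, 0.0, 0.2)

    def efforts_for(adverb, personality=(0.0, 0.0, 0.0, 0.0), jitter=0.05):
        """Translate an adverb into four Effort slider values in [-1, +1]."""
        base = ADVERBS.get(adverb, (None,) * 4)
        sliders = []
        for b, p in zip(base, personality):
            v = (0.0 if b is None else b) + p + random.uniform(-jitter, jitter)
            sliders.append(max(-1.0, min(1.0, v)))   # clamp to the legal range
        return tuple(sliders)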

6.3 Contributions

We have defined the quantitative structures necessary to model qualitative aspects of movement. Using these structures, we have built an empirical model of Effort. We introduce a novel motion control paradigm that uses our Effort model to facilitate the specification of human expressive movements and provides real-time, interactive control. While skilled animators can achieve life-like, expressive characters by controlling the individual lowest-level degrees-of-freedom via keyframing or inverse kinematics methods, Effort provides a more systematic and meaningful way of describing qualitative aspects of movement. Varying the intensities of the Effort Elements and combining different elements produce a rich language for expressive movements. A system using Effort-based motion control could provide low-level control by outputting joint angle or other motion parameter curves for editing with traditional methods. More importantly, by allowing users to customize basic movements with general terms, such a system supports individualized expression as well as enabling users to create high-level movement controllers. This paradigm is an essential step towards a system where a user creates a character by specifying personality, attitudes, and intentions, which in turn may eventually lead to the automatic generation of appropriate movements from speech text, a storyboard script, or a behavioral simulation.

Bibliography

[1] Kenji Amaya, Armin Bruderlin, and Tom Calvert. Emotion from motion. In Wayne A. Davis and Richard Bartels, editors, Graphics Interface '96, pages 222-229. Canadian Information Processing Society, Canadian Human-Computer Communications Society, May 1996.

[2] N. Badler, B. Webber, J. R. Clarke, D. Chi, M. Hollick, N. Foster, E. Kokkevis, D. Metaxas, O. Ogunyemi, J. Kaye, and R. Bindiganavale. Medisim: Simulated medical corpsmen and casualties for medical forces planning and training. In The National Forum: Military Telemedicine On-Line Today. Research, Practice and Opportunities. IEEE Computer Society Press, 1995.

[3] N. I. Badler and S. W. Smoliar. Digital representation of human movement. ACM Computing Surveys, 11:19-38, March 1979.

[4] Norman I. Badler. A computational alternative to effort notation. In Judith A. Gray, editor, Dance Technology: Current Applications and Future Trends. National Dance Association, Reston, VA, 1989.

[5] Norman I. Badler, Cary B. Phillips, and Bonnie Lynn Webber. Simulating Humans: Computer Graphics Animation and Control. Oxford University Press, New York, 1993.

[6] Irmgard Bartenieff and Martha Davis. Effort-shape analysis of movement: The unity of expression and function. In Martha Davis, editor, Research Approaches to Movement and Personality. Arno Press Inc., New York, 1972.

[7] Irmgard Bartenieff and Dori Lewis. Body Movement: Coping with the Environment. Gordon and Breach Science Publishers, New York, 1980.

[8] Ronen Barzel and Alan H. Barr. A modeling system based on dynamic constraints. In Computer Graphics (SIGGRAPH '88 Proceedings), volume 22, pages 179-188, August 1988.

[9] R. Benesh and J. Benesh. An Introduction to Benesh Dance Notation. A. and C. Black, London, 1956.

[10] Rama Bindiganavale and Norman Badler. Motion abstraction and mapping with spatial constraints. In Modelling and Motion Capture Techniques for Virtual Environments, International Workshop CAPTECH'98, pages 70-82. Springer-Verlag, 1998.

[11] Leslie Bishko. Relationships between Laban movement analysis and computer animation. In Dance and Technology I: Moving Toward the Future, pages 1-9, Ohio, 1992. Fullhouse Publishing.

[12] James P. Bliss, Philip D. Tidwell, and Michael A. Guest. The effectiveness of virtual reality for administering spatial navigation training to firefighters. PRESENCE: Teleoperators and Virtual Environments, 6(1):73-86, February 1997.

[13] Bruce M. Blumberg and Tinsley A. Galyean. Multi-level direction of autonomous creatures for real-time virtual environments. In SIGGRAPH 95 Conference Proceedings, pages 47-54, August 1995.

[14] Armin Bruderlin and Thomas W. Calvert. Goal-directed, dynamic animation of human walking. In Computer Graphics (SIGGRAPH '89 Proceedings), volume 23, pages 233-242, July 1989.

[15] Armin Bruderlin and Tom Calvert. Knowledge-driven, interactive animation of human running. In Wayne A. Davis and Richard Bartels, editors, Graphics Interface '96, pages 213-221. Canadian Information Processing Society, Canadian Human-Computer Communications Society, May 1996.

[16] Armin Bruderlin and Lance Williams. Motion signal processing. In SIGGRAPH 95 Conference Proceedings, pages 97-104, August 1995.

[17] Dance Notation Bureau. 31 West 21st Street, New York, NY, 10010; http://www.dancenotation.org.

[18] T. W. Calvert, J. Chapman, and A. Patla. Asp ects of the kinematic simulation of

human movement. IEEE Computer Graphics and Applications, 2:41{48, November

1982.

[19] Diane Chi, John Clarke, Bonnie Webb er, and Norman Badler. Casualty mo deling

for real-time medical training. PRESENCE: Teleoperators and Virtual Environments,

54:359{366, Decemb er 1996.

[20] Diane Chi, Evangelos Kokkevis, Omolola Ogunyemi, Rama Bindiganavale, Mike

Hollick, John Clarke, Bonnie Webb er, and Norman Badler. Simulated casualties and

medics for emergency training. In K.S. Morgan, H. Ho man, D. Stredney, and S.J.

Weghorst, editors, Medicine Meets Virtual Reality, volume 39, pages 486{494, 1997.

[21] Michael F. Cohen. Interactive spacetime control for animation. In Computer Graphics (SIGGRAPH '92 Proceedings), volume 26, pages 293–302, July 1992.

[22] Martha Davis. Effort-shape analysis: Evaluation of its logic and consistency and its systematic use in research. In Irmgard Bartenieff, Martha Davis, and Forrestine Paulay, editors, Four Adaptations of Effort Theory in Research and Teaching, New York, 1970. Dance Notation Bureau, Inc.

[23] Martha Davis, editor. Understanding Body Movement: An Annotated Bibliography. Arno Press, New York, 1972.

[24] Martha Davis. Laban analysis of nonverbal communication. In Shirley Weitz, editor, Nonverbal Communication: Readings with Commentary, pages 182–206. Oxford University Press, New York, 2nd edition, 1979.

[25] Martha Davis, editor. Movement Studies: A Journal of the Laban/Bartenieff Institute of Movement Studies. Laban/Bartenieff Institute of Movement Studies, Inc., New York, 1987.

[26] Martha Davis. Steps to achieving observer agreement: The LIMS reliability project. Movement Studies: A Journal of the Laban/Bartenieff Institute of Movement Studies, 2:7–20, 1987.

[27] Cecily Dell. A Primer for Movement Description: Using Effort-Shape and Supplementary Concepts. Dance Notation Bureau, Inc., New York, 1970.

[28] Scott L. Delp, Peter Loan, Cagatay Basdogan, and Joseph M. Rosen. Surgical simulation: An emerging technology for training in emergency medicine. PRESENCE: Teleoperators and Virtual Environments, 6(2):147–159, April 1997.

[29] David Ebert, Kent Musgrave, Darwyn Peachey, Ken Perlin, and Steven Worley. Texturing and Modeling: A Procedural Approach. Academic Press, second edition, July 1998.

[30] N. Eshkol and A. Wachmann. Movement Notation. Weidenfeld and Nicholson, London, 1958.

[31] T. Flash and N. Hogan. The coordination of arm movements: An experimentally confirmed mathematical model. Journal of Neuroscience, 5:1688–1703, 1985.

[32] Stephen Glazier. Random House Word Menu. Random House, New York, 1992.

[33] Michael Gleicher. Retargeting motion to new characters. In SIGGRAPH 98 Conference Proceedings, pages 33–42, July 1998.

[34] Michael Gleicher and Peter Litwinowicz. Constraint-based motion adaptation. The Journal of Visualization and Computer Animation, 9(2):65–94, 1998.

[35] Larry Gritz and James K. Hahn. Genetic programming evolution of controllers for 3-D character animation. In Genetic Programming 1997: Proceedings of the Second Annual Conference, pages 139–146, Stanford University, CA, USA, 13–16 July 1997. Morgan Kaufmann.

[36] Eric Guaglione. The art of Disney's 'Mulan'. In Siggraph 1998 Course, 1998.

[37] Ann Hutchinson Guest. Choreo-graphics: A Comparison of Dance Notation Systems from the Fifteenth Century to the Present. Gordon and Breach, New York, 1989.

[38] Craig Hayes. Starship Troopers. In Rick Parent, editor, Conference Abstracts and Applications, Siggraph 1998, page 311, 1998. Animation sketch.

[39] Jessica K. Hodgins, Wayne L. Wooten, David C. Brogan, and James F. O'Brien. Animating human athletics. In SIGGRAPH 95 Conference Proceedings, pages 71–78, August 1995.

[40] Francis Edward Simon Hunt, George Politis, and Don Herbison-Evans. LED: An interactive graphical editor for Labanotation. Technical Report 343, Basser Department of Computer Science, University of Sydney, 1989.

[41] Ann Hutchinson. Labanotation. Theatre Arts Books, New York, 3rd edition, 1977.

[42] Paul M. Isaacs and Michael F. Cohen. Controlling dynamic simulation with kinematic constraints, behavior functions and inverse dynamics. In Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 215–224, July 1987.

[43] Mary Ritchie Key. Nonverbal Communication: A Research Guide and Bibliography. The Scarecrow Press, Inc., Metuchen, N.J., 1977.

[44] Mary Ritchie Key and Janet Skupien. Body Movement and Nonverbal Communication: An Annotated Bibliography. Indiana University Press, Bloomington, Indiana, 1982.

[45] Albrecht Knust. Dictionary of Kinetography Laban (Labanotation). MacDonald and Evans, Estover, Plymouth, 1979.

[46] Doris H. U. Kochanek and Richard H. Bartels. Interpolating splines with local tension, continuity, and bias control. In Computer Graphics (SIGGRAPH '84 Proceedings), volume 18, pages 33–41, July 1984.

[47] Yoshihito Koga, Koichi Kondo, James Kuffner, and Jean-Claude Latombe. Planning motions with intentions. In Proceedings of SIGGRAPH '94 (Orlando, Florida, July 24–29, 1994), pages 395–408, July 1994.

[48] Evangelos Kokkevis, Dimitri Metaxas, and Norman I. Badler. User-controlled physics-based animation for articulated figures. In Computer Animation '96, Geneva, Switzerland, June 1996.

[49] Rudolf Laban. Choreutics. Macdonald & Evans, London, 1966. Annotated and edited by Lisa Ullmann.

[50] Rudolf Laban. The Mastery of Movement. Macdonald and Evans, Plymouth, 4th edition, 1980. Revised and enlarged by Lisa Ullmann.

[51] Rudolf Laban and F. C. Lawrence. Effort: Economy in Body Movement. Plays, Inc., Boston, 1974.

[52] LabanWriter. Dance Notation Bureau Extension, The Ohio State University, Department of Dance, 1813 N. High Street, Columbus, Ohio 43210.

[53] John Lasseter. Principles of traditional animation applied to 3D computer animation. In Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 35–44, July 1987.

[54] Joseph F. Laszlo, Michiel van de Panne, and Eugene Fiume. Limit cycle control and its application to the animation of balancing and walking. In SIGGRAPH 96 Conference Proceedings, pages 155–162, August 1996.

[55] Zicheng Liu, Steven J. Gortler, and Michael F. Cohen. Hierarchical spacetime control. In Proceedings of SIGGRAPH '94 (Orlando, Florida, July 24–29, 1994), pages 35–42, July 1994.

[56] MacBenesh. DanceWrite, 119 Glen Crescent, Thornhill L4J 4W3, Ontario, Canada, (905) 764-0046, [email protected].

[57] Vera Maletic. Body, Space, Expression: The Development of Rudolf Laban's Movement and Dance Concepts. Mouton de Gruyter, New York, 1987.

[58] Kevin Montgomery, Michael Stephanides, Joel Brown, Jean-Claude Latombe, and Stephen Schendel. Virtual reality in microsurgery training. In Medicine Meets Virtual Reality, volume 41, 1999.

[59] Carol-Lynne Moore and Kaoru Yamamoto. Beyond Words: Movement Observation and Analysis. Gordon and Breach Science Publishers, New York, 1988.

[60] Michael Moore, Christopher Geib, and Barry Reich. Planning for reactive behaviors in hide and seek. In Proc. 5th Conference on Computer Generated Forces and Behavioral Representation, 1995.

[61] Claudia L. Morawetz and Thomas W. Calvert. Goal-directed human animation of multiple movements. In Proceedings of Graphics Interface '90, pages 60–67, May 1990.

[62] J. Thomas Ngo and Joe Marks. Spacetime constraints revisited. In Computer Graphics (SIGGRAPH '93 Proceedings), volume 27, pages 343–350, August 1993.

[63] Laban/Bartenieff Institute of Movement Studies (LIMS). 234 Fifth Avenue, Room 203, New York, NY, 10001; (212) 477-4299.

[64] Ken Perlin. Real time responsive animation with personality. IEEE Transactions on Visualization and Computer Graphics, 1(1):5–15, March 1995.

[65] Ken Perlin and Athomas Goldberg. IMPROV: A system for scripting interactive actors in virtual worlds. In SIGGRAPH 96 Conference Proceedings, pages 205–216, August 1996.

[66] Janis Pforsich. Personal communication.

[67] Luca Prasso, Juan Bahler, and Jonathan Gibbs. The PDI crowd system for Antz. In Rick Parent, editor, Conference Abstracts and Applications, Siggraph 1998, page 313, 1998. Animation sketch.

[68] Marc H. Raibert and Jessica K. Hodgins. Animation of dynamic legged locomotion. In Computer Graphics (SIGGRAPH '91 Proceedings), volume 25, pages 349–358, July 1991.

[69] Craig W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. In Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 25–34, July 1987.

[70] Charles Rose, Michael F. Cohen, and Bobby Bodenheimer. Verbs and adverbs: Multidimensional motion interpolation. IEEE Computer Graphics & Applications, 18(5), September–October 1998.

[71] Ted Shawn. Every Little Movement. M. Witmark & Sons, New York, 1954.

[72] Karl Sims. Evolving 3D morphology and behavior by competition. In R. Brooks and P. Maes, editors, Artificial Life IV Proceedings, pages 28–39. MIT Press, 1994.

[73] Karl Sims. Evolving virtual creatures. In Proceedings of SIGGRAPH '94 (Orlando, Florida, July 24–29, 1994), pages 15–22, July 1994.

[74] Scott Steketee and Norman Badler. Parametric keyframe interpolation incorporating kinetic adjustment and phasing control. In Computer Graphics (SIGGRAPH '85 Proceedings), volume 19, pages 255–262, July 1985.

[75] Frank Thomas and Ollie Johnston. The Illusion of Life: Disney Animation. Hyperion, New York, 1995.

[76] Deepak Tolani. Inverse Kinematics Methods for Human Modeling and Simulation. PhD thesis, University of Pennsylvania, 1998.

[77] Xiaoyuan Tu and Demetri Terzopoulos. Artificial fishes: Physics, locomotion, perception, behavior. In Proceedings of SIGGRAPH '94 (Orlando, Florida, July 24–29, 1994), pages 43–50, July 1994.

[78] Y. Uno, M. Kawato, and R. Suzuki. Formation and control of optimum trajectory in human multijoint arm movement. Biological Cybernetics, 61:89–101, 1989.

[79] Munetoshi Unuma, Ken Anjyo, and Ryozo Takeuchi. Fourier principles for emotion-based human figure animation. In SIGGRAPH 95 Conference Proceedings, pages 91–96, August 1995.

[80] Munetoshi Unuma and Ryozo Takeuchi. Generation of human motion with emotion. In Nadia Magnenat Thalmann and Daniel Thalmann, editors, Computer Animation '91, pages 77–88. Springer-Verlag, New York, NY, 1991.

[81] Harald G. Wallbott. The measurement of human expression. In Walburga von Raffler-Engel, editor, Aspects of Nonverbal Communication, pages 203–228. Swets and Zeitlinger, Lisse, Netherlands, 1980.

[82] Harald G. Wallbott. Analysis of nonverbal communication. In Uta M. Quasthoff, editor, Aspects of Oral Communication, pages 480–488. Walter de Gruyter, New York, 1995.

[83] Shirley Weitz, editor. Nonverbal Communication: Readings with Commentary. Oxford University Press, New York, 2nd edition, 1979.

[84] Jane Wilhelms. Dynamic experiences. In Norman I. Badler, Brian A. Barsky, and David Zeltzer, editors, Making Them Move: Mechanics, Control, and Animation of Articulated Figures, pages 265–279. Morgan Kaufmann, 1991.

[85] Andrew Witkin and Michael Kass. Spacetime constraints. In Computer Graphics (SIGGRAPH '88 Proceedings), volume 22, pages 159–168, August 1988.

[86] Andrew Witkin and Zoran Popović. Motion warping. In SIGGRAPH 95 Conference Proceedings, pages 105–108, August 1995.

[87] QinXin Yu and Demetri Terzopoulos. Synthetic motion capture for interactive virtual worlds. In Computer Animation 1998, pages 2–10. IEEE Computer Society, June 1998.

[88] Vladimir M. Zatsiorsky. Kinematics of Human Motion. Human Kinetics, Champaign, IL, 1998.

[89] Jianmin Zhao and Norman Badler. Inverse kinematics positioning using nonlinear programming for highly articulated figures. ACM Transactions on Graphics, 13(4):313–336, October 1994.

[90] Liwei Zhao and Norman Badler. Gesticulation behaviors for virtual humans. Sixth Pacific Conference on Computer Graphics and Applications, pages 161–168, October 1998.