A MOTION CONTROL SCHEME FOR ANIMATING
EXPRESSIVE ARM MOVEMENTS
DIANE M. CHI
A DISSERTATION
in
COMPUTER AND INFORMATION SCIENCE
Presented to the Faculties of the University of Pennsylvania in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy.

1999

Norman I. Badler
Supervisor

Jean Gallier
Graduate Group Chair
COPYRIGHT
Diane M. Chi 1999
ABSTRACT
A MOTION CONTROL SCHEME FOR ANIMATING
EXPRESSIVE ARM MOVEMENTS
Diane M. Chi
Supervisor: Norman I. Badler
Current methods for figure animation involve a tradeoff between the level of realism
captured in the movements and the ease of generating the animations. We introduce a
motion control paradigm that circumvents this tradeoff--it provides the ability to generate
a wide range of natural-looking movements with minimal user labor.

Effort, which is one part of Rudolf Laban's system for observing and analyzing
movement, describes the qualitative aspects of movement. Our motion control paradigm
simplifies the generation of expressive movements by proceduralizing these qualitative
aspects to hide the non-intuitive, quantitative aspects of movement. We build a model
of Effort using a set of kinematic movement parameters that defines how a figure moves
between goal keypoints. Our motion control scheme provides control through Effort's four-
dimensional system of textual descriptors, providing a level of control thus far missing from
behavioral animation systems and offering novel specification and editing capabilities on
top of traditional keyframing and inverse kinematics methods. Since our Effort model is
computationally inexpensive, Effort-based motion control systems can work in real-time.

We demonstrate our motion control scheme by implementing EMOTE (Expressive
MOTion Engine), a character animation module for expressive arm movements. EMOTE
works with inverse kinematics to control the qualitative aspects of end-effector specified
movements. The user specifies general movements by entering a sequence of goal positions
for each hand. The user then expresses the essence of the movement by adjusting sliders
for the Effort motion factors: Space, Weight, Time, and Flow. EMOTE produces a wide
range of expressive movements, provides an easy-to-use interface (that is more intuitive
than joint angle interpolation curves or physical parameters), and features interactive
editing and real-time motion generation.
Acknowledgements
When I reflect on these past few years of graduate study, there are numerous people to
whom I am grateful. Above all, I am deeply indebted to my advisor, Professor Norm Badler.
Without his vision and inspiration, this work would never have been imagined, much
less demonstrated. Norm has been an ideal advisor--sensing when I needed direction or
motivation, yet trusting me with independence and responsibility. I admire his intelligence,
managerial abilities, and selfless devotion to his students, and am grateful for his years of
advice, guidance, and support.

I am also grateful to Janis Pforsich, who was instrumental in providing the LMA
expertise for the project. Her enthusiasm and willingness to explore the potential of
computer technology were true assets, and I appreciate her role as a teacher and a friend.

My committee, Dr. Armin Bruderlin, Dr. Martha Palmer, Professor CJ Taylor,
and Professor Dimitri Metaxas, deserves special thanks for reading my papers and
providing insightful comments. Their varied perspectives brought truly useful suggestions
that strengthened the work and its presentation, and their interest in the project was
encouraging.

Others who deserve thanks include: Deepak Tolani for providing the inverse kinematics
code; Mike Pan for his modeling and video assistance, along with Mark Palatucci, for being
guinea pigs for my user tests; Amy Matthews, Connie Cook, and Janet Hamburg for their
Effort model evaluations; Karen Carter for making the lab a pleasant place to work; Rama
Bindiganavale and John Granieri for their lab software support; Harold Sun and Christian
Vogler for setting up the motion capture system; Professor Bonnie Webber, Dr. John
Clarke, and Dr. Yumi Iwasaki for their guidance in my earlier research work; Professors
Mark Steedman and Jean Gallier for their work as graduate student advocates; and Mike
Felker for keeping the department running smoothly.

I am also grateful to the National Physical Science Consortium and the Department of
Defense at Fort Meade for supporting my graduate studies and encouraging women and
minorities to enter underrepresented fields of study.

Thanks to my "dissertation support group"--Omolola Ijeoma Ogunyemi and Sonu
Chopra--for offering a listening ear, providing sound advice, and joining me on all those
"stress-relieving" shopping trips and candy runs. I appreciate the many past and present
members of the graphics lab who were valuable colleagues and friends, including Vangelis
Kokkevis, Ken Noble, Roxana Cantarovici, Charles Erignac, Bond-Jay Ting, Jeff Nimeroff,
Barry Reich, and Jonathan Kaye. I'm also grateful for my friends David Brogan, Chrissy
Benson, Alan Kuo, Jerome Strach, and Margie Wendling.

I would like to thank David Jelinek for his input in various technical discussions, for
laboring through my papers and providing sometimes helpful comments, and for being
my best friend during the last few years.

I am constantly reminded of how lucky I am to have such a wonderful family. Marie
and Mike have been the best siblings a little sister could ask for--from the childhood antics
through the transition into adults, they have shared and shaped the important moments
of my life. Their unconditional support means a lot to me. I also appreciate my brother-
in-law Darrell for being my financial, hardware, and home repair advisor. My nephews
Jeffrey and Kevin are constant sources of fun, humor, and entertainment, reminding me of
the truly important things in life. Thanks are due to Auntie Alice, my California relatives,
and the close family friends who "adopted" me while I was away from home.

I want to express my utmost respect and admiration for my father and my mother.
They are my role models. My father built a successful career, starting out with little
support and a lot of determination. He has also shown me that age is of no matter--one
can always be young at heart. My mother is the most dedicated and generous person I
know. I admire her uncomplaining sacrifices and her ability to provide practical advice
even in the face of crisis. I thank them for their unflagging support and guidance, and I
dedicate this work to them.
Contents

Acknowledgements

1 Introduction
1.1 Our Approach
1.2 Motivation

2 Related Work
2.1 Motion Control
2.2 Expressive Movement
2.3 Biomechanics
2.4 Computers and Dance Notation

3 Background
3.1 Nonverbal Communication
3.2 Laban Movement Analysis

4 Effort Model
4.1 Translating Effort into Movement Parameters
4.2 Low-level Movement Parameter Definitions
4.2.1 Trajectory Definition
4.2.2 Parameterized Timing Control
4.2.3 Flourishes
4.3 Parameter Settings
4.3.1 Parameter Settings for Individual Effort Elements
4.3.2 Generating Effort Ranges and Combinations

5 Implementation: EMOTE
5.1 User Interaction
5.2 Arm Model
5.3 Method for Using Effort Model
5.4 Examples
5.4.1 Individual Effort Elements
5.4.2 Gestures Accompanying Speech

6 Conclusions
6.1 Evaluation
6.1.1 Effort Model Evaluation
6.1.2 User Evaluations
6.2 Extensions
6.3 Contributions

Bibliography

List of Tables

3.1 Motion Factors and Effort Elements
4.1 Low-level Parameter Settings for Effort Elements
6.1 Overall Percentages for Effort Model Evaluation
6.2 Percentage Correct for Individual Effort Elements

List of Figures

4.1 Stick Figure Formed by Motion Capture Sensors
4.2 Velocity Function
4.3 Varying Inflection Point to Obtain Acceleration and Deceleration (Inflection Point Value Given in Parentheses)
4.4 Varying Time Exponent to Magnify Acceleration and Deceleration
4.5 Varying Initial and Final Velocities to Obtain Anticipation and Overshoot (Initial and Final Velocities Given in Parentheses)
4.6 Sine Factor in Squash Equation
4.7 Sine Factor in Equation for Breath
4.8 Multiplier in Wrist Bend Equation
5.1 Interface for Adjusting End-Effector Positions
5.2 Key Editor
5.3 Effort Editor
5.4 Effort Graph Editor
5.5 Arm Model
5.6 End-Effector Keys for an Example Movement Sequence
5.7 Every Fifth Frame from Animation of Indirect and Direct Efforts (Ordered Left to Right, Top to Bottom)
5.8 Every Fifth Frame from Animation of Light and Strong Efforts (Ordered Left to Right, Top to Bottom)
5.9 Every Tenth Frame from Animation of Sustained and Sudden Efforts (Ordered Left to Right, Top to Bottom)
5.10 Every Tenth Frame from Animation of Free and Bound Efforts (Ordered Left to Right, Top to Bottom)
5.11 Every Fourth Frame from Animation of a Denial (Ordered Left to Right, Top to Bottom)
5.12 Every Fourth Frame from Animation of a Gleeful Exclamation (Ordered Left to Right, Top to Bottom)
Chapter 1

Introduction

As a society, we are surrounded by other people. Consciously and subconsciously, we are
constantly observing how people move. We often recognize others solely by catching a
glimpse of them walking or moving. When people limp or make subtle compensations for
an injury, we immediately recognize something unnatural about their movements. Through
constant observation of everyday life, we become subconsciously aware of the subtleties of
human movement. Animations of virtual characters must capture these subtleties in order
to appear life-like and believable.

Current methods for human or character animation involve a tradeoff between the
level of realism captured in the movements and the ease of generating the animations. At
one end of the spectrum, computer-animated features and movie special effects display
extremely realistic individuals with personalized movement characteristics; however, they
require teams of animators using labor-intensive systems where every movement, from the
flick of a finger to a full-body leap, must be explicitly specified. At the other end of the
spectrum, behavioral animation systems can automatically generate multiple characters
interacting with each other and their environments by merely specifying a set of initial
conditions; however, the various characters often move in a fairly mechanical manner and
share very similar motions. A method that combines the advantages found at these two
extremes--the ability to generate a wide range of natural-looking movements with minimal
user labor--can play an important role in circumventing this tradeoff.
1.1 Our Approach

We introduce a motion control paradigm that simplifies the generation of expressive
movements by proceduralizing the qualitative aspects of movement to hide the non-intuitive
quantitative aspects. To do this, we needed (1) a language for specifying expressive
movements, and (2) a translation of this language into quantitative movement parameters.

We sought a language that would cover the complex description space of expressive
movement while still providing an intuitive interface. We examined the nonverbal
communication and dance notation literature for methods of describing, notating, and
recording human movement. We found that the Effort component of Laban Movement
Analysis [50, 51, 27, 7, 59] met our requirements, has a solid foundation based on
observation and analysis of humans performing a wide range of movements, and is being
used as a research tool in a growing number of disciplines. Effort is the part of Rudolf
Laban's theories on movement that describes the qualitative aspects of movement using
textual descriptors along four motion factors: Space, Weight, Time, and Flow (Chapter
3). The extremes of each motion factor give the eight Effort Elements: Indirect, Direct,
Light, Strong, Sustained, Sudden, Free, and Bound.

We derived quantitative structures to model Effort. Through much trial and error, we
established an empirical model of each Effort Element and designed techniques to generate
ranges and combinations of Effort (Chapter 4).

We demonstrate the use of our motion control scheme by implementing EMOTE
(Expressive MOTion Engine), a 3D animation control module for expressive arm
movements. EMOTE lets users specify movements using end-effector positions and Effort
settings. A sequence of end-effector positions provides a general spatial description of a
movement, while the Effort settings define the desired qualitative nature of the movement.
EMOTE uses inverse kinematics (IK) to compute postures for an articulated figure from
the specification of end-effector locations [89, 76]. Inverse kinematics, however, does not
specify how a figure achieves a computed posture or changes between a series of postures.
Thus, IK output is usually used as input to a separate animation process, such as an
end-effector linear interpolator or figure control equations. EMOTE lets a user express the
essence of the movement by setting sliders for each of the four qualitative Effort motion
factors: Space, Weight, Time, and Flow. EMOTE uses these Effort settings to compute the
values for a set of low-level motion parameters that specify an animation that follows the
defined position sequence and displays the selected Effort qualities. Since our translation
process is computationally inexpensive, EMOTE provides interactive editing and real-time
motion generation.
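
To make this control flow concrete, the sketch below shows one way such a pipeline could be organized. It is a minimal illustration, not EMOTE's actual code: the class and function names, the [-1, +1] slider convention, and the toy timing mapping are assumptions made for the example; the real translation from Effort settings to low-level parameters is developed in Chapter 4.

    from dataclasses import dataclass

    @dataclass
    class Effort:
        # Each motion factor is a value in [-1, +1]; e.g., Time runs from
        # Sustained (-1) to Sudden (+1). All names here are illustrative.
        space: float = 0.0
        weight: float = 0.0
        time: float = 0.0
        flow: float = 0.0

    def effort_to_parameters(e: Effort) -> dict:
        # Hypothetical mapping: more Sudden means a shorter transition with
        # a sharper timing curve. The empirical model appears in Chapter 4.
        return {"duration_scale": 1.0 - 0.5 * e.time,
                "ease_exponent": 1.0 + 0.5 * e.time}

    def solve_ik(end_effector_pos):
        # Stub: a real system computes shoulder/elbow/wrist angles here.
        return end_effector_pos

    def animate(goal_keys, effort: Effort, fps: int = 30):
        """Turn hand goal positions plus Effort settings into per-frame poses."""
        params = effort_to_parameters(effort)
        frames = []
        for (x0, y0, z0), (x1, y1, z1) in zip(goal_keys, goal_keys[1:]):
            n = max(2, int(fps * params["duration_scale"]))
            for i in range(n):
                s = (i / (n - 1)) ** params["ease_exponent"]  # timing curve
                frames.append(solve_ik((x0 + s * (x1 - x0),
                                        y0 + s * (y1 - y0),
                                        z0 + s * (z1 - z0))))
        return frames

    # e.g., a fairly Sudden movement between two hand goals:
    poses = animate([(0.0, 0.0, 0.0), (0.3, 0.5, 0.2)], Effort(time=0.8))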
1.2 Motivation

We seek to provide a useful tool that enables further automation of expressive character
animations. We believe such a system can play an important role in keyframe animation
systems, virtual environments, games, and behavioral animation systems.

For novice keyframe animators, an Effort-based motion control system provides the
ability to generate a broad range of motions with a short learning curve and an easy-to-use
interface. For skilled animators, our system provides a blocking tool for quickly sketching
out movement using a language that directly supports the desired intent of the animated
character, in a process that is repeatable and provides interactive editing at the parameter
level. Traditional low-level editing methods can be provided to allow animators to further
refine their animations.

Users of virtual environment systems and computer game players often interact
with other individuals, both user-controlled actors (avatars) and autonomous synthetic
characters. With current systems, the synthetic characters either have to be completely
specified with a library of reactive movements, or the characters all have extremely similar
actions and reactions. For realistic interactions, virtual characters must appear to be
individuals--they must move naturally and with subtle "personality" traits. In fact, having
them act independently is more important than having them look different physically,
because we commonly use actions and other non-verbal communication to try to infer
emotional state, attitude, and ultimately intent. Behavioral animation uses a hierarchical
structure to define crowds, herds, and schools of characters [13, 65, 69, 77]. Such systems
define low-level movements such as a character's basic means of locomotion, as well as
higher level behaviors such as schooling or flocking, obstacle avoidance, pursuit and evasion,
and aggressiveness. However, these systems frequently omit the "middle" layer--variations
on low-level movements to create different expressions, personalities, and intents. Further,
behavioral systems use characters represented by simple models which offer a very small
range of movements and expression. Recent feature films have used crowd simulations
to create armies in Disney's Mulan [36], worker ants in Pacific Data Images' Antz [67], and
Tippett Studios' bugs in Starship Troopers [38]. Casts of characters were created by varying
body shapes, props, and clothing; however, the movements of these characters were drawn
from a small pre-defined library of motions generated by keyframing, motion capture, stop
motion, and procedural animation.

Games, virtual environments, and computer-generated animations have obvious
applications in entertainment; however, synthetic characters also enable a participant
to experience a wide variety of scenarios with cognitive and decision-making challenges
without the physical consequences or harm that could result in the real world. For instance,
prototype systems already exist for training battlefield medics [2, 19, 20], firefighters [12],
and surgeons [58, 28].

A system that allows users to customize basic movements based on a character's
personality, mood, and attitudes is the first step towards simplifying the development
of a repertoire of characters with a wide range of expressiveness. By selecting a general
human movement description language to customize motions, such a system can also lead to
the generation of virtual characters from different cultures. Although certain gestures are
culture-specific, the descriptions of the movements of individuals with the same emotions
and intent are often similar. A playful child moves with free, indirect abandon; an aggressor
makes strong, direct, and sudden attacks; and a soldier marches in bound, sustained strides.
Our motion control paradigm allows the user to customize movements through general
qualitative descriptors. This is the first step towards enabling a system where a user
creates a character by specifying attitudes and intentions, which in turn may eventually lead
to the automatic generation of appropriate movements from speech text, a storyboard
script, or a behavioral simulation.
Chapter 2

Related Work

In this chapter, we examine the current approaches to motion control used in computer
animation. Then, we discuss techniques that have been developed to specifically address
emotion and expression in animated movement. Next, we briefly overview the biomechanics
research and related work on human movement. We conclude with a survey of previous
efforts to combine dance notation and computers.

2.1 Motion Control

The basic approaches to motion control for articulated figures include: keyframing,
dynamic simulation, motion capture, and procedural methods. Each differs in the amount
of user specification, the skill required to generate good-looking animations, the ease of
editing, and the generality of its application.
Keyframing articulated figures uses kinematics, requiring the user to specify the
positions and orientations of a figure and its limbs at major points in a movement. The
computer automatically generates the frames in-between the specified keyframes. With
keyframing, the animator has a lot of control over the final animation. To change
the generated animation, the animator can add, remove, or change keyframes. The
disadvantage of keyframing is that it requires a certain artistry and skill in order to capture
life-like movements and personality. Current 3D keyframing systems provide tools so users
can view and edit curves representing the changing parameter values over an animation.
For instance, users are often able to edit joint angle interpolation curves, a motion's velocity
curve, or an object's trajectory in space. Another technique, inverse kinematics, reduces
the amount of user input by allowing users to specify end-effector locations to determine the
posture of a figure [89]. These tools aid the tedious specification tasks of the animator but
fail to offer assistance in capturing expression. Some systems also provide shape blending
tools that morph a source object into a target object over a specified amount of time. This
technique is typically used for facial or soft object animation and not for movements of
articulated figures.
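
As a toy illustration of the in-betweening step just described (our example, not taken from any particular package), the sketch below linearly interpolates a single joint angle between specified keyframes; production systems fit splines and let the animator edit the resulting curves directly.

    def inbetween(keyframes, frame):
        """keyframes: sorted list of (frame_number, joint_angle_degrees)."""
        for (f0, a0), (f1, a1) in zip(keyframes, keyframes[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)
                return a0 + t * (a1 - a0)  # linear; real tools fit splines
        raise ValueError("frame outside keyframe range")

    # e.g., an elbow angle keyed at frames 0, 10, and 25:
    keys = [(0, 10.0), (10, 90.0), (25, 45.0)]
    print([round(inbetween(keys, f), 1) for f in range(0, 26, 5)])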
Dynamic simulation (sometimes called physics-based modeling) uses the physics of
bodies in motion to produce animations with a realistic impression of weight, friction,
inertia, and other physical properties. However, dynamic simulation requires solving
the equations of motion, which is computationally expensive and may result in unstable
solutions. Also, users must provide a detailed physical description for all objects in
the scene [84, 68]. This description must include object properties, such as mass
(along with its distribution over the object), moments of inertia, segment lengths, and
joint limits, as well as any forces or torques acting on the body, any frictional or
damping effects that might be triggered through collisions or other movements, and
any effects due to energy expenditure and transfer. Since dynamic simulation generates
motions entirely from the physical description, users have limited and indirect control
over the final animation, and editing proves non-intuitive. Efforts to simplify control
of dynamic systems while maintaining physically correct animations include blending
kinematic and dynamic techniques [42, 8, 84, 14, 48], customizing controllers for specific
tasks [68, 39], and automatically generating controllers using genetic algorithms or control
theory [73, 72, 35, 62, 54]. While these methods capture the physical realism of bodies in
motion, they do not address how to change the movements to display different expressions
or intentions. Spacetime constraints methods attempt to provide the user with better
control over generated animations and editing capabilities [85, 21, 55]. However, thus
far, editing of how a motion is performed using spacetime constraint systems is limited to
qualities that directly translate to physical criteria. For instance, "don't waste energy"
minimizes kinetic energy, while "land hard at the end of a jump" maximizes the contact
force on landing. On the other hand, achieving expressive qualities such as "careful" or
"meandering" is not addressed. Also, spacetime constraints methods are non-interactive
and have been limited to movements of simple creatures.
Motion capture uses electro-magnetic or optical technologies to collect position and
orientation data of real human movement, which can then be used to animate articulated
figures. Motion capture techniques can capture the small nuances of an individual's
movement, enabling an animated character to display the intended expression of the
original human performer; however, motion capture requires expensive equipment and
quickly becomes impractical when an animation involves a large number of characters
and/or movements. Further, motion capture offers only indirect control over produced
animations--by requesting changes from the performer and re-capturing the subsequent
motions. Gleicher introduced a spacetime constraints technique for adapting motions from
one character to produce motion specifications for other, differently sized characters [33, 34].
For instance, using motion capture data of two people swing dancing, he changed the sizes
of the dancers, but was able to maintain their foot contact with the floor and their hand
contact with each other using his retargetting technique. Bindiganavale and Badler present
a method that automatically recognizes and maintains spatial, as well as visual, constraints
while mapping them to characters of different sizes [10]. For instance, the data for an adult
grabbing a mug on a table and drinking from it is modified to animate a child drinking
from the mug--they ensure that the child grasps the mug at the correct location and brings
it to his lips. These methods, however, provide no direct means of editing the expressive
qualities of captured movement.
Procedural animation methods define how a figure moves over time using a model
(often mathematical) that responds to some external input, either from a human user,
other procedures, or some means of sensing the current state.[1] Often procedural methods
are geared towards specific applications. For instance, procedural methods have been
used in animating a number of physical phenomena, such as cloth movements, waves, and
particle phenomena [29]. Bruderlin and Calvert present a procedural system for animating
the running movements of an articulated human-like figure [15]. Their system, RUNNER,
provides a user with high-level parameters and attributes to interactively alter an animation
to display a wide variety of human running styles. Parameters determine the running
stride and include velocity, step length, step frequency, flight height, and level of running
expertise. Attributes allow the user to individualize a run by setting movement variables
for the arms, torso, pelvis, and legs. RUNNER modifies a default running animation in
real-time according to changes specified by its user.

[1] We note that the broad definition given for procedural animation gives a disjoint set of techniques, some of which, by our categorization, use other fundamental approaches to animation, notably physically-based methods.
Behavioral animation, a subset of procedural methods, uses rules to define virtual
creatures that can sense and react to the environment. Various researchers have developed
behavioral animation systems to generate animations of multiple creatures with varying
personalities and/or goals [69, 77, 5, 60]. Tu and Terzopoulos create a "virtual marine
world" filled with different fishes [77]. They define three types of fish--predators, prey, and
pacifists--each with a different set of intentions and behaviors. Different fish types react
to their environments in different ways; however, since they are all physically modeled
using the same spring-mass model, their low-level movements are all the same. We note,
though, that the basic movement of real fish is fairly unexpressive (at least to this casual
observer!). Blumberg and Galyean present a method of directing autonomous creatures
at multiple levels [13]. Their hierarchical organization of behaviors allows commands that
reflect emotional state to initiate lower-level behaviors. In their example, a dog told to
display a happy state induces behaviors to set appropriate positions for his ears, tail, and
mouth and may also issue a meta-command for the dog to use a more jovial gait. Although
their system allows users to change the expression of characters, each expression and its
manifestations must be defined separately. Badler and his collaborators have implemented
behavioral systems with a more complex model--an articulated human figure. In the Hide
and Seek project, characters are assigned different roles as hiders or as the seeker in on-
the-fly animations of the children's game of the same name [60]. The role of the character
influences its goals and subsequent behavior towards other characters in the scene, but the
low-level movement (locomotion) of all characters is the same and fairly expressionless.

Procedural methods ease the work required of the animator by encoding some
information on how things move. Our motion control paradigm takes this approach
and applies it to a general-purpose application--the expressivity of movements. We
parameterize the non-intuitive, quantitative aspects of movement and provide the user
with more meaningful, interactive control through textual descriptors (Effort Elements).
Since our motion control paradigm is based on a comprehensive language for describing
movements, it can be applied to any type of movement and object.
2.2 Expressive Movement

Several researchers have specifically addressed the generation of expressiveness in
movement. Their approaches involve adding expressiveness to neutral motions [64, 79, 1]
or providing editing tools to modify expression or fit different constraints [16, 86, 70]. Most
of these techniques are not specific to any particular motion generation method and are
valuable tools for making existing motions more usable; however, they may prove costly
or difficult to use in generating the range of human expressivity. Other researchers have
developed systems that generate animations from high-level specifications [61, 47].
Ken Perlin gives the "visual impression of personality" to animated puppets using
stochastic noise functions [64]. The user controls the puppet through a panel of buttons
representing a set of primitive actions. The system smoothly blends the selected actions
into a coherent animation. The user can vary expression by adding a random component
to joints, modifying the bias on joints, and varying the transition times for different
primitive actions. These methods give characters a dynamic presence and a random sense
of attentiveness, which play a role in responsive virtual agents, but do not necessarily
present a natural, human-like demeanor. Also, varying expression by modifying joint
angle frequency and amplitude functions is non-intuitive.
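
The sketch below is in the spirit of this approach, though it is not Perlin's implementation: a smooth pseudo-random signal (a few summed sinusoids standing in for true coherent noise) is added to a joint angle so that a nominally idle character retains a dynamic presence.

    import math, random

    random.seed(7)
    _phases = [random.uniform(0, 2 * math.pi) for _ in range(3)]

    def smooth_noise(t: float) -> float:
        # Sum of sinusoids at octave frequencies with decaying amplitude;
        # a stand-in for coherent noise, not Perlin's actual function.
        return sum(math.sin(2 ** k * t + p) / 2 ** k
                   for k, p in enumerate(_phases)) / 2.0

    def noisy_angle(base_angle: float, t: float, amplitude: float = 5.0) -> float:
        """Joint angle in degrees with a smooth random wander superimposed."""
        return base_angle + amplitude * smooth_noise(t)

    # e.g., a nominally still elbow drifting a few degrees over time:
    print([round(noisy_angle(90.0, t * 0.1), 2) for t in range(5)])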
Unuma, Anjyo, and Takeuchi capture a wide variety of expression in human locomotion
[80, 79]. They use motion capture to collect joint angle data of a human subject
performing neutral and various emotion-influenced locomotion. From the discrete data,
they approximate the original movement with a continuous rescaled Fourier function model.
Their Fourier function model allows them to smoothly transition between two captured
motions using interpolation, as well as to generate exaggerated motions using extrapolation.
For instance, they can interpolate between a "normal" walk and a "tired" walk to get
various degrees of "tiredness"; more importantly, they can extrapolate to get a "brisk" and
an exaggerated "tired" walk. Further, they generate Fourier characteristic functions for
different emotions by taking the difference between the Fourier coefficients of a functional
model for an emotion-influenced locomotion and those for a neutral locomotion. This
allows them to superimpose emotions onto models for other motions. For instance, they
can superimpose "briskness" or "tiredness" onto a "run" model, even if the characteristic
functions for "brisk" and "tired" were defined from brisk and tired walks. In addition, they
provide interactive controls for step, speed, and hip position. In [80], they show examples
of walking and running with the following emotions: hollow, vacant, graceful, cold, brisk,
happy, vivid, and hot. They seem to have captured a wide variation in movements; however,
their methods work only on cyclic motions. Also, although users can interactively adjust
values for the modeled emotions, the addition of other emotions requires repeating the
lengthy process of motion capture on a human subject and generating its functional model
and characteristic function.
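
The sketch below illustrates the core idea as we understand it (a reconstruction, not the authors' code): represent a cyclic joint-angle signal by a few Fourier coefficients and blend in coefficient space, where s between 0 and 1 interpolates between the two motions and s outside that range extrapolates.

    import numpy as np

    def blend_gaits(theta_a, theta_b, s, n_harmonics=8):
        """Blend two equal-length joint-angle cycles in Fourier space."""
        c_a = np.fft.rfft(theta_a)[:n_harmonics]
        c_b = np.fft.rfft(theta_b)[:n_harmonics]
        c = c_a + s * (c_b - c_a)          # blend the coefficients
        full = np.zeros(len(theta_a) // 2 + 1, dtype=complex)
        full[:n_harmonics] = c
        return np.fft.irfft(full, n=len(theta_a))

    t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    neutral = 30 * np.sin(t)               # toy hip-angle cycles
    tired = 20 * np.sin(t) + 5 * np.sin(2 * t)
    half_tired = blend_gaits(neutral, tired, s=0.5)   # interpolation
    exhausted = blend_gaits(neutral, tired, s=1.5)    # extrapolation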
Amaya, Bruderlin, and Calvert present a more general method for adding emotion to
motions. They derive emotional transforms from motion capture data by quantifying the
differences between neutral and emotion-driven actions using the speed of the end-effector
and the spatial amplitude of joint angle signals [1]. They then use the emotional transforms
to add emotion to neutral actions. Further, by dividing the joints of an articulated human
figure into joint categories, they are able to apply emotional transforms derived from one
part of the body to another part of the body. For instance, an emotional transform derived
from angry and sad arm movements (drinking motions) was applied to the legs to generate
angry and sad kicking motions. Further, their emotional transforms can be applied to
simulated, keyframed, and procedurally generated motions. The authors use the technique
to capture ten emotions or moods: neutral, angry, sad, happy, fearful, tired, strong, weak,
excited, and relaxed. By separating the definition of basic movements from the transforms
required to generate emotions, the authors are able to generate a broad range of movements
with different types of expressivity. They note, however, that individuals differ in the ways
they express themselves, requiring different transforms to represent different personalities,
genders, cultures, and ages, which could result in a large database of movement transforms.
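
A toy version of this idea is sketched below. It is our illustration rather than the authors' method: each transform is reduced to a single timing ratio and a single amplitude ratio, whereas the paper works with end-effector speed and per-joint signal amplitudes.

    import numpy as np

    def derive_transform(neutral, emotional):
        """Each argument is a joint-angle array sampled over one action."""
        speed_ratio = len(neutral) / len(emotional)       # timing change
        amp_ratio = np.ptp(emotional) / np.ptp(neutral)   # amplitude change
        return speed_ratio, amp_ratio

    def apply_transform(motion, transform):
        speed_ratio, amp_ratio = transform
        n_out = max(2, int(round(len(motion) / speed_ratio)))
        resampled = np.interp(np.linspace(0, len(motion) - 1, n_out),
                              np.arange(len(motion)), motion)
        mean = resampled.mean()
        return mean + amp_ratio * (resampled - mean)  # scale about the mean

    # e.g., derive "angry" from arm data, then apply it to a kicking motion:
    neutral_arm = np.sin(np.linspace(0, np.pi, 60))
    angry_arm = 1.6 * np.sin(np.linspace(0, np.pi, 40))
    angry_kick = apply_transform(np.sin(np.linspace(0, np.pi, 50)),
                                 derive_transform(neutral_arm, angry_arm))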
In [16], Bruderlin and Williams introduce a set of techniques to modify existing motion
data (generated from motion capture, keyframing, or procedural animation). By treating
motion parameters (such as joint angles or coordinates) as sampled signals, they can
apply techniques from image and signal processing to modify the animated motions.
Multiresolution motion filtering passes a motion parameter signal through a series of filters,
decomposing the signal into a set of bandpass filter bands. An animator can adjust the
amplitudes of high, middle, or low frequency bands to add a nervous twitch, exaggerate
the movement, or constrain joint ranges, respectively. Multitarget motion interpolation
blends two different motions into a single motion. Dynamic timewarping resolves timing
differences between two motions to be blended. Waveshaping modifies input motions using
shaping functions and is useful for maintaining joint limits or adding subtle effects. Motion
displacement mapping permits local shaping of a signal while maintaining the global
shape of the signal. This allows an animator to change select keyframes (for instance, to
satisfy certain constraints or change an end-effector position) while maintaining the overall
character of the original motion. The system fits a spline curve through the displacements
in each degree of freedom and adds it to the original curve (signal).
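
The sketch below shows displacement mapping at its simplest (our example, not the authors' code): displacements measured at the edited frames are interpolated into a smooth curve and added back to the original signal; a piecewise-linear fit stands in for the spline here.

    import numpy as np

    def displacement_map(signal, edits):
        """signal: 1-D array of a joint angle over time.
        edits: {frame_index: new_value} from the animator; edited frames
        are assumed to lie strictly inside the clip."""
        frames = sorted(edits)
        disp_at_keys = [edits[f] - signal[f] for f in frames]
        # Pin the displacement to zero at the clip boundaries so regions
        # far from the edits are left untouched.
        xs = [0] + frames + [len(signal) - 1]
        ys = [0.0] + disp_at_keys + [0.0]
        displacement = np.interp(np.arange(len(signal)), xs, ys)
        return signal + displacement

    # e.g., raise frame 50 of a flat curve by 10 degrees:
    curve = np.zeros(100)
    edited = displacement_map(curve, {50: 10.0})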
Witkin and Popović describe a technique, similar to motion displacement mapping, for
editing captured or keyframed animation by warping motion parameter curves [86]. The
animator modifies the pose at particular frames, which are used as constraints on a smooth
deformation to be applied to the captured motion curves. The deformation satisfies the
constraints while maintaining the fine details of the captured motion. A large number of
realistic motions may be created from a single prototype motion sequence using just a few
keyframes to define the motion warp.
Rose, Cohen, and Bodenheimer present a method for leveraging existing motions--
they interpolate between structurally similar example motions along multiple dimensions
to create new motion [70]. Using an off-line authoring system, they parameterize the
motion "verbs" with "adverbs" and create a verb graph to specify transitions between
verbs. At runtime, these structures allow the user to modify animations in real-time by
changing adverb settings.
Koga, Kondo, Kuffner, and Latombe present a task-level animation system that
generates arm motions of a human figure moving an object to a goal location [47]. They
introduce a planner to compute collision-free paths and an inverse kinematics algorithm
for human arms based on neurophysiological studies. Though their system generates a
complete animation with minimal user specification, their focus is on the "intention" of
moving an object from one location to another and not on the underlying movement
qualities of the character and their expressive manifestations.
Morawetz and Calvert introduce a framework for an animation system, which perhaps
most closely matches our high-level goals, but takes a very different approach [61]. They
implement a portion of this framework as the GESTURE system, which uses a mock
expert system to add secondary motion to walking characters based on their user-specified
"personality" and "mood". Secondary movements are the subtle movements that don't
serve to satisfy a particular goal, but often reflect one's subconscious, inner attitudes
and play an important role in nonverbal communication. For instance, reaching for a
cup or opening a door are goal-directed primary movements, while tapping one's foot
or scratching one's head are examples of secondary motion. Personality is specified
through slider values for extrovert/introvert, cheerful/gloomy, assertive/passive, and
domineering/submissive. Moods include boredom, nervousness, fatigue, impatience, and
fear. For motion specification, GESTURE uses a graph to facilitate the sequencing of
movements and to allow for one movement to interrupt another. GESTURE takes as
input a high-level script, which chronologically lists actor movements and start times
using textual descriptions (such as "walk forward" or "scratch head with left hand") and
frame numbers. Each high-level movement must be defined using a pre-defined gesture
specification function. The system generates a detailed animation script, specifying all
joint angles for all characters in an animation.
2.3 Biomechanics

There has been a significant amount of research on the biomechanics of human motion;
however, the field is still fairly young, with no definitive models and several working
hypotheses. For instance, the minimum-jerk hypothesis suggests that people performing
skilled tasks move in a maximally smooth manner, minimizing the time rate of change of
acceleration [31]. Another model, the minimum-torque-change hypothesis, suggests that
motions display minimized torque derivatives [78]. Zatsiorsky provides a good introductory
text on elementary movement concepts as well as the biomechanical research literature
[88]. As none of the biomechanical models can explain all the complexities of human
movement, we take the artistic approach in developing our basic movement model (Chapter
4). We further justify this approach by the fact that actors (human as well as animated)
tend to move differently from humans in everyday situations--they must exaggerate their
movements, emotions, and reactions to capture an audience's attention.
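
For a straight point-to-point reach with zero velocity and acceleration at the endpoints, the minimum-jerk hypothesis mentioned above yields a well-known closed-form solution in which position follows a quintic polynomial of normalized time. The sketch below evaluates it; treat it as an illustration of that hypothesis, not as part of our movement model.

    def minimum_jerk(x0: float, xf: float, t: float, duration: float) -> float:
        """Minimum-jerk position along one axis at time t."""
        tau = min(max(t / duration, 0.0), 1.0)      # normalized time in [0, 1]
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth 0 -> 1 profile
        return x0 + (xf - x0) * s

    # e.g., hand x-coordinate during a one-second reach from 0 to 0.4 m:
    print([round(minimum_jerk(0.0, 0.4, t / 10, 1.0), 3) for t in range(11)])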
2.4 Computers and Dance Notation

For the past twenty years, people have been exploring ways to combine computers and
dance notation, both to simplify the process of notating dance and to facilitate computer
animation by using notations already established to describe human movement. Several
software programs exist for editing and printing dance notation scores. For instance,
there are LabanWriter [52] and LED [40] for Labanotation, and MacBenesh [56] for Benesh
Movement Notation. Others have used notation, Labanotation in particular, to animate
human figures [3, 18]. These projects have focused on the structural aspects of movement,
which are specified by Labanotation, but have not addressed the more qualitative aspects
of movement provided by Effort. Bishko suggests analogies between the "Twelve Principles
of Animation" [75] and Laban Movement Analysis (LMA) [11]. She shows that there is
an abstract relationship between LMA and traditional animation techniques, but does not
provide a practical means of exploiting this relationship. Badler was the first to propose
the use of Effort as a higher level of control for human figure animation [4]. Our work here
provides a method of implementing just such a system.

Since Effort provides a universal theory of human expression and a vocabulary for
movement dynamics, our motion control paradigm eliminates the need to build a database
of specific expressions and behaviors. Also, since Effort is expressed by combining and
varying the magnitude along just four motion factors, we are able to re-use our Effort
model to specify all types of expressive movements without requiring off-line modeling
for new expressions. Further, our Effort model is computationally efficient, performing in
real-time and allowing for interactive motion generation and editing.
Chapter 3

Background

Describing human motion is a formidable task. In dance alone, there are over a dozen
significant notations, each varying in approach, aims, strengths, and weaknesses [37]. The
Random House Word Menu has over five hundred verbs of motion [32]. Laban describes
movement as "one of man's languages" [49]; unfortunately, translating movement into a
non-visual, spoken or textual language is not straightforward.

To achieve our goals for a means of specifying expressive movements of computer-
animated characters, we sought a language for describing the qualitative aspects of
movement that was:

- systematic, with a limited number of terms (the set of adverbs in the English
  language is unwieldy),
- intuitive, so that users do not have to learn a new notation or express how they
  want a movement to be performed in abstract, quantitative terms,
- objective enough that different people can agree on the meanings of the terms, and
- not limiting on the range of represented movements.
We examined the nonverbal communication and dance notation literature for methods
of describing, notating, and recording human movement. We found that the Effort
component of Laban Movement Analysis was most appropriate for our purposes. This
chapter surveys the research in nonverbal communication and describes how other
notations used to study human behavior are inadequate for our purposes. Then, we provide
justifications for the use of Laban's notation as a research tool, give a brief history of
Laban's work, and describe Effort as a method for qualitative movement description.
3.1 Nonverbal Communication

Nonverbal communication encompasses research in a variety of disciplines and seeks
to answer a wide range of questions. Significant advances have been made in our
understanding of nonverbal communication; however, the field still seems to suffer
from overlapping terminology, categorizations, and ill-defined boundaries. Researchers
in nonverbal behavior include specialists from psychology, psychiatry, anthropology,
sociology, dance notation, ethology, education, and the performing arts. They seek
theories on the expression of emotion and personality, psycho-pathological diagnoses
and treatments, cultural characteristics of movement, developmental motor processes,
psychological implications of gesture and posture, influences of affect and attitude,
comparison to animal behavior, and a slew of other issues. Body movement is a key
area of study, but facial expression, visual behavior, paralanguage, proxemics (use of space
and distance), tactile behavior, and multichannel communication are also considered major
topics in nonverbal communication. We do not attempt to integrate or categorize the
large body of research in nonverbal communication and body movement, since that is
beyond the scope of this work. However, we point the interested reader to the readings
and commentaries in [83], which offer some insight into the study of body movement and
gesture, and several extensive bibliographies on nonverbal communication [23, 43, 44].

Our goal differs from much work in nonverbal communication in that we are not
seeking universal theories on the communicative implications of movements or any
underlying psychological meaning; nor are we seeking comparisons or generalizations
between individuals of different mental states, ages, cultures, or even species. Instead, we
seek a comprehensive description system of human movements that may provide a useful
interface for controlling figure animation. Thus, we are most interested in description
and notation systems for coding body movements. Wallbott surveys nonverbal behavior
measurement and observation systems, noting that they tend to form a continuum between
objective systems making physical measurements and subjective notations that require
human observers making inferences and interpretations [81, 82]. We seek a notation
somewhere between these two extremes. We desire a system that describes qualitative
(not quantitative) aspects of movement, yet is objective enough to prove intuitive to
laypersons and to ensure inter-observer agreement. Many of the systems for measuring
body movement do not meet our needs because they are cumbersome and non-intuitive,
or focus solely on spatial aspects of movement. Similarly, many dance notations are either
particular to a specific style of dance, lack methods of describing movements not used in
dance, or have no means of describing the expressive nature of movement. There are three
major dance notation systems currently in use: Benesh Movement Notation [9], Eshkol-
Wachmann [30], and Labanotation [41]. Benesh notation has been used primarily to notate
ballets and was adopted by the Royal Ballet in London. Eshkol-Wachmann notation uses
a numerical system for recording movement, focusing on joint angles and spatial patterns.
Only notations based on the work of Rudolf Laban seem to have the potential to satisfy
our needs.
The use of Labanotation and its derivative notations has extended beyond the
dance community to become a valuable tool for nonverbal communication research.
Davis evaluates the "logic and consistency" of Effort-Shape Analysis[1] (an offshoot of
Labanotation), promoting its use as a research tool [22]. In [6], Bartenieff and Davis
justify the use of Laban's notation for the study of behavior. Their justifications match
our requirements for a language to describe and control computer figure animation. They
point out that many behavioral studies use subjective, detailed descriptions of movement
and postures, and argue for the need for a systematic, objective notation with a limited
number of descriptive terms. They demonstrate that Effort-Shape provides such a system,
and further, they discuss the hypothesis that there may be neurophysiological support for
the selection of its basic variables. In [24], Davis further details the evolution of Laban's
work, describing his students' extension and application of his ideas, and surveys the early
use of Laban analysis in behavior research.

[1] Effort-Shape later evolved into Laban Movement Analysis.
3.2 Laban Movement Analysis

Rudolf Laban (1879-1958) made significant contributions to the study of movement,
bringing together his experiences as a dancer, choreographer, architect, painter, scientist,
notator, philosopher, and educator. He observed the movement of people performing all
types of tasks: from dancers to factory workers, fencers to people performing cultural
ceremonies, mental patients to managers and company executives. His theories on
movement and its extensions by his students and colleagues have resulted in a rich
vocabulary for describing and analyzing movement. He also developed a movement
notation system, which has evolved and expanded into a number of related and
overlapping variations, including Labanotation, Kinetography Laban, Effort-Shape, and
Laban Movement Analysis. The International Council on Kinetography Laban (ICKL)
was established in 1959 to standardize the notation and to unify some of the differences
between the various forms. Labanotation, which was adopted by the Dance Notation
Bureau [17], focuses on the structural aspects of movement and provides a very exact
notation that allows dancers to reproduce a dance solely from its score [41]. Kinetography
Laban is essentially the same as Labanotation except for minor differences in notation
usage and rules [45]. Effort-Shape developed somewhat independently and with an
emphasis on the qualitative, dynamic aspects of movement [6]. Effort-Shape spawned
the development of Laban Movement Analysis (LMA) [7, 59, 27, 57], which is promoted
by the Laban/Bartenieff Institute for Movement Studies [63]. LMA has evolved into a more
comprehensive system and has been used in dance, drama, nonverbal research, psychology,
anthropology, ergonomics, physical therapy, and many movement-related fields [22, 6, 24].
The variance between the systems is partially due to historical reasons, and all remain
consistent with Laban's original philosophies.

Moore and Yamamoto enumerate five principles that are fundamental to Laban's
theories [59, Chapter 9]:

1. Movement is a process of change.
2. The change is patterned and orderly.
3. Human movement is intentional.
4. The basic elements of human movement may be articulated and studied.
5. Movement must be approached at multiple levels if it is to be properly understood.
These basic principles form the theoretical foundation of LMA.

LMA is divided into four major components: Body, Space, Shape, and Effort.[2]
Together these components constitute a language for describing movement. Body deals
with the parts of the body that are used and the initiation and sequencing of a motion.
Space describes the locale, directions, and paths of a movement. Shape involves the
changing forms that the body makes in space. Effort describes the qualitative aspects
of movement and is often compared to dynamic musical terms such as legato, staccato,
forte, dolce, etc., which give information on how a piece of music should be performed.

[2] Throughout this document, we capitalize key terms defined by LMA to distinguish them from their common English language usage.

Movement is often described in terms of actions, or what one does. However, we are
interested in how one moves, which is precisely what is provided by Effort. Moore explains
that, "While the uses of [S]pace and of the [B]ody reveal the mover's purposes, Laban
believed that the uses of energy, or the dynamics of an action, [Effort] were particularly
evocative of intentions" [59, p. 185]. Maletic goes further to say that "Laban sees Effort
as the inner impulse--a movement sensation, a thought, a feeling or emotion--from which
movement originates; it constitutes the link between mental and physical components
of movement" [57, p. 179]. She continues, saying that "one may conclude that the
concept of Effort unifies the actual, physical, quantitative and measurable properties of
movement with the virtual, perceivable, qualitative, and classifiable qualities of movement
and dance" [57, p. 101]. We note that neither Laban nor anyone else has ever specified
these "quantitative ... properties of movement"; this is precisely the contribution of our
Effort model (Chapter 4).

Effort comprises four motion factors: Space, Weight, Time, and Flow. Each motion
factor is a continuum between two extremes: (1) indulging in the quality and (2) fighting
against the quality.[3] These extreme Effort Elements are seen as basic, "irreducible"
qualities--they are the smallest units of change in an observed movement. The eight
Effort Elements are: Indirect, Direct, Light, Strong, Sustained, Sudden, Free, and Bound.
Table 3.1 illustrates the motion factors, listing their opposing Effort Elements with textual
descriptions and examples.

[3] Some meaning is lost in the translation of Laban's original German texts into English. The German word "antrieb" has been translated into "effort". "Antrieb" has an implication of the existence of a force that drives or propels something. The German term "ballung" has markedly different connotations from its translation of "fighting". Ballung connotes a coming together or clustering; it is a condensing or agglomeration. For instance, the term ballung might be used to describe the coming together and condensing of particles to form storm clouds [66].

We note that we are not trying to provide a tool to facilitate the recording of Effort;
rather, we use the language provided by Effort as an interface for controlling expressive
figure movements. Aside from handling the individual Effort Elements, there are several
other issues that must be handled in order to capture the full extent of expressivity
afforded by Effort. First, human movements span the range along each motion factor
continuum. Also, movements rarely involve only one motion factor; more often they
display combinations of Effort Elements from different motion factors and at varying
intensities. Another issue is phrasing. Certified Movement Analysts (CMAs) trained in
Laban Movement Analysis observe movements as a subtle sequence of Effort changes, or
phrases. For instance, in general terms one might say that a movement was "light".
However, a skilled Effort observer might observe that, "The movement began with a
quickness, then became light, and ended with a sustained indirectness." To exploit the
richness of Effort descriptions, one must allow users to use both general Effort descriptions
as well as detailed Effort phrasing. Finally, we note that individuals tend to display
particular Effort patterns, although they may consciously learn to expand their Effort
repertoire. An Effort-based system should support individualized expression by allowing
users to customize Effort Element settings for different characters. We address these issues
in Chapter 4, where we describe our Effort model.

We believe that the Effort descriptions of Laban Movement Analysis provide an
adequate interface to controlling the expressiveness of computer-animated figures. Effort
seems to generate the space of human movements, with its use justified by numerous
researchers in a variety of disciplines. Further, Effort proves intuitive by providing a
small number of textual descriptors, as compared to detailed, cumbersome notations or
mathematical and physics-based parameters for describing expression.
Space -- attention to the surroundings
    Indirect: flexible, meandering, wandering, multi-focus
        Examples: waving away bugs, slashing through plant growth,
        surveying a crowd of people, scanning a room for misplaced keys
    Direct: single focus, channeled, undeviating
        Examples: pointing to a particular spot, threading a needle,
        describing the exact outline of an object

Weight -- attitude towards the impact of one's movement
    Light: buoyant, delicate, easily overcoming gravity, marked by
    decreasing pressure
        Examples: dabbing paint on a canvas, pulling out a splinter,
        describing the movement of a feather
    Strong: powerful, having an impact, increasing pressure into the
    movement
        Examples: punching, pushing a heavy object, wringing a towel,
        expressing a firmly held opinion

Time -- lack or sense of urgency
    Sustained: lingering, leisurely, indulging in time
        Examples: stretching to yawn, stroking a pet
    Sudden: hurried, urgent
        Examples: swatting a fly, lunging to catch a ball, grabbing a
        child from the path of danger, making a snap decision

Flow -- amount of control and bodily tension
    Free: uncontrolled, abandoned, unable to stop in the course of the
    movement
        Examples: waving wildly, shaking off water, flinging a rock into
        a pond
    Bound: controlled, restrained
        Examples: moving in slow motion, tai chi, fighting back tears,
        carefully carrying a cup of hot liquid

Table 3.1: Motion Factors and Effort Elements
Chapter 4

Effort Model
In order to use Effort for computer animation, we needed a quantitative Effort model.
This chapter discusses the method we used to build an empirical model of Effort using
quantitative, low-level movement parameters. We describe the set of low-level movement
parameters and show how Effort settings are used to compute movement parameter values.
Our current Effort model has been developed with the intent of creating expressive arm
movements, although many aspects of our model are applicable to expressive movements
displayed in other body parts. We elected to focus on arm movements because the arms
are a primary means of bodily expression and communication. Also, many task-oriented
movements involve the use of the arms. Further, the extensive reach and joint angle ranges
in the arms allow for a wide variation in movements as compared, for instance, to head
nods or torso bends. Finally, although a number of researchers have addressed issues of
locomotion and how it is influenced by various emotions, expressive arm movements have
been essentially neglected.
4.1 Translating Effort into Movement Parameters

The translation of the qualitative Effort Elements into quantitative, low-level movement
parameters was the key task in implementing a system using Effort for motion control.
Initially, we tried to deduce movement characteristics from motion capture data. We
collected 3D motion capture data and made a video recording of a Certified Movement
Analyst (CMA) trained in Laban Movement Analysis performing several examples of each
combination of two and three Effort Elements. (Movements displaying a single Effort
Element or a combination of four Effort Elements are rare, and thus were not included
in the recording.) We used 12 electromagnetic sensors: one each on the head, sternum,
stomach, and back; and one on each shoulder, elbow, wrist, and knee. Analysis of the
motion capture data led to only the most obvious conclusions; i.e., Sudden is short in
duration, Sustained is longer in duration, and Strong tends to have large accelerations.
The inability to deduce the more subtle characteristic qualities of Effort arose from several
factors. First, Effort reflects complex inner physiological processes that are related to a
being's inner drive to respond to the physical forces in nature. Thus, Effort is embodied
in the whole person and manifested in all body parts, whereas we were interested solely
in the physical embodiment and visual result of inner attitudes on movement, particularly
that of the arms. Furthermore, numerous other manifestations of Effort, such as visual
attention, changes in muscular tension, facial expressions, and breath patterns, are not
adequately captured by current motion capture technology.
As a result, we turned to other methods for developing an empirical model of Effort.
Initially, we used visual analysis of the playback of the motion capture data of a CMA
performing Effort combinations, rendered as a stick figure formed by connecting the blocks
representing the sensors (Fig. 4.1). This allowed us to focus our attention solely on the
influence of Effort on gross body movement by factoring out other, more subtle
manifestations of Effort (facial expression, visual attention, etc.). We then made repeated
and careful analyses of the video of the Effort performance and used descriptions of Effort
from the literature [7, 27, 57, 59] to deduce some of the tangible qualities of Effort that we
needed to capture. In determining the underlying motion parameters to use to model the
Effort Elements, we experimented with and extended computer animation methods such
as velocity curves and interpolation, as well as traditional animation principles such as
anticipation, overshoot, and squash and stretch [53, 75]. Finally, we performed numerous
experiments by having a CMA [66] mark the Efforts present in video segments and having
her experiment with low-level parameters in our system, EMOTE (Chapter 5). This also
elicited further suggestions on qualities of Effort that would add to our portrayal of the
Effort Elements.
Figure 4.1: Stick Figure Formed By Motion Capture Sensors
4.2 Low-level Movement Parameter Definitions

In selecting the set of low-level movement parameters, we chose a kinematic model over
a dynamics-based or hybrid implementation of Effort. Dynamics-based techniques require
computationally costly calculations, which would compromise our goal of interactivity.
Also, although dynamic simulations generate physically accurate motions, these motions
often lack the nuances and flourishes of expressive human movements. Further, kinematic
models have proven reasonable at portraying physics-based models [87], and our models
for Flow and Weight combinations adequately generate the impression of mass, force,
inertia, and other physical phenomena.

In the following section, we define the set of low-level movement parameters in our
model. These are divided into three categories: those that affect the arm trajectory, those
that affect timing, and flourishes that add to the expressiveness of the movement.
4.2.1 Trajectory Definition

We define the arm trajectory for a given animation with two parameters:

Path curvature determines the straightness or roundness of the path segments
between keypoints. We control the path curvature using the tension parameter
introduced by Kochanek and Bartels for interpolating splines [46]. The tension
parameter ranges from -1 to +1. Decreasing the tension value gives rounder bends
at the keypoints, while increasing the value results in a tighter curve with straighter
segments between points (a sketch appears at the end of this subsection).

The interpolation space defines the space in which the interpolation is performed:
end-effector position, joint angle, or elbow position.
For end-effector interpolation, we use the end-effector position and swivel angle stored
for each keypoint. We define an interpolating spline between the positions at keypoints,
using the tension parameter to determine the curvature of the path. We also interpolate
between swivel angle values with an interpolating spline. For joint angle interpolation,
we compute and store the shoulder and elbow rotations at keypoints. We then generate
an interpolating spline between the elbow angle values at keypoints and perform spherical
linear interpolation to determine the shoulder rotations. For interpolation in elbow position
space, we compute and store the elbow position at keypoints using the posture defined by
the end-effector position and swivel angle. We then define an interpolating spline between
these positions, which are later used to set the shoulder rotations. The elbow rotations for
elbow position interpolation are the same as those computed for end-effector interpolation.
Interpolation in elbow position space gives smooth elbow motions with a less path-driven
movement than interpolation in end-effector position space.
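As an illustration of the spherical linear interpolation step used for the shoulder
rotations, the following sketch assumes rotations represented as unit quaternions; it is a
standard formulation of slerp, not necessarily the exact routine in our system:

    import numpy as np

    def slerp(q0, q1, s):
        # Spherical linear interpolation between unit quaternions q0 and
        # q1 for s in [0, 1]; rotates at constant angular velocity.
        q0 = np.asarray(q0, dtype=float)
        q1 = np.asarray(q1, dtype=float)
        dot = float(np.dot(q0, q1))
        if dot < 0.0:              # negate one endpoint: take the shorter arc
            q1, dot = -q1, -dot
        if dot > 0.9995:           # nearly identical rotations: linear fallback
            q = q0 + s * (q1 - q0)
            return q / np.linalg.norm(q)
        theta = np.arccos(dot)     # angle between the two orientations
        return (np.sin((1.0 - s) * theta) * q0
                + np.sin(s * theta) * q1) / np.sin(theta)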
The Effort settings determine which interpolation space is used. The default
interpolation space is end-effector position. Free movements use angular interpolation
to achieve a less path-driven and less controlled movement. Our empirical studies show
that Indirect movements tend to be driven by the elbow, and thus we interpolate them in
elbow position space.
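The sketch below illustrates how the Kochanek-Bartels tension parameter shapes a path
segment, assuming continuity and bias are held at zero so that only tension varies; the
function names and sample keypoints are ours, not taken from our implementation:

    import numpy as np

    def kb_tangent(p_prev, p_next, tension):
        # Kochanek-Bartels tangent with continuity and bias fixed at zero.
        # tension in [-1, +1]: lower values give rounder bends at the
        # keypoints, higher values give straighter segments between them.
        return 0.5 * (1.0 - tension) * (np.asarray(p_next) - np.asarray(p_prev))

    def hermite(p0, p1, m0, m1, s):
        # Cubic Hermite interpolation for s in [0, 1] between keypoints
        # p0 and p1 with tangents m0 and m1.
        h00 = 2*s**3 - 3*s**2 + 1
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return h00*p0 + h10*m0 + h01*p1 + h11*m1

    # Example: points along the segment from keypoint P1 to P2, with
    # neighbors P0 and P3 supplying the tangents.
    P0, P1, P2, P3 = (np.array(p, dtype=float)
                      for p in ([0, 0, 0], [1, 2, 0], [3, 2, 1], [4, 0, 1]))
    m1 = kb_tangent(P0, P2, tension=0.5)   # tighter, straighter path
    m2 = kb_tangent(P1, P3, tension=0.5)
    curve = [hermite(P1, P2, m1, m2, s) for s in np.linspace(0.0, 1.0, 10)]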
4.2.2 Parameterized Timing Control

We separate timing control from trajectory definition by using a variation of the double
interpolant method introduced by Steketee and Badler [74]. The interpolating splines that
define the trajectory described in the preceding section compute values between keypoints
using an interpolation parameter s that varies from 0 to 1 over the interval from keypoint
i to keypoint i + 1 [46]. Let the trajectory be defined by some function P(s, i). To obtain
points on the trajectory, we need a method for translating in-between frame numbers
into s and i. At each keypoint, s = 0 and i is the number of the current keypoint. For
in-between frames, we define a variable t' in [0, 1] to represent a frame's relative time
between the previous and following keypoints. Let prev equal the frame number of the
previous keypoint, next equal the frame number of the next keypoint, and curr equal the
current frame number. Then,

    t' = \frac{curr - prev}{next - prev}.    (4.1)

We define a timing control function

    Q(t', \vec{I}) = s,    (4.2)

where \vec{I} is a four-dimensional vector whose components specify various timing effects
described further below.
For each in-between frame, we

1. compute t', the frame's normalized time between keypoints, using Equation 4.1,

2. compute the interpolation parameter s using function Q (Equation 4.2), and then

3. input s and the corresponding keypoint number i into function P to compute the
position values (or joint angle values, for angular interpolation) for the given frame.
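A minimal sketch of this per-frame procedure follows, with the trajectory function P and
timing function Q passed in as callables, since their internals are defined elsewhere in this
chapter; the function and argument names are ours:

    def frame_value(curr, prev, next_frame, i, P, Q, I):
        # Evaluate the trajectory at frame curr, which lies between the
        # keypoint frames prev and next_frame (current keypoint index i).
        t_prime = (curr - prev) / (next_frame - prev)  # Equation 4.1
        s = Q(t_prime, I)                              # Equation 4.2
        return P(s, i)   # position (or joint angles) for this frame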
We provide several parameters for timing control:

The number of frames between keypoints is initially set according to the user's
specified key times, but these values get adjusted according to the Effort settings.
The nfmult parameter is a multiplier that increases (nfmult > 1) or decreases
(0 < nfmult < 1) the number of frames between two keypoints.

The components of \vec{I} include inflection time t_i, time exponent texp, start velocity
v_0, and end velocity v_1.
Our parameterized timing control function Q assumes every movement from one goal
keypoint to the next starts and ends at rest. Also, every movement has a constant
acceleration until inflection time t_i, followed by a constant deceleration. We introduce
velocities v_0 at time t_0 and v_1 at time t_1 to achieve the traditional animation effects
of anticipation and overshoot. This model gives us the following velocity function
(Fig. 4.2):
8