Behavioural Simulation in Voxel Space

Hongwen Zhang (SAGE Systems Division, Valmet Automation, Calgary, Alberta, Canada T2W 3X6)
Brian Wyvill (Department of Computer Science, University of Calgary, Calgary, Alberta, Canada T2N 1N4)
Abstract

In this paper we present a framework for behavioural simulation. A uniform voxel space representation is used to implement the environment mechanism of the framework. An example environment is presented where actors with olfactory sensors are able to direct their motions according to a scent of the chemicals in the voxel space based on mass transfer theory. Objects in the environment are scan converted to the voxel representation to facilitate collision detection. An example of using the framework to simulate the behaviour of a group of artificial butterflies is used to demonstrate the ideas of this research.

1 INTRODUCTION

Complex images of a population of objects can be automatically generated by modeling the simple behaviour of each individual object and the interaction between the objects. This approach is termed by Craig W. Reynolds as behavioural animation [12]. Reynolds noticed that scripting the paths of a large number of individual objects, such as a flock of birds, is a very difficult and tedious task in computer animation. He demonstrated that behavioural animation is a more efficient and robust way to accomplish this task. The basic idea of behavioural animation is that the complex paths can be generated by the interaction between the individual objects. Hence only the behaviour of each individual object needs to be modeled. The paths can then be generated by simulating these models. This differs from the more conventional computer animation methods, which model only the shape and physical properties of the characters. When visualized, a simulation of a behaviour model can generate a sequence of dynamic, intentional, and complex pictures. This makes behavioural animation a very attractive high-level modeling method in computer animation.

Generally speaking, a behavioural animation process includes three related steps. They are:

- modeling the behaviour of each individual object,
- simulating a population of these behaviour models,
- visualizing the simulation processes.

In the later literature [5], the modeling and the simulation steps in behavioural animation are referred to as behavioural simulation. Compared with behavioural animation, the concept of behavioural simulation has a larger scope. It not only includes behavioural animation, but also includes those applications that use similar modeling and simulation principles to produce complex and static images, such as [4].

The key problems in behavioural simulation are:

1. Collecting behavioural data from the observation of living systems.
2. Behaviour representation. This concerns how to better represent the observed behaviour in a computer system.
3. Modeling the world view and the associated sensing mechanism for each individual object.
4. Path generation by reasoning on the behaviour representation with the sensed information.
5. The attachment of the details of the motion to the generated path, such as the wing flip of birds, the gaits and body swing of a dancer, etc.

This paper addresses the third problem above. There are two related sub-problems. The first is how to provide a world view for each object. Any solution to this problem has to provide a mechanism to represent the neighborhood of each object. The second sub-problem concerns how an object obtains information from the outside world. In order to solve this problem, a sensing mechanism has to be implemented for all the objects.

Many previous approaches use a database to store geometric information about the objects in the system. This database provides a world view which is shared by the objects. Each object can then abstract information about its environment by querying the database and computing its relationship with all the other objects. This process has to be applied to all the objects in the system. It is an expensive operation which has a complexity of O(n^2), where n is the number of objects. To reduce this complexity, Ned Greene [4] used a voxel space to provide a subdividing strategy for geometric information. A voxel space is a region of three dimensional space partitioned into identical cubes. Each cube is called a voxel. His approach effectively reduces the above complexity to O(n). However, his original method is mainly concerned with using voxel space for the purpose of generating static images of vine-like plants. The plants have very simple geometric shapes. Their behaviour is mainly affected by the geometric information distribution in the neighborhood.

In this research we propose a framework for behavioural simulation. The framework not only employs vision perception but also olfaction perception, which is otherwise very difficult to implement in a conventional approach. By using voxel space, the complexity of the perception process of each object is reduced to linear, within the limitation of a voxel space's resolution.
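The complexity argument above can be made concrete with a small sketch. The following Python fragment (all names are hypothetical; the paper's implementation works in C++ over an explicit voxel grid) buckets actors into a hash of voxel cells, so that each neighborhood query touches only a constant number of cells and all queries together remain linear in the number of actors:

```python
from collections import defaultdict

VOXEL = 1.0  # voxel edge length (hypothetical units)

def voxel_of(p):
    """Map a 3D point to the integer coordinates of its voxel."""
    return tuple(int(c // VOXEL) for c in p)

def build_space(actors):
    """Bucket every actor into the voxel that contains it: O(n)."""
    space = defaultdict(list)
    for name, pos in actors.items():
        space[voxel_of(pos)].append(name)
    return space

def neighbours(space, pos):
    """Collect actors in the 3x3x3 block of voxels around pos.
    Constant work per query, so n queries stay O(n) overall."""
    i, j, k = voxel_of(pos)
    found = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                found.extend(space.get((i + di, j + dj, k + dk), ()))
    return found

actors = {"a": (0.2, 0.2, 0.2), "b": (1.4, 0.1, 0.3), "c": (5.0, 5.0, 5.0)}
space = build_space(actors)
print(sorted(neighbours(space, (0.5, 0.5, 0.5))))  # prints ['a', 'b']
```

The distant actor "c" is never examined, which is the essence of the subdividing strategy.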
2 A FRAMEWORK FOR BEHAVIOURAL SIMULATION

2.1 THE IMPORTANT CONCEPTS
Behaviour simulation assumes that the complex phenomena of a large system are caused by the interactions of its components. Hence only these components need to be explicitly modeled. The global phenomena of the large system can be generated automatically by simulating a group of explicitly modeled components. Each model of such a component is called an actor.

Figure 1: A behavioural system
One loosely defined concept in behavioural simulation is the environment. It is used here to refer to the mechanism that represents all the outside factors for each individual actor. These factors are filtered and transformed by each actor to its internal representation of its neighborhood. The mechanism that an actor uses to filter and transform a certain type of information from the environment is called the sensor of that type of information. In a simulation, a set of actors is coupled by their environment. The set of these actors and the environment is called a behavioural system.

The behaviour of an actor is the internal representation of our observation of the corresponding dynamic object. This internal representation is used in a simulation to map the sensed information and the current state of the actor to a path of state changes. This mapping is referred to as path planning. Actuators are used to execute these state changes by interpreting and/or interpolating the path. In behavioural simulation, these are usually programs or functions that directly manipulate the state parameters of actors. Examples of actuators are the motor controller in [15], or the muscles in [17].
2.2 THE FRAMEWORK

Based on the above discussion, this research establishes a framework for building behavioural simulation applications. The framework is composed of two levels. The first level is the model of behavioural systems (Figure 1). It has a set of actors, an environment shared by all the actors, and a global clock for controlling the animation frequency and synchronizing each actor with the changes in the environment. The second level is the model of actors (Figure 2). An actor senses the environment to detect any significant changes. The path planning mechanism plans the path according to the sensed information and the current state of the actor. The actuators interpret and interpolate the path by reading and setting the current state. Images can be generated by rendering the state.

Figure 2: An actor
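The two-level framework can be sketched as a small event loop. This is a minimal illustration only, not the paper's implementation: the class names, the parcel encoding, and the single qualitative action are all invented for the example.

```python
class Actor:
    """Minimal actor: sense -> plan path -> actuate (hypothetical names)."""
    def __init__(self, position):
        self.state = {"position": position}

    def sense(self, environment):
        # Read the information parcels stored at the actor's voxel.
        return environment.get(tuple(self.state["position"]), [])

    def plan(self, stimuli):
        # Behaviour reasoning: map stimuli + current state to a path.
        # Here, one qualitative step: move +1 in x unless blocked.
        return ["stay"] if "occupied" in stimuli else ["step_x"]

    def actuate(self, path):
        # Actuators interpret the qualitative path by updating state.
        for step in path:
            if step == "step_x":
                self.state["position"][0] += 1

class BehaviouralSystem:
    """A set of actors coupled by a shared environment and a global clock."""
    def __init__(self, actors, environment):
        self.actors, self.environment, self.clock = actors, environment, 0

    def tick(self):
        self.clock += 1
        for a in self.actors:
            a.actuate(a.plan(a.sense(self.environment)))

env = {(1, 0, 0): ["occupied"]}   # a parcel marking voxel (1,0,0) as occupied
system = BehaviouralSystem([Actor([0, 0, 0])], env)
system.tick()   # free voxel: the actor steps to x = 1
system.tick()   # voxel (1,0,0) is occupied: the actor stays
print(system.actors[0].state["position"])  # prints [1, 0, 0]
```

The global clock drives every actor through one sense/plan/actuate cycle per frame, which is the synchronization role it plays in Figure 1.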
The behaviour reasoning mechanism can be implemented by using heuristic rules [3], a network of nodes [20, 16], a genetic algorithm [10], or a hybrid approach using both a genetic algorithm and a network of nodes [14]. The path can be represented as qualitative terms or a set of parameters to drive the actuators. The implementation of the actuators can range from simply interpreting the qualitative terms, to the interpolation of the actor's state with key-framing or physically-based modeling techniques [15].

The environment and the sensors are implemented in this research with a voxel space representation.
3 VOXEL SPACE AS THE ENVIRONMENT

The central idea of using a voxel space as the environment is to distribute the information about each actor in the voxels. This way, an actor can get information about any changes in its neighborhood, such as an approaching obstacle, a food source, etc., by sensing the neighboring voxels. An actor can also let its activity be known to others by leaving its "footprint" in the voxel space. In our approach, each voxel can hold a list of information parcels. Each information parcel corresponds to a type of information. The types of information currently being modeled are geometric information and olfactory information. Each information parcel is represented as the following triplet:

    (t, i, ids)

where:

t - the information type. Currently, it is either geometry or olfaction.

i - the intensity of the information. For geometric information, i is always 0 or 1, which indicates whether the voxel is empty or occupied. For olfactory information, i is the combined scent intensity of the actors that contribute their scent to this voxel.

ids - a set of actor identifiers. Each actor in this set has contributed to the intensity of the information in this parcel.

When sensing the neighborhood, an actor only needs to search the list of parcels associated with each voxel.

3.1 GEOMETRIC INFORMATION IN VOXEL SPACE

A set of elegant algorithms for distributing geometric primitives such as three dimensional lines, polygons, polyhedra, circles, cylinders, etc. was devised by Arie Kaufman [7, 6]. These algorithms are used in this research to scan convert geometric primitives.

Complex shapes can be constructed hierarchically by using these primitives. The OpenInventor [19] scene graph is used as the data structure to represent a hierarchical geometric shape. A scene graph consists of one or more nodes, each of which represents a geometry, property (e.g. material), transformation, or grouping object. Hierarchical scenes are created by adding nodes as children of grouping nodes, resulting in a directed acyclic graph. Figure 3 shows a glass temple constructed with an OpenInventor scene graph.

Figure 3: A glass temple defined in OpenInventor

The algorithm for scan converting a scene graph into the voxel space is described as follows:

    scanConvert(GroupNode gNode, Transformation aMatrix) {
        for each child x of gNode {
            m = the transformation of x under gNode;
            // Calculate the combined transformation
            m = m * aMatrix;
            switch (the node type of x) {
                case GroupNode:
                    scanConvert(x, m);
                    break;
                case ShapePrimitive:
                    x.transform(m);
                    // Use Kaufman's algorithms
                    x.scanConvert();
                    break;
                default:
                    break;
            } // end switch
        } // end for loop
    } // end scanConvert
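A runnable analogue of the algorithm above may help. The sketch below substitutes translation-only offsets for the 4x4 transformation matrices and an axis-aligned box for Kaufman's shape primitives, and records geometry parcels of the form (t, i, ids) in a dictionary keyed by voxel coordinates; all of these simplifications are ours, not the paper's:

```python
# Voxel space: maps (i, j, k) -> list of information parcels {t, i, ids}.
voxels = {}

def occupy(cell, actor_id):
    """Record a geometry parcel: type, intensity 1, contributing ids."""
    parcels = voxels.setdefault(cell, [])
    for p in parcels:
        if p["t"] == "geometry":
            p["ids"].add(actor_id)
            return
    parcels.append({"t": "geometry", "i": 1, "ids": {actor_id}})

def scan_convert(node, offset=(0, 0, 0)):
    """Walk the scene graph, composing transforms down to the leaves."""
    if node["type"] == "group":
        for child in node["children"]:
            # Compose the child's transform with the accumulated one.
            o = tuple(a + b for a, b in zip(offset, child.get("offset", (0, 0, 0))))
            scan_convert(child, o)
    elif node["type"] == "box":
        x, y, z = offset
        w, h, d = node["size"]
        for i in range(x, x + w):
            for j in range(y, y + h):
                for k in range(z, z + d):
                    occupy((i, j, k), node["id"])

scene = {"type": "group", "children": [
    {"type": "box", "id": "floor", "size": (4, 1, 4)},
    {"type": "group", "offset": (1, 1, 1), "children": [
        {"type": "box", "id": "pillar", "size": (1, 2, 1)}]}]}
scan_convert(scene)
print(len(voxels))  # prints 18 (16 floor voxels + 2 pillar voxels)
```

The recursion mirrors the pseudocode: group nodes compose transforms and recurse, shape primitives are voxelized under the accumulated transform.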
Figure 4 is the scan converted glass temple in the voxel space with a 75 x 75 x 75 resolution.

Figure 4: The glass temple in voxel space

3.2 OLFACTORY INFORMATION IN VOXEL SPACE

Olfactory information associated with searching for food, finding a mate, or avoiding a predator plays an important role in the activities of many animals. The following is a vivid depiction of how the African hyaena uses olfactory information to guide itself through the darkness [1]:

"With a sniff a hyaena can perceive not only here and now but, simultaneously, a whole series of events stretching back into the past. .... The pasted patches of grass must shine in the distance like lighthouses and the pack's trails, perfumed by their paws, must stretch ahead like lines of reflector studs down the middle of a motorway."

Previous approaches in behaviour simulation applications have not modeled the effect of olfactory information on the actors.

According to physics [13], given a chemical source, its scent intensity at any point in three dimensional space is determined by the molar concentration of this chemical at the point. The molar concentration of a chemical A is defined as the number of moles of A per unit volume. We assume that the molar concentration of any chemical substance within a voxel is constant, whilst different voxels may have different molar concentrations. If the air is static, i.e. no wind, the difference of the molar concentration between the neighboring voxels will suggest the direction of the chemical source. Hence, an actor can reach the source by "sniffing" a set of voxels.

Now the question is how to calculate the molar concentration in each voxel. The answer can be found in mass transfer theory [8]. The mass balance of a chemical A in a voxel is modeled as:

    dC_A/dt + (del)N_A - R_A = 0                                   (1)

where:

(del) - the differential operator d/dx + d/dy + d/dz.

N_A - the molar flux of A at time t. The molar flux of A is defined as the number of moles of A that pass through a unit area per unit time.

C_A - the molar concentration of A in the voxel.

R_A - the rate of reproduction of A per unit volume. It is non-zero in the voxels that contain the sources of A and zero elsewhere.

If we only consider the diffusion in static air, then the molar flux N_A is governed by Fick's law of diffusion:

    N_A = (-D dC_A/dx, -D dC_A/dy, -D dC_A/dz)                     (2)

where D is the mass diffusive coefficient of A in the air. It is a constant if the temperature and pressure are all constant.

Solving equation (1) and equation (2) for C_A with the Forward Time Center Space (FTCS) method [11], we have:

    C_A(i,j,k)^(t+1) = C_A(i,j,k)^t + (Delta t) R_A(i,j,k)^t + (D (Delta t) / s^2) F_A(i,j,k)^t

where:

C_A(i,j,k)^t - the molar concentration of A in the voxel (i,j,k) at time t.

R_A(i,j,k)^t - the reproduction rate of A in the voxel (i,j,k) at time t.

s - the size of each voxel.

F_A(i,j,k)^t - the net flux of A in the voxel (i,j,k).

The net flux of A can be expressed as

    F_A(i,j,k)^t = Sum_(m,n,p) C_A(i+m,j+n,k+p)^t - 6 C_A(i,j,k)^t

where (m,n,p) ranges over the six face-neighbour offsets (+-1,0,0), (0,+-1,0), and (0,0,+-1).

Figure 5 illustrates the scent distribution of a point chemical source in the voxel space.

Figure 5: The scent distribution of a point source (see colour slide)

In a complex scene, geometric information can interfere with olfactory information, e.g. a scent will spread as a gas around an object. To approximate this effect, the net flux is modified into:

    F_A(i,j,k)^t = Sum_(m,n,p) T_(i+m,j+n,k+p) C_A(i+m,j+n,k+p)^t - Sum_(m,n,p) T_(i+m,j+n,k+p) C_A(i,j,k)^t

where T_(i,j,k) is in {0,1}. It is 1 if the voxel (i,j,k) is not occupied. Otherwise, it is 0.

Figure 6 shows the result of putting the same point source in the center of a box with an open gap on one end. Figure 7 shows only the scent distribution of Figure 6. Note that it is shaped by the box.

Figure 6: Interfering scent distribution with geometric information
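The FTCS update and the occupancy-gated flux can be exercised on a small grid. The following Python sketch is an illustration under assumed parameters (D, the time step, and s are chosen so that D*dt/s^2 stays well below the explicit-scheme stability limit); treating out-of-grid cells as occupied, i.e. a closed boundary, is our assumption:

```python
import itertools

NBRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(C, R, occupied, D=1.0, dt=0.1, s=1.0):
    """One FTCS update of the molar concentration field C.
    occupied[c] == True blocks transfer through voxel c (the T factor)."""
    out = {}
    for c in C:
        if occupied.get(c):
            out[c] = 0.0          # an occupied voxel holds no scent
            continue
        flux = 0.0
        for m, n, p in NBRS:
            nb = (c[0] + m, c[1] + n, c[2] + p)
            T = 0.0 if occupied.get(nb, True) else 1.0  # closed boundary
            # inflow from the neighbour minus outflow towards it, gated by T
            flux += T * C.get(nb, 0.0) - T * C[c]
        out[c] = C[c] + dt * R.get(c, 0.0) + (D * dt / s ** 2) * flux
    return out

size = 5
cells = list(itertools.product(range(size), repeat=3))
C = {c: 0.0 for c in cells}
R = {(2, 2, 2): 1.0}                 # a point chemical source
occupied = {c: False for c in cells}
occupied[(3, 2, 2)] = True           # a wall voxel blocks the scent
for _ in range(50):
    C = step(C, R, occupied)
print(C[(2, 2, 2)] > C[(1, 2, 2)] > 0.0, C[(3, 2, 2)] == 0.0)  # prints True True
```

The concentration falls off with distance from the source, which is exactly the gradient an olfactory sensor can follow, while the occupied voxel never carries scent.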
Figure 7: Olfactory information distribution is shaped by geometric information

4 THE SENSORS

A set of sensors is implemented. An actor can use these sensors to get stimuli from its neighboring voxels. This is achieved by scanning the information parcels in those voxels. The action of the actor can then be planned. Hence both the principles of the stimulus/response paradigm and of spatial locality are enforced.

4.1 THE VISION SENSOR

The vision sensors are used to get images of the actor's neighborhood and to extract abstract information, e.g. collision, found food, etc., from these images with some simple perceptual processes. Each vision sensor is constrained by its view angle and view range.

The image obtained by a vision sensor is a 2D latitude-longitude array. A ray is cast from the sensor center towards each element of this array. This ray casting can be accomplished by a simple 3D version of the DDA algorithm [2]. If the ray passes through an occupied voxel within the view range, the reference of this voxel will be recorded in the corresponding element of the latitude-longitude image. Otherwise, a null value is assigned to the element. Figure 8 illustrates a vision sensor.
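A minimal sketch of the latitude-longitude image may clarify the sensor. For brevity it marches each ray in fixed steps rather than using the exact 3D DDA of [2], and the grid resolution and step size are arbitrary choices for the example:

```python
import math

def vision_image(origin, occupied, view_range, n_lat=3, n_lon=6):
    """Build the 2D latitude-longitude image of a vision sensor.
    Fixed-step ray marching stands in for the 3D DDA of [2]."""
    image = [[None] * n_lon for _ in range(n_lat)]
    for a in range(n_lat):
        lat = math.pi * (a + 0.5) / n_lat - math.pi / 2   # -90..90 degrees
        for b in range(n_lon):
            lon = 2 * math.pi * b / n_lon
            d = (math.cos(lat) * math.cos(lon),
                 math.cos(lat) * math.sin(lon),
                 math.sin(lat))
            t = 0.0
            while t <= view_range:
                cell = tuple(int(math.floor(origin[i] + t * d[i]))
                             for i in range(3))
                if cell in occupied:
                    image[a][b] = cell      # reference of the hit voxel
                    break
                t += 0.25                   # march step (coarser than DDA)
            # otherwise the element keeps its null value
    return image

occupied = {(3, 0, 0)}                      # one obstacle along +x
img = vision_image((0.5, 0.5, 0.5), occupied, view_range=6.0)
```

Only the element whose ray points down +x records the obstacle's voxel; every other element stays null, which is what the collision-avoidance process scans for.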
Figure 8: The vision sensor

The perceptual processes are task oriented. For example, if the task is to avoid collision, the associated process will scan the latitude-longitude array to look for an area with enough null elements to allow the whole body of the actor to pass through. This is actually a very easy task in the simulated environment of a voxel space, since all the information contained in a voxel, e.g. depth, who occupies it, etc., is readily accessible through the elements of the latitude-longitude array.

4.2 THE TACTILE SENSOR

The tactile sensor simply tests whether the voxel at the sensor center is occupied or not.

4.3 THE OLFACTORY SENSOR

The olfactory sensor is a simulated nose. It is composed of a set of smell-detecting cells with one cell at the sensor center. Each cell can get the scent intensity of the associated voxel. The intensity gradient can be approximated with a vector which points from the sensor center to the cell that detects the biggest intensity value. Figure 9 illustrates an olfactory sensor in a 2D voxel space. Sensors that detect the gradient of a chemical concentration can be found in living systems. For example, Caenorhabditis elegans, a type of nematode, can sense the concentration gradient of the chemical released by its food and move along the gradient [18].

Figure 9: The olfactory sensor in a 2D voxel space

5 AN EXAMPLE: THE JOURNEY OF THE DIGIFLIES

As an example, a set of artificial butterflies called digiflies is simulated. Each digifly is equipped with a vision sensor, an olfactory sensor, and a tactile sensor. The behaviour of the digifly is defined by the following rules:

avoid collision - It should avoid flying into the other digiflies or any obstacles. There is one exception to this rule: a hungry digifly can collide with its food, i.e. the flowers.

seek food - The energy level of each digifly drops with the passage of time. When this level is below a threshold, the digifly will begin to seek food. If it can "see" a flower, it will fly towards the flower. If it can "smell" the food, it will fly in the direction of the food.

eat food - If the tactile sensor detects that the digifly touches a flower, the digifly will recharge itself, i.e. set its energy level to a maximum value. It will then fly away.

fly towards other digiflies - When the digifly does not detect any collision and it is not hungry or can not find any food, it will fly towards the center of the other digiflies that are detected by its vision sensor.

random flight - When none of the above rules are triggered, a lonely digifly will randomly change its heading direction to wander around.

The behavioural simulation framework developed in this research allows the above rules to be formally specified with symbolic expressions of CCS, the Calculus of Communicating Systems [9]. In the simulation, these rules are triggered according to the local circumstance of each digifly.

A scene containing the glass temple in Figure 3 and a flower in a glass box is used. There is a gap on the top side of the box. When a set of digiflies is released into the scene, one should expect that those digiflies will flock towards the box, fly through the gap, eat the food, and fly away. This is exactly what happened in the simulation. Figure 10 shows the trajectory of two digiflies in the scene. Figure 11 is a snapshot of a group of sixty digiflies.

The hardware used in this research is Silicon Graphics Indigo workstations. OpenInventor is used as the graphical system. The visualization of the scent distribution is achieved with SGI Explorer. The program is written in C++.
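The rule set above amounts to a priority-ordered arbitration over the sensed stimuli. A compact sketch follows (the ordering, the percept names, and the energy fields are our own illustrative choices; the paper specifies the rules formally in CCS [9]):

```python
def plan(digifly, percept):
    """Pick the first applicable rule, in an assumed priority order."""
    hungry = digifly["energy"] < digifly["threshold"]
    # Collision has top priority, except a hungry digifly may hit a flower.
    if percept["collision_ahead"] and not (hungry and percept["touching_flower"]):
        return "avoid collision"
    if percept["touching_flower"] and hungry:
        return "eat food"
    if hungry and (percept["sees_flower"] or percept["smells_food"]):
        return "seek food"
    if percept["sees_digiflies"]:
        return "fly towards other digiflies"
    return "random flight"

fly = {"energy": 2.0, "threshold": 5.0}      # a hungry digifly
percept = {"collision_ahead": False, "touching_flower": False,
           "sees_flower": True, "smells_food": False,
           "sees_digiflies": True}
print(plan(fly, percept))  # prints seek food
```

Because the rules are tested in order, each tick yields exactly one active behaviour, driven entirely by the local circumstance the sensors report.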
6 CONCLUSION

We have presented a unified framework which endorses the important concepts in behaviour simulation. We have shown how this framework, which uses a voxel space representation, can be applied to a group of actors exhibiting emergent behaviour.

As demonstrated in this research, there are several advantages to our approach. Firstly, it is intuitive. A living system knows it touches an obstacle by sensing that the touching point is occupied by the obstacle, not by solving equations. Hence, it is easy to use in behavioural simulations that rely on empirical knowledge. Secondly, the data structure of the voxel space is very simple. Any geometric model can be converted into the voxel representation. This implies that the actors do not have to have the same geometric data structure in order for them to interact with each other. Therefore, the voxel representation can serve as a common ground for different parties to work collectively to build a complex scene animation.

These advantages must be traded against the loss of precision, since voxel space modeling operates in discrete space. It is possible to generate undesired animations. For example, the feet of a walking animal may sometimes penetrate slightly into the ground or float above the ground. Our future work will look at adaptive voxel subdivision and other possible approaches to ameliorating this problem, as well as optimizing the existing algorithms to enable the use of actors with more complicated structures and behaviour.

Figure 10: The trajectory of two digiflies

Acknowledgments

The authors would like to thank the many students and faculty members who have contributed towards the GraphicsJungle project at the University of Calgary. This work is partially supported by the Natural Sciences and Engineering Research Council of Canada. Hongwen Zhang would like to thank the SAGE Systems Division of Valmet Automation for sponsoring him to present this paper at Computer Animation '97.
Figure 11: A snapshot of sixty digiflies

References

[1] David Attenborough. The Trials of Life: A Natural History of Animal Behavior. Little, Brown and Company, 1990.

[2] John Cleary and Geoff Wyvill. Analysis of an Algorithm for Fast Ray Tracing Using Uniform Space Subdivision. The Visual Computer, 4(2), 1988.

[3] Bill Coderre. Modeling behavior in petworld. In Artificial Life, 1989.

[4] Ned Greene. Voxel space automata: Modeling with stochastic growth processes in voxel space. ACM SIGGRAPH '89, 23(3):175-184, 1989.

[5] D. R. Haumann and R. E. Parent. The behavioral test-bed: Obtaining complex behavior from simple rules. The Visual Computer, 4(6):332-347, 1988.

[6] A. Kaufman. Efficient algorithms for 3d scan-conversion of parametric curves, surfaces, and volumes. ACM SIGGRAPH '87, 21(4):171-179, 1987.

[7] A. Kaufman and E. Shimony. 3d scan conversion algorithms for voxel based graphics. In ACM Workshop on Interactive 3D Graphics, pages 45-75, NC, October 1986.

[8] A. F. Mills. Basic Heat and Mass Transfer. Richard D. Irwin, 1995.

[9] Robin Milner. Communication and Concurrency. Prentice Hall, 1st edition, 1989.

[10] J. Thomas Ngo and Joe Marks. Spacetime constraints revisited. ACM Computer Graphics Proceedings, pages 343-350, 1993.

[11] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 1989.

[12] Craig W. Reynolds. Flocks, herds and schools. ACM Computer Graphics, 21(4):25-34, July 1987.

[13] W. M. Rohsenow and H. Choi. Heat, Mass, and Momentum Transfer. Prentice-Hall, 1961.

[14] Karl Sims. Evolving virtual creatures. In SIGGRAPH '94, pages 15-22. ACM, 1994.

[15] X. Tu and D. Terzopoulos. Artificial fishes: Physics, locomotion, perception, behavior. In SIGGRAPH '94, pages 43-50. ACM, 1994.

[16] Michiel van de Panne and Eugene Fiume. Sensor-actuator networks. ACM Computer Graphics, pages 335-342, 1993.

[17] Michiel van de Panne, Ryan Kim, and Eugene Fiume. Virtual wind-up toys for animation. In Proceedings of Graphics Interface '94, pages 208-215, 1993.

[18] S. Ward. Chemotaxis by the nematode Caenorhabditis elegans: identification of attractants and analysis of the response by use of mutants. In Proceedings of the National Academy of Sciences U.S.A., 70:817-821, 1973.

[19] Josie Wernecke. The Inventor Mentor. Addison-Wesley Publishing Company, 1994.

[20] Jane Wilhelms and Robert Skinner. A notion for interactive behavioral animation control. IEEE Computer Graphics and Applications, 10(3):14-22, May 1990.