 – 



.

.      .      .                   .  . .. ..                  

. 

  

. . .. ...

 .   .  .      .. 



 

  .                  .       . . ...



  .          .         .        .



       .          . ..



 .                  

...     .             .....                                . .

.

                          

.

  ..     .         . 



 .   .              



 . 



            .  . 

. ..  .  .

 .           



       .               .        .     .            .   

       .         .      .         . 

Sequential Engineering is often called the "across the wall" method.

The display produces images as "points or dots" on the screen; therefore, it is called a point plotting device.

Figure: Point P1 (40, 20) rotated about the origin through an angle θ

  UNIT II GEOMETRIC MODELLING

PPT FORMAT

UNIT-III

VISUAL REALISM

VISUAL REALISM:

Figure: Flowchart of Visual Realism

Visibility

• Polygonal model - a collection of vertices

Figure: Polygon Model: Vertices Point

VISIBILITY (2)

• Connecting dots (wireframe) helps
• But it is still ambiguous

Figure: Wireframe Model of Polygon

VISIBILITY (3)

• Visibility helps to resolve this ambiguity

VISIBILITY (4)

• Include shading for a better sense of 3D

Figure: 3D Model

COMPUTER VISIBILITY:

• Realism: Occlusions make scenes look more realistic
• Less ambiguity: Visibility computations provide us with depth perception
• Efficiency: Rendering is time consuming, so we should not have to waste time on objects we cannot see

VISIBILITY + SHADING

• Visibility is only one aspect of realistic rendering
• Visibility + shading → an even better sense of 3D

LINE:

Straight line segments are used a great deal in computer generated pictures. The following criteria have been stipulated for line drawing displays:

• Lines should appear straight
• Lines should terminate accurately
• Lines should have constant density
• Line density should be independent of length and angle
• Lines should be drawn rapidly

The process of turning on the pixels for a line segment is called vector generation. If the end points of the line segment are known, there are several schemes for selecting the pixels between the end pixels.

SURFACE:

All physical objects are 3-dimensional. In a number of cases, it is sufficient to describe the boundary of a solid object in order to specify its shape without ambiguity. This fact is illustrated in Fig. The boundary is a collection of faces forming a closed surface. The space is divided into two parts by the boundary - one part containing the points that lie inside and forming the object and the other the environment in which the object is placed. The boundary of a solid object may consist of surfaces which are bounded by straight lines and curves, either singly or in combination.

Figure: Surface Modelling

SOLID REMOVAL

Many applications require the visibility determination for lines, points, edges, surfaces or volumes with reference to an observer.

Applications:

• Realistic representation of solids
• Simulations where the object dynamically changes shape
• Computer animations

Real-time applications require faster algorithms that can cope with the determination of visible surfaces. However, transparency, texture, and reflections, which are commonly used, are not part of the visibility algorithms; they form part of the rendering process, which presents a picture or scene realistically. All visibility algorithms involve sorting.

Fundamental Assumption:

The farther an object is from the view point, the more likely it is to be obscured by objects nearer to the view point. The efficiency of a visible-surface algorithm depends significantly on the efficiency of the sorting algorithm used.

Coherence:

Coherence is the tendency for the characteristics of a scene to be locally constant (homogeneous). This property is used to increase the efficiency of the sort.

CLASSIFICATION OF ALGORITHMS

Simple Visibility Algorithm

The classification is based on the coordinate system used.

a. Object Space algorithms:

They are implemented in the physical coordinate system in which the object is described.

Very precise – used in engineering applications. The result can be enlarged satisfactorily.

Computation: every object in a scene is compared with every other object, so the complexity grows as n².

E.g. Roberts' algorithm, Warnock's algorithm.

b. Image Space algorithms

• The algorithm is implemented in screen coordinates. Therefore the total computation is nN, where n is the number of objects in the scene and N is the total number of pixels (e.g. 1280 x 1024 ≈ 10^6).
• More efficient – takes advantage of coherence.

c. List priority algorithms: these are implemented partly in object space (a) and partly in image space (b).

Floating Horizon Algorithm

Most commonly used for the representation of 3D surfaces of the form

F(x, y, z) = 0

This category of algorithms is usually implemented in image space.

Fundamental Concept:

The technique is to convert the 3D problem to an equivalent 2D problem by intersecting the 3D surface with a series of parallel cutting planes at constant values of the coordinate in the view direction, which could be x, y or z. The function F(x, y, z) = 0 is then reduced to a planar curve in each of these parallel planes, of the form

y = f (x, z)

It is assumed that the curves are single valued functions of independent variables.

• The result is projected on to the z = 0 plane

• The algorithm first sorts the z = constant planes in increasing distance from the viewpoint beginning from z=0 (closest to viewpoint)

Upper Horizon:

If, at any given value of x, the y value of the curve in the current plane is larger than the y value of every previous curve at that x, then the curve is visible; otherwise it is hidden.

Lower Horizon:

If, at any given value of x, the y value of the curve in the current plane is larger than the maximum y value or smaller than the minimum y value of all previous curves at that x, then the curve is visible; otherwise it is hidden.

Functional interpolation:

The algorithm assumes the value of y is available at every x location. However, if it is not available (crossing of curves), a linear interpolation of known values is calculated to fill the upper and lower floating horizon arrays.

Aliasing:

If the function contains very narrow regions (small increments of x) then the algorithm yields incorrect results. The effect is generally caused by computing the function for visibility at a resolution less than that of image space resolution. The problem is overcome by taking more points to evaluate the function in narrow regions.
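As a minimal sketch of the procedure, assuming the surface is given as a single-valued function y = f(x, z), that the z = constant planes are supplied nearest-first, and with illustrative names and sampling resolution:

import math

def floating_horizon(f, x_range, z_planes, nx=400):
    # Minimal floating-horizon sketch for a surface y = f(x, z).
    # z_planes is assumed sorted from nearest to farthest; nx is the number of
    # screen columns. Returns the visible samples as (column, y, plane) tuples.
    x0, x1 = x_range
    upper = [-math.inf] * nx      # highest y seen so far in each column
    lower = [math.inf] * nx       # lowest y seen so far in each column
    visible = []
    for k, z in enumerate(z_planes):
        for i in range(nx):
            x = x0 + (x1 - x0) * i / (nx - 1)
            y = f(x, z)
            # Visible where the curve rises above the upper horizon or
            # falls below the lower horizon.
            if y > upper[i] or y < lower[i]:
                visible.append((i, y, k))
            upper[i] = max(upper[i], y)
            lower[i] = min(lower[i], y)
    return visible

# Example: a ripple surface sampled on 20 cutting planes, nearest first.
pts = floating_horizon(lambda x, z: math.sin(math.hypot(x, z)) / (1.0 + 0.1 * z),
                       x_range=(-10.0, 10.0),
                       z_planes=[0.5 * k for k in range(20)])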

ROBERTS ALGORITHM

Roberts' (1964) algorithm is the oldest known solution to the visible surface problem. It is implemented in object space and is mathematically an elegant algorithm. The algorithm eliminates the edges or planes from each volume that are hidden by the volume itself (self-hidden). Each remaining edge (plane) of each volume is then compared with each of the remaining volumes to determine what portion or portions, if any, are hidden by those volumes. Thus the computational requirements of the Roberts algorithm theoretically increase as n², where n is the total number of objects. An implementation using a preliminary z priority sort and simple boxing or min-max tests exhibits near-linear growth with the number of objects n.

WARNOCK ALGORITHM

It is based on a hypothesis that tells how the human eye-brain combination processes information contained in a scene.

Hypothesis:

Very little time or effort is spent on areas of a scene that contain little information; the majority of the time and effort is spent on areas of high information content. A table top with only one object carries little colour and texture information and takes minimal time to perceive. This algorithm can be implemented both in image and object space. The algorithm relies on a divide-and-conquer strategy.

Warnock and its variants take advantage of homogeneity in the areas of the display (scene), which is known as area coherence. The algorithm considers a window (area) in image (or object) space and seeks to determine whether the window is empty or its contents are simple enough to resolve. If not, it divides the window until either the scene is simple enough or the preset resolution is reached. If image-space results are not satisfactory, the algorithm shifts to object space for better accuracy.
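The recursion itself can be sketched compactly; in the sketch below, the window classification, the closest-polygon lookup at a pixel, and the drawing step are assumed helper functions (they carry most of the geometric detail and are not spelled out in the text):

def warnock(window, polygons, classify, closest_at, draw, min_size=1):
    # Bare-bones Warnock recursion. Assumed (hypothetical) helpers:
    #   classify(window, polygons) -> ("empty" | "simple" | "complex", polygon or None)
    #   closest_at(point, polygons) -> polygon nearest the viewer at that point
    #   draw(window, polygon)       -> fill the window with that polygon's colour
    x, y, w, h = window
    kind, poly = classify(window, polygons)
    if kind in ("empty", "simple"):
        draw(window, poly)                          # trivial window: resolve directly
        return
    if w <= min_size and h <= min_size:             # reached the preset resolution
        draw(window, closest_at((x, y), polygons))
        return
    hw, hh = max(w // 2, 1), max(h // 2, 1)         # otherwise split into four sub-windows
    for sx, sy, sw, sh in ((x, y, hw, hh), (x + hw, y, w - hw, hh),
                           (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)):
        if sw > 0 and sh > 0:
            warnock((sx, sy, sw, sh), polygons, classify, closest_at, draw, min_size)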

WARNOCK ALGORITHM

Hidden surface problem of a simple scene

A scene with more details

A quad tree data structure is used to store the data. The subdivision process gives a tree structure for the sub-windows.

Figure: Warnock Algorithm

Hidden surface removal (HSR) vs. visible surface determination (VSD)

HSR: figure out what cannot be seen

VSD: figure out what can be seen

Image space vs. object space algorithms

Image-space algorithms work per pixel (image precision) and determine the colour of each pixel from the visible surface, e.g., ray tracing, z-buffering.

Object-space algorithms work per polygon or per object in object space (object precision), e.g., back-to-front rendering (depth sort).

In many cases a hybrid of the two is used.

IMAGE SPACE

For each pixel in the image, determine the object closest to the viewer that is pierced by the projector through the pixel, and draw the pixel in the appropriate colour.

• Dependent on image resolution
• Simpler, possibly cheaper
• Aliasing is a factor

OBJECT SPACE

For each object in the world, determine those parts of the object whose view is unobstructed by other parts of it or by other objects, and draw those parts in the appropriate colour.

• Independent of image resolution
• More complex 3D interactions
• Can be more expensive, e.g., subdivision
• Designed first for vector displays

EFFICIENT VISIBILITY TECHNIQUES

• Coherence
• Use of projection normalization
• Bounding boxes or extents
• Back-face culling
• Spatial partitioning or spatial subdivision and hierarchy

VISIBILITY ALGORITHMS

Image-space algorithms using coherence:

• z-buffer algorithm – use of depth coherence
• Scanline algorithm – use of scanline and depth coherence
• Warnock's (area subdivision) algorithm – use of area coherence and spatial subdivision

Object-space algorithms:

• Depth sort – use of bounding volumes or extents
• Binary space partitioning (BSP) algorithm

Z-BUFFER ALGORITHM

One of the simplest and most widely used algorithms. Hardware and OpenGL implementations are common:

glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH);

glEnable(GL_DEPTH_TEST);

• Use a depth buffer to store the depth of the closest object encountered so far at each position (x, y) on the screen – the depth buffer is part of the frame buffer.
• Works in image space and at image precision.

The Z-Buffer Algorithm

Initialize all pixels to background colour, depth to 0 (representing depth of back clipping plane)

For each polygon {
    For each pixel in the polygon's projection {
        pz = depth of projected point
        If (pz >= stored depth at that pixel) {
            Store new depth pz
            Draw pixel
        }
    }
}
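A minimal sketch of this loop in Python follows, keeping the text's convention that depth 0 is the back clipping plane and larger values are closer; the polygon interface used here (pixels(), depth_at(), colour) is an assumption made for the example:

def zbuffer_render(width, height, polygons, background=(0, 0, 0)):
    # Minimal sketch of the pseudocode above. Each polygon is assumed to provide:
    #   polygon.pixels()        -> the (x, y) positions covered by its projection
    #   polygon.depth_at(x, y)  -> depth pz at that pixel (larger = closer here)
    #   polygon.colour          -> an (r, g, b) tuple
    # These helpers are illustrative assumptions, not part of the text.
    frame = [[background] * width for _ in range(height)]
    depth = [[0.0] * width for _ in range(height)]   # 0 = back clipping plane
    for poly in polygons:
        for (x, y) in poly.pixels():
            pz = poly.depth_at(x, y)
            if pz >= depth[y][x]:        # at least as close as what is stored
                depth[y][x] = pz
                frame[y][x] = poly.colour
    return frame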

Z-Buffer: Exploiting Depth Coherence

• Polygons are scan-converted as before, but now we also need to compute depth values z
• Assume planar polygons, with plane equation Ax + By + Cz + D = 0

On scanline y, incrementing x by one pixel changes the depth by a constant amount: z(x + 1, y) = z(x, y) - A/C.

From one scanline (y) to the next (y + 1), the left polygon edge point moves from (x, y) to (x + 1/m, y + 1), where m is the slope of the polygon edge.

Z-Buffer: Bilinear Interpolation

If the polygon is non-planar, or not analytically defined, it is possible to approximate using bilinear interpolation.

This works not just for depth: the same interpolation can be used for colour (Gouraud shading), surface normals, or texture coordinates. Subdivision or refinement may be necessary if the error is too great.
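A small sketch of the interpolation itself, assuming values are given at the four corners of a cell in a local (s, t) parameter space:

def bilerp(z00, z10, z01, z11, s, t):
    # z00, z10, z01, z11 are the values at (0,0), (1,0), (0,1), (1,1) of the
    # cell, with s, t in [0, 1]. The same formula can interpolate depth,
    # colour channels, or normal components.
    bottom = z00 + s * (z10 - z00)   # interpolate along the lower edge
    top = z01 + s * (z11 - z01)      # interpolate along the upper edge
    return bottom + t * (top - bottom)

# Depth at the centre of a cell whose corner depths are known:
z_mid = bilerp(0.25, 0.30, 0.40, 0.55, s=0.5, t=0.5)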

DEPTH SORT ALGORITHM

Determine a visibility (depth) ordering of objects which will ensure a correct picture if objects are rendered in that order – an object-space algorithm. If no objects overlap in depth z, it is only necessary to sort them by decreasing z (furthest to closest) and render them – back-to-front rendering with the painter's algorithm. Otherwise, it may be necessary to modify objects, e.g., by intersecting and splitting polygons, to get a depth order. Depth comparisons and object splitting are done with object precision, and the scan conversion is done with image precision.

Depth Sort

• Sort all polygons according to their smallest (furthest) z
• Resolve any ambiguities (by splitting polygons as necessary)
• Scan convert each polygon in ascending order of smallest z (back to front)

PAINTER’S ALGORITHM

A simple depth sort that assumes objects lie in planes of constant z (or have non-overlapping z ranges); it does not resolve ambiguities.

SCANLINE ALGORITHM

Operates at image precision. Unlike the z-buffer, which works per polygon, this algorithm works per scanline, avoiding the unnecessary depth calculations of the z-buffer: depth is compared only when a new edge is encountered. It is an extension of polygon scan-conversion that deals with sets of polygons instead of just one. Less memory-intensive than the z-buffer, but a more complex algorithm.

SCANLINE: DATA STRUCTURE

Recall the polygon scan-conversion algorithm

Edge table (ET): stores non-horizontal and possibly shortened edges at each scanline with respect to lower vertex.

Active edge table (AET): A linked list of active edges with respect to the current scanline y, sorted in increasing x.

Extra: Polygon table (PT): stores polygon information and is referenced from the ET.

BSP ALGORITHM

Binary space partitioning of object space to create a binary tree – an object-space algorithm. Tree traversal gives a back-to-front ordering that depends on the view point. Creating/updating the tree is time- and space-intensive (it is done in pre-processing), but traversal is not – efficient for a static group of polygons and a dynamic view. Basic idea: render the polygons in the half-space at the back first, then those in front.

BUILDING THE BSP TREES

Object space is split by a root polygon (A) with respect to its surface normal (in front of or behind the root). Polygons (C) which lie in both half-spaces are split. The front and back subspaces are divided recursively to form a binary tree.

BSP TREES TRAVERSAL
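A compact sketch of building such a tree and then traversing it back to front for a given viewpoint; the polygon-splitting routine and the eye-side test are assumed helpers, not taken from the text:

from dataclasses import dataclass
from typing import Optional

@dataclass
class BSPNode:
    polygon: object                     # the root polygon chosen for this node
    front: Optional["BSPNode"] = None   # subtree in front of the root's plane
    back: Optional["BSPNode"] = None    # subtree behind the root's plane

def build_bsp(polygons, split):
    # split(root, poly) is an assumed helper returning (front_parts, back_parts):
    # the pieces of poly lying in front of / behind the plane of root, with a
    # straddling polygon cut into pieces. Either list may be empty.
    if not polygons:
        return None
    root, rest = polygons[0], polygons[1:]
    front_list, back_list = [], []
    for poly in rest:
        front_parts, back_parts = split(root, poly)
        front_list.extend(front_parts)
        back_list.extend(back_parts)
    return BSPNode(root, build_bsp(front_list, split), build_bsp(back_list, split))

def render_back_to_front(node, eye_in_front, visit):
    # eye_in_front(polygon) is an assumed predicate: is the viewpoint on the
    # front side of the polygon's plane? visit(polygon) draws one polygon.
    if node is None:
        return
    if eye_in_front(node.polygon):
        render_back_to_front(node.back, eye_in_front, visit)   # far side first
        visit(node.polygon)
        render_back_to_front(node.front, eye_in_front, visit)  # near side last
    else:
        render_back_to_front(node.front, eye_in_front, visit)
        visit(node.polygon)
        render_back_to_front(node.back, eye_in_front, visit)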

Figure: BSP traversal example

AREA SUBDIVISION

Divide-and-conquer and spatial partitioning in the projection plane. Idea:

• The projection plane is subdivided into areas
• If for an area it is easy to determine visibility, display the polygons in the area
• Otherwise subdivide further and apply the decision logic recursively
• Area coherence is exploited: a sufficiently small area will be contained in at most a single visible polygon

PAINTER'S ALGORITHM

The idea behind the Painter's algorithm is to draw polygons far away from the eye first, followed by drawing those that are close to the eye. Hidden surfaces will be written over in the image as the surfaces that obscure them are drawn. The concept is to map the objects of our scene from the world model to the screen somewhat like an artist creating an oil painting: first she paints the entire canvas with a background colour; next, she adds the more distant objects such as mountains, fields, and trees; finally, she creates the foreground with "near" objects to complete the painting. Our approach is identical. First we sort the polygons according to their z-depth and then paint them to the screen, starting with the far faces and finishing with the near faces. The algorithm initially sorts the faces of the object into back-to-front order. The faces are then scan converted in this order onto the screen. Thus a face near the front will obscure a face at the back by overwriting it at any points where their projections overlap. This accomplishes hidden-surface removal without any complex intersection calculations between the two projected faces. The basic algorithm:

(1) Sort all polygons in ascending order of maximum z-values.
(2) Resolve any ambiguities in this ordering.
(3) Scan convert each polygon in the order generated by steps (1) and (2).
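A minimal sketch of steps (1) and (3); step (2), resolving ambiguities by splitting polygons, is omitted, and the convention that larger z means farther from the eye is an assumption of the example:

def painters_algorithm(polygons, scan_convert):
    # Each polygon is assumed to carry a list of (x, y, z) vertices in view
    # coordinates, with larger z meaning farther from the eye (an assumption
    # of this sketch); scan_convert(polygon) is whatever rasteriser draws one
    # polygon into the frame buffer.
    ordered = sorted(polygons,
                     key=lambda poly: max(v[2] for v in poly.vertices),
                     reverse=True)          # farthest first
    for poly in ordered:
        scan_convert(poly)                  # nearer faces overwrite farther ones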

SHADING

Image synthesis has been defined as the creation of images using an illumination model for the propagation of light, with the goal being photorealism.

There are basically five steps to getting a realistic computer generated image of a scene:

(1) Modelling the scene

(2) Projecting to the image plane and transforming to the viewport including clipping

(3) Hidden surface and visibility computations

(4) Light intensity computations (shading) using illumination models

(5) Antialiasing

A shading model uses illumination models. Two different shading models could use the same illumination model. For example, one shading model could compute the light intensity at every point according to a fixed illumination model and another might only compute them at vertices of a linear polyhedron (using that same illumination model) and then get the rest of the values with interpolation. On the one hand, one can try to rigorously simulate the illumination process; on the other, one might be satisfied with achieving the illusion of realism.

The first approach, where we start at the eye, generates a view-dependent map of light as it moves from surfaces to the eye. The second approach to rendering, where we start from the light, generates a view-independent map. This approach may seem extremely wasteful at first glance because one makes computations for surfaces that are not seen.

Two important phenomena one has to watch out for when modelling light are:

(1) The interaction of light with boundaries between materials, and

(2) The scattering and absorption of light as it passes through a material.

Local Reflectance Models

Geometrical optics (or ray theory) treats reflected light as having three components: an ambient one, a diffuse one, and a specular one. Ambient (or background) light is light that is uniformly incident from the environment and which is reflected equally in all directions by surfaces. Diffuse and specular light is reflected light from specific sources. Diffuse light is light that is scattered equally in all directions and has two sources: it comes either from internal scattering at those places where light penetrates or from multiple surface reflections if the surface is rough. For example, dull or matte material generates diffuse light. Specular light is the part of light that is concentrated in the mirror direction. It gives objects their highlights.

The following notation will be used throughout this chapter. See the Figure. At each point p of a surface the unit normal to the tangent plane at that point will be denoted by N. The vectors V and L are the unit vectors that point to the eye (or camera) and the light, respectively. It is convenient to define another unit vector H which is the bisector of V and L, that is, H = (V + L)/|V + L|.

The angle between H and N will be denoted by α. The simplest reflectance model takes only ambient and diffuse light into account. The ambient component of the intensity is assumed to be a constant fraction of a uniform ambient light intensity, and the diffuse component is assumed to be proportional to N•L (Lambert's law).
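For concreteness, one commonly used form of such a local model, written with the notation above, is

\[ I = k_a I_a + I_p \left( k_d \max(N \cdot L, 0) + k_s (N \cdot H)^{n} \right) \]

where I_a is the intensity of the ambient light, I_p the intensity of the point source, k_a, k_d and k_s the ambient, diffuse and specular reflection coefficients of the material, and the exponent n controls the sharpness of the specular highlight. This is the standard Phong/Blinn-type formulation and is given here as a representative example rather than as the exact equation of the text's figures.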

Consider the Figure. An area A1 of light incoming along a direction L will shine on an area A2 in the plane with normal N. If θ is the angle between N and L, then it is easily shown that A1/A2 = cos θ = N•L.

Figure: Diffuse Reflection Geometry

Figure: Specular Reflection Geometry

The ratio A1/A2 specifies the part of the light that will diffuse equally in all directions, but there is one other assumption: any light coming from behind the surface is assumed not to contribute any diffuse light. To put it another way, any surface that faces away from the light, that is, where N•L < 0, contributes nothing. It follows that rd = max (N•L, 0).

SIMPLE APPROACHES TO SHADING

The last section described some simple illumination models and how one can use them to compute the illumination at each point of an object. In this section we show how this information is used to implement shading in a modelling program. The details depend on

(1) The visible surface determination algorithm that is used,

(2) The type of objects that are being modelled (linear or curved), and

(3) The amount of work one is willing to do.

Constant Shading:

The simplest way to shade is to draw all objects in a constant colour; the visible surface algorithms alone would suffice to accomplish that. A more sophisticated model would draw each polygon in the scene with a constant shade determined by its normal and one of the described illumination models. Constant shading using facet normals works fine for objects that really are linear polyhedra, but if we are approximating curved objects by linear polyhedra, then the scene looks very faceted and not smooth. To achieve more realistic shading, we first consider the case of a world of linear polyhedra.

Gouraud Shading:

Gouraud's scan line algorithm computes the illumination at each vertex of an object using the desired illumination model and then computes the illumination at all other points by interpolation. As an example, consider the Figure. Assuming that the illumination is known at the vertices A, B, C, and D, one computes it at the point X as follows: let P and Q be the points where the edges AC and BD intersect the scan line containing X, respectively. Compute the illumination at P and Q by linearly interpolating the illumination values at A, C and B, D, respectively. The value at X is then the linear interpolation of those two values. To get the exact illumination values at vertices, normals are needed. These normals are typically computed by taking the average of adjacent face normals.

Figure: Gouraud Shading

Phong Shading:

To remedy some of the problems with Gouraud shading, Phong interpolated the surface normals (rather than the intensities) at vertices across faces and then computed the illumination at each point directly from the normals themselves. This clearly takes more work; in particular, generating unit-length vectors means taking square roots, which is costly in time. Alternatively, one can use a Taylor expansion to get a good approximation. The latter approach produces pictures indistinguishable from real Phong shading at a cost that is not much greater than that of Gouraud shading. Phong shading produces better results than Gouraud shading, but it also has problems. The difference between the interpolated normal at the point P and the normal at the vertex V could cause a big change in the illumination value as one moves from P to V. This again shows the importance of sampling properly.

Gouraud and Phong shading basically assumed a scan line visible surface algorithm approach. In that context, one can speed up the process by computing the shading equations incrementally.
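The contrast between the two can be sketched for a single span of pixels; the diffuse illumination function, the tuple-based vector helpers, and the example normals below are assumptions made for illustration:

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lerp(a, b, t):
    return tuple(p + t * (q - p) for p, q in zip(a, b))

def intensity(normal, light_dir, ka=0.2, kd=0.8):
    # Simple ambient + diffuse illumination model used for the comparison.
    n_dot_l = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    return ka + kd * n_dot_l

def gouraud_span(n_left, n_right, light_dir, steps):
    # Gouraud: illuminate at the two ends of the span, interpolate the intensity.
    i_left = intensity(normalize(n_left), light_dir)
    i_right = intensity(normalize(n_right), light_dir)
    return [i_left + (i_right - i_left) * k / (steps - 1) for k in range(steps)]

def phong_span(n_left, n_right, light_dir, steps):
    # Phong: interpolate the normal across the span, illuminate at every pixel.
    return [intensity(normalize(lerp(n_left, n_right, k / (steps - 1))), light_dir)
            for k in range(steps)]

light = normalize((0.0, 0.0, 1.0))
print(gouraud_span((1, 0, 1), (-1, 0, 1), light, steps=5))
print(phong_span((1, 0, 1), (-1, 0, 1), light, steps=5))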

Figure: Phong Shading

COLOURING

(1) Colour is a property of materials (as in the case of a red dress).

(2) Colour is a property of light (as in the case of a red traffic light).

Perceived Colour

One way to view the colour of an object in isolation is to illuminate the object in a dark room. Another is to view it through a hole in a black panel while focusing on the perimeter of the hole. In the latter case, the perceived colours are called film or aperture colours. Perceived colours have a number of characteristics. In the case of film colours, the simplest case, there are only three, namely, hue, saturation, and brightness:

Perceived hue:

Looking at red light we experience a sensation of a red "hue." It is hard to say exactly what hue is; the problem is similar to trying to describe the sensation of bitterness or of shrillness in a voice. It can also be thought of as the "colour" of the colour by which the colour is designated. Colours are subdivided into chromatic colours – those perceived colours that have hue – and achromatic colours – those perceived colours that do not have hue (for example, the colours from a fluorescent lamp). There are four basic hues (the unitary or unique hues): unitary red, yellow, green, and blue.

Perceived saturation:

This is the perception of the relative amount of a hue in a colour and can be thought of as a number between 0 and 1. Since light can be thought of as having two components, a chromatic and an achromatic one, a working definition of saturation is the ratio of the chromatic component to the sum of the chromatic and achromatic components of a colour. For example, pink has a lower chromatic component than red. Saturation measures how much a pure colour is diluted by white.

Perceived brightness:

Brightness is an attribute of the illumination in which a nonisolated object is viewed. It is a "perception of the general luminance level." Brightness applies to the colour of an object only when the object is isolated and the light comes to the eye from nowhere else. One generally talks about it as ranging from "dim" to "dazzling." Perceived colours other than film colours have additional characteristics such as:

Perceived lightness:

This is an attribute of a nonisolated colour produced in the presence of a second stimulus. One uses terms such as "lighter than" or "darker than" for this.

Perceived brilliance:

This is perceived only when the object is not isolated as, for example, in the case of an area of paint in a painting or a piece of glass among others in a stained glass window.

COLORIMETRY

The tri-stimulus theory of light is based on the assumption that there exist three types of colour-sensitive cones at the centre of the eye which are most sensitive to red, green, and blue. The eye is most sensitive to green.

Basically, one chooses three beams (a short, a medium, and a long wavelength, typically the three additive primary colours red, green, and blue). Different colours can then be produced by shining the three beams on a spot and varying the intensity of each. The chromaticity of a colour C is defined by a triple (x, y, z) of numbers specifying these three intensities, or weights, for the colour. Mathematically, one is representing the colour C in the barycentric coordinate form

C = xR + yG + zB,

where R, G, and B represent the colours red, green, and blue, respectively, and x + y + z = 1. The numbers x, y, and z are called the chromaticity values of C. The Figure shows this in graphical form. The triangle is called the Maxwell triangle chromaticity diagram.

Figure: Maxwell Triangle

Since not every visible colour can be matched in this way with real primaries, imaginary primaries were invented (they are called that because they are not visible), called imaginary red, imaginary green, and imaginary blue and denoted by X, Y, and Z. Then every colour C can be written as

C = xX + yY + zZ, with x + y + z = 1.

COLOR MODELS

The range of colours produced by an RGB monitor is called its gamut. There are a number of models for the gamut of an RGB monitor.

The Colour Cube or RGB Model:

This is the “natural” gamut model that represents the gamut as the unit cube [0,1] x [0,1] x [0,1].

The CMY Model:

This model uses the subtractive primary colours cyan, magenta, and yellow, which are the complements of red, green, and blue, that is,

(C, M, Y) = (1, 1, 1) - (R, G, B).

Figure: CIE Chromaticity Diagram

Figure: RGB Colour Cube

The YIQ Model:

This model, used in NTSC television encoding, is defined by a linear transformation of the RGB values.
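A commonly quoted form of this transform is

\[ \begin{pmatrix} Y \\ I \\ Q \end{pmatrix} \approx \begin{pmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.275 & -0.321 \\ 0.212 & -0.523 & 0.311 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} \]

where the coefficients vary slightly between references; Y carries the luminance information used by black-and-white receivers, while I and Q carry the chrominance.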

The HSV (Hue-Saturation-Value) "Hex cone" Model: We get this model by looking down at the colour cube along its main diagonal and re-coordinatizing. A colour is now specified by three numbers h (hue), s (saturation), and v (value). The hue corresponds to an angle from 0 to 360 degrees, where, for example, red is at 0 degrees and green at 120 degrees. The saturation s, s ∈ [0, 1], measures the departure of a hue from white or gray. The value v, v ∈ [0, 1], measures the departure of a hue from black, the colour of zero energy. The hexcone model tries to mimic how artists work.
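Python's standard colorsys module implements this re-coordinatization, which makes a quick check of the description easy (the colour values below are illustrative):

import colorsys

# colorsys works with r, g, b and h, s, v all in [0, 1]; multiply h by 360
# to get the angle in degrees used in the hexcone description above.
for name, rgb in [("red", (1.0, 0.0, 0.0)),
                  ("green", (0.0, 1.0, 0.0)),
                  ("pink", (1.0, 0.75, 0.8))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{name:5s}  hue={360 * h:5.1f} deg  saturation={s:.2f}  value={v:.2f}")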

The HSL (Hue-Saturation-Brightness) Triangle Model:

The reason for not using the letter "B" is so as not to cause confusion with "blue." The hue is again specified as an angle between 0 and 360 degrees about the vertical axis, and the saturation and brightness are values between 0 and 1. The HSL model is good for colour gradations found in nature.

Figure: The Double Hexacone HSL Colour Model

COMPUTER ANIMATION:

A movie appears on the screen as continuous motion; the actual tape containing the movie, however, consists of a sequence of images, or what are known as frames. These frames, when flashed in rapid succession on the screen, cause the illusion of motion.

Traditional Animation:

In traditional 2D animation, most of the work is done by hand. The story to be animated is written out on a storyboard. Initially only the key frames are identified. Key frames are frames selected by the master animators for their importance in the animated sequence, either because of the composition of the frames, an expression depicted by the characters, or an extreme movement in the sequence. Once the key frames are drawn, the next step is to draw the intermediate frames. The number of intermediate frames, or in-betweens as they are called, depends on the length of the animated sequence desired (keeping in mind the fps count that needs to be achieved). This step of drawing in-betweens is also called tweening.

A technique that helps tremendously in the process of creating animations is called cel animation. When we create an animation using this method, each character is drawn on a separate piece of transparent paper. A background is also drawn on a separate piece of opaque paper. Then, when it is time to shoot the animation, the different characters are overlaid on top of the background. Each frame can reuse the same elements and only modify the ones that change from one frame to the next. This method also saves time, since the artists do not have to draw entire frames – just the parts that need to change, such as individual characters. Sometimes, even separate parts of a character's body are placed on separate pieces of transparency paper. Traditional animation is still a very time-consuming and labour-intensive process. Additionally, once the frames are drawn, changing any part of the story requires a complete reworking of the drawings.

3D COMPUTER ANIMATION – INTERPOLATIONS

The most popular technique used in 3D computer animation is the key-frame in-between technique that was borrowed from traditional animation. In this technique, properties of a model (such as its position, colour, etc.) are identified by a user (usually an animator) in two or more key frames. The computer then calculates these properties of the model in the in-between frames in order to create smooth motion on playback.

Computing the in-between frames is done by taking the information in the key frames and averaging it in some way. This calculation is called interpolation. Interpolation creates a sequence of in-between property values for the in-between frames. The type of interpolation used depends on the kind of final motion desired.

Interpolation can be used to calculate different properties of objects in the in-between frames – properties such as the position of objects in space, their size, orientation, colour, etc.

Linear Interpolation

The simplest type of interpolation is linear interpolation. In linear interpolation, the value of a given property is estimated between two known values on a straight-line basis (hence the term linear). In simpler terms, linear interpolation divides the difference between the property values in the two key frames into equal steps, giving as many equally spaced in-between values as there are in-between frames. For example, consider a model P with a position along the y-axis of Yin at the initial key frame Kin. At the final key frame, Kfin, the y position has moved to Yfin. Now, say we want only one in-between frame f. This frame would be midway between the two key frames, and P would be located midway between the positions Yin and Yfin.
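A minimal sketch of this in-betweening calculation; the function name and the numeric values are illustrative:

```python
def tween(y_init, y_fin, n_inbetween):
    """Return n_inbetween equally spaced values strictly between two key-frame values."""
    step = (y_fin - y_init) / (n_inbetween + 1)   # equal increment per in-between frame
    return [y_init + step * i for i in range(1, n_inbetween + 1)]

print(tween(0.0, 10.0, 1))   # one in-between frame  -> [5.0], midway as described
print(tween(0.0, 10.0, 4))   # four in-between frames -> [2.0, 4.0, 6.0, 8.0]
```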

Figure: Animation Timeline

The Principles of Animation

Three-dimensional computer animation has made great advances in the last few years. There are numerous animation packages that can help even a novice develop quality animations. However, it is not uncommon to see animations that lack zing: they seem drab, look unreal, and reek of "CG-ism." Here is where the principles of traditional 2D animation come to the rescue.

The concepts used in hand-drawn animation to make a sequence look believable can be used in the 3D realm to add as much zest and depth to the animation as we see in its 2D counterpart. Walt Disney himself put forward many of these principles of animation, which are now used as a bible by animators worldwide.

Squash and Stretch

One trick used in animation to depict the changes in shape that occur during motion is called squash and stretch. The squashed position depicts a model form flattened out by an external pressure or constricted by its own power. The stretched position shows the same form in a very extended or elongated condition. Squash and stretch effects also help to create illusions of speed and rigidity.

Human facial animation also uses the concepts of squash and stretch extensively to show the flexibility of the skin and muscle and also to show the relationship between parts of the face. During the squash and stretch, the object should always retain its volume. If an object squashes down, then its sides should stretch out to maintain the volume.

Figure: Squash and Stretch Applied to the Bouncing Ball

Staging

Staging is the presentation of an idea so that it is completely and unmistakably clear. A personality is staged so that it is recognizable; an expression or movement is brought into focus so that it can be seen. It is important to focus the audience on only one idea at a time. If a lot of action is happening on the screen at once, the eye does not know where to look, and the main idea will be upstaged. The camera placement is very important to make sure that the viewer's eye is led exactly to where it needs to be at the right moment. Usually, the center of the screen is where the viewer's eyes are focused. Lighting techniques also help in staging.

To stage this part of the animation, we want to move the camera to zoom in on Snowy's current location. Such a swing in the camera is called a cut. A cut splits a sequence of animation into distinctive shots. The transition from shot to shot can be a direct cut, which we shall use in this example. Usually, the last few frames from the first shot are repeated in the second shot to maintain continuity between the shots.

Anticipation

Anticipation involves preparing the objects for the event that will occur in the later frames of the animation sequence. If there is no anticipation of a scene, the scene may look rather abrupt and unnatural. The anticipation principle makes the viewer aware of what to expect in the coming scene and also makes the viewer curious about the coming action. Anticipation is particularly necessary if the event in a scene is going to occur very quickly and it is crucial that the viewer grasp what is happening in the scene.

Timing

Timing refers to the speed of an action. It gives meaning to the movement in an animation. Proper timing is crucial to making ideas readable. Timing can define the weight of an object: a heavier object is slower to pick up and lose speed than a lighter one. Timing can also convey the emotional state of a character: a fast move can convey shock, fear, apprehension, or nervousness, while a slow move can convey lethargy or excess weight.

ADVANCED ANIMATION TECHNIQUES

Dynamics

Dynamics is the branch of physics that describes how an object's physical properties (such as its weight, size, etc.) are affected by forces in the world it lives in (like gravity, air resistance, etc.). To calculate the animated path, an object traversing the virtual 3D world is imbued with forces modelled after the physics of the real world. These forces act on the models, causing some action to occur. For example, blades of grass sway in the wind depending on the speed of the wind and, to a lesser extent, on the size of the blade. In the 3D world, a wind force can be defined which, when applied to models of grass, affects their transformation matrices in a manner defined by the laws of physics. This kind of simulation was used in A Bug's Life, where a vast number of grass blades were made to constantly sway in the background by defining wind forces on them. The foreground animation of Flick and his troop was still defined using key-frame animation.

Procedural motion

In this technique, a set of rules is defined to govern the animation of the scene. Particle systems are a very good example of such a technique. Fuzzy objects like fire and dust or even water can be represented by a set of particles. These particles are randomly generated at a defined location in space, follow a certain motion trajectory, and then finally die at different times. The rules define the position, motion, color and size of the individual particle. A collection of hundreds of particles can lead to realistic effects of fire, explosions, waterfalls, etc.
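A minimal particle-system sketch along the lines just described, assuming a simple gravity rule and randomized emission; all names and numeric ranges are illustrative:

```python
import random

class Particle:
    def __init__(self, pos, vel, life):
        self.pos, self.vel, self.life = list(pos), list(vel), life

def emit(origin, count):
    """Birth rule: random initial velocity and lifetime around a fixed emitter."""
    return [Particle(origin,
                     [random.uniform(-1.0, 1.0),   # sideways spread
                      random.uniform(2.0, 5.0),    # upward speed
                      random.uniform(-1.0, 1.0)],
                     random.uniform(0.5, 2.0))     # particles die at different times
            for _ in range(count)]

def update(particles, dt, gravity=-9.8):
    """Motion rule: simple ballistic trajectory; dead particles are removed."""
    alive = []
    for p in particles:
        p.vel[1] += gravity * dt
        p.pos = [c + v * dt for c, v in zip(p.pos, p.vel)]
        p.life -= dt
        if p.life > 0.0:
            alive.append(p)
    return alive

particles = emit((0.0, 0.0, 0.0), 100)
for _ in range(30):                      # simulate one second at 30 frames per second
    particles = update(particles, 1.0 / 30.0)
print(len(particles), "particles still alive")
```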

Motion Capture

Real-time motion capture is an advanced technique that allows animators to capture real-life motion with the aid of a machine. An actor is strapped with electronic nodes that transmit signals to the computer when the actor moves. The computer records this motion. The captured motion data can then be applied to the transformation matrices of the computer-generated characters to achieve a real-life effect.

Kinematics

Kinematics is the study of motion independent of the underlying forces that produce the motion. It is an important technique used to define articulated-figure animations.

Usually, the model is given a skeleton structure consisting of a hierarchy of rigid links (or bones) connected at joints. Each link corresponds to certain patches of the model surface, also called the model skin. A joint can be rotated, causing the corresponding surface and the attached child bones to rotate about it. There are usually constraints on how much each joint can be rotated. This motion corresponds closely with human anatomy, and hence is used very often for human-like animations.

Consider the model of a leg, with a defined skeleton consisting of three links L1, L2, and L3 at joints J1, J2, and J3, as shown in Fig. 9.13. J1 is the head of the hierarchy. A rotation about J1 causes the chain L1-J2-L2-J3-L3 (and the associated skin) to rotate as shown in the figure. You can also rotate about joints J2 or J3 to achieve a desired position.
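A minimal forward-kinematics sketch of such a three-link chain in 2D; the joint angles, link lengths, and function name are illustrative:

```python
import math

def chain_positions(root, lengths, joint_angles):
    """Forward kinematics of a planar link chain.
    root: (x, y) of joint J1; lengths: [L1, L2, L3];
    joint_angles: rotation of each joint (radians), relative to its parent link."""
    points = [root]
    x, y = root
    accumulated = 0.0
    for length, angle in zip(lengths, joint_angles):
        accumulated += angle          # child links inherit every parent rotation
        x += length * math.cos(accumulated)
        y += length * math.sin(accumulated)
        points.append((x, y))         # position of the next joint / end point
    return points

# rotating only J1 moves the whole chain L1-J2-L2-J3-L3, as described above
print(chain_positions((0.0, 0.0), [3.0, 2.0, 1.0], [math.radians(90), 0.0, 0.0]))
print(chain_positions((0.0, 0.0), [3.0, 2.0, 1.0], [math.radians(90), math.radians(-45), 0.0]))
```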

Camera Update

To update the camera position, we make use of the popular physics concept:

Distance travelled = velocity × elapsed time

Therefore:

Final Position = Initial Position + velocity × elapsed time

The camera position is updated at every tick of the game logic. The velocity of the motion is determined at every tick by checking the state of the arrow keys on the keyboard. If the up arrow is being pressed, velocity is positive, causing forward motion. If the down key is being pressed, velocity is negative, causing backward motion. If nothing is being pressed, velocity is 0, and no motion occurs. The difference between the last timer tick and the current one gives the elapsed time. The keyboard callback function that we used earlier only kicks in when a state change occurs, that is, when a key is pressed or released. The new formula requires us to know the state of the key at every tick.
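A minimal per-tick camera update sketch following this formula; the key-state dictionary, speed constant, and function names are assumptions for illustration:

```python
import time

key_state = {"up": False, "down": False}   # assumed to be set by keyboard callbacks
SPEED = 5.0                                 # camera speed in scene units per second

def camera_tick(position, last_time):
    """Advance the camera along its axis: final = initial + velocity * elapsed time."""
    now = time.time()
    elapsed = now - last_time
    if key_state["up"]:
        velocity = SPEED          # forward motion
    elif key_state["down"]:
        velocity = -SPEED         # backward motion
    else:
        velocity = 0.0            # no key pressed, no motion
    return position + velocity * elapsed, now

# called once per tick of the game loop
position, last_time = 0.0, time.time()
position, last_time = camera_tick(position, last_time)
```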

Automatic Texture Generation

OpenGL has three automatic texture generation modes. One is used when doing spherical environment mapping. The other two are eye linear and object linear. In eye linear mode, a texture coordinate is generated from a vertex position by computing the distance of the vertex from a plane specified in eye coordinates.

UNIT IV

ASSEMBLY OF PARTS

ASSEMBLY MODELING

An assembly is a collection of independent parts. It is important to understand the nature and the structure of the dependencies between parts in an assembly in order to model the assembly properly, and to determine, for example, whether a part can be moved and which other parts will move with it. The assembly model must include the spatial positions and hierarchical relationships among the parts, and the assembly (attachment) relationships (mating conditions) between parts.

The figure shows how an assembly model can be created using a CAD system. Designers first create the individual parts; they can also analyze the parts separately. Once the part designs are complete, designers can proceed to create the assembly and analyze it. Creating the assembly from its parts requires specifying the spatial and mating relationships between the parts. Assembly analysis may include interference checking, mass properties, kinematic and dynamic analysis, and finite element analysis. CAD systems establish a link between an assembly and its individual parts such that designers need only change individual parts for design modification, and the system updates the assembly model automatically.

Fig.1. Creating an assembly model

INFERENCE OF POSITION AND ORIENTATION

The inference of the position (location) and orientation of a part in an assembly from mating conditions requires computing its 4 × 4 homogeneous transformation matrix from these conditions. This matrix relates the part's local coordinate system (part MCS) to the assembly's global coordinate system (assembly MCS). In reference to the figure below, the location of Part 1 is represented by the vector OO1 connecting the origin O of the assembly MCS to the origin O1 of the X1Y1Z1 MCS of the part. The orientation is represented by the rotation matrix between the two systems.

Fig.2. Assembling parts

(1)

This matrix has 12 variables (nine rotational and three translational elements) that must be determined from the mating conditions. For an assembly of N parts, choosing one of them as a host, N − 1 transformation matrices have to be computed. Therefore, the variables to solve for simultaneously are the 12(N − 1) elements of these matrices. Before we present the general solution, we cover an easier version first; we call it the WCS method.

WCS Method

The simplest method for specifying the location and orientation of each part in an assembly is to provide the 4 × 4 homogeneous transformation matrix [T] directly, instead of inferring it from mating conditions. This method provides us with a good understanding of the basics. The matrix transforms the coordinates of the geometric entities of the part from its MCS to the assembly MCS.

One way for the user to provide the transformation matrix interactively is by defining a WCS in the assembly model such that its origin and orientation match the final location and orientation of the MCS of the part that we need to insert into the assembly. We then write the [T] matrix, given by the above equation, that converts the coordinates from the part MCS (now the WCS) to the assembly MCS. When we apply [T] to all the part geometry, we effectively insert (merge) the part into the assembly.

As a WCS is completely defined by specifying its X and Y axes or its XY plane, the proper WCS used to merge a part into its assembly can be defined such that its XY plane coincides with the XY plane of the part MCS. For example, suppose the assembly consists of two parts, A and B, and three instances of Part B are used in the assembly. The user first creates the two parts, each with its own MCS. To create the assembly, we insert an instance of part A into a blank assembly file and use this instance as the assembly base. It is usually beneficial to assign a separate layer to each instance for ease of managing the assembly. To merge the first instance of B on top of A, the X1Y1Z1 WCS is defined by the user and then the instance is merged. Similarly, the X2Y2Z2 and X3Y3Z3 WCSs are defined. The transformation matrices to merge the three instances of B into A are given by, respectively: (2)

(3)

(4)

Fig.3. (a) Individual parts of the assembly
Fig.3. (b) Assembly creation via the WCS method
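A minimal numeric sketch of the WCS method: build a 4 × 4 homogeneous matrix [T] and apply it to a part's geometry to merge the part into the assembly MCS. The rotation, translation, and vertex values are illustrative and not those of Fig. 3:

```python
import numpy as np

def make_transform(angle_deg_about_z, translation):
    """4x4 homogeneous matrix mapping part-MCS (WCS) coordinates to the assembly MCS."""
    a = np.radians(angle_deg_about_z)
    T = np.identity(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]   # rotation sub-matrix
    T[:3, 3] = translation                       # translation column
    return T

def merge_part(vertices, T):
    """Apply [T] to every vertex of the part (homogeneous coordinates)."""
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (T @ homogeneous.T).T[:, :3]

# hypothetical instance of part B placed 50 units along X and 10 units up on part A
T1 = make_transform(0.0, [50.0, 0.0, 10.0])
part_b = np.array([[0.0, 0.0, 0.0],
                   [10.0, 0.0, 0.0],
                   [10.0, 5.0, 0.0]])
print(merge_part(part_b, T1))
```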

Mate Method

In a typical assembly, the mating conditions between two components are not enough by themselves to completely constrain the two components. An intertwinement of mating conditions usually exists among all of the parts. In general, a group of parts must be solved simultaneously to find the transformation matrices [T]. The mating conditions, along with the properties of [T], provide the necessary equations to solve for the 12(N − 1) variables. The number of equations is always equal to or greater than the number of variables. Therefore, the method of solution must account for the redundant equations and must eliminate them from the system of equations to be solved.

Before we discuss the details of possible methods of solution, we present the development of constraint equations from mating conditions. We discuss the three basic mating conditions: coincident, concentric, and coplanar. For the coincident condition, each face where the two parts mate (butt up against one another) is specified by a unit normal and a point described in the MCS of its corresponding part. Let [T1] and [T2] be the transformation matrices from the X1Y1Z1 and X2Y2Z2 coordinate systems, respectively, to the MCS of the assembly. The unit normals and the two points specifying the mating conditions can be expressed in terms of the MCS (XYZ) system as follows: (1)

(2)

(3)

And

(4)

In the preceding equations, the superscript a indicates assembly. The coincident condition requires that the directions of the two unit normals must be equal and opposite, and the two points must lie in the same plane at which the two faces mate. The coincident condition requires four equations that can be expressed as follows:

(6) (7)

(8)

(9)

The concentric condition requires the centerlines of the shaft and the hole to be collinear. The equation of the centerline of, say, the hole can be written as:

(10)

If the shaft axis is collinear with the hole centerline, the points P3 and P4 defining the axis should satisfy Eq. (10). These points must first be transformed using [T2] to the MCS coordinates. The constraint equations required for each concentric condition can be written as:

(11) And

(12)

Each of Eqs. (11) and (12) yields three equations, resulting in six equations for each concentric condition. In general, two of these equations are redundant because Eqs. (11) and (12) each yield only two independent equations instead of three. However, it is necessary to carry all three to cover the case where the centerline passing through points P1 and P2 is parallel to any of the assembly MCS axes. For example, if the centerline is parallel to the X axis, Eq. (11) becomes:

(13)

Equation (13) gives the following two equations only:

(14) and

(15)

Hence, it can be seen that all three equations must be carried so that at least two independent equations can be written for all cases, although this introduces redundancy in the system of equations.

For a part rotating freely about its X1 axis relative to the assembly MCS, the constraint can be written in terms of the elements of the matrix [T] given by Eq. (1) as:

qz = ry = 0 (16)

For a part rotating freely about its Y1 or Z1 axis, we can write respectively:

mz = rx = 0 (17)

my = qx = 0 (18)

With all the constraint equations derived for the various mating conditions, we can now count the total number of equations and unknowns that can be used to infer the position and orientation of a part from mating conditions. For each coincident condition, 16 equations can be written: 12 are provided by the global descriptions of the unit normals and points, and four by the coincident-condition equations themselves.

Fig.4. Free rotating part

For each concentric condition, 18 equations can be written. The coplanar condition provides the same number of equations as the coincident condition.

For each free-rotating part, two equations [Eq. (16), (17), or (18)] are available. In addition, the properties of the transformation matrix [T] [Eq. (1)] provide six equations: three from the unit-vector length property and three from the orthogonality property. These can be written as:

(19)

(20)

(21)

(22) (23)

(24)

The unit-length requirement of (rx, ry, rz) and its orthogonality with the other two unit vectors are satisfied automatically by these equations. For an assembly of N parts, the total number of equations that can be written is given by:

(25)

where NA, NC, NF, and NR are, respectively, the number of coincident, coplanar, concentric, and free-rotation conditions. The number of variables is:

(26)

Let us assume that the MCS of a part can be oriented properly in its assembly by three rotations about the axes of the assembly MCS in the following order: about the Z axis, about the Y axis, and about the X axis. Thus we can rewrite Eq. (1) as:

(27)

To reduce the number of equations and variables even further, we can eliminate the twelve variables (the global descriptions of the unit normals and points) by considering them known in terms of the elements of the transformation matrix. Thus the equations become, respectively:

(28)

Solving the Mate Equations

We now discuss the solution of the system of equations that results from applying the mating conditions. This system is nonlinear due to the trigonometric functions that appear in the transformation matrix [T]. Since the number of equations is equal to or exceeds the number of variables, a method is needed to remove the redundant equations. The method discussed here utilizes the least-squares technique to eliminate the redundancy, followed by the Newton-Raphson iteration method to solve the resulting set of independent equations. The Newton-Raphson method for n nonlinear equations in n variables can be written as:

Xk+1 = Xk − [J(Xk)]^(-1) Rk (1)

where Xk is the solution vector at the kth iteration, [J(Xk)] is the Jacobian matrix, and Rk is the residual vector, both of which are evaluated at the current solution vector Xk. When redundancy exists, the inverse of the Jacobian may not exist because the Jacobian itself may not be square and/or it may be singular. The following procedure can be used to solve for n variables (X = [x1 x2 ... xn]T) using the following m equations:

fj(x1, x2, ..., xn) = 0,  j = 1, 2, ..., m (2)

In vector form, Eq. (2) becomes:

F(X) = 0 (3)

To write Eq. (2) in Newton-Raphson iterative form, let us assume that a solution Xi exists at step i and that the solution at step i + 1 is Xi+1 such that

Xi+1 = Xi + ΔXi (4)

Linearizing Eqn. (2) about Xi gives:

(5)

If Xi+1 is the solution, then Eq. (3) holds; that is, Fi+1 = 0. Thus, Eq. (5) becomes:

(6) Expanding Eq. (6) gives:

(7)

Or

(8)

where [J]i = [J(Xi)], ΔXi, and Ri are the Jacobian matrix, the incremental solution, and the residual vector at iteration i, respectively. The Jacobian [J(Xi)] is non-square, of size m × n.

Solving this equation for ΔXi, in the least-squares sense, gives:

ΔXi = −([J]i^T [J]i)^(-1) [J]i^T Ri (9)

The algorithm to solve for X can be described as follows. An initial guess X0 is made. The Jacobian [J]0 and the residual vector R0 are computed. Next, Eq. (9) is used to calculate ΔX0. Lastly, Eq. (4) is used to compute X1. These steps are repeated to obtain X1 and X2, X2 and X3, ..., and Xn−1 and Xn. Convergence is achieved when the elements of the residual vector R or of the incremental solution ΔX approach zero.
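A minimal sketch of this least-squares Newton-Raphson iteration, using numpy's least-squares solver in place of forming the pseudo-inverse explicitly; the toy residual and Jacobian functions below (three equations, two unknowns, one redundant) are purely illustrative:

```python
import numpy as np

def solve_mate_equations(residual, jacobian, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson with a least-squares step for an over-determined
    (possibly redundant) nonlinear system F(X) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        R = residual(x)                               # residual vector at current X
        J = jacobian(x)                               # m x n Jacobian, m >= n
        dx, *_ = np.linalg.lstsq(J, -R, rcond=None)   # least-squares step removes redundancy
        x = x + dx                                    # Eq. (4): X_{i+1} = X_i + dX_i
        if np.linalg.norm(dx) < tol and np.linalg.norm(R) < tol:
            break
    return x

# toy system: x + y = 2, x - y = 0, 2x + 2y = 4 (the third is redundant with the first)
residual = lambda v: np.array([v[0] + v[1] - 2.0,
                               v[0] - v[1],
                               2.0 * v[0] + 2.0 * v[1] - 4.0])
jacobian = lambda v: np.array([[1.0, 1.0],
                               [1.0, -1.0],
                               [2.0, 2.0]])
print(solve_mate_equations(residual, jacobian, [0.0, 0.0]))   # -> approximately [1. 1.]
```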

Tolerance Analysis

Analysis of tolerances and their stackup is important because tolerance assignments are usually done on a part-by-part basis. Tolerance analysis is defined as the process of checking the tolerances to verify that all the design constraints are met; it is sometimes known as design assurance. The objective of tolerance analysis is to determine the variability of any quantity that is a function of product dimensions; such quantities are called design functions. Most often, these quantities are themselves dimensions. Product dimensions and variables that control the behavior of a design function are called design function variables. The variability of design functions is used to assess the suitability of a particular tolerance specification. Figure 5 shows an example of a design function for the case of two blocks assembled into a slot. The design function F is the clearance between the two blocks, and is a function of the dimensions of the slot and the two blocks. A tolerance specification for these dimensions is satisfactory if it prevents F from being less than zero.

Fig.5. Tolerance representation

The formulation of tolerance analysis can be stated as follows. Given a set of tolerances

{T} = {T1, T2, ..., Tn} on a set of dimensions {d} = {d1, d2, ..., dn}, and a set of design constraints

{C} = {C1, C2, ..., Cn}, is {T} satisfactory? Constraints could be functional requirements of an assembly, manufacturing costs, and so forth. The dimensions in the set {d} include both nominal dimensions {dN} and their tolerances {T}; that is,

d = dN + T (1)

To assess tolerance suitability, we formulate a design function in terms of {d} as follows:

F = f({d}) = f(d1, d2, ..., dn) (2)

The variability of F due to variability in {d} is determined (using methods described in the discussion that follows). If F satisfies {C} all the time, {T} is satisfactory and the assembly is accepted. If not, {T} is unsatisfactory and the assembly is rejected. Design functions are often complex; their formulation is the hardest part of tolerance analysis and can be time consuming. Tolerance analysis methods can be divided into two types. In the simpler type, dimensions have conventional tolerances, and the result of tolerance analysis is the nominal value of the design function (FN) and its upper (Fmax) and lower (Fmin) limits. This type of analysis is sometimes called worst-case analysis. This means that all possible combinations of in-tolerance parts must result in an assembly that satisfies the design constraints. The upper and lower limits of the design function represent the worst possible combination of the tolerances of the design function variables. However, the likelihood of a worst-case combination of these tolerances in any particular product is very low. Therefore, worst-case tolerance analysis is very conservative.

Worst-Case Arithmetic Method

The arithmetic tolerance method is the worst-case analysis method. It uses the limits of dimensions to carry out the tolerance calculations; the actual or expected distribution of dimensions is not taken into account. All manufactured parts are interchangeable, since the maximum (limit) values are used. Arithmetic tolerances require greater manufacturing accuracy. The method is used in job-shop production (very few parts are produced) and in cases where total or 100% interchangeable assembly is required. Let us assume a closed-loop (meaning the resulting dimension is obtained by adding and/or subtracting the given dimensions) dimension set {d} of n elements such that the design function (resultant dimension) F is obtained by adding the first m elements (called increasing dimensions) and subtracting the last (n − m) elements (called decreasing dimensions). Using this method, all tolerance information about F is obtained by adding and/or subtracting the corresponding information about the individual dimensions. Writing Ti+ and Ti− for the upper and lower deviations of dimension di, we can write:

F = (d1 + d2 + ... + dm) − (dm+1 + ... + dn) (1)

Maximum dimensions:

Fmax = (d1,max + ... + dm,max) − (dm+1,min + ... + dn,min) (2)

Minimum dimensions:

Fmin = (d1,min + ... + dm,min) − (dm+1,max + ... + dn,max) (3)

Tolerance on F:

TF = Fmax − Fmin = T1 + T2 + ... + Tn (4)

Upper tolerance on F:

TF+ = (T1+ + ... + Tm+) + (Tm+1− + ... + Tn−) (5)

Lower tolerance on F:

TF− = (T1− + ... + Tm−) + (Tm+1+ + ... + Tn+) (6)

Worst-Case Statistical Method

This method, like the arithmetic method, uses the limits of dimensions to perform tolerance analysis. However, unlike the arithmetic method, it takes into consideration the fact that the dimensions of the parts of an assembly follow a probabilistic distribution curve. Consequently, the frequency distribution curve of the dimensions of the final assembly also follows a probabilistic distribution curve. Typically, the probabilistic distribution curve is assumed to be a normal distribution curve. This method is used in both batch and mass production; it allows for variabilities in manufacturing conditions such as tool wear, machine conditions, and random errors. It increases manufacturing efficiency by increasing tolerance limits and, therefore, reducing the required accuracy of manufacturing.
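A minimal sketch of the worst-case arithmetic stack-up described above, applied to a hypothetical version of the slot-and-blocks clearance of Fig. 5; all dimensions and tolerances are invented for illustration:

```python
def worst_case_stackup(increasing, decreasing):
    """Worst-case arithmetic stack-up for a closed-loop dimension set.
    Each dimension is (nominal, upper_deviation, lower_deviation),
    with deviations given as positive magnitudes."""
    F = sum(d[0] for d in increasing) - sum(d[0] for d in decreasing)
    F_max = (sum(d[0] + d[1] for d in increasing)
             - sum(d[0] - d[2] for d in decreasing))
    F_min = (sum(d[0] - d[2] for d in increasing)
             - sum(d[0] + d[1] for d in decreasing))
    return F, F_max, F_min

# hypothetical clearance F = slot - (block1 + block2)
slot = [(50.0, 0.10, 0.10)]                           # increasing dimension
blocks = [(24.0, 0.05, 0.05), (25.0, 0.05, 0.05)]     # decreasing dimensions
F, F_max, F_min = worst_case_stackup(slot, blocks)
print(F, F_max, F_min)   # nominal 1.0, worst-case limits 1.2 and 0.8
```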

Fig.6. Probabilistic distribution curves
Fig.7. Probabilistic distribution curve of a dimension

When the number of elements in the dimension set becomes large enough, the distribution of the design function F (the resulting dimension) will be asymptotically normal and independent of the distributions of the individual dimensions.

Monte Carlo Simulation Method

The previous two methods are only applicable to conventional tolerances, mainly for closed-loop dimensional sets with linear design functions. When these functions become more complex or nonlinear, applying these methods becomes less obvious, if not impossible. Consider the simple example of a box that has sides of lengths a, b, and c with 1% tolerances on each dimension. To calculate the resulting tolerance on a diagonal of the box, the design function F is the length of the diagonal, given by the relation:

F = (a^2 + b^2 + c^2)^(1/2) (7)

To calculate the tolerance on the diagonal using the worst-case arithmetic method, we reduce (or increase) each dimension by 1%, which gives a tolerance on the diagonal of 0.01(a^2 + b^2 + c^2)^(1/2), that is, 1% of the nominal diagonal.

Using the worst-case statistical method is less obvious and may require linearizing Eq. (7). To include geometric tolerances in the analysis, 2D (such as area) or 3D (such as volume) design functions may have to be formulated instead of the 1D (dimension) functions used in the previous methods. These 2D and 3D functions can be written first in terms of nominal dimensions and then perturbed using the geometric tolerances. While this approach enables including geometric tolerances in the tolerance analysis, it still requires a design function, which it may not be possible to find. An algorithm based on the Monte Carlo method and implemented in a solid modeler can be described as follows:

1. Generate a candidate instance of an as-manufactured part using a normal (or other) distribution random number generator to perturb the vertices of the part within the specified size tolerance zone.
2. Check whether the part instance meets the specified form tolerances. This is needed because form tolerances may be tighter than size tolerances, and because a normal distribution may, in rare cases, generate perturbations with standard deviations beyond the size tolerance zone.
3. If one or more of the vertices of the part instance are found outside the zone of form tolerance, the part instance is rejected. If all vertices are inside the zone, the part instance is accepted.
4. Repeat steps 1 to 3 for all other parts in the assembly.
5. Use the solid modeler to create the assembly instance using all the part instances created in steps 1 to 4. These instances are positioned relative to datums established by part features.
6. Check whether the assembly instance from step 5 satisfies the design constraints. If it does, the assembly is accepted; otherwise it is rejected.
7. Repeat steps 1 to 6 as many times as needed by the desired sample size (number of assembly instances) for calculating the statistics. The larger the sample size, the more confidence we have in the results.
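A much-simplified sketch of this procedure, without a solid modeler or form-tolerance checks, applied to the slot-and-blocks clearance function used earlier; the dimensions and the 3-sigma normal-distribution assumption are illustrative only:

```python
import random

def monte_carlo_clearance(n_samples=10000):
    """Estimate the fraction of assemblies violating the constraint F = clearance >= 0.
    Each dimension is sampled from a normal distribution whose 3-sigma spread
    equals its specified tolerance (an assumed, commonly used convention)."""
    rejected = 0
    for _ in range(n_samples):
        slot = random.gauss(50.0, 0.10 / 3.0)      # perturbed slot width
        block1 = random.gauss(24.0, 0.05 / 3.0)    # perturbed block widths
        block2 = random.gauss(25.0, 0.05 / 3.0)
        clearance = slot - (block1 + block2)       # design function F
        if clearance < 0.0:                        # design constraint violated
            rejected += 1
    return rejected / n_samples

print("estimated fraction of rejected assemblies:", monte_carlo_clearance())
```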
Other Methods

Other methods for tolerance analysis exist. Optimization methods treat the design function and the design constraints as an optimization problem, while the design variables are viewed as the decision variables for optimization; the design function may be nonlinear. Linear programming methods linearize the design function and design constraint equations and solve the tolerance analysis using linear programming. After the linear programming problem is solved, a sensitivity analysis can be performed to determine the relative contribution of each of the tolerances. The Taylor series method and the quadrature method are other statistical methods; they approximate the probability density function of the design function without generating the large number of samples required by the Monte Carlo method. Tolerance charts are a semigraphical, spreadsheet-like method which can be used to formulate design functions and perform tolerance analysis of the resulting functions; this method deals only with one-dimensional problems. Bjorke's method formulates design functions and performs their statistical analysis. The method uses the concept of a tolerance chain and can deal with general 3D problems; however, it cannot accommodate design functions that are not dimensions. Bjorke assumes that the design function and the dimensions have a beta (β) distribution.

MASS PROPERTY CALCULATIONS

The mass properties of an object are a set of useful properties used in various engineering applications. These properties include mass, centroid, first moments, and second moments of inertia. The main difference between mass and geometric properties is the inclusion of the density of the object material in the former. Formally, an object can have a centroid (of its volume), a center of mass (of its mass), and a center of gravity (of its weight) that may differ from each other if the acceleration of gravity g and/or the density ρ of the object material is not constant. In this chapter we assume that g and ρ are constants. Therefore, the three centers (of volume, of mass, and of weight) coincide and equal the centroid (of the volume) of the object. This assumption implies that objects of interest are homogeneous and are always close to the surface of the earth.

Mass

The mass of an object can be formulated in a way similar to formulating its volume. If we replace the volume element dV shown in Figure 8 by a mass element, we can write:

dm = ρ dV (1)

Integrating Eq. (1) over the distributed mass of the object gives:

m = ∫ ρ dV (2)

UNIT V

CAD STANDARDS

STANDARDS FOR COMPUTER GRAPHICS

Attempts to develop a graphics standard resulted in the following developments in the 1970s and early 1980s:
i. A Graphics Standards Planning Committee (GSPC) was formed in 1974 by ACM-SIGGRAPH (the Association for Computing Machinery's Special Interest Group on Graphics and Interactive Techniques).
ii. A committee for the development of a computer graphics standard was formed by DIN in 1975.
iii. IFIP organized a workshop on Methodology in Computer Graphics in 1976.
iv. A significant development in CAD standards was the publication of the Graphical Kernel System (GKS) in 1982.

Figures (a) and (b) show how both solutions work. Direct translators convert data directly in one step. They are typically written by computer service companies that specialize in CAD/CAM database conversion. Direct translators are dedicated translation programs, two of which link a system pair, as indicated by the dual-direction arrows shown in the figure. For example, two translators are needed to transfer data between System 1 and System 2: one from System 1 to System 2 and the other from System 2 to System 1. Indirect translators utilize a neutral file format which reflects the neutral database structure. Each translation system has its own pair of translators to translate data to and from the neutral format. The translator that converts data from the native format of a given CAD/CAM system to the neutral format is called a preprocessor, while the translator that does the opposite translation is known as a postprocessor.

Figure: (a) Direct translators (b) Indirect translators

Each type of translator has its advantages and disadvantages. Direct translators provide a satisfactory solution when only a small number of systems are involved, but as this number increases, the number of translator programs that need to be written becomes prohibitive. In general, if modeling data is to be transferred between all possible pairs of n CAD/CAM systems, then the total number of translators, N, that must be written is given by:

N = n(n − 1)

where n is the number of CAD/CAM systems involved.

GRAPHICAL KERNEL SYSTEM (GKS)

GKS (Graphical Kernel System) is an ANSI and ISO standard. GKS standardizes two-dimensional graphics functionality at a relatively low level. The primary purposes of the standard are:
• To provide for portability of graphics application programs.
• To aid in the understanding of graphics methods by application programmers.
• To provide guidelines for manufacturers in describing useful graphics capabilities.
The GKS consists of three basic parts:
i. An informal exposition of the contents of the standard, which includes such things as positioning of text, filling of polygons, etc.
ii. A formalization of the expository material outlined in (i) by way of abstracting the ideas into functional descriptions (input/output parameters, effect of each function, etc.).
iii. Language bindings, which are implementations of the abstract functions described in (ii) in a specific computer language such as FORTRAN, Ada, or C.
Figure 1 shows the GKS implementation in a CAD workstation. The features of GKS include:

Fig.1. GKS Implementation in a CAD Workstation

GKS offers two kinds of routines to define user-created pictures: primitive functions and attribute functions. Examples of primitive functions are:
• POLYLINE to draw a set of connected straight-line vectors
• POLYMARKER to draw a set of markers or shapes
• FILL AREA to draw a closed polygon with interior fill
• TEXT to create characters
• GDP (Generalized Drawing Primitive) to specify standard drawing entities like circles, ellipses, etc.
The attribute functions define the appearance of the image, e.g. colour, line type, etc. The current level of GKS is GKS-3D, which provides several other functions. GKS-3D is an extension to GKS which allows the production of 3D objects.

STANDARDS FOR EXCHANGE OF IMAGES

The Graphics Standards Planning Committee (GSPC) of ACM-SIGGRAPH proposed the CORE system in 1977 and revised it in 1979. Though the development of GKS was influenced by the CORE system, there are a number of significant differences between the two. However, CORE graphics have a number of problems at the level of program portability. From a technological point of view, the GSPC CORE has been eclipsed by the development of GKS.

OPEN GRAPHICS LIBRARY (OpenGL)

Silicon Graphics (SGI) developed the OpenGL application programming interface (API) for the development of 2D and 3D graphics applications. It is a low-level, vendor-neutral software interface, often referred to as the assembler language of computer graphics. It provides enormous flexibility and functionality and is used on a variety of platforms. OpenGL is a low-level graphics library specification. OpenGL makes available to the programmer a small set of geometric primitives: points, lines, polygons, images, and bitmaps. OpenGL provides a set of commands that allow the specification of geometric objects in two or three dimensions, using the provided primitives, together with commands that control how these objects are rendered into the frame buffer. The OpenGL API was designed for use with the C and C++ programming languages, but there are also bindings for a number of other programming languages such as Java, Ada, and FORTRAN. OpenGL provides primitives for modeling in 3D. Its capabilities include viewing and modeling transformations, viewport transformations, projections (orthographic and perspective), animation, lighting, etc.

DATA EXCHANGE STANDARDS

The necessity to translate drawings created in one drafting package into another often arises. For example, you may have a CAD model created in the PRO/E package and wish to transfer it to I-DEAS or Unigraphics. It may also be necessary to transfer geometric data from one software package to another. This situation arises when you want to carry out modeling in one package, say PRO/E, and analysis in another, say ANSYS. One method to meet this need is to write direct translators from one package to another. This means that each system developer has to produce its own translators, which necessitates a large number of translators: if we have three software packages, we may require six translators among them. This is shown in Fig. 2.

Fig.2. Direct Data Translation

A solution to this problem of direct translators is to use neutral files. These neutral files have standard formats, and software packages can have preprocessors to convert drawing data to the neutral file and postprocessors to convert neutral file data to a drawing file. Figure 3 illustrates how CAD data transfer is accomplished using a neutral file. Three types of neutral files are discussed in this chapter:
i. Drawing exchange files (DXF)
ii. IGES files
iii. STEP files
Brief descriptions of these are given in the following sections.

Fig. 3 CAD Data Exchange Using Neutral Files

INITIAL GRAPHICS EXCHANGE SPECIFICATION (IGES)

The IGES committee was established in the year 1979. The CAD/CAM Integrated Information Network (CIIN) of Boeing served as the preliminary basis of IGES. IGES version 1.0 was released in 1980, and IGES continues to undergo revisions. IGES is a popular data exchange standard today. Figure 4 shows a CAD model of a plate with a centre hole. The wireframe model of the component is shown in Fig. 5. There are eight vertices (marked as PNT 0 - PNT 8), 12 edges, and two circles that form the entities of the model.

IGES (Initial Graphics Exchange Specification) is the first standard exchange format developed to address the concept of communicating product data among dissimilar CAD/CAM systems. IGES is the ANSI standard Y14.26M. IGES has gone through various revisions since its inception. Currently it supports solid modeling, including both B-rep and CSG schemes.

IGES defines a neutral database, in the form of a neutral file format, which describes an “IGES model” of modeling data of a given product. The IGES model can be read and interpreted by dissimilar CAD/CAM systems. Therefore, the corresponding product data can be exchanged among these systems. IGES describes the possible entities that can be used to build an IGES model, the necessary parameters (data) for the definition of model entities, and the possible relationships and associativities between model entities.

IGES has three data types: geometric, annotation, and structure. The latter two are non-geometric data types. Geometric entities define the product shape and include curves, surfaces, and solids. Non-geometric entities provide views and drawings of the model to enrich its representation, and include annotation and structure entities. Annotation entities include various types of dimensions (linear, angular, and ordinate), centerlines, notes, general labels, symbols, and cross-hatching. Structure entities include views, drawings, attributes (such as line and text fonts, colors, and layers), properties (e.g. mass properties), subfigures and external cross-reference entities (for surfaces and assemblies), symbols (e.g. mechanical and electrical symbols), and macros (to define parametric parts).

Fig.4. 3-D Model of a Plate

Fig.5. Wire-frame Model of the Component

IGES files can also be generated for:
i. Surfaces
ii. Datum curves and points


Geometric Entities

IGES uses two distinct but related Cartesian coordinate systems to represent geometric entity types. These are the MCS and WCS introduced in the previous chapter. IGES refers to the WCS as the definition space. The WCS plays a simplifying role in representing planar entities: in such a case, the XY plane of the WCS is taken as the entity plane, and therefore only x and y coordinates relative to the WCS are needed to represent the entity. To complete the representation, a transformation matrix is assigned (via a pointer) to the entity as one of its parameters to map its description from WCS to MCS. This matrix is itself defined in IGES as entity type 124. Each geometric entity type in IGES has such a matrix. If an entity is directly described relative to the MCS, then no transformation is required; this is achieved in IGES by setting the value of the matrix pointer to zero to prevent unnecessary processing. As a general rule, all geometric entity types in IGES are defined in terms of a WCS and a transformation matrix. The case when MCS and WCS are identical is triggered by a zero value of the matrix pointer. IGES reserves entity numbers 100 to 199 inclusive for its geometric entities. Sample entity type numbers used by IGES are shown in the table below. Specifications and descriptions of entities, including geometric entities, in IGES follow one pattern. Each entity has two main types of data: directory data and parameter data. The former is the entity type number, and the latter are the parameters required to uniquely and completely define the entity. In addition, IGES specifies other parameters related to entity attributes and to the IGES file structure.

Table.1. IGES geometric entities

Annotation Entities

Drafting data are represented in IGES via its annotation entities. Many IGES annotation entities are constructed by using other basic entities that IGES defines, such as copious data (centerline, section, and witness line), leader (arrow), and general note. An annotation entity may be defined in the modeling space (WCS) or in the drawing space (a given drawing). If a dimension is inserted by the user in model mode, then it requires a transformation matrix pointer when it is translated into IGES. The table below shows some IGES annotation entities.

Table.2. IGES annotation entities

Structure Entities

The previous two sections show how geometric and drafting data can be represented in IGES. Product definition includes much more information. IGES permits a valuable set of product data to be represented via its structure entities. These entities include associativity, drawing, view, external reference, property, subfigure, macro, and attribute entities. Attributes include line fonts, text fonts, and color definitions. The table below shows some IGES structure entities.

Table.3. IGES structure entities

The associativity definition entity (type number 302) allows IGES to define a special relationship (called an associativity schema) between various entities of a given model. The collection of entities that are related to each other via the associativity schema is called a class. Two kinds of associativities are permitted within IGES: predefined associativities have form numbers 1 to 5000, and the second kind is implementor-defined and has form numbers 5001 to 9999. Each time an associativity relation is needed in the IGES file, an associativity instance entity (type number 402) is used. The external reference entity (type number 416) enables IGES files to relate to each other. This entity provides a link between an entity in one file and the definition of a logically related entity in another file. Three forms of the external reference entity are defined. Form 0 is used when a single definition from the referenced file, which may contain a collection of definitions, is desired. Form 1 is used when the entire file is to be instanced, which is the case where the referenced file contains a complete subassembly. Form 3 is used when an entity in one file refers to another entity in a separate file; this is the case when each sheet of a drawing is a separate file and, for example, a flange on one sheet mates with a flange on another sheet. The property entity (type number 406) in IGES contains numerical and textual data. Due to the wide range of properties, each one is assigned a form number, and each form number may contain different property types (p-types). For example, form number 11 contains tabular data that is organized under n p-types; p-types 1, 2, 3, and 4, as an example, refer to Young's modulus, Poisson's ratio, shear modulus, and the material matrix, respectively. There are 17 form numbers that can be specified with the property entity.

File Structure and Format

A typical CAD/CAM system which supports IGES usually provides its users with two IGES commands: one command enables the user to create an IGES file of a given model residing in the system, while the other allows the user to read an existing IGES file of a model into the system. An IGES file consists of a sequence of records. Depending on the chosen file format, the record length can be fixed or variable. There are two different formats to represent IGES data in a file: ASCII and binary. The ASCII form has two format types: a fixed 80-character record (line) length format, and a compressed format. The binary form consists of bytes representing the data. Both the compressed ASCII and binary formats are aimed at reducing the IGES file size. We only cover the fixed 80-character length format here. The file is divided into sections. Within each section, the records are labeled and numbered. IGES data is written in columns 1 through 72 inclusive of each record. Column 73 stores the section identification character. Columns 74 through 80 are reserved for the section sequence number of each record.

Fig.6. IGES file structure

The figure above shows the section code, also called the identification character (column 73 of each record), for the IGES file sections. These codes are S, G, D, P, and T. The Flag section does not have a code; it is used only with the compressed ASCII and binary formats. It is a single record (line) that precedes the Start section in the IGES file, with the character "C" in column 73 to identify the file as compressed ASCII. The compressed ASCII form is intended to be simply converted to and from the regular ASCII form. In the binary file format, the Flag section is called the Binary Information section, and the first byte (eight bits) of this section has the ASCII letter "B" as the file identifier.

The Start section is a human-readable introduction to the file. It is commonly described as a "prologue" to the IGES file. This section includes user-relevant information, such as the name of the CAD/CAM system generating the IGES file and a brief description of the product being converted. IGES does not specify how this section should be used.

The Global section includes information describing the preprocessor and information needed by the postprocessor to interpret the file. Some of the parameters specified in this section are the characters used as delimiters between individual entries and between records (usually commas and semicolons, respectively), the name of the IGES file, the vendor and software version of the sending CAD/CAM system, the number of significant digits in the representation of integers and single- and double-precision floating-point numbers on the sending system, the date and time of file generation, the model space scale, model units, the minimum resolution and maximum coordinate values, and the name and organization of the author of the IGES file.

The Directory Entry section is a list of all the entities defined in the IGES file together with certain attributes associated with them. The entry for each entity occupies two 80-character records, which are divided into a total of 20 eight-character fields. The first and the eleventh (beginning of the second record of any given entity) fields contain the entity type number (Tables 1 to 3). The second field contains a pointer to the parameter data entry for the entity in the Parameter Data section. The pointer of an entity is simply its sequence number in the Directory Entry section. Some of the entity attributes specified in this section are line font, layer number, transformation matrix, line weight, and color.

The Parameter Data section contains the actual data defining each entity listed in the Directory Entry section. For example, a straight-line entity is defined by the six coordinates of its two end points. While each entity always has two records in the Directory Entry section, the number of records needed for each entity in the Parameter Data section varies from one entity to another (the minimum is one record) and depends on the amount of data. Parameter data are placed in free format in columns 1 through 64. The parameter delimiter (usually a comma) is used to separate parameters, and the record delimiter (usually a semicolon) is used to terminate the list of parameters. Both delimiters are specified in the Global section of the IGES file. Column 65 is left blank. Columns 66 to 72 on all Parameter Data records contain the entity pointer specified in the first record of the entity in the Directory Entry section.

The Terminate section contains a single record which specifies the number of records in each of the four preceding sections for checking purposes.
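A minimal sketch that uses only the fixed 80-character layout described above (section code in column 73) to count how many records each section of an ASCII IGES file contains; the file name is hypothetical:

```python
def iges_section_counts(path):
    """Count records per section of a fixed 80-character ASCII IGES file.
    Column 73 (string index 72) holds the section code: S, G, D, P, or T."""
    counts = {"S": 0, "G": 0, "D": 0, "P": 0, "T": 0}
    with open(path) as f:
        for record in f:
            record = record.rstrip("\n")
            if len(record) >= 73:
                code = record[72]
                if code in counts:
                    counts[code] += 1
    return counts

# print(iges_section_counts("plate.igs"))   # hypothetical file name
```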

STEP (Standard for the Exchange of Product Data)

The Standard for the Exchange of Product Data (STEP) is the enabler for seamless data exchange. It provides a worldwide standard for storing, sharing, and exchanging product information among different CAD systems. Although STEP itself is a basis for product data management (PDM) systems, it covers broader functionality. It includes methods of representing all critical product specifications such as shape information, materials, tolerances, finishes, and product structure. Whereas the Initial Graphics Exchange Specification (IGES) standard has widespread use, it has its shortcomings: it does not convey the extensive product information needed in the design and manufacturing cycle, and IGES translators are often required to move design data from one CAD system to another. STEP is often viewed as a replacement for IGES, though IGES is still expected to be in active use for some time in the future. Although the current focus of STEP is on mechanical parts, STEP is a data exchange standard that applies to a wide range of product areas, including electronics, architectural, engineering and construction, apparel, and shipbuilding.

STEP Architecture

The STEP architecture has four main components:
• EXPRESS, the data specification language.
• Data schemes, including attributes such as geometry, topology, features, and tolerance.
• An application interface called the Standard Data Access Interface (SDAI), which is a standard interface to enable applications to access and manipulate STEP data.
• The STEP database, which can take the following forms:
  • an ASCII format file for data exchange;
  • a working form file, usually in binary format, that can be shared by multiple systems;
  • a shared database, involving an object-oriented database management system or a relational database system;
  • a knowledge base, with a database management system as a base coupled to an expert shell.

Figure: STEP Three-Layer Architecture

IMPLEMENTATION:

STEP is built on a data exchange language, called EXPRESS, to formally describe a model and the file format that stores it. EXPRESS stores both the model data and semantics.

The basic unit in EXPRESS is the entity. An entity is a collection of data, constraints, and operations; the operations work on the data. A set of entities makes up a model. The relationships (semantics) between model entities are carried over from the native CAD database of the model and maintained by STEP.

There are several sections within STEP, called Application Protocols (APs), which are built on a common set of integrated resources. These APs include definitions not only of typical geometry and drafting elements, but also of data types and processes for specific industries such as automotive, aerospace, shipbuilding, electronics, and plant construction and maintenance. Sample APs are AP203 (the STEP format to save solid models) and AP210 (the STEP format to save electronics data). AP203 is further divided into classes for defining wireframe geometry, surfaces, and solid modeling.

One of the latest significant developments in STEP is the recent agreement to provide a mapping to XML (extensible markup language). This technology is rapidly becoming the preferred method for complex data access on the Web. The flexibility and growing availability of commercial Web/XML tools with STEP greatly increase the sharing of information across disciplines, with universal access. XML lends itself well to STEP. XML is a standard format for data representation; it complements XHTML (extensible hypertext markup language), which is a standard for data presentation. XML defines data schemas in a DTD (document type definition) file.

STEP Enabler for Concurrent Engineering

STEP was released in early 1993 as a Draft International Standard (DIS). The initial release of STEP had four basic parts:
• the EXPRESS modeling language
• two application protocols
• Drafting and Configuration Control Design for three-dimensional product data
• six application resources
Subsequent releases of STEP provided added functionality in terms of the kinds of product supported and the extent of the product life cycle covered. While STEP is advancing towards maturity, it has been investigated for the feasibility of incorporation into framework systems. Both STEP and concurrent engineering share the common goal of influencing the product cycle from design, assembly, etc. to the disposal stages; this has been realized in the CONSENS system under ESPRIT EP6896. The object-oriented database for CONSENS has a schema with STEP definitions alongside company-specific definitions. A module called the Product Information Archive (PIA) provides functionality for STEP data access via SDAI. It is generic enough to be adopted for different domains; for example, it is used for product information by the aircraft company Deutsche Aerospace and the electronics manufacturer AEG. STEP data export in a CAD modeling package has the following options:
(i) wireframe edges
(ii) surfaces
(iii) solids
(iv) shells
(v) datum curves and points

CALS:

ACIS is used as a kernel in a number of commercial CAD/CAM systems. Spatial, the maker of ACIS, provides a translator for these systems' use. Spatial's translator allows the exchange of solid, surface, and wireframe data via a variety of neutral and native formats, including IGES, STEP, Pro/E, SolidWorks, CATIA, Parasolid (PS), Unigraphics (UG), and Inventor. These major systems, therefore, offer these formats in their translation menus.

DXF (Drawing Exchange Format) is a de facto standard due to its popularity. DXF is an AutoCAD format; Autodesk Inc., the maker of AutoCAD, publishes, supports, and maintains it. DXF 3D is a format that translates CAD models (part files), while DXF/DWG is a format that translates drawing files; DXF/DWG does not and cannot translate part files. DXF files come in two formats: ASCII and binary. The ASCII version is the most widely used in industry. A DXF file consists of four sections: Header, Tables, Blocks, and Entities. The Header section includes the AutoCAD system settings such as dimension style and layers. The Tables section includes line styles and user-defined coordinate systems. The Blocks section includes drawing blocks (instances). The Entities section includes entity definitions and data.
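A minimal sketch of reading the section structure of an ASCII DXF file, relying only on the group-code/value pair layout (a section starts with a 0/SECTION pair followed by a 2/&lt;name&gt; pair, and ends with 0/ENDSEC); the file name is hypothetical:

```python
def dxf_section_summary(path):
    """Count the 0-group records (entities, tables, blocks, ...) inside each
    section of an ASCII DXF file."""
    with open(path) as f:
        lines = [line.strip() for line in f]
    sections = {}
    current = None
    i = 0
    while i + 1 < len(lines):
        code, value = lines[i], lines[i + 1]      # DXF is a sequence of code/value pairs
        if code == "0" and value == "SECTION":
            # the next pair (2, <name>) names the section, e.g. HEADER or ENTITIES
            current = lines[i + 3] if i + 3 < len(lines) else None
            if current is not None:
                sections.setdefault(current, 0)
            i += 4
            continue
        if code == "0" and value == "ENDSEC":
            current = None
        elif current is not None and code == "0":
            sections[current] += 1
        i += 2
    return sections

# print(dxf_section_summary("drawing.dxf"))   # hypothetical file, e.g. {'ENTITIES': ...}
```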

COMMUNICATION STANDARDS:

A standard, such as IGES or STEP, is in itself just a document describing what should go into a data file. Interested developers (CAD/CAM vendors or companies specializing in database transfer) must interpret, understand, and implement the standard in programs, often called processors or translators.

The processors translate from their systems to the standard format and vice versa. The software that translates from the native file format of a given CAD/CAM system to a standard format is called a preprocessor. The software that translates in the opposite direction (from the standard format to a CAD/CAM system) is called a postprocessor. The user interface to these processors usually takes the form of simple commands, accompanied by proper dialogues. The figure shows file exchange using a translator. The source system is the originating or sending CAD/CAM system, and the target system is the receiving one. The archival database is a side benefit of using standards; such archived databases can be kept for as long as needed. If system B in the figure becomes the source and system A becomes the target, the processors reverse roles.
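
The sketch below illustrates the two halves of a translator in Python; the native entity dictionaries and the neutral record layout are assumptions made for this example, not an actual IGES or STEP encoding.

    # Conceptual sketch of a translator's two halves.
    # The "native" and "neutral" representations here are invented for
    # illustration; real IGES/STEP records are far richer.

    def preprocess(native_entity):
        """Native CAD entity -> neutral (standard-like) record."""
        if native_entity["type"] == "LINE":
            x1, y1, x2, y2 = native_entity["coords"]
            return {"entity": 110, "data": [x1, y1, 0.0, x2, y2, 0.0]}  # 110 = IGES line
        raise ValueError("no mapping for " + native_entity["type"])

    def postprocess(neutral_record):
        """Neutral record -> native entity of the target system."""
        if neutral_record["entity"] == 110:
            x1, y1, _, x2, y2, _ = neutral_record["data"]
            return {"type": "LINE", "coords": (x1, y1, x2, y2)}
        raise ValueError("unsupported neutral entity %d" % neutral_record["entity"])

    # Source system exports, target system imports:
    line = {"type": "LINE", "coords": (0.0, 0.0, 10.0, 5.0)}
    print(postprocess(preprocess(line)))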

Figure: Pre and Postprocessors of a Translator

DESIGN AND IMPLEMENTATION:

Designing and writing processors is a significant challenge. A typical database might contain many instances of many entity types. Many of these entity types involve complex mathematics and complex data structures. Problems in writing a processor relate to the definition and format of the standard itself. Some of these problems are:

1. Entity set. No standard (IGES or STEP) does or can contain a true superset of the entities found in all of today's CAD/CAM systems. The standard may contain an entity that has no equivalent in a specific CAD/CAM system, or the system may contain an entity for which no standard entity exists. A processor can either skip translating such an entity, or translate it into a similar one, destroying its original meaning (a small mapping sketch follows this list).

2. Format. While a standard allows the exchange of complex structures and relationships, its format must be processible by a wide range of different computer systems. It can therefore only use simple data formats and management methods that are known to these systems and, at the same time, independent of any system specifics.

3. Limitations of individual CAD/CAM systems. These limitations are system-specific and relate to things such as model size, model space, and data precision.
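
The decision an entity mapper faces can be sketched as below; every entity name in this example is invented for illustration.

    # Sketch: mapping native entity types to standard entity types.
    # All entity names here are invented for illustration.
    EXACT_MAP = {"LINE": "LINE", "ARC": "CIRCULAR_ARC"}
    NEAREST_MAP = {"HELIX": "B_SPLINE_CURVE"}   # similar, but meaning is degraded

    def map_entity(native_type):
        if native_type in EXACT_MAP:
            return EXACT_MAP[native_type], "exact"
        if native_type in NEAREST_MAP:
            return NEAREST_MAP[native_type], "approximate (original meaning lost)"
        return None, "skipped: no standard equivalent"

    for t in ("LINE", "HELIX", "SHEET_METAL_FLANGE"):
        print(t, "->", map_entity(t))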

The design of processors, with all of the preceding problems in mind, divides into the following steps:

1. Analyze and tabulate entity characteristics. This step involves studying the mathematical representations of each entity as used by both the standard and the CAD/CAM system. In many cases, an entity can be represented by a number of nearly, but not completely, equivalent methods.

2. Define conversion algorithms. Step 1 provides the information required to design the proper conversion algorithms to convert an entity to and from the standard (a small example follows this list).

3. Develop a complete specification of the processors. Steps 1 and 2 form the core of the design process. Once they are completed, the remaining specifications of the processors must be developed. These include the revision of the standard that the processor ought to support, the subset of the standard entities it can support, and the user interface of the processor.

4. Design verification procedures. Careful verification of processors is very important because processors operate at the interface between different organizations and vendors. Processors must be verified by constructing test data, running it through the processors, and comparing the actual results with those expected. Ideally, two sets of test data would be required: a set for implementers to use during processor development, and a more comprehensive set for final processor verification. In addition, more customized tests for specific user requirements can be developed in collaboration between users and implementers of the standard.
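
As a small example of a conversion algorithm (step 2), the sketch below converts a circular arc from a center/radius/angle form, as some native systems store it, into a center/start-point/end-point form similar to that used by the IGES circular arc entity. The record layouts themselves are assumptions made for this illustration.

    # Sketch: convert an arc from (center, radius, start angle, end angle)
    # to (center, start point, end point). Angles are in degrees.
    import math

    def arc_angles_to_points(xc, yc, radius, start_deg, end_deg):
        a0, a1 = math.radians(start_deg), math.radians(end_deg)
        start = (xc + radius * math.cos(a0), yc + radius * math.sin(a0))
        end = (xc + radius * math.cos(a1), yc + radius * math.sin(a1))
        return {"center": (xc, yc), "start": start, "end": end}

    # A quarter arc of radius 10 about the origin:
    print(arc_angles_to_points(0.0, 0.0, 10.0, 0.0, 90.0))
    # start is (10.0, 0.0); end is approximately (0.0, 10.0)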

TESTING AND VERIFICATION:

A newly developed processor must be carefully tested before it is used in a production environment. For example, there is an IGES Test, Evaluate, and Support Committee whose function is to provide test data. An IGES test library prepared by the committee allows testing of the basic implementation of an entity. However, the library does not allow checking of the variations that occur in production data due to numerical and computational errors; these variations must be tested by implementers and users themselves. Verification of the results of a processor is a time-intensive task. In most cases, it is not sufficient to check converted models visually; more comprehensive tests are needed.

Figure: Loop Back System

1. Reflection test. In this test, a neutral file created by a translator's preprocessor is read back by its own postprocessor to create a native file of the translated model. This test establishes that a translator's processors can read and write common entities, i.e., that they are symmetric (a small harness is sketched after this list).

2. Transmission test. Here, a neutral file of a model created by the preprocessor of a source system is transferred to a target system, whose postprocessor is used to re-create the model on that system. This test essentially determines the capabilities of the preprocessor of the source system and the postprocessor of the target system.

3. Loopback test. In this test, a neutral file created by the source system is read by the target system which, in turn, creates another neutral file and transfers it back to the source system to read. This test checks the pre- and postprocessors of both the source and the target systems.
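
A minimal reflection-test harness might look like the sketch below; preprocess and postprocess stand for the translator's two halves (as in the earlier sketch), and the tolerance value is an assumption.

    # Sketch of a reflection test: round-trip each entity through the same
    # translator's preprocessor and postprocessor, then compare coordinates.
    TOL = 1.0e-6  # assumed coordinate tolerance

    def reflection_test(model, preprocess, postprocess):
        failures = []
        for entity in model:
            round_tripped = postprocess(preprocess(entity))
            if round_tripped["type"] != entity["type"]:
                failures.append((entity, "entity type changed"))
            elif any(abs(a - b) > TOL
                     for a, b in zip(entity["coords"], round_tripped["coords"])):
                failures.append((entity, "coordinates drifted"))
        return failures

    # An empty list means every tested entity survived the round trip.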

ERROR HANDLING:

Error handling and reporting when processing a neutral file is important. There are two major sources of error when processing IGES files: programming errors in the processor, and misinterpretation of the standard itself. These sources apply to both pre- and postprocessors. The way a processor reports these errors, and the information given with these reports, determines whether correcting an error is a laborious task or not. The preprocessor should report the entity type, the number of unprocessed entities, the reasons they could not be processed, and other relevant database information about these unprocessed entities.

On the other hand, the postprocessor should report the number of unprocessed entities, their types, their forms, their record numbers in the Directory Entry and Parameter Data sections, and the reasons they could not be processed. It should also report any invalid or missing data encountered in reading neutral files, especially files that have been edited.
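
One possible shape for such a report is sketched below in Python; the field names and the example entry are invented for this illustration.

    # Sketch: a record for each entity a postprocessor could not translate.
    # Field names are invented for illustration; real translators log similar data.
    from dataclasses import dataclass

    @dataclass
    class UnprocessedEntity:
        entity_type: int   # e.g. 126 for an IGES rational B-spline curve
        form: int          # entity form number
        de_record: int     # record number in the Directory Entry section
        pd_record: int     # record number in the Parameter Data section
        reason: str        # why the entity could not be processed

    def report(unprocessed):
        print("%d entities could not be processed:" % len(unprocessed))
        for e in unprocessed:
            print("  type %d form %d (DE %d, PD %d): %s"
                  % (e.entity_type, e.form, e.de_record, e.pd_record, e.reason))

    report([UnprocessedEntity(126, 0, 17, 45, "unsupported B-spline degree")])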