1. A) Explain Computer Graphics? Indicate Four Practical Applications of Computer Graphics


Computer Graphics & Multi-media (IT-602) VI sem Examination June 2014

UNIT-I
1. a) Explain computer graphics? Indicate four practical applications of computer graphics.
Ans. Computer graphics remains one of the most exciting and rapidly growing computer fields. Computer graphics may be defined as the pictorial or graphical representation of objects in a computer.
Applications: Computers have become a powerful tool for the rapid and economical production of pictures. There is virtually no area in which graphical displays cannot be used to some advantage, so it is not surprising to find the use of computer graphics so widespread. Computer graphics is used routinely in such diverse areas as science, engineering, medicine, business, industry, government, art, entertainment, advertising, education, and training. Four practical applications are given below:
1. Computer-Aided Design: A major use of computer graphics is in design processes, particularly for engineering and architectural systems, but almost all products are now computer designed. Generally referred to as CAD, computer-aided design methods are now routinely used in the design of buildings, automobiles, aircraft, watercraft, spacecraft, computers, textiles, and many other products. For some design applications, objects are first displayed in a wireframe outline form that shows the overall shape and internal features of the objects. Wireframe displays also allow designers to quickly see the effects of interactive adjustments to design shapes.
2. Computer Art: Computer graphics methods are widely used in both fine art and commercial art applications. Artists use a variety of computer methods, including special-purpose hardware, artist's paintbrush programs such as Lumena, other paint packages such as PixelPaint and SuperPaint, specially developed software, symbolic mathematics packages such as Mathematica, CAD packages, desktop publishing software, and animation packages that provide facilities for designing object shapes and specifying object motions.
The basic idea behind a paintbrush program is that it allows artists to "paint" pictures on the screen of a video monitor. Actually, the picture is usually painted electronically on a graphics tablet using a stylus, which can simulate different brush strokes, brush widths, and colors.
3. Entertainment: Computer graphics methods are now commonly used in making motion pictures, music videos, and television shows. Sometimes the graphics scenes are displayed by themselves, and sometimes graphics objects are combined with the actors and live scenes. Many TV series regularly employ computer graphics methods. Music videos use graphics in several ways: graphics objects can be combined with the live action, or graphics and image-processing techniques can be used to produce a transformation of one person or object into another, i.e. morphing.
4. Education and Training: Computer-generated models of physical, financial, and economic systems are often used as educational aids. Models of physical systems, physiological systems, population trends, or equipment, such as a color-coded diagram, can help trainees to understand the operation of the system. For some training applications, special systems are designed. Examples of such specialized systems are the simulators for practice sessions or training of ship captains, aircraft pilots, heavy-equipment operators, and air traffic control personnel. Some simulators have no video screens; for example, a flight simulator with only a control panel for instrument flying. But most simulators provide graphics screens for visual operation. The keyboard is used to input parameters affecting the airplane's performance or the environment, and a pen plotter can be used to chart the path of the aircraft during a training session.
1. b) Explain rubber band techniques.
Ans. Rubber banding is a very simple but useful technique for positioning.
The user, if he wants to draw a line, say, specifies the end points, and as he moves from one point to another, the program displays the line being drawn. The effect is similar to an elastic line being stretched from one point to another, hence the name of the technique. By altering the end points, the position of the line can be modified. The technique can be extended to draw rectangles, arcs, circles, etc. It is very useful when figures that pass through several intermediate points are to be drawn, since in such cases, just by looking at the end points, it may not be possible to judge the course of the line. Hence the positioning can be done dynamically; however, rubber band techniques normally demand fairly powerful local processing to ensure that lines are drawn fast enough.
2. Explain the working of raster and random scan displays and differentiate.
Ans. The most common type of graphics monitor employing a CRT is the raster-scan display, based on television technology. In a raster-scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots. Picture definition is stored in a memory area called the refresh buffer or frame buffer. This memory area holds the set of intensity values for all the screen points. Stored intensity values are then retrieved from the refresh buffer and "painted" on the screen one row (scan line) at a time (see figure). Each screen point is referred to as a pixel or pel (shortened forms of picture element). The capability of a raster-scan system to store intensity information for each screen point makes it well suited for the realistic display of scenes containing subtle shading and color patterns. Home television sets and printers are examples of other systems using raster-scan methods. The intensity range for pixel positions depends on the capability of the raster system.
In a simple black-and-white system, each screen point is either on or off, so only one bit per pixel is needed to control the intensity of screen positions. For a bi-level system, a bit value of 1 indicates that the electron beam is to be turned on at that position, and a value of 0 indicates that the beam intensity is to be off. Additional bits are needed when color and intensity variations can be displayed. Up to 24 bits per pixel are included in high-quality systems, which can require several megabytes of storage for the frame buffer, depending on the resolution of the system. On a black-and-white system with one bit per pixel, the frame buffer is commonly called a bitmap. For systems with multiple bits per pixel, the frame buffer is often referred to as a pixmap. Refreshing on raster-scan displays is carried out at the rate of 60 to 80 frames per second, although some systems are designed for higher refresh rates. Refresh rates are described in units of cycles per second, or Hertz, where a cycle corresponds to one frame; using these units, we would describe a refresh rate of 60 frames per second as simply 60 Hz. At the end of each scan line, the electron beam returns to the left side of the screen (horizontal retrace) to begin displaying the next scan line. After each frame, displayed in 1/80th to 1/60th of a second, the electron beam returns (vertical retrace) to the top left corner of the screen to begin the next frame.
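The frame-buffer storage figures above can be checked with a short sketch; the 1024 × 768 resolution used here is only an illustrative example:

```python
def frame_buffer_bytes(width, height, bits_per_pixel):
    """Frame-buffer size in bytes: one intensity value per screen point,
    bits_per_pixel bits each."""
    return width * height * bits_per_pixel // 8

# A 1-bit bitmap vs a 24-bit truecolor pixmap at 1024 x 768:
print(frame_buffer_bytes(1024, 768, 1))   # 98304 bytes (~96 KB)
print(frame_buffer_bytes(1024, 768, 24))  # 2359296 bytes (~2.25 MB)
```

This makes concrete why 24-bit systems "can require several megabytes of storage" while a bi-level bitmap fits in under 100 KB at the same resolution.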

Figure. A raster-scan system displays an object as a set of discrete points across each scan line.
Random-Scan Displays: When operated as a random-scan display unit, a CRT has the electron beam directed only to the parts of the screen where a picture is to be drawn. Random-scan monitors draw a picture one line at a time and for this reason are also referred to as vector displays, or stroke-writing or calligraphic displays. The component lines of a picture can be drawn and refreshed by a random-scan system in any specified order (shown in figure). A pen plotter operates in a similar way and is an example of a random-scan, hard-copy device. The refresh rate on a random-scan system depends on the number of lines to be displayed. Picture definition is now stored as a set of line-drawing commands in an area of memory referred to as the refresh display file. Sometimes the refresh display file is called the display list, display program, or simply the refresh buffer. To display a specified picture, the system cycles through the set of commands in the display file, drawing each component line in turn. After all line-drawing commands have been processed, the system cycles back to the first line command in the list. Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each second. High-quality vector systems are capable of handling approximately 100,000 "short" lines at this refresh rate. When a small set of lines is to be displayed, each refresh cycle is delayed to avoid refresh rates greater than 60 frames per second; otherwise, faster refreshing of the set of lines could burn out the phosphor. Random-scan systems are designed for line-drawing applications and cannot display realistic shaded scenes. Since picture definition is stored as a set of line-drawing instructions and not as a set of intensity values for all screen points, vector displays generally have higher resolution than raster systems. Also, vector displays produce smooth line drawings because the CRT beam directly follows the line path. A raster system, in contrast, produces jagged lines that are plotted as discrete point sets.

Figure. A random-scan system draws the component lines of an object in any order specified. Difference between Raster and Random Scan Displays

1. Raster scan: the electron beam is swept across the screen, one row at a time from top to bottom. Random scan: the electron beam is directed only to the parts of the screen where a picture is to be drawn.
2. Raster scan: the pattern is created by illuminated spots. Random scan: a picture is drawn one line at a time.
3. Raster scan: refreshing is carried out at the rate of 60 to 80 frames per second. Random scan: the refresh rate depends on the number of lines to be displayed; for a small set of lines the refresh cycle is delayed to avoid refresh rates greater than 60 frames per second.
4. Raster scan: produces jagged lines that are plotted as discrete point sets. Random scan: produces smooth line drawings because the CRT beam directly follows the line path.
5. Raster scan: provides lower resolution. Random scan: provides higher resolution.
UNIT-II
3. Explain with an example the working of Bresenham's midpoint circle drawing algorithm. Choose a circle and calculate pixels in the path of this circle in only the first quadrant.

Ans. A circle with centre (xc, yc) and radius r can be represented in equation form in three ways:
1. Analytical representation: r² = (x – xc)² + (y – yc)²
2. Implicit representation: (x – xc)² + (y – yc)² – r² = 0
3. Parametric representation: x = xc + r cosθ, y = yc + r sinθ
A circle is symmetrical in nature. Eight-way symmetry can be used by reflecting each point about each 45° axis. The points obtained in this case are given below, with illustration by the figure:
P1 = (x, y)    P5 = (–x, –y)
P2 = (y, x)    P6 = (–y, –x)
P3 = (–y, x)   P7 = (y, –x)
P4 = (–x, y)   P8 = (x, –y)
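The eight-way symmetry above translates directly into code: given one computed pixel (x, y) in the 90° to 45° octant of a circle centred at (xc, yc), the remaining seven pixels follow for free.

```python
def eight_way_points(xc, yc, x, y):
    """The eight symmetric pixels of an offset (x, y) about a circle
    centre (xc, yc), corresponding to P1..P8 in the answer above."""
    return [(xc + x, yc + y), (xc + y, yc + x),
            (xc - y, yc + x), (xc - x, yc + y),
            (xc - x, yc - y), (xc - y, yc - x),
            (xc + y, yc - x), (xc + x, yc - y)]

# One octant pixel (1, 3) of a circle centred at the origin:
print(eight_way_points(0, 0, 1, 3))
```

Because of this symmetry, a circle algorithm only ever has to compute one eighth of the circumference.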

Figure. Eight-way symmetry of a circle.
Bresenham's Circle: Bresenham's method of drawing the circle is efficient because it avoids trigonometric and square-root calculation, adopting only integer operations involving squares of the pixel separation distances. The method considers the eight-way symmetry of the circle and plots the 1/8th part of the circle from 90° to 45°. As the circle is drawn from 90° to 45°, x moves in the +ve direction and y moves in the –ve direction. To achieve the best approximation to the circle we have to select those pixels in the raster that fall the least distance from the true circle. Observing the 90° to 45° portion of the circle, each new point closest to the true circle can be found by applying either of two options:
a) increment in the +ve x direction by one unit, or
b) increment in the +ve x direction and the –ve y direction both by one unit.
If Pn is the current point with coordinates (xn, yn), then the next point could be either A = (xn + 1, yn) or B = (xn + 1, yn – 1). We have to select A or B depending on which is closer to the circle, and for that we perform a test. The distances of pixels A and B from the origin (0, 0) are given by
dA = √((xn + 1)² + yn²) and dB = √((xn + 1)² + (yn – 1)²)
The distances of pixels A and B from the true circle of radius r are given as
δA = dA – r and δB = dB – r
Define the decision variable di = δA + δB. If di < 0 then only x is incremented; otherwise x is incremented in the +ve direction and y is decremented:
for di < 0: xi+1 = xi + 1
otherwise: xi+1 = xi + 1 and yi+1 = yi – 1
At the starting point, i.e. x = 0 and y = r, the decision variable evaluates to di = 3 – 2r. The new decision value is recomputed by substituting the new value of xi:
di+1 = di + 4xi + 6 if di < 0
di+1 = di + 4(xi – yi) + 10 if di ≥ 0
Algorithm: Let us define a procedure for Bresenham's circle drawing algorithm for a circle of radius r and centre (xc, yc).
xc and yc denote the x-coordinate and y-coordinate of the centre of the circle.
1. Set x = 0 and y = r
2. Set d = 3 – 2r
3. Repeat while (x < y):
4.   Call DrawCircle(xc, yc, x, y)
5.   Set x = x + 1
6.   If (d < 0) Then
7.     d = d + 4x + 6
8.   Else
9.     Set y = y – 1
10.    d = d + 4(x – y) + 10
     [End of If]
11.  Call DrawCircle(xc, yc, x, y)
[End of While]
4. Explain the following terms: i) Parametric function ii) Bezier method iii) B-spline method
Ans. i) Parametric function: A parametric curve that lies in a plane is defined by two functions, x(t) and y(t), which use the independent parameter t. x(t) and y(t) are coordinate functions, since their values represent the coordinates of points on the curve. As t varies, the coordinates (x(t), y(t)) sweep out the curve. As an example, consider the two functions x(t) = sin(t), y(t) = cos(t). As t varies from 0° to 360°, a circle is swept out by (x(t), y(t)).

A spline curve is a mathematical representation for which it is easy to build an interface that will allow a user to design and control the shape of complex curves and surfaces. The general approach is that the user enters a sequence of points, and a curve is constructed whose shape closely follows this sequence. The points are called control points. A curve that actually passes through each control point is called an interpolating curve; a curve that passes near to the control points but not necessarily through them is called an approximating curve.

ii) Bezier curve: A Bezier curve section can be fitted to any number of control points. The number of control points to be approximated and their relative position determine the degree of the Bezier polynomial. A Bezier curve can be specified with boundary conditions or with blending functions. Suppose we are given n+1 control point positions Pk = (Xk, Yk, Zk), with k varying from 0 to n. These coordinate points can be blended to produce the following position vector P(u), which describes the path of an approximating Bezier polynomial function between P0 and Pn:
P(u) = Σ(k=0..n) Pk BEZk,n(u),  0 ≤ u ≤ 1
The Bernstein polynomials are
BEZk,n(u) = C(n,k) u^k (1 – u)^(n–k)
where the C(n,k) are the binomial coefficients, C(n,k) = n! / (k! (n–k)!). The blending functions satisfy the recursion
BEZk,n(u) = (1 – u) BEZk,n–1(u) + u BEZk–1,n–1(u),  n > k ≥ 1
The coordinate functions are
X(u) = Σ(k=0..n) Xk BEZk,n(u),  Y(u) = Σ(k=0..n) Yk BEZk,n(u)
3 points generate a parabola, 4 points a cubic curve, and 3 collinear control points generate a straight line segment.
Bezier curves are used because they are: 1. easy to implement; 2. reasonably powerful in curve design; 3. efficient, since coordinate positions along a Bezier curve can be determined using the recursive calculation C(n,k) = ((n – k + 1)/k) C(n,k–1), n ≥ k.
Properties: 1. A Bezier curve always passes through the first and last control points. 2. The slope at the beginning of the curve is along the line joining the first two control points, and the slope at the end of the curve is along the line joining the last two control points. 3. The curve lies within the convex hull of the control points.
Cubic Bezier curves give reasonable design flexibility while avoiding the increased calculations needed with higher-order polynomials:
BEZ0,3(u) = (1 – u)³
BEZ1,3(u) = 3u(1 – u)²
BEZ2,3(u) = 3u²(1 – u)
BEZ3,3(u) = u³
At u = 0 and u = 1 the only nonzero blending functions are BEZ0,3 and BEZ3,3 respectively.
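As a minimal sketch of the cubic case, the four Bernstein blending functions above can be evaluated directly; the control points in the example are arbitrary.

```python
def cubic_bezier(p0, p1, p2, p3, u):
    """Evaluate a cubic Bezier curve at parameter u in [0, 1] using the
    blending functions BEZ0,3 .. BEZ3,3."""
    b0 = (1 - u) ** 3
    b1 = 3 * u * (1 - u) ** 2
    b2 = 3 * u ** 2 * (1 - u)
    b3 = u ** 3
    x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0]
    y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]
    return (x, y)

# Endpoint interpolation: u = 0 gives P0, u = 1 gives P3.
print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.0))  # (0.0, 0.0)
print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 1.0))  # (4.0, 0.0)
```

Sampling u over many values between 0 and 1 and joining the resulting points with line segments draws the curve.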
Thus, the cubic curve always passes through the control points P0 and P3. BEZ1,3 and BEZ2,3 influence the shape of the curve at intermediate values of the parameter u, so that the resulting curve tends toward the points P1 and P2: BEZ1,3 is maximum at u = 1/3 and BEZ2,3 is maximum at u = 2/3. Bezier curves do not allow local control of the curve shape: if we reposition any one of the control points, the entire curve is affected.
iii) B-splines: B-splines are not used very often in 2D graphics software but are used quite extensively in 3D modeling software. They have an advantage over Bezier curves in that they are smoother and easier to control. B-splines consist entirely of smooth curves, but sharp corners can be introduced by joining two spline curve segments. The continuous curve of a B-spline is defined by control points. The equation for a k-order B-spline with n+1 control points (P0, P1, ..., Pn) is
P(t) = Σ(i=0..n) Ni,k(t) Pi,  tk–1 ≤ t ≤ tn+1
In a B-spline each control point is associated with a basis function Ni,k, which is given by recurrence relations. B-spline basis functions, like Bezier ones, are nonnegative (Ni,k ≥ 0) and have the "partition of unity" property Σ(i=0..n) Ni,k(t) = 1. Each Ni,k(t) is nonzero only for ti ≤ t ≤ ti+k, so a control point Pi influences the curve only for ti < t < ti+k. A B-spline is composed of (n – k + 2) Bezier curves of order k joined C^(k–2) continuously at the knot values (t0, t1, ..., tn+k). The main properties of B-splines:
1. each point is affected by k control points;
2. each control point affects k segments;
3. the curve lies inside the convex hull of the control points;
4. affine invariance.
UNIT-III
5. a) Find and show the transformation to reflect a polygon whose vertices are A(-1,0), B(0,-2), C(1,0) and D(0,2) about the line y = x + 2.
b) Explain parallel projection?
Ans. A parallel projection is formed by extending parallel lines from each vertex of the object until they intersect the plane of the screen. The point of intersection is the projection of the vertex.
Then we connect the projected vertices by line segments which correspond to connections on the original object. A parallel projection preserves relative proportions of objects in the x and y directions. The orthographic projection is one of the most widely used parallel projections.
Orthographic projection: Orthographic projection utilizes perpendicular projectors from the object to a plane of projection to generate a system of drawing views. These projections are used to describe the design and features of an object. It is one of the parallel projection forms, in which all the projection lines are orthogonal to the projection plane.
Figure. Projection plane and projection lines in orthogonal projection.
It is often used to generate the front, top and side views of an object, and is widely used in engineering and architectural drawings. An orthographic projection that displays more than one face of an object is known as an axonometric orthographic projection. Axonometric projections use projection planes that are not normal to a principal axis. The construction of an axonometric projection is done by using rotation and translation to manipulate the object. On the basis of the projection-plane normal, the subclasses are:
a) Isometric projections: the direction of projection makes equal angles with all three principal axes.
b) Dimetric projections: the direction of projection makes equal angles with exactly two of the principal axes.
c) Trimetric projections: the direction of projection makes unequal angles with all three principal axes.
Oblique projections: When the angle between the projectors and the plane of projection is not equal to 90°, the projection is called an oblique projection. Common subcategories of oblique projections are:
a) Cavalier projection: obtained when the angle between the oblique projectors and the plane of projection is 45°.
b) Cabinet projection: used to correct the distortion that is produced by the cavalier projection.
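As a minimal sketch of the simplest orthographic case, projecting onto the z = 0 view plane just drops the z coordinate of each vertex (the cube vertices below are an illustrative example):

```python
def orthographic_project(vertices):
    """Orthographic parallel projection onto the z = 0 plane:
    each 3-D vertex (x, y, z) maps to the 2-D point (x, y)."""
    return [(x, y) for (x, y, z) in vertices]

# A unit cube: its front and back faces project onto the same square,
# which is why single-view orthographic drawings show only one face.
cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
print(orthographic_project(cube))
```

Top and side views follow the same pattern by dropping y or x instead of z.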
6. a) Reflect the polygon having coordinates A(-2,0), B(0,-4), C(2,0) and D(0,4) about the line y = 4.
b) Explain line clipping algorithms.
Ans. It is desirable to restrict the effect of graphics primitives to a subregion of the canvas, to protect other portions of the canvas. All primitives are clipped to the boundaries of this clipping rectangle; that is, primitives lying outside the clip rectangle are not drawn. The default clipping rectangle is the full canvas (the screen), and it is obvious that we cannot see any graphics primitives outside the screen. First we test whether both endpoints are inside (and hence draw the line segment) or whether both are left of x = xmin, right of x = xmax, below y = ymin, or above y = ymax. Otherwise we split the line segment into two pieces at a clipping edge and proceed iteratively. A rather simple accept-reject test is the following: divide the plane into 9 regions and assign a 4-bit code to each:
1000 ... above top edge (y > ymax)
0100 ... below bottom edge (y < ymin)
0010 ... right of right edge (x > xmax)
0001 ... left of left edge (x < xmin)
Then calculate the corresponding bit-codes for both endpoints.
Figure: Codes for the 9 regions associated with the clipping rectangle.
The sequence for reading the codes' bits is LRBT (Left, Right, Bottom, Top). Once the codes for each endpoint of a line are determined, the logical AND operation of the codes determines if the line is completely outside of the window: if the logical AND of the endpoint codes is not zero, the line can be trivially rejected. For example, if one endpoint had a code of 1001 while the other endpoint had a code of 1010, the logical AND would be 1000, which indicates the line segment lies outside of the window. On the other hand, if the endpoints had codes of 1001 and 0110, the logical AND would be 0000, and the line could not be trivially rejected. The logical OR of the endpoint codes determines if the line is completely inside the window.
If the logical OR is zero, the line can be trivially accepted. For example, if the endpoint codes are 0000 and 0000, the logical OR is 0000 and the line can be trivially accepted. If the endpoint codes are 0000 and 0110, the logical OR is 0110 and the line cannot be trivially accepted.
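The region codes and the AND/OR tests above can be sketched as follows; the window bounds and test points are illustrative:

```python
# 4-bit region codes as in the answer: 1000 = top, 0100 = bottom,
# 0010 = right, 0001 = left.
TOP, BOTTOM, RIGHT, LEFT = 8, 4, 2, 1

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Region code of point (x, y) relative to the clip window."""
    code = 0
    if y > ymax:
        code |= TOP
    elif y < ymin:
        code |= BOTTOM
    if x > xmax:
        code |= RIGHT
    elif x < xmin:
        code |= LEFT
    return code

def trivial_test(c1, c2):
    """Classify a segment from its two endpoint codes."""
    if c1 | c2 == 0:
        return "accept"   # both endpoints inside the window
    if c1 & c2 != 0:
        return "reject"   # both outside on the same side
    return "clip"         # must be clipped against a window edge

# Window [0,10] x [0,10]: one endpoint inside, one above and right.
c1 = outcode(5, 5, 0, 0, 10, 10)    # 0000
c2 = outcode(12, 12, 0, 0, 10, 10)  # 1010 (top | right)
print(trivial_test(c1, c2))         # clip
```

Segments classified as "clip" are the ones passed on to the intersection step of the Cohen-Sutherland algorithm described next.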

Figure: Example of Cohen-Sutherland line-clipping algorithm.
Algorithm: The Cohen-Sutherland algorithm uses a divide-and-conquer strategy. The line segment's endpoints are tested to see if the line can be trivially accepted or rejected. If the line cannot be trivially accepted or rejected, an intersection of the line with a window edge is determined and the trivial reject/accept test is repeated. This process is continued until the line is accepted or rejected. To perform the trivial acceptance and rejection tests, we extend the edges of the window to divide the plane of the window into nine regions. Each endpoint of the line segment is then assigned the code of the region in which it lies.
1. Given a line segment with endpoints P1 = (x1, y1) and P2 = (x2, y2).
2. Compute the 4-bit codes for each endpoint. If both codes are 0000 (bitwise OR of the codes yields 0000), the line lies completely inside the window: pass the endpoints to the draw routine. If both codes have a 1 in the same bit position (bitwise AND of the codes is not 0000), the line lies completely outside the window and can be trivially rejected.
3. If a line cannot be trivially accepted or rejected, at least one of the two endpoints must lie outside the window and the line segment crosses a window edge. This line must be clipped at the window edge before being passed to the drawing routine.
4. Examine one of the endpoints, say P1 = (x1, y1). Read P1's 4-bit code in order: left-to-right, bottom-to-top.
5. When a set bit (1) is found, compute the intersection I of the corresponding window edge with the line from P1 to P2. Replace P1 with I and repeat the algorithm.
UNIT-IV
7. What are the components of a multimedia system? Describe typical formats in which the data of different components are stored in a computer. How are these linked with each other?
Ans.
Multimedia is the field concerned with the computer-controlled integration of text, graphics, drawings, still and moving images (video), animation, audio, and any other media where every type of information can be represented, stored, transmitted and processed digitally. All multimedia content contains text in some form; even a menu text is accompanied by a single action such as a mouse click, keystroke or finger press on the monitor (in the case of a touch screen). Text in multimedia is used to communicate information to the user, and the proper use of text and words in a multimedia presentation helps the content developer communicate the idea and message to the user.
Multimedia Building Blocks: Any multimedia application consists of any or all of the following components:
1. Text: Text and symbols are very important for communication in any medium. With the recent explosion of the Internet and World Wide Web, text has become more important than ever. The language of the Web is HTML (HyperText Markup Language), originally designed to display simple text documents on computer screens, with occasional graphic images thrown in as illustrations.
2. Audio: Sound is perhaps the most sensuous element of multimedia. It can provide the listening pleasure of music, the startling accent of special effects or the ambience of a mood-setting background.
3. Images: Images, whether represented analog or digital, play a vital role in multimedia. An image is expressed in the form of a still picture, a painting or a photograph taken through a digital camera.
4. Animation: Animation is the rapid display of a sequence of images of 2-D artwork or model positions in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in a number of ways.
5. Video: Digital video has supplanted analog video as the method of choice for making video for multimedia use. Video in multimedia is used to portray real-time moving pictures in a multimedia project.

8. Explain authoring and presentation tools in multimedia.
Ans. Authoring can be described as creating highly interactive applications in which the information can flow in both directions, i.e., from application to user and from user to application. Multimedia authoring tools have helped in creating higher-quality audio and video applications with very little expertise. Authoring does not require a team of programmers: instructors, corporate communicators, and content specialists with no previous programming experience should be able to learn and use the authoring environment. At large, authoring programs can be grouped into two distinct types: (i) those that rely entirely on point and click, and (ii) those that require simple scripting. Multimedia authoring tools can be evaluated by performing certain tests, which include the creation of non-interactive multimedia, computer-based training (CBT) applications, catalog creation and even authoring for the World Wide Web. The advantages of object-oriented programming have been incorporated in authoring products: the Everest Authoring System offers object instancing, while Media Verse and Oracle Media Objects allow adding new methods and events to existing objects. The following subsections discuss a few important authoring tools:
1) Authorware: Macromedia's Authorware is a dependable industrial-strength environment that allows multiple people to contribute to an application. Artists can load a library with media, programmers can provide templates for complex interactions, and interface designers can put everything together; one non-programming designer can even do the entire job alone. Authorware 3 allows authoring of programs in identical environments on either the PC or the Macintosh and produces runtimes for either. Authorware's icons hold a wealth of programming information that never requires the user to think like a programmer.
2) Everest Authoring System: The Everest development environment opens with multiple views of an application. In Everest's application creation, the user starts off with a library and names an initial screen. Dragging a layout object icon from the toolset onto that screen provides the basis for the first screen. The layout object, visible as an icon in the Icon Script, also occupies the Attributes Window, in which the user can choose a background colour or bitmap, select a screen size, and specify up to 34 more properties.
3) Icon Author: Icon Author, an authoring tool, requires the use of an icon-based flowchart for building an application. It maintains a strict separation between an application's structure and the actual content, and ships with a number of ancillary programs that help to edit and manage content. The Smart Object Editor has become central to the creation of Icon Author applications: it assembles individual objects such as imported pictures, sound files, video clips, animations, database links, text push buttons, tables and list boxes into coherent page layouts. The appearance and behavior of these objects can be controlled by changing the properties associated with each one. Icon Author handles multimedia objects with aplomb. Sound, movie and animation files are not only easy to import but also take little effort to control; for example, adding VCR start and stop buttons to a movie object requires just activating the control-bar property in the Smart Object Editor.
4) ImageQ: ImageQ is another multimedia authoring software package. Although it is a Windows 3.1 application, it can also play back on Windows 95. It creates slide-show presentations well and provides a way to distribute them easily, but it lacks the intuitive programming language and strong interface necessary for truly productive multimedia application development.
5) Macromedia Director: Macromedia Director 4.04 is better suited to multimedia presentations and kiosk applications than to CBT or interactive catalogs that require more database work. This product offers a lot:
(i) a central scoring component that provides precise timing control;
(ii) cross-platform compatibility;
(iii) strong yet intuitive animation features;
(iv) an extensible architecture to add functionality.
UNIT-V
9. List four common formats of storing image data. Explain how they are different from each other.
Ans. Image file formats are standardized means of organizing and storing digital images. Image files are composed of either pixels, vector (geometric) data, or a combination of the two. Whatever the format, the files are rasterized to pixels when displayed on most graphic displays. The pixels that constitute an image are ordered as a grid (columns and rows); each pixel consists of numbers representing magnitudes of brightness and color.
Image file sizes: Image file size, expressed as a number of bytes, increases with the number of pixels composing an image and the colour depth of the pixels. The greater the number of rows and columns, the greater the image resolution, and the larger the file. Also, each pixel of an image increases in size when its colour depth increases: an 8-bit pixel (1 byte) stores 256 colors, while a 24-bit pixel (3 bytes) stores 16 million colours, the latter known as truecolor. Image compression uses algorithms to decrease the size of a file. High-resolution cameras produce large image files, ranging from hundreds of kilobytes to megabytes, per the camera's resolution and the image-storage format capacity. High-resolution digital cameras record 12-megapixel (1 MP = 1 million pixels) images, or more, in truecolor.
For example, consider an image recorded by a 12 MP camera: since each pixel uses 3 bytes to record truecolor, the uncompressed image would occupy 36,000,000 bytes of memory, a great amount of digital storage for one image, given that cameras must record and store many images to be practical. Faced with such large file sizes, both within the camera and on a storage disc, image file formats were developed to store such large images compactly.
Major graphic file formats: Including proprietary types, there are hundreds of image file types. The PNG, JPEG, and GIF formats are most often used to display images on the Internet. These formats are briefly described below:
JPEG/JFIF: JPEG (Joint Photographic Experts Group) is a compression method; JPEG-compressed images are usually stored in the JFIF (JPEG File Interchange Format) file format. JPEG compression is (in most cases) lossy. The JPEG/JFIF filename extension is JPG or JPEG. Nearly every digital camera can save images in the JPEG/JFIF format, which supports 8 bits per colour channel (red, green, blue) for a 24-bit total, producing relatively small files. When not too great, the compression does not noticeably detract from the image's quality, but JPEG files suffer generational degradation when repeatedly edited and saved. The JPEG/JFIF format is also used as the image compression algorithm in many PDF files.
TIFF: The TIFF (Tagged Image File Format) is a flexible format that normally saves 8 or 16 bits per colour channel (red, green, blue) for 24-bit and 48-bit totals respectively, usually using either the TIFF or TIF filename extension. TIFF's flexibility can be both an advantage and a disadvantage, since no reader exists that handles every type of TIFF file. TIFF files can be lossy or lossless; some variants offer relatively good lossless compression for bi-level (black and white) images. Some digital cameras can save in TIFF format, using the LZW compression algorithm for lossless storage.
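The uncompressed-size arithmetic in this answer (pixels multiplied by bytes per pixel) can be checked with a short calculation. The 4000 x 3000 resolution below is an assumed example; only the 12 MP total and the bit depths come from the text above.

```python
def uncompressed_size_bytes(width, height, bits_per_pixel):
    """Raw (uncompressed) image size: every pixel stores bits_per_pixel bits."""
    return width * height * bits_per_pixel // 8

# A 12-megapixel truecolor image (24 bits = 3 bytes per pixel),
# assuming a 4000 x 3000 pixel frame:
size = uncompressed_size_bytes(4000, 3000, 24)
print(size)          # -> 36000000 (the 36,000,000 bytes quoted above)

# The same frame at 8 bits per pixel (a 256-colour palette) is a third of that:
print(uncompressed_size_bytes(4000, 3000, 8))   # -> 12000000
```

This is exactly why the compressed formats described in this answer exist: JPEG, PNG and GIF all trade some combination of fidelity and decoding effort to bring these raw sizes down.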
The TIFF image format is not widely supported by web browsers, but TIFF remains widely accepted as a photograph file standard in the printing business. TIFF can handle device-specific colour spaces, such as the CMYK defined by a particular set of printing-press inks. OCR (Optical Character Recognition) software packages commonly generate some form of TIFF image for scanned text pages.
RAW: RAW refers to a family of raw image formats that are options available on some digital cameras. These formats usually use lossless or nearly lossless compression, and produce files much smaller than the TIFF files of full-size processed images from the same cameras. Although a standard raw image format exists (ISO 12234-2, TIFF/EP), the raw formats used by most cameras are not standardized or documented, and differ among camera manufacturers.
PNG: The PNG (Portable Network Graphics) file format was created as the free, open-source successor to GIF. The PNG file format supports truecolor (16 million colours) while GIF supports only 256 colours. PNG excels when the image has large, uniformly coloured areas. The lossless PNG format is best suited for editing pictures, while lossy formats such as JPG are better for the final distribution of photographic images, because JPG files are usually smaller than PNG files. Adam7 interlacing allows an early preview, even when only a small percentage of the image data has been transmitted. PNG provides a patent-free replacement for GIF and can also replace many common uses of TIFF. Indexed-colour, grayscale, and truecolor images are supported, plus an optional alpha channel. PNG is designed to work well in online viewing applications such as web browsers, so it is fully streamable with a progressive-display option. PNG is robust, providing both full file-integrity checking and simple detection of common transmission errors.
Also, PNG can store gamma and chromaticity data for improved colour matching on heterogeneous platforms.
GIF: GIF (Graphics Interchange Format) is limited to an 8-bit palette, i.e. 256 colours. This makes the GIF format suitable for storing graphics with relatively few colours, such as simple diagrams, shapes, logos and cartoon-style images. The GIF format supports animation and is still widely used to provide image-animation effects. It also uses a lossless compression that is most effective when large areas have a single colour, and ineffective for detailed or dithered images.
BMP: The BMP file format (Windows bitmap) handles graphics files within the Microsoft Windows OS. Typically, BMP files are uncompressed and hence large; their advantage is simplicity and wide acceptance in Windows programs.
10. Explain following terms: i) Multimedia Input/Output technologies ii) MIDI
Ans. i) Multimedia Input/Output technologies: A computer is of little use if it cannot communicate with the external world. A computer must therefore have a way to receive information from the outside world and to communicate results back to it; for this it has input/output devices, also written as I/O devices. Input and output devices of a computer system are the devices that connect you to the computer. Input devices let you transfer data and user commands into the computer system, and I/O devices as a whole are used to interact with it. For example, you can type in data using a keyboard, or input data in picture form using a scanner. Input and output devices allow the computer system to interact with the outside world by moving data into and out of the system. Examples of input devices are:
Keyboard: The keyboard is the most common input device used for entering data and information into the computer system. It is the standard input device attached to all computers.
The keyboard is a primary device for inputting text by pressing a set of keys, all of which are neatly mounted in a unit connected to the computer system. Keyboards can be classified into two types: general-purpose keyboards and special-purpose keyboards. General-purpose keyboards are the standard keyboards used with most computer systems; they are called general purpose because they have enough keys to be useful for any type of application.
Mouse: A mouse is a handy device that can be moved on a smooth surface to cause the movement of a cursor on the screen. It is a pointing device used to input data and information into the computer system by pointing with it. Physically, a mouse consists of a small case, held under one of the user's hands, with one or more buttons. For GUI-based systems a mouse is an essential pointing device. The cursor moves in the same direction in which the mouse ball rolls.
Joystick: A joystick is a remote-control device for a computer, used in video games to indicate position. It has a stick that pivots on a base and is used for controlling the action in video games. The user moves a spherical ball with the help of the stick, as opposed to a trackball, where the fingers move the ball directly. Joysticks are also used for controlling machines such as cranes, trucks, unmanned underwater vehicles, flight simulators and industrial robots.
Graphics tablet: A digitizing or graphics tablet is a computer input device that allows one to hand-draw images and graphics, similar to the way one draws images with pencil and paper. These tablets may also be used to capture handwritten signatures.
Microphone: A microphone is an acoustic-to-electric transducer or sensor, used to convert sound signals into electrical signals. Originally invented by Emile Berliner in 1877, it allows you to record voices or sounds and place them onto computers, generally as a wave file.
Digital camera: A digital camera is an electronic device that takes video or still photographs, or both, digitally by recording images via an electronic image sensor. Digital cameras can do things film cameras cannot, for example displaying images on screen immediately after they are recorded. Images recorded on a digital camera can be cropped for editing or deleted, and various special effects can be created using software such as Photoshop. Digital cameras look like ordinary cameras but have sufficient memory, in the form of chips, to store thousands of images, rather than using photographic film.
Bar code reader: A barcode reader is an electronic device used to read printed barcodes. Barcodes represent alphanumeric data as a combination of vertical lines (bars) that vary in width and length, and are a fast and effective way to input data. A barcode reader uses a laser beam to read the series of thick and thin lines that represent the barcode number.
An output device is used to send data out of the system; through output devices the user sees the result of the computer's processing of data. Examples of output devices are:
Monitor: Monitors display what is going on in your computer and can run at various resolutions. A monitor is the part of the computer that looks like a TV set; after typing characters on the keyboard, we can see them on the monitor. The main components of a CRT monitor are the electron gun, the electron beam controlled by an electromagnetic field, and the phosphor-coated display screen.
Printer: Printers are used for producing output on paper. There is a large variety of printers and printing devices, which can be classified according to print quality and printing speed.
Plotter: A plotter is a device that draws pictures on a page as output, after receiving a print command from the computer. It is also called a graph plotter. In plotters, pens are used to draw lines on the paper placed in the plotter.
Speaker: Computer speakers, or multimedia speakers, are external speakers, commonly equipped with a low-power internal amplifier, which produce sound as output. External speakers are connected to a computer using a plug and socket.
Input and output devices are directly connected to an electronic module called an I/O module or device controller. For example, the speakers of a multimedia computer system are directly connected to a device controller called an audio card, which in turn is connected to the rest of the system. Input and output devices are similar in operation but perform opposite functions: input devices feed data into the computer, output devices carry results out, and it is through these devices that the computer is able to communicate with the outside world.
ii) MIDI (Musical Instrument Digital Interface): MIDI is a technical standard that describes a protocol, a digital interface and connectors, and allows a wide variety of electronic musical instruments, computers and other related devices to connect and communicate with one another. A single MIDI link can carry up to sixteen channels of information, each of which can be routed to a separate device. MIDI carries event messages that specify notation, pitch and velocity; control signals for parameters such as volume, vibrato, audio panning and cues; and clock signals that set and synchronize tempo between multiple devices. These messages are sent to other devices, where they control sound generation and other features. The data can also be recorded into a hardware or software device called a sequencer, which can be used to edit the data and to play it back at a later time. Advantages of MIDI include compactness (an entire song can be coded in a few hundred lines, i.e. in a few kilobytes), ease of modification and manipulation, and choice of instruments.
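The event messages described above are only a few bytes each, which is where MIDI's compactness comes from. The sketch below builds a Note On message by hand, following the MIDI channel-message layout: a status byte whose high nibble gives the message type and whose low nibble gives the 4-bit channel, followed by two 7-bit data bytes (pitch and velocity). The helper function names are illustrative, not part of any MIDI library.

```python
def note_on(channel, pitch, velocity):
    """Build a 3-byte MIDI Note On channel message.

    channel:  0-15  (one of the sixteen channels a single MIDI link carries)
    pitch:    0-127 (60 = middle C)
    velocity: 0-127 (how hard the key was struck)
    """
    assert 0 <= channel <= 15 and 0 <= pitch <= 127 and 0 <= velocity <= 127
    status = 0x90 | channel          # 0x90-0x9F: Note On, low nibble = channel
    return bytes([status, pitch, velocity])

def note_off(channel, pitch):
    """Build a Note Off message (status 0x80-0x8F); release velocity 0 here."""
    assert 0 <= channel <= 15 and 0 <= pitch <= 127
    return bytes([0x80 | channel, pitch, 0])

msg = note_on(0, 60, 100)            # middle C on the first channel
print(msg.hex())                     # -> 903c64
print(note_off(0, 60).hex())         # -> 803c00
```

A whole performance is just a timed stream of such messages, which is why a sequencer can store an entire song in a few kilobytes: it records when each event occurred, not the audio itself.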
