Binary Text Image File Preprocessing to Account for Printer Dot Gain∗


Lu Zhang^a, Alex Veis^b, Robert Ulichney^c, Jan Allebach^a

^a School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907-2035, U.S.A.
^b Hewlett-Packard Scitex, Ltd., Netanya 42505, ISRAEL
^c Hewlett-Packard Laboratories USA, Cambridge, MA 02142, U.S.A.

ABSTRACT

Dot gain is a classic problem in digital printing that causes printed halftones and text to appear darker than desired. For printing of text, we propose a method to preprocess the image sent to the printer in order to compensate for dot gain. It is based on an accurate model that predicts the printed absorptance for a given local neighborhood in the digital image, a cost function to penalize lack of fidelity to the desired target text image, and the use of direct binary search (DBS) to minimize the cost.

Index Terms— text print quality, rendering algorithm, dot gain, raggedness, printer characterization

1. INTRODUCTION

Dot gain, in which the colorant effectively covers a larger area on the printed media than that corresponding to the digital image sent to the printer, is a common problem observed with digital printing systems [1, 2]. It affects the printing of text and graphics, as well as halftones. With positive-contrast (black on white) text, the character glyphs will appear darker and larger than intended. In addition, the shape of the characters may be distorted. These effects are especially prominent with small point sizes and complex typefaces that have serifs. Figure 1 provides an illustration. With negative-contrast text (white on black), the effects are reversed. The nature and source of the differences between the digital and printed images will vary depending on the marking technology – be it electrophotography or inkjet – and the specific characteristics of the print mechanism.

Fig. 1. Illustration of increased stroke thickness caused by dot gain. The blue area indicates the dots added by the printer. The red area indicates dots that were deleted.

In this paper, we will specifically consider laser, electrophotographic printers. With electrophotography, the deposition of toner within the area of a given printer-addressable pixel is strongly influenced by not just the image value at that pixel, but also by the image values of the immediately neighboring pixels [3, 4].

The general strategy for dealing with dot gain is to precompensate for it by reducing the stroke width of the glyphs or the size of the halftone dot-clusters in the digital image that is sent to the print engine. However, to do this in a most effective manner requires three fundamental components: (1) an accurate model for the printed absorptance that results when a given digital image is sent to the print engine; (2) a cost function that appropriately weights the difference between the model prediction of the printed image and the target image that was desired; and (3) a search strategy for finding the precompensated digital image to send to the printer that minimizes the cost function.

Tabular models have been shown to be effective for predicting the printed absorptance value for halftone images [4–8]; and similar models have been used to solve the inverse halftoning problem [9, 10]. In this paper, we will propose (in Secs. 2 and 3) a tabular model that is novel in three aspects: (1) to our knowledge, it is the first tabular model applied to prediction of printed text character glyphs; (2) it is a 3-stage model that can more accurately capture a wider range of local binary configurations in the digital image sent to the print engine (a 2-stage model was proposed in [8]); (3) it is a high-definition model that predicts the printed output at a higher resolution than the binary digital image that is sent to the print engine.

Our cost function (Sec. 4) penalizes the mean-absolute difference between the model-predicted printed text image and the target text image generated with an antialiasing method. It also incorporates a term to penalize raggedness in the edges of the character glyphs. Finally, we use the direct binary search (DBS) algorithm to find a locally optimum precompensated binary image. DBS has proven to be an effective tool in a variety of halftoning applications [3, 6, 7, 11–16]. To our knowledge, this is its first application to the optimization of printed text images.

∗Research partially supported by the Hewlett-Packard Company.
978-1-4799-5751-4/14/$31.00 ©2014 IEEE — ICIP 2014

2. FRAMEWORK FOR ONLINE TEXT FILE PREPROCESSING

In this section, we present our proposed framework for online text file preprocessing. It is illustrated in Fig. 2. Within this framework, we must first generate the target text image and the initial binary image. We address this question in Sec. 2.1. Starting with the initial binary image, the search scans through the binary preprocessed image in raster order. At each pixel of the binary text image, it considers toggling its state. The high-definition printer predictor is then used to predict the printed binary preprocessed image at a higher resolution. We introduce this printer predictor in Sec. 2.2, and present its offline training process in Sec. 3. If the trial change reduces the cost measure φ, DBS keeps that change. We address this cost measure in Sec. 4. One iteration of the algorithm consists of a visit to every pixel in the 600 dpi binary image. When no changes are retained throughout an entire iteration, the algorithm has converged. This typically requires 8 or so iterations.

Fig. 2. Block diagram for online text preprocessing. We use [m, n] for discrete spatial coordinates at the printer resolution and [M, N] for discrete spatial coordinates at the target or scanner resolution. The function b0[m, n] denotes the initial binary image, which is the original input to the printer.

2.1. Generating the Target Image and the Initial Binary Image

In this paper, we have chosen a particular 600 dpi laser, electrophotographic printer as our target output device. As the first step toward the search process, we generate the initial 600 dpi binary image by converting the original text file, a (vector) PDF file in our experiments, to an 1800 dpi anti-aliased text image, and then block averaging and thresholding [17] it to yield a 600 dpi binary image. Since the high-definition printer model predicts the output at 1800 dpi, the corresponding ideal printed image, which we call the target image, is also generated at 1800 dpi using the pointwise rescaled anti-aliased image.

2.2. 3-level Window-Based High-Definition Printer Output Predictor

The predictor has a three-level structure, where each level is defined by a LUT Π^(i), i = 1, 2, 3. Figure 3 demonstrates how the first level of the predictor works for a 6 point Times New Roman letter "s". We assume that the printed absorptance of each 3 × 3 block of pixels in the 1800 dpi printer output is determined by the state of the corresponding pixel and its neighbors in the 600 dpi binary image that is sent to the printer. As shown in Fig. 3(b), we estimate this absorptance by first observing its 5 × 5 neighborhood. We assign the center pixel in this 5 × 5 neighborhood of the 600 dpi image (and hence all the pixels to be predicted in the corresponding 3 × 3 neighborhood of the 1800 dpi image) a single configuration code 0 ≤ c^(1) ≤ 2^25 − 1, calculated according to

    c^(1) = Σ_{i=0}^{24} b_i · 2^i ,        (1)

where the superscript (1) in c^(1) refers to the first-level LUT and b_i denotes the binary value of the i-th pixel in the 5 × 5 window indicated in Fig. 3(b). Then, we look up the predicted printed absorptance values of the 9 corresponding 1800 dpi subpixels in the trained look-up table denoted by Π^(1).

Fig. 3. Illustration of the first-level high-definition printer predictor. (a) 600 dpi binary image of letter "s". (b) The 5 × 5 neighborhood with pixel indices indicated (top), and the corresponding prediction of the 3 × 3 block of 1800 dpi pixels indicated by the red box (bottom). (c) Predicted 1800 dpi printer output image of the letter "s". (d) Actual printed image of the letter "s" scanned at 1800 dpi.

Let Ω^(1) denote the set of configuration codes c^(1) that occur in the first-level LUT Π^(1). Ω^(1) will contain every configuration code that was encountered during the offline training phase. It may happen that some configuration codes c^(1) will be observed during the online prediction phase that were not seen during the training process, i.e. c^(1) ∉ Ω^(1). In this case, we use the second-level LUT, denoted by Π^(2), that is based on the dot configurations in the 3 × 3 neighborhood of the center pixel shown in the top half of Fig. 3(b), which contains the pixels b_0, ..., b_8. Let Ω^(2) denote the set of configuration codes c^(2) that occur in Π^(2).

4. COST FUNCTION

There are two basic assumptions on the cost φ. First, it has to have the ability to capture the difference between the target image and the printed image, assuming the printer predictor gives a fairly accurate output. So we use the sum of absolute
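The window-code computation of Eq. (1) and the LUT lookup with second-level fallback can be sketched as below. This is an illustrative reconstruction, not the authors' code: the raster-order indexing of b_0..b_24 within the window, the dictionary-based LUTs, and the `fallback` block (standing in for the third-level model, which this excerpt does not describe) are all assumptions.

```python
import numpy as np

def config_code(window):
    """Pack a flat binary window (0/1 values) into an integer code,
    following Eq. (1): c = sum_i b_i * 2**i. We assume raster-order
    indexing; the paper's Fig. 3(b) defines the actual indexing."""
    bits = np.asarray(window, dtype=np.uint64).ravel()
    return int((bits << np.arange(bits.size, dtype=np.uint64)).sum())

def predict_block(binary_600, m, n, lut1, lut2, fallback):
    """Predict the 3x3 block of 1800 dpi absorptances for pixel (m, n).

    lut1: dict mapping 25-bit codes -> 3x3 arrays (first-level LUT, Pi^(1))
    lut2: dict mapping 9-bit codes  -> 3x3 arrays (second-level LUT, Pi^(2))
    fallback: 3x3 array used when neither LUT contains the code
              (a stand-in for the third-level model).
    """
    padded = np.pad(binary_600, 2)          # zero-pad borders by 2 pixels
    w5 = padded[m:m + 5, n:n + 5]           # 5x5 neighborhood centered on (m, n)
    c1 = config_code(w5)
    if c1 in lut1:                          # c^(1) in Omega^(1): first-level hit
        return lut1[c1]
    w3 = w5[1:4, 1:4]                       # central 3x3 neighborhood (b_0..b_8)
    c2 = config_code(w3)
    return lut2.get(c2, fallback)           # second-level LUT, else fallback
```

With 25 binary pixels the code spans the full range 0 ≤ c^(1) ≤ 2^25 − 1, which is why unseen configurations are common enough to require the coarser second-level table.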
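The initial-binary-image generation of Sec. 2.1 (block averaging and thresholding the 1800 dpi anti-aliased rendering down to 600 dpi) might be sketched as follows. The 0.5 threshold and the plain block mean are assumptions; the excerpt defers the actual method to [17].

```python
import numpy as np

def downsample_binary(aa_1800, factor=3, threshold=0.5):
    """Block-average an anti-aliased image (values in [0, 1]) over
    factor x factor cells and threshold to a binary image, e.g.
    1800 dpi -> 600 dpi for factor=3."""
    h, w = aa_1800.shape
    assert h % factor == 0 and w % factor == 0
    # group pixels into non-overlapping factor x factor blocks
    blocks = aa_1800.reshape(h // factor, factor, w // factor, factor)
    means = blocks.mean(axis=(1, 3))        # mean absorptance per block
    return (means >= threshold).astype(np.uint8)
```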
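The raster-order toggle search of Sec. 2 can be sketched as below. Note the hedges: `predict_print` is a toy stand-in for the trained three-level LUT predictor (a simple 4-neighbor ink bleed), and the cost uses only the mean-absolute term, omitting the raggedness penalty of Sec. 4, so this illustrates the DBS control flow, not the paper's actual model.

```python
import numpy as np

def predict_print(b):
    """Toy dot-gain predictor: each printed dot bleeds 0.25 absorptance
    into each of its 4-neighbors. Stand-in for the LUT model."""
    p = b.astype(float)
    padded = np.pad(p, 1)
    gain = (padded[:-2, 1:-1] + padded[2:, 1:-1]
            + padded[1:-1, :-2] + padded[1:-1, 2:]) * 0.25
    return np.clip(p + gain, 0.0, 1.0)

def cost(b, target):
    """Mean-absolute difference between predicted print and target."""
    return np.abs(predict_print(b) - target).mean()

def dbs_precompensate(b0, target, max_iters=20):
    """Raster-order toggle search: keep any single-pixel toggle that
    lowers the cost; stop when a full pass retains no changes."""
    b = b0.copy()
    for _ in range(max_iters):
        changed = False
        phi = cost(b, target)
        for m in range(b.shape[0]):
            for n in range(b.shape[1]):
                b[m, n] ^= 1                  # trial toggle
                phi_try = cost(b, target)
                if phi_try < phi:             # keep an improving change
                    phi = phi_try
                    changed = True
                else:
                    b[m, n] ^= 1              # revert
        if not changed:
            break                             # converged
    return b
```

As in the paper, each iteration visits every pixel once, and convergence is declared when an entire pass retains no toggles.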