
BINARY TEXT IMAGE FILE PREPROCESSING TO ACCOUNT FOR PRINTER DOT GAIN*

Lu Zhang^a, Alex Veis^b, Robert Ulichney^c, Jan Allebach^a

^a School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907-2035, U.S.A.
^b Hewlett-Packard Scitex, Ltd., Netanya 42505, ISRAEL
^c Hewlett-Packard Laboratories USA, Cambridge, MA 02142, U.S.A.

ABSTRACT

Dot gain is a classic problem in digital printing that causes printed halftones and text to appear darker than desired. For printing of text, we propose a method to preprocess the image sent to the printer in order to compensate for dot gain. It is based on an accurate model that predicts the printed absorptance for a given local neighborhood in the digital image, a cost function to penalize lack of fidelity to the desired target text image, and the use of direct binary search (DBS) to minimize the cost.

Index Terms— text print quality, rendering algorithm, dot gain, raggedness, printer characterization

1. INTRODUCTION

Dot gain, in which the colorant effectively covers a larger area on the printed media than that corresponding to the digital image sent to the printer, is a common problem observed with digital printing systems [1, 2]. It affects the printing of text and graphics, as well as halftones. With positive-contrast (black on white) text, the character glyphs will appear darker and larger than intended. In addition, the shape of the characters may be distorted. These effects are especially prominent with small point sizes and complex typefaces that have serifs. Figure 1 provides an illustration. With negative-contrast text (white on black), the effects are reversed. The nature and source of the differences between the digital and printed images will vary depending on the marking technology, be it electrophotography or inkjet, and the specific characteristics of the print mechanism.

Fig. 1. Illustration of increased stroke thickness caused by dot gain. The blue area indicates the dots added by the printer. The red area indicates dots that were deleted.

In this paper, we will specifically consider laser, electrophotographic printers. With electrophotography, the deposition of toner within the area of a given printer-addressable pixel is strongly influenced not just by the image value at that pixel, but also by the image values of the immediately neighboring pixels [3, 4].

The general strategy for dealing with dot gain is to precompensate for it by reducing the stroke width of the glyphs or the size of the halftone dot-clusters in the digital image that is sent to the print engine. However, to do this in the most effective manner requires three fundamental components: (1) an accurate model for the printed absorptance that results when a given digital image is sent to the print engine; (2) a cost function that appropriately weights the difference between the model prediction of the printed image and the target image that was desired; and (3) a search strategy for finding the precompensated digital image to send to the printer that minimizes the cost function.

Tabular models have been shown to be effective for predicting the printed absorptance value for halftone images [4–8]; and similar models have been used to solve the inverse halftoning problem [9, 10]. In this paper, we will propose (in Secs. 2 and 3) a tabular model that is novel in three aspects: (1) to our knowledge, it is the first tabular model applied to prediction of printed text character glyphs; (2) it is a 3-stage model that can more accurately capture a wider range of local binary configurations in the digital image sent to the print engine (a 2-stage model was proposed in [8]); (3) it is a high-definition model that predicts the printed output at a higher resolution than the binary digital image that is sent to the print engine.

Our cost function (Sec. 4) penalizes the mean-absolute difference between the model-predicted printed text image and the target text image generated with an antialiasing method. It also incorporates a term to penalize raggedness in the edges of the character glyphs. Finally, we use the direct binary search (DBS) algorithm to find a locally optimum precompensated binary image. DBS has proven to be an effective tool in a variety of halftoning applications [3, 6, 7, 11–16]. To our knowledge, this is its first application to the optimization of printed text images.

*Research partially supported by the Hewlett-Packard Company.
978-1-4799-5751-4/14/$31.00 ©2014 IEEE  2639  ICIP 2014

2. FRAMEWORK FOR ONLINE TEXT FILE PREPROCESSING

In this section, we present our proposed framework for online text file preprocessing. It is illustrated in Fig. 2. Within this framework, we must first generate the target text image and the initial binary image. We address this question in Sec. 2.1. Starting with the initial binary image, the search scans through the binary preprocessed image in raster order. At each pixel of the binary text image, it considers toggling its state. The high-definition printer predictor is then used to predict the printed binary preprocessed image at a higher resolution. We introduce this printer predictor in Sec. 2.2, and present its offline training process in Sec. 3. If the trial change reduces the cost measure φ, DBS keeps that change. We address this cost measure in Sec. 4. One iteration of the algorithm consists of a visit to every pixel in the 600 dpi binary image. When no changes are retained throughout an entire iteration, the algorithm has converged. This typically requires 8 or so iterations.

Fig. 2. Block diagram for online text preprocessing. We use [m, n] for discrete spatial coordinates at the printer resolution and [M, N] for discrete spatial coordinates at the target or scanner resolution. The function b0[m, n] denotes the initial binary image which is the original input to the printer.

2.1. Generating the Target Image and the Initial Binary Image

In this paper, we have chosen a particular 600 dpi laser, electrophotographic printer as our target output device. As the first step toward the search process, we generate the initial 600 dpi binary image by converting the original text file, a (vector) PDF file in our experiments, to an 1800 dpi anti-aliased text image, and then block averaging and thresholding [17] it to yield a 600 dpi binary image. Since the high-definition printer model predicts the output at 1800 dpi, the corresponding ideal printed image, which we call the target image, is also generated at 1800 dpi using the pointwise rescaled anti-aliased image.

2.2. 3-level Window-Based High-Definition Printer Output Predictor

The predictor has a three-level structure, where each level is defined by a LUT Π^(i), i = 1, 2, 3. Figure 3 demonstrates how the first level of the predictor works for a 6 point Times New Roman letter "s". We assume that the printed absorptance of each 3 × 3 block of pixels in the 1800 dpi printer output is determined by the state of the corresponding pixel and its neighbors in the 600 dpi binary image that is sent to the printer. As shown in Fig. 3(b), we estimate this absorptance by first observing its 5 × 5 neighborhood. We assign the center pixel in this 5 × 5 neighborhood of the 600 dpi image (and hence all the pixels to be predicted in the corresponding 3 × 3 neighborhood of the 1800 dpi image) a single configuration code 0 ≤ c^(1) ≤ 2^25 − 1, calculated according to

    c^(1) = Σ_{i=0}^{24} b_i · 2^i,    (1)

where the superscript 1 in c^(1) refers to the first-level LUT and b_i denotes the binary value of the i-th pixel in the 5 × 5 window indicated in Fig. 3(b). Then, we look up the predicted printed absorptance values of the 9 corresponding 1800 dpi subpixels in the trained look-up table denoted by Π^(1).

Fig. 3. Illustration of the first-level high-definition printer predictor. (a) 600 dpi binary image of letter "s". (b) The 5 × 5 neighborhood with pixel indices indicated (top), and the corresponding prediction of the 3 × 3 block of 1800 dpi pixels indicated by the red box (bottom). (c) Predicted 1800 dpi printer output image of the letter "s". (d) Actual printed image of the letter "s" scanned at 1800 dpi.

Let Ω^(1) denote the set of configuration codes c^(1) that occur in the first-level LUT Π^(1). Ω^(1) will contain every configuration code that was encountered during the offline training phase. It may happen that some configuration codes c^(1) will be observed during the online prediction phase that were not seen during the training process, i.e. c^(1) ∉ Ω^(1). In this case, we use the second-level LUT, denoted by Π^(2), that is based on the dot configurations in the 3 × 3 neighborhood of the center pixel shown in the top half of Fig. 3(b), which contains the pixels b_0, ..., b_8. Let Ω^(2) denote the set of configuration codes c^(2) that occur in Π^(2).

4. COST FUNCTION

There are two basic assumptions on the cost φ. First, it has to be able to capture the difference between the target image and the printed image, assuming the printer predictor gives a fairly accurate output. So we use the sum of absolute
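The raster-scan search described in Sec. 2 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cost is passed in as a black box (the actual measure φ, combining fidelity and raggedness terms, is developed in Sec. 4, and in practice DBS evaluates toggles via efficient local cost updates rather than recomputing the full cost), and the function name and signature are hypothetical.

```python
import numpy as np

def dbs_precompensate(b0, cost, max_iters=20):
    """DBS-style search: raster-scan the 600 dpi binary image, trial-toggle
    each pixel, keep the toggle only if it strictly reduces the cost, and
    stop when a full iteration retains no changes (the convergence criterion
    given in Sec. 2). `cost` is a caller-supplied function of the binary
    image; its actual form is the paper's phi."""
    b = b0.copy()
    phi = cost(b)
    for _ in range(max_iters):
        changed = False
        for m in range(b.shape[0]):
            for n in range(b.shape[1]):
                b[m, n] ^= 1              # trial toggle of this pixel's state
                phi_trial = cost(b)
                if phi_trial < phi:       # retain the change
                    phi = phi_trial
                    changed = True
                else:                     # revert the trial toggle
                    b[m, n] ^= 1
        if not changed:
            break                         # no changes retained: converged
    return b
```

With a toy cost (absolute difference to a known target), the loop drives the binary image to that target, which mirrors how the real φ drives the predicted print toward the target text image.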
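The block-averaging-and-thresholding step of Sec. 2.1 (1800 dpi anti-aliased image down to a 600 dpi binary image) can be sketched as below. The threshold value and the absorptance convention (values in [0, 1], 1 = full absorptance) are assumptions for illustration, not details taken from [17].

```python
import numpy as np

def make_initial_binary(aa_1800, threshold=0.5, factor=3):
    """Block-average an 1800 dpi anti-aliased text image down to 600 dpi
    (each 3x3 block maps to one pixel), then threshold to binary."""
    h, w = aa_1800.shape
    # Group pixels into factor x factor blocks and average each block
    blocks = aa_1800.reshape(h // factor, factor, w // factor, factor)
    avg_600 = blocks.mean(axis=(1, 3))
    # Threshold the block averages to obtain the 600 dpi binary image
    return (avg_600 >= threshold).astype(np.uint8)
```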
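The configuration code of Eq. (1) and the level-by-level fallback of Sec. 2.2 can be sketched as follows, under stated assumptions: the bit ordering of the indices b_i within the 5 × 5 window is taken here as raster order (the paper's Fig. 3(b) defines the actual indexing, with b_0...b_8 in the central 3 × 3), and the third-level behavior shown (replicating the center pixel) is a placeholder, since the chunk above does not specify Π^(3).

```python
import numpy as np

def config_code(bits):
    """Eq. (1): pack binary pixel values b_0..b_{k-1} into the integer
    code c = sum_i b_i * 2^i."""
    return sum(int(b) << i for i, b in enumerate(bits))

def predict_block(window_5x5, lut1, lut2):
    """Predict the 3x3 block of 1800 dpi absorptances for one 600 dpi pixel.
    lut1 maps 5x5 codes c^(1), lut2 maps 3x3 codes c^(2); codes not seen
    in training fall through to the next level."""
    c1 = config_code(window_5x5.flatten())
    if c1 in lut1:                        # c^(1) in Omega^(1): first level
        return lut1[c1]
    c2 = config_code(window_5x5[1:4, 1:4].flatten())
    if c2 in lut2:                        # c^(2) in Omega^(2): second level
        return lut2[c2]
    # Third-level fallback (hypothetical): replicate the center pixel value
    return np.full((3, 3), float(window_5x5[2, 2]))
```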