J Real-Time Image Proc (2015) 10:329–344 DOI 10.1007/s11554-012-0291-4

SPECIAL ISSUE

An effective real-time color quantization method based on divisive hierarchical clustering

M. Emre Celebi • Quan Wen • Sae Hwang

Received: 1 August 2012 / Accepted: 17 October 2012 / Published online: 6 November 2012
© Springer-Verlag Berlin Heidelberg 2012

M. E. Celebi (corresponding author): Department of Computer Science, Louisiana State University, Shreveport, LA, USA. e-mail: [email protected]
Q. Wen: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, People's Republic of China. e-mail: [email protected]
S. Hwang: Department of Computer Science, University of Illinois, Springfield, IL, USA. e-mail: [email protected]

Abstract  Color quantization (CQ) is an important operation with many applications in graphics and image processing. Clustering algorithms have been extensively applied to this problem. In this paper, we propose a simple yet effective CQ method based on divisive hierarchical clustering. Our method utilizes the commonly used binary splitting strategy along with several carefully selected heuristics that ensure a good balance between effectiveness and efficiency. We also propose a slightly more computationally expensive variant of this method that employs local optimization using the Lloyd–Max algorithm. Experiments on a diverse set of publicly available images demonstrate that the proposed method outperforms some of the most popular quantizers in the literature.

Keywords  Color quantization · Clustering · Divisive hierarchical clustering

1 Introduction

True-color images typically contain thousands of colors, which makes their display, storage, transmission, and processing problematic. For this reason, CQ is commonly used as a preprocessing step for various graphics and image processing tasks. In the past, CQ was a necessity due to the limitations of display hardware, which could not handle the over 16 million possible colors in 24-bit images. Although 24-bit display hardware has become more common, CQ still maintains its practical value [6]. Modern applications of CQ in graphics and image processing include: (1) compression [63], (2) segmentation [17], (3) text localization/detection [48], (4) color-texture analysis [47], (5) watermarking [35], (6) non-photorealistic rendering [55], and (7) content-based retrieval [18].

The process of CQ consists mainly of two phases: palette design (the selection of a small set of colors that represents the original image colors) and pixel mapping (the assignment of each input pixel to one of the palette colors). The primary objective is to reduce the number of unique colors, $N'$, in an image to $K$ ($K \ll N'$) with minimal distortion. In most applications, 24-bit pixels in the original image are reduced to 8 bits or fewer. Since natural images often contain a large number of colors, faithful representation of these images with a limited-size palette is a difficult problem.

CQ methods can be broadly classified into two categories [60]: image-independent methods that determine a universal (fixed) palette without regard to any specific image [21, 39], and image-dependent methods that determine a custom (adaptive) palette based on the color distribution of the image. Despite being very fast, image-independent methods usually give poor results since they do not take the image contents into account. Therefore, most of the studies in the literature consider only image-dependent methods, which strive to achieve a better balance between computational efficiency and visual quality of the quantization output.

Numerous image-dependent CQ methods have been developed over the past three decades. These can be categorized into two families: preclustering (hierarchical clustering) methods and postclustering (partitional clustering) methods [6]. The former methods recursively find nested clusters either in a top-down (divisive) or bottom-up (agglomerative) fashion. In contrast, the latter find all the clusters simultaneously as a partition of the data and do not impose a hierarchical structure [30].

Preclustering methods are mostly based on the statistical analysis of the color distribution of the image. Divisive preclustering methods start with a single cluster that contains all $N'$ image colors. This initial cluster is recursively subdivided until $K$ clusters are obtained. Well-known divisive methods include median-cut [24], octree [22], the variance-based method [54], binary splitting [40], greedy orthogonal bipartitioning [58], center-cut [31], and rwm-cut [64]. More recent methods can be found in [13, 25, 32, 37, 49]. On the other hand, agglomerative preclustering methods [1, 5, 19, 51, 61] start with $N'$ singleton clusters, each of which contains one image color. These clusters are repeatedly merged until $K$ clusters remain. In contrast to preclustering methods, which compute the palette only once, postclustering methods first determine an initial palette and then improve it iteratively. Since these methods involve iterative or stochastic optimization, they can obtain higher quality results than preclustering methods at the expense of increased computational time. Clustering algorithms adapted to CQ include maxmin [23, 59], k-means [9, 26, 27, 29, 33], k-harmonic means [20], competitive learning [8, 10, 46, 52], fuzzy c-means [7, 34, 41, 45, 57], rough c-means [44], BIRCH [3], and self-organizing maps [12, 14, 16, 42, 43, 62].

In this paper, we present an effective divisive preclustering method for CQ. The rest of the paper is organized as follows. Section 2 describes the anatomy of a divisive hierarchical clustering algorithm and the proposed CQ method. Section 3 presents the experimental setup and compares the proposed method to other CQ methods. Finally, Section 4 gives the conclusions.

2 Divisive hierarchical clustering for CQ

2.1 Anatomy of a divisive hierarchical clustering algorithm

As described in the previous section, preclustering methods can be divided into two categories: divisive and agglomerative. Since agglomerative methods typically have at least quadratic time complexity, most of the existing preclustering methods are of the divisive type. A divisive algorithm partitions the three-dimensional color space of the input image into $K$ subspaces using $K - 1$ planes, each of which is uniquely defined by a normal vector and a point. The main heuristics used by divisive algorithms are the following [6] (an illustrative sketch that ties the four choices together is given after the list):

1. Selection of a splitting strategy: Following tree-structured vector quantizers, most divisive algorithms employ binary splitting. In other words, the color space of the input image is partitioned into $K$ subspaces by a sequence of $K - 1$ split operations. Note that the number of binary splits that can be performed to obtain $K$ subpartitions equals the number of full binary trees having exactly $K$ leaves, $\frac{1}{K}\binom{2K-2}{K-1}$, which is typically too large to permit exhaustive enumeration.

2. Selection of the next partition to be split: In each iteration, the algorithm selects a partition and splits it into two subpartitions. Possible choices for the partition to be split include the most populated partition [24], the partition with the greatest range on any coordinate axis [31], the partition with the greatest dominant eigenvalue [40], and the partition with the greatest sum of squared error (SSE) [13, 49, 54, 58, 64]. Among these criteria, the last one is the most sensible, as the partition with the greatest SSE is the one that contributes the most to the total distortion.

3. Selection of the partitioning plane normal vector: The partitioning plane may be orthogonal to the coordinate axis with the greatest range [24, 31], the coordinate axis with the greatest variance [49], the major axis [40], or some other specially chosen axis [13, 54, 58, 64]. Among these choices, the major axis is the most sensible, as this is the axis along which the data spread is the greatest. However, determination of the major axis requires the computation of the cluster covariance matrix, which is expensive. Therefore, the coordinate axis with the greatest variance can be used as a computationally efficient alternative to the major axis.

4. Selection of the partitioning plane position: The partitioning plane may pass through the mean [13, 31, 40], the median [24], the radius-weighted mean [64], or some other specially chosen point [49, 54, 58, 64] on the partitioning axis. The rationale behind the choice of the median point, which is adapted from the original kd-tree construction algorithm [2], is that the resulting subpartitions will contain approximately the same number of colors. However, there is no sound justification for requiring that each cluster contain a nearly equal number of colors while ignoring the distribution of these colors [53]. In contrast, for hyperspherical clusters, it can be shown that the mean point is the optimal choice [40].
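To make the interplay of the four heuristics concrete, the C sketch below (C being the language used for the implementations in Sect. 3) outlines the skeleton of a generic divisive quantizer: binary splitting (heuristic 1), selection of the maximum-SSE partition (heuristic 2), and splitting along the maximum-variance axis (heuristic 3) at the mean point (heuristic 4). All type and function names are illustrative rather than taken from any published implementation; split_partition, which distributes the colors of the parent between the two children, is only declared here (a concrete split is sketched in Sect. 2.2).

```c
#include <stddef.h>

/* A box (subpartition) of the color space, summarized by the statistics
 * a divisive quantizer needs.                                            */
typedef struct {
    double weight;        /* fraction of image pixels falling in this box */
    double mean[3];       /* per-channel mean (centroid)                  */
    double var[3];        /* per-channel variance                         */
    double sse;           /* sum of squared errors contributed by the box */
    int    *members;      /* indices of the histogram colors in this box  */
    size_t num_members;
} Partition;

/* Heuristic 2: pick the partition with the greatest SSE. */
static size_t find_max_sse(const Partition *p, size_t n) {
    size_t best = 0;
    for (size_t i = 1; i < n; ++i)
        if (p[i].sse > p[best].sse) best = i;
    return best;
}

/* Heuristic 3: pick the coordinate axis with the greatest variance,
 * a cheap stand-in for the major (principal) axis.                       */
static int find_split_axis(const Partition *p) {
    int axis = 0;
    for (int k = 1; k < 3; ++k)
        if (p->var[k] > p->var[axis]) axis = k;
    return axis;
}

/* Heuristic 4: split at position `pos` on `axis` (mean point); assumed to
 * be implemented elsewhere -- it fills the two child partitions.         */
void split_partition(const Partition *parent, int axis, double pos,
                     Partition *left, Partition *right);

/* Heuristic 1: binary splitting -- K-1 splits produce K partitions.
 * parts[0] initially holds all colors; parts must have capacity K.       */
void divisive_quantize(Partition *parts, size_t K) {
    size_t count = 1;
    while (count < K) {
        size_t s  = find_max_sse(parts, count);
        int axis  = find_split_axis(&parts[s]);
        double pos = parts[s].mean[axis];
        Partition left, right;
        split_partition(&parts[s], axis, pos, &left, &right);
        parts[s] = left;
        parts[count++] = right;
    }
}
```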


2.2 Proposed CQ method

Motivated by computational efficiency considerations, we propose a new divisive CQ method called variance-cut (VC) that employs the binary splitting strategy. Following the majority of divisive algorithms [13, 24, 49, 54, 58, 64], VC starts by building a 32 × 32 × 32 color histogram using 5 bits/channel uniform quantization. In each iteration, the method splits the partition with the greatest SSE along the coordinate axis with the greatest variance at the mean point. After $K - 1$ iterations (splits), the centroids of the resulting $K$ subpartitions are taken as the color palette. The proposed method can be implemented efficiently by using a few data structures and algebraic equalities, as follows (illustrative sketches of the hash table and of the statistics computations are given after this list):

• The set of colors in the input image can be determined efficiently using a hash table that uses chaining for collision resolution and a universal hash function of the form [15] $h_a(c) = \left(\sum_{i=1}^{3} a_i c_i\right) \bmod m$, where $c = (c_1, c_2, c_3)$ denotes a color with red ($c_1$), green ($c_2$), and blue ($c_3$) components, $m$ is a prime number, and the elements of the sequence $a = (a_1, a_2, a_3)$ are chosen randomly from the set $\{0, 1, \ldots, m - 1\}$. In this study, the parameters of the hash function are chosen as a = (33023, 30013, 27011) and m = 20023. Each color is then represented by a quartet $c = \langle c_1, c_2, c_3, w(c)\rangle$, where $w(c)$ is the weight (probability) of the color, calculated as its frequency (count) divided by the number of pixels in the image ($N$).

• In each iteration, the partition with the greatest SSE, say partition $C$, is split into two subpartitions $C_a$ and $C_b$ along the coordinate axis with the greatest variance. This involves going through the set of colors in $C$ and deciding on which side of the mean each color falls. In other words, if the projection of a color on the partitioning axis is less than the mean projection, the color is assigned to $C_a$; otherwise, it is assigned to $C_b$. During this assignment phase, incremental statistics such as the count, weight, and sum/squared sum of the red, green, and blue values are calculated only for one of the subpartitions, say $C_a$. Based on these statistics, the weight, mean, and variance of $C_a$ can be calculated as $w_a = \sum_{c \in C_a} w(c)$, $m_a = \sum_{c \in C_a} w(c)\,c / w_a$, and $v_a = \sum_{c \in C_a} w(c)\,c^2 / w_a - m_a^2$, respectively, where $x^2 = (x_1^2, x_2^2, x_3^2)$ denotes the componentwise square of a vector $x = (x_1, x_2, x_3)$. Let $w$, $m$, and $v$ denote the weight, mean, and variance of the parent partition $C$, respectively. The weight, mean, and variance of the other subpartition, $C_b$, are then given by $w_b = w - w_a$, $m_b = (w m - w_a m_a)/w_b$, and $v_b = \left[w v - w_a\left(v_a + (m - m_a)^2\right)\right]/w_b - (m - m_b)^2$, respectively.
• The mean and variance of the partition to be split, $C$, are calculated only in the first iteration. In subsequent iterations, the mean and variance values calculated during the preceding iterations are reused.
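The two C sketches below illustrate the data structures and algebraic equalities described above. They are minimal illustrations under our own naming and layout assumptions, not excerpts from the authors' implementation. The first builds the color table with the universal hash function and chaining, using the constants quoted above:

```c
#include <stdlib.h>

#define HASH_SIZE 20023u                                  /* m: a prime  */
static const unsigned A[3] = {33023u, 30013u, 27011u};    /* sequence a  */

typedef struct ColorNode {
    unsigned char c[3];       /* red, green, blue components             */
    unsigned count;           /* frequency; divide by N to obtain w(c)   */
    struct ColorNode *next;   /* chaining for collision resolution       */
} ColorNode;

static ColorNode *table[HASH_SIZE];

/* Universal hash function h_a(c) = (a1*c1 + a2*c2 + a3*c3) mod m */
static unsigned hash_color(const unsigned char c[3]) {
    unsigned long h = 0;
    for (int i = 0; i < 3; ++i)
        h += (unsigned long)A[i] * c[i];
    return (unsigned)(h % HASH_SIZE);
}

/* Insert one pixel's color, creating a node on first occurrence. */
ColorNode *insert_color(const unsigned char c[3]) {
    unsigned h = hash_color(c);
    for (ColorNode *n = table[h]; n != NULL; n = n->next)
        if (n->c[0] == c[0] && n->c[1] == c[1] && n->c[2] == c[2]) {
            ++n->count;
            return n;
        }
    ColorNode *n = malloc(sizeof *n);
    n->c[0] = c[0]; n->c[1] = c[1]; n->c[2] = c[2];
    n->count = 1;
    n->next = table[h];
    table[h] = n;
    return n;
}
```

The second sketch accumulates the statistics of $C_a$ in a single pass and then derives those of $C_b$ from the parent's statistics via the equalities above, so no second pass over the colors is needed:

```c
typedef struct {
    double w;       /* weight               */
    double m[3];    /* per-channel mean     */
    double v[3];    /* per-channel variance */
} Stats;

/* w_a = sum w(c), m_a = sum w(c) c / w_a, v_a = sum w(c) c^2 / w_a - m_a^2 */
Stats stats_of_subpartition(const double (*colors)[3], const double *weights,
                            const int *members, int n) {
    Stats s = {0.0, {0, 0, 0}, {0, 0, 0}};
    double sum[3] = {0, 0, 0}, sqsum[3] = {0, 0, 0};
    for (int i = 0; i < n; ++i) {
        const double *c = colors[members[i]];
        double wc = weights[members[i]];
        s.w += wc;
        for (int k = 0; k < 3; ++k) {
            sum[k]   += wc * c[k];
            sqsum[k] += wc * c[k] * c[k];
        }
    }
    for (int k = 0; k < 3; ++k) {
        s.m[k] = sum[k] / s.w;
        s.v[k] = sqsum[k] / s.w - s.m[k] * s.m[k];
    }
    return s;
}

/* Sibling statistics from the parent C and the subpartition C_a:
 *   w_b = w - w_a
 *   m_b = (w m - w_a m_a) / w_b
 *   v_b = [w v - w_a (v_a + (m - m_a)^2)] / w_b - (m - m_b)^2  (per channel) */
Stats sibling_stats(const Stats *parent, const Stats *a) {
    Stats b;
    b.w = parent->w - a->w;
    for (int k = 0; k < 3; ++k) {
        b.m[k] = (parent->w * parent->m[k] - a->w * a->m[k]) / b.w;
        double da = parent->m[k] - a->m[k];
        double db = parent->m[k] - b.m[k];
        b.v[k] = (parent->w * parent->v[k] - a->w * (a->v[k] + da * da)) / b.w
                 - db * db;
    }
    return b;
}
```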

Fig. 1 Test images. a Baboon, b Fish, c Goldhill, d Lenna, e Motocross, f Parrots, g Peppers, h Pills

Table 1 MAE comparison of the quantization methods (within each image group, the four columns correspond to K = 32, 64, 128, and 256)

Baboon | Fish | Goldhill | Lenna
POP 47.3 33.2 22.0 16.4 | 47.1 21.0 13.7 12.2 | 26.7 17.3 13.7 12.4 | 22.5 16.7 12.9 11.9
MC 33.6 28.2 23.5 19.7 | 22.4 18.4 14.8 11.9 | 23.0 18.4 15.5 12.4 | 19.3 16.0 14.2 11.9
MPOP 29.1 23.6 18.2 14.5 | 18.6 16.5 10.7 9.4 | 19.2 16.4 11.0 9.6 | 19.2 16.3 10.6 9.6
OCT 31.6 24.0 19.8 15.3 | 19.2 14.1 11.0 8.4 | 19.9 15.0 11.6 8.8 | 18.4 14.1 11.0 8.6
WAN 30.8 25.8 21.2 17.2 | 21.6 17.3 13.7 11.2 | 19.6 15.4 12.6 10.7 | 18.9 15.3 12.3 10.1
WU 29.8 23.4 18.7 15.0 | 17.8 13.4 10.6 8.5 | 18.4 14.0 11.0 8.8 | 16.8 13.2 10.4 8.3
CC 30.2 24.0 19.9 16.8 | 18.8 15.5 12.4 10.2 | 19.4 15.8 12.6 10.3 | 18.7 15.3 12.2 9.8
RWM 29.4 24.0 18.9 15.1 | 17.3 13.6 10.9 8.9 | 17.9 14.5 11.3 8.9 | 17.3 13.2 10.4 8.4
PWC 30.2 24.3 19.6 15.6 | 18.8 15.5 13.6 12.2 | 18.7 15.4 13.4 12.4 | 18.4 14.4 12.7 11.9
SAM 29.3 23.3 18.7 15.0 | 19.0 14.9 11.7 9.6 | 18.0 14.3 11.5 9.5 | 17.2 14.0 11.3 9.6
CY 30.2 23.3 19.0 15.1 | 18.6 13.9 11.2 8.9 | 18.2 14.8 11.5 9.2 | 17.9 13.6 10.8 8.8
VC 29.4 22.8 18.7 15.1 | 17.2 13.5 10.7 8.7 | 17.8 14.0 10.9 8.7 | 16.5 13.0 10.4 8.3
VCL 28.5 22.5 18.3 14.8 | 17.1 13.1 10.5 8.6 | 17.3 13.7 10.7 8.6 | 16.5 12.7 10.3 8.3
SOM 27.0 21.2 16.9 13.7 | 15.9 12.3 9.1 7.5 | 16.4 12.3 9.6 7.7 | 15.0 11.8 9.1 7.4
MMM 31.4 26.7 21.4 17.2 | 20.8 16.7 12.6 10.2 | 21.4 16.6 13.5 10.9 | 18.5 14.9 12.0 9.8
ADU 26.8 21.3 17.0 13.5 | 15.2 11.7 9.1 7.1 | 15.9 12.1 9.6 7.6 | 14.9 11.5 9.1 7.3
WSM 27.0 21.3 17.1 13.7 | 15.2 11.9 9.2 7.3 | 15.8 12.2 9.6 7.7 | 14.8 11.5 9.2 7.4

Motocross | Parrots | Peppers | Pills
POP 33.1 21.4 15.7 13.1 | 58.1 22.6 16.5 13.6 | 36.4 21.5 16.0 13.2 | 33.4 19.1 14.9 13.1
MC 26.7 20.9 16.7 14.1 | 27.9 21.9 16.5 14.1 | 25.5 20.9 17.9 15.1 | 23.9 20.4 16.7 13.4
MPOP 21.0 17.9 11.5 9.7 | 25.5 19.8 12.8 10.3 | 23.4 19.4 13.3 11.3 | 22.8 18.2 12.3 10.1
OCT 21.1 15.2 11.8 8.8 | 24.0 18.1 13.5 10.1 | 22.8 17.9 14.2 11.0 | 22.6 16.9 13.2 9.9
WAN 23.7 19.1 14.9 11.6 | 25.1 19.3 15.3 12.0 | 23.9 19.0 15.7 12.9 | 22.3 18.1 14.7 12.2
WU 20.6 15.2 11.6 8.9 | 22.7 16.8 12.7 9.8 | 22.0 17.0 13.5 10.7 | 21.6 16.3 12.5 9.7
CC 24.6 18.9 14.9 11.6 | 26.7 21.3 16.7 12.0 | 27.1 21.5 17.1 14.1 | 22.6 17.4 14.2 11.7
RWM 19.9 15.7 11.7 9.2 | 23.0 17.6 13.1 10.2 | 22.8 17.9 13.7 11.0 | 22.0 16.5 12.6 10.0
PWC 20.2 16.6 13.8 12.7 | 25.1 19.0 15.3 13.1 | 24.6 18.0 14.9 12.6 | 23.0 17.8 14.6 12.7
SAM 19.9 15.4 12.0 9.9 | 22.4 16.7 12.8 10.2 | 22.0 17.0 13.6 11.2 | 21.2 16.2 12.5 10.0
CY 20.4 15.5 12.2 9.5 | 23.5 18.1 13.8 10.7 | 23.7 18.3 14.3 11.4 | 21.2 17.1 13.4 10.4
VC 20.5 15.1 11.3 9.0 | 22.4 16.9 13.0 10.0 | 21.7 17.5 14.0 11.0 | 20.9 16.5 12.7 9.8
VCL 19.4 14.7 11.0 8.9 | 21.6 16.7 12.8 10.0 | 21.7 17.0 13.4 10.9 | 20.5 16.1 12.4 9.7
SOM 18.5 13.1 9.9 7.6 | 20.1 14.9 11.0 8.4 | 20.2 15.7 12.1 9.8 | 19.2 14.5 10.8 8.4
MMM 28.1 23.5 16.1 12.9 | 26.4 19.2 15.7 11.4 | 25.1 19.9 16.0 12.8 | 22.5 18.1 15.1 12.1
ADU 17.4 12.9 9.7 7.5 | 20.0 14.8 10.9 8.3 | 20.5 15.5 12.2 9.7 | 19.3 14.1 10.8 8.4
WSM 17.4 13.2 9.9 7.6 | 20.4 14.8 11.2 8.4 | 20.4 15.5 12.2 9.8 | 19.2 14.4 10.9 8.5

We also experimented with a more elaborate version of VC, called VCL, that locally optimizes the two subpartitions resulting from each split operation using a few Lloyd–Max iterations [36, 38]. Starting with the centroids (means) of the two subpartitions, each color is assigned to the nearest centroid, and each of the centroids is then recalculated as the mean of all colors assigned to it. These two steps are repeated until a predefined termination criterion is met. Each iteration is guaranteed to reduce the SSE, calculated as $\sum_{c \in C_a} \|c - m_a\|_2^2 + \sum_{c \in C_b} \|c - m_b\|_2^2$, or leave it unchanged (the sums run over the $n_a$ and $n_b$ colors in $C_a$ and $C_b$, respectively). In the experiments, this local optimization procedure was terminated after 10 iterations, as more iterations rarely achieved any noticeable reduction in the SSE.
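A compact C sketch of this local refinement is shown below. It assumes the histogram colors and their weights are stored in parallel arrays, recomputes the centroids as frequency-weighted means (an assumption consistent with the histogram representation), and uses the plain squared-Euclidean assignment test; a cheaper, algebraically equivalent test is discussed later in this section. The function name and signature are illustrative.

```c
#define LLOYD_ITERS 10

/* Refine the two subpartitions of a split: assign each color to the nearer
 * centroid, then recompute both centroids; repeat for 10 iterations.      */
void lloyd_refine(const double (*colors)[3], const double *weights,
                  int *label, int n, double ma[3], double mb[3]) {
    for (int it = 0; it < LLOYD_ITERS; ++it) {
        double sum[2][3] = {{0}}, wsum[2] = {0, 0};
        for (int i = 0; i < n; ++i) {
            double da = 0.0, db = 0.0;        /* squared distances        */
            for (int k = 0; k < 3; ++k) {
                double ea = colors[i][k] - ma[k], eb = colors[i][k] - mb[k];
                da += ea * ea;
                db += eb * eb;
            }
            int l = (da < db) ? 0 : 1;        /* 0 -> C_a, 1 -> C_b       */
            label[i] = l;
            wsum[l] += weights[i];
            for (int k = 0; k < 3; ++k)
                sum[l][k] += weights[i] * colors[i][k];
        }
        if (wsum[0] == 0.0 || wsum[1] == 0.0)
            break;                            /* degenerate split; stop    */
        for (int k = 0; k < 3; ++k) {
            ma[k] = sum[0][k] / wsum[0];
            mb[k] = sum[1][k] / wsum[1];
        }
    }
}
```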


Table 2 MSE comparison of the quantization methods (within each image group, the four columns correspond to K = 32, 64, 128, and 256)

Baboon | Fish | Goldhill | Lenna
POP 1,679.5 849.5 330.7 170.4 | 2,827.6 482.5 105.2 69.8 | 576.7 199.3 101.8 73.1 | 347.2 199.5 84.5 65.3
MC 643.0 445.6 307.4 213.0 | 282.3 189.4 121.2 75.9 | 293.9 188.8 132.3 86.5 | 214.0 146.1 112.4 80.3
MPOP 453.1 290.4 195.0 109.3 | 198.4 145.5 66.2 47.7 | 200.2 140.7 66.7 48.6 | 194.5 138.9 60.0 47.8
OCT 530.2 306.6 203.6 125.0 | 218.4 125.1 77.8 44.3 | 230.3 130.3 79.0 45.7 | 186.7 110.0 66.0 40.6
WAN 528.3 385.7 266.0 178.0 | 311.6 209.0 124.5 77.1 | 229.0 141.2 94.5 64.4 | 216.5 140.8 87.6 56.7
WU 468.3 288.3 186.5 118.6 | 187.6 111.6 69.0 43.8 | 196.0 114.2 71.4 45.2 | 158.2 99.1 61.7 39.4
CC 473.1 299.7 202.5 144.7 | 189.8 127.3 82.3 56.5 | 202.0 134.9 87.9 57.9 | 189.1 125.5 80.6 52.2
RWM 459.0 301.6 188.1 120.2 | 176.7 109.0 68.9 44.4 | 179.8 118.3 71.0 44.5 | 161.2 94.6 60.1 39.2
PWC 469.4 308.8 206.7 128.8 | 201.5 130.9 93.1 69.4 | 193.8 125.1 88.9 70.9 | 186.9 108.0 78.8 65.0
SAM 464.9 293.9 188.8 119.8 | 198.5 120.1 74.0 48.5 | 179.3 111.2 70.4 46.7 | 158.0 102.0 65.0 45.4
CY 465.9 280.9 187.3 117.7 | 193.8 112.5 72.0 44.8 | 186.3 121.6 72.2 46.4 | 166.4 97.6 62.5 41.9
VC 450.6 273.5 179.9 117.6 | 168.1 106.5 67.4 43.4 | 174.8 109.5 68.3 42.4 | 145.6 91.7 60.7 38.9
VCL 425.6 264.0 173.1 115.3 | 169.9 102.5 65.1 43.1 | 169.3 104.3 66.2 42.0 | 146.3 89.2 59.2 38.6
SOM 433.6 268.9 163.9 108.2 | 180.4 114.1 60.4 45.1 | 182.1 104.2 59.5 38.4 | 140.2 87.4 50.5 33.9
MMM 510.0 368.4 230.4 147.5 | 223.4 144.2 81.7 53.7 | 239.9 143.1 95.4 61.0 | 183.3 114.2 73.5 48.5
ADU 380.3 240.4 151.7 96.3 | 143.5 87.8 55.0 35.2 | 149.3 84.5 53.5 34.3 | 121.4 72.6 46.4 30.0
WSM 379.5 238.6 153.7 98.6 | 139.2 85.1 52.9 32.8 | 145.6 84.7 52.9 34.6 | 119.2 72.4 46.8 30.5

Motocross | Parrots | Peppers | Pills
POP 1,288.6 474.3 201.6 93.5 | 4,086.8 371.7 180.6 104.0 | 1,389.3 367.7 218.3 129.1 | 788.2 222.9 124.0 85.3
MC 437.6 254.0 169.4 114.3 | 441.0 265.1 153.6 112.3 | 377.6 238.9 173.8 121.9 | 324.2 233.8 159.5 100.4
MPOP 287.5 177.9 84.1 53.3 | 379.8 212.1 104.7 59.4 | 338.7 204.9 112.1 69.3 | 277.5 175.2 88.4 55.1
OCT 300.5 158.9 96.2 54.2 | 342.4 191.2 111.2 63.8 | 317.4 193.1 113.9 68.9 | 281.9 159.8 99.1 56.9
WAN 445.6 292.1 168.7 92.4 | 376.0 233.4 153.4 92.2 | 348.1 225.7 157.2 106.4 | 294.9 197.7 133.1 87.7
WU 268.1 147.2 86.7 51.0 | 299.2 167.3 95.4 58.3 | 278.9 165.5 102.2 66.1 | 261.2 150.1 89.5 55.0
CC 335.1 202.0 122.6 74.9 | 398.8 246.5 148.7 78.9 | 418.4 256.8 160.7 107.9 | 285.9 171.7 111.9 77.4
RWM 251.4 150.1 83.7 51.0 | 296.5 171.0 99.8 60.6 | 295.6 178.8 107.1 69.2 | 260.4 149.7 88.8 55.6
PWC 243.2 161.2 101.5 78.0 | 349.4 205.1 125.8 86.0 | 344.8 183.7 121.1 80.0 | 283.4 169.3 110.5 75.6
SAM 238.1 138.5 81.8 53.5 | 282.4 157.5 92.4 58.8 | 275.7 159.2 100.8 65.9 | 246.2 141.2 85.0 53.7
CY 248.0 146.6 89.3 53.0 | 313.2 178.6 106.7 64.5 | 317.3 186.1 114.1 72.6 | 237.8 157.9 96.4 58.8
VC 253.2 144.5 79.6 48.8 | 290.6 166.4 98.0 58.5 | 294.8 169.3 108.0 69.5 | 234.4 146.6 90.2 54.2
VCL 240.6 131.5 77.1 47.9 | 263.7 157.5 96.6 57.2 | 261.1 160.3 103.8 68.4 | 229.8 141.4 85.7 53.8
SOM 301.7 134.7 70.3 44.2 | 279.4 151.5 82.2 47.7 | 270.9 160.5 89.9 69.1 | 226.4 137.8 72.4 46.0
MMM 407.9 276.9 138.2 85.6 | 352.1 194.8 128.7 68.5 | 341.5 213.3 136.5 85.2 | 276.2 174.9 117.2 75.6
ADU 195.2 109.9 66.8 39.8 | 239.1 134.7 74.8 44.1 | 263.7 143.1 92.0 54.0 | 201.6 112.7 67.7 41.3
WSM 194.0 107.6 62.8 37.9 | 240.7 131.3 76.7 42.5 | 232.6 133.7 84.0 54.4 | 200.4 114.4 67.1 41.2

In each Lloyd–Max iteration described above, a color is assigned to the nearer of the two centroids. Given a color $c$, let $D = \|c - m_a\|_2^2 - \|c - m_b\|_2^2$. If $D < 0$, $c$ is assigned to $C_a$; otherwise, it is assigned to $C_b$. Therefore, each Lloyd–Max iteration requires ten additions/subtractions, six multiplications, and one comparison to classify one color vector. It can be shown that identical results can be achieved by using the following criterion [11]: $D' = \left(\|m_a\|_2^2 - \|m_b\|_2^2\right)/2 - (m_a - m_b) \cdot c$, where '·' denotes the dot product. This modified criterion costs only two additions/subtractions, three multiplications, and one comparison per color vector, as all operations except for the dot product can be precalculated.
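A minimal C sketch of this assignment test is given below. The pair-specific terms are precomputed once per Lloyd–Max iteration, so classifying a color then costs exactly the three multiplications, two additions, and one comparison mentioned above (since $D = 2D'$, the sign test is unchanged). The names are illustrative.

```c
/* Precompute d = m_a - m_b and bias = (||m_a||^2 - ||m_b||^2) / 2
 * once per Lloyd-Max iteration.                                          */
typedef struct {
    double d[3];
    double bias;
} SplitTest;

static SplitTest precompute_test(const double ma[3], const double mb[3]) {
    SplitTest t;
    double na = 0.0, nb = 0.0;
    for (int k = 0; k < 3; ++k) {
        t.d[k] = ma[k] - mb[k];
        na += ma[k] * ma[k];
        nb += mb[k] * mb[k];
    }
    t.bias = (na - nb) / 2.0;
    return t;
}

/* D' = bias - d.c has the same sign as D = ||c-m_a||^2 - ||c-m_b||^2, so
 * c belongs to C_a iff D' < 0, i.e., iff d.c > bias: three multiplications,
 * two additions, and one comparison per color.                           */
static int assign_to_a(const SplitTest *t, const double c[3]) {
    double dot = t->d[0] * c[0] + t->d[1] * c[1] + t->d[2] * c[2];
    return dot > t->bias;
}
```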


3 Experimental results and discussion

3.1 Image set and performance criteria

The proposed methods were tested on a set of eight true-color (24-bit) test images commonly used in the CQ literature: Baboon (USC-SIPI Image Database, 512 × 512; 230,427 colors), Fish (Luiz Velho, 300 × 200; 28,170 colors), Goldhill (Lee Crocker, 720 × 576; 90,966 colors), Lenna (USC-SIPI Image Database, 512 × 512; 148,279 colors), Motocross (Kodak Lossless True Color Image Suite, 768 × 512; 63,558 colors), Parrots (Kodak Lossless True Color Image Suite, 768 × 512; 72,079 colors), Peppers (USC-SIPI Image Database, 512 × 512; 183,525 colors), and Pills (Karel de Gendre, 800 × 519; 206,609 colors). These images are shown in Fig. 1.

Table 3 CPU time comparison of the quantization methods (time in ms; within each image group, the four columns correspond to K = 32, 64, 128, and 256)

Baboon | Fish | Goldhill | Lenna
POP 33 40 49.7 65 | 4 6 7 10 | 33 41 53.5 72 | 23 29 37 51
MC 5 5 4.8 6 | 2 1 2 2 | 8 6 7.2 9 | 5 5 4 5
MPOP 30 41 95.4 157 | 5 6 22 39 | 32 43 71.0 106 | 24 30 50 78
OCT 89 94 95.9 107 | 19 19 20 20 | 96 106 111.8 128 | 61 68 74 85
WAN 7 8 7.0 8 | 2 2 3 3 | 9 9 9.8 10 | 6 7 7 6
WU 9 6 7.2 8 | 2 2 3 3 | 10 11 10.3 11 | 7 7 7 8
CC 44 74 159.6 450 | 9 17 42 131 | 38 57 100.5 231 | 28 42 72 164
RWM 36 44 54.4 70 | 6 6 9 11 | 35 44 54.7 68 | 25 29 37 48
PWC 4,359 4,362 4,357.4 4,303 | 217 218 212 175 | 520 527 531.6 502 | 186 189 191 169
SAM 11 19 43.0 102 | 2 3 6 11 | 13 14 20.5 30 | 8 8 11 14
CY 35 44 54.8 69 | 5 6 8 11 | 34 43 55.4 70 | 24 30 38 49
VC 33 42 52.7 69 | 5 7 9 11 | 32 42 52.3 66 | 23 28 37 48
VCL 36 45 56.7 74 | 6 6 10 12 | 35 44 55.4 70 | 23 30 38 49
SOM 100 175 300.1 535 | 20 35 64 116 | 137 240 423.9 787 | 86 155 277 495
MMM 173 221 317.3 500 | 13 19 29 46 | 79 101 142.3 210 | 97 127 181 281
ADU 38 103 394.7 1,840 | 18 73 359 1,780 | 38 105 408.4 1,865 | 31 95 389 1,841
WSM 95 123 204.0 413 | 15 19 42 144 | 74 98 153.4 328 | 65 90 150 379

Motocross | Parrots | Peppers | Pills
POP 37 45 58.0 82 | 39 46 63 82 | 26 33 42.1 55 | 34 44 54 73
MC 6 6 5.7 7 | 6 6 3 9 | 4 5 5.0 6 | 6 7 8 8
MPOP 35 48 85.7 130 | 36 47 81 123 | 24 35 65.8 106 | 34 44 77 118
OCT 111 121 123.7 138 | 103 105 110 125 | 68 72 80.0 88 | 111 110 125 132
WAN 9 10 9.8 10 | 9 9 10 10 | 6 7 7.9 7 | 10 10 10 10
WU 10 10 10.0 10 | 10 10 10 10 | 7 7 7.8 7 | 11 11 11 11
CC 43 63 123.6 313 | 41 61 114 284 | 34 55 107.7 280 | 41 62 114 277
RWM 37 46 56.3 73 | 36 46 60 77 | 26 35 41.5 55 | 36 45 55 70
PWC 1,833 1,838 1,839.8 1,804 | 1,020 1,027 1,030 1,002 | 1,206 1,209 1,207.9 1,166 | 895 902 905 869
SAM 13 17 31.0 55 | 13 15 25 36 | 10 12 22.3 44 | 12 16 25 36
CY 37 46 56.1 73 | 37 47 61 79 | 26 34 41.4 56 | 35 44 58 72
VC 36 45 56.4 71 | 36 46 57 76 | 26 33 41.6 55 | 34 43 55 69
VCL 39 48 59.1 74 | 37 47 62 80 | 28 35 44.5 59 | 36 45 57 72
SOM 134 230 399.4 735 | 132 230 407 751 | 89 159 280.4 507 | 140 250 430 784
MMM 66 87 113.9 167 | 62 82 113 174 | 125 162 232.7 368 | 167 215 302 459
ADU 40 107 401.6 1,844 | 41 109 391 1,861 | 34 97 379.7 1,816 | 40 106 411 1,862
WSM 64 93 139.9 283 | 60 94 161 323 | 67 112 163.9 358 | 96 147 204 431

The effectiveness of a CQ method was quantified by the commonly used Mean Absolute Error (MAE) and Mean Squared Error (MSE) measures:

$\mathrm{MAE}(X, \hat{X}) = \frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}\left\|X(h,w) - \hat{X}(h,w)\right\|_1$   (1)

$\mathrm{MSE}(X, \hat{X}) = \frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}\left\|X(h,w) - \hat{X}(h,w)\right\|_2^2$   (2)

where $X$ and $\hat{X}$ denote, respectively, the H × W original and quantized images in the RGB color space.
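For reference, both measures can be computed with a single pass over the two images, as in the C sketch below; the interleaved 8-bit RGB layout is an assumption made for the example, not a requirement of the measures themselves.

```c
#include <stddef.h>

/* MAE averages the L1 distance per pixel, MSE the squared L2 distance,
 * between an original and a quantized image stored as interleaved RGB.  */
void mae_mse(const unsigned char *orig, const unsigned char *quant,
             size_t num_pixels, double *mae, double *mse) {
    double sum_abs = 0.0, sum_sq = 0.0;
    for (size_t i = 0; i < 3 * num_pixels; i += 3) {
        for (int k = 0; k < 3; ++k) {
            double d = (double)orig[i + k] - (double)quant[i + k];
            sum_abs += (d < 0) ? -d : d;
            sum_sq  += d * d;
        }
    }
    *mae = sum_abs / (double)num_pixels;
    *mse = sum_sq  / (double)num_pixels;
}
```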


MAE and MSE represent the average color distortion with respect to the $L_1$ (City-block) and $L_2^2$ (squared Euclidean) norms, respectively. Note that most of the other popular evaluation measures used in the CQ literature, such as peak signal-to-noise ratio (PSNR), normalized MSE, root MSE, and average color distortion [42, 52], are variants of either MAE or MSE.

The efficiency of a CQ method was measured by CPU time in milliseconds, which includes the time required for both the palette generation and pixel mapping phases. In order to perform a fair comparison, the fast pixel mapping algorithm described in [28] was used in CQ methods that lack an efficient pixel mapping phase. All of the programs were implemented in the C language, compiled with the gcc v4.4.3 compiler, and executed on an Intel Xeon E5520 2.26 GHz machine. The time figures were averaged over 100 runs.

Table 4 Rank comparison of the quantization methods

Method MAE MSE OVERALL
POP 16.1 16.4 16.3
MC 16.3 16.1 16.2
MPOP 9.8 9.8 9.8
OCT 9.5 10.6 10.1
WAN 13.3 15.0 14.2
WU 6.3 7.2 6.8
CC 13.3 13.0 13.1
RWM 7.8 7.4 7.6
PWC 12.4 12.0 12.2
SAM 8.0 6.8 7.4
CY 9.4 8.7 9.0
VC 6.2 5.8 6.0
VCL 4.5 4.0 4.3
SOM 2.3 4.2 3.2
MMM 13.9 13.0 13.5
ADU 1.4 1.8 1.6
WSM 2.3 1.3 1.8

3.2 Comparison of VC/VCL against other CQ methods

The proposed methods were compared to 15 well-known CQ methods:

– Popularity (POP) [24]: This method builds a 16 × 16 × 16 color histogram using 4 bits/channel uniform quantization and then takes the K most frequent colors in the histogram as the color palette.
– Median-cut (MC) [24]: This method starts by building a 32 × 32 × 32 color histogram using uniform quantization. This histogram volume is then recursively split into smaller boxes until K boxes are obtained. At each step, the box that contains the greatest number of colors is split along the longest axis at the median point, so that the resulting subboxes each contain approximately the same number of colors. The centroids of the final K boxes are taken as the color palette. In the experiments, the implementation of this method distributed as part of the GIMP software was used.
– Modified Popularity (MPOP) [4]: This method starts by building a $2^R \times 2^R \times 2^R$ color histogram using R bits/channel uniform quantization. It chooses the most frequent color as the first palette color $c_1$ and then reduces the frequency of each color $c$ by a factor of $1 - e^{-\alpha\|c - c_1\|_2^2}$, where α is a user-defined parameter. The remaining palette colors are chosen similarly. In the experiments, the best results were obtained with the following parameter configuration: α = 0.25 and R = 4 for K ≤ 64 and R = 5 otherwise.
– Octree (OCT) [22]: This two-phase method first builds an octree (a tree data structure in which each internal node has up to eight children) that represents the color distribution of the input image and then, starting from the bottom of the tree, prunes the tree by merging its nodes until K colors are obtained. In the experiments, the implementation of this method distributed as part of the ImageMagick software was used, and the tree depth was limited to 6 to obtain the best results.
– Variance-based method (WAN) [54]: This method is similar to MC with the exception that at each step the box with the greatest SSE is split along the axis with the least weighted sum of projected variances at the point that minimizes the marginal squared error. In the experiments, the implementation of this method distributed as part of the Utah Raster Toolkit software was used.
– Greedy orthogonal bipartitioning (WU) [58]: This method is similar to WAN with the exception that at each step the box is split along the axis that minimizes the sum of variances on both sides. In the experiments, Xiaolin Wu's implementation was used.
– Center-cut (CC) [31]: This method is similar to MC with the exception that at each step the box with the greatest range on any coordinate axis is split along its longest axis at the mean point.
– Self-organizing map (SOM) [16]: This method utilizes a one-dimensional self-organizing map with K neurons. A random subset of N/f pixels is used in the training phase, and the final weights of the neurons are taken as the color palette. In the experiments, Anthony Dekker's implementation was used and the sampling factor was set to f = 1 to obtain the best results.


Fig. 2 Lenna output images (K = 32). a Original, b MC output, c OCT output, d CY output, e VC output, f SOM output, g WSM output

– Radius-weighted mean-cut (RWM) [64]: This method is similar to WAN with the exception that the box is split along the vector from the origin to the radius-weighted mean (rwm) at the rwm point.
– Modified maxmin (MMM) [59]: This method chooses the first palette color $c_1$ arbitrarily from the input image colors, and the ith color $c_i$ ($i = 2, 3, \ldots, K$) is chosen to be the color that has the greatest minimum weighted $L_2^2$ distance (the weights for the red, green, and blue channels are taken as 0.5, 1.0, and 0.25, respectively) to the previously selected colors, i.e., $c_1, c_2, \ldots, c_{i-1}$. Each of these initial palette colors is then recalculated as the mean of the colors assigned to it. In the experiments, the first color was chosen as the centroid of the input image colors.


Fig. 3 Lenna error images (K = 32). a MC error, b OCT error, c CY error, d VC error, e SOM error, f WSM error

– Pairwise clustering (PWC) [51]: This method is an adaptation of Ward's agglomerative hierarchical clustering method [56] to CQ. It builds a $2^R \times 2^R \times 2^R$ color histogram and constructs a Q × Q joint quantization error matrix, where Q is the number of colors in the reduced color histogram. The clustering procedure starts with Q singleton clusters, each of which contains one image color. In each iteration, the pair of clusters with the least joint quantization error is merged. This merging process is repeated until K clusters remain. Note that this method has an $O(Q^3)$ time complexity; therefore, to limit its computational requirements, the uniform quantization parameter was set to R = 4.


Fig. 4 Motocross output images (K = 64). a Original, b POP output, c CC output, d RWM output, e VCL output, f ADU output, g WSM output

– Split and Merge (SAM) [5]: This two-phase method first partitions the color space uniformly into B partitions. This initial set of B clusters is represented as an adjacency graph. In the second phase, (B - K) merge operations are performed to obtain the final K clusters. At each step of the second phase, the pair of clusters with the least joint quantization error is merged. In the experiments, Luc Brun's implementation was used and the initial number of clusters was set to B = 20K to obtain the best results.
– Cheng and Yang (CY) [13]: This method is similar to WAN with the exception that at each step the box is split, at the mean point, along a specially chosen line defined by the mean color and the color that is farthest away from it.


Fig. 5 Motocross error images (K = 64). a POP error, b CC error, c RWM error, d VCL error, e ADU error, f WSM error

– Adaptive distributing units (ADU) [8]: This method is an adaptation of Uchiyama and Arbib's clustering algorithm [50] to CQ. ADU is a competitive learning algorithm in which units compete to represent the input point presented in each iteration. The winner is then rewarded by moving it closer to the input point at a rate of γ (the learning rate). The procedure starts with a single unit whose center is given by the centroid of the input points. New units are added by splitting existing units that reach the threshold number of wins θ until the number of units reaches K. Following [8], the algorithm parameters were set to $\theta = 400\sqrt{K}$, $t_{\max} = (2K - 3)\theta$, and γ = 0.015.
– Weighted sort-means (WSM) [9]: This method is an adaptation of the conventional k-means clustering algorithm to CQ. It involves data reduction, sample weighting, and accelerated nearest neighbor search. In the experiments, WSM was initialized by the proposed VCL method.

Tables 1, 2, and 3 compare the methods with respect to MAE, MSE, and CPU time, respectively. For each test image, the results of the preclustering and postclustering methods are listed separately for convenient comparison. For the effectiveness criteria, the best (lowest) error values are shown in bold. In addition, for each image and K value combination, the methods are ranked based on their MAE and MSE separately. Table 4 shows the average MAE and MSE ranks of the methods. The last column gives the overall ranks under the assumption that both effectiveness criteria have equal importance. Note that the best possible rank is 1. The following observations are in order:

• In general, postclustering methods are more effective but less efficient than preclustering methods.
• VC and VCL are generally more effective than the other preclustering methods. Except in a few cases, VCL outperforms VC, and the MSE difference between the two can be as large as 12 %. Note that the Lloyd–Max iterations used in VCL perform optimization within individual subpartitions rather than over the entire set of K subpartitions. Therefore, in rare cases, e.g., Fish (K = 32) and Lenna (K = 32), such local optimization can in fact slightly reduce the quality of the final color palette. Nevertheless, in such atypical cases, the MSE difference between VC and VCL is negligible. Given that VCL is only about 6 % slower than VC, it is generally preferable to use the former (VCL), as it gives higher quality results.

Fig. 6 Baboon output images (K = 128). a Original, b WAN output, c MPOP output, d SAM output, e WU output, f VC output, g WSM output

• POP is the least effective preclustering method. This is not surprising, as this method disregards colors in sparse regions of the color space. Interestingly, despite being a simple modification of POP, MPOP performs surprisingly well, surpassing some of the better known methods such as MC, OCT, WAN, CC, and PWC.
• ADU and WSM are the most effective postclustering methods. Both methods have similar effectiveness, but the latter can be significantly faster than the former, especially for moderate-to-large K values. Therefore, WSM initialized by VCL is often more practical than ADU.
• Despite its iterative nature, MMM performs poorly even when compared to preclustering methods. This is because this method tries to distribute the quantization distortion more or less evenly throughout the image at the expense of increased mean distortion.
• MC, WAN, and WU are the fastest preclustering methods. It should be noted that the implementations of these methods have been heavily optimized for integer arithmetic by their respective authors. In addition, these implementations take advantage of lookup tables (LUTs) that are practical only when the input image is preprocessed using uniform quantization. This is because such LUTs typically do not fit into the CPU cache unless each color is reduced to 15 bits or fewer. In contrast, our implementation of VC/VCL utilizes only efficient data structures and algebraic equalities. Therefore, if implemented in a similarly optimized fashion, VC/VCL should achieve computational performance similar to that of MC, WAN, and WU. Furthermore, our methods do not require uniform quantization, which often introduces significant visible distortion. We employed 5 bits/channel uniform quantization in our experiments only to ensure a fair comparison with the other preclustering methods.

Fig. 7 Baboon error images (K = 128). a WAN error, b MPOP error, c SAM error, d WU error, e VC error, f WSM error


Figures 2, 4, and 6 show sample quantization results for close-up parts of the Lenna, Motocross, and Baboon images, respectively. Figures 3, 5, and 7 show the full-scale error images for the respective images. The error image for a particular CQ method was obtained by taking the pixelwise absolute difference between the original and quantized images. In order to obtain a better visualization, the pixel values of the error images were multiplied by 4 and then negated. It can be seen that the proposed VC and VCL methods perform exceptionally well in allocating representative colors to various image regions, resulting in cleaner error images.

4 Conclusions

In this paper, an effective real-time color quantization method called variance-cut (VC) was introduced. The method is based on divisive hierarchical clustering and involves iterative binary splitting of the input image color space using axis-parallel planes. The effectiveness of VC can be improved using a few Lloyd–Max iterations after each split at the expense of about 6 % computational overhead. Moreover, VC coupled with Lloyd–Max can be used to initialize the k-means clustering algorithm to further reduce the quantization distortion.

Extensive experiments on a diverse set of classic test images demonstrated that the proposed methods outperform well-known quantization methods with respect to distortion minimization. The presented methods are relatively easy to implement and highly efficient. Furthermore, unlike many existing preclustering methods, they do not require uniform quantization.

Acknowledgments This publication was made possible by grants from the Louisiana Board of Regents (LEQSF2008-11-RD-A-12), US National Science Foundation (0959583, 1117457), and National Natural Science Foundation of China (61050110449, 61073120).

References

1. Balasubramanian, R., Allebach, J.: A new approach to palette selection for color images. J. Imaging Technol. 17(6), 284–290 (1991)
2. Bentley, J.L.: Multidimensional binary search trees used for associative searching. Commun. ACM 18(9), 509–517 (1975)
3. Bing, Z., Junyi, S., Qinke, P.: An adjustable algorithm for color quantization. Pattern Recogn. Lett. 25(16), 1787–1797 (2004)
4. Braudaway, G.W.: Procedure for optimum choice of a small number of colors from a large color palette for color imaging. In: Proceedings of the Electronic Imaging Conference, pp. 71–75 (1987)
5. Brun, L., Mokhtari, M.: Two high speed color quantization algorithms. In: Proceedings of the 1st International Conference on Color in Graphics and Image Processing, pp. 116–121 (2000)
6. Brun, L., Trémeau, A.: Color quantization. In: Sharma, G. (ed.) Digital Color Imaging Handbook, CRC Press, pp. 589–638 (2002)
7. Cak, S., Dizdar, E.N., Ersak, A.: A fuzzy colour quantizer for renderers. Displays 19(2), 61–65 (1998)
8. Celebi, M.E.: An effective color quantization method based on the competitive learning paradigm. In: Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition, vol. 2, pp. 876–880 (2009)
9. Celebi, M.E.: Improving the performance of K-means for color quantization. Image Vis. Comput. 29(4), 260–271 (2011)
10. Celebi, M.E., Schaefer, G.: Neural gas clustering for color reduction. In: Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition, pp. 429–432 (2010)
11. Chan, C.K., Ma, C.K.: A fast method of designing better codebooks for image vector quantization. IEEE Trans. Commun. 42(2/3/4), 237–242 (1994)
12. Chang, C.H., Xu, P., Xiao, R., Srikanthan, T.: New adaptive color quantization method based on self-organizing maps. IEEE Trans. Neural Netw. 16(1), 237–249 (2005)
13. Cheng, S., Yang, C.: Fast and novel technique for color quantization using reduction of color space dimensionality. Pattern Recogn. Lett. 22(8), 845–856 (2001)
14. Chung, K.L., Huang, Y.H., Wang, J.P., Cheng, M.S.: Speedup of color palette indexing in self-organization of Kohonen feature map. Expert Syst. Appl. 39(3), 2427–2432 (2012)
15. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms. The MIT Press, Cambridge (2009)
16. Dekker, A.: Kohonen neural networks for optimal colour quantization. Netw. Comput. Neural Syst. 5(3), 351–367 (1994)
17. Deng, Y., Manjunath, B.: Unsupervised segmentation of color-texture regions in images and video. IEEE Trans. Pattern Anal. Mach. Intell. 23(8), 800–810 (2001)
18. Deng, Y., Manjunath, B., Kenney, C., Moore, M., Shin, H.: An efficient color representation for image retrieval. IEEE Trans. Image Process. 10(1), 140–147 (2001)
19. Equitz, W.H.: A new vector quantization clustering algorithm. IEEE Trans. Acoust. Speech Signal Process. 37(10), 1568–1575 (1989)
20. Frackiewicz, M., Palus, H.: KM and KHM clustering techniques for colour image quantisation. In: Tavares, J.M.R.S., Natal Jorge, R.M. (eds.) Computational Vision and Medical Image Processing: Recent Trends, Springer, Berlin, pp. 161–174 (2011)
21. Gentile, R.S., Allebach, J.P., Walowit, E.: Quantization of color images based on uniform color spaces. J. Imaging Technol. 16(1), 11–21 (1990)
22. Gervautz, M., Purgathofer, W.: Simple method for color quantization: octree quantization. In: Magnenat-Thalmann, N., Thalmann, D. (eds.) New Trends in Computer Graphics, Springer, Berlin, pp. 219–231 (1988)
23. Goldberg, N.: Colour image quantization for high resolution graphics display. Image Vis. Comput. 9(5), 303–312 (1991)
24. Heckbert, P.: Color image quantization for frame buffer display. ACM SIGGRAPH Comput. Graph. 16(3), 297–307 (1982)
25. Hsieh, I.S., Fan, K.C.: An adaptive clustering algorithm for color quantization. Pattern Recogn. Lett. 21(4), 337–346 (2000)


26. Hu, Y.C., Lee, M.G.: K-means based color palette design scheme with the use of stable flags. J. Electron. Imaging 16(3), 033003 (2007)
27. Hu, Y.C., Su, B.H.: Accelerated K-means clustering algorithm for colour image quantization. Imaging Sci. J. 56(1), 29–40 (2008)
28. Hu, Y.C., Su, B.H.: Accelerated pixel mapping scheme for colour image quantisation. Imaging Sci. J. 56(2), 68–78 (2008)
29. Huang, Y.L., Chang, R.F.: A fast finite-state algorithm for generating RGB palettes of color quantized images. J. Inf. Sci. Eng. 20(4), 771–782 (2004)
30. Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Comput. Surv. 31(3), 264–323 (1999)
31. Joy, G., Xiang, Z.: Center-cut for color image quantization. Vis. Comput. 10(1), 62–66 (1993)
32. Kanjanawanishkul, K., Uyyanonvara, B.: Novel fast color reduction algorithm for time-constrained applications. J. Vis. Commun. Image Represent. 16(3), 311–332 (2005)
33. Kasuga, H., Yamamoto, H., Okamoto, M.: Color quantization using the fast k-means algorithm. Syst. Comput. Jpn. 31(8), 33–40 (2000)
34. Kim, D.W., Lee, K., Lee, D.: A novel initialization scheme for the fuzzy c-means algorithm for color clustering. Pattern Recogn. Lett. 25(2), 227–237 (2004)
35. Kuo, C.T., Cheng, S.C.: Fusion of color edge detection and color quantization for color image watermarking using principal axes analysis. Pattern Recogn. 40(12), 3691–3704 (2007)
36. Lloyd, S.: Least squares quantization in PCM. IEEE Trans. Inf. Theory 28(2), 129–136 (1982)
37. Lo, K., Chan, Y., Yu, M.: Colour quantization by three-dimensional frequency diffusion. Pattern Recogn. Lett. 24(14), 2325–2334 (2003)
38. Max, J.: Quantizing for minimum distortion. IRE Trans. Inf. Theory 6(1), 7–12 (1960)
39. Mojsilovic, A., Soljanin, E.: Color quantization and processing by Fibonacci lattices. IEEE Trans. Image Process. 10(11), 1712–1725 (2001)
40. Orchard, M., Bouman, C.: Color quantization of images. IEEE Trans. Signal Process. 39(12), 2677–2690 (1991)
41. Ozdemir, D., Akarun, L.: Fuzzy algorithm for color quantization of images. Pattern Recogn. 35(8), 1785–1791 (2002)
42. Papamarkos, N., Atsalakis, A., Strouthopoulos, C.: Adaptive color reduction. IEEE Trans. Syst. Man Cybern. B 32(1), 44–56 (2002)
43. Rasti, J., Monadjemi, A., Vafaei, A.: Color reduction using a multi-stage Kohonen self-organizing map with redundant features. Expert Syst. Appl. 38(10), 13188–13197 (2011)
44. Schaefer, G.: Intelligent approaches to colour palette design. In: Kwasnicka, H., Jain, L.C. (eds.) Innovations in Intelligent Image Analysis, Springer, Berlin, pp. 275–289 (2011)
45. Schaefer, G., Zhou, H.: Fuzzy clustering for colour reduction in images. Telecommun. Syst. 40(1–2), 17–25 (2009)
46. Scheunders, P.: Comparison of clustering algorithms applied to color image quantization. Pattern Recogn. Lett. 18(11–13), 1379–1384 (1997)
47. Sertel, O., Kong, J., Catalyurek, U.V., Lozanski, G., Saltz, J.H., Gurcan, M.N.: Histopathological image analysis using model-based intermediate representations and color texture: follicular lymphoma grading. J. Signal Process. Syst. 55(1–3), 169–183 (2009)
48. Sherkat, N., Allen, T., Wong, S.: Use of colour for hand-filled form analysis and recognition. Pattern Anal. Appl. 8(1), 163–180 (2005)
49. Sirisathitkul, Y., Auwatanamongkol, S., Uyyanonvara, B.: Color image quantization using distances between adjacent colors along the color axis with highest color variance. Pattern Recogn. Lett. 25(9), 1025–1043 (2004)
50. Uchiyama, T., Arbib, M.: An algorithm for competitive learning in clustering problems. Pattern Recogn. 27(10), 1415–1421 (1994)
51. Velho, L., Gomez, J., Sobreiro, M.V.R.: Color image quantization by pairwise clustering. In: Proceedings of the 10th Brazilian Symposium on Computer Graphics and Image Processing, pp. 203–210 (1997)
52. Verevka, O., Buchanan, J.: Local k-means algorithm for colour image quantization. In: Proceedings of the Graphics/Vision Interface Conference, pp. 128–135 (1995)
53. Wan, S.J., Wong, S.K.M., Prusinkiewicz, P.: An algorithm for multidimensional data clustering. ACM Trans. Math. Softw. 14(2), 153–162 (1988)
54. Wan, S.J., Prusinkiewicz, P., Wong, S.K.M.: Variance-based color image quantization for frame buffer display. Color Res. Appl. 15, 52–58 (1990)
55. Wang, S., Cai, K., Lu, J., Liu, X., Wu, E.: Real-time coherent stylization for augmented reality. Vis. Comput. 26(6–8), 445–455 (2010)
56. Ward, J.: Hierarchical grouping to optimize an objective function. J. Am. Stat. Assoc. 58(301), 236–244 (1963)
57. Wen, Q., Celebi, M.E.: Hard versus fuzzy c-means clustering for color quantization. EURASIP J. Adv. Sig. Process. 2011, 118–129 (2011)
58. Wu, X.: Efficient statistical computations for optimal color quantization. In: Arvo, J. (ed.) Graphics Gems, vol. II, Academic Press, London, pp. 126–133 (1991)
59. Xiang, Z.: Color image quantization by minimizing the maximum intercluster distance. ACM Trans. Graph. 16(3), 260–276 (1997)
60. Xiang, Z.: Color quantization. In: Gonzalez, T.F. (ed.) Handbook of Approximation Algorithms and Metaheuristics, Chapman & Hall/CRC, London, pp. 86-1–86-17 (2007)
61. Xiang, Z., Joy, G.: Color image quantization by agglomerative clustering. IEEE Comput. Graph. Appl. 14(3), 44–48 (1994)
62. Xiao, Y., Leung, C.S., Lam, P.M., Ho, T.Y.: Self-organizing map-based color palette for high-dynamic range texture compression. Neural Comput. Appl. 21(4), 639–647 (2012)
63. Yang, C.K., Tsai, W.H.: Color image compression using quantization, thresholding, and edge detection techniques all based on the moment-preserving principle. Pattern Recogn. Lett. 19(2), 205–215 (1998)
64. Yang, C.Y., Lin, J.C.: RWM-cut for color image quantization. Comput. Graph. 20(4), 577–588 (1996)

Author Biography

M. Emre Celebi received his B.Sc. degree in Computer Engineering from the Middle East Technical University (Ankara, Turkey) in 2002. He received his M.Sc. and Ph.D. degrees in Computer Science and Engineering from the University of Texas at Arlington (Arlington, TX, USA) in 2003 and 2006, respectively. He is currently an Associate Professor and the founding director of the Image Processing and Analysis Laboratory in the Department of Computer Science at the Louisiana State University in Shreveport. Dr. Celebi has actively pursued research in the field of image processing and analysis with an emphasis on medical image analysis and color image processing. He has worked on several projects funded by the US National Science Foundation (NSF) and National Institutes of Health (NIH) and published over 100 articles in premier journals and conference proceedings. His research contributions are covered in two recent books published by Wiley Interscience: Image Processing - Principles and Applications (Acharya and Ray, 2005) and Clustering (Xu and Wunsch, 2009). His recent research is funded by grants from the NSF and Louisiana Board of Regents. Dr. Celebi is an editorial board member of 6 international journals, reviews for over 60 international journals, and served on the program committee of more than 50 international conferences. He has been invited as a speaker to several colloquia, workshops, and conferences, is the organizer of several workshops, and the editor of several journal special issues and books. He is a senior member of the IEEE and SPIE.
