VECTOR QUANTIZATION AND SIGNAL COMPRESSION

THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND COMPUTER SCIENCE

COMMUNICATIONS AND INFORMATION THEORY

Consulting Editor: Robert Gallager

Other books in the series:

Digital Communication, Edward A. Lee and David G. Messerschmitt. ISBN: 0-89838-274-2
An Introduction to Cryptology, Henk C.A. van Tilborg. ISBN: 0-89838-271-8
Finite Fields for Computer Scientists and Engineers, Robert J. McEliece. ISBN: 0-89838-191-6
An Introduction to Error Correcting Codes With Applications, Scott A. Vanstone and Paul C. van Oorschot. ISBN: 0-7923-9017-2
Source Coding Theory, Robert M. Gray. ISBN: 0-7923-9048-2
Switching and Traffic Theory for Integrated Broadband Networks, Joseph Y. Hui. ISBN: 0-7923-9061-X
Advances in Speech Coding, Bishnu Atal, Vladimir Cuperman and Allen Gersho. ISBN: 0-7923-9091-1
Source and Channel Coding: An Algorithmic Approach, John B. Anderson and Seshadri Mohan. ISBN: 0-7923-9210-8
Third Generation Wireless Information Networks, edited by Sanjiv Nanda and David J. Goodman. ISBN: 0-7923-9128-3

VECTOR QUANTIZATION AND SIGNAL COMPRESSION

by

Allen Gersho, University of California, Santa Barbara
Robert M. Gray, Stanford University


SPRINGER SCIENCE+BUSINESS MEDIA, LLC

Library of Congress Cataloging-in-Publication Data

Gersho, Allen.
  Vector quantization and signal compression / by Allen Gersho, Robert M. Gray.
    p. cm. -- (Kluwer international series in engineering and computer science ; SECS 159)
  Includes bibliographical references and index.
  ISBN 978-1-4613-6612-6  ISBN 978-1-4615-3626-0 (eBook)
  DOI 10.1007/978-1-4615-3626-0
  1. Signal processing--Digital techniques. 2. Data compression () 3. Coding theory. I. Gray, Robert M., 1943- . II. Title. III. Series.
TK5102.5.G45 1991  621.382'2--dc20  91-28580 CIP

This book was prepared with LaTeX and reproduced by Kluwer from camera-ready copy supplied by the authors.

Copyright © 1992 Springer Science+Business Media New York. Eighth Printing 2001. Originally published by Kluwer Academic Publishers, New York in 1992. Softcover reprint of the hardcover 1st edition 1992. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, mechanical, photo-copying, recording, or otherwise, without the prior written permission of the publisher, Springer Science+Business Media, LLC.

Printed on acid-free paper.

to Roberta & Lolly

Contents

Preface

1 Introduction
  1.1 Signals, Coding, and Compression
  1.2 Optimality
  1.3 How to Use this Book
  1.4 Related Reading

I Basic Tools

2 Random Processes and Linear Systems
  2.1 Introduction
  2.2 Probability
  2.3 Random Variables and Vectors
  2.4 Random Processes
  2.5 Expectation
  2.6 Linear Systems
  2.7 Stationary and Ergodic Properties
  2.8 Useful Processes
  2.9 Problems

3 Sampling
  3.1 Introduction
  3.2 Periodic Sampling
  3.3 Noise in Sampling
  3.4 Practical Sampling Schemes
  3.5 Sampling Jitter
  3.6 Multidimensional Sampling
  3.7 Problems


4 Linear Prediction
  4.1 Introduction
  4.2 Elementary Estimation Theory
  4.3 Finite-Memory Linear Prediction
  4.4 Forward and Backward Prediction
  4.5 The Levinson-Durbin Algorithm
  4.6 Linear Predictor Design from Empirical Data
  4.7 Minimum Delay Property
  4.8 Predictability and Determinism
  4.9 Infinite Memory Linear Prediction
  4.10 Simulation of Random Processes
  4.11 Problems

II Scalar Coding

5 Scalar Quantization I
  5.1 Introduction
  5.2 Structure of a Quantizer
  5.3 Measuring Quantizer Performance
  5.4 The Uniform Quantizer
  5.5 Nonuniform Quantization and Companding
  5.6 High Resolution: General Case
  5.7 Problems

6 Scalar Quantization II
  6.1 Introduction
  6.2 Conditions for Optimality
  6.3 High Resolution Optimal Companding
  6.4 Quantizer Design Algorithms
  6.5 Implementation
  6.6 Problems

7 Predictive Quantization
  7.1 Introduction
  7.2 Difference Quantization
  7.3 Closed-Loop Predictive Quantization
  7.4 Delta Modulation
  7.5 Problems

8 Bit Allocation and Transform Coding
  8.1 Introduction
  8.2 The Problem of Bit Allocation
  8.3 Optimal Bit Allocation Results
  8.4 Integer Constrained Allocation Techniques
  8.5 Transform Coding
  8.6 Karhunen-Loeve Transform
  8.7 Performance Gain of Transform Coding
  8.8 Other Transforms
  8.9 Sub-band Coding
  8.10 Problems

9 Entropy Coding
  9.1 Introduction
  9.2 Variable-Length Scalar Noiseless Coding
  9.3 Prefix Codes
  9.4 Huffman Coding
  9.5 Vector Entropy Coding
  9.6 Arithmetic Coding
  9.7 Universal and Adaptive Entropy Coding
  9.8 Ziv-Lempel Coding
  9.9 Quantization and Entropy Coding
  9.10 Problems

III Vector Coding

10 Vector Quantization I
  10.1 Introduction
  10.2 Structural Properties and Characterization
  10.3 Measuring Vector Quantizer Performance
  10.4 Nearest Neighbor Quantizers
  10.5 Lattice Vector Quantizers
  10.6 High Resolution Distortion Approximations
  10.7 Problems

11 Vector Quantization II
  11.1 Introduction
  11.2 Optimality Conditions for VQ
  11.3 Vector Quantizer Design
  11.4 Design Examples
  11.5 Problems

12 Constrained Vector Quantization
  12.1 Introduction
  12.2 Complexity and Storage Limitations
  12.3 Structurally Constrained VQ
  12.4 Tree-Structured VQ
  12.5 Classified VQ
  12.6 Transform VQ
  12.7 Product Code Techniques
  12.8 Partitioned VQ
  12.9 Mean-Removed VQ
  12.10 Shape-Gain VQ
  12.11 Multistage VQ
  12.12 Constrained Storage VQ
  12.13 Hierarchical and Multiresolution VQ
  12.14 Nonlinear Interpolative VQ
  12.15 Lattice Codebook VQ
  12.16 Fast Nearest Neighbor Encoding
  12.17 Problems

13 Predictive Vector Quantization
  13.1 Introduction
  13.2 Predictive Vector Quantization
  13.3 Vector Linear Prediction
  13.4 Predictor Design from Empirical Data
  13.5 Nonlinear Vector Prediction
  13.6 Design Examples
  13.7 Problems

14 Finite-State Vector Quantization
  14.1 Recursive Vector Quantizers
  14.2 Finite-State Vector Quantizers
  14.3 Labeled-States and Labeled-Transitions
  14.4 Encoder/Decoder Design
  14.5 Next-State Function Design
  14.6 Design Examples
  14.7 Problems

15 Tree and Trellis Encoding
  15.1 Delayed Decision Encoder
  15.2 Tree and Trellis Coding
  15.3 Decoder Design
  15.4 Predictive Trellis Encoders
  15.5 Other Design Techniques
  15.6 Problems

16 Adaptive Vector Quantization
  16.1 Introduction
  16.2 Mean Adaptation
  16.3 Gain-Adaptive Vector Quantization
  16.4 Switched Codebook Adaptation
  16.5 Adaptive Bit Allocation
  16.6 Address VQ
  16.7 Progressive Code Vector Updating
  16.8 Adaptive Codebook Generation
  16.9 Vector Excitation Coding
  16.10 Problems

17 Variable Rate Vector Quantization
  17.1 Variable Rate Coding
  17.2 Variable Dimension VQ
  17.3 Alternative Approaches to Variable Rate VQ
  17.4 Pruned Tree-Structured VQ
  17.5 The Generalized BFOS Algorithm
  17.6 Pruned Tree-Structured VQ
  17.7 Entropy Coded VQ
  17.8 Greedy Tree Growing
  17.9 Design Examples
  17.10 Bit Allocation Revisited
  17.11 Design Algorithms
  17.12 Problems

Bibliography

Index

Preface

Herb Caen, a popular columnist for the San Francisco Chronicle, recently quoted a Voice of America press release as saying that it was reorganizing in order to "eliminate duplication and redundancy." This quote both states a goal of data compression and illustrates its common need: the removal of duplication (or redundancy) can provide a more efficient representation of data, and the quoted phrase is itself a candidate for such surgery. Not only can the number of words in the quote be reduced without losing information, but the statement would actually be enhanced by such compression since it will no longer exemplify the wrong that the policy is supposed to correct. Here compression can streamline the phrase and minimize the embarrassment while improving the English style. Compression in general is intended to provide efficient representations of data while preserving the essential information contained in the data.

This book is devoted to the theory and practice of signal compression, i.e., data compression applied to signals such as speech, audio, images, and video signals (excluding other data types such as financial data or general-purpose computer data). The emphasis is on the conversion of analog waveforms into efficient digital representations and on the compression of digital information into the fewest possible bits. Both operations should yield the highest possible reconstruction fidelity subject to constraints on the bit rate and implementation complexity. The conversion of signals into such efficient digital representations has several goals:

• to minimize the communication capacity required for transmission of high quality signals such as speech and images or, equivalently, to get the best possible fidelity over an available digital communication channel,

• to minimize the storage capacity required for saving such information in fast storage media and in archival data bases or, equivalently, to get the best possible quality for the largest amount of information


stored in a given medium,

• to provide the simplest possible accurate descriptions of a signal so as to minimize the subsequent complexity of signal processing algorithms such as classification, transformation, and encryption.

In addition to these common goals of communication, storage, and signal processing systems, efficient coding of both analog and digital information is intimately connected to a variety of other fields including pattern recognition, image classification, speech recognition, cluster analysis, regression, and decision tree design. Thus techniques from each field can often be extended to another, and combined signal processing operations can take advantage of the similar algorithm structures and designs.

During the late 1940s and the 1950s, Claude Shannon developed a theory of source coding in order to quantify the optimal achievable performance trade-offs in analog-to-digital (A/D) conversion and data compression systems. The theory made precise the best possible tradeoffs between bit rates and reproduction quality for certain idealized communication systems and it provided suggestions of good coding structures. Unfortunately, however, it did not provide explicit design techniques for coding systems and the performance bounds were of dubious relevance to real world data such as speech and images. On the other hand, two fundamental ideas in Shannon's original work did lead to a variety of coder design techniques over time. The first idea was that purely digital signals could be compressed by assigning shorter codewords to more probable signals and that the maximum achievable compression could be determined from a statistical description of the signal. This led to the idea of noiseless or lossless coding, which for reasons we shall see is often called entropy coding. The second idea was that coding systems can perform better if they operate on vectors or groups of symbols (such as speech samples or pixels in images) rather than on individual symbols or samples.
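As a concrete illustration of the first idea (our sketch, not code from the book): the Huffman procedure treated in Chapter 9 repeatedly merges the two least probable symbols, so more probable symbols end up with shorter codewords. For the dyadic toy source below, the average codeword length exactly meets the entropy:

```python
import heapq
from math import log2

def huffman_code(probs):
    """Build a binary prefix code assigning shorter words to likelier symbols."""
    # Heap entries: (probability, tiebreaker, partial code for the subtree).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Merge the two least probable subtrees, prefixing their words by 0 and 1.
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(probs)
avg_bits = sum(probs[s] * len(w) for s, w in code.items())
entropy = -sum(p * log2(p) for p in probs.values())
print(code)               # shorter codewords for more probable symbols
print(avg_bits, entropy)  # 1.75 1.75 -- average length meets the entropy here
```

For non-dyadic probabilities the Huffman average length exceeds the entropy by less than one bit per symbol, which is one motivation for the vector and arithmetic coding methods discussed later.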
Although the first idea led rapidly to a variety of specialized coder design techniques, the second idea of coding vectors took many years before yielding useful coding schemes. In the meantime a variety of effective coding systems for analog-to-digital conversion and data compression were developed that performed the essential conversion operation on scalars, although they often indirectly coded vectors by performing preprocessing, such as prediction or transformation, on the signal before the scalar quantization. In the 1980s vector coding, or vector quantization, came of age and made an impact on the technology of signal compression. Several commercial products for speech and video coding have emerged which are based on vector coding ideas. This book emphasizes the vector coding techniques first described by Shannon, but only developed and applied during the past twelve years.

To accomplish this in a self-contained fashion, however, it is necessary to first provide several prerequisites and it is useful to develop the traditional scalar coding techniques with the benefit of hindsight and from a unified viewpoint. For these reasons this book is divided into three parts. Part I provides a survey of the prerequisite theory and notation. Although all of the results are well known, much of the presentation and several of the proofs are new and are matched to their later applications. Part II provides a detailed development of the fundamentals of traditional scalar quantization techniques. The development is purposefully designed to facilitate the later development of vector quantization techniques. Part III forms the heart of the book. It is the first published in-depth development in book form of the basic principles of vector quantizers together with a description of a wide variety of coding structures, design algorithms, and applications.

Vector quantization is simply the coding structure developed by Shannon in his theoretical development of source coding with a fidelity criterion. Conceptually it is an extension of the simple scalar quantizers of Part II to multidimensional spaces; that is, a vector quantizer operates on vectors instead of scalars. Shannon called vector quantizers "block source codes with a fidelity criterion" and they have also been called "block quantizers." Much of the material contained in Part III is relatively recent in origin. The development of useful design algorithms and coding structures began in the late 1970s and interest in vector quantization expanded rapidly in the 1980s. Prior to that time digital signal processing circuitry was not fast enough and the memories were not large enough to use vector coding techniques in real time and there was little interest in design algorithms for such codes.
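A minimal sketch of this structure (the two-vector codebook and the function names are our own illustration, not the book's notation): the encoder searches a codebook for the nearest code vector and transmits only its index, and the decoder reproduces the stored vector by table lookup.

```python
# A hypothetical two-vector codebook for dimension-2 inputs; in practice the
# codebook would be produced by a design algorithm such as Lloyd clustering.
codebook = [(0.0, 0.0), (4.0, 4.0)]

def encode(x, codebook):
    """Nearest-neighbor rule: transmit only the index of the closest code vector."""
    dist = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in codebook]
    return min(range(len(codebook)), key=dist.__getitem__)

def decode(index, codebook):
    """The decoder is a simple table lookup into the same codebook."""
    return codebook[index]

x = (0.5, 1.0)
i = encode(x, codebook)        # squared distances 1.25 vs 21.25, so index 0
print(i, decode(i, codebook))  # 0 (0.0, 0.0)
```

With N code vectors of dimension k, each input vector costs log2(N) bits, i.e., log2(N)/k bits per sample, which is how the rate-complexity tradeoffs of Part III arise.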
The rapid advance in digital signal processor chips in the past decade made possible low cost implementations of such algorithms that would have been totally infeasible in the 1970s. During the past ten years, vector quantization has proved a valuable coding technique in a variety of applications, especially in voice and image coding. This is because of its simple structure, its ability to trade ever cheaper memory for often expensive computation, and the often serendipitous structural properties of the codes designed by iterative clustering algorithms. As an example of the desirable structural properties of vector quantizers, suitably designed tree-structured codes are nested and are naturally optimized for progressive transmission applications where one progressively improves a signal (such as an image) as more bits arrive. Another example is the ability of clustering algorithms used to design vector quantizers to enhance certain features of the original signal such as small tumors in a medical image. In many applications the traditional scalar techniques remain dominant and likely will remain so, but their vector extensions are finding a steadily increasing niche in signal compression and other signal processing applications.

This book grew out of the authors' long-standing interests in a wide range of theoretical and practical problems in analog-to-digital conversion and data compression. Our common interest in vector quantization and our cooperation and competition date from the late 1970s and continue to the present. Combined with our pedagogical interests, writing this book has been a common goal and chore for years. Other responsibilities and a constant desire to add more new material made progress slower than we would have liked.
Many compromises have finally led to a completed book that includes most of the originally intended contents and provides a useful reference and teaching text, but it is less than the perfect treatment of our fantasies. Many interesting and useful techniques omitted here have emerged in the ever widening research literature of vector quantization. We apologize to the authors of such works for this omission. It was not possible to do justice to the entire research literature in the field and the book has already reached a length that stretches the ability of Kluwer Academic Publishers to produce this volume at a reasonable price.

We cannot claim to be the first to write a book devoted to signal compression. That honor goes to the excellent text by Jayant and Noll, Digital Coding of Waveforms [195]. Their book and ours have little in common, however, except for the common goal of analyzing and designing A/D conversion and signal compression systems. Our emphasis is far more on vector quantization than was theirs (although they had one of the first basic treatments of vector quantization published in book form). We spend more time on the underlying fundamentals and basic properties and far more time on the rich variety of techniques for vector quantization. Much has happened since 1984 when their book was published and we have tried to describe the more important variations and extensions. While the two books overlap somewhat in their treatment of scalar quantization, our treatment is designed to emphasize all the ideas necessary for the subsequent extensions to vector quantizers. We do not develop traditional techniques in the detail of Jayant and Noll as we reserve the space for the far more detailed development of vector quantization. For example, they provide an extensive coverage of ADPCM, which we only briefly mention in presenting the basic concepts of DPCM. We do not, however, skimp on the fundamentals.
Our treatment of entropy coding differs from Jayant and Noll in that we provide more of the underlying theory and treat arithmetic codes and Ziv-Lempel codes as well as the standard Huffman codes. This is not, however, a text devoted to entropy coding and the reader is referred to the cited references for further details.

Synopsis

Part I

Part I contains the underlying theory required to explain and derive the coding algorithms. Chapter 1 introduces the topic, the history, and further discusses the goals of the book. Chapter 2 provides the basic stochastic and linear systems background. Chapter 3 treats sampling, the conversion of a continuous time waveform into a discrete time waveform or sequence of sample values. Sampling is the first step in analog-to-digital conversion. Chapter 3 treats both the traditional one-dimensional sampling of a waveform and two-dimensional sampling used to convert two-dimensional image intensity rasters into a rectangular array of image pixels. Chapter 4 presents the basics of prediction theory with an emphasis on linear prediction. Prediction forms an essential component of many coding algorithms and the basic theory of prediction provides a guide to the design of such coding methods.

Part II

The traditional scalar coding techniques are developed in Part II. Chapters 5 through 8 treat analog-to-digital conversion techniques that perform the essential coding operation on individual symbols using simple scalar quantization, and Chapter 9 treats entropy coding and its combination with quantization. Chapters 5 and 6 focus on direct coding of scalars by quantization, in Chapter 7 prediction is used prior to quantization, and in Chapter 8 linear transformations on the data are taken before quantization. Chapter 5 treats the basics of simple scalar quantization: the performance characteristics and common high resolution approximations developed by Bennett. Chapter 6 describes the optimality properties of simple quantizers, the structure of high-resolution optimal quantizers, and the basic design algorithm used throughout the book to design codebooks, the algorithm developed by Stuart Lloyd of Bell Laboratories in the mid 1950s. Chapters 7 and 8 build on scalar quantizers by operating on the signal before quantization so as to make the quantization more efficient. Such pre-processing is intended to remove some of the redundancy in the signal, to reduce the signal variance, or to concentrate the signal energy. All of these properties can result in better performance for a given bit rate and complexity if properly used. Chapter 7 concentrates on predictive quantization wherein a linear prediction based on past reconstructed values is removed from the signal and the resulting prediction residual is quantized. In Chapter 8 vectors or blocks of input symbols are transformed by a simple linear and orthogonal transform and the resulting transform coefficients are quantized. The issues of the optimal transform and bit allocation among the scalar quantizers are treated. The discrete-cosine transform is briefly covered and the basic concepts and performance capability of sub-band coding are presented briefly.
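The Lloyd design algorithm just mentioned alternates two steps until convergence: partition the training data by the nearest-neighbor rule, then replace each reproduction level by the centroid (the mean, for squared error) of its cell. A one-dimensional sketch under these assumptions (the function name and the toy training set are ours, not the book's):

```python
def lloyd(training, codebook, iterations=50):
    """Alternate nearest-neighbor partitioning and centroid (mean) updates."""
    for _ in range(iterations):
        # Step 1: partition the training set into nearest-neighbor cells.
        cells = [[] for _ in codebook]
        for x in training:
            nearest = min(range(len(codebook)),
                          key=lambda j: (x - codebook[j]) ** 2)
            cells[nearest].append(x)
        # Step 2: move each level to the mean of its cell (squared-error centroid);
        # an empty cell keeps its old level.
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

# Two clusters of samples; the two levels migrate toward the cluster means.
levels = lloyd([0.0, 0.1, 0.2, 0.9, 1.0, 1.1], [0.0, 1.0])
print(levels)   # approximately [0.1, 1.0]
```

Each step can only reduce the average distortion, so the iteration converges to a locally optimal quantizer; the extension to vectors in Part III is the same alternation with vector-valued cells and centroids.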

Part III

Part III is a detailed exposition of vector quantization fundamentals, design algorithms, and applications. Chapters 10 and 11 extend the fundamentals of scalar quantization of Chapters 5 and 6 to vectors. Chapter 10 provides the motivation, definitions, properties, structures, and figures of merit of vector quantization. Chapter 11 develops the basic optimality properties for vector quantizers and extends the Lloyd clustering algorithm to vectors. A variety of design examples for random processes, speech waveforms, speech models, and images are described and pursued through the subsequent chapters. Chapter 12 considers the shortcomings in terms of complexity and memory of simple memoryless, unconstrained vector quantizers and provides a variety of constrained coding schemes that provide reduced complexity and better performance in trade for a tolerable loss of optimality. Included are tree-structured vector quantization (TSVQ), classified vector quantizers, transform vector quantizers, product codes such as gain/shape and mean-residual vector quantizers, and multistage vector quantizers. Also covered are fast search algorithms for codebook searching, nonlinear interpolative coding, and hierarchical coding.

Chapters 13 and 14 consider vector quantizers with memory, sometimes called recursive vector quantizers or feedback vector quantizers. Chapter 13 treats the extension of predictive quantization to vectors, predictive vector quantization (PVQ). Here vector predictors are used to form a prediction residual of the original input vector and the resulting residual is quantized. This chapter builds on the linear prediction theory of Chapter 4 and develops some vector extensions for more sophisticated systems. Chapter 14 treats finite-state vector quantization (FSVQ) wherein the encoder and decoder are finite-state machines.
Like a predictive VQ, a finite-state VQ uses the past to implicitly predict the future and uses a codebook matched to the likely behavior. Unlike a predictive VQ, a finite-state VQ is limited to only a finite number of possible codebooks. Design algorithms and examples are provided for both coding classes.

Chapter 15 is devoted to tree and trellis encoding systems. These systems have decoders like those of predictive and finite-state vector quantizers, but the encoders are allowed to "look ahead" into the future before making their decision as to which bits to send. At the cost of additional delay, such coding methods can provide improved performance by effectively increasing the input vector size while keeping complexity manageable. Variations on the design algorithms of Chapters 13 and 14 which are matched to such look-ahead coders are considered.

Chapter 16 treats adaptive vector quantizers wherein the codebooks are allowed to change in a slow manner relative to the incoming data rate so as to better track local statistical variations. Both forward and backward adaptation are treated and simple gain and mean adaptive systems are described. More complicated adaptive coding techniques such as residual excited linear prediction (RELP) and code excited linear prediction (CELP) are also described.

Chapter 17 is devoted to variable-rate coding, vector quantizers that can use more bits for active signals and fewer for less active signals while preserving an overall average bit rate. Such coding systems can provide a significantly better tradeoff between bit rate and average distortion, but they can be more complex and can require buffering if they are used in conjunction with fixed rate communication links. The performance improvement often merits any such increase in complexity, however, and the complexity may in fact be reduced in applications that are inherently variable rate such as storage channels and communication networks.
Much of Chapter 17 consists of taking advantage of the similarities of variable-rate vector quantizers and decision trees for statistical pattern classification in order to develop coder design algorithms for unbalanced tree-structured vector quantizers. Methods of growing and pruning such tree-structured coders are detailed. As vector quantizers can be used in conjunction with entropy coding to obtain even further compression at the expense of the added complication and the necessity of variable-rate coding, the design of vector quantizers specifically for such application is considered. Such entropy-constrained vector quantizers are seen to provide excellent compression if one is willing to pay the price. The techniques for designing variable-rate vector quantizers are shown to provide a simple and exact solution to the bit allocation problem introduced in Chapter 8 and important for a variety of vector quantizer structures, including classified and transform vector quantizers.

Instructional Use

This book is intended both as a reference text and for use in a graduate course on quantization and signal compression. Its self-contained development of prerequisites, traditional techniques, and vector quantization together with its extensive citations of the literature make the book useful for a general and thorough introduction to the field or for occasional searches for descriptions of a particular technique or the relative merits of different approaches.

Both authors (and several of our colleagues at other universities) have used the book in manuscript form as a course text for a one quarter course in quantization and signal compression. Typically in these courses much of Part I is not taught, but left as a reference source for the assumed prerequisites of linear systems, Fourier techniques, probability and random processes. Topics such as sampling and prediction are treated in varying degree depending on the particular prerequisites assumed at the different schools. Individual topics from Part I can be covered as needed or left for the student to review if his/her background is deficient. For example, the basics of linear prediction of one vector given another is treated in Chapter 4 and can be reviewed during the treatment of predictive vector quantization in Chapter 13.

Part II is usually covered in the classroom at a reasonably fast pace. The basic development of scalar quantization is used as background to vector quantization. Predictive quantization and delta modulation are summarized but not treated in great detail. Chapter 8 is presented at a slower pace because the now classical development of transform coding, sub-band coding, and bit allocation should be part of any compression engineer's toolbox. Similarly Chapter 9 is treated in some detail because of the important role played by entropy coding in modern (especially standardized) data compression systems.
The coverage is not deep, but it is a careful survey of the fundamentals and most important noiseless coding methods. Part III is typically the primary focus of the course. Chapters 10 through 12 are usually covered carefully and completely, as are the basic techniques of Chapter 13. The time and effort spent on the final three chapters depend on taste. Chapter 15 is typically only briefly covered as a variation on the techniques of Chapters 13 and 14. Chapter 14 can be summarized briefly by skipping the finite-state machine details such as the differences between labeled-state and labeled-transition systems. Finite-state vector quantizers have been successfully applied in image coding and the open-loop/closed-loop design approach of the "omniscient" system is a useful design approach for feedback systems. Chapter 16 can be covered superficially or in depth, depending on the available time and interest. Chapter 17 treats some of the newest and best vector quantizers from the viewpoint of performance/complexity tradeoffs and provides the only description in book form of the application of classification tree design techniques to the design of vector quantizers. This material is particularly useful if the intended audience is interested in classification and pattern recognition as well as data compression.

Each chapter provides at least a few problems for students to work through as a valuable aid to fully digesting the ideas of the chapter. The chapters having the most extensive set of problems are usually those chapters that are covered in depth in academic courses. Almost all of the problems in the book have appeared in homework assignments at the authors' universities.

Acknowledgements

While so many researchers have over the years contributed to the theory and techniques covered in this book, the name of one pioneer continually arises in this book, Stuart P. Lloyd. His now classical paper on optimal scalar quantization, which first appeared as an internal memorandum at AT&T Bell Laboratories in 1956 and was subsequently published in 1982 [219], contained the key algorithm that is the basis of so many of the design techniques treated in this book. It is a pleasure to acknowledge his significant and pervasive contribution to the discipline of vector quantization.

Numerous colleagues, students, and former students at the University of California, Santa Barbara and Stanford University and elsewhere have performed many of the simulations, provided many of the programs and figures, and given us a wealth of comments and corrections through the many revisions of this book during the past several years. Particular thanks go to Huseyin Abut, Barry Andrews, Ender Ayanoglu, Rich Baker, Andres Buzo, Geoffrey Chan, Pao Chi Chang, Phil Chou, Pamela Cosman, Vladimir Cuperman, Yariv Ephraim, Vedat Eyuboglu, Bob Gallager, Jerry Gibson, Smita Gupta, Amanda Heaton, Maurizio Longo, Tom Lookabaugh, Nader Moayeri, Nicolas Moreau, Dave Neuhoff, Mari Ostendorf, Erdal Paksoy, Antonio Petraglia, Eve Riskin, Mike Sabin, Debasis Sengupta, Ogie Shentov, Jacques Vaisey, Shihua Wang, Yao Wang, Siu-Wai Wu, and Ken Zeger. We also acknowledge the financial support of several government agencies and industries for the research that led to many of the design algorithms, applications, and theoretical results reported here.
In particular we would like to thank the Air Force Office of Scientific Research, the National Science Foundation, the National Cancer Institute of the National Institutes of Health, the National Aeronautics and Space Administration, the California MICRO Program, Bell Communications Research, Inc., Bell-Northern Research Ltd, Compression Labs, Inc., Eastman Kodak Company, ERL, Inc., Rockwell International, and the Stanford University Information Systems Laboratory Industrial Affiliates Program. Thanks also to Blue Sky Research of Portland, Oregon, for their help in providing a beta release of their Textures™ implementation of LaTeX for the Apple Macintosh computer and for their helpful advice. Of course, preparation of this book would not have been possible without the academic environment of our institutions, the University of California, Santa Barbara, and Stanford University.

Allen Gersho Robert M. Gray Goleta, California La Honda, California