Optimal Multiple Description Transform Coding of Gaussian Vectors
To appear in Proceedings of IEEE Data Compression Conference.

Jelena Kovacevic, Bell Laboratories, Murray Hill, NJ, jelena@bell-labs.com
Vivek K Goyal, Dept. of Elec. Eng. & Comp. Sci., University of California, Berkeley, vgoyal@ieee.org

Abstract

Multiple description coding (MDC) is source coding for multiple channels such that a decoder which receives an arbitrary subset of the channels may produce a useful reconstruction. Orchard et al. proposed a transform coding method for MDC of pairs of independent Gaussian random variables. This paper provides a general framework which extends multiple description transform coding (MDTC) to any number of variables and expands the set of transforms which are considered. Analysis of the general case is provided, which can be used to numerically design optimal MDTC systems. The case of two variables sent over two channels is analytically optimized in the most general setting, where channel failures need not have equal probability or be independent. It is shown that when channel failures are equally probable and independent, the transforms used in that work are in the optimal set, but many other choices are possible. A cascade structure is presented which facilitates low-complexity design, coding, and decoding for a system with a large number of variables.

Introduction

For decades after the inception of information theory, techniques for source and channel coding developed separately. This was motivated both by Shannon's famous "separation principle" and by the conceptual simplicity of considering only one or the other. Recently, the limitations of separate source and channel coding have led many researchers to the problem of designing joint source-channel (JSC) codes. An examination of Shannon's result reveals the primary motivating factor for constructing joint source-channel codes: the separation theorem is an asymptotic result which requires infinite block lengths (and hence infinite complexity and delay) at both the source coder and the channel coder; for a particular finite complexity or delay, one can often do better with a JSC code. JSC codes have also drawn interest for being robust to channel variation.

Multiple description transform coding is a technique which can be considered a JSC code for erasure channels. The basic idea is to introduce correlation between transmitted coefficients in a known, controlled manner so that erased coefficients can be statistically estimated from received coefficients. This correlation is used at the decoder at the coefficient level, as opposed to the bit level, so it is fundamentally different from schemes that use information about the transmitted data to produce likelihood information for the channel decoder. The latter is a common element of JSC coding systems.

Our general model for multiple description coding is as follows. A source sequence $\{x_k\}$ is input to a coder, which outputs $m$ streams at rates $R_1, R_2, \ldots, R_m$. These streams are sent on $m$ separate channels. There are many receivers, and each receives a subset of the channels and uses a decoding algorithm based on which channels it receives. Specifically, there are $2^m - 1$ receivers, one for each distinct nonempty subset of the streams, and each experiences some distortion. This is equivalent to communicating with a single receiver when each channel may be working or broken, and the status of each channel is known to the decoder but not to the encoder. This is a reasonable model for a lossy packet network (for example, the Internet when UDP is used, as opposed to TCP): each channel corresponds to a packet or set of packets, and some packets may be lost, but because of header information it is known which packets are lost. An appropriate objective is to minimize a weighted sum of the distortions subject to a constraint on the total rate.
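As an illustration of this model, the following Python sketch enumerates the $2^m - 1$ receivers (one per non-empty channel subset) and forms a weighted sum of their distortions. The function name `weighted_distortion`, the weight table `alpha`, and the per-subset distortion values are hypothetical placeholders chosen for the example, not quantities defined in the paper.

```python
from itertools import combinations

def weighted_distortion(m, distortion, alpha):
    """Sum weighted distortions over all 2^m - 1 non-empty channel subsets.

    distortion(subset) -> distortion seen by the receiver that gets exactly `subset`
    alpha[subset]      -> weight of that receiver (e.g., the probability of that
                          pattern of working channels)
    Both are illustrative placeholders standing in for quantities in the text.
    """
    total = 0.0
    for r in range(1, m + 1):
        for subset in combinations(range(m), r):
            total += alpha[subset] * distortion(subset)
    return total

# Hypothetical m = 2 example: D1, D2 weighted by single-channel-failure
# probabilities, D0 weighted by the probability that both channels work.
D = {(0,): 1.0, (1,): 1.0, (0, 1): 0.01}
alpha = {(0,): 0.01, (1,): 0.01, (0, 1): 0.98}
print(weighted_distortion(2, lambda s: D[s], alpha))
```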
When $m = 2$, the situation is that studied in information theory as the multiple description problem. Denote the distortions when both channels are received, when only channel 1 is received, and when only channel 2 is received by $D_0$, $D_1$, and $D_2$, respectively. The classical problem is to determine the achievable $(R_1, R_2, D_0, D_1, D_2)$ tuples. A complete characterization is known only for an i.i.d. Gaussian source and squared-error distortion.

This paper considers the case where $\{x_k\}$ is an i.i.d. sequence of zero-mean, jointly Gaussian vectors with a known correlation matrix $R_x = E[x_k x_k^T]$; such vectors can be obtained, for example, by blocking a scalar Gaussian source. Distortion is measured by the mean-squared error (MSE). The technique we develop is based on square linear transforms and simple scalar quantization, and the design of the transform is paramount. Rather dissimilar methods which use nonsquare transforms have been developed. The problem could also be addressed with an emphasis on quantizer design.

Proposed Coding Structure

Since the source is jointly Gaussian, we can assume without loss of generality that its components are independent; if not, one can use a Karhunen-Loeve transform of the source at the encoder and its inverse at each decoder. We propose the following steps for multiple description transform coding (MDTC) of a source vector $x = (x_1, x_2, \ldots, x_n)$:

1. $x$ is quantized with a uniform scalar quantizer with stepsize $\Delta$: $x_{q_i} = [x_i]_\Delta$, where $[\,\cdot\,]_\Delta$ denotes rounding to the nearest multiple of $\Delta$.
2. The vector $x_q = (x_{q_1}, x_{q_2}, \ldots, x_{q_n})$ is transformed with an invertible, discrete transform $\widehat{T}: \Delta\mathbb{Z}^n \to \Delta\mathbb{Z}^n$, $y = \widehat{T}(x_q)$. The design and implementation of $\widehat{T}$ are described below.
3. The components of $y$ are independently entropy coded.
4. If $m < n$, the components of $y$ are grouped to be sent over the $m$ channels.

When all the components of $y$ are received, the reconstruction process is to exactly invert the transform $\widehat{T}$ to get $\hat{x} = x_q$; the distortion is precisely the quantization error from Step 1. If some components of $y$ are lost, they are estimated from the received components using the statistical correlation introduced by the transform $\widehat{T}$. The estimate $\hat{x}$ is then generated by inverting the transform as before.

Starting with a linear transform $T$ with determinant one, the first step in deriving a discrete version $\widehat{T}$ is to factor $T$ into "lifting" steps, i.e., into a product of upper and lower triangular matrices with unit diagonals, $T = T_1 T_2 \cdots T_k$. The discrete version of the transform is then given by
$$\widehat{T}(x_q) = \big[ T_1 \big[ T_2 \cdots \big[ T_k \, x_q \big]_\Delta \big]_\Delta \big]_\Delta.$$
The lifting structure ensures that the inverse of $\widehat{T}$ can be implemented by reversing the calculations above:
$$\widehat{T}^{-1}(y) = \big[ T_k^{-1} \cdots \big[ T_2^{-1} \big[ T_1^{-1} y \big]_\Delta \big]_\Delta \big]_\Delta.$$
The factorization of $T$ is not unique; for example,
$$\begin{bmatrix} a & b \\ c & \frac{1+bc}{a} \end{bmatrix}
= \begin{bmatrix} 1 & \frac{a-1}{c} \\ 0 & 1 \end{bmatrix}
  \begin{bmatrix} 1 & 0 \\ c & 1 \end{bmatrix}
  \begin{bmatrix} 1 & \frac{1+bc-a}{ac} \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ \frac{1+bc-a}{ab} & 1 \end{bmatrix}
  \begin{bmatrix} 1 & b \\ 0 & 1 \end{bmatrix}
  \begin{bmatrix} 1 & 0 \\ \frac{a-1}{b} & 1 \end{bmatrix}.$$
Different factorizations yield different discrete transforms, except in the limit as $\Delta$ approaches zero.

The coding structure proposed here is a generalization of the method proposed by Orchard et al., in which only transforms implemented in two lifting steps were considered. By fixing $a = 1$, both factorizations above reduce to having two nonidentity factors.

It is very important to note that we first quantize and then use a discrete transform. If we were to apply a continuous transform first and then quantize, the use of a nonorthogonal transform would lead to noncubic partition cells, which are inherently suboptimal among the class of partition cells obtainable with scalar quantization.
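To make the quantize-then-transform construction concrete, here is a minimal Python/NumPy sketch of the lifting-based discrete transform and its exact inverse for the 2x2 case, using the first factorization shown above. The helper names and the parameter values (a, b, c, delta) are arbitrary illustrations, not the optimized choices derived in the paper; the sketch also relies on NumPy's rounding being symmetric about zero.

```python
import numpy as np

def quant(v, delta):
    """Round each entry to the nearest multiple of delta ([.]_Delta in the text)."""
    return delta * np.round(v / delta)

def lifting_factors(a, b, c):
    """First factorization of T = [[a, b], [c, (1+b*c)/a]] (det T = 1) into
    three unit-diagonal triangular lifting matrices T1, T2, T3."""
    T1 = np.array([[1.0, (a - 1) / c], [0.0, 1.0]])
    T2 = np.array([[1.0, 0.0], [c, 1.0]])
    T3 = np.array([[1.0, (1 + b * c - a) / (a * c)], [0.0, 1.0]])
    return [T1, T2, T3]

def forward(xq, factors, delta):
    """y = [T1 [T2 [T3 xq]_D ]_D ]_D : apply factors right-to-left, rounding each step."""
    y = xq
    for Ti in reversed(factors):
        y = quant(Ti @ y, delta)
    return y

def inverse(y, factors, delta):
    """Exact inverse: undo the lifting steps in the opposite order, rounding each step."""
    x = y
    for Ti in factors:
        x = quant(np.linalg.inv(Ti) @ x, delta)
    return x

# Usage sketch with arbitrary example parameters.
delta = 0.1
factors = lifting_factors(a=1.2, b=0.5, c=-0.8)
xq = quant(np.array([1.234, -0.567]), delta)        # Step 1: quantize first
y = forward(xq, factors, delta)                     # Step 2: discrete transform
assert np.allclose(inverse(y, factors, delta), xq)  # all of y received -> exact x_q
```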
The present configuration allows one to use discrete transforms derived from nonorthogonal linear transforms and thus obtain better performance.

Analysis of an MDTC System

The analysis and optimizations presented in this paper are based on fine quantization approximations. Specifically, we make three assumptions which are valid for small $\Delta$. First, we assume that the scalar entropy of $y = \widehat{T}(x_q)$ is the same as that of $Tx$. Second, we assume that the correlation structure of $y$ is unaffected by the quantization. Finally, when at least one component of $y$ is lost, we assume that the distortion is dominated by the effect of the erasure, so quantization can be ignored.

Denote the variances of the components of $x$ by $\sigma_{x_1}^2, \ldots, \sigma_{x_n}^2$, and denote the correlation matrix of $x$ by $R_x = \mathrm{diag}(\sigma_{x_1}^2, \ldots, \sigma_{x_n}^2)$. Let $R_y = T R_x T^T$. In the absence of quantization, $R_y$ would be exactly the correlation matrix of $y$. Under our fine quantization approximations, we will use $R_y$ in the estimation of rates and distortions.

Estimating the rate is straightforward. Since the quantization is fine, $y_i$ is approximately the same as $(Tx)_i$, i.e., a uniformly quantized Gaussian random variable. If we treat $y_i$ as a Gaussian random variable with power $\sigma_{y_i}^2 = (R_y)_{ii}$, quantized with bin width $\Delta$, we get for the entropy of the quantized coefficient
$$H(y_i) \approx \frac{1}{2}\log 2\pi e\,\sigma_{y_i}^2 - \log\Delta = \log\sigma_{y_i} + \frac{1}{2}\log 2\pi e - \log\Delta = \log\sigma_{y_i} + k,$$
where $k = \frac{1}{2}\log 2\pi e - \log\Delta$ and all logarithms are base-two. Notice that $k$ depends only on $\Delta$. We thus estimate the total rate as
$$R = \sum_{i=1}^{n} H(y_i) = nk + \log\prod_{i=1}^{n} \sigma_{y_i}.$$
The minimum rate occurs when $\prod_{i=1}^{n}\sigma_{y_i} = \prod_{i=1}^{n}\sigma_{x_i}$, and at this rate the components of $y$ are uncorrelated. Interestingly, $T = I$ is not the only transform which achieves the minimum rate; in fact, an arbitrary split of the total rate among the different components of $y$ is possible. This is a justification for using a total rate constraint in our following analyses. However, we will pay particular attention to the case where the rates sent across each channel are equal.

We now turn to the distortion and first consider the average distortion due only to quantization. Since the quantization noise is approximately uniform, this distortion is $\Delta^2/12$ for each component. Thus the distortion when no components are erased is given by $D_0 = n\Delta^2/12$ and is independent of the choice of transform.
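The following Python sketch evaluates these fine-quantization estimates numerically for an illustrative determinant-one 2x2 transform. The function name `rate_and_d0`, the source variances, and the transform parameters are arbitrary choices made for demonstration under the assumptions above, not values taken from the paper.

```python
import numpy as np

def rate_and_d0(T, sigma2_x, delta):
    """Fine-quantization estimates: per-component rates, total rate, minimum rate, D0.

    T        : n x n transform with determinant one
    sigma2_x : variances of the (independent) source components
    delta    : quantizer step size
    """
    n = len(sigma2_x)
    Rx = np.diag(sigma2_x)
    Ry = T @ Rx @ T.T                                 # correlation matrix of y (quantization ignored)
    sigma_y = np.sqrt(np.diag(Ry))
    k = 0.5 * np.log2(2 * np.pi * np.e) - np.log2(delta)
    H = np.log2(sigma_y) + k                          # estimated entropy of each quantized coefficient
    R_total = H.sum()                                 # = n*k + log2(prod sigma_{y_i})
    R_min = n * k + 0.5 * np.log2(np.prod(sigma2_x))  # attained when the y_i are uncorrelated
    D0 = n * delta ** 2 / 12                          # quantization-only distortion (no erasures)
    return H, R_total, R_min, D0

# Illustrative example: two independent components and a determinant-one transform.
a, b, c = 1.2, 0.5, -0.8
T = np.array([[a, b], [c, (1 + b * c) / a]])
H, R_total, R_min, D0 = rate_and_d0(T, sigma2_x=np.array([1.0, 0.25]), delta=0.05)
print(H, R_total, R_min, D0)                          # R_total - R_min is the rate spent on redundancy
```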