Some Multivariate Signal Processing Operations

Chapter 5

Some multivariate signal processing operations

In this chapter we study some signal processing operations for multivariable signals. We consider applications where one collects a set of signals x_i(n), i = 1, 2, \ldots, M, from some system. It is then important to study whether there are interdependencies between the signals. Such interdependencies cause redundancies, which can be exploited for data compression. Interdependencies between the individual signals can also contain useful information about the structure of the underlying systems that generated the set of signals.

Firstly, when the number M of signals is large, dependencies between the individual signals make it possible to compress the data. This is for example the case when the M signals x_i represent pixels of a sequence of images, where M is the number of pixels in one image. If the images of interest depict a specific class of objects, such as human faces, it turns out that there are redundancies among the various x_i's which can be exploited to compress the images.

Secondly, the individual signals x_i(n) are often mixtures of unknown source signals s_j(n). If the mixture is assumed linear, this implies that there are source signals s_j(n) such that

    x_i(n) = a_{i1} s_1(n) + a_{i2} s_2(n) + \cdots + a_{iM_S} s_{M_S}(n),   i = 1, 2, \ldots, M        (5.1)

for n = 0, 1, \ldots, N-1, where M_S is the number of source signals. Introducing the vectors

    x(n) = [x_1(n), x_2(n), \ldots, x_M(n)]^T,   s(n) = [s_1(n), s_2(n), \ldots, s_{M_S}(n)]^T        (5.2)

and the matrix A with elements a_{ij}, (5.1) can be written compactly in vector form as

    x(n) = A s(n),   n = 0, 1, \ldots, N-1        (5.3)

In the following examples the measured signals are composed of source signals.

Example 5.1 The cocktail party problem. At a cocktail party, individual voices s_j(n) are mixed and only the mixed signals x_i(n) can be measured. The 'cocktail party problem' consists of computing the source signals s_j(n) from the measured sound signals x_i(n).
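The linear mixing model (5.3) is easy to simulate numerically. The sketch below (hypothetical signals and a hypothetical mixing matrix, using numpy) mixes M_S = 2 source signals into M = 3 observed signals by forming x(n) = A s(n) for all n at once:

```python
import numpy as np

N = 500                  # number of samples
n = np.arange(N)

# Two hypothetical source signals s_j(n), stacked as rows of a 2 x N array
s = np.vstack([np.sin(2 * np.pi * 0.01 * n),             # s_1(n): slow sinusoid
               np.sign(np.sin(2 * np.pi * 0.037 * n))])  # s_2(n): square wave

# Mixing matrix A (M = 3 measured signals, M_S = 2 sources), chosen arbitrarily
A = np.array([[1.0, 0.5],
              [0.3, 1.2],
              [0.8, 0.8]])

# Measured signals: column n of x is x(n) = A s(n), relation (5.3)
x = A @ s
print(x.shape)  # (3, 500)
```

Storing the signals as columns of matrices, as in (5.9)-(5.10) below, turns the per-sample relation (5.3) into a single matrix product.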
Example 5.2 Biomedical signal analysis. Magnetoencephalogram (MEG) signals used to analyze brain activity are measured using sensors placed at different positions on the head. As many activities in the human body, such as heartbeats, breathing and eye blinking, generate magnetic signals, the measured signals x_i(n) are superpositions of a number of source signals s_j(n). For a better understanding of brain activity it is important to remove the signal components caused by heartbeat, breathing and eye blinking. This can be achieved by removing the associated source signals after these have been determined from the measured set of signals.

The problem of finding the source signals s_j(n) from a set of measured signals x_i(n) is called source signal separation. If the mixing matrix A is known, it is trivial to determine the source signals s(n) by inverting the linear relation (5.3). In many applications it is, however, not known how the source signals are mixed to produce the measured signals x_i(n). The problem of finding the source signals from the measured signals when the mixing matrix A is unknown is called blind signal separation. The classical example of blind signal separation is the cocktail party problem in Example 5.1.

In order to solve the blind signal separation problem, some assumptions on the source signals have to be made. The most natural ones are that they are mutually uncorrelated or independent. In Section 5.1 principal component analysis (PCA) is described, which can be used for signal decorrelation. Important applications of the technique are in data compression. In Section 5.2 independent component analysis (ICA) is presented, which can be used to solve the blind signal separation problem.

5.1 Principal component analysis

Assume that we have a sequence of M signals x_i(n), n = 0, 1, \ldots, N-1, i = 1, 2, \ldots, M. We would like to express the signals in the form (5.1) in such a way that the source signals s_j(n) are uncorrelated.
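The remark that separation is trivial when A is known can be made concrete: for a square, invertible mixing matrix (M = M_S), recovering s(n) from (5.3) is a single linear solve. A minimal sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])         # known, invertible mixing matrix
s = rng.standard_normal((2, 200))  # hypothetical source signals, rows s_j(n)
x = A @ s                          # measured mixtures, relation (5.3)

# Recover the sources by inverting the linear relation: s(n) = A^{-1} x(n)
s_hat = np.linalg.solve(A, x)
print(np.allclose(s_hat, s))  # True
```

Blind separation is the hard case: when A is unknown, neither this solve nor any direct inversion is available, which is what motivates the statistical assumptions discussed next.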
In order to accomplish this, we consider signal variations about their mean values by defining the signals

    w_i(n) = x_i(n) - m_i,   i = 1, 2, \ldots, M        (5.4)

where m_i is the mean value of {x_i(n)},

    m_i = \frac{1}{N} \sum_{n=0}^{N-1} x_i(n),   i = 1, 2, \ldots, M        (5.5)

By construction, the signals w_i(n) have zero mean values. Our purpose is to express the signals w_i(n) in the form (5.1),

    w_i(n) = a_{i1} s_1(n) + a_{i2} s_2(n) + \cdots + a_{iM_S} s_{M_S}(n),   i = 1, 2, \ldots, M        (5.6)

where now the source signals {s_j(n)} have zero mean values. Introducing the vector

    w(n) = [w_1(n), w_2(n), \ldots, w_M(n)]^T        (5.7)

we have, in analogy with (5.3),

    w(n) = A s(n),   n = 0, 1, \ldots, N-1        (5.8)

It is convenient to introduce the signal matrices

    W = [\, w(0) \; w(1) \; \cdots \; w(N-1) \,]        (5.9)
    S = [\, s(0) \; s(1) \; \cdots \; s(N-1) \,]        (5.10)

Relation (5.8) can then be written compactly as

    W = A S        (5.11)

Blind signal decorrelation consists of finding the matrix A and uncorrelated source signals, i.e., source signals whose cross-correlations r_{jk} vanish,

    r_{jk} = \frac{1}{N} \sum_{n=0}^{N-1} s_j(n) s_k(n) = 0,   j \neq k,   j, k = 1, 2, \ldots, M_S        (5.12)

Notice that the product s_j(n) s_k(n) is the (j,k)th element of s(n) s(n)^T. It follows, using (5.10), that relation (5.12) can be written compactly in matrix form as

    \frac{1}{N} S S^T = \frac{1}{N} \left( s(0) s^T(0) + s(1) s^T(1) + \cdots + s(N-1) s^T(N-1) \right) = \mathrm{diag}(r_{11}, r_{22}, \ldots, r_{M_S M_S})        (5.13)

where diag(r_{11}, r_{22}, \ldots, r_{M_S M_S}) denotes a diagonal matrix with diagonal elements r_{kk}, k = 1, 2, \ldots, M_S, and zero off-diagonal elements.

As the source signals can be scaled by incorporating the factor r_{kk} into the mixing parameters a_{ik} in equation (5.6), we can take the source signals to have unit variances, r_{kk} = 1, k = 1, 2, \ldots, M_S. This implies \frac{1}{N} S S^T = I, where I denotes the identity matrix.

It is important to notice that the signal decorrelation problem is not unique. This can be seen by considering source signals with unit variances, \frac{1}{N} S S^T = I.
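The centering step (5.4)-(5.5) and the sample correlations (5.12)-(5.13) are direct to compute. A sketch, assuming the M signals are stored as rows of an M x N numpy array:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 3, 400
# Hypothetical signals x_i(n) with nonzero means, rows of an M x N array
x = rng.standard_normal((M, N)) + np.array([[1.0], [2.0], [-0.5]])

m = x.mean(axis=1, keepdims=True)  # mean values m_i, equation (5.5)
w = x - m                          # zero-mean signals w_i(n), equation (5.4)

# Sample correlation matrix (1/N) W W^T; its (j,k)th element is r_jk of (5.12).
# For decorrelated signals this matrix would be diagonal, as in (5.13).
R = (w @ w.T) / N
print(np.allclose(w.mean(axis=1), 0.0))  # True: the w_i(n) have zero mean
```

For generic raw signals R will not be diagonal; making it diagonal by an appropriate choice of the factorization (5.11) is exactly the decorrelation problem.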
Define transformed source signals

    s_Y(n) = Y s(n)        (5.14)

corresponding to the factorization of the signal matrix W as

    W = A S = A Y^{-1} Y S        (5.15)

where Y S is the source signal matrix associated with the source signals s_Y(n). Then we have, for any matrix Y such that Y Y^T = I,

    \frac{1}{N} (Y S)(Y S)^T = \frac{1}{N} Y S S^T Y^T = I        (5.16)

implying that the source signals s_Y(n) are also uncorrelated.

In order to obtain a signal decorrelation procedure useful for data reduction, we impose a further condition on the source signals as follows. Introduce the sum of variances of the signal sequence {w(n)},

    \| \{ w(n) \} \|^2 = \sum_{n=0}^{N-1} \sum_{i=1}^{M} w_i(n)^2 = \sum_{n=0}^{N-1} w(n)^T w(n)        (5.17)

Then determine the M x 1 vector a_1 and the signal sequence {s_1(n)} such that the error variance

    \| \{ w(n) - a_1 s_1(n) \} \|^2

is minimized. Hence {s_1(n)} is the scalar source signal which gives the best approximation (in terms of the smallest sum of error variances) of {w(n)}. Next, determine the M x 1 vector a_2 and the signal sequence {s_2(n)} such that the error variance

    \| \{ w(n) - a_1 s_1(n) - a_2 s_2(n) \} \|^2

is minimized. Hence {s_1(n)} and {s_2(n)} are the two source signals which give the best approximation of {w(n)}. Continuing this process, we can construct the vectors a_1, a_2, \ldots, a_r and the source signals {s_1(n)}, {s_2(n)}, \ldots, {s_r(n)} such that the error variance

    \| \{ w(n) - a_1 s_1(n) - \cdots - a_r s_r(n) \} \|^2

is minimized. Hence the signals {s_1(n)}, {s_2(n)}, \ldots, {s_r(n)} are the r source signals which give the best approximation of {w(n)}. It turns out that the source signals constructed in this way are uncorrelated. By construction, this decorrelation procedure is optimal for data compression in the sense that it gives the best approximation of {w(n)} with a given number of source signals.

The solution of the optimal signal decorrelation problem described above is given by the singular value decomposition of the data matrix W.
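The successive best-approximation construction above is what a truncated singular value decomposition delivers in practice. The sketch below (synthetic data, numpy's SVD routine) extracts r = 2 terms a_k s_k(n) from a rank-2 data matrix W and checks that the extracted source signals are indeed uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 5, 300
# Synthetic data of rank 2: 2 underlying sources mixed into M signals
W = rng.standard_normal((M, 2)) @ rng.standard_normal((2, N))
W = W - W.mean(axis=1, keepdims=True)   # zero-mean rows, as in (5.4)

U, sigma, Vt = np.linalg.svd(W, full_matrices=False)

r = 2
# a_k = U[:, k] and s_k(n) = sigma_k * Vt[k, n] give the r best-approximating terms
S_r = np.diag(sigma[:r]) @ Vt[:r]       # extracted source signals (r x N)
W_r = U[:, :r] @ S_r                    # best rank-r approximation of W

# The extracted sources are uncorrelated: (1/N) S_r S_r^T is diagonal
C = (S_r @ S_r.T) / N
off_diag = C - np.diag(np.diag(C))
print(np.allclose(off_diag, 0.0))  # True
print(np.allclose(W, W_r))         # True here, since W has rank 2
```

Because the rows of Vt are orthonormal, (1/N) S_r S_r^T reduces to a diagonal matrix of squared singular values, which is the decorrelation property (5.13) for the extracted sources.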
Recall that the signal decorrelation problem is equivalent to factoring the data matrix W according to (5.11) in such a way that S S^T is diagonal. In order to achieve this, it is convenient to introduce the normalized matrix

    V^T = N^{-1/2} \mathrm{diag}\left( r_{11}^{-1/2}, r_{22}^{-1/2}, \ldots, r_{M_S M_S}^{-1/2} \right) S        (5.18)

or

    S = N^{1/2} \mathrm{diag}\left( r_{11}^{1/2}, r_{22}^{1/2}, \ldots, r_{M_S M_S}^{1/2} \right) V^T        (5.19)

Relation (5.13) is then equivalent to

    V^T V = I        (5.20)

where I is the identity matrix. Property (5.20) means that V^T has orthonormal rows (equivalently, V has orthonormal columns), i.e., the rows of V^T are mutually orthogonal and have unit Euclidean norm. Introducing (5.19) into (5.11) reduces the signal decorrelation problem to finding a diagonal matrix \Sigma and a matrix V^T with orthonormal rows such that

    W = A \Sigma V^T        (5.21)

The factorization (5.21) corresponding to optimal signal decorrelation can be determined by recalling the following standard result from matrix analysis.
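Numerically, a factorization of the form (5.21) is obtained with a standard SVD routine. A short sketch verifying the orthonormality property (5.20) on random centered data:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 4, 250
W = rng.standard_normal((M, N))
W = W - W.mean(axis=1, keepdims=True)   # zero-mean rows, as in (5.4)

# Economy-size SVD: W = U Sigma V^T with Sigma diagonal
U, sigma, Vt = np.linalg.svd(W, full_matrices=False)

# V^T has orthonormal rows, i.e. V^T V = I, property (5.20)
print(np.allclose(Vt @ Vt.T, np.eye(len(sigma))))  # True
# And the factorization reproduces W, as in (5.21) with A = U
print(np.allclose(U @ np.diag(sigma) @ Vt, W))     # True
```

Note that numpy returns V^T directly (as `Vt`), matching the form in which (5.21) is written.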
