
VIETNAM NATIONAL UNIVERSITY
UNIVERSITY OF SCIENCE
FACULTY OF MATHEMATICS, MECHANICS AND INFORMATICS

Hoang Dinh Linh

FRAMES IN FINITE-DIMENSIONAL INNER PRODUCT SPACES

Undergraduate Thesis
Undergraduate Advanced Program in Mathematics

Thesis advisor: PhD. Dang Anh Tuan

Hanoi - 2013

Contents

Contents 1

Introduction 2

1 Frames in Finite-dimensional Inner Product Spaces 4
1.1 Some basic facts about frames ...... 4
1.2 Frames in $\mathbb{C}^n$ ...... 20
1.3 The discrete Fourier transform ...... 29
1.4 Pseudo-inverses and the singular value decomposition ...... 33
1.5 Finite-dimensional function spaces ...... 41

Conclusion 51

References 52

Introduction

One of the important concepts in the study of vector spaces is that of a basis, which allows every vector to be represented uniquely as a linear combination of the basis elements. However, the defining property of a basis is restrictive; sometimes it is impossible to find vectors which both fulfill the basis requirements and also satisfy external conditions demanded by applied problems. For such purposes, we need to look for a more flexible type of spanning set. Frames are tools which provide this alternative. They not only offer great variety for use in applications, but also have a rich theory from a pure analysis point of view.

A frame for a vector space equipped with an inner product also allows each element in the space to be written as a linear combination of the elements in the frame, but linear independence between the frame elements is not required. Intuitively, one can think of a frame as a basis to which extra elements have been added. The theory of frames and bases has developed rapidly in recent years because of its role as a mathematical tool in signal and image processing.

Let's say you want to send a signal across some kind of communication system, perhaps by talking on a wireless phone or sending a photo to a friend over the internet. We think of that signal as a vector in a vector space. The way it gets transmitted is as a sequence of coefficients which represent the signal in terms of a spanning set. If that spanning set is an orthonormal basis, then computing those coefficients just involves finding some inner products of vectors, which a computer can accomplish very quickly. As a result, there is no significant time delay in sending your voice or the photograph. This is a good feature for a communication system to have, so orthonormal bases are used a lot in such situations.


Orthogonality is a very restrictive property, though. What if one of the coefficients representing a vector gets lost in transmission? That piece of information cannot be reconstructed; it is lost. Perhaps we would like our system to have some redundancy, so that if one piece gets lost, the information can be pieced together from what does get through. This is where frames come in.

By using a frame instead of an orthonormal basis, we do give up the uniqueness of coefficients and the orthogonality of the vectors. However, these properties are superfluous. If you are sending your side of a phone conversation or a photo, what matters is quickly computing a working set of expansion coefficients, not whether those coefficients are unique. In fact, in some settings the linear independence and orthogonality restrictions inhibit the use of orthonormal bases. Frames can be constructed with a wider variety of characteristics, and can thus be tailored to match the needs of a particular system.

This thesis introduces the concept of a frame for a finite-dimensional inner product space. We begin with the characteristics of frames. The first section discusses some basic facts about frames, giving a standard definition of a frame. The later sections then develop an understanding of what frames are and how to construct frames in finite-dimensional spaces, and from there consider the connection between frames in finite-dimensional vector spaces and the infinite-dimensional constructions. Most of the contents of the thesis come from the book "An Introduction to Frames and Riesz Bases", written by Prof. Ole Christensen. Moreover, I have used some results of Prof. Ingrid Daubechies in [2] and some results of Prof. Nguyễn Hữu Việt Hưng in [3].

CHAPTER 1

Frames in Finite-dimensional Inner Product Spaces

1.1 Some basic facts about frames

Let $V$ be a finite-dimensional vector space, equipped with an inner product $\langle \cdot, \cdot \rangle$.

Definition 1.1 [1] A countable family of elements $\{f_k\}_{k\in I}$ in $V$ is a frame for $V$ if there exist constants $A, B > 0$ such that
$$A\|f\|^2 \le \sum_{k\in I} |\langle f, f_k\rangle|^2 \le B\|f\|^2, \quad \forall f \in V. \tag{1.1}$$

• $A$, $B$ are called frame bounds, and they are not unique! Indeed, we can choose $A' = \frac{A}{2}$, $B' = B + 1$ as other frame bounds.

• The frame is normalized if $\|f_k\| = 1$, $\forall k \in I$.

• Even in a finite-dimensional space, $\{f_k\}_{k\in I}$ can have infinitely many elements; for instance, infinitely many zero elements can be added to a given frame.
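On small concrete examples, condition (1.1) can be verified numerically. Below is a minimal pure-Python sketch; the family in $\mathbb{R}^2$ and the bounds $A = 1$, $B = 3$ are illustrative choices of mine (they happen to be the optimal bounds for this particular family), not data from the text:

```python
import random

def ip(u, v):
    """Euclidean inner product on R^2."""
    return sum(x * y for x, y in zip(u, v))

# An illustrative frame for R^2: an orthonormal basis plus one extra vector.
frame = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
A, B = 1.0, 3.0  # frame bounds for this family

random.seed(0)
for _ in range(1000):
    f = (random.uniform(-1, 1), random.uniform(-1, 1))
    s = sum(ip(f, fk) ** 2 for fk in frame)
    norm2 = ip(f, f)
    # the frame condition (1.1): A ||f||^2 <= sum |<f, f_k>|^2 <= B ||f||^2
    assert A * norm2 - 1e-12 <= s <= B * norm2 + 1e-12
print("condition (1.1) holds for all sampled vectors")
```

Sampling does not prove the bounds, of course, but it is a cheap sanity check when experimenting with candidate frames.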

From now on, we will only consider finite families $\{f_k\}_{k\in I}$, with $I$ finite. Then the upper frame condition is automatically satisfied, by the Cauchy–Schwarz inequality:

$$\sum_{k=1}^m |\langle f, f_k\rangle|^2 \le \sum_{k=1}^m \|f_k\|^2 \|f\|^2, \quad \forall f \in V. \tag{1.2}$$


For all $f \in V$, we can easily prove that

$$|\langle f, f_k\rangle|^2 \le \|f_k\|^2 \|f\|^2, \quad \forall k \in \{1, 2, 3, \ldots, m\}. \tag{1.3}$$

Indeed, fix $k \in \{1, 2, \ldots, m\}$ and let $f, f_k \in V$. If $f = 0$ then

$$|\langle f, f_k\rangle|^2 = |\langle 0, f_k\rangle|^2 = 0 = \|f\|^2 \|f_k\|^2,$$

so the result holds automatically. Now assume $f \neq 0$. For any $\lambda \in \mathbb{C}$ we have:

$$0 \le \|f_k + \lambda f\|^2 = \langle f_k + \lambda f, f_k + \lambda f\rangle. \tag{1.4}$$

Expanding the right side,

$$0 \le \langle f_k, f_k\rangle + \bar{\lambda}\langle f_k, f\rangle + \lambda\langle f, f_k\rangle + |\lambda|^2\langle f, f\rangle = \|f_k\|^2 + \bar{\lambda}\langle f_k, f\rangle + \lambda\langle f, f_k\rangle + |\lambda|^2\|f\|^2.$$

Now select
$$\lambda = -\frac{\langle f_k, f\rangle}{\|f\|^2}.$$
Substituting this into the preceding expression yields

$$0 \le \|f_k\|^2 - 2\frac{|\langle f, f_k\rangle|^2}{\|f\|^2} + \frac{|\langle f, f_k\rangle|^2}{\|f\|^4}\|f\|^2 = \|f_k\|^2 - \frac{|\langle f, f_k\rangle|^2}{\|f\|^2},$$

which yields

$$|\langle f, f_k\rangle|^2 \le \|f\|^2 \|f_k\|^2.$$

Thus, we can choose the upper frame bound $B = \sum_{k=1}^m \|f_k\|^2$.
Let us recall a lemma about the decomposition of a vector space.

Definition 1.2 Let $W$ be a subspace of a finite-dimensional vector space $V$. Then

$$W^{\perp} = \{\alpha \in V \mid \alpha \perp W, \text{ i.e., } \langle \alpha, \beta\rangle = 0, \ \forall \beta \in W\} \tag{1.5}$$

is said to be the orthogonal complement of $W$ in $V$.


We have the following lemma:

Lemma 1.3 [3] Let $W$ be a subspace of a finite-dimensional vector space $V$. Then $(W^{\perp})^{\perp} = W$, and $V$ can be decomposed as $V = W \oplus^{\perp} W^{\perp}$.

PROOF Let $(e_1, e_2, \ldots, e_m)$ be an orthogonal basis of $W$, and extend it to a basis $(e_1, e_2, \ldots, e_m, \alpha_{m+1}, \ldots, \alpha_n)$ of $V$. Applying the Gram–Schmidt orthogonalization process to this basis, we obtain an orthogonal basis $(e_1, e_2, \ldots, e_m, e_{m+1}, \ldots, e_n)$ for $V$. The vectors $e_{m+1}, \ldots, e_n$ are orthogonal to each element of $(e_1, e_2, \ldots, e_m)$, hence they are orthogonal to $W$. So $e_{m+1}, \ldots, e_n \in W^{\perp}$.

Let $\alpha \in W$. Then $\langle \alpha, \beta\rangle = 0$ for all $\beta \in W^{\perp}$, so $\alpha \in (W^{\perp})^{\perp}$. Therefore $W \subset (W^{\perp})^{\perp}$. In addition, if $\alpha$ is an arbitrary vector in $(W^{\perp})^{\perp} \subseteq V$, then

$$\alpha = a_1 e_1 + a_2 e_2 + \cdots + a_n e_n.$$

Since $e_{m+1}, \ldots, e_n \in W^{\perp}$, we have

$$0 = \langle \alpha, e_j\rangle = a_1\langle e_1, e_j\rangle + a_2\langle e_2, e_j\rangle + \cdots + a_n\langle e_n, e_j\rangle = a_j\langle e_j, e_j\rangle, \quad \forall j = m+1, \ldots, n.$$

Then $a_{m+1} = a_{m+2} = \cdots = a_n = 0$, so $\alpha$ is a linear combination of $(e_1, \ldots, e_m)$. Therefore $\alpha \in W$, and thus $(W^{\perp})^{\perp} \subset W$. Hence $(W^{\perp})^{\perp} = W$.

Finally, we prove that $W \cap W^{\perp} = \{0\}$. Indeed, if $\alpha \in W \cap W^{\perp}$ then $\|\alpha\|^2 = \langle \alpha, \alpha\rangle = 0$, thus $\alpha = 0$. In conclusion,

$$V = \mathrm{span}(e_1, e_2, \ldots, e_m) \oplus^{\perp} \mathrm{span}(e_{m+1}, \ldots, e_n) = W \oplus^{\perp} W^{\perp}.$$
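The proof is constructive: extend a basis of $W$, orthogonalize, and the tail vectors span $W^{\perp}$. A pure-Python sketch of this construction (the subspace $W \subset \mathbb{R}^3$ below is an illustrative choice of mine, not from the text):

```python
def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

def sub(u, v):
    return [x - y for x, y in zip(u, v)]

def scale(c, u):
    return [c * x for x in u]

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors."""
    ortho = []
    for v in vectors:
        w = list(v)
        for q in ortho:
            w = sub(w, scale(ip(v, q) / ip(q, q), q))
        ortho.append(w)
    return ortho

# W = span{(1, 1, 0)} in R^3, extended to a basis of R^3.
basis = [[1.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
e = gram_schmidt(basis)        # e[0] spans W; e[1], e[2] span W-perp
assert abs(ip(e[0], e[1])) < 1e-12 and abs(ip(e[0], e[2])) < 1e-12

# Orthogonal decomposition alpha = w + w_perp with w in W, w_perp in W-perp.
alpha = [2.0, 3.0, 4.0]
w = scale(ip(alpha, e[0]) / ip(e[0], e[0]), e[0])
w_perp = sub(alpha, w)
assert abs(ip(w, w_perp)) < 1e-12
print("w =", w, "w_perp =", w_perp)
```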

The lower frame condition is satisfied if and only if $\mathrm{span}\{f_k\}_{k=1}^m = V$. Indeed, we have the following theorem:

Theorem 1.4 [1] A family of elements $\{f_k\}_{k=1}^m$ in $V$ is a frame for $V$ if and only if $\mathrm{span}\{f_k\}_{k=1}^m = V$.


PROOF: If $\mathrm{span}\{f_k\}_{k=1}^m = V$, then we consider the following mapping:

$$\phi : V \longrightarrow \mathbb{R}, \qquad f \longmapsto \sum_{k=1}^m |\langle f, f_k\rangle|^2.$$

First, we will prove that there exist $A, B > 0$ such that

$$A\|f\|^2 \le \sum_{k=1}^m |\langle f, f_k\rangle|^2 \le B\|f\|^2, \quad \forall f \in V, \ \|f\| = 1. \tag{1.6}$$

Take $B = \sum_{k=1}^m \|f_k\|^2$; then the upper bound in (1.6) holds automatically by (1.2). So we only need to show the existence of $A$. For any $f, g \in V$, we have:

$$\begin{aligned}
|\phi(f) - \phi(g)| &= \Big|\sum_{k=1}^m \big(|\langle f, f_k\rangle|^2 - |\langle g, f_k\rangle|^2\big)\Big| \\
&= \Big|\sum_{k=1}^m \big(|\langle f, f_k\rangle| - |\langle g, f_k\rangle|\big)\big(|\langle f, f_k\rangle| + |\langle g, f_k\rangle|\big)\Big| \\
&\le \sum_{k=1}^m |\langle f - g, f_k\rangle| \big(\|f\|\|f_k\| + \|g\|\|f_k\|\big) \\
&\le \sum_{k=1}^m \|f - g\| \big(\|f\| + \|g\|\big)\|f_k\|^2 \\
&= \|f - g\| \big(\|f\| + \|g\|\big) \sum_{k=1}^m \|f_k\|^2.
\end{aligned}$$

Then $\phi(f)$ tends to $\phi(g)$ as $f$ tends to $g$, so $\phi$ is continuous. Moreover, the set $\{f \in V \mid \|f\| = 1\}$, the unit sphere in the finite-dimensional space $V$, is compact. By the Weierstrass theorem, $\phi$ attains a minimum on the unit sphere. Choose $A = \min_{\|f\|=1} \phi(f)$; we will prove that $\phi(f) > 0$ when $\|f\| = 1$, so that $A > 0$. Suppose on the contrary that $\phi(f) = 0$ for some $f \in V$ with $\|f\| = 1$. We have:

$$\phi(f) = \sum_{k=1}^m |\langle f, f_k\rangle|^2 = 0.$$

It implies that

$$|\langle f, f_k\rangle| = 0, \quad \forall k \in \{1, 2, \ldots, m\}.$$

Besides, $f \in V = \mathrm{span}\{f_k\}_{k=1}^m$, so there exists $\{\alpha_k\}_{k=1}^m$ with $f = \sum_{k=1}^m \alpha_k f_k$, and

$$1 = \|f\|^2 = \langle f, f\rangle = \Big\langle f, \sum_{k=1}^m \alpha_k f_k\Big\rangle = \sum_{k=1}^m \bar{\alpha}_k \langle f, f_k\rangle = 0.$$

It is impossible! Thus $\phi(f) > 0$ for all $f \in V$ with $\|f\| = 1$, and therefore $A = \min_{\|f\|=1} \phi(f) > 0$ satisfies

$$\sum_{k=1}^m |\langle f, f_k\rangle|^2 \ge A = A\|f\|^2, \quad \forall f \in V, \ \|f\| = 1.$$

It remains to remove the restriction $\|f\| = 1$. Indeed, if $f = 0$ then

$$0 = \sum_{k=1}^m |\langle f, f_k\rangle|^2 = A\|f\|^2.$$

If $f \neq 0$, we can choose $f' = \frac{f}{\|f\|}$, so that $\|f'\| = 1$. Applying the above result,

$$\sum_{k=1}^m |\langle f', f_k\rangle|^2 \ge A,$$

which is equivalent to

$$\sum_{k=1}^m \Big|\Big\langle \frac{f}{\|f\|}, f_k\Big\rangle\Big|^2 \ge A.$$

Hence,

$$\sum_{k=1}^m |\langle f, f_k\rangle|^2 \ge A\|f\|^2, \quad \forall f \in V.$$

Conversely, we claim that if $\{f_k\}_{k=1}^m$ is a frame for $V$ then $\mathrm{span}\{f_k\}_{k=1}^m = V$. Indeed, suppose on the contrary that $\{f_k\}_{k=1}^m$ is a frame for $V$ but $\mathrm{span}\{f_k\}_{k=1}^m \subsetneq V$. Denote $W = \mathrm{span}\{f_k\}_{k=1}^m \subsetneq V$. Then there exists $f \in V$ with $f \notin W$; i.e., we can write $f = g + h$ where $g \in W^{\perp} \setminus \{0\}$ and $h \in W$ (by Lemma 1.3).

So $g$ is orthogonal to $\mathrm{span}\{f_k\}_{k=1}^m$, which implies $\sum_{k=1}^m |\langle g, f_k\rangle|^2 = 0$. By the definition of a frame, $g = 0$. Contradiction! Hence, $\mathrm{span}\{f_k\}_{k=1}^m = V$.
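In $\mathbb{R}^n$ or $\mathbb{C}^n$, the span condition of Theorem 1.4 is a rank condition, so whether a finite family is a frame can be decided by Gaussian elimination. A pure-Python sketch (the four-vector family is the same one used in Example 1.6 below; the two-vector family is an illustrative non-example):

```python
def rank(rows, tol=1e-10):
    """Rank of a real matrix (list of rows) via Gaussian elimination."""
    m = [list(row) for row in rows]
    r = 0
    for col in range(len(m[0])):
        # find a pivot in this column at or below row r
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > tol), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > tol:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Four vectors spanning R^3: a frame for R^3 by Theorem 1.4.
fks = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
assert rank(fks) == 3

# Two vectors cannot span R^3, so they are not a frame for R^3.
assert rank([[1, 0, 0], [1, 1, 1]]) == 2
print("span checks passed")
```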

Remark 1.5

• A frame might contain more elements than needed to be a basis of $V$, since we only require $\mathrm{span}\{f_k\}_{k=1}^m = V$. In particular, if $\{f_k\}_{k=1}^m$ is a frame for $V$ and $\{g_k\}_{k=1}^n$ is an arbitrary finite collection of vectors in $V$, then $\{f_k\}_{k=1}^m \cup \{g_k\}_{k=1}^n$ is also a frame for $V$, because $\mathrm{span}\big(\{f_k\}_{k=1}^m \cup \{g_k\}_{k=1}^n\big) = \mathrm{span}\{f_k\}_{k=1}^m = V$.

• A frame which is not a basis is said to be overcomplete or redundant.

Example 1.6

        1 0 0 1 3         V = R , f1 =  0  , f2 =  1  , f3 =  0  , f4 =  1 . 0 0 1 1   a 3   For all f ∈ R , f=  b  for a, b, c ∈ R, we have: c

4 2 2 2 2 2 2 2 2 2 2 2 a + b + c ≤ ∑ |h f , fki| = a + b + c + (a + b + c) ≤ 4(a + b + c ). k=1

So $\{f_k\}_{k=1}^4$ is a frame for $\mathbb{R}^3$ but not a basis of $\mathbb{R}^3$.

Now, we consider a vector space $V$ equipped with a frame $\{f_k\}_{k=1}^m$ and define a linear mapping

$$T : \mathbb{C}^m \longrightarrow V, \qquad \{c_k\}_{k=1}^m \longmapsto \sum_{k=1}^m c_k f_k.$$

$T$ is called the pre-frame operator (or synthesis operator). The adjoint operator is given by

$$T^* : V \longrightarrow \mathbb{C}^m, \qquad f \longmapsto \{\langle f, f_k\rangle\}_{k=1}^m.$$


Indeed, we have

$$\langle T\{c_k\}_{k=1}^m, f\rangle = \Big\langle \sum_{k=1}^m c_k f_k, f\Big\rangle = \sum_{k=1}^m c_k \langle f_k, f\rangle = \sum_{k=1}^m c_k \overline{\langle f, f_k\rangle} = \big\langle \{c_k\}_{k=1}^m, \{\langle f, f_k\rangle\}_{k=1}^m \big\rangle = \langle \{c_k\}_{k=1}^m, T^* f\rangle.$$

$T^*$ is called the analysis operator. By composing $T$ with its adjoint $T^*$, we obtain the frame operator

$$S = T \circ T^* : V \longrightarrow V, \qquad f \longmapsto \sum_{k=1}^m \langle f, f_k\rangle f_k.$$

So we have $\langle Sf, f\rangle = \sum_{k=1}^m |\langle f, f_k\rangle|^2$ and

$$A\|f\|^2 \le \langle Sf, f\rangle \le B\|f\|^2, \quad \forall f \in V. \tag{1.7}$$

Thus, the lower frame condition can be considered as a "lower bound" for $S$, and the upper frame condition as an "upper bound" for $S$.
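In matrix language, $S$ is represented by the $n \times n$ matrix $\sum_k f_k f_k^*$. The identity $\langle Sf, f\rangle = \sum_k |\langle f, f_k\rangle|^2$ and the bounds (1.7) can be checked in pure Python for the four vectors of Example 1.6, whose frame bounds are $A = 1$, $B = 4$:

```python
def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

fks = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]   # the frame of Example 1.6

def S(f):
    """Frame operator S f = sum_k <f, f_k> f_k."""
    out = [0.0, 0.0, 0.0]
    for fk in fks:
        c = ip(f, fk)
        out = [o + c * x for o, x in zip(out, fk)]
    return out

f = [1.0, -2.0, 0.5]
lhs = ip(S(f), f)
rhs = sum(ip(f, fk) ** 2 for fk in fks)
assert abs(lhs - rhs) < 1e-12           # <Sf, f> = sum |<f, f_k>|^2
norm2 = ip(f, f)
assert 1 * norm2 <= lhs <= 4 * norm2    # the bounds (1.7) with A = 1, B = 4
print("S f =", S(f))
```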

Definition 1.7 A frame $\{f_k\}_{k=1}^m$ is tight if we can choose $A = B$ in Definition 1.1, i.e.,

$$\sum_{k=1}^m |\langle f, f_k\rangle|^2 = A\|f\|^2, \quad \forall f \in V. \tag{1.8}$$

Proposition 1.8 [1] Assume that $\{f_k\}_{k=1}^m$ is a tight frame for $V$ with frame bound $A$. Then $S = A\,\mathrm{Id}$ ($\mathrm{Id}$ is the identity operator on $V$), and

$$f = \frac{1}{A}\sum_{k=1}^m \langle f, f_k\rangle f_k, \quad \forall f \in V. \tag{1.9}$$

In order to prove our proposition, we first prove the following lemma.

Lemma 1.9 [1] Let $H$ be a Hilbert space, let $U : H \longrightarrow H$ be a bounded operator, and assume that $\langle Ux, x\rangle = 0$ for all $x \in H$. Then the following holds:


(i) If H is a complex Hilbert space, then U = 0.

(ii) If H is a real Hilbert space and U is self-adjoint, then U = 0.

PROOF :

(i) If $H$ is a complex Hilbert space, then we have the polarization identity:

$$4\langle Ux, y\rangle = \langle U(x+y), x+y\rangle - \langle U(x-y), x-y\rangle + i\langle U(x+iy), x+iy\rangle - i\langle U(x-iy), x-iy\rangle, \quad \forall x, y \in H. \tag{1.10}$$

In detail, we have:

$$\begin{aligned}
\langle U(x+y), x+y\rangle &= \langle Ux, x\rangle + \langle Ux, y\rangle + \langle Uy, x\rangle + \langle Uy, y\rangle, \\
\langle U(x-y), x-y\rangle &= \langle Ux, x\rangle - \langle Ux, y\rangle - \langle Uy, x\rangle + \langle Uy, y\rangle, \\
i\langle U(x+iy), x+iy\rangle &= i\langle Ux, x\rangle - i^2\langle Ux, y\rangle + i^2\langle Uy, x\rangle - i^3\langle Uy, y\rangle, \\
i\langle U(x-iy), x-iy\rangle &= i\langle Ux, x\rangle + i^2\langle Ux, y\rangle - i^2\langle Uy, x\rangle - i^3\langle Uy, y\rangle,
\end{aligned}$$

and we obtain (1.10) by a direct calculation. Since $\langle Ux, x\rangle = 0$ for all $x \in H$, (1.10) gives $\langle Ux, y\rangle = 0$ for all $x, y \in H$. That means $U = 0$.

(ii) If $H$ is a real Hilbert space and $U$ is self-adjoint, let $\{e_k\}_{k=1}^{\infty}$ be an orthonormal basis for $H$. For arbitrary $j, k \in \mathbb{N}$, we have:

$$0 = \langle U(e_k + e_j), e_k + e_j\rangle = \langle Ue_k, e_j\rangle + \langle Ue_j, e_k\rangle = \langle Ue_k, e_j\rangle + \langle e_j, Ue_k\rangle = 2\langle Ue_k, e_j\rangle.$$

It implies $U = 0$.

Now, we’re going to prove Proposition 1.8.

PROOF Let $U = S - A\,\mathrm{Id}$. We have:

$$\langle Ux, y\rangle = \langle (S - A\,\mathrm{Id})x, y\rangle = \langle Sx, y\rangle - A\langle \mathrm{Id}\,x, y\rangle = \langle x, Sy\rangle - A\langle x, \mathrm{Id}\,y\rangle = \langle x, (S - A\,\mathrm{Id})y\rangle = \langle x, Uy\rangle.$$

Therefore, $U$ is self-adjoint. Moreover, by (1.8), $\langle Uf, f\rangle = \langle Sf, f\rangle - A\|f\|^2 = 0$ for all $f \in V$. Applying Lemma 1.9, we get

$$S - A\,\mathrm{Id} = 0, \quad \text{i.e.,} \quad S = A\,\mathrm{Id},$$

and then

$$Sf = Af, \quad \forall f \in V.$$

Thus

$$f = \frac{1}{A}Sf = \frac{1}{A}\sum_{k=1}^m \langle f, f_k\rangle f_k, \quad \forall f \in V.$$
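A classical illustration of Proposition 1.8 (my own choice, not from the text) is the "Mercedes-Benz" frame: three unit vectors in $\mathbb{R}^2$ at mutual angles of $120^\circ$ form a tight frame with bound $A = \frac{3}{2}$, and (1.9) reconstructs any $f$ exactly:

```python
import math

# Three unit vectors at mutual angles of 120 degrees: a tight frame for R^2.
angles = [math.pi / 2 + k * 2 * math.pi / 3 for k in range(3)]
frame = [(math.cos(t), math.sin(t)) for t in angles]
A = 1.5  # tight frame bound: (number of vectors) / (dimension) = 3/2

def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

f = (0.3, -1.7)
# Tightness (1.8): sum_k |<f, f_k>|^2 = A ||f||^2
assert abs(sum(ip(f, fk) ** 2 for fk in frame) - A * ip(f, f)) < 1e-12
# Reconstruction (1.9): f = (1/A) sum_k <f, f_k> f_k
rec = tuple(sum(ip(f, fk) * fk[i] for fk in frame) / A for i in range(2))
assert all(abs(a - b) < 1e-12 for a, b in zip(rec, f))
print("reconstructed:", rec)
```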

Lemma 1.10 Let $V, W$ be finite-dimensional vector spaces, equipped with inner products $\langle\cdot,\cdot\rangle_V$ and $\langle\cdot,\cdot\rangle_W$, respectively. Assume that $\dim V = n$ and $\dim W = m$. Given a linear operator $T : V \longrightarrow W$ with adjoint operator $T^* : W \longrightarrow V$, the vector spaces $\mathrm{Ker}\,T$ and $\mathrm{Im}\,T^*$ are subspaces of $V$, and $\mathrm{Ker}\,T = (\mathrm{Im}\,T^*)^{\perp}$. In particular, the linear map $T$ induces an orthogonal decomposition of $V$ (and, via $T^*$, of $W$) given by

$$V = \mathrm{Ker}\,T \oplus^{\perp} \mathrm{Im}\,T^*, \qquad W = \mathrm{Ker}\,T^* \oplus^{\perp} \mathrm{Im}\,T.$$

PROOF Let $v \in \mathrm{Ker}\,T$; then $Tv = 0$. For any $u \in W$, we have $0 = \langle Tv, u\rangle_W = \langle v, T^*u\rangle_V$. So $v \in (\mathrm{Im}\,T^*)^{\perp}$, and thus $\mathrm{Ker}\,T \subseteq (\mathrm{Im}\,T^*)^{\perp}$. Conversely, for $v \in (\mathrm{Im}\,T^*)^{\perp}$ we have $\langle v, T^*u\rangle_V = 0$ for all $u \in W$, which is equivalent to

$$\langle Tv, u\rangle_W = 0, \quad \forall u \in W.$$

Choosing $u = Tv \in W$ gives $\|Tv\|^2 = 0$; therefore $Tv = 0$ and $v \in \mathrm{Ker}\,T$. Thus, $\mathrm{Ker}\,T = (\mathrm{Im}\,T^*)^{\perp}$. Similarly, we have $\mathrm{Ker}\,T^* = (\mathrm{Im}\,T)^{\perp}$. Applying Lemma 1.3, we have

$$V = \mathrm{Ker}\,T \oplus^{\perp} \mathrm{Im}\,T^*, \qquad W = \mathrm{Ker}\,T^* \oplus^{\perp} \mathrm{Im}\,T.$$


An interpretation of (1.9) is that if $\{f_k\}_{k=1}^m$ is a tight frame and we want to express $f \in V$ as a linear combination $f = \sum_{k=1}^m c_k f_k$, we can simply define $g_k = \frac{1}{A}f_k$ and take $c_k = \langle f, g_k\rangle$. Formula (1.9) is similar to the representation

$$f = \sum_{k=1}^m \langle f, e_k\rangle e_k \tag{1.11}$$

via an orthonormal basis $\{e_k\}_{k=1}^m$: the only difference is the factor $\frac{1}{A}$ in (1.9). For general frames we now prove that we still have a representation of each $f \in V$ of the form $f = \sum_{k=1}^m \langle f, g_k\rangle f_k$ for an appropriate choice of $\{g_k\}_{k=1}^m$. The theorem obtained is one of the most important results about frames, and (1.12) is called the frame decomposition:

Theorem 1.11 [1] Let $\{f_k\}_{k=1}^m$ be a frame for $V$ with frame operator $S$. Then

(i) S is invertible and self-adjoint.

(ii) Every f ∈ V can be represented as

$$f = \sum_{k=1}^m \langle f, S^{-1}f_k\rangle f_k = \sum_{k=1}^m \langle f, f_k\rangle S^{-1}f_k. \tag{1.12}$$

(iii) If $f \in V$ also has the representation $f = \sum_{k=1}^m c_k f_k$ for some scalar coefficients $\{c_k\}_{k=1}^m$, then

$$\sum_{k=1}^m |c_k|^2 = \sum_{k=1}^m |\langle f, S^{-1}f_k\rangle|^2 + \sum_{k=1}^m |c_k - \langle f, S^{-1}f_k\rangle|^2. \tag{1.13}$$

PROOF (i) $S = T \circ T^*$ is self-adjoint. Now we prove that $S$ is bijective. Indeed, let $f \in V$ and assume that $Sf = 0$. Then

$$0 = \langle Sf, f\rangle = \sum_{k=1}^m |\langle f, f_k\rangle|^2 \ge A\|f\|^2.$$

Therefore $f = 0$, so $S$ is injective. Moreover, since $V = \mathrm{span}\{f_k\}_{k=1}^m$, the operator $T$ is surjective; i.e., for $f \in V$ there exists $\{g_k\}_{k=1}^m \in \mathbb{C}^m$ such that $T\{g_k\}_{k=1}^m = f$. Rewrite

$$\{g_k\}_{k=1}^m = \{g'_k\}_{k=1}^m + \{h_k\}_{k=1}^m,$$


where $\{g'_k\}_{k=1}^m \in (\mathrm{Ker}\,T)^{\perp}$ and $\{h_k\}_{k=1}^m \in \mathrm{Ker}\,T$. Then

$$f = T\{g_k\}_{k=1}^m = T\{g'_k\}_{k=1}^m + T\{h_k\}_{k=1}^m = T\{g'_k\}_{k=1}^m.$$

By Lemma 1.10, we have $(\mathrm{Ker}\,T)^{\perp} = \mathrm{Im}\,T^*$. Therefore, for $f \in V$ there exists $\{g'_k\}_{k=1}^m \in \mathrm{Im}\,T^*$ such that $T\{g'_k\}_{k=1}^m = f$. So $V \subset T(\mathrm{Im}\,T^*) = \mathrm{Im}(TT^*) = \mathrm{Im}\,S$. Thus $S$ is surjective. Hence, $S$ is invertible.

(ii) For any $u, v \in V$, since $S$ is self-adjoint we have

$$\langle S^{-1}u, v\rangle = \langle S^{-1}u, S \circ S^{-1}v\rangle = \langle S \circ S^{-1}u, S^{-1}v\rangle = \langle u, S^{-1}v\rangle.$$

So $S^{-1}$ is also self-adjoint. Therefore,

$$f = S \circ S^{-1}f = \sum_{k=1}^m \langle S^{-1}f, f_k\rangle f_k = \sum_{k=1}^m \langle f, S^{-1}f_k\rangle f_k.$$

Moreover,

$$f = S^{-1} \circ S f = S^{-1}\Big(\sum_{k=1}^m \langle f, f_k\rangle f_k\Big) = \sum_{k=1}^m \langle f, f_k\rangle S^{-1}f_k.$$

(iii) We can write

$$\{c_k\}_{k=1}^m = \big\{c_k - \langle f, S^{-1}f_k\rangle\big\}_{k=1}^m + \big\{\langle f, S^{-1}f_k\rangle\big\}_{k=1}^m.$$

Since

$$\sum_{k=1}^m \big(c_k - \langle f, S^{-1}f_k\rangle\big) f_k = \sum_{k=1}^m c_k f_k - \sum_{k=1}^m \langle f, S^{-1}f_k\rangle f_k = f - f = 0,$$

we have

$$\begin{cases} \big\{c_k - \langle f, S^{-1}f_k\rangle\big\}_{k=1}^m \in \mathrm{Ker}\,T = (\mathrm{Im}\,T^*)^{\perp}, \\ \big\{\langle f, S^{-1}f_k\rangle\big\}_{k=1}^m = \big\{\langle S^{-1}f, f_k\rangle\big\}_{k=1}^m \in \mathrm{Im}\,T^*. \end{cases}$$

Thus, by the Pythagorean theorem, we get

$$\sum_{k=1}^m |c_k|^2 = \sum_{k=1}^m |\langle f, S^{-1}f_k\rangle|^2 + \sum_{k=1}^m |c_k - \langle f, S^{-1}f_k\rangle|^2.$$
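To make (1.12) concrete, consider the frame $\{e_1, e_1 - e_2, e_1 + e_2\}$ for $\mathbb{R}^2$ (the same family as in Example 1.15 below). Its frame operator is $S = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}$, so $S^{-1}$ is immediate; a pure-Python sketch:

```python
def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

frame = [(1.0, 0.0), (1.0, -1.0), (1.0, 1.0)]

def S_inv(v):
    """Inverse frame operator: S = [[3, 0], [0, 2]] for this frame."""
    return (v[0] / 3.0, v[1] / 2.0)

dual = [S_inv(fk) for fk in frame]      # canonical dual frame {S^{-1} f_k}

f = (0.7, -2.0)
coeffs = [ip(f, g) for g in dual]       # frame coefficients <f, S^{-1} f_k>
rec = tuple(sum(c * fk[i] for c, fk in zip(coeffs, frame)) for i in range(2))
assert all(abs(a - b) < 1e-12 for a, b in zip(rec, f))

# (1.13): any other representation has strictly larger l2-norm.
kernel = (-2.0, 1.0, 1.0)               # sum_k kernel_k * f_k = 0
alt = [c + 0.5 * k for c, k in zip(coeffs, kernel)]
assert sum(c * c for c in coeffs) < sum(a * a for a in alt)
print("frame coefficients:", coeffs)
```

The last two lines illustrate (1.13): perturbing the frame coefficients along the kernel of the pre-frame operator still represents $f$, but increases the $\ell^2$-norm.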


Remark 1.12

• Every frame in a finite-dimensional space contains a subfamily which is a basis, because $\mathrm{span}\{f_k\}_{k=1}^m = V$.

• If $\{f_k\}_{k=1}^m$ is a frame but not a basis, then there exists a non-zero sequence $\{d_k\}_{k=1}^m$ such that $\sum_{k=1}^m d_k f_k = 0$, and

$$f = \sum_{k=1}^m \langle f, S^{-1}f_k\rangle f_k + \sum_{k=1}^m d_k f_k = \sum_{k=1}^m \big(\langle f, S^{-1}f_k\rangle + d_k\big) f_k.$$

It implies that $f$ has many representations.

• Theorem 1.11 shows that $\{\langle f, S^{-1}f_k\rangle\}_{k=1}^m$ has minimal $\ell^2$-norm among all sequences $\{c_k\}_{k=1}^m$ for which $f = \sum_{k=1}^m c_k f_k$. The numbers $\langle f, S^{-1}f_k\rangle$, $k = 1, 2, 3, \ldots, m$, are called frame coefficients.

• $\{S^{-1}f_k\}_{k=1}^m$ is also a frame for $V$, because $\mathrm{span}\{S^{-1}f_k\}_{k=1}^m = V$. It is called the canonical dual of $\{f_k\}_{k=1}^m$.

Theorem 1.11 gives some special information in case $\{f_k\}_{k=1}^m$ is a basis:

Proposition 1.13 [1] Assume that $\{f_k\}_{k=1}^m$ is a basis for $V$. Then there exists a unique family $\{g_k\}_{k=1}^m$ in $V$ such that

$$f = \sum_{k=1}^m \langle f, g_k\rangle f_k, \quad \forall f \in V. \tag{1.14}$$

In terms of the frame operator, $\{g_k\}_{k=1}^m = \{S^{-1}f_k\}_{k=1}^m$. Furthermore, $\langle f_j, g_k\rangle = \delta_{jk}$.

PROOF The existence of $\{g_k\}_{k=1}^m$ follows from Theorem 1.11. Now, suppose that there exists $\{g'_k\}_{k=1}^m$ satisfying

$$f = \sum_{k=1}^m \langle f, g_k\rangle f_k = \sum_{k=1}^m \langle f, g'_k\rangle f_k.$$

Then $\sum_{k=1}^m \langle f, g_k - g'_k\rangle f_k = 0$. Since $\{f_k\}_{k=1}^m$ is a basis for $V$, we get $\langle f, g_k - g'_k\rangle = 0$ for all $f \in V$. Therefore $g_k = g'_k$ for all $k = 1, 2, \ldots, m$; i.e., $\{g_k\}_{k=1}^m$ is unique!


Moreover, since $\{f_k\}_{k=1}^m$ is a basis and $f_j = \sum_{k=1}^m \langle f_j, g_k\rangle f_k$, then

$$\begin{cases} \langle f_j, g_k\rangle = 1 & \text{if } j = k, \\ \langle f_j, g_k\rangle = 0 & \text{if } j \neq k, \end{cases}$$

i.e., $\langle f_j, g_k\rangle = \delta_{jk}$.

We can give an intuitive explanation of why frames are important in signal transmission. Assume that we want to transmit the signal $f$ belonging to a vector space $V$ from a transmitter $A$ to a receiver $R$. If both $A$ and $R$ have knowledge of a frame $\{f_k\}_{k=1}^m$ for $V$, this can be done if $A$ transmits the frame coefficients $\{\langle f, S^{-1}f_k\rangle\}_{k=1}^m$; based on knowledge of these numbers, the receiver $R$ can reconstruct the signal $f$ using the frame decomposition. Now assume that $R$ receives a noisy signal, meaning a perturbation $\{\langle f, S^{-1}f_k\rangle + c_k\}_{k=1}^m$ of the correct frame coefficients. Based on the received coefficients, $R$ will claim that the transmitted signal was

$$\sum_{k=1}^m \big(\langle f, S^{-1}f_k\rangle + c_k\big) f_k = \sum_{k=1}^m \langle f, S^{-1}f_k\rangle f_k + \sum_{k=1}^m c_k f_k = f + \sum_{k=1}^m c_k f_k.$$

This differs from the correct signal $f$ by the noise $\sum_{k=1}^m c_k f_k$. If $\{f_k\}_{k=1}^m$ is overcomplete, the pre-frame operator $T\{c_k\}_{k=1}^m = \sum_{k=1}^m c_k f_k$ has a non-trivial kernel, implying that parts of the noise contribution might add up to zero and cancel. This will never happen if $\{f_k\}_{k=1}^m$ is an orthonormal basis! In that case $\|\sum_{k=1}^m c_k f_k\|^2 = \sum_{k=1}^m |c_k|^2$, so each noise contribution will make the reconstruction worse.

We have already seen that for given $f \in V$, the frame coefficients $\{\langle f, S^{-1}f_k\rangle\}_{k=1}^m$ have minimal $\ell^2$-norm among all sequences $\{c_k\}_{k=1}^m$ for which $f = \sum_{k=1}^m c_k f_k$. We can also choose to minimize the norm in a space other than $\ell^2$; we now show the existence of coefficients minimizing the $\ell^1$-norm.

Theorem 1.14 [1] Let $\{f_k\}_{k=1}^m$ be a frame for a finite-dimensional vector space $V$. Given $f \in V$, there exist coefficients $\{d_k\}_{k=1}^m$ in $\mathbb{C}^m$ such that $f = \sum_{k=1}^m d_k f_k$ and

$$\sum_{k=1}^m |d_k| = \inf\Big\{\sum_{k=1}^m |c_k| : f = \sum_{k=1}^m c_k f_k\Big\}. \tag{1.15}$$


PROOF Fix $f \in V$; it is clear that we can choose a set of coefficients $\{c_k\}_{k=1}^m$ such that $f = \sum_{k=1}^m c_k f_k$. Let

$$r := \sum_{k=1}^m |c_k|,$$

and

$$M := \Big\{\{d_k\}_{k=1}^m \in \mathbb{C}^m : \sum_{k=1}^m |d_k| \le r\Big\}.$$

$M$ is closed and bounded in the finite-dimensional space $\mathbb{C}^m$, so $M$ is compact. Then the closed set

$$N = \Big\{\{d_k\}_{k=1}^m \in M \;\Big|\; f = \sum_{k=1}^m d_k f_k\Big\}$$

is also compact. Consider

$$\phi : \mathbb{C}^m \longrightarrow \mathbb{R}, \qquad \{d_k\}_{k=1}^m \longmapsto \sum_{k=1}^m |d_k|.$$

We have

$$|\phi(\{c_k\}_{k=1}^m) - \phi(\{d_k\}_{k=1}^m)| = \Big|\sum_{k=1}^m \big(|c_k| - |d_k|\big)\Big| \le \sum_{k=1}^m |c_k - d_k| = \phi(\{c_k\}_{k=1}^m - \{d_k\}_{k=1}^m).$$

So $\phi(\{c_k\}_{k=1}^m)$ tends to $\phi(\{d_k\}_{k=1}^m)$ as $\{c_k\}_{k=1}^m \to \{d_k\}_{k=1}^m$; thus $\phi$ is continuous. Applying the Weierstrass theorem, $\phi$ attains a minimum on $N$. In other words, there exist coefficients $\{d_k\}_{k=1}^m$ in $\mathbb{C}^m$ such that $f = \sum_{k=1}^m d_k f_k$ and

$$\sum_{k=1}^m |d_k| = \inf\Big\{\sum_{k=1}^m |c_k| : f = \sum_{k=1}^m c_k f_k\Big\}.$$


There are some important differences between Theorem 1.11 and Theorem 1.14. In Theorem 1.11 we find the sequence minimizing the $\ell^2$-norm of the coefficients in the expansion of $f$ explicitly; it is unique, and it depends linearly on $f$. On the other hand, Theorem 1.14 only gives the existence of an $\ell^1$-minimizer, and it may not be unique.

Example 1.15

Let $\{e_k\}_{k=1}^2$ be an orthonormal basis for a two-dimensional vector space $V$ with inner product $\langle\cdot,\cdot\rangle$. Let

$$f_1 = e_1, \quad f_2 = e_1 - e_2, \quad f_3 = e_1 + e_2.$$

Then $\{f_k\}_{k=1}^3$ is a frame for $V$. Indeed, any $f \in V$ can be represented as

$$f = c_1 e_1 + c_2 e_2, \quad \text{for some scalars } c_1, c_2 \in \mathbb{R}.$$

We have

$$\sum_{k=1}^3 |\langle f, f_k\rangle|^2 = |c_1|^2 + |c_1 - c_2|^2 + |c_1 + c_2|^2 = 3c_1^2 + 2c_2^2.$$

So,

$$2\|f\|^2 \le \sum_{k=1}^3 |\langle f, f_k\rangle|^2 \le 3\|f\|^2, \quad \forall f \in V.$$

Assume $f = \sum_{k=1}^3 d_k f_k$; then

$$\begin{cases} d_1 + d_2 + d_3 = c_1, \\ -d_2 + d_3 = c_2. \end{cases}$$

This system can have at least two solutions $\{d_k\}_{k=1}^3$ minimizing $\sum_{k=1}^3 |d_k|$. Indeed, take $f = e_1$; then

$$\begin{cases} d_1 + d_2 + d_3 = 1, \\ -d_2 + d_3 = 0. \end{cases}$$


So, we can choose

$$d_1 = 1, \quad d_2 = d_3 = 0,$$

or

$$d_1 = d_2 = d_3 = \frac{1}{3},$$

and both cases give

$$\sum_{k=1}^3 |d_k| = 1.$$
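A quick pure-Python verification of the two minimizers above:

```python
frame = [(1.0, 0.0), (1.0, -1.0), (1.0, 1.0)]  # f1 = e1, f2 = e1 - e2, f3 = e1 + e2
target = (1.0, 0.0)                            # f = e1

def synthesize(d):
    return tuple(sum(dk * fk[i] for dk, fk in zip(d, frame)) for i in range(2))

d_a = (1.0, 0.0, 0.0)
d_b = (1.0 / 3, 1.0 / 3, 1.0 / 3)
for d in (d_a, d_b):
    assert all(abs(x - y) < 1e-12 for x, y in zip(synthesize(d), target))
    assert abs(sum(abs(x) for x in d) - 1.0) < 1e-12   # both have l1-norm 1
print("two distinct minimizers, both with l1-norm 1")
```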

Even if the minimizer is unique, it may not depend linearly on f .

Example 1.16

Let $\{e_1, e_2\}$ be the canonical orthonormal basis for $\mathbb{C}^2$ and consider the frame $\{f_k\}_{k=1}^3 = \{e_1, e_2, e_1 + e_2\}$. Then

$$\{c_k^{(1)}\}_{k=1}^3 = \{1, 0, 0\}, \qquad \{c_k^{(2)}\}_{k=1}^3 = \{0, 1, 0\}$$

minimize the $\ell^1$-norm in the representations of $e_1$ and $e_2$, respectively. Clearly, $e_1 + e_2 = \sum_{k=1}^3 (c_k^{(1)} + c_k^{(2)}) f_k$, but $\{c_k^{(1)} + c_k^{(2)}\}_{k=1}^3$ does not minimize the $\ell^1$-norm among all sequences representing $e_1 + e_2$ (the choice $\{0, 0, 1\}$ has smaller $\ell^1$-norm).

Theorem 1.17 [1] Let $\{f_k\}_{k=1}^m$ be a frame for a subspace $W$ of the vector space $V$. Then the orthogonal projection of $V$ onto $W$ is given by

$$Pf = \sum_{k=1}^m \langle f, S^{-1}f_k\rangle f_k, \quad f \in V. \tag{1.16}$$

PROOF It is enough to prove that if we define $P$ by (1.16), then

$$Pf = \begin{cases} f & \text{for } f \in W, \\ 0 & \text{for } f \in W^{\perp}. \end{cases}$$

Because of Theorem 1.11, $Pf = f$ for $f \in W$. Moreover, $S$ is a bijection on $W$, so $S^{-1}f_k \in W$. It implies $Pf = 0$ for $f \in W^{\perp}$.
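As an illustration (this concrete subspace and frame are my own choice, not from the text), take $W = \mathrm{span}\{e_1, e_2\} \subset \mathbb{R}^3$ with frame $\{e_1, e_2, e_1 + e_2\}$ for $W$. On $W$, the frame operator acts on the first two coordinates as $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, with inverse $\frac{1}{3}\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$:

```python
def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

frame = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]  # frame for the x-y plane

def S_inv(v):
    """Inverse of the frame operator on W, applied to a vector of W."""
    x, y = v[0], v[1]
    return ((2 * x - y) / 3.0, (-x + 2 * y) / 3.0, 0.0)

dual = [S_inv(fk) for fk in frame]

def P(f):
    """P f = sum_k <f, S^{-1} f_k> f_k, formula (1.16)."""
    out = [0.0, 0.0, 0.0]
    for g, fk in zip(dual, frame):
        c = ip(f, g)
        out = [o + c * x for o, x in zip(out, fk)]
    return out

f = [0.4, -1.1, 7.0]
assert all(abs(a - b) < 1e-12 for a, b in zip(P(f), [0.4, -1.1, 0.0]))
assert all(abs(x) < 1e-12 for x in P([0.0, 0.0, 5.0]))  # W-perp maps to 0
print("P f =", P(f))
```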


1.2 Frames in $\mathbb{C}^n$

The natural examples of finite-dimensional vector spaces are

$$\mathbb{R}^n = \{(c_1, c_2, \ldots, c_n) \mid c_i \in \mathbb{R}, \ i = 1, 2, \ldots, n\},$$

and

$$\mathbb{C}^n = \{(c_1, c_2, \ldots, c_n) \mid c_i \in \mathbb{C}, \ i = 1, 2, \ldots, n\}.$$

The latter is equipped with the inner product

$$\big\langle \{c_k\}_{k=1}^n, \{d_k\}_{k=1}^n \big\rangle = \sum_{k=1}^n c_k \bar{d}_k,$$

and the associated norm

$$\big\|\{c_k\}_{k=1}^n\big\| = \Big(\sum_{k=1}^n |c_k|^2\Big)^{1/2}.$$

From elementary linear algebra, we know many equivalent conditions for a set of vectors to constitute a basis for $\mathbb{C}^n$. Let us list the most important characterizations:

Theorem 1.18 [1] Consider $n$ vectors in $\mathbb{C}^n$ and write them as columns in an $n \times n$ matrix

$$\Lambda = \begin{pmatrix} \lambda_{11} & \lambda_{12} & \cdots & \lambda_{1n} \\ \lambda_{21} & \lambda_{22} & \cdots & \lambda_{2n} \\ \vdots & & & \vdots \\ \lambda_{n1} & \lambda_{n2} & \cdots & \lambda_{nn} \end{pmatrix}.$$

Then the following are equivalent:

(i) The columns in $\Lambda$ (i.e., the given vectors) constitute a basis for $\mathbb{C}^n$.

(ii) The rows in $\Lambda$ constitute a basis for $\mathbb{C}^n$.

(iii) The determinant of $\Lambda$ is non-zero.

(iv) $\Lambda$ is invertible.

(v) $\Lambda$ defines an injective mapping on $\mathbb{C}^n$.


(vi) $\Lambda$ defines a surjective mapping on $\mathbb{C}^n$.

(vii) The columns in $\Lambda$ are linearly independent.

(viii) $\Lambda$ has rank equal to $n$.

PROOF (vii) ↔ (i) ↔ (iii): If the columns in $\Lambda$ are linearly independent, then they form a maximal linearly independent set of $n$ vectors in $\mathbb{C}^n$, so they constitute a basis for $\mathbb{C}^n$. If the columns in $\Lambda$ are linearly dependent, then the determinant of $\Lambda$ is zero.

Assume the columns in $\Lambda$ are linearly independent, so that they constitute a basis for $\mathbb{C}^n$. Denote by $e = (e_1, e_2, \ldots, e_n)$ the canonical basis for $\mathbb{C}^n$, by $\alpha_1, \alpha_2, \ldots, \alpha_n$ the column vectors of $\Lambda$, and by $\Lambda^n(\mathbb{C}^n)^*$ the space of alternating $n$-linear forms on $\mathbb{C}^n$. So,

$$\det \Lambda = {\det}_e(\alpha_1, \alpha_2, \ldots, \alpha_n), \quad \text{where } \alpha_j = \sum_{i=1}^n \lambda_{ij} e_i.$$

The form ${\det}_e$ constitutes a basis for the one-dimensional space $\Lambda^n(\mathbb{C}^n)^*$, so we can write

$${\det}_\alpha = c\,{\det}_e \quad (c \in \mathbb{C}).$$

Thus

$$c \det(\alpha_1, \alpha_2, \ldots, \alpha_n) = {\det}_\alpha(\alpha_1, \alpha_2, \ldots, \alpha_n) = \det I_n = 1.$$

Hence,

$$\det \Lambda = \det(\alpha_1, \alpha_2, \ldots, \alpha_n) \neq 0.$$

(iii) ↔ (ii): We have

$$\det \Lambda = \det \Lambda^T, \quad \forall \Lambda \in M(n \times n, \mathbb{C}).$$

Then $\det \Lambda^T \neq 0$, which, by the above proof, is equivalent to the columns of $\Lambda^T$ constituting a basis for $\mathbb{C}^n$.


Therefore, the rows of $\Lambda$ constitute a basis for $\mathbb{C}^n$.

(iii) ↔ (iv): Suppose $\Lambda$ defines a homomorphism $f : \mathbb{C}^n \longrightarrow \mathbb{C}^n$. Then we have to prove that $f$ is an isomorphism if and only if $\det \Lambda \neq 0$. Assume that $a = (a_1, a_2, \ldots, a_n)$ is a basis for $\mathbb{C}^n$. Then

$$\big(f(a_1) \ f(a_2) \ \cdots \ f(a_n)\big) = \big(a_1 \ a_2 \ \cdots \ a_n\big)\Lambda.$$

Note that $f$ is an isomorphism if and only if $(f(a_1), f(a_2), \ldots, f(a_n))$ is linearly independent, and this is equivalent to $(\alpha_1, \alpha_2, \ldots, \alpha_n)$ being linearly independent, i.e., to $\det \Lambda = \det(f) \neq 0$.

(iv) ↔ (v) ↔ (vi): $\Lambda$ being invertible means that $\Lambda$ defines a bijective mapping on $\mathbb{C}^n$. Since $\mathbb{C}^n$ is finite-dimensional, $\Lambda$ defines an injective mapping if and only if it defines a surjective mapping on $\mathbb{C}^n$.

(viii) ↔ (vi): The rank of $\Lambda$ is defined as the dimension of its range $\mathrm{Im}\,\Lambda$, so $\Lambda$ is surjective if and only if its rank equals $n$.

Now, we turn to a discussion of frames for $\mathbb{C}^n$. Note that we consistently identify operators $A : \mathbb{C}^n \to \mathbb{C}^m$ with their matrix representations with respect to the canonical bases in $\mathbb{C}^n$ and $\mathbb{C}^m$. In case $\{f_k\}_{k=1}^m$ is a frame for $\mathbb{C}^n$, the pre-frame operator $T$ maps $\mathbb{C}^m$ onto $\mathbb{C}^n$, and its matrix with respect to the canonical bases is

$$T = \begin{pmatrix} | & | & \cdots & | \\ f_1 & f_2 & \cdots & f_m \\ | & | & \cdots & | \end{pmatrix},$$

i.e., the $n \times m$ matrix having the vectors $f_k$ as columns. Since $m$ vectors can span at most an $m$-dimensional space, we necessarily have $m \ge n$ when $\{f_k\}_{k=1}^m$ is a frame for $\mathbb{C}^n$; i.e., the matrix $T$ has at least as many columns as rows.


Theorem 1.19 [1] Let $\{f_k\}_{k=1}^m$ be a frame for $\mathbb{C}^n$. Then the vectors $f_k$ can be considered as the first $n$ coordinates of some vectors $g_k$ in $\mathbb{C}^m$ constituting a basis for $\mathbb{C}^m$. If $\{f_k\}_{k=1}^m$ is a tight frame, then the vectors $f_k$ are the first $n$ coordinates of some vectors $g_k$ in $\mathbb{C}^m$ constituting an orthogonal basis for $\mathbb{C}^m$.

PROOF Since $\{f_k\}_{k=1}^m$ is a frame for $\mathbb{C}^n$, we have

$$\mathbb{C}^n = \mathrm{span}\{f_k\}_{k=1}^m.$$

Therefore, $m \ge n$. Now, consider the pre-frame operator

$$T : \mathbb{C}^m \to \mathbb{C}^n, \qquad \{c_k\}_{k=1}^m \mapsto \sum_{k=1}^m c_k f_k,$$

corresponding to the matrix

$$T = \begin{pmatrix} | & | & \cdots & | \\ f_1 & f_2 & \cdots & f_m \\ | & | & \cdots & | \end{pmatrix}.$$

Then the adjoint operator

$$F : \mathbb{C}^n \to \mathbb{C}^m, \qquad x \mapsto \{\langle x, f_k\rangle\}_{k=1}^m,$$

corresponds to the matrix with rows $\bar{f}_1, \ldots, \bar{f}_m$:

$$F = \begin{pmatrix} - & \bar{f}_1 & - \\ - & \bar{f}_2 & - \\ & \vdots & \\ - & \bar{f}_m & - \end{pmatrix}.$$

We claim that $F$ is injective. Indeed, if $Fx = 0$ then

$$0 = \|Fx\|^2 = \sum_{k=1}^m |\langle x, f_k\rangle|^2.$$

Therefore, $|\langle x, f_k\rangle| = 0$ for all $k = 1, \ldots, m$. In addition, $\mathrm{span}\{f_k\}_{k=1}^m = \mathbb{C}^n$. Hence we get $x = 0$.


For $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{C}^n$,

$$Fx = \begin{pmatrix} \bar{f}_{11} & \bar{f}_{12} & \cdots & \bar{f}_{1n} \\ \bar{f}_{21} & \bar{f}_{22} & \cdots & \bar{f}_{2n} \\ \vdots & & & \vdots \\ \bar{f}_{m1} & \bar{f}_{m2} & \cdots & \bar{f}_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^n x_i \bar{f}_{1i} \\ \sum_{i=1}^n x_i \bar{f}_{2i} \\ \vdots \\ \sum_{i=1}^n x_i \bar{f}_{mi} \end{pmatrix} = \begin{pmatrix} \langle x, f_1\rangle \\ \langle x, f_2\rangle \\ \vdots \\ \langle x, f_m\rangle \end{pmatrix}.$$

Now, we extend $F$ to a bijection $\tilde{F}$ of $\mathbb{C}^m$ onto $\mathbb{C}^m$. Let $\{\alpha_k\}_{k=n+1}^m$ be an orthogonal basis for $(\mathrm{Im}\,F)^{\perp}$ in $\mathbb{C}^m$. Then the operator $\tilde{F}$ corresponding to the matrix

$$\tilde{F} = \begin{pmatrix} \bar{f}_{11} & \cdots & \bar{f}_{1n} & \alpha_{n+1}^1 & \cdots & \alpha_m^1 \\ \bar{f}_{21} & \cdots & \bar{f}_{2n} & \alpha_{n+1}^2 & \cdots & \alpha_m^2 \\ \vdots & & \vdots & \vdots & & \vdots \\ \bar{f}_{m1} & \cdots & \bar{f}_{mn} & \alpha_{n+1}^m & \cdots & \alpha_m^m \end{pmatrix}$$

is surjective. Indeed, since the first $n$ columns of $\tilde{F}$ span $\mathrm{Im}\,F$ and $\mathrm{span}\{\alpha_k\}_{k=n+1}^m = (\mathrm{Im}\,F)^{\perp}$, the columns of $\tilde{F}$ span $\mathbb{C}^m$. Hence, the columns of $\tilde{F}$ constitute a basis for $\mathbb{C}^m$. Applying Theorem 1.18, the rows of $\tilde{F}$ also constitute a basis for $\mathbb{C}^m$. That means $\bar{f}_k$, and hence $f_k$ (conjugating a basis again yields a basis), can be considered as the first $n$ coordinates of some vectors $g_k$ in $\mathbb{C}^m$ constituting a basis for $\mathbb{C}^m$.

If $\{f_k\}_{k=1}^m$ is a tight frame, then

$$f = \frac{1}{A}\sum_{k=1}^m \langle f, f_k\rangle f_k,$$

or $S = A\,\mathrm{Id}$. So,

$$\langle TT^* e_i, e_j\rangle = A\delta_{ij}, \quad \forall i, j = 1, 2, \ldots, n.$$

Thus, the $n$ rows in the matrix

$$T = \begin{pmatrix} | & | & \cdots & | \\ f_1 & f_2 & \cdots & f_m \\ | & | & \cdots & | \end{pmatrix}$$

are orthogonal, each with norm $\sqrt{A}$. By adding $m - n$ further rows that are mutually orthogonal, orthogonal to the rows of $T$, and also of norm $\sqrt{A}$, we can extend the matrix for $T$ to an $m \times m$ matrix whose rows are orthogonal and of equal norm; such a matrix is $\sqrt{A}$ times a unitary matrix, so its columns are orthogonal as well.

From Theorem 1.19 we can construct a frame or a tight frame for $\mathbb{C}^n$ by taking a basis or an orthogonal basis of $\mathbb{C}^m$, $m > n$, respectively, and keeping only the first $n$ coordinates of each vector.

Example 1.20

Consider

$$\begin{pmatrix} 1 \\ 1 \\ 1 \\ -1 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ 1 \\ -1 \\ 1 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix}, \quad \begin{pmatrix} -1 \\ 1 \\ 1 \\ 1 \end{pmatrix} \quad \text{in } \mathbb{C}^4.$$

These four vectors are mutually orthogonal. By Theorem 1.19,

$$\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, \quad \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}$$

is a tight frame for $\mathbb{C}^3$ with frame bound $A = 4$.

For a given $m \times n$ matrix $\Lambda$, the following proposition gives a condition for the rows to constitute a frame for $\mathbb{C}^n$.

Proposition 1.21 [1] For an $m \times n$ matrix

$$\Lambda = \begin{pmatrix} \lambda_{11} & \lambda_{12} & \cdots & \lambda_{1n} \\ \lambda_{21} & \lambda_{22} & \cdots & \lambda_{2n} \\ \vdots & & & \vdots \\ \lambda_{m1} & \lambda_{m2} & \cdots & \lambda_{mn} \end{pmatrix},$$

the following are equivalent:

(i) There exists a constant $A > 0$ such that

$$A\sum_{k=1}^n |c_k|^2 \le \big\|\Lambda\{c_k\}_{k=1}^n\big\|^2 \quad \text{for all } \{c_k\}_{k=1}^n \in \mathbb{C}^n.$$


(ii) The columns in $\Lambda$ constitute a basis for their span in $\mathbb{C}^m$.

(iii) The rows in $\Lambda$ constitute a frame for $\mathbb{C}^n$.

PROOF (i) ↔ (ii): Denote the columns in $\Lambda$ by $g_1, g_2, \ldots, g_n$; $g_i \in \mathbb{C}^m$. Then (i) is equivalent to

$$A\sum_{k=1}^n |c_k|^2 \le \Big\|\sum_{k=1}^n c_k g_k\Big\|^2.$$

So, by Theorem 1.4, $\{g_k\}_{k=1}^n$ is a frame for its span. Moreover, $\{g_k\}_{k=1}^n$ is linearly independent. Indeed, suppose that there exist $\{\alpha_k\}_{k=1}^n$, not all zero, such that

$$\sum_{k=1}^n \alpha_k g_k = 0.$$

Then

$$A\sum_{k=1}^n |\alpha_k|^2 \le \Big\|\sum_{k=1}^n \alpha_k g_k\Big\|^2 = 0$$

implies $\alpha_k = 0$ for all $k = 1, \ldots, n$, a contradiction. Therefore, $\{g_k\}_{k=1}^n$ is a basis for its span.

(i) ↔ (iii): Now, denote the rows in $\Lambda$ by $f_1, f_2, \ldots, f_m$; $f_i \in \mathbb{C}^n$. Then (i) is equivalent to

$$A\sum_{k=1}^n |c_k|^2 \le \sum_{k=1}^m \Bigg|\Bigg\langle f_k, \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}\Bigg\rangle\Bigg|^2,$$

which, by Theorem 1.4 (the upper bound being automatic by (1.2)), is equivalent to $\{f_k\}_{k=1}^m$ being a frame for $\mathbb{C}^n$.

As an illustration of Proposition 1.21, consider

$$\Lambda = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{pmatrix}.$$


It is clear that the rows $\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ constitute a frame for $\mathbb{C}^2$. The columns $\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$ constitute a basis for their span in $\mathbb{C}^3$, but their span is only a two-dimensional subspace of $\mathbb{C}^3$.

Corollary 1.22 [1] Let $\Lambda$ be an $m \times n$ matrix. Denote the columns by $g_1, g_2, \ldots, g_n$ and the rows by $f_1, f_2, \ldots, f_m$. Given $A, B > 0$, the vectors $\{f_k\}_{k=1}^m$ constitute a frame for $\mathbb{C}^n$ with bounds $A, B$ if and only if

$$A\sum_{k=1}^n |c_k|^2 \le \Big\|\sum_{k=1}^n c_k g_k\Big\|^2 \le B\sum_{k=1}^n |c_k|^2, \quad \forall \{c_k\}_{k=1}^n \in \mathbb{C}^n. \tag{1.17}$$

PROOF (1.17) is equivalent to

$$A\sum_{k=1}^n |c_k|^2 \le \sum_{k=1}^m \Bigg|\Bigg\langle f_k, \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}\Bigg\rangle\Bigg|^2 \le B\sum_{k=1}^n |c_k|^2, \quad \forall \{c_k\}_{k=1}^n \in \mathbb{C}^n.$$

Therefore, the vectors $\{f_k\}_{k=1}^m$ constitute a frame for $\mathbb{C}^n$ with bounds $A, B$.

Example 1.23

Consider

$$\begin{pmatrix} 0 \\ \frac{1}{\sqrt{3}} \\ \sqrt{\frac{2}{3}} \end{pmatrix}, \quad \begin{pmatrix} 0 \\ -\frac{1}{\sqrt{3}} \\ \sqrt{\frac{2}{3}} \end{pmatrix}, \quad \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} \sqrt{\frac{5}{6}} \\ 0 \\ \frac{1}{\sqrt{6}} \end{pmatrix}, \quad \begin{pmatrix} -\sqrt{\frac{5}{6}} \\ 0 \\ \frac{1}{\sqrt{6}} \end{pmatrix} \quad \text{in } \mathbb{C}^3. \tag{1.18}$$

Corresponding to these vectors we consider the matrix

$$\Lambda = \begin{pmatrix} 0 & \frac{1}{\sqrt{3}} & \sqrt{\frac{2}{3}} \\ 0 & -\frac{1}{\sqrt{3}} & \sqrt{\frac{2}{3}} \\ 0 & 1 & 0 \\ \sqrt{\frac{5}{6}} & 0 & \frac{1}{\sqrt{6}} \\ -\sqrt{\frac{5}{6}} & 0 & \frac{1}{\sqrt{6}} \end{pmatrix}.$$


We can check that the columns \{g_k\}_{k=1}^{3} are mutually orthogonal in C^5, each with length \sqrt{5/3}. Therefore,

\Big\| \sum_{k=1}^{3} c_k g_k \Big\|^2 = \frac{5}{3} \sum_{k=1}^{3} |c_k|^2

for all c_1, c_2, c_3 in C. By Corollary 1.22 we conclude that the vectors defined by (1.18) constitute a tight frame for C^3 with frame bound 5/3. For later use we state a special case of Corollary 1.22:

Corollary 1.24 [1] Let Λ be an m × n matrix. Then the following are equivalent:

∗ (i) Λ Λ = Idn×n , in which Idn×n is the n × n identity matrix.

m (ii) The columns g1, g2, ..., gn in Λ constitute an orthonormal system in C .

n (iii) The rows f1, f2, ..., fm in Λ constitute a tight frame for C with frame bound equal to 1.

PROOF Let g_1, g_2, ..., g_n be the columns of Λ. Then Λ^*Λ = Id_{n×n} is equivalent to

\begin{pmatrix} - \; g_1^* \; - \\ - \; g_2^* \; - \\ \vdots \\ - \; g_n^* \; - \end{pmatrix} \begin{pmatrix} | & | & \cdots & | \\ g_1 & g_2 & \cdots & g_n \\ | & | & \cdots & | \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.

Thus,

\|g_i\|^2 = 1 \;\; \forall i = 1, ..., n, \qquad \langle g_i, g_j \rangle = 0 \;\; \forall i \ne j.

Hence, g_1, g_2, ..., g_n constitute an orthonormal system in C^m. Applying Proposition 1.21 and Corollary 1.22 with A = B = 1, this is in turn equivalent to

\Big\| \sum_{k=1}^{n} c_k g_k \Big\|^2 = \sum_{k=1}^{n} |c_k|^2.

Consequently, f_1, f_2, ..., f_m is a tight frame for C^n with frame bound equal to 1.


Example 1.25

Consider

Λ = \begin{pmatrix} 0 & \frac{1}{\sqrt{5}} & \frac{\sqrt{2}}{\sqrt{5}} \\ 0 & -\frac{1}{\sqrt{5}} & \frac{\sqrt{2}}{\sqrt{5}} \\ 0 & \frac{\sqrt{3}}{\sqrt{5}} & 0 \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{10}} \\ -\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{10}} \end{pmatrix};

then

Λ^* = \begin{pmatrix} 0 & 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{5}} & -\frac{1}{\sqrt{5}} & \frac{\sqrt{3}}{\sqrt{5}} & 0 & 0 \\ \frac{\sqrt{2}}{\sqrt{5}} & \frac{\sqrt{2}}{\sqrt{5}} & 0 & \frac{1}{\sqrt{10}} & \frac{1}{\sqrt{10}} \end{pmatrix}.

We can check that Λ^*Λ = Id_{3×3}.

Thus, the columns of Λ constitute an orthonormal system in C^5, and the rows of Λ constitute a tight frame for C^3 with frame bound equal to 1.
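As with the previous example, the conclusion can be confirmed numerically (our own sketch, not from the source):

```python
import numpy as np

s = np.sqrt
# The matrix from Example 1.25.
Lam = np.array([
    [0,       1/s(5),    s(2)/s(5)],
    [0,      -1/s(5),    s(2)/s(5)],
    [0,       s(3)/s(5), 0        ],
    [ 1/s(2), 0,         1/s(10)  ],
    [-1/s(2), 0,         1/s(10)  ],
])

print(np.allclose(Lam.T @ Lam, np.eye(3)))   # True: columns orthonormal in C^5

# Frame bound 1 for the rows: sum_k |<f, f_k>|^2 = ||f||^2 for every f in C^3.
f = np.array([0.3, -1.2, 2.5])
print(np.allclose(np.sum((Lam @ f)**2), f @ f))   # True
```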

1.3 The discrete Fourier transform

When working with frames and bases in C^n, one has to be particularly careful with the meaning of the notation; so far, for example, f_k, g_k have denoted vectors in C^n and c_k scalars. In order to avoid confusion, we now change the notation slightly. Define vectors e_k in C^n by

e_k(j) = \frac{1}{\sqrt{n}} e^{2\pi i (j-1)\frac{k-1}{n}}, \quad j = 1, ..., n. (1.19)


That is,

e_k = \frac{1}{\sqrt{n}} \begin{pmatrix} 1 \\ e^{2\pi i \frac{k-1}{n}} \\ e^{4\pi i \frac{k-1}{n}} \\ \vdots \\ e^{2\pi i (n-1)\frac{k-1}{n}} \end{pmatrix}.

Theorem 1.26 [1] The vectors \{e_k\}_{k=1}^{n} defined in (1.19) constitute an orthonormal basis for C^n.

PROOF It is easy to see that \|e_k\| = 1 for all k = 1, ..., n. Moreover,

\langle e_k, e_\ell \rangle = \frac{1}{n} \sum_{j=1}^{n} e^{2\pi i (j-1)\frac{k-1}{n}} e^{-2\pi i (j-1)\frac{\ell-1}{n}} = \frac{1}{n} \sum_{j=0}^{n-1} e^{2\pi i j \frac{k-\ell}{n}}.

Using the formula

(1 - x)(1 + x + \cdots + x^{n-1}) = 1 - x^n

with x = e^{2\pi i \frac{k-\ell}{n}}: for k \ne \ell we have x \ne 1 and x^n = 1, so \langle e_k, e_\ell \rangle = 0 for all k \ne \ell.
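The orthonormality of this basis is easy to observe numerically. The sketch below is ours (0-based indices replace j − 1 and k − 1):

```python
import numpy as np

n = 8
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
# Column k is the basis vector e_k from (1.19), with 0-based j and k.
E = np.exp(2j * np.pi * jj * kk / n) / np.sqrt(n)

print(np.allclose(E.conj().T @ E, np.eye(n)))   # True: orthonormal basis of C^n
```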

The basis \{e_k\}_{k=1}^{n} is called the discrete Fourier transform basis. Using this basis, every f ∈ C^n has the representation

f = \sum_{k=1}^{n} \langle f, e_k \rangle e_k = \frac{1}{\sqrt{n}} \sum_{k=1}^{n} \sum_{\ell=1}^{n} f(\ell) e^{-2\pi i (\ell-1)\frac{k-1}{n}} e_k. (1.20)

Applications often ask for tight frames because the cumbersome inversion of the frame operator is avoided in this case, see (1.9). It is interesting that overcomplete tight frames can be obtained in C^n by projecting the discrete Fourier transform basis of any C^m, m > n, onto C^n:

Proposition 1.27 [1] Let m > n and define the vectors \{f_k\}_{k=1}^{m} in C^n by

f_k = \frac{1}{\sqrt{m}} \begin{pmatrix} 1 \\ e^{2\pi i \frac{k-1}{m}} \\ e^{4\pi i \frac{k-1}{m}} \\ \vdots \\ e^{2\pi i (n-1)\frac{k-1}{m}} \end{pmatrix}, \quad k = 1, 2, ..., m. (1.21)


Then \{f_k\}_{k=1}^{m} is a tight overcomplete frame for C^n with frame bound equal to 1, and \|f_k\| = \sqrt{n/m} for all k.

PROOF By Theorem 1.26, the vectors \{\tilde{f}_k\}_{k=1}^{m} constitute an orthonormal basis for C^m, where

\tilde{f}_k = \frac{1}{\sqrt{m}} \begin{pmatrix} 1 \\ e^{2\pi i \frac{k-1}{m}} \\ e^{4\pi i \frac{k-1}{m}} \\ \vdots \\ e^{2\pi i (m-1)\frac{k-1}{m}} \end{pmatrix}, \quad k = 1, 2, ..., m. (1.22)

The vector f_k in (1.21) consists of the first n coordinates of \tilde{f}_k. Applying Corollary 1.24 to the m × n matrix having the vectors f_k as rows, \{f_k\}_{k=1}^{m} constitute a tight frame for C^n with frame bound equal to 1. In addition, \|f_k\| = \sqrt{n/m} for all k = 1, ..., m.
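Proposition 1.27 can likewise be verified numerically. In the sketch below (ours, with 0-based indices), the m × n matrix has the vectors f_k as rows:

```python
import numpy as np

m, n = 7, 3
kk, jj = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
# Row k is f_k from (1.21): the first n coordinates of the DFT basis of C^m.
F = np.exp(2j * np.pi * jj * kk / m) / np.sqrt(m)

# Tight frame for C^n with frame bound 1 (Corollary 1.24): F^* F = Id_n.
print(np.allclose(F.conj().T @ F, np.eye(n)))                 # True
print(np.allclose(np.linalg.norm(F, axis=1), np.sqrt(n/m)))   # True: ||f_k|| = sqrt(n/m)
```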

It is important to notice that all the vectors fk in Proposition 1.27 have the same norm. If needed, we can therefore normalize them while keeping a tight frame; we only have to adjust the frame bound accordingly.

Corollary 1.28 [1] For any m > n , there exists a tight frame in Cn consisting of m normalized vectors.

Example 1.29

The discrete Fourier transform basis in C^4 consists of the vectors

\frac{1}{2}\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}, \; \frac{1}{2}\begin{pmatrix} 1 \\ i \\ -1 \\ -i \end{pmatrix}, \; \frac{1}{2}\begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}, \; \frac{1}{2}\begin{pmatrix} 1 \\ -i \\ -1 \\ i \end{pmatrix}.

Via Proposition 1.27, the vectors

\frac{1}{2}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \; \frac{1}{2}\begin{pmatrix} 1 \\ i \\ -1 \end{pmatrix}, \; \frac{1}{2}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, \; \frac{1}{2}\begin{pmatrix} 1 \\ -i \\ -1 \end{pmatrix}

constitute a tight frame for C^3. One advantage of an overcomplete frame compared to a basis is that the frame property might be kept if an element is removed. However, even for frames it can happen that the remaining set is no longer a frame, for example if the removed element is orthogonal to the rest of the frame elements. Unfortunately, this can be the case no matter how large the number of frame elements is, i.e., no matter how redundant the frame is! If we have information on the lower frame bound and the norm of the frame elements, we can provide a criterion for how many elements we can (at least) remove:

Proposition 1.30 [1] Let \{f_k\}_{k=1}^{m} be a normalized frame for C^n with lower frame bound A > 1. Then for any index set I ⊂ \{1, 2, ..., m\} with |I| < A, the family \{f_k\}_{k∉I} is a frame for C^n with lower frame bound A − |I|.

PROOF Given f ∈ C^n,

\sum_{k \in I} |\langle f, f_k \rangle|^2 \le \sum_{k \in I} \|f_k\|^2 \|f\|^2 = |I| \, \|f\|^2.

Thus,

\sum_{k \notin I} |\langle f, f_k \rangle|^2 = \sum_{k=1}^{m} |\langle f, f_k \rangle|^2 - \sum_{k \in I} |\langle f, f_k \rangle|^2 \ge (A - |I|) \|f\|^2.

Since A − |I| > 0, we can choose A − |I| as a lower frame bound for \{f_k\}_{k∉I}.
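A concrete illustration of Proposition 1.30 (our own, using a real frame for simplicity): six equiangular unit vectors in R² form a normalized tight frame with frame bound A = 3, and removing |I| = 2 of them leaves a frame with lower bound at least A − 2 = 1.

```python
import numpy as np

m = 6
theta = np.pi * np.arange(m) / m
F = np.column_stack([np.cos(theta), np.sin(theta)])   # rows: unit vectors f_k

def frame_bounds(F):
    # Optimal frame bounds = extreme eigenvalues of the frame operator F^T F.
    eigs = np.linalg.eigvalsh(F.T @ F)
    return eigs[0], eigs[-1]

A, B = frame_bounds(F)
print(A, B)                     # both approximately 3: normalized tight frame, A > 1
A2, _ = frame_bounds(F[2:])     # remove the first |I| = 2 vectors
print(A2 >= A - 2 - 1e-12)      # True: lower bound at least A - |I| = 1
```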

Considering an arbitrary frame \{f_k\}_{k=1}^{m} for C^n, the maximal number of elements one can hope to remove while keeping the frame property is m − n. If we want to be able to remove m − n arbitrary elements, it is not enough to assume that \{f_k\}_{k=1}^{m} is a normalized tight frame. Concerning the stability against removal of vectors, the frames obtained in Proposition 1.27 behave well: m − n arbitrary elements can be removed:

Proposition 1.31 [1] Consider the frame \{f_k\}_{k=1}^{m} for C^n defined in (1.21). Any subset containing at least n elements of this frame forms a frame for C^n.

PROOF Consider an arbitrary subset \{k_1, k_2, ..., k_n\} ⊆ \{1, 2, ..., m\}. Placing the vectors \{f_{k_i}\}_{i=1}^{n} as rows in an n × n matrix and letting z := e^{2\pi i / m}, we get

\begin{pmatrix} - \; f_{k_1} \; - \\ - \; f_{k_2} \; - \\ \vdots \\ - \; f_{k_n} \; - \end{pmatrix} = \frac{1}{\sqrt{m}} \begin{pmatrix} 1 & z^{k_1 - 1} & \cdots & z^{(k_1 - 1)(n-1)} \\ 1 & z^{k_2 - 1} & \cdots & z^{(k_2 - 1)(n-1)} \\ \vdots & & & \vdots \\ 1 & z^{k_n - 1} & \cdots & z^{(k_n - 1)(n-1)} \end{pmatrix}.


This is a Vandermonde matrix with determinant

\det = \frac{1}{m^{n/2}} \prod_{i > j} (z^{k_i - 1} - z^{k_j - 1}) \ne 0, (1.23)

since the numbers z^{k_i - 1} are distinct. Hence, \{f_{k_i}\}_{i=1}^{n} is a basis for C^n, and any set of frame elements containing it is a frame for C^n.

From Proposition 1.31 we also get another way to construct a frame for C^n: taking any n or more elements of the frame \{f_k\}_{k=1}^{m} defined in (1.21), we obtain a frame for C^n.
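For small m and n, Proposition 1.31 can even be checked exhaustively; a sketch we add (not part of the source):

```python
import numpy as np
from itertools import combinations

m, n = 6, 3
kk, jj = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * jj * kk / m) / np.sqrt(m)   # the frame (1.21), rows f_k

# Every choice of n distinct rows gives a non-zero Vandermonde determinant,
# hence a basis for C^n; any larger subset is then automatically a frame.
dets = [abs(np.linalg.det(F[list(S)])) for S in combinations(range(m), n)]
print(min(dets) > 1e-12)   # True
```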

1.4 Pseudo-inverses and the singular value decomposition

Given an m × n matrix E, consider it as a linear mapping

E : Cn −→ Cm.

E is not necessarily injective, but by restricting E to the orthogonal complement of the kernel Ker E, we obtain an injective linear mapping

E˜ : (KerE)⊥ −→ Cm.

Indeed, if \tilde{E}x = 0 then x = 0 because Ker E ∩ (Ker E)^⊥ = \{0\}. So, \tilde{E} is injective. Since Im E = Im \tilde{E}, the mapping

\tilde{E} : (Ker E)^⊥ → Im E is invertible, with inverse

(E˜)−1 : ImE −→ (KerE)⊥.

Extend \tilde{E}^{-1} to E^† : C^m → C^n by defining

E^†(y + z) = \tilde{E}^{-1} y \quad \text{for } y ∈ \text{Im} E, \; z ∈ (\text{Im} E)^⊥. (1.24)

So,

EE†x = x for all x ∈ ImE.


The operator E^† is called the pseudo-inverse of E. From the construction and Lemma 1.10, we have

\text{Ker} E^† = (\text{Im} E)^⊥ = \text{Ker} E^*, (1.25)
\text{Im} E^† = (\text{Ker} E)^⊥ = \text{Im} E^*. (1.26)

Indeed, if x ∈ (Im E)^⊥ then E^†x = 0 by (1.24), so x ∈ Ker E^†; thus (Im E)^⊥ ⊆ Ker E^†. Conversely, if x ∈ Ker E^†, write x = y + z with y ∈ Im E, z ∈ (Im E)^⊥; then 0 = E^†x = \tilde{E}^{-1}y. Since \tilde{E}^{-1} is bijective, y = 0, and therefore x = z ∈ (Im E)^⊥; thus Ker E^† ⊆ (Im E)^⊥. This proves (1.25), and (1.26) follows similarly.
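In NumPy the pseudo-inverse is available as `numpy.linalg.pinv`; the following sketch of ours checks (1.24)–(1.26) numerically on a rank-deficient matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 5 x 3 matrix of rank 2: its kernel and cokernel are non-trivial.
E = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))
Ep = np.linalg.pinv(E)

# E E^dagger x = x for x in Im E, as in (1.24):
x = E @ rng.standard_normal(3)
print(np.allclose(E @ Ep @ x, x))            # True

# E^dagger E E^dagger = E^dagger (cf. Proposition 1.32 (ii)):
print(np.allclose(Ep @ (E @ Ep), Ep))        # True

# Im E^dagger = Im E^* (compare (1.26)): stacking the columns of both
# does not increase the rank beyond 2.
print(np.linalg.matrix_rank(np.column_stack([Ep, E.conj().T])))  # 2
```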

We note two characterizations of the pseudo-inverse:

Proposition 1.32 [1] Let E be an m × n matrix. Then

(i) E† is the unique n × m matrix for which EE† is the orthogonal projection onto ImE and E†E is the orthogonal projection onto ImE†.

(ii) E† is the unique n × m matrix for which EE† and E†E are self-adjoint and

EE†E = E , E†EE† = E†.

PROOF Assume that EE^† is the orthogonal projection onto Im E. Then EE^†E = E and EE^†x = 0 for all x ∈ (Im E)^⊥. Therefore, (Im EE^†)^⊥ = Ker EE^†, i.e., EE^† is self-adjoint. Similarly, E^†E is self-adjoint and E^†EE^† = E^†. Conversely, suppose EE^† is self-adjoint and EE^†E = E. Then we have

(EE†)2 = EE†EE† = EE†.

So, EE† is a projection onto ImEE†. Since EE† is self-adjoint then

(ImEE†)⊥ = KerEE†, i.e., EE†(x) = 0 , ∀x ∈ (ImEE†)⊥.


Therefore, EE^† is the orthogonal projection onto Im EE^†. Moreover, since EE^†E = E, we have Im E = Im EE^†E ⊆ Im EE^† ⊆ Im E, hence Im EE^† = Im E. That means EE^† is the orthogonal projection onto Im E. Similarly, E^†E is the orthogonal projection onto Im E^†.

Now we have to prove that the E^† characterized in Proposition 1.32 is exactly the E^† of the above construction.

PROOF Let E^† be as in the construction. If y ∈ Im E then EE^†y = y; if y ∈ (Im E)^⊥ = Ker E^† then EE^†y = 0. Therefore, EE^† is the orthogonal projection onto Im E. In addition, if y ∈ (Im E^†)^⊥ = Ker E then E^†Ey = 0. If y ∈ Im E^† then y = E^†x for some x. So,

E^†Ey = E^†EE^†x = E^†x - E^†(\text{Id} - EE^†)x = E^†x = y,

since Id − EE^† is the orthogonal projection onto (Im E)^⊥ = Ker E^†. Thus, E^†Ey = y for all y ∈ Im E^†. Hence, E^†E is the orthogonal projection onto Im E^†. Conversely, suppose E^† is defined as in Proposition 1.32. From (ii),

E∗ = (EE†E)∗ = (E†E)∗E∗ = E†EE∗ because E†E is self-adjoint. Therefore,

(KerE)⊥ = ImE∗ ⊆ ImE†.

Now, take y ∈ ImE then ∃x ∈ (KerE)⊥ such that y = Ex. Then,

E†y = E†Ex = x = (E˜)−1Ex = (E˜)−1y.

If z ∈ (ImE)⊥ then EE†z = 0 by (i) , and E†z = E†EE†z = 0 by (ii).

The pseudo-inverse gives the solution to an important minimization problem:


Theorem 1.33 [1] Let E be an m × n matrix and let y ∈ Im E. Then the equation Ex = y has a unique solution of minimal norm, namely x = E^†y.

PROOF We have EE^†y = y for y ∈ Im E, so E^†y is a solution of Ex = y. The general solution is x = E^†y + z, where z ∈ Ker E; note that E^†y ∈ Im E^† = (Ker E)^⊥. Hence,

\|x\|^2 = \|E^†y\|^2 + \|z\|^2 \ge \|E^†y\|^2,

with equality exactly when z = 0.
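Numerically, the minimal-norm solution of Theorem 1.33 is exactly what `numpy.linalg.pinv` (or `numpy.linalg.lstsq`) returns; a small sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.standard_normal((2, 4))   # full row rank: Ex = y has many solutions
y = rng.standard_normal(2)

x_min = np.linalg.pinv(E) @ y
print(np.allclose(E @ x_min, y))                 # True: a solution

# Perturbing by any non-zero z in Ker E gives a strictly longer solution:
z = rng.standard_normal(4)
z -= np.linalg.pinv(E) @ (E @ z)                 # project z onto Ker E
print(np.allclose(E @ (x_min + z), y))           # True: still a solution
print(np.linalg.norm(x_min + z) > np.linalg.norm(x_min))   # True
```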

For computational purposes it is important to notice that the pseudo-inverse can be found using the singular value decomposition of E. We begin with a lemma.

Lemma 1.34 [1] Let E be an m × n matrix with rank r ≥ 1. Then there exist constants σ_1, σ_2, ..., σ_r > 0 and orthonormal bases \{u_k\}_{k=1}^{r} for Im E and \{v_k\}_{k=1}^{r} for Im E^* such that

E v_k = σ_k u_k \quad \text{for } k = 1, ..., r. (1.27)

PROOF The operator

E^*E : C^n → C^n

is self-adjoint, so there exists an orthonormal basis \{v_k\}_{k=1}^{n} for C^n consisting of eigenvectors of E^*E. Let \{λ_k\}_{k=1}^{n} denote the corresponding eigenvalues. Then

λ_k = λ_k \|v_k\|^2 = \langle E^*E v_k, v_k \rangle = \|E v_k\|^2 \ge 0 \quad \text{for all } k = 1, ..., n.

Thus,

r = dimImE = dimImE∗. (1.28)

Since

(ImE)⊥ = KerE∗

then

\text{Im} E^* = \text{Im} E^*E = \text{span}\{E^*E v_k\}_{k=1}^{n} = \text{span}\{λ_k v_k\}_{k=1}^{n}.

Therefore, the rank of E equals the number of non-zero eigenvalues. We can order \{v_k\}_{k=1}^{n} such that \{v_k\}_{k=1}^{r} corresponds to the non-zero eigenvalues. Since

\text{Im} E^* = \text{span}\{λ_k v_k\}_{k=1}^{r},

\{v_k\}_{k=1}^{r} is an orthonormal basis for Im E^*. For k > r,

\|E v_k\|^2 = \langle E^*E v_k, v_k \rangle = 0,

so E v_k = 0 for all k > r. Define

u_k := \frac{1}{\sqrt{λ_k}} E v_k, \quad k = 1, 2, ..., r. (1.29)

Then \{u_k\}_{k=1}^{r} is an orthonormal basis for Im E, and (1.27) holds with

σ_k = \sqrt{λ_k}, \quad k = 1, 2, ..., r.

Lemma 1.34 leads to the singular value decomposition of E:

Theorem 1.35 [1] Every m × n matrix E with rank r ≥ 1 has a decomposition

E = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^*, (1.30)

where U is a unitary m × m matrix, V is a unitary n × n matrix, and \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} is an m × n block matrix in which D is an r × r diagonal matrix with positive entries σ_1, σ_2, ..., σ_r on the diagonal.


PROOF Let \{v_k\}_{k=1}^{n} be the orthonormal basis for C^n from Lemma 1.34, ordered such that \{v_k\}_{k=1}^{r} is an orthonormal basis for Im E^*. Let V be the n × n matrix having the vectors v_k as columns:

V = \begin{pmatrix} | & | & \cdots & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & \cdots & | \end{pmatrix}.

Extend the orthonormal basis \{u_k\}_{k=1}^{r} for Im E to an orthonormal basis \{u_k\}_{k=1}^{m} for C^m and let

U = \begin{pmatrix} | & | & \cdots & | \\ u_1 & u_2 & \cdots & u_m \\ | & | & \cdots & | \end{pmatrix}.

Finally, let D be the r × r diagonal matrix with positive entries σ_1, σ_2, ..., σ_r on the diagonal. Since

E v_k = σ_k u_k \text{ for } k = 1, ..., r, \qquad E v_k = 0 \text{ for } k > r,

then

EV = (σ_1 u_1 \;\; σ_2 u_2 \;\; \cdots \;\; σ_r u_r \;\; 0 \;\; \cdots \;\; 0) = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}. (1.31)

Multiplying (1.31) with V^* from the right gives

E = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^*.

The numbers σ_1, ..., σ_r are called the singular values of E; the proof of Lemma 1.34 shows that they are the square roots of the positive eigenvalues of E^*E.

Corollary 1.36 [1] With the notation in Theorem 1.35, the pseudo-inverse of E is given by

E^† = V \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^*, (1.32)


where \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} is an n × m block matrix in which D^{-1} is an r × r diagonal matrix with positive entries σ_1^{-1}, σ_2^{-1}, ..., σ_r^{-1} on the diagonal.

PROOF We check that the matrix E^† defined by (1.32) satisfies the requirements in Proposition 1.32. First, via (1.30),

EE^† = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^*V \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^* = U \begin{pmatrix} \text{Id} & 0 \\ 0 & 0 \end{pmatrix} U^*,

which shows that EE^† is self-adjoint. Furthermore,

EE^†E = U \begin{pmatrix} \text{Id} & 0 \\ 0 & 0 \end{pmatrix} U^*U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^* = E.

Similarly, we can verify that E†E is self-adjoint and E†EE† = E†.
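Corollary 1.36 can be verified against NumPy's built-in SVD and pseudo-inverse; a sketch of ours (note that `np.linalg.svd` returns V^* directly):

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.standard_normal((5, 3))                 # full rank: r = 3
U, s, Vh = np.linalg.svd(E)                     # E = U [D; 0] V^*

# Assemble E^dagger = V [D^{-1} 0; 0 0] U^* as in (1.32):
Dinv = np.zeros((3, 5))
Dinv[:, :3] = np.diag(1 / s)
Ep = Vh.conj().T @ Dinv @ U.conj().T

print(np.allclose(Ep, np.linalg.pinv(E)))       # True
```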

m n Let us return to the setting where { fk}k=1 is a frame for C with pre-frame operator T : Cm −→ Cn. The calculation of the frame coefficients amounts to finding T†:

Theorem 1.37 [1] Let \{f_k\}_{k=1}^{m} be a frame for C^n, with pre-frame operator T and frame operator S. Then

T^† f = \{\langle f, S^{-1} f_k \rangle\}_{k=1}^{m}, \quad \forall f \in C^n. (1.33)

PROOF Let f ∈ C^n. Expressed in terms of the pre-frame operator T, the equation f = \sum_{k=1}^{m} c_k f_k means that T\{c_k\}_{k=1}^{m} = f. By Theorem 1.33, the equation

T\{c_k\}_{k=1}^{m} = f


has a unique solution of minimal norm, \{c_k\}_{k=1}^{m} = T^† f. Applying Theorem 1.11, we have

c_k = \langle f, S^{-1} f_k \rangle, \quad \forall k = 1, ..., m.

Therefore,

T^† f = \{\langle f, S^{-1} f_k \rangle\}_{k=1}^{m}, \quad \forall f \in C^n.

One interpretation of Theorem 1.37 is that when \{f_k\}_{k=1}^{m} is a frame for C^n, the matrix T^† is obtained by placing the complex conjugates of the vectors in the canonical dual frame \{S^{-1} f_k\}_{k=1}^{m} as rows in an m × n matrix:

T^† = \begin{pmatrix} - \; \overline{S^{-1} f_1} \; - \\ - \; \overline{S^{-1} f_2} \; - \\ \vdots \\ - \; \overline{S^{-1} f_m} \; - \end{pmatrix}.
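This interpretation is easy to confirm numerically. In the sketch below (ours), the rows of `np.linalg.pinv(T)` coincide with the conjugated canonical dual frame vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 5
fk = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))  # frame vectors as rows

T = fk.T                              # pre-frame operator (n x m): T c = sum_k c_k f_k
S = T @ T.conj().T                    # frame operator S = T T^*
dual = (np.linalg.inv(S) @ fk.T).T    # canonical dual frame: row k is S^{-1} f_k

print(np.allclose(np.linalg.pinv(T), dual.conj()))   # True
```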

The singular value decomposition gives a natural way to obtain coefficients \{c_k\}_{k=1}^{m} such that f = \sum_{k=1}^{m} c_k f_k. Let \{f_k\}_{k=1}^{m} be an overcomplete frame for C^n. Since T is surjective, its rank equals n, and the singular value decomposition of T is

T = U (D \;\; 0) V^*.

Note that since T is an n × m matrix, (D \;\; 0) is now an n × m matrix; U is an n × n matrix, and V is an m × m matrix. Given any (m − n) × n matrix F, we have

T V \begin{pmatrix} D^{-1} \\ F \end{pmatrix} U^* f = U (D \;\; 0) V^*V \begin{pmatrix} D^{-1} \\ F \end{pmatrix} U^* f = U \, \text{Id} \, U^* f = f.

This means that we can use the coefficients

\{c_k\}_{k=1}^{m} = V \begin{pmatrix} D^{-1} \\ F \end{pmatrix} U^* f (1.34)

for the reconstruction of f. As noted already in Theorem 1.11, the choice F = 0 is optimal in the sense that the ℓ²-norm of the coefficients is minimized, but for other purposes other choices might be preferable. The matrix

V \begin{pmatrix} D^{-1} \\ F \end{pmatrix} U^* (1.35)

is frequently called a generalized inverse of T.

1.5 Finite-dimensional function spaces

Given a, b ∈ R with a < b, let C[a, b] denote the set of continuous functions f : [a, b] → C. We equip C[a, b] with the supremum norm

\|f\|_\infty = \sup_{x \in [a, b]} |f(x)|.

The Weierstrass Approximation Theorem says that every f ∈ C[a, b] can be approximated arbitrarily well by a polynomial:

Theorem 1.38 [1] Let f ∈ C[a, b]. Given ε > 0, there exists a polynomial

P(x) = \sum_{k=0}^{n} c_k x^k

such that

\|f - P\|_\infty \le ε. (1.36)

PROOF Via the substitution t = a + x(b − a), the variable t runs through [a, b] as x runs through [0, 1], so we may assume [a, b] = [0, 1]. We will show that the Bernstein polynomials

B_k(x) = B_k(f)(x) = \sum_{p=0}^{k} C_k^p f\Big(\frac{p}{k}\Big) x^p (1 - x)^{k-p} (1.37)

converge uniformly to f on [0, 1]. Indeed, recall the binomial formula

(x + y)^k = \sum_{p=0}^{k} C_k^p x^p y^{k-p}. (1.38)


Differentiating (1.38) with respect to x and multiplying both sides by x gives

kx(x + y)^{k-1} = \sum_{p=0}^{k} p \, C_k^p x^p y^{k-p}. (1.39)

Differentiating (1.38) twice and multiplying both sides by x^2 gives

k(k-1)x^2(x + y)^{k-2} = \sum_{p=0}^{k} p(p-1) C_k^p x^p y^{k-p}. (1.40)

Setting y := 1 − x and r_p(x) = C_k^p x^p (1 - x)^{k-p}, we obtain

\sum_{p=0}^{k} r_p(x) = 1, \quad \sum_{p=0}^{k} p \, r_p(x) = kx, \quad \sum_{p=0}^{k} p(p-1) r_p(x) = k(k-1)x^2.

Therefore,

\sum_{p=0}^{k} (p - kx)^2 r_p(x) = \sum_{p=0}^{k} p^2 r_p(x) - 2kx \sum_{p=0}^{k} p \, r_p(x) + k^2x^2 \sum_{p=0}^{k} r_p(x) = [kx + k(k-1)x^2] - 2k^2x^2 + k^2x^2 = kx(1 - x).

Now, let

M = \max_{x \in [0, 1]} |f(x)|. (1.41)

Given ε > 0, since f is continuous on the compact interval [0, 1], it is uniformly continuous:

\exists δ > 0 \text{ such that } |x - y| < δ \text{ yields } |f(x) - f(y)| < \frac{ε}{2}, \quad \forall x, y \in [0, 1]. (1.42)

Consider

Consider

f(x) - B_k(x) = f(x) - \sum_{p=0}^{k} C_k^p f\Big(\frac{p}{k}\Big) x^p (1 - x)^{k-p} = \sum_{p=0}^{k} f(x) r_p(x) - \sum_{p=0}^{k} f\Big(\frac{p}{k}\Big) r_p(x) = \sum_{p=0}^{k} \Big( f(x) - f\Big(\frac{p}{k}\Big) \Big) r_p(x).


Thus,

f(x) - B_k(x) = \sum_{\{p : |\frac{p}{k} - x| < δ\}} \Big( f(x) - f\Big(\frac{p}{k}\Big) \Big) r_p(x) + \sum_{\{p : |\frac{p}{k} - x| \ge δ\}} \Big( f(x) - f\Big(\frac{p}{k}\Big) \Big) r_p(x). (1.43)

Since r_p(x) \ge 0 and, by (1.42), |f(x) - f(\frac{p}{k})| < \frac{ε}{2} when |\frac{p}{k} - x| < δ,

\Big| \sum_{\{p : |\frac{p}{k} - x| < δ\}} \Big( f(x) - f\Big(\frac{p}{k}\Big) \Big) r_p(x) \Big| \le \frac{ε}{2} \sum_{p=0}^{k} r_p(x) = \frac{ε}{2}.

Moreover,

\Big| \sum_{\{p : |\frac{p}{k} - x| \ge δ\}} \Big( f(x) - f\Big(\frac{p}{k}\Big) \Big) r_p(x) \Big| \le 2M \sum_{\{p : |\frac{p}{k} - x| \ge δ\}} r_p(x) \le 2M \sum_{p=0}^{k} \Big( \frac{p - kx}{kδ} \Big)^2 r_p(x) = \frac{2M}{k^2δ^2} kx(1 - x) \le \frac{M}{2kδ^2}.

Hence, for k \ge \frac{M}{εδ^2}, both terms in (1.43) are at most \frac{ε}{2}, so |f(x) - B_k(x)| \le \frac{ε}{2} + \frac{ε}{2} = ε. In conclusion,

\|f - B_k\|_\infty = \sup_{x \in [0, 1]} |f(x) - B_k(x)| \le ε.
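The uniform convergence of the Bernstein polynomials can be observed numerically; a sketch of ours (the test function is an arbitrary choice):

```python
import numpy as np
from math import comb

def bernstein(f, k, x):
    """Evaluate the Bernstein polynomial B_k(f) from (1.37) at points x in [0, 1]."""
    p = np.arange(k + 1)
    binom = np.array([comb(k, q) for q in p], dtype=float)
    r = binom * x[:, None]**p * (1 - x[:, None])**(k - p)   # r_p(x)
    return r @ f(p / k)

f = lambda t: np.abs(t - 0.4)          # continuous but not differentiable at 0.4
x = np.linspace(0.0, 1.0, 201)
errors = [np.max(np.abs(bernstein(f, k, x) - f(x))) for k in (10, 100, 400)]
print(errors)                           # the sup-norm errors decrease with k
```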

It is essential for the conclusion that [a, b] is a closed and bounded interval; Theorem 1.38 fails if [a, b] is replaced by an open or unbounded interval.


The polynomials \{1, x, x^2, ...\} = \{x^k\}_{k=0}^{\infty} are linearly independent and do not span a finite-dimensional subspace of C[a, b]. Indeed, if

a_0 x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n = 0, \quad \forall x \in [a, b],

then a_k must be zero for all k = 0, ..., n. However, for a given n ∈ N, the vector space

V := \text{span}\{1, x, x^2, ..., x^n\}

is a finite-dimensional subspace of C[a, b] with the polynomials \{1, x, x^2, ..., x^n\} as a basis. From Theorem 1.38, we know that an element of an infinite-dimensional vector space can be approximated by elements of finite-dimensional subspaces. In classical Fourier analysis one expands functions in L^2(0, 1) in terms of the complex exponential functions \{e^{2\pi i kx}\}_{k \in Z}. Let us for the moment consider a finite collection of exponential functions \{e^{iλ_k x}\}_{k=1}^{n}, where \{λ_k\}_{k=1}^{n} is a sequence of real numbers. Unless \{λ_k\}_{k=1}^{n} contains repetitions, such a family of exponentials is always linearly independent:

Lemma 1.39 [1] Let \{λ_k\}_{k=1}^{n} be a sequence of real numbers, and assume that λ_k ≠ λ_j for k ≠ j. Let I ⊆ R be an arbitrary non-empty interval, and consider the complex exponentials \{e^{iλ_k x}\}_{k=1}^{n} as functions on I. Then the functions \{e^{iλ_k x}\}_{k=1}^{n} are linearly independent.

PROOF It is enough to prove that the functions \{e^{iλ_k x}\}_{k=1}^{n} are linearly independent as functions on any bounded interval ]a, b[, where a, b ∈ R, a < b. Assume that

\sum_{k=1}^{n} c_k e^{iλ_k x} = 0 \quad \text{for all } x \in \,]a, b[.

When x runs through the interval ]\frac{a-b}{2}, \frac{b-a}{2}[, the variable x + \frac{a+b}{2} runs through ]a, b[. Therefore,

\sum_{k=1}^{n} c_k e^{iλ_k (x + \frac{a+b}{2})} = 0 \quad \text{for all } x \in \Big]\frac{a-b}{2}, \frac{b-a}{2}\Big[.

Taking the derivative j times, j = 0, 1, ..., n − 1, we get

\sum_{k=1}^{n} d_k (iλ_k)^j e^{iλ_k x} = 0 \quad \text{for all } x \in \Big]\frac{a-b}{2}, \frac{b-a}{2}\Big[,

where d_k := c_k e^{iλ_k \frac{a+b}{2}}. Letting x = 0, we obtain the system

\begin{pmatrix} 1 & 1 & \cdots & 1 \\ λ_1 & λ_2 & \cdots & λ_n \\ \vdots & & & \vdots \\ λ_1^{n-1} & λ_2^{n-1} & \cdots & λ_n^{n-1} \end{pmatrix} \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.

The coefficient matrix

A = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ λ_1 & λ_2 & \cdots & λ_n \\ \vdots & & & \vdots \\ λ_1^{n-1} & λ_2^{n-1} & \cdots & λ_n^{n-1} \end{pmatrix}

is a Vandermonde matrix with determinant

\det A = \prod_{i > j} (λ_i - λ_j) \ne 0.

So, d_1 = d_2 = \cdots = d_n = 0, and thus c_1 = c_2 = \cdots = c_n = 0. Hence, \{e^{iλ_k x}\}_{k=1}^{n} are linearly independent.

In words, Lemma 1.39 means that complex exponentials do not give natural examples of frames in finite-dimensional spaces: if λ_k ≠ λ_j for k ≠ j, then the complex exponentials \{e^{iλ_k x}\}_{k=1}^{n} form a basis for their span in L^2(I) for any interval I of finite length, and not an overcomplete system. We cannot obtain overcompleteness by adding extra exponentials (except by repeating some of the λ-values); this will just enlarge the space.

As an especially important case we now consider λ_k = 2πk, k ∈ Z. A function f which is a finite linear combination of the type

f(x) = \sum_{k=N_1}^{N_2} c_k e^{2\pi i kx} \quad \text{for some } c_k \in C; \; N_1, N_2 \in Z; \; N_2 \ge N_1 (1.44)

is called a trigonometric polynomial. Trigonometric polynomials correspond to partial sums of Fourier series. A trigonometric polynomial f can also be written as a linear combination of the functions \sin(2\pi kx), \cos(2\pi kx), in general with complex coefficients. It will be useful later to note that if f is real-valued and the coefficients c_k in (1.44) are real, then f is a linear combination of the functions \cos(2\pi kx) alone:


Lemma 1.40 [1] Assume that the trigonometric polynomial f in (1.44) is real-valued and that c_k ∈ R. Then

N2 f (x) = ∑ ck cos(2πkx). (1.45) k=N1

PROOF We have

f(x) = \sum_{k=N_1}^{N_2} c_k e^{2\pi i kx} = \sum_{k=N_1}^{N_2} c_k (\cos(2\pi kx) + i \sin(2\pi kx)).

Since f is real-valued and the c_k are real,

f(x) = \text{Re} f(x) = \sum_{k=N_1}^{N_2} c_k \cos(2\pi kx).

Note that we need the assumption that c_k ∈ R: for example, the function

f(x) = \frac{1}{2i} e^{2\pi i x} - \frac{1}{2i} e^{-2\pi i x} = \sin(2\pi x)

is real-valued, but does not have the form (1.45). Moreover, we mention that a positive-valued trigonometric polynomial has a square root, which is again a trigonometric polynomial:

Lemma 1.41 [1] Let f be a positive-valued trigonometric polynomial of the form

f(x) = \sum_{k=N_1}^{N_2} c_k \cos(2\pi kx), \quad c_k \in R.

Then there exists a trigonometric polynomial

g(x) = \sum_{k=0}^{N} d_k e^{2\pi i kx} \quad \text{with } d_k \in R, (1.46)

such that

|g(x)|^2 = f(x), \quad \forall x \in R. (1.47)


PROOF We can write f(x) = \sum_{m=0}^{M} a_m \cos(mx) with a_m ∈ R. Moreover, we have

\cos(nx) = 2\cos((n-1)x)\cos x - \cos((n-2)x), \quad \forall n \ge 2, \; n \in N, (1.48)

so \cos(mx) can be rewritten as a polynomial of degree m in \cos x. Therefore, f(x) = p_f(\cos x), where p_f is a polynomial of degree M with real coefficients. This polynomial can be factored,

p_f(c) = α \prod_{j=1}^{M} (c - c_j),

where the zeros c_j of p_f appear either in complex conjugate pairs c_j, \bar{c}_j, or are real. We can also write

f(x) = e^{iMx} P_f(e^{-ix}), (1.49)

where P_f is a polynomial of degree 2M. For |z| = 1, we have

P_f(z) = z^M α \prod_{j=1}^{M} \Big( \frac{z + z^{-1}}{2} - c_j \Big) = α \prod_{j=1}^{M} \Big( \frac{1}{2} - c_j z + \frac{1}{2} z^2 \Big). (1.50)

M −ix ix −ix −iMx e + e Pf (e ) = e α ∏( − cj). j=1 2 Therefore,

M −ix ix iMx −ix e + e e Pf (e ) = α ∏( − cj) j=1 2 M = α ∏(cos x − cj) j=1

= p f (cos x) = f (x).

If c_j is real, then the zeros of \frac{1}{2} - c_j z + \frac{1}{2} z^2 are c_j \pm \sqrt{c_j^2 - 1}. For |c_j| \ge 1, these are two real zeros (degenerate if c_j = \pm 1) of the form r_j, r_j^{-1}. For |c_j| < 1, the two zeros are complex conjugate and of absolute value 1, i.e., they are of the form e^{iα_j}, e^{-iα_j}. Then α_j, −α_j are zeros of f, that is,

f(x) = ((x - α_j)(x + α_j))^{n_j} Q(x) (1.51)

near these points.


In order not to cause any contradiction with f(x) \ge 0, these zeros must have even multiplicity. If c_j is not real, then we consider it together with c_k = \bar{c}_j. The polynomial \big(\frac{1}{2} - c_j z + \frac{1}{2} z^2\big)\big(\frac{1}{2} - \bar{c}_j z + \frac{1}{2} z^2\big) has four zeros, c_j \pm \sqrt{c_j^2 - 1} and \bar{c}_j \pm \sqrt{\bar{c}_j^2 - 1}. The four zeros are all different, and form a quadruplet z_j, \bar{z}_j, z_j^{-1}, \bar{z}_j^{-1}. Therefore, we have

P_f(z) = \frac{1}{2} a_M \Big[ \prod_{j=1}^{J} (z - z_j)(z - \bar{z}_j)(z - z_j^{-1})(z - \bar{z}_j^{-1}) \Big] \times \Big[ \prod_{k=1}^{K} (z - e^{iα_k})^2 (z - e^{-iα_k})^2 \Big] \times \Big[ \prod_{\ell=1}^{L} (z - r_\ell)(z - r_\ell^{-1}) \Big],

where we have regrouped the three different kinds of zeros. For z = e^{-ix} on the unit circle, we have

|(e^{-ix} - z_0)(e^{-ix} - \bar{z}_0^{-1})| = |z_0|^{-1} |(e^{-ix} - z_0)(e^{ix} - \bar{z}_0)| = |z_0|^{-1} |e^{-ix} - z_0|^2.

Consequently,

f(x) = |f(x)| = |P_f(e^{-ix})| = \Big[ \frac{1}{2} |a_M| \prod_{j=1}^{J} |z_j|^{-2} \prod_{\ell=1}^{L} |r_\ell|^{-1} \Big] \times \Big| \prod_{j=1}^{J} (e^{-ix} - z_j)(e^{-ix} - \bar{z}_j) \Big|^2 \times \Big| \prod_{k=1}^{K} (e^{-ix} - e^{iα_k})(e^{-ix} - e^{-iα_k}) \Big|^2 \times \Big| \prod_{\ell=1}^{L} (e^{-ix} - r_\ell) \Big|^2 = |g(x)|^2,

where

g(x) = \Big[ \frac{1}{2} |a_M| \prod_{j=1}^{J} |z_j|^{-2} \prod_{\ell=1}^{L} |r_\ell|^{-1} \Big]^{1/2} \prod_{j=1}^{J} (e^{-ix} - z_j)(e^{-ix} - \bar{z}_j) \times \prod_{k=1}^{K} (e^{-ix} - e^{iα_k})(e^{-ix} - e^{-iα_k}) \times \prod_{\ell=1}^{L} (e^{-ix} - r_\ell)

= \Big[ \frac{1}{2} |a_M| \prod_{j=1}^{J} |z_j|^{-2} \prod_{\ell=1}^{L} |r_\ell|^{-1} \Big]^{1/2} \prod_{j=1}^{J} (e^{-2ix} - 2e^{-ix} \,\text{Re}\, z_j + |z_j|^2) \times \prod_{k=1}^{K} (e^{-2ix} - 2e^{-ix} \cos α_k + 1) \times \prod_{\ell=1}^{L} (e^{-ix} - r_\ell)

is clearly a trigonometric polynomial of order M with real coefficients.

Example 1.42

Consider f(x) = 1 + \cos 2x. We look for g(x) = d_0 + d_1 e^{ix} + d_2 e^{2ix} satisfying |g(x)|^2 = f(x). We have

|g(x)|^2 = |d_0 + d_1 \cos x + d_2 \cos 2x + i(d_1 \sin x + d_2 \sin 2x)|^2 = (d_0 + d_1 \cos x + d_2 \cos 2x)^2 + (d_1 \sin x + d_2 \sin 2x)^2 = d_0^2 + d_1^2 + d_2^2 + 2(d_0 d_1 + d_1 d_2) \cos x + 2 d_0 d_2 \cos 2x = 1 + \cos 2x.

Therefore,

d_0^2 + d_1^2 + d_2^2 = 1, \quad 2(d_0 d_1 + d_1 d_2) = 0, \quad 2 d_0 d_2 = 1.

Thus,

d_1 = 0, \quad d_0 = d_2 = \pm \frac{1}{\sqrt{2}}.

Hence,

g(x) = \pm \frac{1}{\sqrt{2}} (1 + e^{2ix}).

Remark 1.43

1. This proof is constructive. It uses the factorization of a polynomial of degree M, which, however, has to be done numerically and may lead to problems if M is large and some zeros are close together. Note that in this proof we need to factor a polynomial of degree only M, unlike some procedures which factor P_f, a polynomial of degree 2M, directly.


2. This procedure of "extracting the square root" is also called spectral factorization in the engineering literature.

3. The polynomial g(x) is not unique! For M odd, for instance, P_f may have (M − 1)/2 quadruplets of complex zeros and 1 pair of real zeros. In each quadruplet we can choose either z_j, \bar{z}_j to make up g(x), or z_j^{-1}, \bar{z}_j^{-1}; for each pair we can choose either r_\ell or r_\ell^{-1}. This already makes for 2^{(M+1)/2} different choices for g(x). Moreover, we can always multiply g(x) by e^{inx}, n arbitrary in Z.

Example 1.44

Consider f(x) = 1 + \cos x. We calculate directly g(x) = d_0 + d_1 e^{ix} for which |g(x)|^2 = f(x), and obtain g(x) = \pm \frac{1}{\sqrt{2}} (1 + e^{ix}). Moreover, the choice g'(x) = \pm \frac{1}{\sqrt{2}} (e^{ix} + e^{2ix}) also satisfies |g'(x)|^2 = f(x). Note that by definition, the function g(x) in (1.46) is complex-valued, unless f is constant. Actually, despite the fact that f is assumed to be positive, there might not exist a positive trigonometric polynomial g satisfying (1.47).
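Both examples, and the non-uniqueness noted in Remark 1.43, can be confirmed numerically (a sketch we add):

```python
import numpy as np

x = np.linspace(0, 2*np.pi, 400)

# Example 1.42: f(x) = 1 + cos(2x) with g(x) = (1 + e^{2ix})/sqrt(2).
f1 = 1 + np.cos(2*x)
g1 = (1 + np.exp(2j*x)) / np.sqrt(2)
print(np.allclose(np.abs(g1)**2, f1))   # True

# Example 1.44: f(x) = 1 + cos(x); two different square roots work.
f2 = 1 + np.cos(x)
g2 = (1 + np.exp(1j*x)) / np.sqrt(2)
g3 = (np.exp(1j*x) + np.exp(2j*x)) / np.sqrt(2)
print(np.allclose(np.abs(g2)**2, f2) and np.allclose(np.abs(g3)**2, f2))   # True
```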

Conclusion

The thesis can be fully understood with an elementary knowledge of linear algebra. It presents basic results on finite-dimensional vector spaces with an inner product. Section 1.1 contains the basic properties of frames. For example, it is proved that every finite set of vectors \{f_k\}_{k=1}^{m} in a vector space with an inner product is a frame for span\{f_k\}_{k=1}^{m}. We have proved the existence of coefficients minimizing the ℓ²-norm among the coefficients in a frame expansion, and showed how a frame for a subspace leads to a formula for the orthogonal projection onto the subspace. In Section 1.2 and Section 1.3, we have considered frames in C^n; in particular, we proved how one can obtain an overcomplete frame by projecting a basis for a larger space. We also proved that vectors \{f_k\}_{k=1}^{m} forming a frame for C^n can be considered as the first n coordinates of vectors in C^m constituting a basis for C^m, and that the frame property for \{f_k\}_{k=1}^{m} is equivalent to certain properties of the m × n matrix having the vectors f_k as rows. In Section 1.4 we have proved that the canonical coefficients in the frame expansion arise naturally by considering the pseudo-inverse of the pre-frame operator, and we showed how to find it in terms of the singular value decomposition. Finally, in Section 1.5 we connected frames in finite-dimensional vector spaces with the infinite-dimensional constructions which we expect to study later.

References

[1] Christensen, O. An Introduction to Frames and Riesz Bases. Birkhäuser, Boston, 2003.

[2] Daubechies, I. Ten Lectures on Wavelets. SIAM, Philadelphia, 1992.

[3] N. H. V. Hưng, Đại Số Tuyến Tính (Linear Algebra). VNU Publishing House, Hanoi, 2001.
