EXAMINING THE ABSOLUTE RATE OF CONVERGENCE OF SUMMABILITY ASSISTED FOURIER SERIES
A dissertation submitted to Kent State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy
by
Brian M. Wright
March, 2007

Dissertation written by
Brian M. Wright
M.A., Kent State University, 2000
B.S./B.S., Bucknell University, 1992
Approved by
Kazim Khan, Chair, Doctoral Dissertation Committee
Charles Gartland, Members, Doctoral Dissertation Committee
Laura Smithies,
C. C. Lu,
Declan Keane,
Accepted by
Andrew Tonge, Chair, Department of Mathematical Sciences
Jerry Feezel, Dean, College of Arts and Sciences
TABLE OF CONTENTS
Acknowledgements

Introduction
0.1 History and definitions
0.2 Overview

1 Introduction to Summability Methods and Some Lemmas
1.1 Why summability methods?
1.2 Hausdorff summability
1.3 Preliminary Lemmas
1.4 Lemma for Chapter 2
1.5 Lemma for Chapter 3

2 Bound for Absolute Rate of Convergence for a Hausdorff Method
2.1 Main Theorem
2.2 How strict a requirement?
2.3 Example illustrating the sharpness of the bound

3 Results Concerning a Hausdorff Method's Row Total Variation
3.1 Introduction
3.2 Two theorems
3.3 Proof of theorem 3.2.1
3.4 Proof of theorem 3.2.2

4 Bounds for Tensor Product Hausdorff Transforms
4.1 Introduction and definitions
4.2 Extending lemma 1.4.1
4.3 Some necessary propositions
4.4 Main theorem and proof
4.5 Example illustrating the sharpness of the bound
4.6 Future endeavors

BIBLIOGRAPHY
Acknowledgements
To call this work “my” dissertation would be an unimaginable conceit. Although its
completion required my blood, sweat and tears (and even a few sleepless nights - though I
tried to avoid that like the plague), I want to gratefully acknowledge all those whose help
was invaluable.
First and foremost, I wish to thank my advisor, Dr. Kazim Khan. You were the perfect
advisor for me - someone who let me work at my own pace, yet was willing to give me a
nudge (or kick in the butt) when necessary. The original concept for the research area, and many
of the clever solution techniques found within were the product of your fertile imagination.
I hope that we may continue to collaborate on future projects, as the student has not yet
become the master (PhD notwithstanding)!
Second, I have found Kent State University to be a wonderful place to study. Many of the professors have gone out of their way to help and teach, and I am extremely grateful
for how much they aided in my job search. Also, some of my fellow students have become
friends whose friendship I hope to enjoy the rest of my life. Two friends in particular are
Juan Seoane (el carajote numero uno - or was it numero cero?) and Antonia Cardwell (also
a carajote). During the writing of this dissertation, you both have proofread numerous drafts; encouraged me to keep on keeping on; and just listened when I needed to vent. Hugs to both of you!
Last, but not least, I want to thank my family and friends from home for their emotional support as well. I particularly want to dedicate this dissertation to my mother and
grandparents who have been waiting very patiently (usually) for me to finish up my PhD.
Here it is - finally!
OK, I am sure most readers would like to get past all this mushy stuff and get into the
mathematics. Enjoy!
Introduction
0.1 History and definitions
Approximating an arbitrary function by a series of “nicer” functions is nothing new
in mathematics. Perhaps the most famous example is the Taylor series approximation
which approximates a function by a polynomial. This is advantageous since mathematicians
have studied polynomials for hundreds of years and know many results concerning them.
Another useful approximation technique is Fourier series approximation, particularly when
approximating a periodic function, and it is this on which we will focus. A Fourier series
has the desirable qualities that the functions $\{\tfrac{1}{2}, \cos x, \sin x, \cos 2x, \sin 2x, \ldots\}$ are each $2\pi$-periodic and the set is both orthogonal and complete on any interval of length $2\pi$. As
Zygmund points out in [17], many basic results of the theory of functions have been obtained by mathematicians who were working on trigonometric series. For example, the definition of a Riemann integral in its general form first appeared in Riemann’s Habilitationsschrift which is devoted to trigonometric series; the more modern Lebesgue integral was developed in close connection with the theory of Fourier series; and set theory was created by Cantor in his attempts to solve the problem of the sets of uniqueness for trigonometric series. For an integrable function, f, we will denote its Fourier series by:
\[
(1)\qquad S(f, x) \sim \frac{a_0}{2} + \sum_{k=1}^{\infty} (a_k \cos kx + b_k \sin kx)
\]
where the coefficients $a_k$ and $b_k$ are defined as:
\[
(2)\qquad a_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos kt \, dt, \qquad b_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin kt \, dt, \qquad k = 0, 1, 2, \ldots .
\]
These coefficient definitions demonstrate another interesting difference between Fourier
series and Taylor series. In Taylor series, the coefficients require the existence of derivatives of the function, while Fourier series’ coefficients require integrability of the function (a much less strict requirement). The ∼ symbol in (1) above indicates we recognize that the infinite
sum need not converge. This convergence, or lack thereof, immediately became crucial in
the study of Fourier series. As a result, it is useful to define the nth partial sum of the
series:
\[
(3)\qquad S_n(f, x) = \begin{cases} \dfrac{a_0}{2} + \displaystyle\sum_{k=1}^{n} (a_k \cos kx + b_k \sin kx), & n = 1, 2, \ldots, \\[1ex] \dfrac{a_0}{2}, & n = 0. \end{cases}
\]
If we substitute (2) into (3), we can derive another well-known formula for the Fourier series
partial sums (see [17] for example):
\[
(4)\qquad S_n(f, x) = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x - u) D_n(u) \, du,
\]
where
\[
(5)\qquad D_n(u) = \begin{cases} n + \tfrac{1}{2}, & \text{if } u = 2m\pi \text{ for some integer } m, \\[1ex] \dfrac{\sin\bigl(n + \tfrac{1}{2}\bigr)u}{2 \sin\bigl(\tfrac{1}{2}u\bigr)}, & \text{otherwise.} \end{cases}
\]
$D_n(u)$ is called the Dirichlet kernel. An equivalent form for the Dirichlet kernel is:
\[
(6)\qquad D_n(u) = \frac{1}{2} + \sum_{j=1}^{n} \cos(ju).
\]
The figure below illustrates the shape of the Dirichlet kernel for $n = 5$ and $n = 8$.
[Figure 1: Typical shapes of the Dirichlet kernel. Left panel: $D_5(u)$; right panel: $D_8(u)$; horizontal axis $u \in [-4, 4]$.]
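The following short Python sketch (not part of the original dissertation; the sample function and grid sizes are arbitrary illustrative choices) evaluates the Dirichlet kernel (5)-(6) and a partial sum via the coefficient formulas (2) and (3), and can be used to reproduce plots like Figure 1.
\begin{verbatim}
import numpy as np

def dirichlet_kernel(u, n):
    """Dirichlet kernel D_n(u); uses the closed form (5) away from u = 2*m*pi."""
    u = np.asarray(u, dtype=float)
    out = np.empty_like(u)
    near_pole = np.isclose(np.sin(u / 2.0), 0.0)
    out[near_pole] = n + 0.5                      # value at u = 2*m*pi, from (5)
    uu = u[~near_pole]
    out[~near_pole] = np.sin((n + 0.5) * uu) / (2.0 * np.sin(uu / 2.0))
    return out

def fourier_partial_sum(f, x, n, num_nodes=4096):
    """S_n(f, x) built from the coefficient formulas (2) and the sum (3)."""
    t = np.linspace(-np.pi, np.pi, num_nodes, endpoint=False)
    dt = 2.0 * np.pi / num_nodes
    ft = f(t)
    s = np.sum(ft) * dt / (2.0 * np.pi)           # a_0 / 2
    for k in range(1, n + 1):
        a_k = np.sum(ft * np.cos(k * t)) * dt / np.pi
        b_k = np.sum(ft * np.sin(k * t)) * dt / np.pi
        s += a_k * np.cos(k * x) + b_k * np.sin(k * x)
    return s

if __name__ == "__main__":
    print("D_5(0) =", dirichlet_kernel([0.0], 5)[0])        # 5.5 = 5 + 1/2
    square_wave = lambda t: np.sign(np.sin(t))              # illustrative 2*pi-periodic f
    print("S_25(f, 1.0) =", fourier_partial_sum(square_wave, 1.0, 25))
\end{verbatim}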
If we extend our definitions of $a_n$ and $b_n$ to the other integer values of $n$ in the following way:
\[
a_{-n} = a_n \quad (n > 0), \qquad b_{-n} = -b_n \quad (n > 0),
\]
then we can define a new coefficient:
\[
(7)\qquad c_n = \frac{1}{2}(a_n - i b_n),
\]
for any $n \in \mathbb{Z}$. By substituting (2) into (7), this is equivalent to:
\[
(8)\qquad c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t) \, e^{-int} \, dt.
\]
Using this new coefficient, we can express the Fourier series (1) as (see [6], for example):
\[
(9)\qquad S(f, x) \sim \sum_{k=-\infty}^{\infty} c_k \, e^{ikx}.
\]
The partial Fourier sums (3) may be written [6]:
\[
(10)\qquad S_n(f, x) = \sum_{k=-n}^{n} c_k \, e^{ikx}.
\]
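As a quick numerical sanity check (illustrative only, not from the text; the square wave is an arbitrary choice), one can verify that the complex form (8)-(10) reproduces the same partial sums as the real form (3):
\begin{verbatim}
import numpy as np

def fourier_partial_sum_complex(f, x, n, num_nodes=4096):
    """S_n(f, x) via the complex coefficients (8) and the sum (10)."""
    t = np.linspace(-np.pi, np.pi, num_nodes, endpoint=False)
    dt = 2.0 * np.pi / num_nodes
    ft = f(t)
    s = 0.0 + 0.0j
    for k in range(-n, n + 1):
        c_k = np.sum(ft * np.exp(-1j * k * t)) * dt / (2.0 * np.pi)   # formula (8)
        s += c_k * np.exp(1j * k * x)
    return s.real   # the imaginary part cancels for real-valued f

if __name__ == "__main__":
    square_wave = lambda t: np.sign(np.sin(t))
    # should agree (to quadrature accuracy) with the real-form partial sum S_25(f, 1.0)
    print(fourier_partial_sum_complex(square_wave, 1.0, 25))
\end{verbatim}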
In chapters 1 through 3, we will use (1) and (3) exclusively. In chapter 4, however, we find
it useful to use the other, equivalent, definitions.
To draw any conclusions about the convergence of the partial sums, we will require
stronger restrictions on f. Therefore, denote
\[
(11)\qquad \operatorname{Var}_a^b(f) = \sup_{N, \{t_i\}} \sum_{i=1}^{N} |f(t_i) - f(t_{i-1})|,
\]
where $a = t_0 < t_1 < \ldots < t_N = b$. $\operatorname{Var}_a^b(f)$ is called the total variation of the function $f$ on the interval $[a, b]$. If the total variation is finite, then the function is said to be of bounded variation on that interval. The notation $BV[a, b]$ represents the space of functions of bounded variation over the interval $[a, b]$. It is well known that $f \in BV[a, b]$ if and only if there exist bounded, monotonic functions $f_1$ and $f_2$ such that $f = f_1 - f_2$. This implies $BV[a, b] \subseteq L^{\infty}[a, b]$, so that $BV[a, b] \subseteq L^{\infty}[a, b] \subseteq \ldots \subseteq L^{2}[a, b] \subseteq L^{1}[a, b]$.
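Since the supremum in (11) runs over all finite partitions, a finite grid can only underestimate it; still, for reasonably tame functions the following sketch (an assumption-laden approximation, not part of the original text) gives a practical estimate of the total variation:
\begin{verbatim}
import numpy as np

def total_variation(f, a, b, num_nodes=100_000):
    """Approximate Var_a^b(f) using a fine uniform partition of [a, b]."""
    t = np.linspace(a, b, num_nodes)
    values = f(t)
    return np.sum(np.abs(np.diff(values)))

if __name__ == "__main__":
    # sin has total variation 4 on [-pi, pi]: it falls by 1, rises by 2, falls by 1
    print(total_variation(np.sin, -np.pi, np.pi))   # ~4.0
\end{verbatim}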
For f ∈ BV [−π, π] and 2π-periodic, there is a theorem of Dirichlet-Jordan [17] which states that:
\[
(12)\qquad \lim_{n \to \infty} S_n(f, x) = \frac{1}{2}\bigl[f(x^+) + f(x^-)\bigr],
\]
for every $x \in [-\pi, \pi]$. Notice that for a periodic $f \in BV[-\pi, \pi]$, this implies that the Fourier series converges (pointwise) to the function itself at points of continuity, and gives the limiting value $\frac{1}{2}[f(x^+) + f(x^-)]$ at points of simple discontinuity. The rate of
convergence was given by Bojanic in [3] to be (for n ≥ 1):
\[
(13)\qquad \left| S_n(f, x) - \frac{1}{2}\bigl[f(x^+) + f(x^-)\bigr] \right| \le \frac{3}{n} \sum_{k=1}^{n} \operatorname{Var}_0^{\pi/k}(\phi_x),
\]
where
\[
(14)\qquad \phi_x(t) = \begin{cases} f(x + t) + f(x - t) - \bigl[f(x^+) + f(x^-)\bigr], & t \ne 0, \\ 0, & t = 0. \end{cases}
\]
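To get a feel for the right-hand side of (13), here is an illustrative Python sketch (the test function, the point $x$, and the grid resolution are my own choices, not the author's) that approximates $\operatorname{Var}_0^{\pi/k}(\phi_x)$ numerically and sums Bojanic's bound:
\begin{verbatim}
import numpy as np

def phi_x(f, x, t):
    """phi_x(t) from (14); f(x+) and f(x-) are approximated by one-sided samples."""
    eps = 1e-9
    jump_avg = 0.5 * (f(x + eps) + f(x - eps))
    return np.where(t == 0.0, 0.0, f(x + t) + f(x - t) - 2.0 * jump_avg)

def var_phi(f, x, b, num_nodes=20000):
    """Approximate Var_0^b(phi_x) using a fine uniform partition of [0, b]."""
    t = np.linspace(0.0, b, num_nodes)
    return np.sum(np.abs(np.diff(phi_x(f, x, t))))

if __name__ == "__main__":
    f = lambda t: np.cos(t) + 0.3 * np.sin(2.0 * t)   # an arbitrary smooth 2*pi-periodic f
    x, n = 1.0, 50
    bound = (3.0 / n) * sum(var_phi(f, x, np.pi / k) for k in range(1, n + 1))
    print("Right-hand side of (13) at n =", n, "is", bound)
\end{verbatim}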
The rate of convergence of Fourier series at smooth regions of the function has been well studied, and we will not go into any details here. The interested reader may consult
[17], [8], or [11]. Similarly, there are many interesting results concerning lack of convergence which we will not go into in this dissertation, and the interested reader should consult [1] or [2].
Rather, this dissertation will focus not only on the convergence of Fourier series at points of discontinuity, but more specifically on summability-assisted Fourier series. Perhaps the simplest and most commonly used summability technique is the Cesàro transform (also called the $(C, 1)$ transform). Here, instead of looking at the convergence of $S_n(f, x)$, we look at the convergence of $\frac{1}{n+1} \sum_{k=0}^{n} S_k(f, x)$. Fejér [13] showed that $\frac{1}{n+1} \sum_{k=0}^{n} S_k(f, x)$ converges to $f(x)$ when $f$ is $2\pi$-periodic and continuous over $[-\pi, \pi]$. Riesz [17] extended this result by showing that the Cesàro transform converges to $\frac{1}{2}[f(x^+) + f(x^-)]$ provided $f$ is $2\pi$-periodic, $f \in L^1[-\pi, \pi]$, and $x$ is a point of continuity or simple discontinuity.
More generally, for α > 0, we can define the (C, α) transform as:
\[
(15)\qquad S_n^{\alpha}(f, x) = \frac{1}{B_n^{\alpha}} \sum_{k=0}^{n} B_{n-k}^{\alpha-1} \cdot S_k(f, x),
\]
where, for $t > -1$ and $n \ge 0$,
\[
B_n^t = \frac{\Gamma(n + t + 1)}{\Gamma(n + 1)\Gamma(t + 1)},
\]
and
\[
\Gamma(z) = \int_0^{\infty} e^{-t} t^{z-1} \, dt,
\]
for $z > 0$. The gamma function is such that if $z$ is a positive integer, $\Gamma(z) = (z - 1)!$. Also notice that in the case where $\alpha = 1$, we have the Cesàro transform, since
\[
\frac{B_{n-k}^{\alpha-1}}{B_n^{\alpha}} = \frac{B_{n-k}^{0}}{B_n^{1}} = \frac{1}{n+1}.
\]
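As a concrete illustration of (15) (a sketch only; the input sequence of partial sums is an arbitrary example), the $(C, \alpha)$ means can be computed directly from the coefficients $B_n^t$:
\begin{verbatim}
import numpy as np
from math import gamma

def B(n, t):
    """B_n^t = Gamma(n + t + 1) / (Gamma(n + 1) * Gamma(t + 1))."""
    return gamma(n + t + 1) / (gamma(n + 1) * gamma(t + 1))

def cesaro_transform(partial_sums, alpha):
    """(C, alpha) means of a sequence of partial sums, per formula (15)."""
    s = np.asarray(partial_sums, dtype=float)
    out = np.empty_like(s)
    for n in range(len(s)):
        weights = np.array([B(n - k, alpha - 1) for k in range(n + 1)])
        out[n] = weights @ s[: n + 1] / B(n, alpha)
    return out

if __name__ == "__main__":
    # For alpha = 1 the weights collapse to 1/(n+1): plain averaging of S_0, ..., S_n.
    partial_sums = [1.0, 0.0, 1.0, 0.0, 1.0]      # partial sums of a Grandi-type series
    print(cesaro_transform(partial_sums, 1.0))    # approximately [1.0, 0.5, 0.667, 0.5, 0.6]
\end{verbatim}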
Both Zygmund [17], and Bojanic and Mazhar [4] showed that the above theorems of Fejér and Riesz extend for all (C, α) transforms.
In 1999, Humphreys and Bojanic [7] found a rate of convergence for the absolute (C, α) summed Fourier series. In fact, for f being 2π-periodic and f ∈ BV [−π, π], they found the absolute convergence rate to be:
\[
(16)\qquad \left| S_n^{\alpha} - \frac{1}{2}\bigl[f(x^+) + f(x^-)\bigr] \right| \le \sum_{k=n+1}^{\infty} \bigl|S_k^{\alpha}(f, x) - S_{k-1}^{\alpha}(f, x)\bigr| \le \frac{4\alpha}{n\pi} \sum_{k=1}^{n} \operatorname{Var}_0^{\pi/k}(\phi_x),
\]
for α > 0, and φx as defined in (14). In 2003, Kunyang and Dan [10] showed that the
Humphreys and Bojanic result only held for α ≥ 1, and they found the bound for 0 < α < 1
to be:
\[
(17)\qquad \left| S_n^{\alpha} - \frac{1}{2}\bigl[f(x^+) + f(x^-)\bigr] \right| \le \sum_{k=n+1}^{\infty} \bigl|S_k^{\alpha}(f, x) - S_{k-1}^{\alpha}(f, x)\bigr| \le \frac{100}{\alpha^2 n^{\alpha}} \sum_{k=1}^{n} k^{\alpha-1} \operatorname{Var}_0^{\pi/k}(\phi_x).
\]
0.2 Overview
In this dissertation, we wish to expand results (16) and (17). In the first chapter, we
will examine the purpose of summability methods in general. We then define a particular
class of summability methods called Hausdorff methods and discuss some notation. Finally, we will introduce and prove several lemmas which will be used in the later chapters.
In the second chapter, we introduce and prove a theorem concerning a bound for the absolute rate of convergence of a Hausdorff method. We then give an example to illustrate the sharpness of the bound. Finally, we examine the strictness of our assumptions in the theorem, and show that our theorem is in fact an extension of previous work done by
Humphreys and Bojanic, and Kunyang and Dan.
In the third chapter, we introduce and prove two new theorems to examine why the different summability methods have different convergence rates, and provide an application
of our results to show the equivalence of the submethods of regular Hausdorff methods.
In chapter four, we look at functions of two variables, and give the extensions of all the previous definitions. We then introduce a new lemma and several propositions and extend a previous lemma in order to extend our bounding theorem of chapter 2 to the multivariate case. After stating and proving this extension, we again illustrate the sharpness of our bound.

CHAPTER 1
Introduction to Summability Methods and Some Lemmas
1.1 Why summability methods?
For a general function, f, its Fourier series need not converge, even at a point of con- tinuity. Poisson seems to have been the first person to try to improve convergence using summability methods. He applied Abel’s summation technique to Fourier series in a method which is now referred to as, alternately, the Poisson method, Abel method, or A method.
This is a stronger method than the more common (C,1) method - but it is not a Hausdorff method. Since the developments in this dissertation are specifically for Hausdorff methods, we will say no more about the Abel method (the interested reader may consult [6] or [14], for example). Then, as mentioned in the introduction, in 1904 Fejér showed that the (C, 1) transform of f converges to f at any point of continuity. Later, Lebesgue showed that the (C, 1) transform of f converges to f almost everywhere [6]. These successes led to the
development of summability methods as an entire field in their own right.
As more and more summability methods were being developed, several characteristics were determined to be important. First, it is considered desirable for the method to be linear. Second, and more importantly, the summability method should be regular. That is, anytime the Fourier series’ partial sums converge, the summability assisted sums also converge to the same value, so that the summability method always does at least as well as the original Fourier series in terms of convergence. Necessary and sufficient conditions for a general linear method to be regular were found by Toeplitz, and can be found in [6].
8 9
We will present the conditions required for a Hausdorff method to be regular later in the chapter.
Although better convergence is considered the best benefit of using a summability method, it is not the only one. After all, since we will be restricting ourselves to functions f which are 2π-periodic and of bounded variation on [−π, π], we are not worried about convergence. Recall from the introduction, that there is a theorem of Dirichlet-Jordan (12)
which states that for such functions, the regular Fourier series itself will converge for every x ∈ [−π, π]. So what can we gain from using a Hausdorff summability method? Well, summability methods sometimes have more desirable properties than the original Fourier series. For example, it is well known [14] that a Fourier series exhibits Gibb’s phenomenon near a point of a jump discontinuity in the original function. Sometimes a summability method will be used expressly to eliminate this (although not every summability method eliminates Gibb’s phenomenon). For instance, Gronwall [5] has shown that there exists a constant, c, such that whenever α < c the (C, α) method preserves Gibb’s phenomenon, while when α ≥ c the (C, α) method eliminates it. Gronwall also found this constant to be approximately 0.4395. Hence, someone wishing to preserve the occurrence of Gibb’s phenomenon, and still needing to quantify the rate of convergence, need only choose to use a (C, α) method with an appropriately small α, while someone wishing to kill the Gibb’s phenomenon need only choose to use a (C, α) method with an appropriately large α.
Finally, notice that while Bojanic (13) gives a bound for the rate of convergence of functions of bounded variation, Humphreys and Bojanic (16) and Kunyang and Dan (17) are able to give a bound on the absolute convergence rate of the same type of functions which are (C, α)-assisted. Even though we generalize to all Hausdorff methods, we also will
find a bound for the absolute rate of convergence.
1.2 Hausdorff summability
For the rest of this dissertation, we wish to focus on Hausdorff summability methods.
Let $f$ be a $2\pi$-periodic function, and let $f \in BV[-\pi, \pi]$. The Hausdorff transform, $(H_{\Phi}S)_k$, of the partial sums of $f$ is defined to be:
\[
(1.1)\qquad (H_{\Phi}S)_k = \sum_{j=0}^{k} h_{k,j} S_j(f, x), \qquad k = 0, 1, 2, \ldots,
\]
where
\[
(1.2)\qquad h_{k,j} = \begin{cases} \displaystyle\int_0^1 P(X_{k,r} = j) \, d\Phi(r), & 0 \le j \le k, \\[1ex] 0, & \text{otherwise,} \end{cases}
\]
for $\Phi \in BV[0, 1]$, where $X_{k,r}$ is a binomially distributed random variable and $P(X_{k,r} = j) = \binom{k}{j} r^j (1 - r)^{k-j}$ is the probability of getting $j$ successes in $k$ independent trials, each of which results in a success with probability $r$. A Hausdorff transformation, as defined above, is regular if and only if the following two conditions hold [13]:
• Φ(r) is continuous from the right at r = 0,
• $\displaystyle\int_0^1 d\Phi(r) = \Phi(1) - \Phi(0) = 1$.
Without loss of generality, we will assume Φ(0) = 0, in which case the regularity condi- tions become:
• Φ(0+) = Φ(0) = 0.
• Φ(1) = 1.
Some particular regular Hausdorff methods are the Cesàro $(C, \alpha)$ methods, in which $\frac{d\Phi}{dr} = \alpha(1 - r)^{\alpha-1}$; the Hölder $(H, \alpha)$ methods, in which $\frac{d\Phi}{dr} = \frac{1}{\Gamma(\alpha)} \bigl[\ln\bigl(\tfrac{1}{r}\bigr)\bigr]^{\alpha-1}$; and the Euler methods, in which $\Phi(r)$ equals zero on $[0, c)$ and equals one on $[c, 1]$ for some fixed $c \in (0, 1)$.
The interested reader can verify that this definition of the (C, α) transform corresponds with that given in (15).
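One way to carry out that verification numerically (an illustrative sketch, assuming the Cesàro density $\frac{d\Phi}{dr} = \alpha(1-r)^{\alpha-1}$ with a smooth $\alpha \ge 1$; the naive quadrature below is too crude near $r = 1$ when $0 < \alpha < 1$) is to compute the weights $h_{k,j}$ of (1.2) by quadrature and compare them with $B_{k-j}^{\alpha-1}/B_k^{\alpha}$ from (15):
\begin{verbatim}
import numpy as np
from math import comb, gamma

def hausdorff_weight(k, j, density, num_nodes=20001):
    """h_{k,j} from (1.2) with dPhi(r) = density(r) dr, via the trapezoid rule."""
    r = np.linspace(0.0, 1.0, num_nodes)
    g = comb(k, j) * r**j * (1.0 - r) ** (k - j) * density(r)
    dr = r[1] - r[0]
    return dr * (g.sum() - 0.5 * (g[0] + g[-1]))

def B(n, t):
    """B_n^t = Gamma(n + t + 1) / (Gamma(n + 1) * Gamma(t + 1))."""
    return gamma(n + t + 1) / (gamma(n + 1) * gamma(t + 1))

if __name__ == "__main__":
    alpha, k = 2.0, 6
    dphi = lambda r: alpha * (1.0 - r) ** (alpha - 1.0)   # Cesaro (C, alpha) density
    for j in range(k + 1):
        # the two columns agree up to quadrature error
        print(j, hausdorff_weight(k, j, dphi), B(k - j, alpha - 1) / B(k, alpha))
\end{verbatim}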
1.3 Preliminary Lemmas
The following two lemmas will be useful later in this chapter.
Lemma 1.3.1 Let $j, k \in \mathbb{N}$, $k \ge 2$, and $0 \le j \le k$. Also let $X_{k,r}$ be a binomially distributed random variable. Then
\[
(k - j) P(X_{k,r} = j) + (j + 1) P(X_{k,r} = j + 1) = k \cdot P(X_{k-1,r} = j).
\]
Proof
If j = k, both sides are trivially zero. For j < k:
\begin{align*}
(k - j) P(X_{k,r} = j) &+ (j + 1) P(X_{k,r} = j + 1) \\
&= (k - j) \binom{k}{j} r^j (1 - r)^{k-j} + (j + 1) \binom{k}{j+1} r^{j+1} (1 - r)^{k-j-1} \\
&= \frac{k!(k - j)}{j!(k - j)!} r^j (1 - r)^{k-j} + \frac{k!(j + 1)}{(j + 1)!(k - j - 1)!} r^{j+1} (1 - r)^{k-j-1} \\
&= \frac{k!}{j!(k - j - 1)!} r^j (1 - r)^{k-j-1} \bigl[(1 - r) + r\bigr] \\
&= k \cdot \binom{k - 1}{j} r^j (1 - r)^{k-1-j} \\
&= k \cdot P(X_{k-1,r} = j).
\end{align*}
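A quick numerical spot check of Lemma 1.3.1, with arbitrarily chosen $k$, $j$, and $r$ (purely illustrative):
\begin{verbatim}
from math import comb

def binom_pmf(k, r, j):
    """P(X_{k,r} = j) for a Binomial(k, r) random variable."""
    return comb(k, j) * r**j * (1 - r) ** (k - j)

k, j, r = 7, 3, 0.42
lhs = (k - j) * binom_pmf(k, r, j) + (j + 1) * binom_pmf(k, r, j + 1)
rhs = k * binom_pmf(k - 1, r, j)
print(lhs, rhs)   # the two values agree
\end{verbatim}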
Lemma 1.3.2 If $a_j$ and $b_j$ are the Fourier coefficients as defined in (2) and $\phi_x$ is as defined in (14), then for $j \ge 1$:
\[
a_j \cos(jx) + b_j \sin(jx) = \frac{1}{\pi} \int_0^{\pi} \phi_x(t) \cos(jt) \, dt.
\]
Proof
Using the definitions of the Fourier coefficients, we have:
\begin{align*}
a_j \cos(jx) + b_j \sin(jx) &= \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos(jt) \cos(jx) \, dt + \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin(jt) \sin(jx) \, dt \\
&= \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos[j(x - t)] \, dt,
\end{align*}
using the trigonometric identity $\cos(a - b) = \cos a \cos b + \sin a \sin b$. Making the substitution $u = x - t$, we obtain:
\begin{align*}
a_j \cos(jx) + b_j \sin(jx) &= \frac{1}{\pi} \int_{x-\pi}^{x+\pi} f(x - u) \cos(ju) \, du \\
&= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x - u) \cos(ju) \, du \\
&= \frac{1}{\pi} \int_{-\pi}^{0} f(x - u) \cos(ju) \, du + \frac{1}{\pi} \int_{0}^{\pi} f(x - u) \cos(ju) \, du.
\end{align*}
Making the change of variable $v = -u$ in the first integral yields:
\begin{align*}
a_j \cos(jx) + b_j \sin(jx) &= \frac{1}{\pi} \int_{0}^{\pi} f(x + v) \cos(-jv) \, dv + \frac{1}{\pi} \int_{0}^{\pi} f(x - u) \cos(ju) \, du \\
&= \frac{1}{\pi} \int_{0}^{\pi} f(x + v) \cos(jv) \, dv + \frac{1}{\pi} \int_{0}^{\pi} f(x - u) \cos(ju) \, du.
\end{align*}
Changing the dummy variables $u$ and $v$ to $t$, and combining the two integrals, gives:
\begin{align*}
a_j \cos(jx) + b_j \sin(jx) &= \frac{1}{\pi} \int_{0}^{\pi} [f(x + t) + f(x - t)] \cos(jt) \, dt \\
&= \frac{1}{\pi} \int_{0}^{\pi} \bigl[f(x + t) + f(x - t) - f(x^+) - f(x^-)\bigr] \cos(jt) \, dt \\
&= \frac{1}{\pi} \int_{0}^{\pi} \phi_x(t) \cos(jt) \, dt,
\end{align*}
since $f(x^+)$ and $f(x^-)$ are just constants (with respect to $t$), and the integral of a constant times cosine from $0$ to $\pi$ equals zero.
1.4 Lemma for Chapter 2
Here we introduce and prove a lemma which we will use in chapter 2 in the proof of our main theorem.
Lemma 1.4.1 Let $f$ be a $2\pi$-periodic function, let $h_{k,j}$ be defined as in (1.2), and let $\phi_x$ be as defined in (14). For any $\Phi(r)$, if $(H_{\Phi}S)_k$ is the Hausdorff transform of the Fourier series, then:
\[
(H_{\Phi}S)_k - (H_{\Phi}S)_{k-1} = \frac{1}{k\pi} \int_0^{\pi} \phi_x(t) \sum_{j=0}^{k} h_{k,j} \, j \cos(jt) \, dt.
\]
Proof
\begin{align*}
(H_{\Phi}S)_k - (H_{\Phi}S)_{k-1} &= \frac{1}{k} \bigl[k \cdot (H_{\Phi}S)_k - k \cdot (H_{\Phi}S)_{k-1}\bigr] \\
&= \frac{1}{k} \left[ k \cdot (H_{\Phi}S)_k - k \cdot \sum_{j=0}^{k-1} S_j \int_0^1 P(X_{k-1,r} = j) \, d\Phi(r) \right] \\
&= \frac{1}{k} \left[ k \cdot (H_{\Phi}S)_k - \sum_{j=0}^{k-1} S_j \int_0^1 k \cdot P(X_{k-1,r} = j) \, d\Phi(r) \right].
\end{align*}
Now we will use Lemma 1.3.1 to get:
\begin{align*}
(H_{\Phi}S)_k - (H_{\Phi}S)_{k-1} &= \frac{1}{k} \left[ k \cdot (H_{\Phi}S)_k - \sum_{j=0}^{k-1} S_j \int_0^1 \bigl[(k - j) P(X_{k,r} = j) + (j + 1) P(X_{k,r} = j + 1)\bigr] \, d\Phi(r) \right] \\
&= \frac{1}{k} \sum_{j=0}^{k} k \, S_j \int_0^1 P(X_{k,r} = j) \, d\Phi(r) \\
&\quad - \frac{1}{k} \sum_{j=0}^{k-1} S_j \int_0^1 \bigl[(k - j) P(X_{k,r} = j) + (j + 1) P(X_{k,r} = j + 1)\bigr] \, d\Phi(r) \\
&= \frac{1}{k} \left[ k S_k \int_0^1 P(X_{k,r} = k) \, d\Phi(r) + \sum_{j=0}^{k-1} k S_j \int_0^1 P(X_{k,r} = j) \, d\Phi(r) \right] \\
&\quad - \frac{1}{k} \sum_{j=0}^{k-1} S_j \int_0^1 \bigl[(k - j) P(X_{k,r} = j) + (j + 1) P(X_{k,r} = j + 1)\bigr] \, d\Phi(r) \\
&= \frac{1}{k} \, k S_k \int_0^1 P(X_{k,r} = k) \, d\Phi(r) \\
&\quad + \frac{1}{k} \left[ \sum_{j=0}^{k-1} j S_j \int_0^1 P(X_{k,r} = j) \, d\Phi(r) - \sum_{j=0}^{k-1} (j + 1) S_j \int_0^1 P(X_{k,r} = j + 1) \, d\Phi(r) \right].
\end{align*}
Now making the change of the index of summation $i = j + 1$ in the last sum gives:
\begin{align*}
(H_{\Phi}S)_k - (H_{\Phi}S)_{k-1} &= \frac{1}{k} \, k S_k \int_0^1 P(X_{k,r} = k) \, d\Phi(r) \\
&\quad + \frac{1}{k} \left[ \sum_{j=0}^{k-1} j S_j \int_0^1 P(X_{k,r} = j) \, d\Phi(r) - \sum_{i=1}^{k} i S_{i-1} \int_0^1 P(X_{k,r} = i) \, d\Phi(r) \right] \\
&= \frac{1}{k} \left[ \sum_{j=0}^{k} j S_j \int_0^1 P(X_{k,r} = j) \, d\Phi(r) - \sum_{i=0}^{k} i S_{i-1} \int_0^1 P(X_{k,r} = i) \, d\Phi(r) \right] \\
&= \frac{1}{k} \sum_{j=0}^{k} j (S_j - S_{j-1}) \int_0^1 P(X_{k,r} = j) \, d\Phi(r) \\
&= \frac{1}{k} \sum_{j=0}^{k} j \bigl(a_j \cos(jx) + b_j \sin(jx)\bigr) \int_0^1 P(X_{k,r} = j) \, d\Phi(r),
\end{align*}
by the definition (3) of the partial Fourier sums. Therefore, using Lemma 1.3.2:
\begin{align*}
(H_{\Phi}S)_k - (H_{\Phi}S)_{k-1} &= \frac{1}{k} \sum_{j=0}^{k} \frac{j}{\pi} \left( \int_0^{\pi} \phi_x(t) \cos(jt) \, dt \right) \int_0^1 P(X_{k,r} = j) \, d\Phi(r) \\
&= \frac{1}{k\pi} \int_0^{\pi} \phi_x(t) \sum_{j=0}^{k} h_{k,j} \, j \cos(jt) \, dt.
\end{align*}
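The heart of the proof is the identity $(H_{\Phi}S)_k - (H_{\Phi}S)_{k-1} = \frac{1}{k} \sum_{j=0}^{k} j (S_j - S_{j-1}) h_{k,j}$, which can be spot-checked numerically. The sketch below is illustrative only; the Cesàro density and the random "partial sums" are my own choices, and the weights are computed by a simple quadrature helper:
\begin{verbatim}
import numpy as np
from math import comb

def hausdorff_weights_row(k, density, num_nodes=20001):
    """Row k of the Hausdorff matrix (1.2) for dPhi(r) = density(r) dr."""
    r = np.linspace(0.0, 1.0, num_nodes)
    dr = r[1] - r[0]
    row = []
    for j in range(k + 1):
        g = comb(k, j) * r**j * (1.0 - r) ** (k - j) * density(r)
        row.append(dr * (g.sum() - 0.5 * (g[0] + g[-1])))   # trapezoid rule
    return np.array(row)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k = 8
    S = rng.normal(size=k + 1)                      # arbitrary "partial sums" S_0, ..., S_k
    dphi = lambda r: 2.0 * (1.0 - r)                # Cesaro (C, 2) density, as an example
    h_k = hausdorff_weights_row(k, dphi)
    h_km1 = hausdorff_weights_row(k - 1, dphi)
    lhs = h_k @ S - h_km1 @ S[:k]
    diffs = np.diff(np.concatenate(([0.0], S)))     # S_j - S_{j-1}; the j = 0 term is killed by j
    rhs = sum(j * diffs[j] * h_k[j] for j in range(k + 1)) / k
    print(lhs, rhs)                                 # agree up to quadrature error
\end{verbatim}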
1.5 Lemma for Chapter 3
Next, we will introduce and prove a lemma which we will use in chapter 3.
Lemma 1.5.1 Let $h_{n,k}$ be the coefficients for a regular Hausdorff method as defined in (1.1) for the weight function $\Phi$, and let $n, k \in \mathbb{N}$ with $0 \le k \le n$. Define $h_{n,k}^{\Psi}$ to be the coefficients for the Hausdorff method with weight function $\Psi$ such that $d\Psi(r) = r \cdot d\Phi(r)$; that is, the Radon-Nikodym derivative of $\Psi(r)$ with respect to $\Phi(r)$ is $r$. Then:
\[
|h_{n+1,k} - h_{n,k}| = \bigl|h_{n,k}^{\Psi} - h_{n,k-1}^{\Psi}\bigr|.
\]
Proof
When k = 0,
\begin{align*}
h_{n+1,0} - h_{n,0} &= \int_0^1 \binom{n+1}{0} r^0 (1 - r)^{n+1} \, d\Phi(r) - \int_0^1 \binom{n}{0} r^0 (1 - r)^n \, d\Phi(r) \\
&= \int_0^1 (1 - r)^n (1 - r - 1) \, d\Phi(r) \\
&= \int_0^1 -r(1 - r)^n \, d\Phi(r).
\end{align*}
While:
\begin{align*}
h_{n,0}^{\Psi} - h_{n,-1}^{\Psi} &= \int_0^1 \binom{n}{0} r^0 (1 - r)^n \, d\Psi(r) - 0 \\
&= \int_0^1 r(1 - r)^n \, d\Phi(r).
\end{align*}
Comparing the absolute values of the above results shows that the lemma holds for $k = 0$.
When $0 < k \le n$, we have:
\begin{align*}
h_{n+1,k} - h_{n,k} &= \int_0^1 \binom{n+1}{k} r^k (1 - r)^{n+1-k} \, d\Phi(r) - \int_0^1 \binom{n}{k} r^k (1 - r)^{n-k} \, d\Phi(r) \\
&= \int_0^1 \frac{r^k (1 - r)^{n-k} \, n!}{k!(n - k)!} \cdot \left[ \frac{(n + 1)(1 - r)}{n + 1 - k} - 1 \right] d\Phi(r) \\
&= \int_0^1 \frac{r^k (1 - r)^{n-k} \, n!}{k!(n - k)!} \cdot \frac{k - (n + 1)r}{n + 1 - k} \, d\Phi(r).
\end{align*}
Thus:
\[
(1.3)\qquad h_{n+1,k} - h_{n,k} = \int_0^1 \frac{r^k (1 - r)^{n-k} \, n!}{k!(n + 1 - k)!} \cdot [k - (n + 1)r] \, d\Phi(r).
\]
Meanwhile:
\begin{align*}
h_{n,k}^{\Psi} - h_{n,k-1}^{\Psi} &= \int_0^1 \binom{n}{k} r^k (1 - r)^{n-k} \, d\Psi(r) - \int_0^1 \binom{n}{k-1} r^{k-1} (1 - r)^{n-k+1} \, d\Psi(r) \\
&= \int_0^1 \frac{r^k (1 - r)^{n-k} \, n!}{(k - 1)!(n - k)!} \cdot \left[ \frac{r}{k} - \frac{1 - r}{n - k + 1} \right] d\Phi(r) \\
&= \int_0^1 \frac{r^k (1 - r)^{n-k} \, n!}{(k - 1)!(n - k)!} \cdot \left[ \frac{nr - kr + r}{k(n - k + 1)} - \frac{k - kr}{k(n - k + 1)} \right] d\Phi(r) \\
&= \int_0^1 \frac{r^k (1 - r)^{n-k} \, n!}{k!(n - k + 1)!} \cdot [(n + 1)r - k] \, d\Phi(r).
\end{align*}
So:
\[
(1.4)\qquad h_{n,k}^{\Psi} - h_{n,k-1}^{\Psi} = -\int_0^1 \frac{r^k (1 - r)^{n-k} \, n!}{k!(n - k + 1)!} \cdot [k - (n + 1)r] \, d\Phi(r).
\]
Comparing (1.3) with (1.4), we obtain the proof of Lemma 1.5.1.
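A small numerical illustration of Lemma 1.5.1 (not part of the dissertation; the Cesàro density is an arbitrary example), using the same quadrature idea for the weights with $d\Psi(r) = r \cdot d\Phi(r)$:
\begin{verbatim}
import numpy as np
from math import comb

def weight(n, k, density, num_nodes=40001):
    """h_{n,k} = integral_0^1 C(n,k) r^k (1-r)^(n-k) density(r) dr (trapezoid rule)."""
    if k < 0 or k > n:
        return 0.0
    r = np.linspace(0.0, 1.0, num_nodes)
    g = comb(n, k) * r**k * (1.0 - r) ** (n - k) * density(r)
    dr = r[1] - r[0]
    return dr * (g.sum() - 0.5 * (g[0] + g[-1]))

if __name__ == "__main__":
    alpha = 2.0
    dphi = lambda r: alpha * (1.0 - r) ** (alpha - 1.0)   # dPhi/dr for (C, alpha)
    dpsi = lambda r: r * dphi(r)                          # dPsi = r * dPhi
    n, k = 9, 4
    lhs = abs(weight(n + 1, k, dphi) - weight(n, k, dphi))
    rhs = abs(weight(n, k, dpsi) - weight(n, k - 1, dpsi))
    print(lhs, rhs)   # the two absolute differences agree up to quadrature error
\end{verbatim}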
CHAPTER 2

Bound for Absolute Rate of Convergence for a Hausdorff Method
Now that all the pieces are in place, we are ready to give a bound for the Hausdorff transform.
2.1 Main Theorem
Theorem 2.1.1 If $f \in BV[-\pi, \pi]$ and $2\pi$-periodic, and if the Hausdorff transform of the sequence $(\sin jt,\ j = 0, 1, 2, \ldots)$ is such that $\displaystyle\sum_{j=0}^{k} h_{k,j} \sin(jt) = O\bigl((kt)^{-\beta}\bigr)$ for every $t \in [\frac{\pi}{k}, \pi]$ and some constant $\beta \in (0, 1]$, then for some constant $C$
\[
R_n^{\Phi}(f, x) := \sum_{k=n+1}^{\infty} \bigl|(H_{\Phi}S)_k - (H_{\Phi}S)_{k-1}\bigr| \le \frac{C}{n^{\beta}} \sum_{k=1}^{n} k^{\beta-1} \operatorname{Var}_0^{\pi/k}(\phi_x), \qquad n \ge 2.
\]
Proof
Using Lemma 1.4.1, we have:
\begin{align*}
R_n^{\Phi}(f, x) &= \sum_{k=n+1}^{\infty} \bigl|(H_{\Phi}S)_k - (H_{\Phi}S)_{k-1}\bigr| \\
&= \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \left| \int_0^{\pi} \phi_x(t) \sum_{j=0}^{k} h_{k,j} \, j \cos(jt) \, dt \right| \\
&= \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \left| \sum_{j=0}^{k} h_{k,j} \int_0^{\pi} \phi_x(t) \, j \cos(jt) \, dt \right| \\
&= \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \left| \sum_{j=0}^{k} h_{k,j} \int_0^{\pi} \sin(jt) \, d\phi_x(t) \right|
\end{align*}
after integrating by parts. Thus:
\begin{align*}
R_n^{\Phi}(f, x) &= \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \left| \int_0^{\pi} \sum_{j=0}^{k} h_{k,j} \sin(jt) \, d\phi_x(t) \right| \\
&\le \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x),
\end{align*}
since we had a Riemann-Stieltjes integral. Therefore,
\begin{align*}
(2.1)\qquad R_n^{\Phi}(f, x) &\le \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x) \\
&\quad + \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_{\pi/k}^{\pi} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x).
\end{align*}
Let us look at each part of the right-hand side of the inequality separately. For the first term:
\begin{align*}
\sum_{k=n+1}^{\infty} &\frac{1}{k\pi} \int_0^{\pi/k} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x) \\
&\le \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \sum_{j=0}^{k} h_{k,j} \, |\sin(jt)| \, d\operatorname{Var}_0^t(\phi_x) \\
&\le \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \sum_{j=0}^{k} h_{k,j} \, (jt) \, d\operatorname{Var}_0^t(\phi_x) \\
&= \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \sum_{j=0}^{k} \left( \int_0^1 P(X_{k,r} = j) \, d\Phi(r) \right) (jt) \, d\operatorname{Var}_0^t(\phi_x) \\
&= \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \left( \int_0^1 \sum_{j=0}^{k} j P(X_{k,r} = j) \, d\Phi(r) \right) t \, d\operatorname{Var}_0^t(\phi_x).
\end{align*}
From probability theory, we recognize the inner sum as the expectation of a binomial random variable, which is equal to $kr$. So:
\[
\sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x) \le \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \left( \int_0^1 kr \, d\Phi(r) \right) t \, d\operatorname{Var}_0^t(\phi_x).
\]
Let $c_1 = \int_0^1 r \, d\Phi(r)$. Then we have:
\[
\sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x) \le \frac{c_1}{\pi} \sum_{k=n+1}^{\infty} \int_0^{\pi/k} t \, d\operatorname{Var}_0^t(\phi_x).
\]
At this point, recall the indicator function, I, is defined as:
\[
(2.2)\qquad I[a] = \begin{cases} 1, & \text{if } a \text{ is true,} \\ 0, & \text{if } a \text{ is false.} \end{cases}
\]
Using this, we have:
\[
\sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x) \le \frac{c_1}{\pi} \sum_{k=n+1}^{\infty} \int_0^{\pi/n} I[kt \le \pi] \, t \, d\operatorname{Var}_0^t(\phi_x).
\]
Since $I$, $t$, and $d\operatorname{Var}_0^t(\phi_x)$ are all nonnegative, Fubini's theorem [15] gives:
\[
(2.3)\qquad \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x) \le \frac{c_1}{\pi} \int_0^{\pi/n} \sum_{k=n+1}^{\infty} I[kt \le \pi] \, t \, d\operatorname{Var}_0^t(\phi_x).
\]
Now let $0 < \varepsilon < \frac{\pi}{n}$. Then notice that, for each fixed $t$, at most $\pi/t$ of the indicators $I[kt \le \pi]$ are nonzero, so:
\[
\frac{c_1}{\pi} \int_{\varepsilon}^{\pi/n} \sum_{k=n+1}^{\infty} I[kt \le \pi] \, t \, d\operatorname{Var}_0^t(\phi_x) \le \frac{c_1}{\pi} \int_{\varepsilon}^{\pi/n} \frac{\pi}{t} \cdot t \, d\operatorname{Var}_0^t(\phi_x) \le c_1 \operatorname{Var}_0^{\pi/n}(\phi_x).
\]
Since this is true for any $0 < \varepsilon < \frac{\pi}{n}$, (2.3) becomes:
\[
(2.4)\qquad \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_0^{\pi/k} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x) \le c_1 \operatorname{Var}_0^{\pi/n}(\phi_x).
\]
As for the second part of (2.1), by assumption, for some positive constant $c_2$:
\begin{align*}
\sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_{\pi/k}^{\pi} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x)
&\le \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_{\pi/k}^{\pi} \frac{c_2}{(kt)^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) \\
&= \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \int_{\pi/k}^{\pi} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x).
\end{align*}
So:
\[
(2.5)\qquad \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_{\pi/k}^{\pi} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x) \le \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \left( \int_{\pi/k}^{\pi/n} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) + \int_{\pi/n}^{\pi} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) \right).
\]
Let us look at each of these two terms separately. First, notice that:
\[
\sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \int_{\pi/k}^{\pi/n} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) = \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \int_0^{\pi/n} I[kt > \pi] \, \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x).
\]
So, again using Fubini's theorem:
\[
(2.6)\qquad \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \int_{\pi/k}^{\pi/n} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) = \int_0^{\pi/n} \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \, I[kt > \pi] \, \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x).
\]
Now, for any $0 < \delta < \frac{\pi}{n}$, we have:
\begin{align*}
\int_{\delta}^{\pi/n} \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \, I[kt > \pi] \, \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x)
&\le \int_{\delta}^{\pi/n} \sum_{k > \pi/t} \frac{c_2}{\pi k^{\beta+1}} \, \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) \\
&= \int_{\delta}^{\pi/n} \sum_{k=[\pi/t]+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \, \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x),
\end{align*}
where $[\frac{\pi}{t}]$ represents the greatest integer $\le \frac{\pi}{t}$. Then,
\begin{align*}
\int_{\delta}^{\pi/n} \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \, I[kt > \pi] \, \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x)
&\le \int_{\delta}^{\pi/n} \frac{c_2}{\pi t^{\beta}} \sum_{k=[\pi/t]+1}^{\infty} \frac{1}{k^{\beta+1}} \, d\operatorname{Var}_0^t(\phi_x) \\
&= \int_{\delta}^{\pi/n} \frac{c_2}{\pi t^{\beta}} \left( \frac{1}{([\pi/t]+1)^{\beta+1}} + \sum_{k=[\pi/t]+2}^{\infty} \frac{1}{k^{\beta+1}} \right) d\operatorname{Var}_0^t(\phi_x) \\
&\le \int_{\delta}^{\pi/n} \frac{c_2}{\pi t^{\beta}} \left( \frac{1}{([\pi/t]+1)^{\beta+1}} + \int_{[\pi/t]+1}^{\infty} \frac{1}{u^{\beta+1}} \, du \right) d\operatorname{Var}_0^t(\phi_x) \\
&\le \int_{\delta}^{\pi/n} \frac{c_2}{\pi t^{\beta}} \left( \frac{t^{\beta+1}}{\pi^{\beta+1}} + \int_{\pi/t}^{\infty} \frac{1}{u^{\beta+1}} \, du \right) d\operatorname{Var}_0^t(\phi_x) \\
&= \int_{\delta}^{\pi/n} \frac{c_2}{\pi t^{\beta}} \left( \frac{t^{\beta+1}}{\pi^{\beta+1}} + \frac{t^{\beta}}{\beta \pi^{\beta}} \right) d\operatorname{Var}_0^t(\phi_x) \\
&\le \int_{\delta}^{\pi/n} \frac{c_2}{\pi t^{\beta}} \left( \frac{t^{\beta}}{\beta \pi^{\beta}} + \frac{t^{\beta}}{\beta \pi^{\beta}} \right) d\operatorname{Var}_0^t(\phi_x),
\end{align*}
as $0 < \beta \le 1 < n \le \frac{\pi}{t}$. Therefore,
\begin{align*}
\int_{\delta}^{\pi/n} \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \, I[kt > \pi] \, \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x)
&\le \frac{2c_2}{\beta \pi^{\beta+1}} \int_{\delta}^{\pi/n} d\operatorname{Var}_0^t(\phi_x) \\
&\le \frac{2c_2}{\beta \pi^{\beta+1}} \int_0^{\pi/n} d\operatorname{Var}_0^t(\phi_x) \\
&= \frac{2c_2}{\beta \pi^{\beta+1}} \operatorname{Var}_0^{\pi/n}(\phi_x).
\end{align*}
Since this is true for all $0 < \delta < \frac{\pi}{n}$, (2.6) becomes:
\[
(2.7)\qquad \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \int_{\pi/k}^{\pi/n} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) \le \frac{2c_2}{\beta \pi^{\beta+1}} \operatorname{Var}_0^{\pi/n}(\phi_x).
\]
As for the second term in (2.5):
\begin{align*}
\sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \int_{\pi/n}^{\pi} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x)
&= \int_{\pi/n}^{\pi} \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \, \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) \\
&= \int_{\pi/n}^{\pi} \frac{c_2}{\pi t^{\beta}} \left( \sum_{k=n+1}^{\infty} \frac{1}{k^{\beta+1}} \right) d\operatorname{Var}_0^t(\phi_x) \\
&\le \int_{\pi/n}^{\pi} \frac{c_2}{\pi t^{\beta}} \int_n^{\infty} \frac{1}{u^{\beta+1}} \, du \, d\operatorname{Var}_0^t(\phi_x) \\
&= \int_{\pi/n}^{\pi} \frac{c_2}{\beta \pi t^{\beta} n^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) \\
&= \frac{c_2}{\beta \pi n^{\beta}} \int_{\pi/n}^{\pi} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x).
\end{align*}
Integrating by parts, we obtain:
\begin{align*}
\sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \int_{\pi/n}^{\pi} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x)
&\le \frac{c_2}{\beta \pi n^{\beta}} \left[ \frac{1}{t^{\beta}} \operatorname{Var}_0^t(\phi_x) \Big|_{\pi/n}^{\pi} + \beta \int_{\pi/n}^{\pi} \frac{\operatorname{Var}_0^t(\phi_x)}{t^{\beta+1}} \, dt \right] \\
&= \frac{c_2}{\beta \pi^{\beta+1} n^{\beta}} \operatorname{Var}_0^{\pi}(\phi_x) - \frac{c_2}{\beta \pi^{\beta+1}} \operatorname{Var}_0^{\pi/n}(\phi_x) + \frac{c_2}{\pi n^{\beta}} \int_{\pi/n}^{\pi} \frac{\operatorname{Var}_0^t(\phi_x)}{t^{\beta+1}} \, dt.
\end{align*}
Making the substitution $u = \frac{\pi}{t}$ in the integral gives:
\begin{align*}
\int_{\pi/n}^{\pi} \frac{\operatorname{Var}_0^t(\phi_x)}{t^{\beta+1}} \, dt
&= \frac{1}{\pi^{\beta}} \int_1^n \operatorname{Var}_0^{\pi/u}(\phi_x) \cdot \frac{u^{\beta+1}}{u^2} \, du \\
&= \frac{1}{\pi^{\beta}} \int_1^n \operatorname{Var}_0^{\pi/u}(\phi_x) \cdot u^{\beta-1} \, du \\
&= \frac{1}{\pi^{\beta}} \sum_{k=1}^{n-1} \int_k^{k+1} \operatorname{Var}_0^{\pi/u}(\phi_x) \cdot u^{\beta-1} \, du \\
&\le \frac{1}{\pi^{\beta}} \sum_{k=1}^{n-1} \operatorname{Var}_0^{\pi/k}(\phi_x) \int_k^{k+1} u^{\beta-1} \, du \\
&= \frac{1}{\beta \pi^{\beta}} \sum_{k=1}^{n-1} \operatorname{Var}_0^{\pi/k}(\phi_x) \left[ (k+1)^{\beta} - k^{\beta} \right] \\
&\le \frac{c_3}{\beta \pi^{\beta}} \sum_{k=1}^{n} k^{\beta-1} \cdot \operatorname{Var}_0^{\pi/k}(\phi_x),
\end{align*}
for some constant $c_3$. Thus,
\[
(2.8)\qquad \sum_{k=n+1}^{\infty} \frac{c_2}{\pi k^{\beta+1}} \int_{\pi/n}^{\pi} \frac{1}{t^{\beta}} \, d\operatorname{Var}_0^t(\phi_x) \le \frac{c_2}{\beta \pi^{\beta+1} n^{\beta}} \operatorname{Var}_0^{\pi}(\phi_x) - \frac{c_2}{\beta \pi^{\beta+1}} \operatorname{Var}_0^{\pi/n}(\phi_x) + \frac{c_2 \cdot c_3}{\beta \pi^{\beta+1} n^{\beta}} \sum_{k=1}^{n} k^{\beta-1} \cdot \operatorname{Var}_0^{\pi/k}(\phi_x).
\]
Plugging (2.7) and (2.8) back into (2.5) gives:
\[
(2.9)\qquad \sum_{k=n+1}^{\infty} \frac{1}{k\pi} \int_{\pi/k}^{\pi} \Bigl| \sum_{j=0}^{k} h_{k,j} \sin(jt) \Bigr| \, d\operatorname{Var}_0^t(\phi_x) \le \frac{c_2}{\beta \pi^{\beta+1} n^{\beta}} \operatorname{Var}_0^{\pi}(\phi_x) + \frac{c_2}{\beta \pi^{\beta+1}} \operatorname{Var}_0^{\pi/n}(\phi_x) + \frac{c_2 \cdot c_3}{\beta \pi^{\beta+1} n^{\beta}} \sum_{k=1}^{n} k^{\beta-1} \cdot \operatorname{Var}_0^{\pi/k}(\phi_x).
\]
And, finally, substituting (2.4) and (2.9) into (2.1) gives:
\[
R_n^{\Phi}(f, x) \le \frac{c_2}{\beta \pi^{\beta+1} n^{\beta}} \operatorname{Var}_0^{\pi}(\phi_x) + \left( c_1 + \frac{c_2}{\beta \pi^{\beta+1}} \right) \operatorname{Var}_0^{\pi/n}(\phi_x) + \frac{c_2 \cdot c_3}{\beta \pi^{\beta+1} n^{\beta}} \sum_{k=1}^{n} k^{\beta-1} \cdot \operatorname{Var}_0^{\pi/k}(\phi_x).
\]
Notice that the first term above resembles (except for a different multiplicative constant) the $k = 1$ term in the sum. Therefore, by adjusting the multiplicative constant on the summation, we can absorb the first term. Thus:
\[
(2.10)\qquad R_n^{\Phi}(f, x) \le \left( c_1 + \frac{c_2}{\beta \pi^{\beta+1}} \right) \operatorname{Var}_0^{\pi/n}(\phi_x) + \frac{\widehat{C}}{n^{\beta}} \sum_{k=1}^{n} k^{\beta-1} \cdot \operatorname{Var}_0^{\pi/k}(\phi_x).
\]
Now, to finish the proof, we need to show that the remaining term outside the sum can also be absorbed in the summation. To do this, look at the weighted mean $\sum_{k=1}^{n} p_k \operatorname{Var}_0^{\pi/k}(\phi_x)$, where $\sum_{k=1}^{n} p_k = 1$ and $0 \le p_k \le 1$ for every $k$. By properties of a weighted mean, the mean is always greater than (or equal to) the smallest of the individual data values, and since the total variation is a nondecreasing function, the smallest variation used is $\operatorname{Var}_0^{\pi/n}(\phi_x)$; i.e.,
\[
\operatorname{Var}_0^{\pi/n}(\phi_x) \le \sum_{k=1}^{n} p_k \operatorname{Var}_0^{\pi/k}(\phi_x).
\]
Now, we will choose to let $p_k = \frac{d \cdot k^{\beta-1}}{n^{\beta}}$. Obviously, we can find such a $d$ for a fixed $n$, but we want to ascertain that $d$ remains bounded as $n \to \infty$. To see this, notice that:
\begin{align*}
1 &= \sum_{k=1}^{n} p_k \\
&= \frac{d}{n^{\beta}} \left( \sum_{k=1}^{n} k^{\beta-1} \right) \\
&\ge \frac{d}{n^{\beta}} \int_1^n x^{\beta-1} \, dx \\
&= \frac{d}{\beta \cdot n^{\beta}} \bigl( n^{\beta} - 1 \bigr).
\end{align*}
Therefore,
\begin{align*}
d &\le \frac{\beta \cdot n^{\beta}}{n^{\beta} - 1} \\
&= \beta \cdot \frac{n^{\beta} - 1 + 1}{n^{\beta} - 1} \\
&= \beta \cdot \left( 1 + \frac{1}{n^{\beta} - 1} \right).
\end{align*}
Provided $n \ge 2$, we can bound $d$ by:
\[
d \le \beta \cdot \left( 1 + \frac{1}{2^{\beta} - 1} \right).
\]
Thus,
\[
(2.11)\qquad \operatorname{Var}_0^{\pi/n}(\phi_x) \le \beta \cdot \left( 1 + \frac{1}{2^{\beta} - 1} \right) \cdot \frac{1}{n^{\beta}} \sum_{k=1}^{n} k^{\beta-1} \cdot \operatorname{Var}_0^{\pi/k}(\phi_x).
\]
Substituting (2.11) back into (2.10), we get, for some new positive constant $C$,
\[
R_n^{\Phi}(f, x) \le \frac{C}{n^{\beta}} \sum_{k=1}^{n} k^{\beta-1} \cdot \operatorname{Var}_0^{\pi/k}(\phi_x).
\]

2.2 How strict a requirement?

Now that the theorem has been proven, let us examine how strict a requirement it is in Theorem 2.1.1 that $\sum_{j=0}^{k} h_{k,j} \sin(jt) = O\bigl((kt)^{-\beta}\bigr)$ for every $t \in [\frac{\pi}{k}, \pi]$ and some constant $\beta \in (0, 1]$. In the introduction, we have already discussed the work of Humphreys and Bojanic [7], and Kunyang and Dan [10]. In [7], Humphreys and Bojanic show in lemma 2.2 that this requirement holds in the $(C, \alpha)$ case with $\alpha \ge 1$ for $\beta = 1$. In [10], Kunyang and Dan show in lemma 2 that this requirement holds in the $(C, \alpha)$ case with $0 < \alpha < 1$ for $\beta = \alpha$. Thus, our theorem can be applied for any $(C, \alpha)$ transform, $\alpha > 0$, and it extends both of these results. The natural question which then arises is: besides the $(C, \alpha)$ transforms, how can we tell which of the Hausdorff transforms satisfy this requirement? We will answer this question with the first of two theorems presented in chapter 3.

2.3 Example illustrating the sharpness of the bound

To show the sharpness of the bound, we want to show that there exists a Hausdorff-assisted transform of some function which converges only as quickly as the bound indicates. The following example was given by Kunyang and Dan in [10]. Define the function, $f$, by:
\[
(2.12)\qquad f(x) = \sum_{k=1}^{\infty} \frac{\sin kx}{k} = \begin{cases} \dfrac{\pi - x}{2}, & \text{when } 0 < x < 2\pi, \\[1ex] 0, & \text{when } x = 0, \end{cases}
\]
on $[0, 2\pi]$ and extend to the entire real line by making $f$ $2\pi$-periodic. Kunyang and Dan show that the convergence rate of this particular function under the $(C, \alpha)$ transform at $x = \frac{\pi}{2}$ when $0 < \alpha < 1$ is:
\[
R_n^{\alpha}(f, x) > \frac{1}{2000 \alpha n^{\alpha}} \sum_{k=1}^{n} k^{\alpha-1} \operatorname{Var}_0^{\pi/k}(\phi_x).
\]
Hence, the bound given cannot be improved without further assumptions.

CHAPTER 3

Results Concerning a Hausdorff Method's Row Total Variation

3.1 Introduction

In this chapter, we will introduce and prove two theorems concerning a Hausdorff method's row total variation, but first some remarks are in order. In (1.1), we defined the Hausdorff transform using summation notation. The transform also can be thought of in terms of matrix multiplication, however.
Think of $H_{\Phi} = (h_{k,j})$ as an infinite dimensional matrix, $S$ as a column vector of the Fourier series' partial sums, and $(H_{\Phi}S)$ as a column vector of the transforms (making $(H_{\Phi}S)_k$ the $k$th element). Thus we have:
\[
H_{\Phi} = \begin{pmatrix} h_{0,0} & 0 & \cdots & \cdots & \cdots \\ h_{1,0} & h_{1,1} & 0 & \cdots & \cdots \\ h_{2,0} & h_{2,1} & h_{2,2} & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad S = \begin{pmatrix} S_0 \\ S_1 \\ S_2 \\ \vdots \end{pmatrix}, \qquad \text{and} \qquad (H_{\Phi}S) = \begin{pmatrix} (H_{\Phi}S)_0 \\ (H_{\Phi}S)_1 \\ (H_{\Phi}S)_2 \\ \vdots \end{pmatrix}.
\]
For any row $k$, the total row variation of $H_{\Phi}$ is $\sum_{j=0}^{\infty} |h_{k,j} - h_{k,j-1}|$, where we will define $h_{k,-1} := 0$. Because the Hausdorff transforms have the property that $h_{k,j} = 0$ for $j > k$, the total row variation for a row $k$ can be written as $\sum_{j=0}^{k+1} |h_{k,j} - h_{k,j-1}|$.

3.2 Two theorems

Now we are ready to answer the question posed at the end of chapter 2.

Theorem 3.2.1 For any Hausdorff method, if the row total variation is $O(\frac{1}{k})$, then
\[
\sum_{j=0}^{k} h_{k,j} \sin(jt) = O\!\left(\frac{1}{kt}\right),
\]
for all $t \in [\frac{\pi}{k}, \pi]$. In particular, when $\Phi$ is differentiable with respect to the Lebesgue measure with $\Phi' \in BV[0, 1]$, then this result holds.

Before proving this theorem, we will mention that this result has further implications in summability theory and approximation theory. If $H_{\Phi}$ is a regular Hausdorff method, and if $\{\lambda(k)\}$ is an infinite increasing sequence of positive integers, then $H_{\Phi}^{\lambda} := (h_{\lambda(k),j})$ is called a $\lambda$-submethod of $H_{\Phi}$. Note that $H_{\Phi}^{\lambda}$ is another infinite dimensional matrix which is derived by eliminating some of the rows of $H_{\Phi}$. If $H_{\Phi}$ sums a sequence then, trivially, $H_{\Phi}^{\lambda}$ sums the same sequence to the same limit. However, the converse need not hold in general. Those sequences $\{\lambda(k)\}$ for which the converse holds (over a specified space of sequences) are called condensation sequences. By using our results on row total variation, we provide the following simple condensation test for bounded sequences in any normed linear space. This extends the results of Osikiewicz [12].

Theorem 3.2.2 Let $H_{\Phi} = (h_{k,j})$ be a regular Hausdorff method. Let $(h_{k,j}^{\Psi})$ be the Hausdorff method for the weight function $d\Psi(r) = r \cdot d\Phi(r)$, with row total variation:
\[
(3.1)\qquad \sum_{j=0}^{\infty} \bigl|h_{k,j}^{\Psi} - h_{k,j-1}^{\Psi}\bigr| = O\!\left(\frac{1}{k^{\beta}}\right),
\]
for some $\beta \in (0, 1]$. Let $\{\lambda(k)\}$ be an infinite sequence of positive integers. If
\[
(3.2)\qquad \frac{\lambda(k + 1)}{\lambda(k)} = 1 + o\!\left(\frac{1}{\lambda(k)^{1-\beta}}\right),
\]
then $H_{\Phi}$ and $H_{\Phi}^{\lambda}$ are equivalent over the space of bounded sequences in any normed linear space.

It is interesting to note that when $\Phi' \in BV[0, 1]$, then we have $\Psi' = r \cdot \Phi' \in BV[0, 1]$ and therefore, by Theorem 3.2.1, condition (3.1) will hold with $\beta = 1$. For instance, all $(C, \alpha)$ methods with $\alpha \ge 1$ are of this type. However, for $(H, \alpha)$ methods with $\alpha > 1$, $\Phi' \notin BV[0, 1]$, but $r \cdot \Phi' \in BV[0, 1]$, and once again condition (3.1) will hold with $\beta = 1$.

3.3 Proof of theorem 3.2.1

Proof

Recall the trigonometric identity $\cos(a - b) - \cos(a + b) = 2 \sin a \sin b$. Then:
\begin{align*}
2 \sin\frac{t}{2} \sum_{j=0}^{\infty} h_{k,j} \sin(jt) &= \sum_{j=1}^{\infty} h_{k,j} \cdot 2 \sin(jt) \sin\frac{t}{2} \\
&= \sum_{j=1}^{\infty} h_{k,j} \cdot \left[ \cos\Bigl(j - \tfrac{1}{2}\Bigr)t - \cos\Bigl(j + \tfrac{1}{2}\Bigr)t \right] \\
&= \sum_{i=0}^{\infty} h_{k,i+1} \cos\Bigl(i + \tfrac{1}{2}\Bigr)t - \sum_{j=1}^{\infty} h_{k,j} \cos\Bigl(j + \tfrac{1}{2}\Bigr)t \\
&= \sum_{j=0}^{\infty} h_{k,j+1} \cos\Bigl(j + \tfrac{1}{2}\Bigr)t - \sum_{j=0}^{\infty} h_{k,j} \cos\Bigl(j + \tfrac{1}{2}\Bigr)t + h_{k,0} \cos\frac{t}{2} \\
&= h_{k,0} \cos\frac{t}{2} + \sum_{j=0}^{\infty} (h_{k,j+1} - h_{k,j}) \cos\Bigl(j + \tfrac{1}{2}\Bigr)t \\
&= \sum_{j=-1}^{\infty} (h_{k,j+1} - h_{k,j}) \cos\Bigl(j + \tfrac{1}{2}\Bigr)t,
\end{align*}
recalling that $h_{k,-1} = 0$. So:
\begin{align*}
\left| 2 \sin\frac{t}{2} \sum_{j=0}^{\infty} h_{k,j} \sin(jt) \right| &\le \sum_{j=-1}^{\infty} |h_{k,j+1} - h_{k,j}| \\
&= \sum_{j=0}^{\infty} |h_{k,j} - h_{k,j-1}|.
\end{align*}
For any $t \in (0, \pi]$, $\sin\frac{t}{2} > 0$, so: