11.1 Orthogonal Functions


398 ● CHAPTER 11 ORTHOGONAL FUNCTIONS AND FOURIER SERIES

11.1 ORTHOGONAL FUNCTIONS

REVIEW MATERIAL ● The notions of generalized vectors and vector spaces can be found in any linear algebra text.

INTRODUCTION The concepts of geometric vectors in two and three dimensions, orthogonal or perpendicular vectors, and the inner product of two vectors have been generalized. It is perfectly routine in mathematics to think of a function as a vector. In this section we will examine an inner product that is different from the one you studied in calculus. Using this new inner product, we define orthogonal functions and sets of orthogonal functions. Another topic in a standard calculus course is the expansion of a function f in a power series. In this section we will also see how to expand a suitable function f in terms of an infinite set of orthogonal functions.

INNER PRODUCT Recall that if u and v are two vectors in 3-space, then the inner product (u, v) (in calculus this is written as u · v) possesses the following properties:

(i) (u, v) = (v, u),
(ii) (ku, v) = k(u, v), k a scalar,
(iii) (u, u) = 0 if u = 0 and (u, u) > 0 if u ≠ 0,
(iv) (u + v, w) = (u, w) + (v, w).

We expect that any generalization of the inner product concept should have these same properties.

Suppose that f1 and f2 are functions defined on an interval [a, b].* Since a definite integral on [a, b] of the product f1(x)f2(x) possesses the foregoing properties (i)–(iv) whenever the integral exists, we are prompted to make the following definition.

DEFINITION 11.1.1 Inner Product of Functions

The inner product of two functions f1 and f2 on an interval [a, b] is the number

$(f_1, f_2) = \int_a^b f_1(x)\,f_2(x)\,dx.$

ORTHOGONAL FUNCTIONS Motivated by the fact that two geometric vectors u and v are orthogonal whenever their inner product is zero, we define orthogonal functions in a similar manner.
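Before stating that definition formally, note that the inner product of Definition 11.1.1 is easy to check numerically. The sketch below is an added illustration, not part of the text: it approximates $(f_1, f_2)$ with a composite Simpson rule in plain Python, using the arbitrarily chosen pair f1(x) = x, f2(x) = x².

```python
def inner_product(f1, f2, a, b, n=1000):
    """Approximate (f1, f2) = integral of f1(x) f2(x) over [a, b]
    (Definition 11.1.1) by the composite Simpson rule; n must be even."""
    h = (b - a) / n
    total = f1(a) * f2(a) + f1(b) * f2(b)
    for k in range(1, n):
        x = a + k * h
        total += (4 if k % 2 else 2) * f1(x) * f2(x)
    return total * h / 3.0

# (x, x^2) on [0, 1] is the integral of x^3, i.e. 1/4.
# Simpson's rule is exact for cubics, so this prints 0.25 up to rounding.
print(inner_product(lambda x: x, lambda x: x * x, 0.0, 1.0))
```

Any standard quadrature routine would serve equally well here; Simpson's rule is used only to keep the sketch dependency-free.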
DEFINITION 11.1.2 Orthogonal Functions

Two functions f1 and f2 are orthogonal on an interval [a, b] if

$(f_1, f_2) = \int_a^b f_1(x)\,f_2(x)\,dx = 0. \quad (1)$

*The interval could also be (−∞, ∞), [0, ∞), and so on.

For example, the functions f1(x) = x² and f2(x) = x³ are orthogonal on the interval [−1, 1], since

$(f_1, f_2) = \int_{-1}^{1} x^2 \cdot x^3\,dx = \frac{1}{6}x^6 \Big|_{-1}^{1} = 0.$

Unlike in vector analysis, where the word orthogonal is a synonym for perpendicular, in the present context the term orthogonal and condition (1) have no geometric significance.

ORTHOGONAL SETS We are primarily interested in infinite sets of orthogonal functions.

DEFINITION 11.1.3 Orthogonal Set

A set of real-valued functions {φ0(x), φ1(x), φ2(x), ...} is said to be orthogonal on an interval [a, b] if

$(\phi_m, \phi_n) = \int_a^b \phi_m(x)\,\phi_n(x)\,dx = 0, \quad m \ne n. \quad (2)$

ORTHONORMAL SETS The norm, or length ‖u‖, of a vector u can be expressed in terms of the inner product: the expression (u, u) = ‖u‖² is called the square norm, and so the norm is ‖u‖ = √(u, u). Similarly, the square norm of a function φn is ‖φn(x)‖² = (φn, φn), and so the norm, or generalized length, is ‖φn(x)‖ = √(φn, φn). In other words, the square norm and norm of a function φn in an orthogonal set {φn(x)} are, respectively,

$\|\phi_n(x)\|^2 = \int_a^b \phi_n^2(x)\,dx \quad \text{and} \quad \|\phi_n(x)\| = \sqrt{\int_a^b \phi_n^2(x)\,dx}. \quad (3)$

If {φn(x)} is an orthogonal set of functions on the interval [a, b] with the property that ‖φn(x)‖ = 1 for n = 0, 1, 2, ..., then {φn(x)} is said to be an orthonormal set on the interval.

EXAMPLE 1 Orthogonal Set of Functions

Show that the set {1, cos x, cos 2x, ...} is orthogonal on the interval [−π, π].

SOLUTION If we make the identification φ0(x) = 1 and φn(x) = cos nx, we must then show that $\int_{-\pi}^{\pi} \phi_0(x)\,\phi_n(x)\,dx = 0$ for n ≠ 0 and $\int_{-\pi}^{\pi} \phi_m(x)\,\phi_n(x)\,dx = 0$ for m ≠ n.
We have, in the first case,

$(\phi_0, \phi_n) = \int_{-\pi}^{\pi} \phi_0(x)\,\phi_n(x)\,dx = \int_{-\pi}^{\pi} \cos nx\,dx = \frac{1}{n}\sin nx \Big|_{-\pi}^{\pi} = \frac{1}{n}\left[\sin n\pi - \sin(-n\pi)\right] = 0, \quad n \ne 0,$

and, in the second,

$(\phi_m, \phi_n) = \int_{-\pi}^{\pi} \phi_m(x)\,\phi_n(x)\,dx = \int_{-\pi}^{\pi} \cos mx \cos nx\,dx = \frac{1}{2}\int_{-\pi}^{\pi} \left[\cos(m + n)x + \cos(m - n)x\right] dx$   (product-to-sum identity)

$= \frac{1}{2}\left[\frac{\sin(m + n)x}{m + n} + \frac{\sin(m - n)x}{m - n}\right]_{-\pi}^{\pi} = 0, \quad m \ne n.$

EXAMPLE 2 Norms

Find the norm of each function in the orthogonal set given in Example 1.

SOLUTION For φ0(x) = 1 we have, from (3),

$\|\phi_0(x)\|^2 = \int_{-\pi}^{\pi} dx = 2\pi,$

so ‖φ0(x)‖ = √(2π). For φn(x) = cos nx, n > 0, it follows that

$\|\phi_n(x)\|^2 = \int_{-\pi}^{\pi} \cos^2 nx\,dx = \frac{1}{2}\int_{-\pi}^{\pi} \left[1 + \cos 2nx\right] dx = \pi.$

Thus for n > 0, ‖φn(x)‖ = √π.

Any orthogonal set of nonzero functions {φn(x)}, n = 0, 1, 2, ..., can be normalized (that is, made into an orthonormal set) by dividing each function by its norm. It follows from Examples 1 and 2 that the set

$\left\{ \frac{1}{\sqrt{2\pi}}, \frac{\cos x}{\sqrt{\pi}}, \frac{\cos 2x}{\sqrt{\pi}}, \ldots \right\}$

is orthonormal on the interval [−π, π].

We shall make one more analogy between vectors and functions. Suppose v1, v2, and v3 are three mutually orthogonal nonzero vectors in 3-space. Such an orthogonal set can be used as a basis for 3-space; that is, any three-dimensional vector can be written as a linear combination

$\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3, \quad (4)$

where the ci, i = 1, 2, 3, are scalars called the components of the vector. Each component ci can be expressed in terms of u and the corresponding vector vi. To see this, we take the inner product of (4) with v1:

$(\mathbf{u}, \mathbf{v}_1) = c_1(\mathbf{v}_1, \mathbf{v}_1) + c_2(\mathbf{v}_2, \mathbf{v}_1) + c_3(\mathbf{v}_3, \mathbf{v}_1) = c_1\|\mathbf{v}_1\|^2 + c_2 \cdot 0 + c_3 \cdot 0.$

Hence

$c_1 = \frac{(\mathbf{u}, \mathbf{v}_1)}{\|\mathbf{v}_1\|^2}.$

In like manner we find that the components c2 and c3 are given by

$c_2 = \frac{(\mathbf{u}, \mathbf{v}_2)}{\|\mathbf{v}_2\|^2} \quad \text{and} \quad c_3 = \frac{(\mathbf{u}, \mathbf{v}_3)}{\|\mathbf{v}_3\|^2}.$

Hence (4) can be expressed as

$\mathbf{u} = \frac{(\mathbf{u}, \mathbf{v}_1)}{\|\mathbf{v}_1\|^2}\,\mathbf{v}_1 + \frac{(\mathbf{u}, \mathbf{v}_2)}{\|\mathbf{v}_2\|^2}\,\mathbf{v}_2 + \frac{(\mathbf{u}, \mathbf{v}_3)}{\|\mathbf{v}_3\|^2}\,\mathbf{v}_3 = \sum_{n=1}^{3} \frac{(\mathbf{u}, \mathbf{v}_n)}{\|\mathbf{v}_n\|^2}\,\mathbf{v}_n. \quad (5)$

ORTHOGONAL SERIES EXPANSION Suppose {φn(x)} is an infinite orthogonal set of functions on an interval [a, b]. We ask: If y = f(x) is a function defined on the interval [a, b], is it possible to determine a set of coefficients cn, n = 0, 1, 2, ..., for which

$f(x) = c_0\phi_0(x) + c_1\phi_1(x) + \cdots + c_n\phi_n(x) + \cdots\,? \quad (6)$

As in the foregoing discussion on finding components of a vector, we can find the coefficients cn by utilizing the inner product. Multiplying (6) by φm(x) and integrating over the interval [a, b] gives

$\int_a^b f(x)\,\phi_m(x)\,dx = c_0\int_a^b \phi_0(x)\,\phi_m(x)\,dx + c_1\int_a^b \phi_1(x)\,\phi_m(x)\,dx + \cdots + c_n\int_a^b \phi_n(x)\,\phi_m(x)\,dx + \cdots$

$= c_0(\phi_0, \phi_m) + c_1(\phi_1, \phi_m) + \cdots + c_n(\phi_n, \phi_m) + \cdots.$

By orthogonality each term on the right-hand side of the last equation is zero except when m = n. In this case we have

$\int_a^b f(x)\,\phi_n(x)\,dx = c_n\int_a^b \phi_n^2(x)\,dx.$

It follows that the required coefficients are

$c_n = \frac{\int_a^b f(x)\,\phi_n(x)\,dx}{\int_a^b \phi_n^2(x)\,dx}, \quad n = 0, 1, 2, \ldots.$

In other words,

$f(x) = \sum_{n=0}^{\infty} c_n\phi_n(x), \quad (7)$

where

$c_n = \frac{\int_a^b f(x)\,\phi_n(x)\,dx}{\|\phi_n(x)\|^2}. \quad (8)$

With inner product notation, (7) becomes

$f(x) = \sum_{n=0}^{\infty} \frac{(f, \phi_n)}{\|\phi_n(x)\|^2}\,\phi_n(x). \quad (9)$

Thus (9) is seen to be the function analogue of the vector result given in (5).

DEFINITION 11.1.4 Orthogonal Set / Weight Function

A set of real-valued functions {φ0(x), φ1(x), φ2(x), ...} is said to be orthogonal with respect to a weight function w(x) on an interval [a, b] if

$\int_a^b w(x)\,\phi_m(x)\,\phi_n(x)\,dx = 0, \quad m \ne n.$

The usual assumption is that w(x) > 0 on the interval of orthogonality [a, b]. The set {1, cos x, cos 2x, ...} in Example 1 is orthogonal with respect to the weight function w(x) = 1 on the interval [−π, π].
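As a numerical companion to formulas (7) and (8), the coefficients of an orthogonal series can be approximated by quadrature. The sketch below is an added illustration, not from the text; the choice f(x) = x² and the cosine set of Example 1 are assumptions made here for concreteness. (For this f the classical Fourier cosine coefficients are c0 = π²/3 and cn = 4(−1)ⁿ/n² for n ≥ 1, which the quadrature recovers.)

```python
import math

def simpson(g, a, b, n=2000):
    # Composite Simpson rule on [a, b]; n must be even.
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3.0

def coefficient(f, phi, a, b):
    """c_n = (f, phi_n) / ||phi_n||^2, as in equation (8)."""
    numerator = simpson(lambda x: f(x) * phi(x), a, b)
    square_norm = simpson(lambda x: phi(x) ** 2, a, b)
    return numerator / square_norm

f = lambda x: x * x
c0 = coefficient(f, lambda x: 1.0, -math.pi, math.pi)              # ≈ π²/3
c1 = coefficient(f, lambda x: math.cos(x), -math.pi, math.pi)      # ≈ -4
c2 = coefficient(f, lambda x: math.cos(2 * x), -math.pi, math.pi)  # ≈ 1
print(c0, c1, c2)
```

Note that the set {1, cos x, cos 2x, ...} can represent only even functions on [−π, π], which is why an even f was chosen for this sketch.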
If {φn(x)} is orthogonal with respect to a weight function w(x) on the interval [a, b], then multiplying (6) by w(x)φn(x) and integrating yields

$c_n = \frac{\int_a^b f(x)\,w(x)\,\phi_n(x)\,dx}{\|\phi_n(x)\|^2}, \quad (10)$

where

$\|\phi_n(x)\|^2 = \int_a^b w(x)\,\phi_n^2(x)\,dx.$
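A standard example of orthogonality with respect to a nontrivial weight, added here as an illustration beyond this section's text, is the Chebyshev polynomials T1(x) = x and T2(x) = 2x² − 1, which are orthogonal on (−1, 1) with respect to w(x) = 1/√(1 − x²). The sketch below evaluates the weighted inner product via the substitution x = cos θ, which absorbs the weight and avoids the endpoint singularity:

```python
import math

def simpson(g, a, b, n=2000):
    # Composite Simpson rule on [a, b]; n must be even.
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3.0

def chebyshev_inner(f1, f2):
    """Weighted inner product of f1 and f2 on (-1, 1) with weight
    w(x) = 1/sqrt(1 - x^2). The substitution x = cos(theta) turns
    the weighted integral into a plain integral over [0, pi]."""
    return simpson(lambda t: f1(math.cos(t)) * f2(math.cos(t)), 0.0, math.pi)

T1 = lambda x: x
T2 = lambda x: 2 * x * x - 1

print(chebyshev_inner(T1, T2))  # ≈ 0: T1 and T2 are orthogonal with respect to w
print(chebyshev_inner(T1, T1))  # ≈ π/2: the weighted square norm of T1
```

Sets of this kind, orthogonal with respect to a weight, are the ones that appear in the boundary-value problems of later sections.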