Vectors in Function Spaces

Jim Lambers
MAT 606
Spring Semester 2015-16
Lecture 18 Notes

These notes correspond to Section 6.3 in the text.

Vectors in Function Spaces

We begin with some necessary terminology. A vector space $V$, also known as a linear vector space, is a set of objects, called vectors, together with two operations:

• Addition of two vectors in $V$, which must be commutative and associative, and must have an identity element, the zero vector $0$. Each vector $v$ must have an additive inverse $-v$ which, when added to $v$, yields the zero vector.

• Multiplication of a vector in $V$ by a scalar, which is typically a real or complex number. The term "scalar" is used in this context, rather than "number", because the multiplication process "scales" a given vector by a factor indicated by a given number. Scalar multiplication must satisfy distributive laws, and must have an identity element $1$ such that $1v = v$ for any vector $v \in V$.

Both operations must be closed, which means that the result of either operation must be a vector in $V$. That is, if $u$ and $v$ are two vectors in $V$, then $u + v$ must also be in $V$, and $\alpha v$ must be in $V$ for any scalar $\alpha$.

Example 1 The set of all points in $n$-dimensional space, $\mathbb{R}^n$, is a vector space. Addition is defined by
$$u + v = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} + \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \\ \vdots \\ u_n + v_n \end{pmatrix},$$
and scalar multiplication is defined by
$$\alpha v = \begin{pmatrix} \alpha v_1 \\ \alpha v_2 \\ \vdots \\ \alpha v_n \end{pmatrix}.$$
Similarly, the set of all $n$-dimensional points whose coordinates are complex numbers, denoted by $\mathbb{C}^n$, is also a vector space. □

In these next few examples, we introduce some vector spaces whose vectors are functions, which are also known as function spaces.

Example 2 The set of all polynomials of degree at most $n$, denoted by $P_n$, is a vector space, in which addition and scalar multiplication are defined as follows. Given $f(x), g(x) \in P_n$,
$$(f + g)(x) = f(x) + g(x), \qquad (\alpha f)(x) = \alpha f(x).$$
These operations are closed, because adding two polynomials of degree at most $n$ will not yield a sum whose degree is greater than $n$, and multiplying any polynomial by a nonzero scalar will not change its degree. □

Example 3 The set of all functions with power series of the form
$$f(x) = \sum_{n=0}^{\infty} a_n x^n$$
that are convergent on the interval $(-1, 1)$ is a vector space, in which addition and scalar multiplication are defined as in the previous example. These operations are closed because the sum of two convergent series is also convergent, as is a scalar multiple of a convergent series. □

Example 4 The set of all continuous functions on the interval $[a, b]$, denoted by $C[a, b]$, is a vector space in which addition and scalar multiplication are defined as in the previous two examples. These operations are closed because the sum of two continuous functions, and a scalar multiple of a continuous function, is also continuous. □

A vector space $V$ is most effectively described in terms of a set of specific vectors $\{v_1, v_2, \ldots\}$ that, in conjunction with the operations of addition and scalar multiplication, can be used to obtain every vector in the space. That is, for every vector $v \in V$, there must exist scalars $c_1, c_2, \ldots$ such that
$$v = c_1 v_1 + c_2 v_2 + \cdots.$$
We say that $v$ is a linear combination of $v_1, v_2, \ldots$, and the scalars $c_1, c_2, \ldots$ are the coefficients of the linear combination. Ideally, it should be possible to express any vector $v \in V$ as a unique linear combination of the vectors $v_1, v_2, \ldots$ that are to be used to describe all vectors in $V$.
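To make the closure requirement of Example 2 concrete, here is a minimal Python sketch (not part of the original notes); it assumes, purely for illustration, that an element of $P_3$ is encoded by its coefficient vector $(c_0, c_1, c_2, c_3)$, and shows that addition and scalar multiplication of such vectors again yield coefficient vectors of degree at most 3.

```python
import numpy as np

# Sketch (assumed encoding, not from the notes): represent f in P_3 by
# its coefficient vector (c_0, c_1, c_2, c_3), so that
# f(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3.

def add(f, g):
    """Pointwise sum (f + g)(x) = f(x) + g(x), as coefficient vectors."""
    return f + g

def scale(alpha, f):
    """Scalar multiple (alpha f)(x) = alpha * f(x)."""
    return alpha * f

# Two elements of P_3: f(x) = 1 + 2x - x^3 and g(x) = 3x^2.
f = np.array([1.0, 2.0, 0.0, -1.0])
g = np.array([0.0, 0.0, 3.0, 0.0])

h = add(f, g)       # still a length-4 coefficient vector, so h is in P_3
k = scale(-2.5, f)  # likewise

print(h)  # [ 1.   2.   3.  -1. ]
print(k)  # [-2.5 -5.  -0.   2.5]
```

Under this encoding the operations on $P_3$ are exactly the componentwise operations on $\mathbb{R}^4$ from Example 1, which is why the closure argument for the two spaces is essentially the same.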
With these criteria in mind, we introduce the following two essential concepts from linear algebra:

• A set of vectors $\{v_1, v_2, \ldots, v_n\}$ is linearly independent if the vector equation
$$c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0$$
is satisfied if and only if $c_1 = c_2 = \cdots = c_n = 0$. In other words, this set of vectors is linearly independent if it is not possible to express any vector in the set as a linear combination of the other vectors in the set. This definition can be generalized in a natural way to an infinite set of vectors. If a set of vectors is not linearly independent, then we say that it is linearly dependent.

• A set of vectors $\{v_1, v_2, \ldots, v_n\}$ spans a vector space $V$ if, for any vector $v \in V$, there exist scalars $c_1, c_2, \ldots, c_n$ such that
$$v = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n.$$
That is, any vector in $V$ can be expressed as a linear combination of vectors in the set. We define $\operatorname{span}\{v_1, v_2, \ldots, v_n\}$ to be the set of all linear combinations of $v_1, v_2, \ldots, v_n$. As with linear independence, the notion of span generalizes naturally to an infinite set of vectors.

We then say that a set of vectors $\{v_1, v_2, \ldots\}$ (which may be finite or infinite) is a basis for a vector space $V$ if it is linearly independent, and if it spans $V$. This definition ensures that any vector in $V$ is a unique linear combination of the vectors in the basis. If a basis for $V$ is finite, then we say that $V$ is finite-dimensional, and we define the dimension of $V$ to be the number of elements in a basis; all bases of a finite-dimensional vector space must have the same number of elements. If $V$ does not have a finite basis, then we say that $V$ is infinite-dimensional.

Example 5 The function space $P_3$, consisting of polynomials of degree at most 3, has a basis $\{1, x, x^2, x^3\}$. It is clear that any polynomial in $P_3$ can be expressed as a linear combination of these basis functions, as the coefficients of any such polynomial are also the coefficients of the linear combination. To confirm linear independence, suppose that there exist constants $c_0, c_1, c_2$ and $c_3$ such that
$$c_0(1) + c_1 x + c_2 x^2 + c_3 x^3 = 0$$
for all $x \in \mathbb{R}$. Then certainly this must be the case at $x = 0$, which requires that $c_0 = 0$. Substituting 3 other values of $x$ into the above equation yields a system of 3 linear equations in the remaining 3 unknowns $c_1$, $c_2$ and $c_3$. It can be shown that the only solution of such a system of equations is the trivial solution $c_1 = c_2 = c_3 = 0$. Therefore the set $\{1, x, x^2, x^3\}$ is linearly independent. An alternative basis consists of the first 4 Chebyshev polynomials, $\{1, x, 2x^2 - 1, 4x^3 - 3x\}$. It can be confirmed using a similar approach that these polynomials are also linearly independent. □

Example 6 The function space consisting of all power series that are convergent on the interval $(-1, 1)$ has as a basis the infinite set $\{1, x, x^2, x^3, \ldots\}$. Using an inductive argument, it can be shown that this set is linearly independent. □

Inner Product

Recall that the dot product of two vectors $u$ and $v$ in $\mathbb{R}^n$ is
$$u \cdot v = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n = \|u\| \|v\| \cos \theta,$$
where
$$\|u\| = \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2}$$
is the magnitude, or length, of $u$, and $\theta$ is the angle between $u$ and $v$, with $0 \le \theta \le \pi$ radians. The dot product has the following properties:

1. $u \cdot u = \|u\|^2$
2. $u \cdot (v + w) = u \cdot v + u \cdot w$
3. $u \cdot v = v \cdot u$
4. $u \cdot (cv) = c(u \cdot v)$

When $u$ and $v$ are perpendicular, $\cos \theta = 0$. It follows that $u \cdot v = 0$, and we say that $u$ and $v$ are orthogonal.
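A quick numerical check of these properties, as a sketch rather than part of the original notes: the vectors $u$ and $v$ below are arbitrary assumed examples chosen so that their dot product vanishes, which lets us verify both property 1 and the orthogonality criterion.

```python
import numpy as np

# Sketch (example vectors are an assumption, not from the notes):
# verify u . v = ||u|| ||v|| cos(theta) and the orthogonality test.

u = np.array([1.0, 2.0, 2.0])
v = np.array([-2.0, 1.0, 0.0])

dot = np.dot(u, v)
norm_u = np.linalg.norm(u)  # sqrt(u_1^2 + ... + u_n^2)
norm_v = np.linalg.norm(v)

# Recover the angle between u and v from the dot product formula.
theta = np.arccos(dot / (norm_u * norm_v))

print(dot)                # 0.0, so u and v are orthogonal
print(np.degrees(theta))  # 90.0, i.e. theta = pi/2
print(np.isclose(np.dot(u, u), norm_u**2))  # property 1: u . u = ||u||^2
```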
We would like to generalize the concept of a dot product to vectors in function spaces, and we also need to ensure that complex numbers are properly taken into account. To that end, we define the inner product of two functions $f(x)$ and $g(x)$ to be
$$\langle f, g \rangle = \int_a^b \overline{f(x)}\, g(x)\, w(x)\, dx,$$
where $w(x)$ is a weight function and, for any complex number $z = x + iy$, $\bar{z} = x - iy$ is the complex conjugate of $z$. The interval of integration $[a, b]$ depends on the function space under consideration. Using this definition, it can be verified that the inner product has the following properties:

1. $\langle f, g + h \rangle = \langle f, g \rangle + \langle f, h \rangle$
2. $\langle f, g \rangle = \overline{\langle g, f \rangle}$
3. $\langle f, cg \rangle = c \langle f, g \rangle$ for any complex number $c$

Note that the second property is slightly different from the corresponding property for vectors in $\mathbb{R}^n$, as it requires the complex conjugate. Combining the second and third properties yields the result $\langle cf, g \rangle = \bar{c} \langle f, g \rangle$.

Inner Product Spaces and Hilbert Spaces

Just as we use $\|v\|$ to measure the magnitude of a vector $v \in \mathbb{R}^n$, we need a notion of magnitude for a function $f(x)$ in a function space. To that end, we say that a function $\|\cdot\| : V \to \mathbb{R}$ is a norm on a vector space $V$ if it satisfies the following conditions:

1. $\|v\| \ge 0$ for any vector $v \in V$, and $\|v\| = 0$ if and only if $v = 0$.
2. $\|\alpha v\| = |\alpha| \|v\|$ for any complex scalar $\alpha$.
3. $\|u + v\| \le \|u\| + \|v\|$ for any two vectors $u, v \in V$. This is known as the triangle inequality.
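As a numerical illustration of this inner product, the sketch below (not part of the original notes) assumes the Chebyshev weight $w(x) = 1/\sqrt{1 - x^2}$ on $[-1, 1]$, a standard choice that pairs naturally with the Chebyshev polynomials of Example 5; the notes themselves leave $w$ and $[a, b]$ unspecified. It verifies that $T_1$ and $T_2$ are orthogonal under this weight, and evaluates the quantity $\sqrt{\langle f, f \rangle}$, which the norm conditions above suggest as a measure of magnitude.

```python
import numpy as np
from scipy.integrate import quad

# Sketch (assumptions: Chebyshev weight and interval [-1, 1], chosen
# for illustration; conjugate placed on the first argument, matching
# the definition <f, g> = integral of conj(f) * g * w).

def inner(f, g, w, a, b):
    """Numerically compute <f, g> for real-valued f and g."""
    val, _ = quad(lambda x: np.conj(f(x)) * g(x) * w(x), a, b)
    return val

w = lambda x: 1.0 / np.sqrt(1.0 - x**2)  # integrable endpoint singularity
T1 = lambda x: x                          # Chebyshev polynomial T_1
T2 = lambda x: 2*x**2 - 1                 # Chebyshev polynomial T_2

# T_1 and T_2 are orthogonal with respect to this weight:
print(inner(T1, T2, w, -1, 1))            # ~0, up to quadrature error

# Magnitude of T_1 measured by sqrt(<f, f>):
print(np.sqrt(inner(T1, T1, w, -1, 1)))   # sqrt(pi/2), about 1.2533
```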