On Almost Sure Rates of Convergence for Sample Average Approximations


Dirk Banholzer¹, Jörg Fliege¹, and Ralf Werner²

¹ Department of Mathematical Sciences, University of Southampton, Southampton, SO17 1BJ, UK, [email protected]
² Institut für Mathematik, Universität Augsburg, 86159 Augsburg, Germany

Abstract. In this paper, we study the rates at which strongly consistent estimators in the sample average approximation (SAA) approach converge to their deterministic counterparts. To quantify the rates at which this convergence occurs in the almost sure sense, we consider the law of the iterated logarithm in a Banach space setting and first establish, under relatively mild assumptions, convergence rates for the approximating objective functions. These rates can then be transferred to the estimators for optimal values and solutions of the approximated problem. Based on these results, we further investigate the asymptotic bias of the SAA approach. In particular, we show that under the same assumptions as before the SAA estimator for the optimal value converges in mean to its deterministic counterpart, at a rate which essentially coincides with the one in the almost sure sense. Further, optimal solutions also converge in mean; for convex problems with the same rates, but with an as yet unknown rate for general problems. Finally, we address the notion of convergence in probability and conclude that in this case (weak) rates for the estimators can also be derived, without imposing the strong conditions on exponential moments that are typically needed to obtain exponential rates. Results on convergence in probability then allow one to build universal confidence sets for the optimal value and optimal solutions under mild assumptions.

Key words: Stochastic programming, sample average approximation, almost sure rates of convergence, law of the iterated logarithm, $L_1$-convergence

1 Introduction

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space on which we consider the stochastic programming problem

$$ \min_{x \in X} \; \Bigl\{ f(x) := \mathbb{E}_{\mathbb{P}}[h(x, \xi)] \Bigr\}, \qquad (1) $$

where $X \subset \mathbb{R}^n$ denotes a nonempty compact set, $\xi$ is a random vector whose distribution is supported on a set $\Xi \subset \mathbb{R}^m$, and $h : X \times \Xi \to \mathbb{R}$ is a function depending on some parameter $x \in X$ and the random vector $\xi \in \Xi$. For $f$ to be well-defined, we assume for every $x \in X$ that $h(x, \cdot)$ is measurable with respect to the Borel $\sigma$-algebras $\mathcal{B}(\Xi)$ and $\mathcal{B}(\mathbb{R})$, and that it is $\mathbb{P}$-integrable.

The stochastic problem (1) may arise in various applications from a broad range of areas, such as finance and engineering, where deterministic approaches turn out to be unsuitable for formulating the actual problem. Quite frequently, the problem is encountered as the first-stage problem of a two-stage stochastic program in which $h(x, \xi)$ describes the optimal value of a subordinate second-stage problem, see, e.g., Shapiro et al. (2014). Naturally, problem (1) may also be viewed as a self-contained problem, in which $h$ directly results from modelling a stochastic quantity.

Unfortunately, in many situations, the distribution of the random function $h(\cdot, \xi)$ is not known exactly, such that the expected value in (1) cannot be evaluated readily and therefore needs to be approximated in some way. Using Monte Carlo simulation, a common approach (see, e.g., Homem-de-Mello and Bayraksan (2014) for a recent survey) consists of drawing a sample of i.i.d.
random vectors $\xi_1, \ldots, \xi_N$, $N \in \mathbb{N}$, from the same distribution as $\xi$, and considering the sample average approximation (SAA) problem

$$ \min_{x \in X} \; \hat{f}_N(x) := \frac{1}{N} \sum_{i=1}^{N} h(x, \xi_i) \qquad (2) $$

as an approximation to the original stochastic programming problem (1). Since the SAA problem (2) depends on the set of random vectors $\xi_1, \ldots, \xi_N$, it essentially constitutes a random problem. Its optimal value $\hat{f}_N^*$ is thus an estimator of the optimal value $f^*$ of the original problem (1), and an optimal solution $\hat{x}_N^*$ is an estimator of an optimal solution $x^*$ of the original problem (1). However, for a particular realisation of the random sample, the approximating problem (2) represents a deterministic problem instance, which can then be solved by adequate optimisation algorithms. For this purpose, one usually assumes that the set $X$ is described by (deterministic) equality and inequality constraints.

An appealing feature of the SAA approach is its sound convergence properties, which have been discussed in a number of publications regarding different aspects. Under some relatively mild regularity assumptions on $h$, the strong consistency of the sequences of optimal SAA estimators was shown in various publications, see, e.g., Domowitz and White (1982) and Bates and White (1985) for an approach via uniform convergence, or Dupačová and Wets (1988) and Robinson (1996) for a general approach based on the concept of epi-convergence.

Given consistency, it is meaningful to analyse the rate of convergence at which the SAA estimators approach their original counterparts as $N$ tends to infinity. In this regard, Shapiro (1989, 1991, 1993) and King and Rockafellar (1993), for instance, provided necessary and sufficient conditions for the asymptotic normality of the estimators, from which it immediately follows that $\{\hat{f}_N^*\}$ and $\{\hat{x}_N^*\}$ converge in distribution to their deterministic counterparts at a rate of $1/\sqrt{N}$. Whereas the findings of the former author are essentially based on the central limit theorem in Banach spaces, to which the delta method with a first and second order expansion of the minimum value function is applied, the latter use a generalised implicit function theorem to achieve these results.

Rates of convergence for stochastic optimisation problems of the type above were also studied in different settings by Kaniovski et al. (1995), Dai et al. (2000), Shapiro and Homem-de-Mello (2000) and Homem-de-Mello (2008). Specifically, they use large deviation theory to show that, under the rather strong but unavoidable assumption of an existing moment generating function with a finite value in a neighbourhood of zero, the probability of deviation of the SAA optimal solution $\hat{x}_N^*$ from the true solution $x^*$ converges exponentially fast to zero as $N \to \infty$. This, in turn, allows one to establish rates of convergence in probability and to derive conservative estimates for the sample size required to solve the original problem to a given accuracy with overwhelming probability, see, e.g., Shapiro (2003) and Shapiro et al. (2014), Section 5.3, for further details.

In view of these findings, it has to be noted that in the SAA context all rates of convergence established so far are for convergence in probability or convergence in distribution.
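As a brief illustration of the SAA construction in (2) before turning to the contributions of this paper, the following short Python sketch solves a one-dimensional toy instance for several sample sizes and compares the SAA estimators with the known true optimum. The quadratic integrand, the normal distribution of $\xi$, and all names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Toy instance (illustrative): h(x, xi) = (x - xi)^2, xi ~ N(0.3, 1), X = [-1, 1].
# Then f(x) = (x - 0.3)^2 + 1, so x* = 0.3 and f* = 1.
mu, sigma, X_bounds = 0.3, 1.0, (-1.0, 1.0)

def saa_estimates(N):
    """Solve the SAA problem (2) for one i.i.d. sample of size N."""
    xi = rng.normal(mu, sigma, size=N)
    f_hat = lambda x: np.mean((x - xi) ** 2)            # \hat f_N(x)
    res = minimize_scalar(f_hat, bounds=X_bounds, method="bounded")
    return res.x, res.fun                               # \hat x_N^*, \hat f_N^*

for N in [10, 100, 1_000, 10_000]:
    x_N, f_N = saa_estimates(N)
    print(f"N={N:6d}  x_N*={x_N:+.4f}  f_N*={f_N:.4f}  (x*=+0.3000, f*=1.0000)")
```

The printed values illustrate how $\hat{x}_N^*$ and $\hat{f}_N^*$ approach the true optimum of this toy instance as $N$ grows; the paper is concerned with quantifying exactly how fast such convergence occurs.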
To the best of our knowledge, rates of convergence that hold almost surely, and thus complement the strong consistency of optimal estimators with a corresponding rate, have not yet been considered in the framework of SAA, with very few exceptions using particular assumptions. Further, not many results are available on convergence in mean, nor have any convergence rates been established to this extent. This is an important issue, as convergence in mean is the basis for deriving reasonable statements on the average bias of estimators. The most notable previous work here is the paper by Homem-de-Mello (2003), where almost sure convergence rates for the objective function have been investigated, albeit in the slightly different setup of a variable SAA (VSAA), and in the special case of a finite feasible set $X$. For this purpose, Homem-de-Mello (2003) applies a (pointwise) law of the iterated logarithm to derive the usual almost sure convergence rate for the objective function; due to the VSAA setup, this is done in the more general context of random triangular arrays. As Homem-de-Mello (2003) is focused on a finite (i.e. discrete) feasible set $X$, these results can be straightforwardly translated into rates for optimal values. However, it is not possible to directly generalise the usage of the pointwise law of the iterated logarithm (LIL) to a general compact feasible set as considered here. For this reason, we have to resort to a uniform, i.e. Banach space, version of the LIL instead.

We aim at closing this gap in the existing literature on the SAA approach with the present paper, providing almost sure rates of convergence for the approximating objective functions $\{\hat{f}_N\}$ as well as for the optimal estimators $\{\hat{f}_N^*\}$ and $\{\hat{x}_N^*\}$. We further provide results on the convergence in mean, with corresponding rates where possible. As is to be expected, our results can be derived by means of the law of the iterated logarithm in Banach spaces, which is similar to the technique that has already been applied in the form of the functional central limit theorem to obtain asymptotic distributions of optimal values and solutions, see, e.g., Shapiro (1991). The main difference between the law of the iterated logarithm and the central limit theorem is that the LIL provides almost sure convergence results (in contrast to convergence results in distribution), all under quite similar assumptions. Further, as one advantage, we also directly obtain results on convergence in mean. While the estimator for the optimal value shares the same asymptotic rate of convergence, no rates could be obtained for the convergence in mean of the optimal solutions in a general setup. For convex problems, however, again the same convergence rate applies. We will show that these convergence rates in mean not only give us a quantification of the asymptotic bias, but also provide the basis for (weak) rates of convergence in probability via Markov's inequality.
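For orientation, the classical pointwise law of the iterated logarithm already indicates the kind of rate to expect at a fixed $x \in X$ with $\sigma^2(x) := \mathrm{Var}(h(x, \xi)) < \infty$ (a standard statement included here for context; the uniform Banach space version employed in the paper is more involved):

$$ \limsup_{N \to \infty} \sqrt{\frac{N}{2 \log\log N}} \; \bigl| \hat{f}_N(x) - f(x) \bigr| \;=\; \sigma(x) \qquad \text{almost surely}, $$

i.e. the almost sure rate of convergence of $\hat{f}_N(x)$ to $f(x)$ at a fixed point is of order $\sqrt{\log\log N / N}$.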
Recommended publications
  • Conditional Generalizations of Strong Laws Which Conclude the Partial Sums Converge Almost Surely1
    The Annals of Probability, 1982, Vol. 10, No. 3, 828–830. CONDITIONAL GENERALIZATIONS OF STRONG LAWS WHICH CONCLUDE THE PARTIAL SUMS CONVERGE ALMOST SURELY. By T. P. Hill, Georgia Institute of Technology. Suppose that for every independent sequence of random variables satisfying some hypothesis condition H, it follows that the partial sums converge almost surely. Then it is shown that for every arbitrarily-dependent sequence of random variables, the partial sums converge almost surely on the event where the conditional distributions (given the past) satisfy precisely the same condition H. Thus many strong laws for independent sequences may be immediately generalized into conditional results for arbitrarily-dependent sequences. 1. Introduction. If every sequence of independent random variables having property A has property B almost surely, does every arbitrarily-dependent sequence of random variables have property B almost surely on the set where the conditional distributions have property A? Not in general, but comparisons of the conditional Borel–Cantelli lemmas, the conditional three-series theorem, and many martingale results with their independent counterparts suggest that the answer is affirmative in fairly general situations. The purpose of this note is to prove Theorem 1, which states, in part, that if "property B" is "the partial sums converge," then the answer is always affirmative, regardless of "property A." Thus many strong laws for independent sequences (even laws yet undiscovered) may be immediately generalized into conditional results for arbitrarily-dependent sequences. 2. Main Theorem. In this note, $Y = (Y_1, Y_2, \ldots)$ is a sequence of random variables on a probability triple $(\Omega, \mathcal{A}, \mathcal{P})$, and $S_n = Y_1 + Y_2 + \ldots$
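A classical example of a hypothesis condition H of the kind referred to above, and the independent counterpart of the conditional three-series theorem mentioned in that introduction, is Kolmogorov's three-series theorem (a standard fact, stated here for context rather than quoted from the note): for independent random variables $Y_n$, the partial sums $S_n$ converge almost surely if and only if, for some (equivalently, every) $c > 0$,

$$ \sum_{n} \mathbb{P}(|Y_n| > c) < \infty, \qquad \sum_{n} \mathbb{E}\bigl[ Y_n \mathbf{1}_{\{|Y_n| \le c\}} \bigr] \ \text{converges}, \qquad \sum_{n} \mathrm{Var}\bigl( Y_n \mathbf{1}_{\{|Y_n| \le c\}} \bigr) < \infty. $$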
  • Numerical Solution of Ordinary Differential Equations
    NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS. Kendall Atkinson, Weimin Han, David Stewart, University of Iowa, Iowa City, Iowa. A JOHN WILEY & SONS, INC., PUBLICATION. Copyright © 2009 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008. Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
  • Polarization Fields and Phase Space Densities in Storage Rings: Stroboscopic Averaging and the Ergodic Theorem
    Physica D 234 (2007) 131–149. www.elsevier.com/locate/physd. Polarization fields and phase space densities in storage rings: Stroboscopic averaging and the ergodic theorem. James A. Ellison, Klaus Heinemann. Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131, United States. Received 1 May 2007; received in revised form 6 July 2007; accepted 9 July 2007. Available online 14 July 2007. Communicated by C.K.R.T. Jones. Abstract. A class of orbital motions with volume preserving flows and with vector fields periodic in the "time" parameter θ is defined. Spin motion coupled to the orbital dynamics is then defined, resulting in a class of spin–orbit motions which are important for storage rings. Phase space densities and polarization fields are introduced. It is important, in the context of storage rings, to understand the behavior of periodic polarization fields and phase space densities. Due to the 2π time periodicity of the spin–orbit equations of motion, the polarization field, taken at a sequence of increasing time values θ, θ + 2π, θ + 4π, ..., gives a sequence of polarization fields, called the stroboscopic sequence. We show, by using the Birkhoff ergodic theorem, that under very general conditions the Cesàro averages of that sequence converge almost everywhere on phase space to a polarization field which is 2π-periodic in time. This fulfills the main aim of this paper in that it demonstrates that the tracking algorithm for stroboscopic averaging, encoded in the program SPRINT and used in the study of spin motion in storage rings, is mathematically well-founded.
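For context, the Cesàro averages referred to above are, schematically (writing $P(z, \theta)$ for the polarization field at phase space point $z$ and time $\theta$ is an illustrative notational assumption, not taken from the paper),

$$ \bar{P}(z, \theta) \;=\; \lim_{N \to \infty} \frac{1}{N} \sum_{k=0}^{N-1} P(z, \theta + 2\pi k), $$

and the Birkhoff ergodic theorem is what guarantees, under suitable conditions, that this limit exists for almost every $z$ and is $2\pi$-periodic in $\theta$.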
  • Approximations in Numerical Analysis, Cont'd
    Jim Lambers, MAT 460/560, Fall Semester 2009-10, Lecture 7 Notes. These notes correspond to Section 1.3 in the text. Approximations in Numerical Analysis, cont'd. Convergence. Many algorithms in numerical analysis are iterative methods that produce a sequence $\{\alpha_n\}$ of approximate solutions which, ideally, converges to a limit $\alpha$ that is the exact solution as $n$ approaches $\infty$. Because we can only perform a finite number of iterations, we cannot obtain the exact solution, and we have introduced computational error. If our iterative method is properly designed, then this computational error will approach zero as $n$ approaches $\infty$. However, it is important that we obtain a sufficiently accurate approximate solution using as few computations as possible. Therefore, it is not practical to simply perform enough iterations so that the computational error is determined to be sufficiently small, because it is possible that another method may yield comparable accuracy with less computational effort. The total computational effort of an iterative method depends on both the effort per iteration and the number of iterations performed. Therefore, in order to determine the amount of computation that is needed to attain a given accuracy, we must be able to measure the error in $\alpha_n$ as a function of $n$. The more rapidly this function approaches zero as $n$ approaches $\infty$, the more rapidly the sequence of approximations $\{\alpha_n\}$ converges to the exact solution $\alpha$, and as a result, fewer iterations are needed to achieve a desired accuracy. We now introduce some terminology that will aid in the discussion of the convergence behavior of iterative methods.
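As an example of the terminology alluded to above (a standard definition, not quoted from these particular notes): a sequence $\{\alpha_n\}$ converging to $\alpha$ is said to converge with order $p \ge 1$ and asymptotic error constant $\lambda > 0$ if

$$ \lim_{n \to \infty} \frac{|\alpha_{n+1} - \alpha|}{|\alpha_n - \alpha|^{p}} = \lambda, $$

with $p = 1$, $\lambda < 1$ corresponding to linear convergence and $p = 2$ to quadratic convergence.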
  • On the Three Methods for Bounding the Rate of Convergence for Some Continuous-Time Markov Chains
    On the Three Methods for Bounding the Rate of Convergence for some Continuous-time Markov Chains. Alexander Zeifman∗, Yacov Satin†, Anastasia Kryukova‡, Rostislav Razumchik§, Ksenia Kiseleva¶, Galina Shilova‖. arXiv:1911.04086v2 [math.PR], 24 Feb 2020. Abstract. Consideration is given to the three different analytical methods for the computation of upper bounds for the rate of convergence to the limiting regime of one specific class of (in)homogeneous continuous-time Markov chains. This class is particularly suited to describe evolutions of the total number of customers in (in)homogeneous M/M/S queueing systems with possibly state-dependent arrival and service intensities, batch arrivals and services. One of the methods is based on the logarithmic norm of a linear operator function; the other two rely on Lyapunov functions and differential inequalities, respectively. Less restrictive conditions (compared to those known from the literature) under which the methods are applicable are being formulated. Two numerical examples are given. It is also shown that for homogeneous birth-death Markov processes defined on a finite state space with all transition rates being positive, all methods yield the same sharp upper bound. ∗ Vologda State University, Vologda, Russia; Institute of Informatics Problems, Federal Research Center "Computer Science and Control", Russian Academy of Sciences, Moscow, Russia; Vologda Research Center of the Russian Academy of Sciences, Vologda, Russia. E-mail: [email protected] † Vologda State University, Vologda, Russia. ‡ Vologda State ...
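For orientation, the logarithmic norm mentioned above is a standard tool (the following definition and bound are well-known facts, not details taken from this particular paper): for a matrix $A$ and an induced operator norm $\|\cdot\|$,

$$ \gamma(A) \;=\; \lim_{h \to 0^{+}} \frac{\|I + hA\| - 1}{h}, \qquad \text{and} \qquad \|e^{tA}\| \;\le\; e^{\,t\,\gamma(A)} \quad \text{for all } t \ge 0, $$

so an upper bound on the logarithmic norm of a (suitably transformed) intensity matrix yields an explicit exponential bound on the rate of convergence.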
  • CONVERGENCE RATES of MARKOV CHAINS 1. Orientation 1
    CONVERGENCE RATES OF MARKOV CHAINS. STEVEN P. LALLEY. Contents: 1. Orientation; 1.1. Example 1: Random-to-Top Card Shuffling; 1.2. Example 2: The Ehrenfest Urn Model of Diffusion; 1.3. Example 3: Metropolis-Hastings Algorithm; 1.4. Reversible Markov Chains; 1.5. Exercises: Reversibility, Symmetries and Stationary Distributions; 2. Coupling; 2.1. Coupling and Total Variation Distance; 2.2. Coupling Constructions and Convergence of Markov Chains; 2.3. Couplings for the Ehrenfest Urn and Random-to-Top Shuffling; 2.4. The Coupon Collector's Problem; 2.5. Exercises; 2.6. Convergence Rates for the Ehrenfest Urn and Random-to-Top; 2.7. Exercises; 3. Spectral Analysis; 3.1. Transition Kernel of a Reversible Markov Chain; 3.2. Spectrum of the Ehrenfest random walk; 3.3. Rate of convergence of the Ehrenfest random walk. 1. Orientation. Finite-state Markov chains have stationary distributions, and irreducible, aperiodic, finite-state Markov chains have unique stationary distributions. Furthermore, for any such chain the n-step transition probabilities converge to the stationary distribution. In various applications – especially in Markov chain Monte Carlo, where one runs a Markov chain on a computer to simulate a random draw from the stationary distribution – it is desirable to know how many steps it takes for the n-step transition probabilities to become close to the stationary probabilities. These notes will introduce several of the most basic and important techniques for studying this problem, coupling and spectral analysis. We will focus on two Markov chains for which the convergence rate is of particular interest: (1) the random-to-top shuffling model and (2) the Ehrenfest urn model.
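To illustrate the coupling technique mentioned above (standard facts, stated here for context rather than quoted from the notes): for probability measures $\mu, \nu$ on a finite state space, the total variation distance is $\|\mu - \nu\|_{TV} = \max_{A} |\mu(A) - \nu(A)|$, and if $(X_n, Y_n)$ is a coupling of two copies of the chain, one started at $x$ and one in the stationary distribution $\pi$, which move together after the coupling time $\tau = \min\{n : X_n = Y_n\}$, then

$$ \| P^{n}(x, \cdot) - \pi \|_{TV} \;\le\; \mathbb{P}(\tau > n), $$

so bounding the tail of the coupling time bounds the rate of convergence to stationarity.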
  • Lecture Notes 4 36-705 1 Reminder: Convergence of Sequences 2
    Lecture Notes 4, 36-705. In today's lecture we discuss the convergence of random variables. At a high level, our first few lectures focused on non-asymptotic properties of averages, i.e. the tail bounds we derived applied for any fixed sample size $n$. For the next few lectures we focus on asymptotic properties, i.e. we ask the question: what happens to the average of $n$ i.i.d. random variables as $n \to \infty$? Roughly, from a theoretical perspective the idea is that many expressions will considerably simplify in the asymptotic regime. Rather than have many different tail bounds, we will derive simple "universal results" that hold under extremely weak conditions. From a slightly more practical perspective, asymptotic theory is often useful to obtain approximate confidence intervals. 1 Reminder: convergence of sequences. When we think of convergence of deterministic real numbers the corresponding notions are classical. Formally, we say that a sequence of real numbers $a_1, a_2, \ldots$ converges to a fixed real number $a$ if, for every positive number $\epsilon$, there exists a natural number $N(\epsilon)$ such that for all $n \ge N(\epsilon)$, $|a_n - a| < \epsilon$. We call $a$ the limit of the sequence and write $\lim_{n \to \infty} a_n = a$. Our focus today will be on trying to develop analogues of this notion that apply to sequences of random variables. We will first give some definitions and then try to circle back to relate the definitions and discuss some examples. Throughout, we will focus on the setting where we have a sequence of random variables $X_1, \ldots, X_n$ and another random variable $X$, and would like to define what it means for the sequence to converge to $X$.
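For reference, the standard hierarchy of convergence notions that such notes go on to develop (stated here for context; the notes' own numbering and phrasing may differ) is: $X_n \to X$ almost surely if $\mathbb{P}(\lim_{n \to \infty} X_n = X) = 1$, and $X_n \to X$ in probability if $\mathbb{P}(|X_n - X| \ge \epsilon) \to 0$ for every $\epsilon > 0$, with

$$ X_n \xrightarrow{\text{a.s.}} X \;\Longrightarrow\; X_n \xrightarrow{\ \mathbb{P}\ } X \;\Longrightarrow\; X_n \xrightarrow{\ d\ } X, $$

and neither implication reversible in general.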
  • Convergence Rates for the Quantum Central Limit Theorem
    Commun. Math. Phys. 383, 223–279 (2021). Communications in Mathematical Physics. Digital Object Identifier (DOI): https://doi.org/10.1007/s00220-021-03988-1. Convergence Rates for the Quantum Central Limit Theorem. Simon Becker¹, Nilanjana Datta¹, Ludovico Lami²,³, Cambyse Rouzé⁴. ¹ Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Cambridge CB3 0WA, UK. E-mail: [email protected]; [email protected]. ² School of Mathematical Sciences and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, University Park, Nottingham NG7 2RD, UK. ³ Institute of Theoretical Physics and IQST, Universität Ulm, Albert-Einstein-Allee, 89069 Ulm, Germany. E-mail: [email protected]. ⁴ Zentrum Mathematik, Technische Universität München, 85748 Garching, Germany. E-mail: [email protected]. Received: 20 February 2020 / Accepted: 15 January 2021. Published online: 15 February 2021. © The Author(s) 2021. Contents: 1. Introduction; 2. Notation and Definitions; 2.1 Mathematical notation; 2.2 Definitions; 2.2.1 Quantum information with continuous variables; 2.2.2 Phase space formalism; 2.3 Moments; 2.4 Quantum convolution; 3. Cushen and Hudson's Quantum Central Limit Theorem; 4. Main Results; 4.1 Quantitative bounds in the QCLT; 4.2 Optimality of convergence rates and necessity of finite second moments in the QCLT; 4.3 Applications to capacity of cascades of beam splitters with non-Gaussian environment; 4.4 New results on quantum characteristic functions; 5. New Results on Quantum Characteristic Functions: Proofs ...
  • Stat 609: Mathematical Statistics Lecture 19
    Lecture 19: Convergence. Asymptotic approach. In statistical analysis or inference, a key to the success of finding a good procedure is being able to find some moments and/or distributions of various statistics. In many complicated problems we are not able to find exactly the moments or distributions of given statistics. When the sample size $n$ is large, we may approximate the moments and distributions of statistics using asymptotic tools, some of which are studied in this course. In an asymptotic analysis, we consider a sample $X = (X_1, \ldots, X_n)$ not for fixed $n$, but as a member of a sequence corresponding to $n = n_0, n_0 + 1, \ldots$, and obtain the limit of the distribution of an appropriately normalized statistic or variable $T_n(X)$ as $n \to \infty$. The limiting distribution and its moments are used as approximations to the distribution and moments of $T_n(X)$ in the situation with a large but actually finite $n$. This leads to some asymptotic statistical procedures and asymptotic criteria for assessing their performances. The asymptotic approach is not only applied to the situation where no exact method (the approach considering a fixed $n$) is available, but also used to provide a procedure simpler (e.g., in terms of computation) than that produced by the exact approach. In addition to providing more theoretical results and/or simpler procedures, the asymptotic approach requires less stringent mathematical assumptions than does the exact approach. Definition 5.5.1 (convergence in probability). A sequence of random variables $Z_n$, $n = 1, 2, \ldots$, converges in probability to a random variable $Z$ iff for every $\epsilon > 0$, $\lim_{n \to \infty} P(|Z_n - Z| \ge \epsilon) = 0$. A sequence of random vectors $Z_n$ converges in probability to a random vector $Z$ iff each component of $Z_n$ converges in probability to the corresponding component of $Z$. (UW-Madison, Statistics, Stat 609, Lecture 19, 2015.)
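A canonical example of Definition 5.5.1 (a standard argument, added here for illustration): if $X_1, \ldots, X_n$ are i.i.d. with mean $\mu$ and finite variance $\sigma^2$, then Chebyshev's inequality gives, for every $\epsilon > 0$,

$$ P\bigl( |\bar{X}_n - \mu| \ge \epsilon \bigr) \;\le\; \frac{\mathrm{Var}(\bar{X}_n)}{\epsilon^{2}} \;=\; \frac{\sigma^{2}}{n \epsilon^{2}} \;\longrightarrow\; 0 \quad \text{as } n \to \infty, $$

so the sample mean $\bar{X}_n$ converges in probability to $\mu$ (the weak law of large numbers).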
  • High Order Regularization of Dirac-Delta Sources in Two Space Dimensions
    INTERNSHIP REPORT. High order regularization of Dirac-delta sources in two space dimensions. Author: Wouter de Vries (s1010239). Supervisors: Prof. Guustaaf Jacobs, Jean-Piero Suarez. Host institution: San Diego State University, San Diego (CA), USA, Department of Aerospace Engineering and Engineering Mechanics, Computational Fluid Dynamics laboratory, Prof. G.B. Jacobs. Home institution: University of Twente, Enschede, Netherlands, Faculty of Engineering Technology, Group of Engineering Fluid Dynamics, Prof. H.W.M. Hoeijmakers. March 4, 2015. Abstract. In this report the regularization of singular sources appearing in hyperbolic conservation laws is studied. Dirac-delta sources represent small particles which appear as source terms in, for example, particle-laden flow. The Dirac-delta sources are regularized with a high order accurate polynomial based regularization technique. Instead of regularizing a single isolated particle, the aim is to employ the regularization technique for a cloud of particles distributed on a non-uniform grid. By assuming that the resulting force exerted by a cloud of particles can be represented by a continuous function, the influence of the source term can be approximated by the convolution of the continuous function and the regularized Dirac-delta function. It is shown that in one dimension the distribution of the singular sources, represented by the convolution of the continuous function and Dirac-delta function, can be approximated with high accuracy using the regularization technique. The method is extended to two dimensions and high order approximations of the source term are obtained as well. The resulting approximation of the singular sources is interpolated using a polynomial interpolation in order to regain a continuous distribution representing the force exerted by the cloud of particles.
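Schematically, a polynomial-based regularization of the kind described replaces the Dirac delta by a compactly supported kernel; the generic form below, with its moment conditions, is an illustrative sketch under stated assumptions rather than the report's exact construction:

$$ \delta_{\epsilon}(x) = \frac{1}{\epsilon} \, P\!\left( \frac{x}{\epsilon} \right) \mathbf{1}_{\{|x| \le \epsilon\}}, \qquad \int_{-1}^{1} P(s)\, ds = 1, \qquad \int_{-1}^{1} s^{j} P(s)\, ds = 0 \quad (1 \le j \le m-1), $$

so that the regularized source $(f * \delta_{\epsilon})(x)$ approximates $f(x)$ with error $O(\epsilon^{m})$ for sufficiently smooth $f$.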
  • Strongly Mixed Random Errors in Mann's Iteration Algorithm for a Contractive Real Function
    Pré-Publicações do Departamento de Matemática, Universidade de Coimbra, Preprint Number 17-29. STRONGLY MIXED RANDOM ERRORS IN MANN'S ITERATION ALGORITHM FOR A CONTRACTIVE REAL FUNCTION. Hassina Arroudj, Idir Arab and Abdelnasser Dahmani. Abstract: This work deals with Mann's stochastic iteration algorithm under α-mixing random errors. We establish Fuk-Nagaev's inequalities that enable us to prove the almost complete convergence with its corresponding rate of convergence. Moreover, these inequalities give us the possibility of constructing a confidence interval for the unique fixed point. Finally, to check the feasibility and validity of our theoretical results, we consider some numerical examples, namely a classical example from astronomy. Keywords: Fixed point; Fuk-Nagaev's inequalities; Mann's iteration algorithm; α-mixing; rate of convergence; confidence domain. 1. Introduction. In many mathematical problems arising from various domains, the existence of a solution is the same as the existence of a fixed point by some appropriate transformation of the problem. The best-known problem in that framework is root existence, which can be tackled easily as the existence of a fixed point and vice versa. Therefore, fixed point theory is of paramount importance in engineering sciences and many areas of mathematics. Fixed point theory provides conditions under which maps have existence and uniqueness of solutions. Over the last decades, that theory has been revealed as one of the most significant tools in the study of nonlinear problems. In particular, in many fields, equilibria or stability are fundamental concepts that can be described in terms of fixed points.
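A minimal sketch of Mann's iteration for a contractive real function with additive random errors follows; the particular contraction, the step sizes $a_k = 1/(k+1)$, and the i.i.d. noise are illustrative assumptions (the paper itself treats the more general α-mixing errors).

```python
import numpy as np

rng = np.random.default_rng(1)

def T(x):
    """A contractive real map; its fixed point solves x = 0.5*cos(x), x* ~ 0.4502."""
    return 0.5 * np.cos(x)

def mann_iteration(x0, n_iter=5_000, noise_scale=0.1):
    """Mann's scheme x_{k+1} = (1 - a_k) x_k + a_k (T(x_k) + e_k)."""
    x = x0
    for k in range(n_iter):
        a_k = 1.0 / (k + 1)                  # diminishing step sizes
        e_k = noise_scale * rng.normal()     # i.i.d. errors (simplifying assumption)
        x = (1 - a_k) * x + a_k * (T(x) + e_k)
    return x

x_hat = mann_iteration(x0=2.0)
print(f"estimated fixed point: {x_hat:.4f}")
print(f"residual |x - T(x)|  : {abs(x_hat - T(x_hat)):.2e}")
```

Despite the noise, the diminishing steps average the errors out, so the iterates settle near the unique fixed point.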
  • 1 Probability Spaces
    BROWN UNIVERSITY, Math 1610 Probability Notes, Samuel S. Watson. Last updated: December 18, 2015. Please do not hesitate to notify me about any mistakes you find in these notes. My advice is to refresh your local copy of this document frequently, as I will be updating it throughout the semester. 1 Probability Spaces. We model random phenomena with a probability space, which consists of an arbitrary set $\Omega$, a collection $\mathcal{F}$ of subsets of $\Omega$, and a map $P : \mathcal{F} \to [0, 1]$, where $\mathcal{F}$ and $P$ satisfy certain conditions detailed below. An element $\omega \in \Omega$ is called an outcome, an element $E \in \mathcal{F}$ is called an event, and we interpret $P(E)$ as the probability of the event $E$. To connect this setup to the way you usually think about probability, regard $\omega$ as having been randomly selected from $\Omega$ in such a way that, for each event $E$, the probability that $\omega$ is in $E$ (in other words, that $E$ happens) is equal to $P(E)$. If $E$ and $F$ are events, then the event "$E$ and $F$" corresponds to the set $\{\omega : \omega \in E \text{ and } \omega \in F\}$, abbreviated as $E \cap F$. Similarly, $E \cup F$ is the event that $E$ happens or $F$ happens, and $\Omega \setminus E$ is the event that $E$ does not happen. We refer to $\Omega \setminus E$ as the complement of $E$, and sometimes denote it by $E^c$. To ensure that we can perform these basic operations, we require that $\mathcal{F}$ is closed under them. In other words, $\Omega \setminus E$ must be an event whenever $E$ is an event (that is, $\Omega \setminus E \in \mathcal{F}$ whenever $E \in \mathcal{F}$).
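A tiny worked example in the spirit of these definitions (not taken from the notes themselves): for a fair die take $\Omega = \{1, \ldots, 6\}$, $\mathcal{F} = 2^{\Omega}$ and $P(E) = |E|/6$. With $E = \{2, 4, 6\}$ ("roll is even") and $F = \{4, 5, 6\}$ ("roll is at least four"),

$$ E \cap F = \{4, 6\}, \quad P(E \cap F) = \tfrac{1}{3}, \qquad E \cup F = \{2, 4, 5, 6\}, \quad P(E \cup F) = \tfrac{2}{3}, \qquad E^{c} = \{1, 3, 5\}, \quad P(E^{c}) = \tfrac{1}{2}. $$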