Modeling Periodic Functions for Time Series Analysis

PDF, 16 pages, 1020 KB

From: AAAI Technical Report WS-98-07. Compilation copyright © 1998, AAAI (www.aaai.org). All rights reserved.

Anish Biswas
Department of Mathematics and Computer Science
University of Tulsa
600 South College Avenue
Tulsa, OK 74014
E-mail: [email protected]

Abstract

Time series problems involve analysis of periodic functions for predicting the future. A flexible regression method should be able to dynamically select the appropriate model to fit the available data. In this paper, we present a function approximation scheme that can be used for modeling periodic functions using a series of orthogonal polynomials, named Chebychev polynomials. In our approach, we obtain an estimate of the error due to neglecting higher order polynomials and thus can flexibly select a polynomial model of the proper order. We also show that this approximation approach is stable in the presence of noise.

Keywords: Function approximation, Orthogonal polynomials

Introduction

Analysis of time series data plays an important role in finance and marketing. A model derived from past data can help to predict future behavior of the phenomenon under study. For example, models predicting demand for a product can be used to direct capital allocation. For analysis of time series data we, in general, start with a predetermined model and try to tune its parameters to fit the data. For example, we may choose to fit a linear model to the data. For a flexible approximation scheme, however, the model should be chosen dynamically, based on the data. In this paper, a function approximation scheme is presented which can be used for modeling periodic functions. We have used a series of orthogonal polynomials, called Chebychev polynomials, for approximating time series data. We know that any function can be represented by an infinite series of Chebychev polynomials; but we cannot work with an infinite series, so we use only a finite number of terms to derive an approximation. This method gives an approximate model for the data using n Chebychev polynomials, where the number of polynomials can be decided dynamically from the data. The approximated model with n Chebychev polynomials gives the least squared error achievable with n polynomials. Furthermore, our algorithm allows incremental development of the model, i.e., with each piece of data the model can be improved. We have also provided the convergence proof of the algorithm, which says that under infinite sampling this method is guaranteed to give an accurate model, and the level of accuracy is limited only by the error due to truncation of the series.

Chebychev polynomials

Chebychev polynomials are a family of orthogonal polynomials (Geronimus 1961). Any function f(x) may be approximated by a weighted sum of these polynomial functions with an appropriate selection of the coefficients:

    f(x) = a_0/2 + Σ_{i=1}^{∞} a_i T_i(x),

where T_i(x) is the i-th Chebychev polynomial and

    a_i = (2/π) ∫_{-1}^{1} f(x) T_i(x) / √(1 − x²) dx.

Working with an infinite series is not feasible in practice. We can, however, truncate the above series and still obtain an approximation, f̂(x), of the function (Gerald & Wheatly 1992). The Chebychev series converges faster than the Taylor series for the same function (Gerald & Wheatly 1992). Let us assume we are using only the first n terms of the series. For a rapidly converging series, the error due to truncation is approximately given by the first term of the remainder, i.e., a_n T_n(x). We have chosen Chebychev polynomials for function approximation because the truncation point can be chosen to provide approximate error bounds. It should also be noted that the expansion of a function f(x) with n Chebychev polynomials gives the least squared polynomial approximation with n polynomials (Fox & Parker 1968).

To produce a viable mechanism for building a model based on Chebychev polynomials, we need an algorithm for calculating the polynomial coefficients. We also need to prove that the algorithmic updates result in convergence of the approximated function to the actual time function underlying the sampled data. We provide such an algorithm and the associated convergence theorem in the next section.
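The truncated expansion above can be made concrete with a short numerical sketch. The following Python example is not from the original paper; it is a minimal illustration in which the coefficients a_i are estimated by quadrature (using the substitution x = cos θ) and the truncated series is evaluated. The helper names, the quadrature resolution, and the test grid are arbitrary choices, not anything defined in the paper.

```python
import numpy as np

def chebychev_coefficients(f, n, num_quad=2000):
    """Estimate a_i = (2/pi) * integral_{-1}^{1} f(x) T_i(x) / sqrt(1 - x^2) dx.

    Substituting x = cos(theta) turns the weighted integral into
    (2/pi) * integral_0^pi f(cos(theta)) * cos(i*theta) d(theta),
    which is evaluated here with a simple midpoint rule.
    """
    theta = (np.arange(num_quad) + 0.5) * np.pi / num_quad
    fx = f(np.cos(theta))
    return np.array([(2.0 / num_quad) * np.sum(fx * np.cos(i * theta))
                     for i in range(n + 1)])

def chebychev_eval(coeffs, x):
    """Evaluate a_0/2 + sum_{i>=1} a_i T_i(x), using T_i(x) = cos(i * arccos(x))."""
    theta = np.arccos(np.clip(x, -1.0, 1.0))
    result = coeffs[0] / 2.0
    for i in range(1, len(coeffs)):
        result = result + coeffs[i] * np.cos(i * theta)
    return result

if __name__ == "__main__":
    f = lambda x: np.abs(np.sin(5 * x))   # one of the test functions used in the paper
    n = 10                                # number of Chebychev polynomials, as in Figure 2
    a = chebychev_coefficients(f, n)
    xs = np.linspace(-1.0, 1.0, 201)
    print("max abs error on grid:", np.max(np.abs(f(xs) - chebychev_eval(a, xs))))
    print("truncation estimate |a_n|:", abs(a[-1]))
```

Because |T_n(x)| ≤ 1 on [−1, 1], the magnitude of the last retained coefficient, |a_n|, can be read directly as a rough estimate of the truncation error, which is what allows the model order to be chosen from the data.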
An algorithm to get the temporal model from time series data

Let f(x) be the target function of x, and let f̂(x) be its approximation based on the set of samples S = {S_j}, where S_j = (x_j, v_{x_j}) for j = 1, 2, ..., k, k is the number of instances, and v_{x_j} = f(x_j). We may have to rescale the values x_j so that they all lie in the range [−1, 1]; we then obtain the approximated function on [−1, 1], which we may need to scale back to get the desired values.

Let n be the number of Chebychev polynomials we are going to use for the purpose of learning, and let T_i(x), i ∈ [0..n], be the Chebychev polynomials. The steps of the algorithm are:

1. Initialize C_i = 0 for all i = 0, 1, 2, ..., n.
2. For all j do
3.   For all i = 0, 1, 2, ..., n:
         C_i ← C_i + v_{x_j} T_i(x_j) / √(1 − x_j²)
4. End for
5. Set
         f̂(x) = K (C_0/2 + Σ_{i=1}^{n} C_i T_i(x)),
   where K = φ(k) is a normalizing factor that is a function of the number of iterations k.
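Read as a streaming procedure, the five steps above amount to maintaining one running sum per coefficient. The sketch below is an illustrative Python reading of the algorithm, not code from the paper: the class name, the endpoint guard, and the concrete normalization K = 4/(π·k) are assumptions (the paper only states that K = φ(k) depends on the number of iterations); the chosen K corresponds to treating the accumulated sums as a Riemann-sum estimate of the coefficient integrals over k roughly uniformly spaced samples.

```python
import numpy as np

class IncrementalChebychevModel:
    """Incremental coefficient updates for a Chebychev approximation (steps 1-5).

    Samples (x, f(x)) arrive one at a time, with x already rescaled into [-1, 1].
    """

    def __init__(self, n):
        self.n = n
        self.C = np.zeros(n + 1)   # step 1: C_i = 0 for all i
        self.k = 0                 # number of samples seen so far

    def update(self, x, value):
        # Steps 2-4: for every i, add v_x * T_i(x) / sqrt(1 - x^2) to C_i.
        theta = np.arccos(np.clip(x, -1.0, 1.0))
        weight = value / np.sqrt(max(1.0 - x * x, 1e-12))   # guard the endpoints
        self.C += weight * np.cos(np.arange(self.n + 1) * theta)
        self.k += 1

    def predict(self, x):
        # Step 5: f_hat(x) = K * (C_0/2 + sum_{i=1}^{n} C_i T_i(x)).
        # K = phi(k) is only described as a function of the sample count;
        # 4 / (pi * k) is an assumed choice, consistent with a Riemann-sum
        # reading of the coefficient integrals over k uniform samples.
        K = 4.0 / (np.pi * max(self.k, 1))
        theta = np.arccos(np.clip(x, -1.0, 1.0))
        terms = self.C * np.cos(np.arange(self.n + 1) * theta)
        return K * (terms[0] / 2.0 + terms[1:].sum())

if __name__ == "__main__":
    model = IncrementalChebychevModel(n=20)
    for x in np.linspace(-0.999, 0.999, 1000):   # 1000 data points, as in the figures
        model.update(x, np.sin(10 * x))
    print(model.predict(0.3), np.sin(10 * 0.3))
```

Because each update only touches the n + 1 running sums, the model can be refined with every new data point, which is the incremental behaviour claimed in the introduction.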
Approximations of periodic functions

We know that a periodic function f(x) can in general be expressed as

    f(x) = f(x + T·n),

where T is the period and n ∈ {0, 1, 2, ...}. So the nature of the function within a single period and the length of the period are enough to predict its behavior at unknown x values. From our approximation scheme we get these two pieces of information and can successfully predict the functional values in unseen situations. We have taken some typical periodic functions and tested the algorithm on them:

    f(x) = sin(x·10)  and  f(x) = |sin(x·5)|.

We know that the error due to truncation is given by the first neglected term in the series, and thus we can flexibly decide the number of terms to be used for the approximation. The actual functions and the polynomial approximations obtained for these two functions are presented in Figures 1 and 2. We have fed each data point sequentially into the algorithm, and the resulting functions are found to be fairly good approximations of the underlying functions. Thus we can get a good measure of the period and of the nature of the function within a period, and we can use this information to obtain the functional values at unseen points.

[Figure 1: Approximating f(x) = sin(10·x); number of Chebychev polynomials used = 20 and number of data points = 1000.]

[Figure 2: Approximating f(x) = |sin(5·x)|; number of Chebychev polynomials used = 10 and number of data points = 1000.]

We have also shown the nature of the improvement of the model with more data points for f(x) = |sin(x·5)| in Figure 3. We have calculated the error as the average difference between the approximate and the accurate functional values over a set of m sampled points in the interval [−1, 1],

    error = (1/m) Σ_{j=1}^{m} |f(x_j) − f̂(x_j)|.

It shows that the approximation accuracy increases rapidly with the number of points.

Effects of Noise: We have also tested the approximation scheme on noisy data, where noise is added to each data point. Here we assume that the random noise follows a Normal distribution. We have run experiments with random noise following a Normal distribution (μ = 0, σ) with σ = 0.1, σ = 0.3, and σ = 0.5, where the value of f(x) is in the range [−1, 1]. We have investigated the nature of the decreasing error with more data points and see that the process is resilient to noise, as the average error does not increase rapidly with noise, as shown in Figure 4.

[Figure 3: Nature of the decreasing error in approximating f(x) = |sin(x·5)|; number of Chebychev polynomials used is 11 and sampling resolution is 1000.]

[Figure 4: Nature of the decreasing error in approximating f(x) = sin(x·10) under noise; number of Chebychev polynomials used is 20 and sampling resolution is 1000.]

Theorem 1 Under infinite sampling the algorithm can approximate the actual function.

Proof: See Appendix.

Conclusion

We have presented an algorithm that can build a model from time series data where this model is not predetermined [...]
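As a closing illustration, the sketch below ties the pieces together: it trains on noisy samples of one period and then predicts at time values far outside the training window by folding them back into the base period, as described in the section on approximating periodic functions. It is not from the paper; it reuses the IncrementalChebychevModel class assumed in the previous sketch, and the helper name, the noise level, and the test period are illustrative choices.

```python
import numpy as np

def predict_periodic(model, t, period, t0=0.0):
    """Predict f(t) for any t by exploiting f(x) = f(x + n*T): fold t into the
    base period [t0, t0 + period) and rescale it into [-1, 1]."""
    phase = (t - t0) % period
    x = 2.0 * phase / period - 1.0
    return model.predict(x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = IncrementalChebychevModel(n=20)
    # Train on one period of sin(pi*x) on [-1, 1] (period T = 2), with additive
    # Gaussian noise (mu = 0, sigma = 0.1) as in the paper's noise experiments.
    for x in np.linspace(-0.999, 0.999, 1000):
        model.update(x, np.sin(np.pi * x) + rng.normal(0.0, 0.1))
    # Query far outside the training window.
    for t in (2.4, 7.3, -5.1):
        print(t, predict_periodic(model, t, period=2.0, t0=-1.0), np.sin(np.pi * t))
```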
Recommended publications
  • 7: Inner Products, Fourier Series, Convolution
    7: FOURIER SERIES. STEVEN HEILMAN. Contents: 1. Review; 2. Introduction; 3. Periodic Functions; 4. Inner Products on Periodic Functions; 5. Trigonometric Polynomials; 6. Periodic Convolutions; 7. Fourier Inversion and Plancherel Theorems; 8. Appendix: Notation. 1. Review. Exercise 1.1. Let (X, ⟨·,·⟩) be a (real or complex) inner product space. Define ‖·‖: X → [0, ∞) by ‖x‖ := √⟨x, x⟩. Then (X, ‖·‖) is a normed linear space. Consequently, if we define d: X × X → [0, ∞) by d(x, y) := √⟨(x − y), (x − y)⟩, then (X, d) is a metric space. Exercise 1.2 (Cauchy-Schwarz Inequality). Let (X, ⟨·,·⟩) be a complex inner product space. Let x, y ∈ X. Then |⟨x, y⟩| ≤ ⟨x, x⟩^{1/2} ⟨y, y⟩^{1/2}. Exercise 1.3. Let (X, ⟨·,·⟩) be a complex inner product space. Let x, y ∈ X. As usual, let ‖x‖ := √⟨x, x⟩. Prove Pythagoras's theorem: if ⟨x, y⟩ = 0, then ‖x + y‖² = ‖x‖² + ‖y‖². Theorem 1.4 (Weierstrass M-test). Let (X, d) be a metric space and let (f_j)_{j=1}^{∞} be a sequence of bounded real-valued continuous functions on X such that the series (of real numbers) Σ_{j=1}^{∞} ‖f_j‖_∞ is absolutely convergent. Then the series Σ_{j=1}^{∞} f_j converges uniformly to some continuous function f: X → ℝ. 2. Introduction. A general problem in analysis is to approximate a general function by a series that is relatively easy to describe. With the Weierstrass Approximation theorem, we saw that it is possible to achieve this goal by approximating compactly supported continuous functions by polynomials.
  • Fourier Series
    Academic Press Encyclopedia of Physical Science and Technology. Fourier Series. James S. Walker, Department of Mathematics, University of Wisconsin–Eau Claire, Eau Claire, WI 54702–4004. Phone: 715–836–3301. Fax: 715–836–2924. E-mail: [email protected]. I. Introduction II. Historical background III. Definition of Fourier series IV. Convergence of Fourier series V. Convergence in norm VI. Summability of Fourier series VII. Generalized Fourier series VIII. Discrete Fourier series IX. Conclusion. GLOSSARY. Bounded variation: A function f has bounded variation on a closed interval [a, b] if there exists a positive constant M such that, for all finite sets of points a = x_0 < x_1 < ... < x_m = b, the inequality Σ_{k=1}^{m} |f(x_k) − f(x_{k−1})| ≤ M is satisfied. Jordan proved that a function has bounded variation if and only if it can be expressed as the difference of two non-decreasing functions. Countably infinite set: A set is countably infinite if it can be put into one-to-one correspondence with the set of natural numbers (1, 2, 3, ...). Examples: The integers and the rational numbers are countably infinite sets. Continuous function: If lim_{x→x₀} f(x) = f(x₀), then the function f is continuous at the point x₀. Such a point is called a continuity point for f. A function which is continuous at all points is simply referred to as continuous. Lebesgue measure zero: A set S of real numbers is said to have Lebesgue measure zero if, for each ε > 0, there exists a collection of open intervals (a_k, b_k) such that S is contained in their union and Σ_k (b_k − a_k) ≤ ε. Examples: All finite sets, and all countably infinite sets, have Lebesgue measure zero. Odd and even functions: A function f is odd if f(−x) = −f(x) for all x in its domain.
  • Kriging Prediction with Isotropic Matérn Correlations: Robustness
    Journal of Machine Learning Research 21 (2020) 1-38 Submitted 12/19; Revised 7/20; Published 9/20 Kriging Prediction with Isotropic Mat´ernCorrelations: Robustness and Experimental Designs Rui Tuo∗y [email protected] Wm Michael Barnes '64 Department of Industrial and Systems Engineering Texas A&M University College Station, TX 77843, USA Wenjia Wang∗ [email protected] The Hong Kong University of Science and Technology Clear Water Bay, Kowloon, Hong Kong Editor: Philipp Hennig Abstract This work investigates the prediction performance of the kriging predictors. We derive some error bounds for the prediction error in terms of non-asymptotic probability under the uniform metric and Lp metrics when the spectral densities of both the true and the imposed correlation functions decay algebraically. The Mat´ernfamily is a prominent class of correlation functions of this kind. Our analysis shows that, when the smoothness of the imposed correlation function exceeds that of the true correlation function, the prediction error becomes more sensitive to the space-filling property of the design points. In particular, the kriging predictor can still reach the optimal rate of convergence, if the experimental design scheme is quasi-uniform. Lower bounds of the kriging prediction error are also derived under the uniform metric and Lp metrics. An accurate characterization of this error is obtained, when an oversmoothed correlation function and a space-filling design is used. Keywords: Computer Experiments, Uncertainty Quantification, Scattered Data Approx- imation, Space-filling Designs, Bayesian Machine Learning 1. Introduction In contemporary mathematical modeling and data analysis, we often face the challenge of reconstructing smooth functions from scattered observations.
  • Contents 23: Fourier Series
    Contents 23: Fourier Series. 23.1 Periodic Functions; 23.2 Representing Periodic Functions by Fourier Series; 23.3 Even and Odd Functions; 23.4 Convergence; 23.5 Half-range Series; 23.6 The Complex Form; 23.7 An Application of Fourier Series. Learning outcomes: In this Workbook you will learn how to express a periodic signal f(t) in a series of sines and cosines. You will learn how to simplify the calculations if the signal happens to be an even or an odd function. You will learn some brief facts relating to the convergence of the Fourier series. You will learn how to approximate a non-periodic signal by a Fourier series. You will learn how to re-express a standard Fourier series in complex form, which paves the way for a later examination of Fourier transforms. Finally you will learn about some simple applications of Fourier series. 23.1 Periodic Functions. Introduction: You should already know how to take a function of a single variable f(x) and represent it by a power series in x about any point x₀ of interest. Such a series is known as a Taylor series or Taylor expansion or, if x₀ = 0, as a Maclaurin series. This topic was first met in Workbook 16. Such an expansion is only possible if the function is sufficiently smooth (that is, if it can be differentiated as often as required). Geometrically this means that there are no jumps or spikes in the curve y = f(x) near the point of expansion.
  • On the Real Projections of Zeros of Almost Periodic Functions
    ON THE REAL PROJECTIONS OF ZEROS OF ALMOST PERIODIC FUNCTIONS J.M. SEPULCRE AND T. VIDAL Abstract. This paper deals with the set of the real projections of the zeros of an arbitrary almost periodic function defined in a vertical strip U. It pro- vides practical results in order to determine whether a real number belongs to the closure of such a set. Its main result shows that, in the case that the Fourier exponents {λ1, λ2, λ3,...} of an almost periodic function are linearly independent over the rational numbers, such a set has no isolated points in U. 1. Introduction The theory of almost periodic functions, which was created and developed in its main features by H. Bohr during the 1920’s, opened a way to study a wide class of trigonometric series of the general type and even exponential series. This theory shortly acquired numerous applications to various areas of mathematics, from harmonic analysis to differential equations. In the case of the functions that are defined on the real numbers, the notion of almost periodicity is a generalization of purely periodic functions and, in fact, as in classical Fourier analysis, any almost periodic function is associated with a Fourier series with real frequencies. Let us briefly recall some notions concerning the theory of the almost periodic functions of a complex variable, which was theorized in [4] (see also [3, 5, 8, 11]). A function f(s), s = σ + it, analytic in a vertical strip U = {s = σ + it ∈ C : α< σ<β} (−∞ ≤ α<β ≤ ∞), is called almost periodic in U if to any ε > 0 there exists a number l = l(ε) such that each interval t0 <t<t0 + l of length l contains a number τ satisfying |f(s + iτ) − f(s)|≤ ε ∀s ∈ U.
  • Almost Periodic Sequences and Functions with Given Values
    ARCHIVUM MATHEMATICUM (BRNO) Tomus 47 (2011), 1–16 ALMOST PERIODIC SEQUENCES AND FUNCTIONS WITH GIVEN VALUES Michal Veselý Abstract. We present a method for constructing almost periodic sequences and functions with values in a metric space. Applying this method, we find almost periodic sequences and functions with prescribed values. Especially, for any totally bounded countable set X in a metric space, it is proved the existence of an almost periodic sequence {ψk}k∈Z such that {ψk; k ∈ Z} = X and ψk = ψk+lq(k), l ∈ Z for all k and some q(k) ∈ N which depends on k. 1. Introduction The aim of this paper is to construct almost periodic sequences and functions which attain values in a metric space. More precisely, our aim is to find almost periodic sequences and functions whose ranges contain or consist of arbitrarily given subsets of the metric space satisfying certain conditions. We are motivated by the paper [3] where a similar problem is investigated for real-valued sequences. In that paper, using an explicit construction, it is shown that, for any bounded countable set of real numbers, there exists an almost periodic sequence whose range is this set and which attains each value in this set periodically. We will extend this result to sequences attaining values in any metric space. Concerning almost periodic sequences with indices k ∈ N (or asymptotically almost periodic sequences), we refer to [4] where it is proved that, for any precompact sequence {xk}k∈N in a metric space X , there exists a permutation P of the set of positive integers such that the sequence {xP (k)}k∈N is almost periodic.
  • Fourier Transform for Mean Periodic Functions
    Annales Univ. Sci. Budapest., Sect. Comp. 35 (2011) 267–283 FOURIER TRANSFORM FOR MEAN PERIODIC FUNCTIONS L´aszl´oSz´ekelyhidi (Debrecen, Hungary) Dedicated to the 60th birthday of Professor Antal J´arai Abstract. Mean periodic functions are natural generalizations of periodic functions. There are different transforms - like Fourier transforms - defined for these types of functions. In this note we introduce some transforms and compare them with the usual Fourier transform. 1. Introduction In this paper C(R) denotes the locally convex topological vector space of all continuous complex valued functions on the reals, equipped with the linear operations and the topology of uniform convergence on compact sets. Any closed translation invariant subspace of C(R)iscalledavariety. The smallest variety containing a given f in C(R)iscalledthevariety generated by f and it is denoted by τ(f). If this is different from C(R), then f is called mean periodic. In other words, a function f in C(R) is mean periodic if and only if there exists a nonzero continuous linear functional μ on C(R) such that (1) f ∗ μ =0 2000 Mathematics Subject Classification: 42A16, 42A38. Key words and phrases: Mean periodic function, Fourier transform, Carleman transform. The research was supported by the Hungarian National Foundation for Scientific Research (OTKA), Grant No. NK-68040. 268 L. Sz´ekelyhidi holds. In this case sometimes we say that f is mean periodic with respect to μ. As any continuous linear functional on C(R) can be identified with a compactly supported complex Borel measure on R, equation (1) has the form (2) f(x − y) dμ(y)=0 for each x in R.
  • 6.436J / 15.085J Fundamentals of Probability, Homework 1 Solutions
    MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2018 Problem Set 1. Readings: (a) Notes from Lecture 1. (b) Handout on background material on sets and real analysis (Recitation 1). Supplementary readings: [C], Sections 1.1-1.4. [GS], Sections 1.1-1.3. [W], Sections 1.0-1.5, 1.9. Exercise 1. (a) Let N be the set of positive integers. A function f: N → {0, 1} is said to be periodic if there exists some N such that f(n + N) = f(n), for all n ∈ N. Show that the set of periodic functions is countable. (b) Does the result from part (a) remain valid if we consider rational-valued periodic functions f: N → Q? Solution: (a) For a given positive integer N, let A_N denote the set of periodic functions with a period of N. For a given N, since the sequence f(1), ..., f(N) actually defines a periodic function in A_N, we have that each A_N contains 2^N elements. For example, for N = 2, there are four functions in the set A_2: f(1)f(2)f(3)f(4)... = 0000...; 1111...; 0101...; 1010.... The set of periodic functions from N to {0, 1}, A, can be written as A = ∪_{N=1}^{∞} A_N. Since the union of countably many finite sets is countable, we conclude that the set of periodic functions from N to {0, 1} is countable. (b) Still, for a given positive integer N, let A_N denote the set of periodic functions with period N. For a given N, since the sequence f(1), ..., f(N) actually defines a periodic function in A_N, we conclude that A_N has the same cardinality as Q^N (the Cartesian product of N sets of rational numbers).
  • Fourier Series
    Chapter 10: Fourier Series. 10.1 Periodic Functions and Orthogonality Relations. The differential equation y″ + ω₀²y = F cos ωt models a mass-spring system with natural frequency ω₀ with a pure cosine forcing function of frequency ω. If ω₀² ≠ ω², a particular solution is easily found by undetermined coefficients (or by using Laplace transforms) to be y_p = F/(ω₀² − ω²) · cos ωt. If the forcing function is a linear combination of simple cosine functions, so that the differential equation is y″ + ω₀²y = Σ_{n=1}^{N} F_n cos ω_n t, where ω₀² ≠ ω_n² for any n, then, by linearity, a particular solution is obtained as the sum y_p(t) = Σ_{n=1}^{N} F_n/(ω₀² − ω_n²) · cos ω_n t. This simple procedure can be extended to any function that can be represented as a sum of cosine (and sine) functions, even if that summation is not a finite sum. It turns out that the functions that can be represented as sums in this form are very general, and include most of the periodic functions that are usually encountered in applications. Periodic Functions: A function f is said to be periodic with period p > 0 if f(t + p) = f(t) for all t in the domain of f. This means that the graph of f repeats in successive intervals of length p, as can be seen in the graph in Figure 10.1. [Fig. 10.1: An example of a periodic function with period p.]
  • Fourier Series for Periodic Functions Lecture #8 5CT3,4,6,7
    Fourier Series for Periodic Functions. Lecture #8, 5CT3,4,6,7. BME 333 Biomedical Signals and Systems - J. Schesser. Fourier Series for Periodic Functions: Up to now we have solved the problem of approximating a function f(t) by f_a(t) within an interval T. However, if f(t) is periodic with period T, i.e., f(t) = f(t + T), then the approximation is true for all t. And if we represent a periodic function in terms of an infinite Fourier series, the frequencies are all integral multiples of the frequency 1/T, where k = 1 corresponds to the fundamental frequency of the function and the remainder are its harmonics: a_k = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−j2πkt/T} dt, and f(t) = a_0 + Σ_{k=1}^{∞} [a_k e^{j2πkt/T} + a_k* e^{−j2πkt/T}] = a_0 + Σ_{k=1}^{∞} 2 Re[a_k e^{j2πkt/T}], so f(t) = a_0 + Σ_{k=1}^{∞} C_k cos(2πkt/T + θ_k), where 2a_k = C_k e^{jθ_k} and a_0 = C_0. Another Form for the Fourier Series: f(t) = C_0 + Σ_{k=1}^{∞} C_k cos(2πkt/T + θ_k) = A_0 + Σ_{k=1}^{∞} [A_k cos(2πkt/T) + B_k sin(2πkt/T)], where A_k = C_k cos θ_k, B_k = −C_k sin θ_k, and A_0 = C_0. Fourier Series Theorem: Any periodic function f(t) with period T which is integrable (∫ |f(t)| dt < ∞) can be represented by an infinite Fourier series. If [f(t)]² is also integrable, then the series converges to the value of f(t) at every point where f(t) is continuous, and to the average value at any discontinuity.
  • Periodic Linear Systems Lecture 17 Math 634 10/8/99
    Periodic Linear Systems. Lecture 17, Math 634, 10/8/99. We now consider ẋ = A(t)x (1) when A is a continuous periodic n × n matrix function of t; i.e., when there is a constant T > 0 such that A(t + T) = A(t) for every t ∈ ℝ. When that condition is satisfied, we say, more precisely, that A is T-periodic. If T is the smallest positive number for which this condition holds, we say that T is the minimal period of A. Let A be T-periodic, and let X(t) be a fundamental matrix for (1). Define X̃: ℝ → L(ℝⁿ, ℝⁿ) by X̃(t) = X(t + T). Clearly, the columns of X̃ are linearly independent functions of t. Also, (d/dt)X̃(t) = (d/dt)X(t + T) = X′(t + T) = A(t + T)X(t + T) = A(t)X̃(t), so X̃ solves the matrix equivalent of (1). Hence, X̃ is a fundamental matrix for (1). Because the dimension of the solution space of (1) is n, this means that there is a nonsingular (constant) matrix C such that X(t + T) = X(t)C for every t ∈ ℝ. C is called a monodromy matrix. Lemma: There exists B ∈ L(ℂⁿ, ℂⁿ) such that C = e^{TB}. Proof. Without loss of generality, we assume that T = 1, since if it isn't we can just rescale B by a scalar constant. We also assume, without loss of generality, that C is in Jordan canonical form. (If it isn't, then use the fact that P⁻¹CP = e^{B} implies that C = e^{PBP⁻¹}.) Furthermore, because of the way the matrix exponential acts on a block diagonal matrix, it suffices to show that for each p × p Jordan block ...
  • On Prediction Properties of Kriging: Uniform Error Bounds and Robustness
    On Prediction Properties of Kriging: Uniform Error Bounds and Robustness Wenjia Wang∗1, Rui Tuoy2 and C. F. Jeff Wuz3 1The Statistical and Applied Mathematical Sciences Institute, Durham, NC 27709, USA 2Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX 77843, USA 3The H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA Abstract Kriging based on Gaussian random fields is widely used in reconstructing unknown functions. The kriging method has pointwise predictive distributions which are com- putationally simple. However, in many applications one would like to predict for a range of untried points simultaneously. In this work we obtain some error bounds for the simple and universal kriging predictor under the uniform metric. It works for a scattered set of input points in an arbitrary dimension, and also covers the case where the covariance function of the Gaussian process is misspecified. These results lead to a better understanding of the rate of convergence of kriging under the Gaussian or the Matérn correlation functions, the relationship between space-filling designs and kriging models, and the robustness of the Matérn correlation functions. Keywords: Gaussian Process modeling; Uniform convergence; Space-filling designs; Radial arXiv:1710.06959v4 [math.ST] 19 Mar 2019 basis functions; Spatial statistics. ∗Wenjia Wang is a postdoctoral fellow in the Statistical and Applied Mathematical Sciences Institute, Durham, NC 27709, USA (Email: [email protected]); yRui Tuo is Assistant Professor in Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX 77843, USA (Email: [email protected]); Tuo’s work is supported by NSF grant DMS 1564438 and NSFC grants 11501551, 11271355 and 11671386.