Math Notes for ECE 278


Math Notes for ECE 278

G. C. Papen

September 6, 2017

© 2017 by George C. Papen. All rights reserved. No part of this manuscript is to be reproduced without written consent of the author.

Contents

1 Background
  1.1 Linear Systems
    1.1.1 Bandwidth and Timewidth
    1.1.2 Passband and Complex-Baseband Signals
    1.1.3 Signal Space
  1.2 Random Signals
    1.2.1 Probability Distribution Functions
    1.2.2 Random Processes
  1.3 Electromagnetics
    1.3.1 Material Properties
    1.3.2 The Wave Equation
    1.3.3 Random Electromagnetic Fields
  1.4 References
  1.5 Problems
2 Examples
  2.1 Filter Estimation
  2.2 Constant-Modulus Objective Function
  2.3 Adaptive Estimation
Bibliography

1 Background

The study of communication systems is rich and rewarding, bringing together a broad range of topics in engineering and physics. Our development of this subject draws on an understanding of basic material in the subjects of linear systems, random signals, and electromagnetics. The emphasis in this chapter is on the concepts that are relevant to the understanding of modern digital communication systems. This background chapter also introduces and reinforces the various, sometimes alternative, sets of notation that are used throughout the book. Much of the understanding of the various topics in this book depends on the choice of clear and appropriate notation and terminology.
1.1 Linear Systems

A communication system conveys information by embedding that information into temporal and perhaps spatial variations of a propagating signal. We begin with a discussion of the properties of signals and systems. A signal is a real-valued or complex-valued function of a continuous or discrete variable called time. A system responds to a signal s(t) at its input, producing one or more signals r(t) at its output. The most amenable systems are linear systems because a linear mathematical model can support powerful methods of analysis and design. We are interested in both discrete systems and continuous systems, expressed in a variety of mathematical forms such as continuous integral equations, continuous differential equations, or discrete difference equations.

Figure 1.1: A block diagram of a linear system characterized by an impulse response function h(t). Using the properties of homogeneity and additivity, an input ax1(t) + bx2(t) produces an output ay1(t) + by2(t).

A communication signal may be a real function of time or a complex function of time. The rectangular form of a complex function is a(t) = a_R(t) + i a_I(t), where a_R(t) is the real part, a_I(t) is the imaginary part, and i² = −1. The polar form is a(t) = A(t) e^{iφ(t)}, where A(t) = √(a_R(t)² + a_I(t)²) is the amplitude and φ(t) = tan⁻¹(a_I(t)/a_R(t)) is the phase.

Systems can be classified by the properties that relate the input s(t) to the output r(t).

Linearity

A system, either real or complex, is linear if it is homogeneous and additive:

1. Homogeneous systems: If input s(t) has output r(t), then for every scalar a, real or complex, input a s(t) has output a r(t).

2. Additive systems: If input x1(t) has output y1(t) and input x2(t) has output y2(t), then input x1(t) + x2(t) has output y1(t) + y2(t).
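The notes contain no code, but the two defining properties can be checked numerically. The following sketch (an editorial illustration, not part of the original notes) uses a hypothetical three-tap moving-average filter as the linear system and verifies that a complex-weighted superposition of inputs produces the same superposition of outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def system(s):
    # An assumed example of a linear system: a three-tap moving-average filter.
    h = np.array([0.5, 0.3, 0.2])
    return np.convolve(s, h)

x1 = rng.standard_normal(64)
x2 = rng.standard_normal(64)
a, b = 2.0 + 1.0j, -0.7  # scalars may be real or complex

# Homogeneity and additivity together: input a*x1 + b*x2
# must produce output a*y1 + b*y2.
lhs = system(a * x1 + b * x2)
rhs = a * system(x1) + b * system(x2)
assert np.allclose(lhs, rhs)
```

Any filter realized as a convolution passes this check; a system with, say, a squaring nonlinearity would fail it.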
The output r(t) of a linear continuous-time system can be written as a superposition integral of the input s(t) and a function h(t, τ):

r(t) = ∫_{−∞}^{∞} h(t, τ) s(τ) dτ,    (1.1.1)

where h(t, τ), called the time-varying impulse response, is defined as the output of the system at time t in response to a Dirac impulse δ(t − τ).

The Dirac impulse δ(t) is not a proper function.¹ It is defined by the formal integral relationship

s(t) = ∫_{−∞}^{∞} δ(t − τ) s(τ) dτ,    (1.1.2)

for any function s(t). This integral is referred to as the sifting property of the Dirac impulse. For the treatment of discrete-time signals, a Kronecker impulse δ_mn is useful, defined by δ_mn equal to one if m is equal to n, and δ_mn equal to zero otherwise.

Shift Invariance

Under appropriate conditions, a system described by a superposition integral can be reduced to a simpler form known as a shift-invariant system or, when appropriate, a time-invariant or space-invariant system. If input s(t) has output r(t), then for every τ, input s(t − τ) has output r(t − τ). In this case, the impulse response of a linear and shift-invariant system depends only on the time difference, so that h(t, τ) = h(t − τ, 0), and (1.1.1) reduces to

r(t) = ∫_{−∞}^{∞} h(τ) s(t − τ) dτ.    (1.1.3)

The output is then a convolution of the input s(t) and the shift-invariant impulse response h(t), denoted by r(t) = s(t) ∗ h(t). The shift-invariant impulse response is also called, simply, the impulse response. Every linear shift-invariant system can be described as a linear shift-invariant filter.

Convolution has the following properties:

1. Commutative property: h(t) ∗ s(t) = s(t) ∗ h(t).

2. Distributive property: h(t) ∗ (x1(t) + x2(t)) = h(t) ∗ x1(t) + h(t) ∗ x2(t).

¹ A Dirac impulse is an example of a generalized function or a generalized signal. For the formal theory see Strichartz (2003).
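The discrete-time analogues of the sifting property and of shift invariance are easy to demonstrate numerically. In the sketch below (an editorial illustration with an arbitrary example filter h), a Kronecker impulse at n = 0 recovers the impulse response, and delaying the input by k samples delays the output by the same k samples:

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])  # impulse response of an assumed LTI filter

# Discrete sifting property: a Kronecker impulse at n = 0
# recovers the impulse response itself.
delta = np.zeros(8)
delta[0] = 1.0
assert np.allclose(np.convolve(delta, h)[:len(h)], h)

# Shift invariance: delaying the input by k samples
# delays the output by exactly k samples.
s = np.array([1.0, -2.0, 3.0, 0.5])
k = 5
shifted = np.concatenate([np.zeros(k), s])  # s delayed by k samples
y = np.convolve(s, h)
y_shifted = np.convolve(shifted, h)
assert np.allclose(y_shifted[k:k + len(y)], y)
```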
3. Associative property: h1(t) ∗ (h2(t) ∗ s(t)) = (h1(t) ∗ h2(t)) ∗ s(t).

Using the distributive property of convolution, we can write, for complex functions a(t) = b(t) ∗ c(t),

a_R(t) + i a_I(t) = (b_R(t) + i b_I(t)) ∗ (c_R(t) + i c_I(t))
                  = b_R(t) ∗ c_R(t) − b_I(t) ∗ c_I(t) + i (b_R(t) ∗ c_I(t) + b_I(t) ∗ c_R(t)).    (1.1.4)

The class of shift-invariant systems includes all those described by constant-coefficient linear differential equations. An example of a spatially-invariant system is free space: because it has no boundaries, the choice of the spatial origin is arbitrary. Systems with spatial boundaries are spatially-varying in at least one direction, but may be spatially-invariant in the other directions. However, many spatial systems with boundaries can be approximated as spatially-invariant over a limited range of spatial inputs.

Causality

A causal filter h(t) is a linear filter whose impulse response is equal to zero for all times t less than zero. A causal filter cannot have an output before it has an input. A right-sided signal s(t) has a value equal to zero for all times less than zero. A linear time-invariant system is causal if and only if it has a right-sided impulse response.

A causal h(t) can be defined using the unit-step function

u(t) = 1 for t > 0,  1/2 for t = 0,  0 for t < 0.    (1.1.5)

A linear shift-invariant system is causal if its impulse response h(t) satisfies h(t) = h(t)u(t) except at t = 0. For this case, the lower limit of the integral for the output signal given in (1.1.3) is equal to zero.

A function related to the unit-step function is the signum function, defined as

sgn(t) = 2u(t) − 1 = 1 for t > 0,  0 for t = 0,  −1 for t < 0.    (1.1.6)

A system for which the output r(t) depends on only the current value of s(t) is called memoryless. The corresponding property in space is called local.
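The rectangular-form expansion (1.1.4) says that one complex convolution equals four real convolutions combined with the usual complex-multiplication signs. A quick numerical check (an editorial sketch with random test sequences, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
b = rng.standard_normal(16) + 1j * rng.standard_normal(16)
c = rng.standard_normal(16) + 1j * rng.standard_normal(16)

# Direct complex convolution.
direct = np.convolve(b, c)

# Four real convolutions combined as in (1.1.4):
# real part b_R*c_R − b_I*c_I, imaginary part b_R*c_I + b_I*c_R.
rebuilt = (np.convolve(b.real, c.real) - np.convolve(b.imag, c.imag)
           + 1j * (np.convolve(b.real, c.imag) + np.convolve(b.imag, c.real)))
assert np.allclose(direct, rebuilt)
```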
The Fourier Transform

The Fourier transform (or spectrum) S(f) of the temporal signal s(t) is defined, provided the integral exists, as

S(f) = ∫_{−∞}^{∞} s(t) e^{−i2πft} dt.    (1.1.7)

The Fourier transform formally exists for any signal whose energy E, given by

E = ∫_{−∞}^{∞} |s(t)|² dt,    (1.1.8)

is finite. Such signals are called finite-energy or square-integrable signals. The Fourier transform can be extended to include a large number of signals and generalized signals with infinite energy, but finite power, such as cos(2πf_c t) and e^{i2πf_c t}, by means of a limiting process that often can be expressed using the Dirac impulse δ(t).

The signal s(t) can be recovered as an inverse Fourier transform

s(t) = ∫_{−∞}^{∞} S(f) e^{i2πft} df,    (1.1.9)

with s(t) ↔ S(f) denoting the transform pair. To this purpose, two signals whose difference has zero energy are regarded as the same signal. Another way to say this is that the two signals are equal almost everywhere.

A Fourier transform can also be defined for spatial signals. For a one-dimensional spatial signal f(x), we have

F(k) = ∫_{−∞}^{∞} f(x) e^{ikx} dx,    (1.1.10)

where k is the spatial frequency, which is the spatial equivalent of the temporal angular frequency ω = 2πf.

Properties of the Fourier Transform

Several properties of the Fourier transform used to analyze communication systems are listed below.

1. Scaling:

s(at) ↔ (1/|a|) S(f/a)    (1.1.11)

for any nonzero real value a. This scaling property states that the width of a function in one domain scales inversely with the width of the function in the other domain.
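For finite discrete signals, the DFT plays the role of the transform pair (1.1.7)/(1.1.9). The following sketch (an editorial illustration using NumPy's FFT, which is the discrete analogue rather than the continuous transform defined above) checks that the inverse transform recovers the signal and that the energy (1.1.8) can be computed in either domain:

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.standard_normal(256)

S = np.fft.fft(s)

# Inverse transform recovers the signal; the "equal almost everywhere"
# caveat becomes exact equality (up to rounding) for finite discrete signals.
assert np.allclose(np.fft.ifft(S), s)

# Discrete Parseval relation: energy in time equals energy in frequency
# (the 1/N factor reflects NumPy's unnormalized-forward FFT convention).
E_time = np.sum(np.abs(s) ** 2)
E_freq = np.sum(np.abs(S) ** 2) / len(s)
assert np.isclose(E_time, E_freq)
```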