Financial Mathematics
Rüdiger Kiesel
February 2005

Lecture 1
Department of Financial Mathematics, University of Ulm
Department of Statistics, LSE
© Rüdiger Kiesel

Aims and Objectives

• Review basic concepts of probability theory;
• Discuss random variables, their distributions and the notion of independence;
• Calculate functionals and transforms;
• Review basic limit theorems.

Probability theory

To describe a random experiment we use the sample space Ω, the set of all possible outcomes. Each point ω of Ω, or sample point, represents a possible outcome of performing the random experiment. For a set A ⊆ Ω we want to know the probability IP(A). The class F of subsets of Ω whose probabilities IP(A) are defined (call such A events) should be a σ-algebra, i.e. closed under countable unions and complements, and should contain the empty set ∅ and the whole space Ω.

Examples. Flip coins; roll two dice.

Probability theory

We want:
(i) $IP(\emptyset) = 0$, $IP(\Omega) = 1$;
(ii) $IP(A) \geq 0$ for all $A$;
(iii) if $A_1, A_2, \ldots$ are disjoint, then $IP\left(\bigcup_i A_i\right) = \sum_i IP(A_i)$ (countable additivity);
(iv) if $B \subseteq A$ and $IP(A) = 0$, then $IP(B) = 0$ (completeness).

A probability space, or Kolmogorov triple, is a triple (Ω, F, IP) satisfying the Kolmogorov axioms (i)–(iv) above. A probability space is a mathematical model of a random experiment.

Probability theory

Let (Ω, F, IP) be a probability space. A random variable (vector) X is a function $X : \Omega \to \mathbb{R}$ ($\mathbb{R}^k$) such that $X^{-1}(B) = \{\omega \in \Omega : X(\omega) \in B\} \in \mathcal{F}$ for all Borel sets $B \in \mathcal{B}$ ($\mathcal{B}(\mathbb{R}^k)$). In particular, for a random variable X, $\{\omega \in \Omega : X(\omega) \leq x\} \in \mathcal{F}$ for all $x \in \mathbb{R}$.
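On a finite sample space the Kolmogorov axioms can be checked directly. A minimal Python sketch for the two-dice example; the names `Omega` and `IP` are our own, chosen only to mirror the notation above:

```python
from fractions import Fraction
from itertools import product

# Finite probability space for rolling two fair dice: Omega is the set of
# ordered pairs, and IP assigns each event (subset of Omega) the exact
# fraction of sample points it contains.
Omega = set(product(range(1, 7), repeat=2))

def IP(A):
    return Fraction(len(A & Omega), len(Omega))

# Kolmogorov axioms on this space:
assert IP(set()) == 0 and IP(Omega) == 1       # (i)
A = {w for w in Omega if sum(w) == 7}          # event "sum is 7"
B = {w for w in Omega if sum(w) == 11}         # event "sum is 11"
assert IP(A) >= 0                              # (ii)
assert IP(A | B) == IP(A) + IP(B)              # (iii) for disjoint A, B
print(IP(A))  # 1/6
```

Using `Fraction` keeps the arithmetic exact, so the additivity check holds with equality rather than up to floating-point error.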
So we can define the distribution function $F_X$ of X by
  $F_X(x) := IP(\{\omega : X(\omega) \leq x\})$.
Recall: σ(X) denotes the σ-algebra generated by X.

Probability theory

• Binomial distribution (number of successes):
  $IP(S_n = k) = \binom{n}{k} p^k (1-p)^{n-k}$.
• Geometric distribution (waiting time):
  $IP(N = n) = p(1-p)^{n-1}$.
• Poisson distribution:
  $IP(X = k) = e^{-\lambda} \frac{\lambda^k}{k!}$.
• Uniform distribution:
  $f(x) = \frac{1}{b-a} \, 1_{(a,b)}(x)$.

Probability theory

• Exponential distribution:
  $f(x) = \lambda e^{-\lambda x} \, 1_{[0,\infty)}(x)$.

The expectation E of a random variable X on (Ω, F, IP) is defined by
  $E X := \int_\Omega X \, dIP$, or $\int_\Omega X(\omega) \, dIP(\omega)$.
The variance of a random variable is defined as
  $V(X) := E\left((X - E X)^2\right) = E(X^2) - (E X)^2$.

Probability theory

If X is real-valued with density f,
  $E X := \int x f(x) \, dx$;
or, if X is discrete, taking values $x_n$ ($n = 1, 2, \ldots$) with probability function $f(x_n)$ ($\geq 0$),
  $E X := \sum_n x_n f(x_n)$.
Examples. Moments for some of the above distributions.

Random variables $X_1, \ldots, X_n$ are independent if, whenever $A_i \in \mathcal{B}$ for $i = 1, \ldots, n$, we have
  $IP\left(\bigcap_{i=1}^n \{X_i \in A_i\}\right) = \prod_{i=1}^n IP(\{X_i \in A_i\})$.

Probability theory

In order for $X_1, \ldots, X_n$ to be independent it is necessary and sufficient that, for all $x_1, \ldots, x_n \in (-\infty, \infty]$,
  $IP\left(\bigcap_{i=1}^n \{X_i \leq x_i\}\right) = \prod_{i=1}^n IP(\{X_i \leq x_i\})$.

Multiplication Theorem. If $X_1, \ldots, X_n$ are independent and $E|X_i| < \infty$, $i = 1, \ldots, n$, then
  $E\left(\prod_{i=1}^n X_i\right) = \prod_{i=1}^n E(X_i)$.

Probability theory

If X, Y are independent with distribution functions F, G, let Z := X + Y have distribution function H. Call H the convolution of F and G, written H = F ∗ G. Suppose X, Y have densities f, g.
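For integer-valued random variables the convolution has a discrete analogue, with the integral replaced by a sum over pmfs. A sketch for the sum of two fair dice; the helper `convolve` and the list representation are our own illustration, not part of the lecture:

```python
# Discrete analogue of H = F * G: for independent integer-valued X, Y,
# the pmf of X + Y is the convolution of the two pmfs.
# List indices are the values; a fair die puts mass 1/6 on 1..6.
die = [0.0] + [1/6] * 6                      # pmf of one die on values 0..6

def convolve(f, g):
    """Discrete convolution: h[k] = sum over i+j=k of f[i] * g[j]."""
    h = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

two = convolve(die, die)                     # pmf of the sum of two dice
assert abs(sum(two) - 1.0) < 1e-12           # still a pmf
assert abs(two[7] - 6/36) < 1e-12            # IP(sum = 7) = 1/6
assert abs(two[2] - 1/36) < 1e-12            # only (1, 1) gives sum 2
```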
For the distribution function of the sum we then obtain
  $H(z) = IP(X + Y \leq z) = \iint_{\{(x,y) : x+y \leq z\}} f(x) g(y) \, dx \, dy$,
thus
  $H(z) = \int_{-\infty}^{\infty} f(x) \left( \int_{-\infty}^{z-x} g(y) \, dy \right) dx = \int_{-\infty}^{\infty} f(x) G(z-x) \, dx$.
Example. Gamma distribution.

Probability theory

If X is a random variable with distribution function F, its moment generating function $\varphi_X$ is
  $\varphi_X(t) := E(e^{tX}) = \int_{-\infty}^{\infty} e^{tx} \, dF(x)$.
The mgf takes convolution into multiplication: if X, Y are independent, then
  $\varphi_{X+Y}(t) = \varphi_X(t) \, \varphi_Y(t)$.
Observe that $\varphi^{(k)}(t) = E(X^k e^{tX})$, so $\varphi^{(k)}(0) = E(X^k)$.
For X taking values in the nonnegative integers, use the generating function
  $\gamma_X(z) = E(z^X) = \sum_{k=0}^{\infty} z^k \, IP(X = k)$.

Probability theory

Conditional expectation. For events:
  $IP(A \mid B) := IP(A \cap B) / IP(B)$, if $IP(B) > 0$.
This implies the multiplication rule
  $IP(A \cap B) = IP(A \mid B) \, IP(B)$
and leads to the Bayes rule
  $IP(A_i \mid B) = \frac{IP(A_i) \, IP(B \mid A_i)}{\sum_j IP(A_j) \, IP(B \mid A_j)}$.

Probability theory

For discrete random variables: if X takes values $x_1, \ldots, x_m$ with probabilities $f_1(x_i) > 0$, Y takes values $y_1, \ldots, y_n$ with probabilities $f_2(y_j) > 0$, and (X, Y) takes values $(x_i, y_j)$ with probabilities $f(x_i, y_j) > 0$, then the marginal distributions are
  $f_1(x_i) = \sum_{j=1}^{n} f(x_i, y_j), \qquad f_2(y_j) = \sum_{i=1}^{m} f(x_i, y_j)$.

Probability theory

  $IP(Y = y_j \mid X = x_i) = \frac{IP(X = x_i, Y = y_j)}{IP(X = x_i)} = \frac{f(x_i, y_j)}{f_1(x_i)} = \frac{f(x_i, y_j)}{\sum_{j=1}^{n} f(x_i, y_j)}$.
So the conditional distribution of Y given $X = x_i$ is
  $f_{Y|X}(y_j \mid x_i) = \frac{f(x_i, y_j)}{f_1(x_i)} = \frac{f(x_i, y_j)}{\sum_{j=1}^{n} f(x_i, y_j)}$.
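The Bayes rule stated above can be traced on a small example. The scenario here is invented for illustration: two urns chosen uniformly at random, urn 1 with three red and one blue ball, urn 2 with one red and three blue, and B the event "a red ball is drawn":

```python
# Bayes rule: IP(A_i | B) = IP(A_i) IP(B | A_i) / sum_j IP(A_j) IP(B | A_j).
priors = {1: 0.5, 2: 0.5}      # IP(A_i): which urn was chosen
lik = {1: 0.75, 2: 0.25}       # IP(B | A_i): chance of red from each urn

denom = sum(priors[j] * lik[j] for j in priors)          # IP(B), by total probability
post = {i: priors[i] * lik[i] / denom for i in priors}   # IP(A_i | B)

assert abs(post[1] - 0.75) < 1e-12          # red makes urn 1 three times as likely
assert abs(sum(post.values()) - 1.0) < 1e-12
```

The denominator is exactly the total-probability sum in the displayed formula, so the posterior weights automatically sum to one.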
The expectation of this conditional distribution is
  $E(Y \mid X = x_i) = \sum_j y_j \, f_{Y|X}(y_j \mid x_i) = \frac{\sum_j y_j f(x_i, y_j)}{\sum_j f(x_i, y_j)}$.

Probability theory

Density case. If (X, Y) has density f(x, y), then X has density $f_1(x) := \int_{-\infty}^{\infty} f(x, y) \, dy$ and Y has density $f_2(y) := \int_{-\infty}^{\infty} f(x, y) \, dx$. The conditional density of Y given X = x is
  $f_{Y|X}(y \mid x) := \frac{f(x, y)}{f_1(x)} = \frac{f(x, y)}{\int_{-\infty}^{\infty} f(x, y) \, dy}$.
Its expectation is
  $E(Y \mid X = x) = \int_{-\infty}^{\infty} y \, f_{Y|X}(y \mid x) \, dy = \frac{\int_{-\infty}^{\infty} y f(x, y) \, dy}{\int_{-\infty}^{\infty} f(x, y) \, dy}$.

Probability theory

General case. Suppose that G is a sub-σ-algebra of F, G ⊂ F. If Y is a non-negative random variable with $E Y < \infty$, then
  $Q(B) := \int_B Y \, dIP \qquad (B \in \mathcal{G})$
is non-negative and σ-additive, because
  $\int_B Y \, dIP = \sum_n \int_{B_n} Y \, dIP$
if $B = \cup_n B_n$ with the $B_n$ disjoint; since Q is defined on the σ-algebra G, it is a measure on G.
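The discrete conditional-expectation formulas can likewise be traced through a small joint table; the entries of `joint` below are invented purely for illustration:

```python
# Conditional pmf and conditional expectation from a joint table f(x_i, y_j),
# following the discrete formulas above. Keys are (x, y) pairs.
joint = {(0, 1): 0.1, (0, 2): 0.3, (1, 1): 0.2, (1, 2): 0.4}
ys = sorted({y for _, y in joint})

def f1(x):
    """Marginal of X: f1(x_i) = sum_j f(x_i, y_j)."""
    return sum(joint[(x, y)] for y in ys)

def f_cond(y, x):
    """Conditional pmf: f_{Y|X}(y_j | x_i) = f(x_i, y_j) / f1(x_i)."""
    return joint[(x, y)] / f1(x)

def E_cond(x):
    """Conditional expectation: E(Y | X = x_i) = sum_j y_j f_{Y|X}(y_j | x_i)."""
    return sum(y * f_cond(y, x) for y in ys)

assert abs(sum(f_cond(y, 0) for y in ys) - 1.0) < 1e-12  # conditional pmf sums to 1
assert abs(E_cond(0) - 1.75) < 1e-12                     # (1*0.1 + 2*0.3) / 0.4
```

Note that `E_cond` is a function of the conditioning value x, which is exactly the viewpoint the general (sub-σ-algebra) case abstracts.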