Actuarial Science Exam 1/P

Ville A. Satopää

December 5, 2009

Contents

1 Review of Algebra and Calculus
2 Basic Probability Concepts
3 Conditional Probability and Independence
4 Combinatorial Principles, Permutations and Combinations
5 Random Variables and Probability Distributions
6 Expectation and Other Distribution Parameters
7 Frequently Used Discrete Distributions
8 Frequently Used Continuous Distributions
9 Joint, Marginal, and Conditional Distributions
10 Transformations of Random Variables
11 Risk Management Concepts

1 Review of Algebra and Calculus

1. The complement of the set B: this can be denoted B', B̄, or ~B.

2. For any two sets A and B we have A = (A ∩ B) ∪ (A ∩ B').

3. The inverse of a function exists only if the function is one-to-one.

4. The roots of a quadratic equation ax^2 + bx + c = 0 are given by

   x = (-b ± √(b^2 - 4ac)) / (2a)

   Recall that the equation has distinct real roots if b^2 - 4ac > 0, distinct complex roots if b^2 - 4ac < 0, and equal real roots if b^2 - 4ac = 0.

5. Exponential functions are of the form f(x) = b^x, where b > 0 and b ≠ 1. The inverse of this function is denoted log_b(y). Recall that b^(log_b(y)) = y for y > 0, and that b^x = e^(x log(b)).

6. A function f is continuous at the point x = c if lim_{x→c} f(x) = f(c).

7. The algebraic definition of the derivative of f at x_0 is

   f'(x_0) = (df/dx)|_{x=x_0} = lim_{h→0} [f(x_0 + h) - f(x_0)] / h

8. Differentiation: the product and quotient rules,

   g(x)h(x) → g'(x)h(x) + g(x)h'(x)

   g(x)/h(x) → [h(x)g'(x) - g(x)h'(x)] / [h(x)]^2

   and some other rules:

   a^x → a^x log(a)
   log_b(x) → 1 / (x log(b))
   sin(x) → cos(x)
   cos(x) → -sin(x)

9. L'Hôpital's rule: if lim_{x→c} f(x) = lim_{x→c} g(x) = 0 or ±∞ and lim_{x→c} f'(x)/g'(x) exists, then

   lim_{x→c} f(x)/g(x) = lim_{x→c} f'(x)/g'(x)

10. Integration: some rules,

    1/x → log(x) + C
    a^x → a^x / log(a) + C
    x e^(ax) → x e^(ax)/a - e^(ax)/a^2 + C
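The quadratic formula and the difference-quotient definition of the derivative are easy to sanity-check numerically. A minimal sketch in Python (not part of the original notes; the helper name `quadratic_roots` is illustrative only):

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via (-b ± sqrt(b^2 - 4ac)) / (2a)."""
    disc = b**2 - 4*a*c
    if disc < 0:
        raise ValueError("discriminant < 0: distinct complex roots")
    return ((-b + math.sqrt(disc)) / (2*a), (-b - math.sqrt(disc)) / (2*a))

# x^2 - 5x + 6 = (x - 2)(x - 3): discriminant 1 > 0, distinct real roots
assert quadratic_roots(1, -5, 6) == (3.0, 2.0)

# Derivative as a limit of difference quotients: d/dx sin(x) = cos(x)
x0, h = 0.3, 1e-7
approx = (math.sin(x0 + h) - math.sin(x0)) / h
assert abs(approx - math.cos(x0)) < 1e-6
```

The assertions pass because for small h the forward difference quotient agrees with cos(x_0) to roughly first order in h.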
11. Improper integrals: recall that

    ∫_{-∞}^{∞} f(x) dx = lim_{a→∞} ∫_{-a}^{a} f(x) dx

    This can be useful if the integral is not defined at some point, or if f is discontinuous at x = a; then we can use

    ∫_a^b f(x) dx = lim_{c→a} ∫_c^b f(x) dx

    Similarly, if f(x) has a discontinuity at the point x = c in the interior of [a, b], then

    ∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx

    One example to clarify this a little:

    ∫_0^1 x^(-1/2) dx = lim_{c→0} ∫_c^1 x^(-1/2) dx = lim_{c→0} 2x^(1/2) |_c^1 = lim_{c→0} [2 - 2√c] = 2

    More examples on page 18.

12. Some other useful integration rules are:

    (i) For integer n ≥ 0 and real number c > 0, we have ∫_0^∞ x^n e^(-cx) dx = n! / c^(n+1).

13. Geometric progression: the sum of the first n terms is

    a + ar + ar^2 + ... + ar^(n-1) = a[1 + r + r^2 + ... + r^(n-1)] = a (r^n - 1)/(r - 1) = a (1 - r^n)/(1 - r)

    and, for |r| < 1,

    Σ_{k=0}^{∞} a r^k = a / (1 - r)

14. Arithmetic progression: the sum of the first n terms is

    a + (a + d) + (a + 2d) + ... + (a + (n-1)d) = na + d n(n-1)/2

2 Basic Probability Concepts

1. Outcomes are exhaustive if they combine to form the entire probability space, or equivalently, if at least one of the outcomes must occur whenever the experiment is performed. In other words, if A1 ∪ A2 ∪ ... ∪ An = Ω, then A1, A2, ..., An are referred to as exhaustive events.

2. Example 1-1 on page 37 summarizes many key definitions very well.

3. Some useful operations on events:

   (i) Let A, B1, B2, ..., Bn be any events. Then

       A ∩ (B1 ∪ B2 ∪ ... ∪ Bn) = (A ∩ B1) ∪ (A ∩ B2) ∪ ... ∪ (A ∩ Bn)

       and

       A ∪ (B1 ∩ B2 ∩ ... ∩ Bn) = (A ∪ B1) ∩ (A ∪ B2) ∩ ... ∩ (A ∪ Bn)

   (ii) If B1 ∪ B2 = Ω, then A = (A ∩ B1) ∪ (A ∩ B2). This of course applies to a partition of any size. To put this in the context of probabilities, for disjoint B1 and B2 we have P(A) = P(A ∩ B1) + P(A ∩ B2).

4. An event A consists of a subset of sample points in the probability space. In the case of a discrete probability space, the probability of A is P(A) = Σ_{ai ∈ A} P(ai), the sum of P(ai) over all sample points in event A.
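The series and integral formulas above can be confirmed numerically. A small sketch in Python (not from the original notes; the tolerances and crude Riemann sum are illustrative choices):

```python
import math

# Geometric progression: a + ar + ... + ar^(n-1) = a*(1 - r^n)/(1 - r)
a, r, n = 3.0, 0.5, 10
partial = sum(a * r**k for k in range(n))
assert abs(partial - a * (1 - r**n) / (1 - r)) < 1e-12

# Infinite geometric series for |r| < 1 converges to a/(1 - r)
assert abs(sum(a * r**k for k in range(200)) - a / (1 - r)) < 1e-12

# Integral rule: ∫_0^∞ x^n e^(-cx) dx = n!/c^(n+1),
# checked with a crude right-endpoint Riemann sum truncated at x = 20
m, c, h = 3, 2.0, 1e-4
riemann = sum((k * h)**m * math.exp(-c * k * h) * h for k in range(1, 200000))
assert abs(riemann - math.factorial(m) / c**(m + 1)) < 1e-3
```

For n = 3 and c = 2 the exact value is 3!/2^4 = 0.375, and the truncated Riemann sum lands well inside the stated tolerance.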
5. An important inequality:

   P(∪_{i=1}^n Ai) ≤ Σ_{i=1}^n P(Ai)

   Notice that equality holds only if the events are mutually exclusive; any overlap reduces the total probability.

3 Conditional Probability and Independence

1. Recall the multiplication rule: P(B ∩ A) = P(B|A) P(A).

2. When we condition on event A, we are assuming that event A has occurred, so that A becomes the new probability space and all conditional events must take place within event A. Dividing by P(A) scales all probabilities so that A is the entire probability space, and P(A|A) = 1.

3. A useful fact (the law of total probability): P(B) = P(B|A) P(A) + P(B|A') P(A').

4. Bayes' rule:

   (i) The basic form is P(A|B) = P(A ∩ B) / P(B) = P(B|A) P(A) / P(B). This can be expanded even further as follows:

       P(A|B) = P(A ∩ B) / P(B) = P(A ∩ B) / [P(B ∩ A) + P(B ∩ A')] = P(B|A) P(A) / [P(B|A) P(A) + P(B|A') P(A')]

   (ii) The extended form: if A1, A2, ..., An form a partition of the entire probability space Ω, then

       P(Ai|B) = P(B ∩ Ai) / P(B) = P(B ∩ Ai) / Σ_{j=1}^n P(B ∩ Aj) = P(B|Ai) P(Ai) / Σ_{j=1}^n P(B|Aj) P(Aj)   for each i = 1, 2, ..., n

5. If events A1, A2, ..., An satisfy the relationship

   P(A1 ∩ A2 ∩ ... ∩ An) = Π_{i=1}^n P(Ai)

   together with the corresponding product rule for every sub-collection of the events, then the events are said to be mutually independent.

6. Some useful facts:

   (i) If P(A1 ∩ A2 ∩ ... ∩ A(n-1)) > 0, then

       P(A1 ∩ A2 ∩ ... ∩ An) = P(A1) × P(A2|A1) × P(A3|A1 ∩ A2) × ... × P(An|A1 ∩ A2 ∩ ... ∩ A(n-1))

   (ii) P(A'|B) = 1 - P(A|B)

   (iii) P(A ∪ B|C) = P(A|C) + P(B|C) - P(A ∩ B|C)

4 Combinatorial Principles, Permutations and Combinations

1. We say that we are choosing an ordered subset of size k without replacement from a collection of n objects if, after the first object is chosen, the next object is chosen from the remaining n - 1, the next after that from the remaining n - 2, and so on. The number of ways of doing this is n!/(n - k)! and is denoted nPk.

2. Given n objects, of which n1 are of Type 1, n2 are of Type 2, ..., and nt are of Type t, with n = n1 + n2 + ... + nt, the number of ways of ordering all n objects is the multinomial coefficient
   n! / (n1! n2! ... nt!)

3. Remember that C(n, k) = C(n, n - k), where C(n, k) = n!/[k!(n - k)!] denotes the binomial coefficient "n choose k".

4. Given n objects, of which n1 are of Type 1, n2 are of Type 2, ..., and nt are of Type t, with n = n1 + n2 + ... + nt, the number of ways of choosing a subset of size k ≤ n with k1 objects of Type 1, k2 objects of Type 2, ..., and kt objects of Type t, where k = k1 + k2 + ... + kt, is

   C(n1, k1) × C(n2, k2) × ... × C(nt, kt)

5. Recall the binomial theorem: for a positive integer N,

   (1 + t)^N = Σ_{k=0}^{N} C(N, k) t^k

6. Multinomial theorem: in the power series expansion of (t1 + t2 + ... + ts)^N, where N is a positive integer, the coefficient of t1^(k1) t2^(k2) ... ts^(ks) (where k1 + k2 + ... + ks = N) is N!/(k1! k2! ... ks!). For example, in the expansion of (1 + x + y)^4, the coefficient of x y^2 is the coefficient of 1^1 x^1 y^2, which is 4!/(1! 1! 2!) = 12.

5 Random Variables and Probability Distributions

1. The formal definition of a random variable is that it is a function on a probability space Ω. This function assigns a real number X(s) to each sample point s ∈ Ω.

2. The probability function of a discrete random variable can be described in a probability plot or in a histogram.

3. Note that for a continuous random variable X, the following are all equal: P(a < X < b), P(a < X ≤ b), P(a ≤ X < b), and P(a ≤ X ≤ b).

4. Mixed distribution: a random variable that has some points with non-zero probability mass, together with a continuous pdf on one or more intervals, is said to have a mixed distribution. The probability space is a combination of a set of discrete points of probability for the discrete part of the random variable along with one or more intervals of density for the continuous part. The sum of the probabilities at the discrete points plus the integral of the density function over the continuous region must be 1. For example, suppose that X has a probability of 0.5 at X = 0, X is a continuous random variable on the interval (0, 1) with density function f(x) = x for 0 < x < 1, and X has no density or probability elsewhere.
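Both the multinomial-coefficient example and the mixed-distribution example can be verified directly. A minimal sketch in Python (not from the original notes; the midpoint Riemann sum is an illustrative numerical stand-in for the exact integral):

```python
import math

# Coefficient of x * y^2 in (1 + x + y)^4: 4!/(1! * 1! * 2!) = 12
coef = math.factorial(4) // (math.factorial(1) * math.factorial(1) * math.factorial(2))
assert coef == 12

# Mixed distribution: P(X = 0) = 0.5 plus density f(x) = x on (0, 1).
# Point mass + integral of the density must equal 1: 0.5 + ∫_0^1 x dx = 0.5 + 0.5
point_mass = 0.5
n = 100_000
h = 1.0 / n
integral = sum((k + 0.5) * h * h for k in range(n))  # midpoint Riemann sum of f(x) = x
assert abs(point_mass + integral - 1.0) < 1e-9

# The cdf mixes both parts: P(X <= 0.5) = 0.5 + ∫_0^0.5 x dx = 0.5 + 0.125
cdf_half = point_mass + 0.5**2 / 2
assert abs(cdf_half - 0.625) < 1e-12
```

The midpoint rule is exact for a linear density, so the total-probability check holds to floating-point precision.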