Bayesian Decision Theory Chapter 2 (Jan 11, 18, 23, 25)
• Bayes decision theory is a fundamental statistical approach to pattern classification
• Assumption: the decision problem is posed in probabilistic terms and all relevant probability values are known

Taxonomy of decision-making approaches (figure):
  Decision Making
    Probabilistic model known → Bayes Decision Theory (Chapter 2): "optimal" rules
    Probabilistic model unknown
      Supervised Learning
        Parametric approach (Chapter 3): plug-in rules
        Nonparametric approach (Chapters 4, 6): density estimation, k-NN, neural networks
      Unsupervised Learning
        Parametric approach (Chapter 10): mixture models
        Nonparametric approach (Chapter 10): cluster analysis

Sea Bass vs. Salmon Classification
• Each fish appearing on the conveyor belt is either a sea bass or a salmon; these are the two "states of nature"
• Let ω denote the state of nature: ω1 = sea bass and ω2 = salmon; ω is a random variable that must be described probabilistically
• A priori (prior) probabilities: P(ω1) and P(ω2); P(ω1) is the probability that the next fish observed is a sea bass
• If no other types of fish are present, then P(ω1) + P(ω2) = 1 (exclusivity and exhaustivity)
• If we have no reason to favor either class, we may take P(ω1) = P(ω2) (uniform priors)
• The prior probabilities reflect our prior knowledge about how likely we are to observe a sea bass or a salmon; they may depend on the time of year or the fishing area!

• Case 1: Suppose we are asked to make a decision without observing the fish; we only have prior information
• Bayes decision rule given only prior information: decide ω1 if P(ω1) > P(ω2), otherwise decide ω2
• Error rate = min{P(ω1), P(ω2)}

• Suppose now we are allowed to measure a feature of the fish, say its lightness value x
• Define the class-conditional probability density function (pdf) of the feature x; x is a random variable
• p(x | ωi) is the pdf of x given class ωi, i = 1, 2; p(x | ωi) ≥ 0 and the area under each pdf is 1. The less the densities overlap, the better the feature

• Case 2: Suppose we only have the class-conditional densities and no prior information
• Maximum likelihood decision rule: assign input pattern x to class ω1 if p(x | ω1) > p(x | ω2), otherwise to ω2
• p(x | ω1) is also called the likelihood of class ω1 given the feature value x

• Case 3: We have both the prior probabilities and the class-conditional densities
• How does the feature x influence our attitude (prior) concerning the true state of nature?
• Bayes decision rule: the posterior probability is a function of the likelihood and the prior
• Joint density: p(ωj, x) = P(ωj | x) p(x) = p(x | ωj) P(ωj)
• Bayes rule: P(ωj | x) = p(x | ωj) P(ωj) / p(x), j = 1, 2, where p(x) = Σj=1..2 p(x | ωj) P(ωj)
• Posterior = (Likelihood × Prior) / Evidence
• The evidence p(x) can be viewed as a scale factor that guarantees that the posterior probabilities sum to 1
• p(x) is also called the unconditional density of the feature x
• P(ω1 | x) is the probability that the state of nature is ω1 given that feature value x has been observed
• A decision based on the posterior probabilities is called the "optimal" Bayes decision rule. What does optimal mean?
  For a given observation (feature value) x:
    if P(ω1 | x) > P(ω2 | x), decide ω1
    if P(ω1 | x) < P(ω2 | x), decide ω2
  To justify the above rule, calculate the probability of error:
    P(error | x) = P(ω1 | x) if we decide ω2
    P(error | x) = P(ω2 | x) if we decide ω1
• So, for a given x, we minimize the probability of error by deciding ω1 if P(ω1 | x) > P(ω2 | x) and ω2 otherwise; therefore P(error | x) = min[P(ω1 | x), P(ω2 | x)]
• For each observation x, the Bayes decision rule minimizes the probability of error
• Unconditional error: P(error) is obtained by integrating P(error | x) over all possible observations x w.r.t. p(x): P(error) = ∫ P(error | x) p(x) dx
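To make Case 3 concrete, the following sketch (not from the notes) computes the posteriors and the Bayes decision for the sea bass/salmon example; the Gaussian form of the class-conditional densities, the parameter values, and the priors are invented for illustration only.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical class-conditional densities p(x | w_i) for the lightness feature x
# (the Gaussian shapes, parameter values, and priors are assumptions, not from the notes).
priors = {"sea_bass": 0.6, "salmon": 0.4}          # P(w1), P(w2)
likelihoods = {
    "sea_bass": norm(loc=7.0, scale=1.5),          # p(x | w1)
    "salmon":   norm(loc=4.0, scale=1.0),          # p(x | w2)
}

def posteriors(x):
    """Bayes rule: P(w_j | x) = p(x | w_j) P(w_j) / p(x)."""
    joint = {w: likelihoods[w].pdf(x) * priors[w] for w in priors}   # p(x | w_j) P(w_j)
    evidence = sum(joint.values())                                   # p(x), the scale factor
    return {w: joint[w] / evidence for w in joint}

def bayes_decide(x):
    """Decide the class with the largest posterior; the other posterior is P(error | x)."""
    post = posteriors(x)
    decision = max(post, key=post.get)
    p_error = min(post.values())       # P(error | x) = min[P(w1 | x), P(w2 | x)]
    return decision, p_error

print(bayes_decide(5.5))               # decision and conditional error for one lightness value
```

The same structure covers the earlier cases: Case 1 compares only the priors, and Case 2 compares only the likelihood values.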
• Optimal Bayes decision rule: decide ω1 if P(ω1 | x) > P(ω2 | x); otherwise decide ω2
• Special cases:
  (i) P(ω1) = P(ω2): decide ω1 if p(x | ω1) > p(x | ω2), otherwise ω2
  (ii) p(x | ω1) = p(x | ω2): decide ω1 if P(ω1) > P(ω2), otherwise ω2

Bayesian Decision Theory – Continuous Features
• Generalization of the preceding formulation:
  • Use of more than one feature (d features)
  • Use of more than two states of nature (c classes)
  • Allowing actions other than deciding on the state of nature
  • Introduction of a "loss function"; minimizing the "risk" is more general than minimizing the probability of error
• Allowing actions other than classification primarily allows the possibility of "rejection"
• Rejection: the input pattern is rejected when it is difficult to decide between two classes or the pattern is too noisy!
• The loss function specifies the cost of each action

• Let {ω1, ω2, …, ωc} be the set of c states of nature (or "categories" or "classes")
• Let {α1, α2, …, αa} be the set of a possible actions that can be taken for an input pattern x
• Let λ(αi | ωj) be the loss incurred for taking action αi when the true state of nature is ωj
• Decision rule: α(x) specifies which action to take for every possible observation x

• Conditional risk: R(αi | x) = Σj=1..c λ(αi | ωj) P(ωj | x)
• For a given x, suppose we take the action αi:
  • If the true state is ωj, we incur the loss λ(αi | ωj)
  • P(ωj | x) is the probability that the true state is ωj
  • But any one of the c states is possible for the given x
• Overall risk: R = expected value of the conditional risk R(α(x) | x) w.r.t. p(x), i.e., R = ∫ R(α(x) | x) p(x) dx

• Minimizing R: minimize R(αi | x) for i = 1, …, a, i.e., select the action αi for which R(αi | x) is minimum
• This action minimizes the overall risk
• The resulting risk is called the Bayes risk
• It is the best classification performance that can be achieved given the priors, the class-conditional densities, and the loss function!

• Two-category classification:
  α1: decide ω1; α2: decide ω2
  λij = λ(αi | ωj): loss incurred in deciding ωi when the true state of nature is ωj
• Conditional risks:
  R(α1 | x) = λ11 P(ω1 | x) + λ12 P(ω2 | x)
  R(α2 | x) = λ21 P(ω1 | x) + λ22 P(ω2 | x)
• Bayes decision rule: if R(α1 | x) < R(α2 | x), take action α1 ("decide ω1")
• This rule is equivalent to: decide ω1 if (λ21 − λ11) p(x | ω1) P(ω1) > (λ12 − λ22) p(x | ω2) P(ω2); decide ω2 otherwise
• In terms of the likelihood ratio (LR), the preceding rule is equivalent to:
  if p(x | ω1) / p(x | ω2) > [(λ12 − λ22) / (λ21 − λ11)] · [P(ω2) / P(ω1)]
  then take action α1 (decide ω1); otherwise take action α2 (decide ω2)
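As a sanity check on the two-category result, here is a minimal sketch (the loss values, priors, and density values are assumed, not from the notes) that computes the conditional risks R(αi | x) directly and also applies the equivalent likelihood-ratio threshold; both formulations select the same action.

```python
import numpy as np

# Minimum-risk decision for the two-category case (a sketch; all numbers are assumed).
# Rows of LOSS are actions a_i, columns are true states w_j: LOSS[i, j] = lambda(a_i | w_j).
LOSS = np.array([[0.0, 2.0],     # lambda_11, lambda_12
                 [1.0, 0.0]])    # lambda_21, lambda_22
PRIORS = np.array([0.6, 0.4])    # P(w1), P(w2)

def min_risk_action(likelihood):
    """likelihood[j] = p(x | w_j). Returns 0 to decide w1, 1 to decide w2."""
    posterior = likelihood * PRIORS
    posterior /= posterior.sum()                 # P(w_j | x)
    cond_risk = LOSS @ posterior                 # R(a_i | x) = sum_j lambda(a_i | w_j) P(w_j | x)
    return int(np.argmin(cond_risk))

def lr_action(likelihood):
    """Equivalent likelihood-ratio form of the same rule."""
    lr = likelihood[0] / likelihood[1]           # p(x | w1) / p(x | w2)
    threshold = ((LOSS[0, 1] - LOSS[1, 1]) / (LOSS[1, 0] - LOSS[0, 0])) * (PRIORS[1] / PRIORS[0])
    return 0 if lr > threshold else 1

# The two formulations agree for a given pair of class-conditional density values:
lik = np.array([0.30, 0.15])                     # assumed values of p(x | w1), p(x | w2)
assert min_risk_action(lik) == lr_action(lik)
```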
• The "threshold" term on the right-hand side now involves both the priors and the loss function
• Interpretation of the Bayes decision rule: if the likelihood ratio of class ω1 to class ω2 exceeds a threshold value (independent of the input pattern x), the optimal action is to decide ω1
• The maximum likelihood decision rule is a special case of the minimum-risk decision rule, obtained when:
  • the threshold value is 1
  • the loss function is the 0-1 loss
  • the class prior probabilities are equal

Bayesian Decision Theory (Sections 2.3-2.5)
• Minimum Error Rate Classification
• Classifiers, Discriminant Functions and Decision Surfaces
• Multivariate Normal (Gaussian) Density

Minimum Error Rate Classification
• Actions are decisions on classes: if action αi is taken and the true state of nature is ωj, the decision is correct if i = j and in error if i ≠ j
• Seek a decision rule that minimizes the probability of error, i.e., the error rate
• Zero-one (0-1) loss function: no loss for a correct decision and a unit loss for any incorrect decision:
  λ(αi | ωj) = 0 if i = j, 1 if i ≠ j,  i, j = 1, …, c
• The conditional risk can now be simplified as:
  R(αi | x) = Σj=1..c λ(αi | ωj) P(ωj | x) = Σj≠i P(ωj | x) = 1 − P(ωi | x)
• "The risk corresponding to the 0-1 loss function is the average probability of error"
• Minimizing the risk under the 0-1 loss function requires maximizing the posterior probability P(ωi | x), since R(αi | x) = 1 − P(ωi | x)
• Minimum error rate rule: decide ωi if P(ωi | x) > P(ωj | x) for all j ≠ i

• Decision boundaries and decision regions:
  Let θ = [(λ12 − λ22) / (λ21 − λ11)] · [P(ω2) / P(ω1)]; then decide ω1 if p(x | ω1) / p(x | ω2) > θ
• If λ is the 0-1 loss function, the threshold involves only the priors:
  λ = [0 1; 1 0] gives θa = P(ω2) / P(ω1)
  λ = [0 2; 1 0] gives θb = 2 P(ω2) / P(ω1)

Classifiers, Discriminant Functions and Decision Surfaces
• There are many different ways to represent classifiers or decision rules; one of the most useful is in terms of "discriminant functions"
• The multi-category case:
  • Set of discriminant functions gi(x), i = 1, …, c
  • The classifier assigns a feature vector x to class ωi if gi(x) > gj(x) for all j ≠ i

Network Representation of a Classifier
• The Bayes classifier can be represented in this way, but the choice of discriminant function is not unique
• gi(x) = −R(αi | x) (maximum discriminant corresponds to minimum risk)
• For the minimum error rate, we take gi(x) = P(ωi | x) (maximum discriminant corresponds to maximum posterior!)
• Equivalent choices: gi(x) = p(x | ωi) P(ωi), or gi(x) = ln p(x | ωi) + ln P(ωi) (ln: natural log)
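A minimal sketch of the discriminant-function view for the multi-category, minimum-error-rate case (the posterior values below are assumed for illustration): maximizing gi(x) = P(ωi | x) and minimizing the 0-1 conditional risk R(αi | x) = 1 − P(ωi | x) pick the same class.

```python
import numpy as np

# Minimum-error-rate classification with discriminant functions (sketch; values assumed).
def classify(posteriors):
    """Assign x to the class with the largest discriminant g_i(x) = P(w_i | x)."""
    g = np.asarray(posteriors)
    return int(np.argmax(g))            # index i of the decision region R_i containing x

def classify_via_risk(posteriors):
    """Under 0-1 loss, R(a_i | x) = 1 - P(w_i | x); minimizing risk = maximizing posterior."""
    risk = 1.0 - np.asarray(posteriors)
    return int(np.argmin(risk))

p = [0.2, 0.5, 0.3]                     # assumed posteriors P(w_i | x) for c = 3 classes
assert classify(p) == classify_via_risk(p) == 1
```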
• A decision rule partitions the feature space into c decision regions: if gi(x) > gj(x) for all j ≠ i, then x is in Ri, and in region Ri the input pattern x is assigned to class ωi

• Two-category case:
  • Here the classifier is a "dichotomizer" that has two discriminant functions g1 and g2
  • Let g(x) ≡ g1(x) − g2(x); decide ω1 if g(x) > 0, otherwise decide ω2
  • A "dichotomizer" computes a single discriminant function g(x) and classifies x according to whether g(x) is positive or not
  • Computation of g(x) = g1(x) − g2(x), e.g.:
    g(x) = P(ω1 | x) − P(ω2 | x)
    g(x) = ln [p(x | ω1) / p(x | ω2)] + ln [P(ω1) / P(ω2)]

The Normal Density
• Univariate density: N(μ, σ²)
  • The normal density is analytically tractable
  • It is a continuous density with two parameters (mean and variance)
  • A number of processes are asymptotically Gaussian (Central Limit Theorem)
  • Patterns (e.g., handwritten characters, speech signals) can be viewed as randomly corrupted (noisy) versions of a single typical or prototype pattern
  p(x) = [1 / (√(2π) σ)] exp[ −(1/2) ((x − μ) / σ)² ],
  where μ = mean (or expected value) of x and σ² = variance (or expected squared deviation) of x

• Multivariate density: N(μ, Σ)
  • Multivariate normal density in d dimensions:
    p(x) = [1 / ((2π)^(d/2) |Σ|^(1/2))] exp[ −(1/2) (x − μ)ᵗ Σ⁻¹ (x − μ) ]
    where x = (x1, x2, …, xd)ᵗ ("t" stands for the transpose of a vector), μ = (μ1, μ2, …, μd)ᵗ is the mean vector, Σ is the d×d covariance matrix, and |Σ| and Σ⁻¹ are the determinant and inverse of Σ, respectively
  • The covariance matrix Σ is symmetric and positive semidefinite; we assume Σ is positive definite, so the determinant of Σ is strictly positive
  • The multivariate normal density is completely specified by d + d(d+1)/2 parameters (the d components of μ and the d(d+1)/2 distinct entries of Σ)
  • If the variables x1 and x2 are "statistically independent", then the covariance of x1 and x2 is zero
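The sketch below (with assumed means, covariance, and priors) evaluates the multivariate normal density from the formula above and uses it in the discriminant gi(x) = ln p(x | ωi) + ln P(ωi) for two Gaussian classes.

```python
import numpy as np

# Evaluating the multivariate normal density N(mu, Sigma) in d dimensions (a sketch;
# the mean vectors, covariance matrix, and priors below are assumed for illustration).
def mvn_pdf(x, mu, sigma):
    """p(x) = exp(-0.5 (x-mu)^t Sigma^{-1} (x-mu)) / ((2*pi)^(d/2) |Sigma|^(1/2))."""
    d = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.inv(sigma) @ diff          # quadratic form (x-mu)^t Sigma^{-1} (x-mu)
    norm_const = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(sigma))
    return np.exp(-0.5 * quad) / norm_const

def gaussian_discriminant(x, mu, sigma, prior):
    """g_i(x) = ln p(x | w_i) + ln P(w_i) for a Gaussian class-conditional density."""
    return np.log(mvn_pdf(x, mu, sigma)) + np.log(prior)

# Two classes in d = 2 dimensions with assumed parameters:
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
sigma = np.eye(2)                                      # shared identity covariance
x = np.array([1.5, 1.0])
g1 = gaussian_discriminant(x, mu1, sigma, prior=0.5)
g2 = gaussian_discriminant(x, mu2, sigma, prior=0.5)
print("decide w1" if g1 > g2 else "decide w2")         # x is closer to mu2, so w2 is chosen
```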