ECON 337901 FINANCIAL ECONOMICS
Peter Ireland
Boston College
Spring 2021

These lecture notes by Peter Ireland are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License. http://creativecommons.org/licenses/by-nc-sa/4.0/

1 Mathematical and Economic Foundations

A Mathematical Preliminaries
  1 Unconstrained Optimization
  2 Constrained Optimization

B Consumer Optimization
  1 Graphical Analysis
  2 Algebraic Analysis
  3 The Time Dimension
  4 The Risk Dimension

C General Equilibrium
  1 Optimal Allocations
  2 Equilibrium Allocations

Mathematical Preliminaries

Unconstrained optimization:

  max_x F(x)

Constrained optimization:

  max_x F(x) subject to c ≥ G(x)

Unconstrained Optimization

To find the value of x that solves

  max_x F(x)

you can:

1. Try out every possible value of x.
2. Use calculus.

Since search could take forever, let's use calculus instead.

Theorem. If x* solves max_x F(x), then x* is a critical point of F, that is, F'(x*) = 0.

To see why, suppose F(x) is maximized at x* = 5:

F'(x) > 0 when x < 5, so F(x) can be increased by increasing x.
F'(x) < 0 when x > 5, so F(x) can be increased by decreasing x.
F'(x) = 0 when x = 5, where F(x) is maximized.

Note that the same first-order necessary condition F'(x*) = 0 also characterizes a value of x* that minimizes F(x):

F'(x) = 0 when x = 5, and F(x) is minimized.

The second derivative distinguishes the two cases:

F'(x) = 0 and F''(x) < 0 when x = 5: F(x) is maximized.
F'(x) = 0 and F''(x) > 0 when x = 5: F(x) is minimized.

Theorem. If F'(x*) = 0 and F''(x*) < 0, then x* solves max_x F(x), at least locally.

The first-order condition F'(x*) = 0 and the second-order condition F''(x*) < 0 are sufficient conditions for x* to (locally) maximize F(x).

These conditions can hold at more than one point, however: F'(x**) = 0 and F''(x**) < 0 at a local maximizer x**, and F'(x*) = 0 and F''(x*) < 0 at the global maximizer x*, but F''(x) > 0 in between x* and x**.

Theorem. If F'(x*) = 0 and F''(x) < 0 for all x ∈ R, then x* solves max_x F(x).

For example, if F''(x) < 0 for all x ∈ R and F'(5) = 0, then F(x) is maximized when x = 5.

If F''(x) < 0 for all x ∈ R, then the function F is concave. When F is concave, the first-order condition F'(x*) = 0 is both necessary and sufficient for the value of x that maximizes F(x). And, as we are about to see, concave functions arise frequently and naturally in economics and finance.

Unconstrained Optimization: Example 1

Consider the problem

  max_x −(1/2)(x − τ)²,

where τ is a number (τ ∈ R) that we might call the "target." The first-order condition

  −(x* − τ) = 0

leads us immediately to the solution: x* = τ.
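To see the first-order condition at work numerically, here is a minimal sketch that checks Example 1 with SciPy's scalar minimizer. It is not part of the original notes: the use of scipy.optimize.minimize_scalar and the particular target value tau = 3.0 are illustrative assumptions.

```python
# Numerical check of Example 1: max_x -(1/2)(x - tau)^2 should give x* = tau.
# SciPy minimizes, so we minimize the negative of the objective.
from scipy.optimize import minimize_scalar

tau = 3.0  # illustrative "target" value; any real number works

def negative_objective(x):
    # Negative of F(x) = -(1/2)(x - tau)^2, i.e. (1/2)(x - tau)^2
    return 0.5 * (x - tau) ** 2

result = minimize_scalar(negative_objective)
print(result.x)  # approximately 3.0, matching the analytical solution x* = tau
```

Because this objective is concave (F''(x) = −1 < 0 for all x), the critical point found here is the global maximizer, exactly as the concavity theorem above guarantees.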
Unconstrained Optimization: Example 2

Consider maximizing a function of three variables:

  max_{x_1,x_2,x_3} F(x_1, x_2, x_3)

Even if each variable can take on only 1,000 values, there are one billion possible combinations of (x_1, x_2, x_3) to search over! This is an example of what Richard Bellman (US, 1920-1984) called the "curse of dimensionality."

Consider the problem

  max_{x_1,x_2,x_3} −(1/2)(x_1 − τ)² − (1/2)(x_2 − x_1)² − (1/2)(x_3 − x_2)².

Now the three first-order conditions

  −(x_1* − τ) + (x_2* − x_1*) = 0
  −(x_2* − x_1*) + (x_3* − x_2*) = 0
  −(x_3* − x_2*) = 0

lead us to the solution: x_1* = x_2* = x_3* = τ.

Constrained Optimization

To find the value of x that solves

  max_x F(x) subject to c ≥ G(x)

you can:

1. Try out every possible value of x.
2. Use calculus.

Since search could take forever, let's use calculus instead.

A method for solving constrained optimization problems like

  max_x F(x) subject to c ≥ G(x)

was developed by Joseph-Louis Lagrange (France/Italy, 1736-1813) and extended by Harold Kuhn (US, 1925-2014) and Albert Tucker (US, 1905-1995).

Associated with the problem

  max_x F(x) subject to c ≥ G(x),

define the Lagrangian

  L(x, λ) = F(x) + λ[c − G(x)],

where λ is the Lagrange multiplier.

Then, look for a critical point of the full Lagrangian L(x, λ), instead of just the objective function F by itself. That is, use the first-order condition

  F'(x*) − λ*G'(x*) = 0.

Theorem (Kuhn-Tucker). If x* maximizes F(x) subject to c ≥ G(x), then there exists a value λ* ≥ 0 such that, together, x* and λ* satisfy the first-order condition

  F'(x*) − λ*G'(x*) = 0

and the complementary slackness condition

  λ*[c − G(x*)] = 0.

In the case where c > G(x*), the constraint is nonbinding. The complementary slackness condition λ*[c − G(x*)] = 0 requires that λ* = 0. And the first-order condition F'(x*) − λ*G'(x*) = 0 then requires that F'(x*) = 0.

In the case where c = G(x*), the constraint is binding. The complementary slackness condition λ*[c − G(x*)] = 0 puts no further restriction on λ* ≥ 0. Now the first-order condition F'(x*) − λ*G'(x*) = 0 requires that F'(x*) = λ*G'(x*).

Constrained Optimization: Example 1

For the problem

  max_x −(1/2)(x − 5)² subject to 7 ≥ x,

F(x) = −(1/2)(x − 5)², c = 7, and G(x) = x. The Lagrangian is

  L(x, λ) = −(1/2)(x − 5)² + λ(7 − x).

With this Lagrangian, the first-order condition

  −(x* − 5) − λ* = 0

and the complementary slackness condition

  λ*(7 − x*) = 0

are satisfied with x* = 5, F'(x*) = 0, λ* = 0, and 7 > x*.

Here, the solution has F'(x*) = 0 since the constraint is nonbinding.

Constrained Optimization: Example 2

For the problem

  max_x −(1/2)(x − 5)² subject to 4 ≥ x,

F(x) = −(1/2)(x − 5)², c = 4, and G(x) = x. The Lagrangian is

  L(x, λ) = −(1/2)(x − 5)² + λ(4 − x).

With this Lagrangian, the first-order condition

  −(x* − 5) − λ* = 0

and the complementary slackness condition

  λ*(4 − x*) = 0

are satisfied with x* = 4 and F'(x*) = λ* = 1 > 0.

Here, the solution has F'(x*) = λ*G'(x*) > 0 since the constraint is binding. F'(x*) > 0 indicates that we'd like to increase the value of x, but the constraint won't let us.
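This binding-constraint example can also be checked numerically. The sketch below, assuming SciPy is available, solves Example 2 with an inequality-constrained solver and then recovers the multiplier from the first-order condition F'(x*) = λ*G'(x*); the choice of the SLSQP method and the starting point are illustrative assumptions, not part of the notes.

```python
# Numerical check of constrained Example 2:
# max_x -(1/2)(x - 5)^2 subject to 4 >= x.
import numpy as np
from scipy.optimize import minimize

def negative_objective(x):
    # SciPy minimizes, so return the negative of F(x) = -(1/2)(x - 5)^2
    return 0.5 * (x[0] - 5.0) ** 2

# SciPy expects inequality constraints in the form fun(x) >= 0,
# which here is c - G(x) = 4 - x >= 0.
constraint = {"type": "ineq", "fun": lambda x: 4.0 - x[0]}

result = minimize(negative_objective, x0=np.array([0.0]),
                  method="SLSQP", constraints=[constraint])
x_star = result.x[0]

# Recover the multiplier from the first-order condition
# F'(x*) = lambda* G'(x*) with G'(x) = 1, so lambda* = -(x* - 5).
lambda_star = -(x_star - 5.0)

print(x_star, lambda_star)  # approximately 4.0 and 1.0, as in the notes
```

The positive value of λ* goes hand in hand with the binding constraint: F'(x*) > 0 indicates that we would like to increase x, but the constraint won't let us.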
With a binding constraint, F'(x*) ≠ 0 but F'(x*) − λ*G'(x*) = 0. The value x* that solves the problem is a critical point, not of the objective function F(x), but instead of the entire Lagrangian F(x) + λ[c − G(x)].

Consumer Optimization

1. Graphical Analysis
2. Algebraic Analysis
3. Time Dimension
4. Risk Dimension

Alfred Marshall, Principles of Economics, 1890 - supply and demand.
Francis Edgeworth, Mathematical Psychics, 1881, and Vilfredo Pareto, Manual of Political Economy, 1906 - indifference curves.
John Hicks, Value and Capital, 1939 - wealth and substitution effects.
Paul Samuelson, Foundations of Economic Analysis, 1947 - mathematical reformulation.
Irving Fisher, The Theory of Interest, 1930 - intertemporal extension.
Gerard Debreu, Theory of Value, 1959, and Kenneth Arrow, "The Role of Securities in the Optimal Allocation of Risk Bearing," Review of Economic Studies, 1964 - extensions to include risk and uncertainty.

Consumer Optimization: Graphical Analysis

Consider a consumer who likes two goods: apples and bananas.

  Y = income
  c_a = consumption of apples
  c_b = consumption of bananas
  p_a = price of an apple
  p_b = price of a banana

The consumer's budget constraint is

  Y ≥ p_a c_a + p_b c_b.

So long as the consumer always prefers more to less, the budget constraint will always bind:

  Y = p_a c_a + p_b c_b

or

  c_b = Y/p_b − (p_a/p_b) c_a,

which shows that the graph of the budget constraint is a straight line with slope −(p_a/p_b) and intercept Y/p_b.

The budget constraint describes the consumer's market opportunities. Francis Edgeworth (Ireland, 1845-1926) and Vilfredo Pareto (Italy, 1848-1923) were the first to use indifference curves to describe the consumer's preferences. Each indifference curve traces out a set of combinations of apples and bananas that give the consumer a given level of utility or satisfaction.
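Since the budget line c_b = Y/p_b − (p_a/p_b) c_a from the graphical analysis above is just an affine function of c_a, it is easy to tabulate. The short sketch below uses purely illustrative values for income and prices (not taken from the notes) to compute a few points on the line and confirm its slope and intercept.

```python
# Points on the budget line c_b = Y/p_b - (p_a/p_b) * c_a.
# Income Y and prices p_a, p_b are illustrative values, not from the notes.
import numpy as np

Y = 100.0   # income
p_a = 2.0   # price of an apple
p_b = 4.0   # price of a banana

c_a = np.linspace(0.0, Y / p_a, 6)   # apples, from 0 up to the horizontal intercept Y/p_a
c_b = Y / p_b - (p_a / p_b) * c_a    # bananas affordable at each level of apple consumption

for a, b in zip(c_a, c_b):
    print(f"c_a = {a:5.1f}  c_b = {b:5.2f}")
# Slope = -(p_a/p_b) = -0.5 and vertical intercept = Y/p_b = 25.0, as in the graphical analysis.
```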