
OPTIMIZATION UNDER UNCERTAINTY

LECTURE NOTES 2001

R. T. Rockafellar
University of Washington

CONTENTS

1. INTRODUCTION
2. BASIC STOCHASTIC PROGRAMMING
3. LINEAR PROGRAMMING MODELS
4. EXTENDED LINEAR-QUADRATIC MODELS
5. STATES AND CONTROLS
6. DYNAMIC PROGRAMMING
7. ADVANCED STOCHASTIC PROGRAMMING

1. INTRODUCTION

Problems of optimization under uncertainty are characterized by the necessity of making decisions without knowing what their full effects will be. Such problems appear in many areas of application and present many interesting challenges in concept and computation. For a beginner, it’s most important at the outset to gain some appreciation of just how optimization under uncertainty differs from other branches of optimization, and what the basic modeling issues are. A review of terms will lay the foundation for this.

Optimization: This refers to the analysis and solution of problems in which a single choice must be made from a range of “feasible” ones. Feasible choices are modeled as the elements of some set—the feasible set. The goal is to find a “best” choice (not necessarily unique), or at least a “better” one than might readily be detected. Choices are compared according to the values they give to a certain function—the objective function in the problem. The goal may be maximization or minimization of such values. For simplicity, minimization is taken to be the standard.

Discrete optimization: There are only finitely many options to compare, but the number of them could be enormous. The feasible set is a finite (discrete) set, hopefully with enough structure to support some degree of effective analysis.

Continuous optimization in finite dimensions: The choices are describable in terms of the values of finitely many real variables. The feasible set can therefore be modeled as a subset of a space IRn.

Continuous optimization in infinite dimensions: Not just vectors of variables, but entire functions, scalar-valued or vector-valued, have to be chosen. These unknowns, often interpreted as “policies” of response or control to ongoing situations, may be regarded as elements of an infinite-dimensional vector space.

Sources of infinite dimensionality: Problems typically have infinitely many “degrees of freedom” when the decisions need to be represented as depending on continuous time or on observations of continuous random variables.

Finite-dimensional approximations: Infinite-dimensional problems can’t ordinarily be solved numerically without converting them somehow to problems in finite dimensions.

Uncertainty: Decisions must often be taken in the face of the unknown. Actions decided upon in the present will have consequences that can’t fully be determined until a later stage. But there may be openings for corrective action later or even multiple opportunities for recourse as more and more becomes known.

Potential applications of optimization under uncertainty: Here are some prime examples. Note that time is almost inevitably involved, because exact knowledge of the future is what most typically is lacking.

Generation of electrical power: An electrical utility has to decide each day how much power to produce without yet knowing the demand, which may depend on weather developments (as could affect heating, cooling, etc.). Longer range decisions may concern the amount of coal to purchase or the kinds of buying/selling contracts set up with other utilities.
Operation of reservoirs: Systems of water reservoirs and their connecting flows have to be managed in a reasonable manner for drinking water, irrigation, hydropower, flood control, recreation, and the protection of fish runs. As if these conflicting goals didn’t already pose difficulty, there is climate uncertainty affecting both the inflows into the system and the intensity of irrigation needs and the rest.

Inventory management: Supplies of retail goods or spare parts must reliably be maintained despite the vagaries of demand and the costs of restocking.

Portfolio selection: Investments must be made in stocks, bonds, foreign currencies and the like without knowing for sure how their values will rise or fall. At the root of the uncertainties are interest rates and political developments.

Facility planning: Where should new facilities be located, and what capacity should they have, if they are designed to serve needs that are anticipated from guesswork? Similar: problems of reconfiguration or expansion of existing facilities.

Pollution control: What types and sizes of treatment plants should now be built to ensure water quality in a basin where population growth and future manufacturing requirements can only be estimated roughly, and where anyway a lot depends on what will happen with rainfall over a long term?

Stabilization of mechanisms: An antenna must be kept fixed as steadily as possible on the source of signals it’s supposed to receive, despite random wind gusts that can push it out of line. What configuration of forces should be brought into play to restore misalignments as rapidly as possible, without neglecting the potential of additional wind gusts before the process is complete?

Analysis of biological systems: Are the feeding and breeding strategies of animals optimal in the sense of “maximizing survival” in a particular biological niche with respect to its uncertainties of climate, food supply, predation, and so forth? Note that here the issues are theoretical rather than numerical.

The modeling of uncertainty: The uncertainties in a problem have to be represented in such a manner that their effects on present decision-making can properly be taken into account. This is an interesting and challenging subject.

Stochastic modeling: The uncertain elements in a problem can often be modeled as random variables to which the theory of probability can be applied. For this purpose such elements have to have a “known” probability distribution.

Degrees of knowledge: Such a distribution may be available from statistical data (as with weather variables), or it may be no more than an educated guess (as with interest rates or election prospects)—“subjective” probabilities. Either way, the mathematical treatment is the same.

Deterministic modeling: This refers to mathematical formulations in which uncertainty plays no role. In practice, such modeling often prevails even in situations with obviously important uncertainties, because the modelers don’t know how to cope with those features, or don’t have adequate data to work with, or don’t yet have good software available for getting numerical solutions.

Deterministic versus stochastic: These terms are often contrasted with each other in the description of mathematical models.

Range modeling: Sometimes, when a deterministic model is clearly inappropriate, yet there are few clues to the probabilities that would support a stochastic model, it’s useful to work with ranges of uncertainty.
Various quantities that would otherwise be data parameters are neither given specific values nor probability distributions but merely viewed as restricted to particular intervals. One tries to guard against whatever might happen by thinking of the actual values to be faced as chosen from these intervals by an adversary, as in a game setting.

Uncertain probabilities: This notion can be combined with stochastic modeling by supposing that a probability distribution is present but is incompletely known. The actual distribution to be faced will be chosen by an adversary—like Mother Nature—from some limited set of distributions, described perhaps by ranges on statistical parameters.

The role of scenarios: A common tool in planning for the future is to work with scenarios, which are particular representations of how the future might unfold. Some kind of probabilistic model or simulation is used to generate a batch of such scenarios. The challenge then, though, is how to make good use of the scenarios in coming up with an effective decision.

A common but faulty approach: Often, planners just solve, for each scenario that is generated, an optimization problem which arises from taking that scenario to be the path the future truly will follow. These problems are themselves deterministic in character. Although each yields a prescription of what should be done here and now, there’s no theoretical guidance about the compromise between those prescriptions that should actually be adopted, even when probabilities can be assigned to the individual scenarios. Indeed, the separate prescriptions obtained for the individual scenario problems may be inconsistent with each other and very fragile—not adequately hedged. They’re optimal only in a context where one can act with perfect foresight.

The need for modeling the evolution of information: The crucial feature demanded for serious applications is the effective modeling of how observations at various future stages increase the store of knowledge on which decisions can properly be based, not only in the present, but in subsequent stages as well. In the setting of discrete probability, we’ll formalize this later in terms of a “scenario tree.”

Problem data in an elementary framework: It will help in understanding the main issues if we first look at an uncluttered formulation. A basic problem of optimization in IRn takes the form:

    minimize f0(u) over all u ∈ C

for a function f0 : IRn → IR and a set C ⊂ IRn. This set might be specified as consisting of the points u in an underlying subset U ⊂ IRn that satisfy some constraints on the values of additional expressions fi(u), say

    fi(u) ≤ 0 for i = 1, ..., s,
    fi(u) = 0 for i = s + 1, ..., m,

but for now it will be better not to focus on such a representation and to think instead of the condition u ∈ C as expressing feasibility more abstractly.

Essential objective function: Even simpler notation, very convenient in theoretical discussions, results from the device of enforcing constraints by infinite penalties. For the basic problem just described, the essential objective function is

    f(u) := f0(u) if u ∈ C,  ∞ if u ∉ C.
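To see the infinite-penalty device at work numerically, here is a minimal Python sketch; it is not part of the notes, and the particular f0, the interval U, and the single inequality constraint are hypothetical toy choices made only for illustration. Minimizing the essential objective f over an unrestricted grid of candidate points reproduces the minimizer of f0 over C, because every infeasible point is priced out at ∞.

    import math

    def f0(u):
        # Toy objective (hypothetical): squared distance from the point 3.
        return (u - 3.0) ** 2

    def in_C(u):
        # Toy feasible set (hypothetical): C = { u in U : f1(u) <= 0 }
        # with U = [0, 10] and the single inequality f1(u) = 1 - u <= 0.
        return 0.0 <= u <= 10.0 and (1.0 - u) <= 0.0

    def f(u):
        # Essential objective: f0 on C, infinite penalty off C.
        return f0(u) if in_C(u) else math.inf

    # Unconstrained grid search over [-5, 15]; infeasible points never win,
    # since their essential-objective value is infinite.
    grid = [k / 100.0 for k in range(-500, 1501)]
    u_best = min(grid, key=f)
    print(u_best, f(u_best))   # prints 3.0 0.0, the constrained minimizer of f0 over C

Of course, no solver literally evaluates ∞; as the notes say, the device is a notational convenience for theoretical discussion, letting the constrained problem be written as unconstrained minimization of f.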