Risk based multi-objective security control and congestion management

by

Fei Xiao

A dissertation submitted to the graduate faculty in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Major: Electrical Engineering

Program of Study Committee:
James D. McCalley, Major Professor
Chen-Ching Liu
William Q. Meeker
Ajjarapu Venkataramana
Sarah M. Ryan

Iowa State University
Ames, Iowa
2007
Copyright © Fei Xiao, 2007. All rights reserved.

TABLE OF CONTENTS

LIST OF FIGURES ...... vii

LIST OF TABLES...... x

ACKNOWLEDGEMENTS ...... xi

ABSTRACT ...... xii

CHAPTER 1: INTRODUCTION ...... 1
1.1 Power System Security Definition ...... 1
1.2 Power System Security Assessment ...... 3
1.2.1 Deterministic Approach ...... 3
1.2.2 Probabilistic Approach ...... 5
1.3 Security Control and Optimization ...... 6
1.3.1 Power System Control ...... 6
1.3.2 Security Constrained Optimization ...... 8
1.3.3 Optimization Methods Summary ...... 9
1.4 Enhancing Market Security Based on Price Signal ...... 12
1.5 Problem Statement ...... 13
1.6 Dissertation Organization ...... 16

CHAPTER 2: AN OVERVIEW OF RISK BASED SECURITY ASSESSMENT ...... 18
2.1 Introduction ...... 18
2.2 Architecture of Risk Based Security Assessment ...... 20
2.3 Contingency Screening ...... 24
2.3.1 Literature Overviews of Contingency Screening ...... 24
2.3.2 Proposed Screening Approach ...... 26
2.3.3 Contingency Screening for Overload and Low Voltage ...... 27


2.3.4 Contingency Screening for Voltage Instability ...... 29
2.3.5 Test Result ...... 29
2.4 Operational Risk Measurement ...... 33
2.4.1 General Definition of Risk ...... 33
2.4.2 Low Voltage Risk ...... 37
2.4.3 Overload Risk ...... 37
2.4.4 Voltage Instability Risk ...... 38
2.4.5 Cascading Risk ...... 39
2.5 Risk Monitoring and Visualization ...... 40
2.6 Risk Identification for Security Control ...... 43

CHAPTER 3: COMPUTATION OF CONTINGENCY PROBABILITY FOR ONLINE OPERATION ...... 44
3.1 Introduction ...... 44
3.2 Criteria Review ...... 46
3.3 Model Improvement for Operational Decisions ...... 48
3.4 Pooling Data and Associated Statistical Methods ...... 49
3.4.1 Pooling Data for Performance Analysis ...... 49
3.4.2 Failure Rate Estimation for Each Pool ...... 50
3.5 Failure Rate Calculation Related to Weather ...... 52
3.5.1 Why Choose Regression Analysis ...... 52
3.5.2 Applying Multiple Linear Regression ...... 53
3.5.3 Hypothesis Testing ...... 54
3.6 Calculation Procedure ...... 55
3.7 Illustrative Example ...... 57
3.8 Summary ...... 62

CHAPTER 4: RISK BASED SECURITY CONTROL DESIGN ...... 63
4.1 Introduction ...... 63
4.2 Formulation of Risk Based Multi-objective Optimization ...... 67
4.3 Basic Properties of Multi-objective Optimization Problem ...... 70


4.3.1 Single and Multi-objective Optimization ...... 70
4.3.2 Concept of Domination ...... 72
4.3.3 Pareto-Optimality ...... 73
4.3.4 Finding Efficient Solutions ...... 74
4.3.5 Non-Dominated Sorting of a Population ...... 75
4.3.6 Approaches to Multi-objective Optimization ...... 76
4.4 Introduction of Multi-criterion Decision Making Methods ...... 78
4.4.1 Value or Utility–based Approaches ...... 78
4.4.2 ELECTRE IV ...... 79
4.4.3 Evidential Theory ...... 79
4.4.4 Promethee ...... 80
4.4.5 Goal Programming ...... 80
4.4.6 Lexicographic Method ...... 80
4.4.7 Interactive Approach ...... 81
4.4.8 Method Summary ...... 81
4.5 Risk Based Decision Making and Control Strategy ...... 83
4.5.1 Risk Based MCDM Design ...... 83
4.5.2 Architecture of Risk Based Security Control ...... 86
4.6 Summary ...... 88

CHAPTER 5: CLASSIC METHODS IMPLEMENTATION FOR RISK BASED MULTI-OBJECTIVE OPTIMIZATION ...... 89
5.1 Overviews of Classic Multi-objective Methods ...... 89
5.1.1 Weight Method ...... 90
5.1.2 Constraint Method ...... 93
5.2 Risk Based Multi-objective Optimization Problem Implementation Using ...... 94
5.2.1 Review of Linear Programming Problem, and Application ...... 94
5.2.2 Cost and Risk Objective Linearization ...... 99


5.2.3 Power Flow Modeling Using Shift Factor ...... 100
5.2.4 Security Constraint Linearization Using Distribution Factor ...... 101
5.2.5 Linear Programming Method Implementation ...... 101
5.3 Case Study – Six Bus Test System ...... 102
5.3.1 Security Unconstrained OPF ...... 104
5.3.2 Security Constrained OPF ...... 104
5.3.3 Risk Based Multi-objective Optimization Problem ...... 105
5.3.4 Decision Making Using Objective Trade-off Curve ...... 108
5.4 Summary ...... 109

CHAPTER 6: EVOLUTIONARY IMPLEMENTATION FOR RISK BASED MULTI-OBJECTIVE OPTIMIZATION ...... 110
6.1 Introduction ...... 110
6.2 Working Principles of Evolutionary Algorithms ...... 114
6.2.1 Binary Algorithms ...... 114
6.2.2 Real Parameter Algorithms ...... 118
6.2.3 Constraint-Handling in Evolutionary Algorithms ...... 121
6.3 Overviews of Evolutionary Algorithms ...... 122
6.3.1 Multi-objective Genetic Algorithm ...... 123
6.3.2 Non-dominated Sorting Genetic Algorithm ...... 123
6.3.3 Niched Pareto Genetic Algorithm ...... 124
6.3.4 Strength Pareto Evolutionary Algorithm ...... 124
6.3.5 Multi-objective Particle Swarm Optimization ...... 125
6.4 Non-dominated Sorting Genetic Algorithm Implementation ...... 125
6.4.1 Fitness Assignment ...... 125
6.4.2 Algorithm Implementation and Efficiency Improvement ...... 128
6.5 Case Study – IEEE 24 Bus RTS-96 System ...... 129
6.6 Summary ...... 134


CHAPTER 7: RISK BASED CONGESTION MANAGEMENT AND LMP CALCULATION IN ELECTRICITY MARKET ...... 135
7.1 Introduction ...... 135
7.2 Different Methods of Congestion Management ...... 137
7.2.1 Deterministic Congestion Management ...... 137
7.2.2 Risk Based Congestion Management ...... 139
7.2.3 Two Methods Comparison ...... 140
7.3 LMP Decomposition and Analysis ...... 141
7.3.1 Deterministic LMP Decomposition ...... 142
7.3.2 Risk Based LMP Decomposition ...... 143
7.3.3 LMP Comparison ...... 144
7.4 Risk Based LMP Calculation Algorithm Implementation ...... 144
7.5 Case Study ...... 147
7.5.1 Deterministic LMP Result ...... 147
7.5.2 Risk Based LMP Result with Inelastic Demand ...... 148
7.5.3 Risk Based LMP Result with Elastic Demand ...... 150
7.5.4 Social Benefit and Total Transactions ...... 152
7.5.5 Market Surplus ...... 152
7.5.6 The Impact of Contingency Probability on Risk Based LMP ...... 153
7.6 Summary ...... 155

CHAPTER 8: CONCLUSIONS AND FUTURE WORK ...... 156
8.1 Contributions of this Work ...... 156
8.2 Suggestions for Future Works ...... 158

REFERENCES ...... 159


LIST OF FIGURES
Figure 1-1: Power system operating states ...... 2
Figure 2-1: Integration of OL-RBSA with SCADA/EMS ...... 21
Figure 2-2: Illustration of basic OL-RBSA calculation ...... 21
Figure 2-3: Architecture of OL-RBSA ...... 22
Figure 2-4: Severity Function Classification ...... 36
Figure 2-5: Severity Function for Low Voltage ...... 37
Figure 2-6: Severity Function for Overload ...... 38
Figure 2-7: Loadability and Margin ...... 38
Figure 2-8: Severity Function of Voltage Instability ...... 39
Figure 2-9: Security diagram ...... 42
Figure 3-1: Single component Markov Model ...... 46
Figure 3-2: Single component model with weather ...... 47
Figure 3-3: Two-component model with common mode failure ...... 47
Figure 3-4: Data process for regression analysis ...... 56
Figure 3-5: Distribution among weather blocks of number of outages ...... 57
Figure 3-6: Distribution among weather blocks of outage duration ...... 58
Figure 3-7: Distribution among weather blocks of number of outages per hour ...... 58
Figure 3-8: Regression analysis for WZ7 (weather zone), 69kV line outage ...... 59
Figure 3-9: Failure rate curve of one line over a day ...... 61
Figure 4-1: Solutions of a two-objective minimization problem ...... 73
Figure 4-2: Illustration of Pareto front for a two-objective optimization problem ...... 74
Figure 4-3: Ideal multi-objective optimization procedure ...... 76
Figure 4-4: Preference based multi-objective optimization procedure ...... 77
Figure 4-5: Security boundary analysis of risk based decision making ...... 84
Figure 4-6: Architecture of risk based decision making process ...... 87
Figure 5-1: Weight methods for a multi-objective decision problem with convex space of objectives ...... 92


Figure 5-2: Weight methods for a multi-objective decision problem with concave space of objectives ...... 93
Figure 5-3: Generator offer curve and load reduction curve ...... 99
Figure 5-4: Six-Bus Test System ...... 103
Figure 5-5: Security diagram for the solution to Problem 2 ...... 105
Figure 5-6: Pareto-optimal curve for RBMO model ...... 108
Figure 5-7: Security diagram for the solution to RBMO ...... 109
Figure 6-1: The single-point crossover operator ...... 117
Figure 6-2: The bit-wise mutation operator ...... 117
Figure 6-3: The BLX-α operator ...... 119
Figure 6-4: The random mutation operator ...... 120
Figure 6-5: Illustration of fitness computation for MOGA ...... 123
Figure 6-6: Illustration of fitness computation for NSGA ...... 124
Figure 6-7: Objective value calculation using sensitivity data ...... 129
Figure 6-8: 24-Bus RTS-96 System ...... 130
Figure 6-9: Pareto optimal solutions space ...... 131
Figure 6-10: Overload risk visualization ...... 132
Figure 6-11: Generation outputs of SCOPF Model ...... 133
Figure 6-12: Solution CEI evaluation ...... 133
Figure 7-1: LMP result with inelastic demand ...... 148
Figure 7-2: Power clearing offer with inelastic demand ...... 149
Figure 7-3: Security diagram with inelastic demand ...... 150
Figure 7-4: LMP result with elastic demand ...... 151
Figure 7-5: Power clearing offer with elastic demand ...... 151
Figure 7-6: Security diagram with elastic demand ...... 151
Figure 7-7: Pareto optimal curve between social benefit and congestion level ...... 152
Figure 7-8: Risk impact on market total transactions ...... 152
Figure 7-9: Market surplus for inelastic demand ...... 153
Figure 7-10: Market surplus for elastic demand ...... 153
Figure 7-11: LMPs variation with contingency probability ...... 154


Figure 7-12: Power clearing offer variation with contingency probability ...... 154
Figure 7-13: Total transaction variation with contingency probability ...... 155


LIST OF TABLES
TABLE 2-1: Contingency Ranking Result for Overload ...... 30
TABLE 2-2: Contingency Ranking Result for Low Voltage ...... 31
TABLE 2-3: Contingency Ranking Result for Voltage Instability ...... 32
TABLE 3-1: Failure rate prediction (WZ7, 69kV, 10-mile line) ...... 60
TABLE 3-2: Representative sample of contingency probabilities (3:10pm, 8/26/2004) ...... 61
TABLE 4-1: Model Expression of SCOPF and RBMO ...... 68
TABLE 4-2: Comparison of SCOPF and RBMO ...... 69
TABLE 5-1: Branch data for 6-bus system (Base: 100MVA) ...... 103
TABLE 5-2: Generator Cost Curve ...... 104
TABLE 5-3: Solution to Problem 1 (Security Unconstrained OPF) ...... 104
TABLE 5-4: Solution to Problem 2 (Security Constrained OPF) ...... 105
TABLE 5-5: Results with RBMO using WM approach ...... 106
TABLE 5-6: Results with RBMO using CM approach ...... 107
TABLE 6-1: Solution objective comparison ...... 131
TABLE 7-1: Comparison of DCM and RBCM ...... 141
TABLE 7-2: Real Time Market Bid Data ...... 147
TABLE 7-3: LMPs Result Using DCM ...... 148


ACKNOWLEDGEMENTS

I would like to take this opportunity to express my most sincere gratitude to my major professor, Dr. McCalley, for his guidance, patience, and support throughout this research and the writing of this thesis. His professional accomplishments and dedication are a tremendous source of inspiration to me, and I very much enjoyed working under his supervision. I greatly appreciate my committee members, Dr. Chen-Ching Liu, Dr. William Q. Meeker, Dr. Ajjarapu Venkataramana, and Dr. Sarah M. Ryan, for their efforts and for generously providing time and assistance in the completion of this work. It has been a great pleasure to be a part of the power graduate student group at Iowa State University. It is a fantastic research group. I would like to thank alumni Dr. Ming Ni, Dr. Qiming Cheng, Dr. Xiaoyu Wen, Dr. Yong Jiang, Dr. Wei Shao, Dr. Shu Liu, Dr. Dan Yang, Dr. Haifeng Liu, and students Yuan Li, Gang Sheng, Sheng Yang, Cheng Luo, Zhi Gao, and many others. Your great friendship and valuable assistance made my stay in Ames an unforgettable part of my life. I am very grateful to my parents and sister for their endless support and encouragement in every moment of my life. I sincerely thank my wife, Junjun Zhao, for her understanding, support, and sacrifice throughout my doctoral studies. The author acknowledges the financial support of ERCOT for this research work.


ABSTRACT

The deterministic security criterion has served power system operation and congestion management quite well over the last decades. It is simple to implement in a security control model, for example, the security constrained optimal power flow (SCOPF). However, since event likelihood and violation information are not addressed, it does not provide a quantitative understanding of security, and so results in inadequate system awareness. Therefore, even though computational capability and information technology have been greatly improved and widely applied in operation support tools, operators are still not able to eliminate security threats, especially in the competitive market environment. The probabilistic approach has shown its strength for planning purposes and has recently drawn attention in the operations area. Since power system security assessment needs to analyze the consequences of all credible events, risk, defined as the product of event probability and severity, is well suited to quantify the system security level, and the congestion level as well. Since risk captures this extra information, its application to making better online operating decisions has become an attractive research topic.

This dissertation focuses on online system risk calculation, risk based multi-objective optimization model development, risk based security control design, and risk based congestion management. A regression model is proposed to predict contingency probability using weather and geography information for online risk calculation. A risk based multi-objective optimization (RBMO) model is presented, considering the conflicting objectives of risk and cost. Two types of methods, classical methods and evolutionary algorithms, are implemented to solve the RBMO problem. A risk based decision making architecture for security control is designed based on understanding of the Pareto-optimal solutions, a visualization tool, and high-level information analysis. Risk based congestion management provides a market lever to uniformly expand a security "volume", where greater volume means more risk. Meanwhile, the risk based LMP signal contracts all dimensions of this "volume" with proper weights (state probabilities) at a time. Two test systems, a 6-bus system and the IEEE RTS-96, are used to test the developed algorithms. The simulation results show that incorporating risk into security control and congestion management will evolve our understanding of the security level, improve control and market efficiency, and support operators in maneuvering the system in an effective fashion.


CHAPTER 1: INTRODUCTION

1.1 Power System Security Definition

NERC (North American Electric Reliability Council) [1] defines power system reliability as follows: "Reliability, in a bulk power electric system, is the degree to which the performance of the elements of that system results in power being delivered to consumers within accepted standards and in the amount desired. The degree of reliability may be measured by the frequency, duration, and magnitude of adverse effects on consumer service." Reliability can be addressed by considering two basic functional aspects of power systems [2][3]:

Adequacy — the ability of the power system to supply the aggregate electric power and energy requirements of the customer at all times, taking into account scheduled and unscheduled outages of system components.

Security — the ability of the power system to withstand sudden disturbances such as electric short circuits or unanticipated loss of system components.

In the past, there has been some diversity in the practical interpretation of these definitions. For example, for some companies, "security" refers only to overload and voltage assessment, while "adequacy" is associated with analysis of the generation/load balance. Yet others have interpreted "security" as the ability of electric systems to withstand sudden disturbances in terms of the short-term, so-called transient effects, whereas "adequacy" has been interpreted as the ability of the system to supply the load without violation of circuit or bus voltage ratings. Under this interpretation, a secure system was one that was stable for all contingencies in the contingency set. An adequate system was one that would maintain uninterrupted supply to all loads, with all bus voltages and flows within defined ratings, for all contingencies in the contingency set. This division conveniently corresponded to the way in which the two were studied: security was studied using dynamic analysis (time-domain simulation) and adequacy was studied using steady-state analysis (power flow simulation).

In this thesis, we address the manner in which the potential for outage events influences operating and planning decisions. Similar to [4], the term "security" is interpreted as the ability of the system to withstand sudden disturbances in terms of three types of problems that can result from these disturbances: circuit overload, voltage problems, and cascading. We are motivated to include these three types of problems under the same umbrella because our intent is to develop a single assessment and control framework that encompasses all of them. All further references to "security" in this dissertation refer to this conceptualization. The assessment and control of dynamic security, referring to transient instability and oscillatory instability, is another important topic; it is not addressed here but is left for future work.

Figure 1-1 Power system operating states

The notion of security "states" was proposed in [5]. There, it was considered that the power system always resides in one of four states: normal, alert, emergency, or restorative, where the emergency state could be extreme, temporary, or controlled. These states are distinguished based on the system response to one of a defined, limited set of contingencies. These concepts are illustrated in Figure 1-1.

The importance of Figure 1-1 is that it provides a conceptual basis for making decisions related to security. This basis rests on the assumption that any normal state is acceptable, and any other state is unacceptable. Traditionally, security-related decisions in both operations and planning have been made with the criterion that the power system should remain in the normal state at all times. Although conceptually appealing, application of this criterion must confront a serious problem: there does not exist a quantitative method to measure the security level and therefore distinguish between the states. As a consequence, rough rules of thumb are used in the decision process, and the resulting boundaries between the various regions of the stable state, where problems are only potential, do not represent the same degree of risk. Most importantly, the lack of a security level index disguises the fact that there is no fundamental difference between the normal state and the alert state: in both states, unexpected events may cause undesirable consequences. The only difference is that the likelihood and/or severity of the undesirable consequences change, i.e., the states differ only in terms of the risk corresponding to the operating conditions and configuration, and we need a measurable index to reflect this.

1.2 Power System Security Assessment

1.2.1 Deterministic Approach

The general industry practice for security assessment has been to use a deterministic approach [6]. The power system is designed and operated to withstand a set of contingencies referred to as "normal contingencies", selected on the basis that they have a significant likelihood of occurrence. In practice, they are usually defined as the loss of any single element in a power system, either spontaneously or preceded by a single-, double-, or three-phase fault. This is usually referred to as the N-1 criterion because it examines the behavior of an N-component grid following the loss of any one of its major components. In addition, loss of load or cascading outages may not be allowed for multiple-related outages such as the loss of a double-circuit line. Consideration may be given to extreme contingencies that exceed in severity the normal design contingencies. Emergency controls, such as generation tripping, load shedding, and controlled islanding, may be used to cope with such events and prevent widespread blackouts. This approach depends on the application of two criteria during study development.
1. Credibility: The network configuration, outage event, and operating conditions should be reasonably likely to occur.
2. Severity: The outage event, network configuration, and operating condition on which the decision is based should result in the most severe system performance, i.e., there should be no other credible combination of outage event, network configuration, and operating condition which results in more severe system performance.
In practice, the deterministic approach consists of the following steps:

• Select the time period and loading conditions for the study.
• Select the network configuration.
• Select the contingency set.
• Refine the operating conditions in terms of dispatch and voltage profile.
• Perform the simulations of the events and identify any that violate the performance evaluation criteria.

• Identify solutions for any event that results in violation of the performance evaluation criteria.
The deterministic approach has served the industry reasonably well in the past—it has resulted in so-called high security levels and the study effort is minimized. Yet there has been a real and tangible price to pay for using the deterministic approach: solutions tend to be overly conservative, due to the emphasis on the most severe credible event. As a consequence, existing facilities are not fully utilized (operation perspective) and the system becomes overbuilt (planning perspective). One glaring weakness of the deterministic approach is that it is difficult to economically evaluate the security level, and therefore it can be difficult to integrate security into the economic decision making process. More fundamental weaknesses are:

• Occurrence frequency of events is not measured.
• Performance requirements are not uniform. For example, exceeding a conductor's 30-minute emergency overload rating by 1% for 30 minutes is unacceptable, but there is almost zero economic impact. An out-of-step condition at a plant is equally unacceptable, yet the economic impact of replacing the energy source is quite large.

• Non-limiting events are ignored.

1.2.2 Probabilistic Approach

In today's competitive market environment, with a diversity of participants having different business interests, the transmission system is heavily utilized. As a consequence, normal operating conditions are moving closer and closer to levels beyond which the system is vulnerable to costly impacts resulting from credible outage events. There is a need to account for the probabilistic nature of system conditions and events, and to quantify and manage risk. The trend will be to expand the use of risk-based security assessment. In this approach, the probability of the system becoming unstable and its consequences are examined, and the degree of exposure to system failure is estimated. This approach is computationally intensive but is possible with today's computing and analysis tools.

It is known that probabilistic methods constitute powerful tools for use in many kinds of decision-making problems [7][8]. In the works [9]-[12], the authors focus on the development of an index that quantitatively captures the risk of a forecasted operating condition in a particular time period. One of the most relevant applications of the proposed methodology is that it is suited to performing an Online Risk-Based Security Assessment (OL-RBSA). In this application, the context in which the decision making process occurs corresponds to the time period in which the decisions take effect; as a consequence, it is possible to take preventive actions in order to increase security, if necessary, over a very short time frame. The problem addressed in [13] and [14] is to assess the dynamic transfer capability of a given interface between two transmission areas, including uncertainties in load and generation forecasts as well as probabilities of occurrence for the contingencies in the contingency set. References [15] and [16] focus on the problem of assessing the reliability of the system in the presence of a high number of bilateral transactions, using a Monte Carlo technique. In [17] and [18], the authors present a probabilistic approach for the on-line security control and operation of power transmission systems. The objective of the transmission system operator becomes to define inter-zonal power transfer limits as a function of weather conditions, zonal price differentials, and the market situation. The theme of most of the above works is that the security level can be quantitatively assessed using one or more probabilistic metrics. Although the industry has not reached a conclusion regarding which probabilistic metrics are best, there is consensus that using them has the potential to improve analysis and decision-making.

1.3 Security Control and Optimization

1.3.1 Power System Control

A properly designed and operated power system should meet the following fundamental requirements:

• The system must be able to meet the continually changing load demand for active and reactive power.

• The system should supply energy at minimum cost and with minimum ecological impact.


• The "quality" of the power supply must meet certain minimum standards with regard to the following factors: constancy of frequency, constancy of voltage, and level of reliability.
Several levels of controls, involving a complex array of devices, are used to meet the above requirements and maximize social benefit. They are categorized into three subsystems of power system controls:
1. Generating unit controls: prime mover controls and excitation controls. The control variables include generator voltage and reactive power output.
2. System generation controls: load frequency control with economic allocation. The control variables include unit commitment and MW outputs, MW interchange transactions, and load reduction or shedding.
3. Transmission controls: reactive power and voltage control, HVDC transmission and associated controls. The control variables include transformer taps, shunt reactors and capacitors, and line or bus-bar switching.
The controls described above contribute to the satisfactory operation of the power system by maintaining system voltages, frequency, and other system variables within their acceptable limits. The control objectives depend on the operating state of the power system. Under normal conditions, the control objective is to operate as efficiently as possible, with voltages and frequency close to their nominal values. When an abnormal condition develops, new objectives must be met to restore the system to normal operation. Based on the different levels of system security shown in Figure 1-1, there are generally two kinds of security controls:
1. Preventive control actions, such as generation rescheduling or increasing reserve margins, are usually taken to restore the system from the alert state back to the normal state.


2. Corrective control actions, which sometimes are also called emergency control actions, are usually employed to restore the system from the emergency state back to the alert state.

1.3.2 Security Constrained Optimization

When we schedule the power system controls to achieve operation at a desired security level, we need to optimize an objective function as well, such as the cost of operation. Here, the definition of these scheduling controls falls into the generic category of "optimal power flow" (OPF), or "security constrained optimal power flow" (SCOPF) [19][20] with system operating limits. The OPF has been studied for decades and has become a successful and flexible analytical tool that can be applied in everyday use. The classical OPF formulation has a single objective. Objectives can be defined and incorporated into OPF programs as the engineering needs are identified and clarified. The following four objectives are commonly used in industry and in studies:
1. Minimum cost of operation: this objective comprises the sum of the costs of the controlled generations, plus the costs of any controlled interchange transactions.
2. Minimum active-power transmission losses: the controls that can address this objective are all the ones without direct costs, that is, all except MW generations and interchanges. Normally, loss minimization is associated with voltage/VAR scheduling.
3. Minimum deviation from a specified point: the objective is usually defined as the sum of the weighted squares of the deviations of the control variables from their given target values.
4. Minimum number of controls rescheduled: this objective is set in many cases when it is impossible or undesirable to reschedule a large number of controls at the same time.


So far, the objectives defined in the OPF are rarely related to the security level, which is ensured only through security limits formulated as constraints of the OPF, such as branch MW flows and bus voltages for the base case and contingency cases. The upper and lower limits are usually "hard", corresponding to the ranges of the physical apparatus, which is well suited to deterministic security assessment. However, since the system severity is not quantified and event likelihood is not addressed, decision making based on the OPF may be either conservative or risky. Therefore, OPF development in a probabilistic framework is an attractive and promising direction.
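For concreteness, the deterministic SCOPF discussed above can be sketched in the following generic form (generic symbols for illustration only, not the exact notation adopted later in this dissertation):

\min_{u} \; C(u) \quad \text{s.t.} \quad g_k(x_k,u)=0, \;\; h_k(x_k,u)\le h_k^{\max}, \qquad k=0,1,\dots,N_c

where u denotes the controls (e.g., MW dispatch, voltage setpoints), k = 0 the base case and k = 1, ..., N_c the contingency cases, g_k the corresponding power flow equations, and h_k the branch flow and bus voltage limits. Every limit enters as a hard constraint, which is precisely the feature questioned in this work.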

1.3.3 Optimization Methods Summary

The OPF is a large and complex mathematical programming problem. Almost every mathematical programming approach that can be applied to this problem, each with its particular mathematical and computational characteristics, has been attempted, and significant efforts have been made to solve the OPF problem reliably. The methods employed to solve the OPF problem can generally be classified into the following categories.

1.3.3.1 Nonlinear Programming Methods

Nonlinear programming (NLP) methods deal with problems involving nonlinear objective functions and constraints. They are the most essential and commonly used approaches for OPF problems, since power systems behave nonlinearly. Many NLP techniques have been applied since the OPF was first discussed and studied, and their attributes can be summarized as follows [21]-[25]:

• Lagrange multiplier method: The basis of many standard on-line economic dispatch programs.

• Gradient methods: Slow in convergence and difficult to handle inequality constraints.
• Newton's methods: Fast in convergence but may give problems with inequality constraints.


• P-Q decomposition method: Decomposes real power and reactive power during optimization.

• Penalty function method: Makes it easy to handle inequality constraints.

1.3.3.2 Quadratic Programming Methods

Quadratic programming (QP) is a special form of nonlinear programming that treats problems with quadratic objective functions and linear constraints. The QP methods have been employed to solve such special OPF problems as system loss minimization, voltage control, and economic dispatch [26]-[29].
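For reference, the generic QP form is

\min_{x} \; \tfrac{1}{2}\, x^{\mathsf{T}} Q x + c^{\mathsf{T}} x \quad \text{s.t.} \quad A x \le b, \;\; A_{eq} x = b_{eq},

i.e., a quadratic objective (such as a quadratic generation cost or squared losses) minimized over linear constraints; the symbols here are generic and are not tied to a particular OPF formulation.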

1.3.3.3 Linear Programming Methods

Although the NLP-based OPF works very well in many cases, especially for the loss minimization problem, it has difficulties in detecting and handling infeasibility, and it is inefficient for the enforcement of complicated constraints such as contingencies. Moreover, the NLP-based OPF is very time-consuming since a power flow calculation has to be conducted at every iteration. Therefore, it is not suitable for on-line operation and control. These reasons encourage the concentration of efforts on the linear programming (LP) based approach [30]-[34]. There are several main advantages of the LP approach. One is the reliability of the optimization. Another is its ability to recognize problem infeasibility quickly. A third is that it can easily handle complicated constraints such as contingencies. However, the most attractive feature is the very high speed of the calculation. In many cases, especially in emergency situations, there is no need to converge very accurately to obtain a meaningful or optimal result. This offers greater flexibility for trade-offs between computing speed and convergence accuracy than is possible with most NLP approaches. Another attraction of the LP approach is its capability to consider integer variables using the mixed integer linear programming (MILP) technique.


1.3.3.4 Interior Point Methods

For the conventional LP approach, the optimal solution is found by following a series of points on the constraint boundary. However, a new solution algorithm was proposed in the 1980s for linear programming problems that found the optimal solution by following a path through the interior of the constraints directly toward the optimal solution on the constraint boundary [35][36]. This method, thereby called the interior point (IP) method, features the choice of a good starting point and fast convergence compared with conventional LP algorithms, and it has become the basis for many OPF solutions [37][38]. The extension of the IP method to NLP and QP problems has shown superior convergence qualities and promising results.

1.3.3.5 Evolutionary Algorithms

We call the above four groups of methods classical methods, since all of them search for the optimal point using gradient information. One of the most striking differences from classical search and optimization algorithms is that an evolutionary algorithm (EA) uses a population of solutions in each iteration, instead of a single solution. Since a population of solutions is processed in each iteration, the outcome of an EA is also a population of solutions. If an optimization problem has a single optimum, all EA population members can be expected to converge to that optimum solution. However, if an optimization problem has multiple optimal solutions, an EA can be used to capture multiple optimal solutions in its final population. This ability of an EA to find multiple optimal solutions in a single simulation run makes EAs uniquely suited to solving multi-objective (MO) optimization problems. Paper [39] presented the main applications of EAs in power systems before 1996, which included planning and scheduling, operation optimization, and alarm processing and fault diagnosis. Evolutionary programming applied to the OPF algorithm is introduced in [40]-[43]. Multi-objective evolutionary approaches for load control strategies, environmental/economic power dispatch, and reactive power planning are illustrated in [44]-[46], respectively.
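As a minimal illustration of why a population-based search can return an entire trade-off set in a single run, the following Python sketch filters a population down to its non-dominated members. The two-objective values are hypothetical (cost, risk) pairs; this fragment is for illustration only and is not the algorithm implemented in Chapter 6.

    def dominates(a, b):
        # For minimization: a dominates b if it is no worse in every
        # objective and strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated(population):
        # Keep only members not dominated by any other member.
        return [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]

    # Hypothetical (cost, risk) pairs for a small population of candidate dispatches.
    population = [(100.0, 0.8), (120.0, 0.3), (110.0, 0.5), (130.0, 0.4), (105.0, 0.9)]
    print(non_dominated(population))  # current approximation of the Pareto front

Because every generation carries such a set forward, the final population approximates the whole Pareto front rather than a single compromise solution.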

1.4 Enhancing Market Security Based on Price Signal

In the deregulated, competitive market environment, all market participants are most interested in the price, for revenue or payment, especially in a pool market. Security limits are enforced in the market clearing model and are reflected in the clearing price. Therefore, the market price gives a direct signal showing the impact of transactions on the system security limits. The various system limits considered include flow limits, interface limits, active and reactive power reserve requirements, voltage requirements, stability limits, energy balance requirements (including the need to control system frequency), and a multitude of other "practical" system limits [47]. There are direct and indirect means to create "market signals" for all of these limits. The direct methods use the limits themselves in the creation of the signals, whereas the indirect methods include various ways of converting one type of limit into an (approximately) equivalent limit for the purpose of making the handling of the limit more expeditious. Once limits are established, the manner in which they are revealed to the market can vary. One way of issuing signals to a market is by disclosing and imposing the limits on all transactions and operational conditions based on rules having to do with priority, order of transactions, size of transaction, the use of a reservation system, or other such nonpricing techniques. We call this the reliability-driven approach. The second alternative is to enforce limits by the use of price signals that ensure that the limit is not violated by ascribing economic value to the limit. We call this the market-driven alternative, which is commonly implemented in RTOs and ISOs, such as PJM and ISO New England [48]-[56].


Within market-driven methods for enforcing limits, there are at least two (not necessarily incompatible) ways of dealing with limit enforcement. One is the conversion of the limit into nodal price signals (locational marginal prices, LMPs) of direct interest to market participants. The second alternative is to price the limit itself (flowgate rights, FGRs) and to calculate the contribution to this limit of any transaction or action among market participants. In the market-driven view of system operations, the operator makes relatively aggressive use of market signals and prices and uses markets as much as possible to assure reliability. One feature of LMPs is that they are sensitive to operating conditions for which limits are binding. Such conditions are referred to as congestion. Congested circuits cause prices to increase and are therefore said to incentivize market agents to make decisions that reduce congestion. This congestion pricing is consistent with present-day monitoring and control philosophy in that the security limits are driven by performance restrictions following specified contingencies. Yet congestion pricing also suffers from the same problem as the present-day monitoring and control philosophy: it does not properly account for risk. As a consequence, the price may be too high when only one limit is binding under a low-likelihood scenario and too low when many limits are nearly binding under high-likelihood events, and it therefore does not give a correct signal of the system security level.
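For reference, the conventional LMP at bus i is commonly written as the sum of energy, loss, and congestion components (a standard decomposition, with signs depending on the injection convention; Chapter 7 derives the deterministic and risk based versions actually used in this work):

\mathrm{LMP}_i \;=\; \lambda \;+\; \lambda_{\text{loss},i} \;-\; \sum_{k} \mu_k \, \frac{\partial F_k}{\partial P_i}

where λ is the system energy price, λ_loss,i the marginal loss component, μ_k the shadow price of binding transmission constraint k, and ∂F_k/∂P_i the sensitivity of the flow on constraint k to an injection at bus i. The congestion term is the channel through which the hard-limit philosophy, and its neglect of event likelihood, enters the price.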

1.5 Problem Statement

The objective of this work is to develop a novel security control approach in a probabilistic framework, where online event likelihood and severity are identified and combined into a risk index for quantifying system security. Advanced decision-support tools based on the developed approach will provide decision makers, such as operators, with the ability to assess, monitor, and decide, in terms of selecting the most effective alternatives for maneuvering power systems into more secure and efficient regions of operation.


Deterministic security control has matured over the last decades. In particular, the SCOPF has been a major research topic and the most common control approach in operations and markets. Work on this method continues, for example on how to achieve high economic efficiency by scheduling preventive and corrective controls [57][58]. Over the same decades, however, deterministic security control has drawn increasing doubt because of conservative operation and lack of risk awareness. In the analysis of the 2003 North American blackout [59], the main causes were an inadequate understanding of the system and an inadequate level of situational awareness. Therefore, it is necessary to establish a new control methodology that can identify the security level and its tradeoff relationship with economic benefit. This is a multi-objective optimization approach, which is expected to operate the system in a more secure condition with high economic efficiency.

Most probabilistic approaches have been developed for system reliability evaluation for planning purposes. Since most indices used in the probabilistic approach are related to adequacy, such as LOLP, they are not suitable for security control. Security control and optimization in a probabilistic framework still need to be investigated, especially for real time operation. For example, the traditional event probability is computed as a long-run average, which is not time or condition dependent and is not a proper parameter for online application.

The power industry has moved into a deregulated market environment, where all market participants make profits through bidding in the day-ahead or real time markets. Since all participants aim to maximize their own benefit, congestion is often caused by insufficient transmission capacity, and power flows become more uncertain in real time operation. Without a security level defined in the market pricing scheme, LMPs cannot release signals that predict the consequences of these congestions. Therefore, the impact of some transactions, especially in the real time market, will deteriorate the system reliability level but will not be reflected in the price signal.


This research focuses on methodologies and approaches to solve the above problems. The following aspects are investigated to attain this target.
1. Real time risk expression: In order to increase system situational awareness, system health is quantified by risk, which is the product of contingency probability and consequence. Contingency probability, indicating event likelihood, is estimated based on weather conditions and geographic information to reflect the real time system state. Time-variant contingency probability estimation provides a measure that allows the operator to balance control costs against the mitigation of contingency severity.
2. System real time risk visualization: Different problems threaten system security to different degrees, which is captured in risk based security assessment. Risk may be viewed by the contingencies which cause it, or by the components that incur it, and it may be summed over a region or zone, or characterized over time. The ability to visualize risk is essential for its efficient assimilation by operational personnel.
3. Risk based decision making for security control: Traditional decision making on selecting controls focuses on minimizing cost, without an understanding of the security level. Power system vulnerability is caused by several problems, including overload and voltage problems. Different contingencies, components, or zones can cause or incur different risk problems. Decreasing the risk of one contingency, component, or zone may increase the risk of other contingencies, components, or zones, and it usually increases costs. One is also interested in knowing how many near violations exist in the system, to what extent, for which problem, and how likely they are. All of these issues are considered in one framework and analyzed using an MCDM process.
4. Risk based system control methodology: In traditional system optimization control methods, reliability criteria establish rigid limitations. The control methodology used to satisfy these limitations is the security constrained optimal power flow (SCOPF), employing a cost minimization objective with hard reliability constraints. For risk based security control, we have developed and illustrated an alternative control methodology called the risk-based multi-objective OPF (RBMO), using classical methods or evolutionary algorithms. Risk vs. cost tradeoffs, together with the potential for a control action to decrease one risk while increasing another, motivate the use of multi-objective (MO) optimization methods.
5. Risk based congestion management and LMP calculation for the electricity market: Risk based congestion management is developed to reflect the severity of congestion, weighted by event likelihood. As a result, risk based LMPs will release effective price signals to support reliability-related market operation and resource allocation.

1.6 Dissertation Organization

This dissertation consists of eight chapters. CHAPTER 1 provides the introduction, background, and motivation for this work, as well as a general summary of the techniques used. The remainder of the dissertation is organized as follows.
CHAPTER 2 gives a detailed overview of risk based security assessment. The first part of the overview deals with risk definition and assessment. The second part introduces the risk visualization tool, which provides a bridge between probabilistic and deterministic understandings of security.
CHAPTER 3 develops a real time contingency probability estimation approach. Data pooling and regression analysis are used for data processing. As a result, the contingency probability, influenced by weather conditions and geographic location, varies with operating conditions and can be used for real time reliability control purposes.
CHAPTER 4 designs the risk based decision making (RBDM) process for security control. First, risk assessment associated with the visualization tool is used by the decision maker to quantify the security level. Second, a fast and efficient algorithm, the essence of RBDM, is applied to search for the optimal controls with respect to the security requirements. Finally, the feasibility of the controls is checked through stability and cascading analysis.
CHAPTER 5 proposes an LP-based multi-objective OPF for security and economy trade-off analysis. Different classical methods for multi-objective problems are compared and implemented. The developed algorithms are used to solve a common control problem, transmission loading relief. The results show that the system can be operated more securely and at lower cost.
CHAPTER 6 proposes an evolutionary algorithm, the Non-dominated Sorting Genetic Algorithm (NSGA), for risk based multi-objective (RBMO) security control. System overload risk, low voltage risk, and operating cost are optimized in a multi-objective framework. Compared with other optimization methods, an EA can more easily attain the global optimum and obtain all tradeoff solutions in a single run. Parallel computation and algorithmic robustness give EAs unique advantages in real time control.
CHAPTER 7 derives the risk based congestion management and risk based LMP (RBLMP) pricing model. Risk based congestion management provides a market lever to uniformly expand a security "volume", where greater volume means more risk. Meanwhile, the risk based LMP signal contracts all dimensions of this "volume" with proper weights (state probabilities) at a time.

CHAPTER 8 summarizes the main contributions of the research work and discusses the future work that needs to be done.


CHAPTER 2: AN OVERVIEW OF RISK BASED SECURITY ASSESSMENT

2.1 Introduction

Electric transmission systems are required to operate under disturbance-performance criteria such that a disturbance resulting in the loss of any single component (NERC class B: line, transformer, or generator), and in certain cases two components (NERC class C), results in performance that is within stated criteria, e.g., branch flows within designated ratings, bus voltages and margin to voltage stability above specified thresholds, no cascading, and no out-of-step conditions. As a result, operating conditions in terms of system load, area transfers, line loadings, and voltage profiles must remain within limitations so that the performance criteria are satisfied for the specified disturbances. To accomplish this, control center personnel must continuously assess conditions to detect when they are unacceptable, monitor the assessment, and decide when actions should be taken, and what actions to take, in order to maneuver the system back into acceptable conditions.

for disturbances (contingencies) on a designated contingency list. Such assessment generally assesses branch overload and low voltage, and although it is relatively uncommon in today’s control centers, it may also assess voltage instability assessment. There are very few control centers having functionality to assess cascading or out of step conditions. Monitoring and decision is performed on a per-contingency, predicted violation basis. That is: operators use security assessment results to monitor the predicted post-contingency performance of each contingency on the list in terms of each performance criterion, and as soon as any single contingency is predicted to violate any one performance criterion, a

19

decision-making process begins. Overload problems generally require generation redispatch to effect transfer reduction, whereas low voltage problems require shunt capacitor insertion or generator terminal voltage adjustment. Voltage instability is addressed by increasing reactive supply (shunt capacitor insertion or committing additional generation units) or decreasing reactive demand (removing shunt reactors or generation redispatch). Additional actions that may be considered include transmission configuration changes through branch switching, and in extreme cases, load interruption; in addition, doing nothing is always an alternative. Information used to make such decisions typically includes some indication of the effectiveness of each action in relation to the specific performance violation encountered. Some EMS do in fact provide the latter information in terms of linear sensitivities associated with z The change of a parameter corresponding to the violated criterion (e.g., line flow, bus voltage, voltage instability margin). z The change corresponding to the prospective action (e.g., real or reactive bus injection). Automated information supporting the control room security-related decision-making process rarely extends significantly beyond that which is described above. However, there are at least five other influences that operators subjectively consider in making their decision.

These influences and corresponding illustrative examples follow:
1. Contingency likelihood: Loss of a 10-mile-long line running through desert in normal weather is much less likely than loss of a 100-mile-long line running through an area of heavy vegetation in stormy weather.
2. Extent of violation: A 10% post-contingency overload on a line rated for 1000 amperes may be of greater concern than a 15% post-contingency overload on a line rated for 500 amperes.
3. Number of violations and number of contingencies causing them: An operating condition whereby three different N-1 contingencies result in a total of five different post-contingency violations is of greater concern than an operating condition whereby just one N-1 contingency results in a single post-contingency violation.
4. Changing operating conditions: Typically, with some exceptions, a contingency violation detected from an assessment of current conditions as load or transfer is increasing is of more concern than the same contingency violation detected from an assessment of current conditions as load or transfer is decreasing.
5. Economics: Load interruption may be more effective than redispatch in relieving an overload, and committing an additional unit may be more effective than shunt capacitor insertion in relieving the potential for voltage instability, but economics may dictate selection of the latter alternative in both cases.
What is missing from today's EMS security assessment functionality is the ability to provide automated decision support in ways that account for the above influences. This provides the motivation for this research: to develop approaches and tools that provide quantifiable results accounting for some or all of the above influences.

2.2 Architecture of Risk Based Security Assessment

Online risk based security assessment (OL-RBSA) computes indices based on probabilistic risk for the purpose of performing on-line security assessment of high voltage electric power transmission systems. The indices are intended for use by operators in the control room to assess system security levels as a function of existing and near-future network conditions. Uncertainties in contingency conditions are modeled, and uncertainties in near-future loading conditions can be modeled if desired. Any number of contingencies may be included in the assessment. Severity functions are adopted to uniformly quantify the severity of network performance for overload and voltage security. Figure 2-1 illustrates how


OL-RBSA software would be integrated with the existing SCADA/EMS system for control room application.

Figure 2-1: Integration of OL-RBSA with SCADA/EMS

Our implementation of OL-RBSA includes overload security (flow violations) and voltage security (voltage magnitude violations and voltage instability). It does not include dynamic security assessment. The risk index is an expectation of severity, computed by summing over all possible outcomes the product of the outcome probability and its severity.
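In symbols, this expectation can be sketched as

\mathrm{Risk}(X_t) \;=\; \sum_{i} \Pr(E_i)\,\mathrm{Sev}(E_i \mid X_t)

where X_t is the current or forecasted operating condition, E_i the i-th contingency, Pr(E_i) its probability over the assessment horizon, and Sev(E_i | X_t) the severity of the resulting post-contingency state. This is a generic form; the specific severity functions used in this work are defined in Section 2.4.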


Figure 2-2: Illustration of basic OL-RBSA calculation


In Figure 2-2, if we assign probabilities to each branch, then the probability of each terminal state is the product of the probabilities assigned to the branches that connect the initial state to that terminal state. For real time analysis, the uncertainty in the operating condition is so small that we neglect its influence. However, for risk assessment along a time trajectory, modeling it would identify more weak system situations and would have an important impact on the risk estimate. In this thesis, our target is to develop a security control tool for real time application, so uncertainty in the operating condition is not included in the analysis.
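As a small numerical sketch of this calculation, with purely hypothetical probabilities and severities (not data from this work), the risk of a single, deterministic operating condition reduces to a probability-weighted sum over the contingency set:

    # Hypothetical per-contingency probabilities (over the assessment horizon)
    # and severities of the corresponding post-contingency states.
    contingencies = {
        "line_12_outage": (0.002, 3.5),   # (probability, severity)
        "line_34_outage": (0.005, 1.2),
        "gen_2_outage":   (0.001, 6.0),
    }

    # With a single operating condition, each terminal-state probability is
    # simply the contingency probability, so risk = sum of probability x severity.
    risk = sum(p * sev for p, sev in contingencies.values())
    print(f"operating-condition risk = {risk:.4f}")

If near-future load uncertainty were modeled, each term would additionally be weighted by the probability of the corresponding operating-condition branch, as Figure 2-2 suggests.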

Figure 2-3: Architecture of OL-RBSA

In Figure 2-3, the following functions are designed into the architecture of RBSA:


1. Contingency screening. Contingency screening is a rapid and approximate assessment of the contingencies comprising a prescribed contingency list, to identify those contingencies that may have unacceptably severe impact and consequently should be assessed with greater refinement and accuracy. It is a standard function in most security assessment software applications for the simple reason that, for most power systems under most encountered conditions, the number of risky contingencies is relatively small, so contingency screening can significantly decrease the computation time required to perform the security assessment function. Details are given in Section 2.3.
2. Contingency probability estimation. Event likelihood is an important parameter for disclosing system risk. Traditionally it is estimated as a long-run average, which does not capture the influence of environmental conditions, and is therefore not suitable for operational needs. In Chapter 3, we give a regression model to express the influence of weather and geographic factors on contingency probability.
3. Risk measurement. System risk is defined as the product of two elements: likelihood and severity. The contingency probability is estimated to express the likelihood of the next contingency, while severity is defined to express the stress of concern once the contingency happens. Severity functions are provided in Section 2.4 to reflect the consequences of the different security problems.

4. Risk visualization. Risk quantifies system security level. However, operators like to absorb security information in a deterministic understanding in long decades. In section 2.5, a visualization tool, security diagram, is developed as a bridge to communicate risk index with deterministic security boundary. 5. Risk identification. Generally, larger risk scenario represents security problem is more serious for a certain system. However, achieving the lowest risk level is often not a feasible choice because of over conservation operation and unacceptable cost. Therefore,

24

security criterion to identify proper risk level is required. Furthermore, catastrophic consequence should be avoided whatever the risk is. Chapter 4 gives more details. 6. Risk based security control. When system is running in an alert or emergency state, preventive or corrective controls are applied to restore system into normal state. Instead of using deterministic boundary constraints, risk indicating system security level is used to be another objective for security control. Operators are able to maneuver system into a better performance. We have implemented two approaches for risk based security control in Chapter 5 and 6, respectively.

2.3 Contingency Screening

The risk-based security assessment (RBSA) application under development is similar to any other security assessment function in this sense, and contingency screening can be quite useful in decreasing its computation time; developing this function for the RBSA application is therefore important.

2.3.1 Literature Overviews of Contingency Screening

The basis of almost all contingency screening methods is the development of a ranking of the contingencies in descending order of severity. More refined analysis can then be performed down the ranking until N contingencies that result in acceptable performance have been processed, where N is typically a small number chosen with regard to the system scale. Unfortunately, the fastest screening methods for overload, based on the DC approximation, do not apply to voltage-related analysis (i.e., low voltage or voltage instability analysis), because an assumption in the DC approximation is that all voltages are at 1.0 per unit. As a result, we consider screening techniques that are problem-specific. In the following analysis, we do not consider screening techniques associated with dynamic security assessment.


PQ decomposition (commonly referred to as decoupled power flow), where the P-δ portion of the power flow Jacobian matrix is decoupled from the Q-V portion, although slower than the DC power flow, provides both overload and undervoltage assessment. Although a single iteration of a decoupled power flow is significantly faster than a single iteration of a full power flow, a decoupled power flow generally requires more iterations to converge, and therefore the computational savings in obtaining a fully converged solution can be modest. A common approach is to gain computational efficiency at the expense of accuracy by limiting the PQ decomposition approach to just one or only a few iterations. An approach for overload screening was presented in reference [60], which treats the line outage as a disturbance under normal operating conditions and uses a Taylor series expansion to compute the nodal injection power change due to line outages. This approach is similar, in terms of assessing the outage effect, to that described in reference [61], which uses the normal Newton-Raphson method, a sensitivity matrix, and increments of real power nodal injections to simulate the effect of outages. One promising method for overload and low voltage screening, based on adaptive localization, is presented in reference [62], where portions of the network unaffected by the contingency are eliminated to allow very fast processing using a modification of the fast decoupled power flow algorithm. This approach exploits the property that the impact of most

contingencies encountered in security analysis is limited to “localized” regions of the network. Methods for contingency screening and ranking for voltage stability analysis are summarized in [63] , and a new generalized curve fit method is proposed. Reference [64] suggests some improvements on the generalized curve fit method. One helpful discussion about voltage stability contingency screening and ranking can be seen in [65], where two methods are developed: reactive support index (RSI) and iterative filtering. In a discussion to this paper, T. Van Cutsem, C. Moisse, and V. Sermanson provide many good suggestions,


especially on the iterative filtering method, and these same researchers also present a related method in reference [66]. A method based on sensitivities from a single nose curve is presented in reference [67], where the linear and quadratic sensitivities of the loading margin to each contingency are computed and used to estimate the resulting change in the loading margin. Reference [68] suggests that a line outage can be represented as the real and reactive powers injected at the ends of the line into the network, and the sensitivity of the load margin with real and reactive powers can then be used to predict the effect of line outages.

2.3.2 Proposed Screening Approach

The two important performance metrics for screening methods are computational efficiency and the accuracy of the resulting contingency ranking. Considering the previously described papers together with our own experience, we draw the following conclusions:

1. Screening for overload and low voltage: The PQ decomposition (fast decoupled) method is simple and provides a relatively accurate result very quickly. The main issue in using it is the choice of the number of iterations. Although accuracy is typically reasonable even for a low number of iterations, any desired level of accuracy is easily obtained with additional iterations. We use this approach to perform contingency ranking in terms of overload and low voltage risk.

2. Screening for voltage instability: It is well known that the continuation power flow (CPF) is an accurate but computationally intensive way to determine the voltage stability margin. The sensitivity method described in [67][68] is a very fast screening method, but ranking errors can occur. Other methods appear to be no better in terms of accuracy, and they are more computationally intensive. The sensitivity method therefore seems most attractive for screening contingencies for voltage instability, and we employ it in this research.

3. Shared ranking: To improve the accuracy of the voltage instability screening without sacrificing speed, we propose to share the ranking from the PQ decomposition with the voltage instability ranking in the following manner: we first use CPF to process down the voltage instability ranking obtained from the sensitivity method. We then track through the overload/low voltage ranking and use CPF to process any high-risk contingencies that have not already been processed with CPF via the voltage instability ranking. In this way, screening and ranking time remains relatively short, and we compensate for contingencies possibly missed in the voltage instability ranking by adding contingencies that are severe in overload or low voltage but are not already on the voltage instability list.

2.3.3 Contingency Screening for Overload and Low Voltage

Ranking contingencies requires computation of a performance index to evaluate the severity of each contingency. For a given contingency, a traditional overload performance index (PI) is

PI(i) = \left(\frac{P_i}{P_{i\max}}\right)^2, \qquad PI = \sum_{i=1}^{n} PI(i)    (2-1)

where Pi is the load flow of circuit i following the contingency;

Pimax is the rated flow of circuit i;

PI(i) is the performance index for circuit i;

n is total number of circuits;

PI is the overload performance index for the contingency.

If the circuit flow Pi is less than Pimax, the circuit’s overload performance index PI(i) is

small, but when circuit flow Pi exceeds rated flow Pimax, it is large, increasing quadratically with increasing circuit flow. So the more severe the overload, the larger is the performance index. The low voltage performance index can be given in a similar way.


The performance index of eq. (2-1) has two main weaknesses. First, it is slightly influenced by lightly loaded circuits, in spite of the fact that such circuit loadings are not an indicator of contingency severity. Second, its nonlinear behavior with high circuit loading inhibits physical interpretation of the index. As a result of these weaknesses, we propose a new index. This new index is based on the severity function used in the RBSA software as shown in section 2.4. It is given by eq. (2-2).

PI(i) = \begin{cases} 10\left(\dfrac{P_i}{P_{i\max}} - 0.9\right), & \text{if } \dfrac{P_i}{P_{i\max}} > 0.9 \\ 0, & \text{if } \dfrac{P_i}{P_{i\max}} \le 0.9 \end{cases}, \qquad PI = \sum_{i=1}^{n} PI(i)    (2-2)

The performance index of eq. (2-2) provides that only the circuits having loading beyond 90% of rating are of influence. In addition, its linear form provides that it may be interpreted as the sum total over all branches of percent flows exceeding 90%. The 10.0 coefficient provides that PI=1 when Pi=Pimax. We give a similar PI for low voltage, which also conforms to the severity function used in the RBSA software, as shown in eq. (2-3).

PI(i) = \begin{cases} 12.5\,(1 - V_i), & \text{if } V_i < 1 \\ 0, & \text{if } V_i \ge 1 \end{cases}, \qquad PI = \sum_{i=1}^{m} PI(i)    (2-3)

where m is the total number of buses. In computing the performance index of eq. (2-3), only buses having voltage magnitude below 1.0 per unit contribute. The 12.5 coefficient provides

that PI=1 when Vi=0.92.
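As a concrete illustration of how the indices of eqs. (2-2) and (2-3) can be evaluated from an (approximately solved) post-contingency case, the following minimal Python sketch computes both; the circuit flows, ratings, and bus voltages shown are made-up values, not results from the ERCOT case used later.

```python
import numpy as np

def overload_pi(P, P_max):
    """Overload performance index of eq. (2-2): only circuits loaded beyond 90%
    of rating contribute, with slope 10 so that PI(i) = 1 at 100% of rating."""
    loading = np.asarray(P, float) / np.asarray(P_max, float)
    return float(np.sum(np.where(loading > 0.9, 10.0 * (loading - 0.9), 0.0)))

def low_voltage_pi(V):
    """Low-voltage performance index of eq. (2-3): only buses below 1.0 p.u.
    contribute, with slope 12.5 so that PI(i) = 1 at 0.92 p.u."""
    V = np.asarray(V, float)
    return float(np.sum(np.where(V < 1.0, 12.5 * (1.0 - V), 0.0)))

# Illustrative post-contingency values (two circuits, three buses):
print(overload_pi([95.0, 70.0], [100.0, 100.0]))   # 10*(0.95-0.9) = 0.5
print(low_voltage_pi([0.98, 0.92, 1.01]))          # 12.5*(0.02+0.08) = 1.25
```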

The PQ decomposition (fast decoupled) method uses fixed Jacobian submatrices B' and B'' to iterate, and each iteration can therefore be done very rapidly. Contingency screening can be effectively performed by iterating just a few times (without achieving solution convergence) and computing performance indices from the unsolved (but reasonably close) case. We denote two different PQ methods:


• 1P1Q performs 1 iteration using each of the B’ and B’’ matrices.

• 2P2Q performs 2 iterations using each of the B’ and B’’ matrices.
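The sketch below shows, in simplified form, what a 1P1Q or 2P2Q pass amounts to. It assumes, for brevity, that every non-slack bus is treated as a PQ bus, so B', B'', the mismatch vectors, and the state vectors all share one dimension; the function and argument names are illustrative, not part of any particular screening package.

```python
import numpy as np

def pq_screening_pass(Bp, Bpp, mismatch_P, mismatch_Q, theta, V, n_iter=1):
    """Run n_iter P-theta / Q-V half-iteration pairs of the fast decoupled power
    flow without checking convergence (n_iter=1 -> '1P1Q', n_iter=2 -> '2P2Q').
    The approximate state returned is used only to evaluate the PIs of (2-2)/(2-3)."""
    theta, V = theta.copy(), V.copy()
    for _ in range(n_iter):
        theta += np.linalg.solve(Bp,  mismatch_P(theta, V) / V)   # B'  dtheta = dP / V
        V     += np.linalg.solve(Bpp, mismatch_Q(theta, V) / V)   # B'' dV     = dQ / V
    return theta, V
```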

2.3.4 Contingency Screening for Voltage Instability

The method employed for screening contingencies for voltage instability uses the sensitivity of loadability with respect to P and Q injections at both ends of the outaged line. These sensitivities are computed at the nose (bifurcation) point of the no-outage case. Procedures for obtaining these sensitivities are included in [4]. Here, we describe how we use these sensitivities.

We use CPF to determine the loadability for the no-outage state, denoted by L0, which is computed by increasing load in the same proportion at all load buses. Also, denote the nodes of the outaged line as i, j. The flow of the outaged line at the nose point is

\Delta P = \begin{bmatrix} P_i & P_j & Q_i & Q_j \end{bmatrix}    (2-4)

The sensitivity of loadability with respect to P and Q at buses i, j is denoted by

Sen = \left[ \frac{\partial L}{\partial P_i},\; \frac{\partial L}{\partial P_j},\; \frac{\partial L}{\partial Q_i},\; \frac{\partial L}{\partial Q_j} \right]    (2-5)

Then the new loadability for the outage state can be estimated from

L = L_0 + Sen \cdot \Delta P    (2-6)

which is the performance index used in screening contingencies for voltage instability.

This method generalizes well for a two-line outage or for a generator outage.
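A minimal sketch of the screening estimate of eqs. (2-4)-(2-6), with the nose-point flows and sensitivities passed in as plain arrays (how those quantities are produced by the CPF and the sensitivity computation of [4] is outside this sketch):

```python
import numpy as np

def estimated_outage_loadability(L0, sen, flow):
    """Eq. (2-6): L = L0 + Sen . dP, where flow = [Pi, Pj, Qi, Qj] holds the
    pre-outage flows of the outaged line at the nose point and sen holds the
    loadability sensitivities [dL/dPi, dL/dPj, dL/dQi, dL/dQj]."""
    return float(L0 + np.dot(np.asarray(sen, float), np.asarray(flow, float)))

# Contingencies are then ranked by this estimated post-outage loadability.
```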

2.3.5 Test Result

We tested the accuracy of the performance indices in expressions (2-2), (2-3), and (2-6) using an ERCOT case, ranking 66 contingencies. The ranking was done for overload

(TABLE 2-1), low voltage (TABLE 2-2), and voltage instability (TABLE 2-3). In each table, we compared the ranking obtained from our screening methods to the “correct” ranking obtained from analysis using a fully solved AC power flow solution (for overload and low voltage) and a continuation power flow (for voltage instability) on each contingency case.


TABLE 2-1: Contingency Ranking Result for Overload Contingency Number Rank Contingency Number Rank P1Q1 P2Q2 Correct P1Q1 P1Q1 Correct 1 15 10 10 34 50 9 9 2 10 15 15 35 49 5 5 3 63 63 14 36 48 48 48 4 14 14 63 37 55 55 55 5 40 40 40 38 29 58 58 6 13 66 66 39 44 59 59 7 66 12 12 40 58 44 44 8 22 22 22 41 59 29 29 9 12 13 13 42 53 53 54 10 3 65 65 43 42 54 53 11 18 3 3 44 54 42 17 12 65 28 28 45 52 52 42 13 21 25 25 46 17 17 52 14 28 62 62 47 27 27 57 15 25 21 21 48 43 57 27 16 62 18 18 49 57 61 8 17 1 35 35 50 8 41 61 18 4 16 47 51 61 8 60 19 16 47 16 52 41 60 41 20 35 33 33 53 60 43 43 21 47 11 11 54 64 64 64 22 33 20 20 55 36 26 39 23 11 37 37 56 39 36 36 24 20 34 34 57 26 39 26 25 34 1 1 58 30 7 7 26 37 46 46 59 38 38 38 27 24 45 4 60 7 31 30 28 9 50 50 61 31 30 31 29 2 4 45 62 32 32 32 30 45 51 51 63 19 19 19 31 46 2 49 64 23 23 23 32 51 49 2 65 56 56 56 33 5 24 24 66 6 6 6


TABLE 2-2: Contingency Ranking Result for Low Voltage Contingency Number Contingency Number Rank P1Q1 P2Q2 Correct Rank P1Q1 P2Q2 Correct 1 10 10 10 34 37 66 37 2 15 47 65 35 46 52 46 3 47 51 51 36 31 26 27 4 51 65 47 37 52 43 54 5 3 3 13 38 43 55 63 6 65 13 3 39 62 35 55 7 13 22 21 40 55 27 38 8 22 21 15 41 27 31 57 9 21 25 25 42 54 54 45 10 25 16 22 43 41 57 43 11 16 1 42 44 35 41 35 12 1 15 16 45 57 19 61 13 63 6 18 46 19 45 56 14 6 42 14 47 50 38 53 15 14 18 1 48 38 53 32 16 5 14 6 49 61 61 41 17 18 5 5 50 53 49 36 18 12 12 4 51 49 36 49 19 66 4 12 52 36 30 30 20 20 20 29 53 30 56 31 21 4 34 20 54 45 39 39 22 34 29 33 55 64 60 50 23 42 33 11 56 60 48 19 24 33 11 28 57 56 44 58 25 11 2 17 58 39 64 59 26 29 24 34 59 48 50 60 27 2 9 2 60 44 58 8 28 17 17 24 61 58 59 48 29 24 40 9 62 59 32 64 30 9 28 40 63 32 7 23 31 28 63 52 64 7 8 44 32 40 37 66 65 23 23 7 33 26 46 26 66 8 62 62


TABLE 2-3: Contingency Ranking Result for Voltage Instability Rank sensitivity CPF Rank sensitivity CPF 1 63 10 34 29 37 2 66 15 35 42 35 3 65 63 36 47 29 4 62 66 37 10 28 5 13 65 38 55 5 6 21 51 39 52 61 7 50 13 40 53 59 8 3 62 41 59 58 9 11 3 42 58 57 10 33 1 43 27 55 11 34 50 44 32 54 12 20 18 45 35 53 13 9 21 46 17 52 14 24 4 47 30 49 15 2 33 48 61 46 16 51 11 49 7 43 17 22 40 50 38 39 18 64 22 51 48 38 19 6 12 52 26 36 20 18 6 53 57 32 21 12 24 54 39 30 22 1 16 55 43 27 23 25 9 56 60 26 24 40 2 57 19 7 25 28 47 58 5 60 26 37 34 59 49 48 27 56 20 60 31 44 28 14 14 61 36 41 29 46 45 62 54 31 30 45 25 63 8 23 31 16 17 64 23 19 32 15 64 65 44 8 33 4 56 66 41 42

TABLE 2-1 and TABLE 2-2 give contingency ranking using the performance metric computed from the P1Q1 method (fast decoupled algorithm run for only one iteration) and from the P2Q2 method (fast decoupled algorithm run for two iterations). For the ERCOT


system, the P2Q2 method has an increased run-time of about 90%. Although the orderings from both P1Q1 and P2Q2 differ from the correct ordering, the identity of the top 10 contingencies (which includes all outages resulting in deterministic violations) is the same, as observed by comparing the columns for the top-ranked rows in both tables. Although the P2Q2 method is more accurate than P1Q1, the reduced computation time of P1Q1 makes it preferable. The top 10 contingencies ranked by the sensitivity method and by the "correct" approach are listed in TABLE 2-3. As in the overload and low voltage rankings, the ordering is different. In addition, here there is some difference in the identity of the top 10 ranked contingencies: the sensitivity method misses 4 of the top 10, and one of them, contingency 15, which is actually the 2nd most severe contingency, is ranked 32 by the sensitivity method. However, we observe that the union of contingencies identified as top 10 in the voltage instability table (TABLE 2-3) and in the low voltage table (TABLE 2-2) includes the top 10 most severe voltage instability contingencies, with the exception of contingency 1, which is the 10th most severe voltage instability contingency. We conclude that voltage instability screening based on our sensitivity method together with the low voltage performance indicator is an effective approach.

2.4 Operational Risk Measurement

2.4.1 General Definition of Risk

If we assign severity values to each terminal state, the risk can be computed as the sum over all terminal states of the product of probability and severity, as shown in (2-7):

\text{Risk}(X_{t,f}) = \sum_i \sum_j \Pr(E_i)\,\Pr(X_{t,j} \mid X_{t,f}) \times \text{Sev}(E_i, X_{t,j})    (2-7)

Here:

Xt,f is the forecasted condition at time t. It is typically predicated on the last state estimation result together with a forecast of how these conditions change during the


time between the last state estimation result and t. t is limited by the time associated

with the unit commitment and the accuracy of the load forecast. If t=0, then Xt,f is the last state estimation, and there is no uncertainty in loading conditions.

Xt,j is the jth possible loading condition. It provides that load forecast uncertainty be

included in the assessment. Pr(Xt,j|Xt,f) is the probability of this condition, obtained from a probability distribution for the possible loading conditions.

Ei is the ith contingency and Pr(Ei) is its probability. Here, we assume the existence of a contingency list.

Sev(Ei,Xt,j) quantifies the severity, or consequence, of the ith contingency occurring under the jth possible operating condition. It represents the severity for overload, low voltage, voltage instability, and cascading overloads.

Equation (2-7) is written in terms of Xt, which characterizes a pre-contingency operating condition, to emphasize that the source of uncertainty associated with Pr(Xt,j|Xt,f) is the loading conditions and is therefore independent of the selection of contingency. To clarify the computational procedures, however, we express (2-7) in terms of post-contingency performance measures, according to:

\text{Risk}(X_{t,f}) = \sum_i \sum_j \Pr(E_i)\,\Pr(Y_{t,j} \mid Y_{t,f}) \times \text{Sev}(Y_{t,j})    (2-8)

where Yt,f and Yt,j are post-contingency values of a performance measure corresponding to the forecasted loading condition Xt,f and the jth possible loading condition Xt,j, respectively, at time t, following contingency Ei. Examples of such performance measures include circuit flow, bus voltage, and system loadability. Identification of the performance measure results directly from knowledge of the contingency and loading conditions through a power flow solution, expressed as (Yt,j) ← (Ei, Xt,j). We have developed a fast calculation procedure to translate the distribution on loading to distributions on performance measures, described in [12][69].
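The double sum in (2-7)/(2-8) reduces to a few lines of code once the contingency probabilities, the load-scenario probabilities, and a severity evaluator are available; the sketch below is generic, with hypothetical argument names, and collapses to a single scenario of probability 1.0 in the real-time case used in this thesis.

```python
def expected_risk(contingency_probs, scenario_probs, severity):
    """Risk of eqs. (2-7)/(2-8): sum over contingencies i and load scenarios j of
    Pr(E_i) * Pr(X_j | X_f) * Sev(E_i, X_j).

    contingency_probs : dict {contingency_id: Pr(E_i)}
    scenario_probs    : dict {scenario_id: Pr(X_j | X_f)}; use {'forecast': 1.0}
                        when load-forecast uncertainty is ignored
    severity          : callable(contingency_id, scenario_id) -> severity value
    """
    return sum(p_i * p_j * severity(ei, xj)
               for ei, p_i in contingency_probs.items()
               for xj, p_j in scenario_probs.items())
```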


The contingency probability appearing in (2-8) identifies the likelihood of the event. Traditionally, it is approximated by an average outage frequency over the years for which the relevant data have been accumulated. This is reasonable for planning purposes, but it is not suitable for online security analysis. In real time, the probability of a contingency varies with the ambient environment, such as the weather condition and geographic location, and with circuit properties, such as line length and voltage level. The detailed procedure for estimating this condition-dependent parameter is described in Chapter 3. The condition probability refers to the uncertainty in the system situation, especially the load condition. The method for addressing this issue in RBSA is discussed in [4]. For online application, the difficulty lies in obtaining the data and developing a forecast model that yields a load distribution. In this research, we assume the load can be located and forecasted in the same way the deterministic approach works, so it is treated as a known parameter. Severity provides a quantitative evaluation of what would happen to the power system in the specified condition, in terms of severity, impact, consequence, or cost. Criteria have been identified for a good severity function to be used in risk calculation. First, the severity function should reflect the consequence of the contingency and loading condition, rather than the consequences of an operator's decision. For example, operator-initiated load curtailment or re-dispatch reflects the consequence of the operator's decision to

interrupt load or modify the dispatch, respectively. Thus, use of a load-interruption based index, such as LOLP or EUE, familiar to planners, or an index based on cost of re-dispatch, is inappropriate for use here. This is because the assessment is being used to facilitate the operator’s decision making; to construct the index based on load interruption presupposes the very decision the index is supposed to facilitate. Second, the severity for contingencies should reflect consequences that are physically understandable in terms of network parameters by the operator. This criterion ensures that the resulting indices provide engineering insight and intuition to operators with respect to the problems they face. Third,

the severity functions should be tied to deterministic decision criteria, to the extent possible, in order to facilitate the transition that their use requires of operators. Fourth, the severity functions should be simple. Fifth, the severity functions should reflect relative severity between different problems to enable calculation of composite indices. Finally, the severity function should measure the extent of a violation.

Figure 2-4: Severity Function Classification (sketch omitted: severity versus loading for four candidate shapes, 1 exponential, 2 linear, 3 logarithmic, and 4 deterministic, with the warning point at 0.9 of rating marked)

Generally, severity functions are divided into two types, linear and nonlinear, and nonlinear functions can typically be represented using exponential (1) or logarithmic (3) forms, as shown in Figure 2-4. The exponential function reflects conservative operation, since severity rises sharply beyond the warning point (0.9 of rating), while the logarithmic function responds slowly until the stress reaches the rating. The deterministic security classification (4) is a special case of (3) and gives no information about how stressed the system is. The basic criterion for selecting a severity function is that the indicator should reliably measure the level of stress in the system when the loading level is fairly high or a bus voltage is fairly low under credible states, and should give some warning that the situation is worsening. Therefore, an indicator whose value increases linearly with the level of stress would be much more useful than a nonlinear indicator. Our basic approach is to use linear functions of network performance measures.

2.4.2 Low Voltage Risk

The severity function for low voltage is defined for each bus individually; the voltage magnitude of each bus determines the low voltage severity of that bus. The severity function for low voltage is illustrated in Figure 2-5. For each bus, the severity evaluates to 1.0 at the deterministic limit (e.g., 0.92 p.u.) and increases linearly as voltage magnitude falls below the limit. This severity function measures the extent of the violation, and it is composable. In addition, its use results in non-zero risk for performance close to, but within, a performance limit, reflecting the realistic sense that such a situation is in fact risky.

Figure 2-5: Severity Function for Low Voltage (sketch omitted: severity plotted against voltage magnitude, equal to 1.0 at the 0.92 p.u. limit and decreasing linearly to zero at 1.0 p.u.)

2.4.3 Overload Risk

The severity function for overload is defined for each circuit (transmission lines and transformers) individually. The power flow as a percentage of rating (PR, the ratio between actual flow and rated flow) of each circuit determines the overload severity of that circuit. The discrete and continuous severity functions for overload are shown in Figure 2-6; in the continuous (percentage-of-violation) form, the severity is zero below 90% of rating, increases linearly above 90%, and reaches 1.0 at 100% of rating.

Figure 2-6: Severity Function for Overload (sketch omitted: severity plotted against loading as a percentage of rating, zero at 90% and 1.0 at 100%)

2.4.4 Voltage Instability Risk

The severity function of voltage instability is a system severity function rather than a component severity function. We use the loadability corresponding to the system bifurcation point to determine the voltage instability severity. Here we define '%margin' as the percentage difference between the forecasted load and the loadability, as illustrated in Figure 2-7 and expressed in (2-9).

Figure 2-7: Loadability and Margin (sketch omitted: MW axis showing the forecasted load, the loadability point, and the margin between them)

\%\text{margin} = \frac{\text{Loadability} - \text{Forecasted\_Load}}{\text{Forecasted\_Load}} \times 100\%    (2-9)


The severity function for voltage instability is illustrated in Figure 2-8. If %margin = 0, a voltage collapse will occur for the given contingency state at the particular operating condition. The actual effects of such an outcome are quite difficult to identify, as the system dynamics play a heavy role. Nonetheless, it is safe to say the consequence is very severe and generally unacceptable under any condition. We therefore assign severity B to it, where B depends on the decision maker's valuation of a voltage collapse relative to a violation of the deterministic criteria. If %margin is less than 0, an effectively infinite severity is assigned.

Figure 2-8: Severity Function of Voltage Instability (sketch omitted: severity plotted against %margin, equal to B at 0% margin and to 1 at 10% margin)
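The three severity functions of Figures 2-5, 2-6, and 2-8 can be sketched as follows. The low-voltage and overload slopes (12.5 per p.u. and 10 per unit of loading) follow from the 0.92 p.u. and 90%/100%-of-rating anchor points quoted above; the voltage-instability curve is assumed here to fall linearly from B at 0% margin through 1 at 10% margin, which matches the figure's labeled points but is otherwise an assumption of this sketch.

```python
def low_voltage_severity(v_pu):
    """Figure 2-5: zero at 1.0 p.u., 1.0 at the 0.92 p.u. limit, growing
    linearly as the voltage magnitude falls further."""
    return max(0.0, 12.5 * (1.0 - v_pu))

def overload_severity(percent_rating):
    """Figure 2-6 (continuous form): zero below 90% of rating, 1.0 at 100%."""
    return max(0.0, 10.0 * (percent_rating / 100.0 - 0.9))

def voltage_instability_severity(margin_pct, B=100.0):
    """Figure 2-8: severity B at 0% margin, 1 at 10% margin (linear in between,
    by assumption), and effectively infinite for a collapsed case (margin < 0).
    The value of B reflects the decision maker's valuation of a collapse."""
    if margin_pct < 0.0:
        return float("inf")
    return max(0.0, B - (B - 1.0) * margin_pct / 10.0)
```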

2.4.5 Cascading Risk

As is well known, a transmission line can operate at its emergency rating for a short time after a contingency, and this is considered safe under the traditional N-1 rule. Here, in contrast, we assume that a subsequent trip (N-2) may occur after the primary event, with a probability that is related to the loading level of the transmission line. This assumption is reasonable for the following reasons: 1) increased circuit sag raises the probability of the second contingency [70]; 2) protective systems are more likely to operate in an undesirable fashion, particularly during the transient. A similar concern appears in [71], where the cascading outage probability is related to loading level through a logistic model. In our work, the N-2 trip probability is calculated as


P = \frac{P_L - P_0}{P_{trip} - P_0}    (2-10)

where PL is the loading level of the line after the primary event, P0 is the loading level at which cascading becomes a concern, and Ptrip is the loading level at which the tripping probability reaches 1.0. We use this probability only for the N-2 trip, defined as level 1 of the cascading analysis, so as to widen the set of trigger events in a controlled way. Beyond that, whether additional lines are tripped depends on whether their loading exceeds Ptrip, which is 125% of rating in the MISO operating rule. We do not apply probabilities at level 2 and beyond, to avoid the added complication. The consequence of the N-2 trip is quantified as the total number of lines tripped as a result of it. Iteration over the successive levels is needed in this simulation, since the system power flow changes as lines continue to trip. The whole process stops at level 5; if lines are still tripping at that point, a large number, such as 100, is counted as the consequence. The cascading severity for contingency scenario i is expressed as

SEV_i = \sum_{j=1}^{M} P_{ij}\, Sev_{ij}    (2-11)

where M is the number of transmission lines with non-zero N-2 trip probability, Pij is the N-2 probability of line j under scenario i, and Sevij is the cascading severity under scenario i and N-2 event j.
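A minimal sketch of the level-1 cascading evaluation described above; P0 is not specified numerically in the text, so the default below (100% of rating) is only an illustrative assumption, while the 1.25 default for Ptrip follows the 125% figure quoted above.

```python
def n2_trip_probability(P_L, P_0=1.0, P_trip=1.25):
    """Eq. (2-10): probability of a dependent (N-2) trip, linear in the
    post-contingency loading P_L (expressed per unit of rating) between P_0,
    where cascading starts to be a concern, and P_trip, where tripping is
    certain."""
    if P_L <= P_0:
        return 0.0
    if P_L >= P_trip:
        return 1.0
    return (P_L - P_0) / (P_trip - P_0)

def cascading_severity(trip_probs, severities):
    """Eq. (2-11): SEV_i = sum_j P_ij * Sev_ij over the M lines with a
    non-zero N-2 trip probability under contingency scenario i."""
    return sum(p * s for p, s in zip(trip_probs, severities))
```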

2.5 Risk Monitoring and Visualization

Visualization of the results is the means by which the assessment tool communicates to the decision maker, and as such, it is a critical function. We have identified some basic requirements associated with the visualization of information characterizing security levels. These requirements are:

• Easily understandable high-level views for fast determination of whether the decision maker needs to investigate further.
• The ability to drill down from high-level views to low-level views for precise identification of problems.
• Flexibility in specifying and obtaining views by network level, number of contingencies, and index type, or any combination of them.

These requirements suggest that the quantification of security level must be composable and decomposable, so that views at the various levels may be provided. Risk-based security assessment is well suited for this task, in contrast to traditional deterministic security assessment. Techniques for attaining the above requirements are discussed in detail in [4]. The main features of the monitoring and visualization include:

• Three-dimensional characterization of the indices to obtain different views of the security level: network level, contingencies, and index type are combined in a single view.
• Visualizing security by risk level.
• Visualizing security level temporally.
• Visualizing security level spatially.
• Drill-down capabilities.

A weakness of the above work is the lack of a connection to the deterministic understanding of security. A decision maker accustomed to the deterministic view may not quickly absorb the result and may prefer to keep using the deterministic approach. In this research, we have developed a means of communicating both deterministic and risk-based security assessment results in a single diagram we call the security diagram. As shown in Figure 2-9, it carries three pieces of security-related information, as follows:


Figure 2-9: Security diagram

C2,C5,C7,C8,C9 are contingencies, ②,⑤,⑧,⑨ are circuit numbers.

1. Probability sectors: There are 5 sectors to the circle, denoted C2, C5, C7, C8 and C9, with each one corresponding to a violating or near-violating contingency in the following results. The angular spread of each sector is proportional to the contingency probability. 2. Severity circles: There are 5 small circles, denoted as circuit numbers, with each one corresponding to a post-contingency violation or near-violation. The radial distance from the center of the diagram to each small circle is proportional to the extent

(severity) of the violation. 3. Security regions: There are 3 regions in the diagram. The center (white) region corresponds to loadings less than 90% of the emergency rating. The yellow “doughnut” corresponds to loadings between 90% and 100% of emergency rating. The red outside region corresponds to loadings in excess of emergency rating. For example, under the state of contingency C2, the loading level of circuit 5 is 92.66% of emergency rating which is in the yellow region.


The security diagram provides for efficient human assimilation of the pre-contingency security level. The viewer immediately determines whether the pre-contingency condition is secure or insecure based on the presence of severity circles in the "red zone." Perhaps of more importance is the number and location of severity circles in the yellow doughnut: a preponderance of such circles suggests a highly stressed system, even more so if many circles are located close to the red zone. In Figure 2-9, the solution is insecure, as contingency C9 results in a violation on circuit 8. This is of some concern since C9 has a relatively high probability of occurrence, as indicated by its sector spanning the largest angle of any contingency. Most meaningfully, because deterministic judgment is incorporated into the probabilistic approach, the security diagram simplifies the interpretation of risk and gives the decision maker an assistive tool for selecting the risk level to be used in security control.

2.6 Risk Identification for Security Control

In Section 2.4, we introduced four types of risk corresponding to the security problems of low voltage, overload, voltage instability, and cascading. They are treated differently when operators make decisions. Risks relating to low voltage and overload are soft: they affect only part of the system, such as a few transmission lines or buses, so system operation remains under control even when these indices deteriorate. In contrast, problems caused by voltage instability and cascading can bring catastrophic consequences, like the 2003 North America grid blackout. Based on this understanding, we use low voltage risk and overload risk as a health-diagnosis index, or security level, which serves as an objective in security control. Voltage instability risk and cascading risk, on the other hand, are used to evaluate feasible operating points through a catastrophic estimation index (CEI). The best control is then decided by both the security level and the associated cost, while the possibility of the system being in serious danger is avoided. Chapter 4 gives more details about risk based security control.


CHAPTER 3: COMPUTATION OF CONTINGENCY PROBABILITY FOR ONLINE OPERATION

3.1 Introduction

An essential element in risk-based security assessment [12][69] and related economy-security decision-making [72] is a contingency list together with the probability associated with each contingency on the list. In this chapter, we assume that a contingency list is provided. Generation of such a list is often done manually, but automated procedures are also available [73]. In order to obtain contingency probabilities, one needs to (a) assume a probability model to use in characterizing the random process associated with the circuit outages, and (b) obtain historical data which may be used to estimate parameters within the assumed probability model. In past analyses using contingency probabilities, which have been mainly confined to planning applications, a Poisson or Markov model was typically employed, and the model parameters were then estimated from available circuit outage data. In applying probabilistic analysis to operations-based security assessment, there are two basic issues that need particular attention. Lack of data: It is typical that, for many circuits, there may exist little or even no historical data. We address this issue by pooling data associated with equipment having

characteristics that suggest their outage statistics, if available, would be similar, e.g., geography and voltage level. Dependence on environmental conditions: Operational decisions are influenced by weather conditions. Therefore, it is important in computing contingency probabilities that the effect of weather conditions be considered. In contrast to planning applications where the long-run average-over-time probabilities (referred to as steady-state probabilities) are used, operational decision-making requires instantaneous probabilities, i.e., probabilities characterizing the different possible events that reflect the influence of the current weather


state. Having such information provides that contingencies can be assessed not only on their severity but also on their probabilities. A data pooling procedure for generation unit performance indices was described in [74]. To improve the estimators of unit parameters, resulting in reduction of bias and variance, several suggestions were made in [74] regarding the estimation approach: 1) increase the amount of data to reduce the variance by combining data from different units, i.e. pooling, 2) pool units with common, i.e., homogeneous, characteristics to reduce variance, 3) weight data within pools to reduce bias and also variance, and 4) censor extreme data, i.e., outliers. In order to define what constitutes a nominally homogeneous pool, a set of criteria is needed which forms the basis for unit selection. The examples in [74] show that a large sample obtained from pooling data reduces the impact of random errors and smoothes out effects of extremes. The Bonneville Power Administration (BPA) has developed a tool to estimate outage performance of facilities by evaluating historical outage behavior of transmission lines [75]. Voltage, zone, number of overhead ground wires, and outage cause category were used to group the outages. Then the correlation between outages and some of these factors was examined. It was suggested that the information could be used to identify where risks are higher in the grid and to improve outage performance levels.

The objective of this work is to develop a contingency probability estimator for use in operational decision-making constrained by contingency violations. The traditional application of Poisson and Markov models to contingency probability estimation for planning purposes is reviewed in Section 3.2. Section 3.3 motivates the need to adjust the traditional procedures for use in operational decision-making. The data pooling approach using geographic location, voltage level, and weather conditions is described in Section 3.4. Section 3.5 reports on the application of multiple linear regression to capture the dependence of outage rates on weather conditions as characterized by temperature and wind speed. The


overall calculation procedure is described in Section 3.6 and illustrated in Section 3.7, and conclusions are drawn in Section 3.8.

3.2 Criteria Review

A random process characterizes the statistical behavior of a process such as the failure and repair of a transmission circuit. The simplest and most common model used for such a process is illustrated in Figure 3-1. The parameters λ and μ are the failure and repair rates, respectively, of the component. The failure rate λ is appropriately computed as 1/MTTF, and the repair rate μ is appropriately computed as 1/MTTR, where MTTF is mean time to failure and MTTR is mean time to repair. Failure rate is sometimes computed for this model by dividing the number of failures in a time interval by that time interval. This calculation actually gives frequency f, not failure rate, where the two attributes relate through f = p1λ, and p1 is the long-run probability of residing in state 1 (up-state), given by p1=μ/(λ+μ). However, it is usually the case for most power system circuits that p1 is very close to 1, that is, the circuit is up most of the time. Therefore, using f for λ is not a bad approximation in this case.

Figure 3-1: Single component Markov Model (diagram omitted: State 1, circuit up, and State 2, circuit down, connected by failure rate λ and repair rate μ)
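As a quick numerical illustration of the relationship f = p1·λ (the MTTF and MTTR values here are made up for the example, not taken from the outage database), a circuit with MTTF = 2000 h and MTTR = 8 h gives

\lambda = \frac{1}{2000}\ \text{h}^{-1}, \qquad \mu = \frac{1}{8}\ \text{h}^{-1}, \qquad p_1 = \frac{\mu}{\lambda + \mu} \approx 0.996, \qquad f = p_1 \lambda \approx 4.98 \times 10^{-4}\ \text{h}^{-1} \approx \lambda,

so approximating λ by the observed frequency f introduces an error of well under one percent in this case.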

The probability p1 is often called the unit availability and is the complement of the unit unavailability (often called the forced outage rate) p2 = λ/(λ+μ). It is called a long-run or steady-state probability because it is computed from the Markov model in the limit as t→∞. It indicates the probability of finding the component in the up state over a long period of time. One may think of it as the fraction of time the component is in the up state over its lifetime. As a result, this probability is appropriate when considering decision-making that is intended


to apply, on average, over a long period of time, as in the case of planning decisions. However, this probability may not be appropriate when considering shorter time intervals when the probability is transient as a result of deterioration or variations in conditions to which it is exposed. A number of more complex models have been proposed for capturing transmission circuit failure processes [76][77][78][79]. We mention two in particular. The influence of weather on transmission circuit failure is addressed via the model shown in Figure 3-2, which represents two different weather states for a single circuit. The potential for two circuits to fail via a common mode event, applicable to circuits on the same tower, is captured by the model shown in Figure 3-3. Variations on these models are discussed in [80] [81].

Figure 3-2: Single component model with weather (diagram omitted: four states, circuit up/down in good or stormy weather, with failure rates λ and λstormy, repair rates μ and μstormy, and weather transition rates Ω13, Ω31, Ω24, Ω42)

Figure 3-3: Two-component model with common mode failure (diagram omitted: independent failures λ1, λ2 and repairs μ1, μ2 of the two circuits, plus a common-mode rate λC taking both circuits down together)

Transmission unit performance prediction using advanced regression models was presented in [82]. The failure rate of a transmission line is related to physical characteristics of the circuit, such as line length, quarry exposure, and terminal number. This approach is superior to using the average performance of several transmission units. However, the outage rate calculated from this regression model cannot capture variations in the environment, such as the weather condition. Since this approach results in a constant outage rate, it remains oriented toward planning and is not suitable for real-time operation.

3.3 Model Improvement for Operational Decisions

In operational decision-making, the decision-maker always knows the current weather state and in most cases, can predict the weather of the next few hours with high certainty, so that the stochastic transitions between weather states, characterized by the transition rates between weather states (denoted as Ωjk in Figure 3-2) are of little interest. In addition, decisions (mainly related to security-driven redispatch) are most heavily influenced by the transitions from the up-state to the down state, and not from the down state to the up state, so the transitions from down to up states, characterized by the repair rates (denoted by μ in Figure 3-2 and Figure 3-3) are also of little interest. A practical issue for many companies is that data is often not available for many circuits, and failure rates based entirely on historical data yield 0 for such circuits, clearly inappropriate. Finally, we observe that discretization of weather is an over-simplification that does not well capture the effects of weather on circuit failure rates, and the ability to capture weather effects within operations is essential. We make three adjustments in addressing these issues. The first one is that we use the simpler Poisson model rather than the Markov model to characterize the random process associated with transmission line failure. The Poisson model is effective in representing the probability of the “next” failure of a component. It is not appropriate for representing an extended future, because it does not capture the complexities that the distant future brings to the process (e.g., repair, or a second and third failure). This is acceptable for operational security assessment where we are typically only interested in the “next” failure. The second adjustment is motivated by limited availability of data on which to base estimates used in the probability models. This problem has also been encountered in


probabilistic analysis of generator adequacy, where a proposed solution is data pooling [74]. In data pooling, data from different components having similar characteristics are combined. This increases the size of the database on which we operate to compute probability model parameters, and as a result, reduces the variance in the estimators, thus providing that they are better estimators for representing the failure rate of the components modeled. We have pooled transmission outage data by voltage level and by weather zone. A weather zone is a geographical area where one expects that the weather, in terms of temperature and wind speed, will be relatively homogeneous throughout the region. Thus, we classify each circuit into a particular group, where the number of groups equals the product of the number of voltage ratings and the number of weather zones. The third adjustment is to represent the effect of weather within the Poisson model by expressing the failure rate as a continuous function of the temperature and wind speed. Thus, for each data group, we desire a function of the following form:

\lambda' = F(X), \qquad X = (x_1, x_2, \ldots, x_n)    (3-1)

where λ′ is the log of the failure rate per unit mile for the given data pool, X = (x1, x2) is a vector representing the weather conditions known to influence the failure rates, where x1 is temperature and x2 is wind speed, and F is the model used to describe the dependence of the log of the failure rate on the state X.

3.4 Pooling Data and Associated Statistical Methods

3.4.1 Pooling Data for Performance Analysis

In fitting a probability model to a set of data, we improve the estimator by grouping the lines and pooling their outage history data so that data from different components having similar characteristics are combined. This increases the size of the database on which we operate to compute probability model parameters, and as a result, reduces estimator variance. The following steps are used to improve the estimator.


1. Group the lines by voltage level and weather condition. We assume that outage rates for line groups characterized by the same voltage level and weather condition are identically distributed.
2. Pool the outage data for each group by weather condition.
3. Use maximum likelihood estimation to compute the distribution parameter for each pool.

In the first step, we are trying to obtain a series of homogeneous populations. The attributes used for line selection may include both design attributes (voltage level, circuit type, etc.) and operating conditions (weather condition, load condition, etc.). In applying this approach, we have used voltage level and weather zone as the attributes defining a homogeneous group. There are 3 voltage levels and 8 weather zones in our example, resulting in 3×8 = 24 data pools. For each group (data pool), the data is further classified into 30 discrete weather blocks characterized by wind speed and temperature. Failure rates are then estimated using only the set of outage records corresponding to each weather block. The result is a set of 30 different failure rates, one for each weather block, for each of the 24 data pools.
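The grouping and weather-block discretization can be expressed compactly with pandas; the DataFrame and column names below ('voltage_kv', 'weather_zone', 'temp_F', 'wind_mph') are hypothetical stand-ins for whatever schema the outage records actually use.

```python
import pandas as pd

def pool_and_block(outages: pd.DataFrame) -> pd.DataFrame:
    out = outages.copy()
    # One pool per (voltage level, weather zone) pair -> 3 x 8 = 24 pools here.
    out["pool"] = list(zip(out["voltage_kv"], out["weather_zone"]))
    # Weather blocks: 10 F temperature steps and 3 mph wind-speed steps.
    out["temp_block"] = (out["temp_F"] // 10).astype(int)
    out["wind_block"] = (out["wind_mph"] // 3).astype(int)
    return out

# Outage counts per pool and weather block, the raw material for the
# per-block failure-rate estimates of Section 3.4.2:
# pool_and_block(df).groupby(["pool", "temp_block", "wind_block"]).size()
```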

3.4.2 Failure Rate Estimation for Each Pool

Maximum likelihood (ML) is used to estimate the transmission outage parameter. ML is

a very general algorithm for obtaining estimators of population parameters. One major advantage of the ML method of estimating parameters is its applicability to a wide variety of situations. In particular, when a multiple linear regression model is fitted to normally distributed data, the least squares estimators of the regression coefficients are identical to the ML estimators.

Let L(X,λ) denote the likelihood function for a data set X = (X1, X2, …, Xn) of n observations from one population characterized by the parameter λ. The likelihood function L(X,λ) can informally be treated as the probability distribution of the multivariate variable X = (X1, X2, …, Xn) involving the n random variables (X1, X2, …, Xn). Thus, L(X,λ) can be thought of as a measure of how "likely" λ is to have produced the observed X, and maximum likelihood finds the estimator of λ that is "most likely" to have produced the data. That is,

L(X, \hat{\lambda}) = \max\{\, L(X, \lambda) : \lambda \in \Theta \,\}    (3-2)

where Θ denotes the parameter space. The following equation is solved to obtain the estimate of λ:

\frac{\partial}{\partial \lambda} \left[ \ln L(X, \lambda) \right] = 0    (3-3)

As indicated in Section 3.3, the Poisson model is used to characterize the random process

associated with transmission line failure. Since the outage counts of the lines (X1, X2, …, Xn) are independent and identically distributed, the likelihood function is

L(X, \lambda) = \prod_{i=1}^{n} f(X_i; \lambda) = \frac{e^{-n\lambda}\, \lambda^{\sum_{i=1}^{n} x_i}}{\prod_{i=1}^{n} x_i!}    (3-4)

where xi is the number of failures of each observed transmission line i. The log likelihood function is written as

l(\lambda) = \ln(L(\lambda)) = -n\lambda + \left(\sum_{i=1}^{n} x_i\right) \log(\lambda) - \sum_{i=1}^{n} \log(x_i!)    (3-5)

Using equation (3.3), the ML estimator of λ is

\lambda^{*} = \frac{1}{n} \sum_{i=1}^{n} x_i    (3-6)

That λ* maximizes (3-4) can be verified by confirming that the second derivative of (3-5) with respect to λ is negative when λ is replaced by the value λ*. That is,

\left. \frac{\partial^2 l}{\partial \lambda^2} \right|_{\lambda = \lambda^{*}} = -\sum_{i=1}^{n} x_i \, \frac{1}{(\lambda^{*})^{2}} < 0    (3-7)


If the sample time is T hours and the average line length is \bar{L} = \frac{1}{n}\sum_{i=1}^{n} L_i, then the failure rate per mile (failures/(hour·mile)) is

\lambda^{*} = \frac{1}{n} \sum_{i=1}^{n} x_i \Big/ (T \bar{L}) = \sum_{i=1}^{n} x_i \Big/ \left( T \sum_{i=1}^{n} L_i \right)    (3-8)
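In code, the two estimates of eqs. (3-6) and (3-8) for one pool are just sums over the pooled outage counts; the argument names below are illustrative.

```python
import numpy as np

def pooled_failure_rates(outage_counts, line_lengths_mi, T_hours):
    """ML estimates for one data pool.
    Returns (lambda_star, lambda_per_mile):
      lambda_star     = (1/n) * sum(x_i)            -- eq. (3-6)
      lambda_per_mile = sum(x_i) / (T * sum(L_i))   -- eq. (3-8), failures/(hour*mile)
    """
    x = np.asarray(outage_counts, dtype=float)
    L = np.asarray(line_lengths_mi, dtype=float)
    lambda_star = x.mean()
    lambda_per_mile = x.sum() / (T_hours * L.sum())
    return lambda_star, lambda_per_mile
```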

3.5 Failure Rate Calculation Related to Weather

3.5.1 Why Choose Regression Analysis

Regression is a statistical tool for capturing from data the relationship between one or

more independent variables, (X1, X2, …, Xk) and a single, continuous variable Y. In our application, we seek a relation to describe (e.g., predict) the dependent variable Y, failure

rate, as a function of the independent variables (X1, X2, …, Xk), the weather conditions. We note, however, that the finding of a "statistically significant" association does not establish a causal relationship. We have explored a 4-year database of outages and weather conditions using significance testing to detect valid associations between failure rate and the following weather conditions: wind speed, temperature, dew point, illumination, and humidity. Among these, valid statistical associations were found only for wind speed and temperature. It is intuitive that wind speed and failure rate would be related, as there is a clear causal relationship between high wind and conductor disturbances. Temperature is perhaps less intuitive, but we conjecture that this association is due to the additional heat created by higher ambient temperatures together with the higher resistive heating in the conductor caused by generally more heavily loaded circuits. Lightning, another important issue for security decisions, could also be considered in the regression model; since the relevant data are not available, we only describe two ways to incorporate it in the model, without test results. Several general strategies are used to study the relationship between variables by means of regression analysis. The most common of these is called the forward method. This strategy


begins with a simply structured model, usually a straight line, and uses progressively more complex models until the association is captured with acceptable accuracy. We have found that linear regression yields acceptable accuracy in our capturing the association between failure rate and associated weather conditions.

3.5.2 Applying Multiple Linear Regression

A transmission outage is a rare event, and as a result, the failure rate λ is typically a small value, so we choose to capture the log of failure rate in our regression model. The multiple

linear regression [83][84] yields B = [b0, b1, b2]^T in

\lambda'(x_1, x_2) = b_0 + b_1 x_1 + b_2 x_2    (3-9)

In order to apply regression, we must have data samples of λ′ versus x1 (temperature) and x2 (wind speed). But initially we have only records of outage occurrences together with the temperature and wind speed at which they occurred. We apply a data transformation process to each data pool to obtain the desired failure rate data that will serve as the input to the regression analysis. Two approaches are given below for handling lightning, depending on the type of lightning data available.

1. If lightning is quantitatively measured, we can add a variable x3, indicating lightning strength and associated with a parameter b3, to expression (3-9). The influence of lightning on the probability is then reflected in the regression parameter b3.

2. If lightning is assessed as a binary variable, it can be used as a grouping criterion. In this case, the influence of lightning on the probability is reflected in the regression parameter vector B estimated for each group.

We do not discuss the impact of lightning further in the following illustration. In the data transformation process, for a given weather zone/voltage group, weather is

discretized in the x1-x2 plane into a number of weather blocks each of which will be

associated with a particular value of λ’. Then we identify the nb contingencies in the pool of

outage data that occurred under weather characterized by weather block b. If the exposure time is T hours (the time over which the outages in the database are recorded) and the average line length is L, then the desired λ′ for the block, in units of failures/(hour·mile), is

\lambda' = \log\left\{ \frac{1}{n_b} \sum_{i=1}^{n_b} z_i \Big/ (T \cdot L) \right\} = \log\left\{ \sum_{i=1}^{n_b} z_i \Big/ \left( T \sum_{i=1}^{n_b} L_i \right) \right\}    (3-10)

where zi is the number of outage occurrences of the ith circuit. Equation (3-10) provides a failure rate estimate for each weather block in the weather zone. We have used intervals of 10°F for temperature from 0 to 100°F, and 3 mph for wind speed from 0 to 60 mph. For each data pool, linear regression is performed to obtain the functional dependence of failure rate on temperature (x1) and wind speed (x2). Application of regression results in the vector of

regression parameters B = [b0, b1, b2]^T, so that, if the near-future weather condition X and the line length L are known, the failure rate of that circuit may be forecast using equation (3-11):

\lambda = L \cdot \exp(X B); \qquad X = [\,1 \;\; x_1 \;\; x_2\,], \qquad B = [\,b_0 \;\; b_1 \;\; b_2\,]^{T}    (3-11)
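Because the least-squares and maximum-likelihood estimates coincide for a linear model with normal errors (Section 3.4.2), the fit of (3-9) and the prediction of (3-11) can be sketched directly with an ordinary least-squares solve; the function names below are illustrative.

```python
import numpy as np

def fit_log_failure_rate(x1, x2, log_lambda):
    """Fit eq. (3-9), log lambda' = b0 + b1*x1 + b2*x2, to the per-block data of
    one pool (x1, x2: block temperature and wind-speed coordinates, log_lambda:
    per-mile log failure rates from eq. (3-10))."""
    X = np.column_stack([np.ones(len(x1)), x1, x2])     # design matrix [1 x1 x2]
    B, *_ = np.linalg.lstsq(X, np.asarray(log_lambda, float), rcond=None)
    return B                                            # B = [b0, b1, b2]

def predicted_failure_rate(B, x1, x2, length_miles):
    """Eq. (3-11): lambda = L * exp([1 x1 x2] . B), failures per hour for a
    circuit of the given length under forecast weather (x1, x2)."""
    return length_miles * np.exp(B[0] + B[1] * x1 + B[2] * x2)
```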

3.5.3 Hypothesis Testing

The most important hypothesis test dealing with the parameters of the straight-line model relates to whether the slope of the regression line differs significantly from zero or, equivalently, whether X helps to predict Y using a straight-line model. The appropriate null hypothesis for testing model (3-9) may be stated as H0: "Both independent variables, temperature and wind speed, considered together do not explain a significant amount of the variation in λ′." Equivalently, we may state the null hypothesis as

H0: b1 = b2 = 0 (3-12)

To perform the test, we calculate the F statistic

F = \frac{MS_{\text{regression}}}{MS_{\text{residual}}} = \frac{(SSY - SSE)/k}{SSE/(n - k - 1)}    (3-13)


where k = 2, and SSY = \sum_{i=1}^{n} (Y_i - \bar{Y})^2 and SSE = \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2 are the total and residual sums of squares, respectively. The computed value of F is then compared with the critical point F_{k, n-k-1, 1-\alpha}, where α is the preselected significance level. We reject H0 if the computed F exceeds the critical point.
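The test statistic of (3-13) is equally easy to compute from the fitted values; scipy's F distribution can supply the critical point. This is a generic sketch, not tied to the specific block data set used in the next section.

```python
import numpy as np
from scipy.stats import f as f_dist

def regression_f_test(y, y_hat, k=2, alpha=0.01):
    """F statistic of eq. (3-13) for H0: b1 = ... = bk = 0, plus the critical
    point F_{k, n-k-1, 1-alpha}; H0 is rejected if the statistic exceeds it."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    ssy = np.sum((y - y.mean()) ** 2)          # total sum of squares
    sse = np.sum((y - y_hat) ** 2)             # residual sum of squares
    F = ((ssy - sse) / k) / (sse / (n - k - 1))
    F_crit = f_dist.ppf(1.0 - alpha, k, n - k - 1)
    return F, F_crit, F > F_crit
```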

3.6 Calculation Procedure

We have established an automated procedure to compute the parameters B = [b0, b1, b2]^T. It involves the use of four data tables, described below:

1. Outage history: This table contains the circuit outage histories.
2. Data dictionary: This table contains, for each bus: station code, voltage level, county of the bus, weather zone, load zone, CM zone, and SCADA name.
3. Line length table: This table contains the length of all circuits.
4. Weather history: This table contains temperature, wind speed, dew point, humidity, and illumination for each hour of year 2001 to year 2004 by weather zone. There are 8 weather zones, WZ1~WZ8. We used the from-bus of each circuit to determine its weather zone.

The flowchart of the procedure is shown in Figure 3-4. The steps of Figure 3-4 are described as follows:

1. Find the weather zone for each outage: For each outage in the outage history table, sort through the data dictionary to identify the weather zone for that outage.
2. Find the line length for each outage: For each outage, sort through the line length table to identify the line length for that outage. If the length information is not available for an outage, use the circuit impedance to estimate its length.
3. Find the temperature and wind speed for each outage: For each outage, use the outage time and the weather history table to determine the temperature and wind speed during the outage.


Figure 3-4: Data process for regression analysis

4. Pool the outages: Pool (or group) the outages by weather zone and voltage level. Since there are 8 weather zones and 3 voltage levels, this results in 24 different pools of outages.
5. Transform the data: Each of the 24 pools (or groups) of outage records is further segregated into weather partitions with a step size of 10°F for temperature from 0 to 100°F and 3 mph for wind speed from 0 to 60 mph. Equation (3-10) is then used to transform the outage records into values of λ′ and the corresponding temperature and wind speed (x1, x2).


6. Determine the functional dependence: For each group, perform regression analysis as described in Section 3.5 to obtain the functional dependence of the failure rate (we actually regress on log(λ)) on temperature (x1) and wind speed (x2). This procedure results in the vector of regression parameters B = [b0, b1, b2]^T.

3.7 Illustrative Example

To illustrate, the failure rate for a sample pool, the WZ7 weather zone and 69 kV lines, was calculated. A total of 260 outages were found in this group after step 4. The total line length is about 1962.6 miles. The distribution of the number of outages among the weather blocks (described by temperature and wind speed) is shown in Figure 3-5. A similar plot is shown in Figure 3-6, except that the total outage duration is plotted among the weather blocks. The number of outages per hour is plotted in Figure 3-7.

Figure 3-5: Distribution among weather blocks of number of outages (plot omitted: number of outages, 2001~2004, versus temperature (°F)/10 and wind speed (mph)/3)


Figure 3-6: Distribution among weather blocks of outage duration (plot omitted: duration time in hours, 2001~2004, versus x1: temperature (°F)/10 and x2: wind speed (mph)/3)

Figure 3-7: Distribution among weather blocks of number of outages per hour (plot omitted: outages per hour, 2001~2004, versus x1: temperature (°F)/10 and x2: wind speed (mph)/3)

In the data transformation of step 5, log(λ) was calculated for each weather block using eq. (3-10). The regression parameters for this group were then obtained in step 6: b0 = -14.0625, b1 = 0.1588, and b2 = 0.1921, so that the regression function is

\log \lambda'(x_1, x_2) = -14.0625 + 0.1588\, x_1 + 0.1921\, x_2    (3-14)
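As a quick check of the fitted model, evaluating (3-14) at 71°F and 4.5 mph, i.e. x1 = 7.1 and x2 = 1.5 with the temperature/10 and wind-speed/3 scaling used on the axes of Figures 3-5 through 3-8, gives

\log \lambda' = -14.0625 + 0.1588\,(7.1) + 0.1921\,(1.5) \approx -12.647, \qquad \lambda' = e^{-12.647} \approx 3.2 \times 10^{-6}\ \text{failures/(hour·mile)},

which, for a 10-mile line, corresponds to about 3.2×10⁻⁵ failures per hour, matching the hourly predictions tabulated in TABLE 3-1 below.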


The 3D shaded surface of regression is plotted in Figure 3-8.


Figure 3-8: Regression analysis for WZ7 (weather zone), 69kV line outage

The F statistic calculated by eq. (3.13) is 11.3259. In this example, k = 2 and n = 43. The critical point for α = 0.01 is F(2, 40; 0.99) = 5.18. Thus, we reject H0 at α = 0.01. Interpreting the test result, we conclude that, based on the observed data, temperature and wind speed significantly help to predict the failure rate λ. One of the important applications of (3.11) is predicting the failure rate for a single

circuit. As an example, we apply it to a 10-mile-long 69 kV circuit in the WZ7 weather zone for a future day with a forecasted daily weather profile, shown in TABLE 3-1. The failure rate is then calculated by (3.14); the results are shown in TABLE 3-1 and plotted against time in Figure 3-9.
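A minimal Python sketch of this prediction step is given below. It assumes, purely for illustration, that the regression of eq. (3-14) yields a per-mile, per-hour rate with x1 = temperature/10 and x2 = wind speed/3 (the weather-block step sizes of step 5), and that the circuit rate scales linearly with line length; the exact normalization used to produce TABLE 3-1 is not restated here, so these numbers should not be expected to reproduce the table.

import math

B0, B1, B2 = -14.0625, -0.1588, 0.1921   # regression parameters from eq. (3-14)

def circuit_failure_rate(temp_f, wind_mph, length_miles=10.0):
    # Assumed scaling: x1 = temperature/10, x2 = wind speed/3, and the circuit rate
    # is the per-mile rate multiplied by the line length.
    log_lam_per_mile = B0 + B1 * (temp_f / 10.0) + B2 * (wind_mph / 3.0)
    return length_miles * math.exp(log_lam_per_mile)

# Hypothetical hourly forecast points (temperature F, wind speed mph) for the circuit.
forecast = [(71, 4.5), (80.5, 9.0), (88, 11.5)]
for temp, wind in forecast:
    print(f"T = {temp:5.1f} F, wind = {wind:4.1f} mph -> "
          f"lambda = {circuit_failure_rate(temp, wind):.2e} outages/h")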


TABLE 3-1: Failure rate prediction (WZ7, 69 kV, 10-mile line)

Time                 Temperature (°F)   Wind Speed (mph)   Failure Rate (Log)   Failure Rate
'××-××-××  0:00:00'        71               4.5               -12.647            3.22E-05
'××-××-××  1:00:00'        70               5                 -12.631            3.27E-05
'××-××-××  2:00:00'        69               4.5               -12.679            3.12E-05
'××-××-××  3:00:00'        68               4.5               -12.695            3.07E-05
'××-××-××  4:00:00'        67               5.5               -12.647            3.22E-05
'××-××-××  5:00:00'        66.5             6.5               -12.591            3.40E-05
'××-××-××  6:00:00'        66.5             7                 -12.559            3.51E-05
'××-××-××  7:00:00'        69.5             7                 -12.511            3.69E-05
'××-××-××  8:00:00'        73               8                 -12.391            4.15E-05
'××-××-××  9:00:00'        76.5             8.5               -12.304            4.53E-05
'××-××-×× 10:00:00'        80.5             9                 -12.208            4.99E-05
'××-××-×× 11:00:00'        82.5             9                 -12.177            5.15E-05
'××-××-×× 12:00:00'        85               10                -12.073            5.71E-05
'××-××-×× 13:00:00'        86.5             10.5              -12.017            6.04E-05
'××-××-×× 14:00:00'        87.5             11                -11.969            6.34E-05
'××-××-×× 15:00:00'        88               11.5              -11.929            6.60E-05
'××-××-×× 16:00:00'        88               10                -12.025            5.99E-05
'××-××-×× 17:00:00'        86.5             8                 -12.177            5.15E-05
'××-××-×× 18:00:00'        84               6.5               -12.313            4.49E-05
'××-××-×× 19:00:00'        82               7                 -12.313            4.50E-05
'××-××-×× 20:00:00'        80               8                 -12.28             4.64E-05
'××-××-×× 21:00:00'        78               9                 -12.248            4.79E-05
'××-××-×× 22:00:00'        76               8                 -12.344            4.36E-05
'××-××-×× 23:00:00'        74               8                 -12.376            4.22E-05



Figure 3-9: Failure rate curve of one line over a day

Equation (3.11) may be used in computing contingency probabilities at a particular time and date. These probabilities are used to provide operators with system, zone, and contingency risks. Some contingencies were chosen as representative samples and are given in TABLE 3-2.

TABLE 3-2: Representative sample of contingency probabilities (3:10pm, 8/26/2004)

Contingency ID   Weather Zone   Voltage level (kV)   Line length (miles)   Temp (F)   Wind speed (mph)   Contingency Probability (per hour)
xx_1             WZ2            345                  79.1                  92.5       13.5               1.0974e-004
xx_2             WZ5            345                  26.5                  95.5       18.75              1.2671e-004
xx_3             WZ7            345                  38.3                  97.6       19.2               7.1014e-005
xx_4             WZ7            138                  59.4                  97.6       19.2               3.9730e-004
xx_5             WZ6            138                  23.8                  95.5       15                 6.2626e-005
xx_6             WZ5            138                  18.8                  95.5       18.75              6.8272e-005


3.8 Summary

A procedure for contingency probability estimation, with application in online risk-based security assessment, has been described in this chapter. The proposed approach applies statistical methods to address two problems: limited data, and the influence of weather and geographic conditions on contingency probability. The proposed approach has the following characteristics:
1. The Poisson distribution is applied for computing the failure rate; it is a good model for describing event occurrences.
2. Parameter estimation for the Poisson distribution is performed using maximum likelihood estimation.
3. Estimator variance is reduced by grouping transmission lines and pooling the associated outage data.
4. Transmission line failure rate depends on weather conditions, and so the distribution of outage information with respect to weather conditions can be used to predict it. In this work, regression analysis is applied and the result is used to compute failure rates.
Results show that the failure rate is not constant with time, and probabilities for different contingencies can be computed for the same point in time. The result is useful for online risk assessment, where contingency probability and post-contingency severity are combined to identify high-risk contingencies and zones, information that is useful for security-economy decisions within the control center.


CHAPTER 4: RISK BASED SECURITY CONTROL DESIGN

4.1 Introduction

Quantification of the security level via the risk calculation previously described offers another approach to decision-making in power systems. Below, we provide a few typical security control applications where this approach is applicable. This list is not exhaustive, as it is expected that additional applications would be identified following its use.
1. Unit commitment: In deciding whether to commit a unit to relieve a high transmission flow, the operator would want to weigh the risk associated with the flow against the cost of committing the additional unit.
2. Economic dispatch: Dispatching interconnected units to minimize production costs is often constrained by security limits. Traditionally, these limits have been hard. However, use of hard limits sometimes results in acceptance of high energy costs even though the actual risk may be very low. A risk approach can identify and quantify these situations.
3. Market lever: The risk is able to function as a lever that adjusts the behavior of market participants via an economic mechanism to avoid system security problems, rather than mandatory curtailment of transactions based on hard rules.

4. Preventive/corrective action selection: The preventive/corrective (P/C) action is very important for maintaining the power system at an acceptable risk level. The selection of such an action is a complicated decision-making process in which the influence of an action must be assessed for multiple problems, and frequently, what improves one problem may degrade another. Offering the best action, or a list of possible actions, will help the operator to operate efficiently under highly stressed conditions. The traditional corrective/preventive action selection solves an optimization problem, commonly known as the security constrained optimal power


flow (SC-OPF). The objective function is normally the production cost, and the constraints include the power flow equalities and the limits on component performance (branch flows, bus voltage limits, generator capability). The above four applications are related to some extent. For example, economic dispatch, using generation outputs as controls, could be either a preventive action if taken ahead of an emergency state or a corrective action taken after it. The method developed in this research is not control dependent, which means various controls can be implemented, such as capacitor switching, transformer tap control, FACTS devices and so on. However, in the examples of the following chapters we heavily use generation outputs as controls because of their clearly identified cost.

Human beings are decision makers. Sometimes they choose among alternative courses of action with little conscious deliberation, while for major personal or business problems, careful weighing of all alternatives and their consequences is required; the complexity of the problems, competitive pressures and a limited time frame can make the process of decision making difficult and agonizing. It is very beneficial to decision makers if they can obtain some aid or guidance in the process. In power systems, control center operators are concerned with the day-to-day operation of the power system. They have to ensure that the power system runs economically and securely. To make these decisions, the operators need to consider the security level of the power system and whether it is acceptable, and if not, the cost and effect of enhancing the security level. Alternatives the operator has to consider include when and how to re-dispatch generation for security purposes, whether to run the system at risk in spite of outages and violations, and how to initiate load curtailment if system risk cannot be eliminated by other measures. Because continuously increasing load is not matched by a commensurate increase in transmission capability, operators face control-room security-economy decision problems more and more frequently. It is desirable to have a decision support system that provides suggestions and aids them in making consistent and sound decisions.


In order to improve the quality of decisions, some knowledge of decision theory is necessary. A decision problem consists of the following elements: decision maker, candidate alternatives, control variables, states of nature, outcomes, and decision criteria. Different decision problems have their own contexts and thus their own characteristics, and we can classify them into different groups accordingly. Based on the candidate alternatives, decision problems can be treated with descriptive models, in which a limited, fixed number of alternatives are evaluated to choose the best one for the decision maker, or prescriptive models, in which there are infinitely many alternatives and the task of the model is to indicate good choices for the decision maker. Based on states of nature, a decision is under certainty if the decision maker knows which state of nature will obtain and what the outcome of each alternative will be; under risk if the decision maker can identify each state of nature and its probability of occurrence and knows the outcome associated with each alternative and state of nature; and under uncertainty if the decision maker knows the specific outcomes associated with each alternative under each state of nature, but does not know, or is unwilling to express quantitatively, the probabilities associated with the states. Based on decision criteria, decision problems can be divided into two groups: single-criterion if there is only one criterion and multi-criterion if several conflicting criteria are involved.

In electric power systems, control center operators need to consider both the economy and the security of operation. Economy and security are usually in conflict with each other. Thus the decision problem faced by control center operators is essentially a multi-criterion decision-making problem. Identifying the trade-off character of economy and security, and finally achieving a comprehensive optimal solution, is the main task of security-related decision making.


Risk-based optimization control, representing system health with a risk index, can drive the system to a more secure condition than traditional SCOPF control. In investigating risk-based security control, we notice several issues that are meaningful and helpful to system awareness.
1. The system may operate at different risk levels within the N-1 security boundary. A probabilistic security assessment has been previously developed, illustrated and compared to the traditional deterministic method [12][85]. Risk contours associated with deterministic contingency constraints show that the deterministic boundary does not necessarily correspond to constant risk. Thus, decisions based on deterministic assessment may result in either very low risk or unintended high risk. In addition, operators are interested in the operating cost corresponding to different risk levels. The decision maker can balance economic profit and system security through trade-off analysis of risk and cost.
2. Various risks (overload, low voltage, voltage instability) exist, each of which endangers the system in a different scenario. Usually, system risk arising from several problems cannot be mitigated by the same control action. SPS and RAP are used to correct abnormal system conditions, but they can also cause adverse consequences. The decision maker would like to trace the optimal trade-off curve of these risks with respect to the control variables, and compromise among the

conflicting risk indices, which is a multi-objective (MO) problem.
3. Frequently, decreasing the risk of one contingency increases the risk of another, making risk control complicated.
4. Traditional single-objective optimization can only give one choice for the conflicting objectives, so a comprehensive control result is not easily attained.
When an optimization problem involves more than one objective function, the task of finding one or more solutions is known as multi-objective optimization. In the parlance of management, such search and optimization problems are known as multiple criterion


decision-making (MCDM). Different solutions may produce trade-offs (conflicting scenarios) among the objectives. A solution that is extreme (in a better sense) with respect to one objective requires a compromise in the other objectives, which prevents one from choosing a solution that is optimal with respect to only one objective. Multi-objective optimization methodologies have been widely used in economy and reliability trade-off analysis of electricity markets [86][87]. Multi-objective methods provide a set of optimal solutions, the Pareto set, instead of the single solution point given by a traditional optimization method. Therefore, multi-objective optimization techniques can handle conflicts between multiple objectives and help the decision maker choose a better solution. In this chapter we apply multi-objective methods to provide decision-making support, treating minimization of cost and of the different risks as multiple objectives and identifying the trade-offs among them. A risk-based multi-objective optimization problem is formulated to trace the trade-off relationship between risk (security level) and cost (economy) in Section 4.2. Section 4.3 summarizes the basic properties of the multi-objective optimization problem. Typical multi-criterion decision-making methods are introduced in Section 4.4. Risk based security control design (RBSCD) is presented in Section 4.5. Section 4.6 concludes.

4.2 Formulation of Risk Based Multi-objective Optimization

Optimal power flow (OPF) has been widely used in daily operation in the control center, especially in LMP-based markets. The ability to use different objective functions and constraints makes it a very flexible analytical tool. The adjustable or control variables may include generator MW outputs, voltages, load shedding, LTC transformer tap positions and so on. To clarify the essential difference between the models, we take generator MW outputs as the control variables.


Traditionally, the sum of generation costs is the only objective function in the security constrained OPF model. Since security is treated as deterministic constraints in SCOPF, system stress level and contingency likelihood are not addressed. In contrast, risk information not only captures system state information but is also quantified and can be used as an additional control objective. Based on this idea, the power system OPF control approach can be transformed from a single economic objective to multiple objectives with trade-off analysis between security and economy.

TABLE 4-1: Model expressions of SCOPF and RBMO

    Model 1 (SCOPF):                         Model 2 (RBMO):
        min f(P)                                 min {f(P), Risk(P)}
        subject to                               subject to
            h(P) = 0                                 h(P) = 0
            g_min ≤ g(P) ≤ g_max                     g_min ≤ g(P) ≤ g_max
            g'_min ≤ g'(P) ≤ g'_max                  Risk(P) = Risk(Pr, g(P), g'(P))

TABLE 4-1 gives the expressions of the classic SCOPF model and the risk based multi-objective optimization (RBMO) model, where P is the vector of real power injections at each bus, the objective function f(P) is the sum of generation costs, Risk(P) denotes the system risks, the equality constraints h(P) = 0 are the power flow equations, the inequality constraints g_min ≤ g(P) ≤ g_max are the pre-contingency limits on control variables (generation and load limits), line flows and voltage levels, g'_min ≤ g'(P) ≤ g'_max are the corresponding post-contingency constraints, which are constructed for all N-1 contingencies, and Pr is the vector of probabilities of the N-1 contingencies. As in SCOPF, a contingency list created by a selection or ranking program is also used in RBMO. Here, the risk calculation does not consider uncertainty in the operating condition, but uses the following expression:

    Risk_t = Σ_{i=1}^{n} Pr_t(E_i) · Sev_t(E_i)                                 (4-1)


where Pr_t(E_i) is the state probability, Sev_t(E_i) is the severity resulting from state i, and n is the total number of system states. In addition, contingency probability is computed using real-time data. In Chapter 3, we used statistical theory to characterize the relationship of contingency probability with circuit (line length, voltage level), environmental (weather), and geographic (location) features. This characterization is expressed as:

    Pr(X, Y) = F(X, Y),  X = (x1, x2, x3),  Y = (y1, y2)                        (4-2)

where X = (x1, x2, x3) is a vector representing the line length and weather conditions (temperature, wind speed), and Y = (y1, y2) indicates the voltage level and geographic location. The severity of each contingency is computed using online EMS data. Overload risk and low voltage risk are then evaluated as the sum, over contingencies, of the product of each contingency's probability and severity. In the RBMO model, corrective actions can be added to mitigate the severity of a contingency, so risk is reduced. Also, load can be treated as elastic, as in the load shedding model we introduce in Chapter 5. Additionally, other costs, such as generation redispatch or reserve, can be considered. Since risk and the related optimization are our emphasis, we do not discuss the impact of these factors further.
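As a small illustration of how the risk objective of (4-1) is assembled, the sketch below sums probability-weighted severities over a contingency list. The probabilities would come from the weather-dependent model of Chapter 3 and the severities from post-contingency analysis; the numbers and field names here are invented for the example.

# Minimal sketch of eq. (4-1): Risk_t = sum_i Pr_t(E_i) * Sev_t(E_i).
# Probabilities and severities below are illustrative placeholders only.
contingencies = [
    {"id": "xx_1", "prob_per_hour": 1.10e-4, "severity": 2.3},
    {"id": "xx_2", "prob_per_hour": 1.27e-4, "severity": 0.0},
    {"id": "xx_4", "prob_per_hour": 3.97e-4, "severity": 5.1},
]

risk = sum(c["prob_per_hour"] * c["severity"] for c in contingencies)
print(f"System risk for this hour: {risk:.3e}")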

TABLE 4-2: Comparison of SCOPF and RBMO

    Model                          SCOPF     RBMO
    Objective                      Single    Multiple
    Security stress defined?       No        Yes
    Security level controllable?   No        Yes
    Trade-off curve                No        Yes

The main features of the SCOPF and RBMO models are contrasted in TABLE 4-2. Since the risk index defines the system health level and is another objective in the optimization model,


RBMO has the capability to identify and control system security in a trade-off optimal manner with respect to economic cost.

4.3 Basic Properties of Multi-objective Optimization Problem

4.3.1 Single and Multi-objective Optimization

When an optimization problem modeling a physical system involves only one objective function, the task of finding the optimal solution is called single-objective (SO) optimization. When more than one objective function is involved, the task of finding one or more optimum solutions is known as multi-objective (MO) optimization. Without loss of generality, we discuss the fundamental difference between single- and multi-objective optimization using a two-objective optimization problem. For two conflicting objectives, each objective corresponds to a different optimal solution. Optimizing both objectives results in a set of trade-off solutions, none of which is the best with respect to both objectives: no solution in this set is better than any other solution in the set in both objectives. Since a number of solutions are optimal, in an MO optimization problem many such trade-off optimal solutions are important. Compared to SO problems, MO problems are more difficult to solve, because there is no unique solution; rather, there is a set of acceptable trade-off optimal solutions. This set is

called the Pareto front. MO optimization is in fact considered the analytical phase of the MCDM process, and consists of determining all solutions to the MO problem that are optimal in the Pareto sense. The preferred solution – the one most desirable to the designer or decision maker – is selected from the Pareto set. None of these trade-off solutions is the best with respect to both objectives; this is the fundamental difference between an SO and an MO optimization task. The important solution in SO optimization is the lone optimum solution, whereas in MO optimization, a number of optimal solutions arising from trade-offs between conflicting objectives are important. With this point in mind, SO


optimization is a degenerate case of MO optimization. In detail, two differences between single and multi-objective optimization are given in the following subsections [88].

4.3.1.1 Two Goals Instead of One

In single-objective optimization, there is one goal – the search for the optimum solution. Although the search space may have a number of local optimal solutions, the goal is to find the global optimum solution. In multi-objective optimization, however, there are clearly two goals. One is progressing towards the Pareto-optimal front. The other is maintaining a diverse set of solutions in the non-dominated front. An algorithm that finds a closely packed set of solutions on the Pareto-optimal front satisfies the first goal of convergence to the Pareto-optimal front, but does not satisfy the second goal of maintaining a diverse set of solutions. Since both goals are important, an efficient multi-objective optimization algorithm must work on satisfying both of them.

4.3.1.2 Dealing with Two Search Spaces

Another difficulty is that a multi-objective optimization involves two search spaces, instead of one. In a single objective optimization, there is only one search space – the decision variable space. An algorithm works in this space by accepting and rejecting solutions based on their objective function values. For multi-objective optimization, in

addition to the decision variable space, there also exists the objective or criterion space. Although these two spaces are related by a unique mapping between them, the mapping is often nonlinear and the properties of the two search spaces are not similar. Proximity of two solutions in one space does not imply proximity in the other space. In some algorithms, the search progress in the objective space is used to steer the search in the decision variable space.


4.3.2 Concept of Domination

We assume that there are M minimization objective functions. We use the operator < between two solutions i and j to denote that solution i is better than solution j on a particular objective. Similarly, i > j for a particular objective implies that solution i is worse than solution j on this objective. If an objective function is to be maximized, the operator has the opposite meaning. In many applications, the duality principle is used to convert a maximization problem into a minimization problem and treat every problem as one of minimizing all objectives. Here, the following definition refers only to minimization objectives. A solution x(1) is said to dominate another solution x(2) if both conditions 1 and 2 are true:

1. The solution x(1) is no worse than x(2) in all objectives, i.e., f_j(x(1)) ≤ f_j(x(2)) for all j = 1, 2, …, M.
2. The solution x(1) is strictly better than x(2) in at least one objective, i.e., f_j(x(1)) < f_j(x(2)) for at least one j = 1, 2, …, M.
If either of the above conditions is violated, the solution x(1) does not dominate the solution x(2). If x(1) dominates the solution x(2), it is also customary to write any of the following:

• x(2) is dominated by x(1).
• x(1) is not dominated by x(2).
• x(1) is non-inferior to x(2).
Figure 4-1 shows a two-objective minimization problem with five different solutions shown in the objective space. We can use the above definition of domination to decide which solution is better among any two given solutions in terms of both objectives. For example, if solutions 1 and 2 are to be compared, we observe that solution 1 is better than solution 2 in both objective functions. Thus, both of the above conditions for domination are satisfied and we may write that solution 1 dominates solution 2. We take another instance of comparing


solutions 1 and 5. Here, solution 5 is better than solution 1 in the first objective and is no worse (in fact, they are equal) than solution 1 in the second objective. Thus, both of the above conditions for domination are satisfied and we may write that solution 5 dominates solution 1.
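The domination test is straightforward to state in code. The short Python sketch below (with made-up objective values that only loosely mimic the five solutions of Figure 4-1) checks the two conditions above for a minimization problem.

def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical (f1, f2) values for five solutions; not the exact points of Figure 4-1.
sols = {1: (2.0, 4.0), 2: (3.0, 5.0), 3: (4.0, 2.0), 4: (5.0, 3.5), 5: (1.0, 4.0)}
print(dominates(sols[1], sols[2]))   # True: solution 1 is better in both objectives
print(dominates(sols[2], sols[4]))   # False: 2 and 4 are non-dominated w.r.t. each other
print(dominates(sols[5], sols[1]))   # True: 5 is better in f1 and equal in f2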

4.3.3 Pareto-Optimality

Continuing with the solution comparisons of the previous section, let us compare solutions 2 and 4 in Figure 4-1. We observe that solution 2 is better than solution 4 in the first objective, but worse in the second objective. Thus, the first condition is not satisfied for this pair. This simply means that we can neither conclude that solution 2 dominates solution 4, nor that solution 4 dominates solution 2. The two solutions are non-dominated with respect to each other. When both objectives are important, it cannot be said which of the two solutions 2 and 4 is better.

Figure 4-1: Solutions of a two-objective minimization problem

For a given finite set of solutions, we can perform all possible pair-wise comparisons and find which solutions dominate others and which solutions are non-dominated with respect to


each other. At the end, we expect to have a set of solutions, no member of which dominates any other member. This set also has another property: for any solution outside of this set, we can always find a solution in this set that dominates it. Thus, this particular set has the property of dominating all other solutions which do not belong to it. In simple terms, this means that the solutions of this set are better than the rest of the solutions. It is called the non-dominated set, or the Pareto-optimal set, for the given set of solutions. Figure 4-2 [89] marks the Pareto-optimal set (Pareto front) with continuous curves for two scenarios with two objectives. Each objective can be minimized or maximized.

Figure 4-2: Illustration of Pareto front for a two-objective optimization problem

4.3.4 Finding Efficient Solutions

It is clear from the above discussion that, in principle, the search space in the context of multiple objectives can be divided into two non-overlapping regions, namely one which is optimal and one which is non-optimal. Although a two-objective problem is illustrated above, this is also true in problems with more than two objectives. In the case of conflicting objectives, usually the set of optimal solutions contains more than one solution, as shown in Figure 4-2. In the presence of multiple Pareto-optimal solutions, it is difficult to prefer one solution over the other without any further information about the problem. If higher level

information is satisfactorily available, this can be used to make a biased search. However, in the absence of any such information, all Pareto-optimal solutions are equally important. Hence, in the light of the ideal approach, it is important to find as many Pareto-optimal solutions as possible in a problem. Thus, it can be conjectured that there are two goals in multi-objective optimization:
1. To find a set of solutions as close as possible to the Pareto-optimal front.
2. To find a set of solutions as diverse as possible.
The first goal is mandatory in any optimization task. Converging to a set of solutions which are not close to the true optimal set is not desirable. It is only when solutions converge close to the true optimal solutions that one can be assured of their near-optimality properties. This goal of multi-objective optimization is shared with the optimality goal of single-objective optimization. On the other hand, the second goal is entirely specific to multi-objective optimization. In addition to converging close to the Pareto-optimal front, the solutions must also be sparsely spaced in the Pareto-optimal region. Only with a diverse set of solutions can we be assured of having a good set of trade-off solutions among the objectives.

4.3.5 Non-Dominated Sorting of a Population

The target of multi-objective optimization algorithms is to find the best non-dominated front in a population. The population (of feasible solutions) is classified into two sets: the non-dominated set and the remaining dominated set. Moreover, some algorithms require the entire population to be classified into various non-domination levels. In such algorithms, the population needs to be sorted according to an ascending level of non-domination. The best non-dominated solutions are called non-dominated solutions of level 1. Once the best non-dominated set is identified, its members are temporarily disregarded from the population in order to find the solutions of the next level of non-domination. This procedure is continued until

all population members are classified into a non-domination level. It is important to reiterate that non-dominated solutions of level 1 are better than non-dominated solutions of level 2, and so on.
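A simple (and deliberately unoptimized) way to perform this sorting is to repeatedly extract the current non-dominated set, as in the Python sketch below; faster procedures such as the one used in NSGA-II exist, but the idea is the same. The points are made-up two-objective values.

def dominates(a, b):
    # Minimization: a dominates b if no worse in all objectives and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    # Returns fronts as lists of indices: level 1 first, then level 2, and so on.
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pts = [(2, 4), (3, 5), (4, 2), (5, 3.5), (1, 4)]   # made-up (f1, f2) values
print(non_dominated_sort(pts))                     # e.g. [[2, 4], [0, 3], [1]]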

4.3.6 Approaches to Multi-objective Optimization

Although the fundamental difference between single- and multi-objective optimization lies in the cardinality of the optimal set, from a practical standpoint a user needs only one solution, no matter whether the associated optimization problem is single-objective or multi-objective. In multi-objective optimization, ideally the effort should be made to find the set of trade-off optimal solutions by considering all objectives to be important. After a set of such trade-off solutions is found, a user can then use higher-level qualitative considerations to make a choice. Therefore, the first approach, called the ideal multi-objective optimization procedure, is defined in two steps, as shown in Figure 4-3.

Figure 4-3: Ideal multi-objective optimization procedure


Step 1: Find multiple trade-off optimal solutions with a wide range of values for the objectives.
Step 2: Choose one of the obtained solutions using higher-level information.
However, if a relative preference factor among the objectives is known for a specific problem, we may not need to follow the above procedure, but can instead form a composite objective function as the weighted sum of the objectives, where the weight of an objective is proportional to the preference factor assigned to that objective. This method converts the multi-objective optimization problem into a single-objective optimization problem. When such a composite objective function is optimized, in most cases it is possible to obtain one particular trade-off solution. This procedure is called preference-based multi-objective optimization, as shown in Figure 4-4. Based on the higher-level information, a preference vector w is first chosen. Then the composite function is constructed and optimized to find a single trade-off optimal solution.

Figure 4-4: Preference based multi-objective optimization procedure


The trade-off solution obtained by using the preference-based strategy is largely sensitive to the relative preference vector used in forming the composite function. A change in this preference vector will result in a different solution. Furthermore, an analysis of non-technical, qualitative and experience-driven information is required to find a quantitative relative preference vector. Without any knowledge of the likely trade-off solutions, this is an even more difficult task.

4.4 Introduction of Multi-criterion Decision Making Methods

There is so much discussion of the advantages and disadvantages of different MCDM methods that it is hard to say that one decision-making method is better than the others; the choice of a decision-making method itself becomes a decision-making problem. In this section, we introduce some commonly used methods.

4.4.1 Value or Utility–based Approaches

A value function usually describes a person's preference regarding different levels of an attribute under certainty. Utility functions (a specific type of value function) also represent the attitude toward risk. This method converts each objective to a corresponding value (utility) reflecting the decision maker's preference for that objective. The value (utility) is additive: by adding the values or utilities of all objectives, an index for a certain action can be obtained.

This index can be used for making the decision. The additive function is unquestionably the amalgamation method in widest use, for two good reasons: its simplicity, and the fact that its result tends to be robust with respect to mild non-linearity. Some people oppose this method because they think that considering a single objective function (profit, cost, utility, etc.), as in classical mathematical optimization, presupposes that the decision maker's preferences and values are both complete and definite at the beginning of the decision maker's involvement. The decision is implicitly made; it only remains to be explicated through mathematical search or algorithms. The decision maker can then only say yes or no to the


final outcome. The solution to mathematical optimization is usually unique and no further decision-making effort is required. These people support the interactive techniques mentioned later. Others think the value- or utility-based methods are more interactive than they are thought to be: as the decision maker learns more about the problem and about themselves in the process of using the method, they can change the utility function to reflect more of their preferences. Thus more reasonable and consistent decisions can be made by the decision maker.

4.4.2 ELECTRE IV

ELECTRE IV is the latest version of the ELECTRE series of outranking methods. The decision maker indicates thresholds of indifference and preference for the different criteria. The basic idea for ranking alternatives is to compare the number of alternatives that each alternative strongly outranks and weakly outranks, respectively. This method ranks the options without introducing any criterion importance weightings. The model avoids weights by assuming that no preference structure should be based on the greater or lesser importance of the criteria; no single criterion dominates the decision-making process. But some people are against this method, saying that it introduces "further ad hoc functional forms, which although intuitively appealing, are difficult to verify empirically as models of human preferences."

4.4.3 Evidential Theory

This method offers an efficient way to represent uncertainty and to perform reasoning under uncertainty. For MCDM, each criterion can be regarded as a piece of evidence. Using Dempster's rule of combination, we can aggregate the information from the different pieces of evidence. The final decision is made based on the combined information.


4.4.4 Promethee

This method is quite similar to the ELECTRE III method, except that it does not use a discordance index. It also takes advantage of the most recent developments of preference modeling at that time.

4.4.5 Goal Programming

The DM specifies a goal or target value for each of the criteria and tries to find the alternative that is closest to those targets according to a measure of distance of the form { Σ_{i=1}^{I} [ w_i (G_i − V_i(A)) ]^p }^{1/p}. Goal programming in its most general form uses both cardinal and ordinal preference information. The cardinal information required consists of target values for the attributes involved together with a set of weighting factors indicating the degree of importance the decision maker attaches to the attainment of these goals. The ordinal information takes the form of a ranking of the attributes obtained by assigning them to several priority classes. In [8], the method is assessed as "a popular amalgamation method that yields a ranking of alternatives. One literature review cites 280 applications of the approach (White, 1990). It has been proposed for use in energy and resource planning in India (Ramanathan and Ganesh, 1995); for electric resource bidding systems (Stanton, 1990); and for dispatching energy systems (Chang and Fu, 1998; Hota et al., 1999)." This method has a serious drawback: it is possible that the target levels will be chosen so that the selected option is in fact a dominated alternative. This is particularly likely if the goals are chosen in the absence of complete knowledge of the set of choices and their impact.

4.4.6 Lexicographic Method

This method involves screening alternatives criterion by criterion, starting with the most important one. The top subset of alternatives is identified according to the most important criterion, and the


alternatives in this subset are ranked according to the second most important criterion, and so on, until a single alternative is left or all criteria have been examined.

4.4.7 Interactive approach

Information concerning preferences can be elicited from the decision maker either non-interactively or interactively. A non-interactive technique requires that preferences be elicited only once, either before or after analyzing the system. All the multi-criteria methods mentioned above are non-interactive methods. An interactive approach, on the other hand, requires active, progressive interaction between the decision maker and the analyst throughout the solution process. An interactive approach is normally characterized by three basic steps:
Step 1: Solve the problem based on some initial set of parameters to obtain a feasible, preferably non-inferior, solution.
Step 2: Have the decision maker react to this solution.
Step 3: Use the decision maker's response to formulate a new set of parameters and form a new problem to be solved.
Steps 1-3 are repeated until the decision maker is satisfied with the current solution or no further action can be taken by the method. The STEM method, the method of Geoffrion, Dyer and Feinberg, and the evolving target

method belong to this category. Interactive methods specify feedback processes, which brings flexibility and adaptability to the real situation. They are search-oriented and learning-oriented. Some people prefer interactive methods to utility-based methods.

4.4.8 Method Summary

One of the major differences between decision-making methods is that they use different preference structures. The lexicographic method assumes the decision maker has an order


of importance of the criteria; in this method, a disadvantage on one criterion cannot be counterbalanced by an advantage on another, i.e., it is a non-compensatory method. In the ELECTRE IV method, two different degrees of preference are defined, and the numbers of the two kinds of outrankings are used to compare different alternatives. One of the advantages of this method is that it has a concept of incomparable alternatives. AHP, value- and utility-based approaches, and goal programming consider the relative importance of different criteria, which is very important information about the decision maker's preference. The ELECTRE series, Promethee, and evidential theory can only be applied to decision cases with a finite number of alternatives, while value- or utility-based approaches and goal programming can be applied to cases with infinitely many alternatives, i.e., continuous problems. In value-based and utility-based approaches, the attitude of the decision maker towards risk is considered; if the decision maker is risk-neutral, these methods can be viewed as weighting methods. Evidential theory can combine the decisions of different decision makers and can separate that which is unknown from that which is uncertain; it handles different criteria using Dempster's rule. Another difference between these methods is the technique used. MCDM methods are divided into holistic, heuristic and wholistic techniques [9]. Holistic methods evaluate the alternatives independently to choose the highest-rated one and differ from each other in terms of the scoring functions used; AHP, value- or utility-based methods, and goal programming belong to this kind of technique. A heuristic method is a process of sequential comparison of alternatives to determine the preferred choice; such methods include ELECTRE and other outranking methods, in which the most satisfying alternative is obtained by pair-wise preference ordering of a finite set of alternatives. Wholistic judgment is based on previous experience, including intuition, standard operating procedures, and reasoning by analogy; the model used is the naturalistic decision model. Up to now, very few wholistic methods have been developed. The methods mentioned above all belong to the holistic and heuristic categories.


4.5 Risk Based Decision Making and Control Strategy

4.5.1 Risk Based MCDM Design

According to the review of decision-making approaches in Section 4.3.6, SCOPF belongs to preference-based decision making. The preference for security is set as a threshold (no deterministic violation) and has been decided before the optimization, which results in one optimal solution. Therefore, SCOPF is not able to provide a trade-off analysis of security and cost, and only one solution is delivered to the decision maker. Even if this so-called optimal solution results in a high risk level, or a low risk level at high cost, the decision maker has to accept it since there are no other alternatives. In contrast, RBMO is an ideal multi-objective optimization approach: we do not need any preference at the beginning, but find the entire Pareto-optimal set and then apply high-level information to select the best solution. Compared with the MCDM methods introduced in Section 4.4, it is most similar to the interactive method. We divide risk-based MCDM into the following steps.
Step 1: Generate the Pareto-optimal set with respect to the objectives.
Step 2: Decision makers (operators) use their knowledge and experience to select a series of candidates.
Step 3: High-level information is applied to assist the decision maker in making a judgment.

We give two approaches to Pareto-optimal search in Chapters 5 and 6, respectively: one is a classical method, the other an evolutionary algorithm. In this section, we assume the whole Pareto-optimal set is available for evaluation, as shown in Figure 4-5 (left), along with the SCOPF result, solution D. Here, Figure 4-5 is a conceptual example for illustration. We present two main steps to identify the preferred solution.


4.5.1.1 Security Boundary Analysis Based on Risk Criterion

In Figure 4-5 (left), the whole Pareto-optimal set is divided into discrete segments based on the number of deterministic violations. For the Pareto-optimal solutions, near violations are also given. This information strengthens the risk understanding of decision makers, who are used to making judgments by watching violations. Furthermore, the security diagram can be shown to describe the distribution of violations with respect to event likelihood.


Figure 4-5: Security boundary analysis of risk based decision making.

I: Pareto-optimal curve (DV: deterministic violation, NV: near violation).

II: Catastrophic Expectation Index curve

The numbers of post-contingency deterministic violations (DV) and near violations (NV) correspond to the numbers of points in the red (DV) and yellow (NV) zones of the security diagram, respectively. Figure 4-5 illustrates the addition of this information to the Pareto front. Solution A has the lowest cost, but is seriously stressed with two deterministic


and three near violations, which cause high risk. Solution C has a good risk profile, with no deterministic violation and only two near violations; however, it sacrifices economic efficiency to obtain this risk benefit. It is interesting to observe from Figure 4-5 that solution D, which has no DV, has a risk level higher than B (a Pareto-optimal solution), which has a DV. Traditionally, we run SCOPF to find solution D, which has no deterministic violations; however, there are five near violations, so the risk of the SCOPF solution is still high. Solution B, compared with solution D, performs better on both risk and cost: even though it has one deterministic violation, its overall risk level is low. The above information – the number and extent of violations (visible in the security diagram) – provides a physical understanding of the risk value, which can be used as a reference to group solutions according to occurrence probability. An appropriate criterion could be: select the lowest-cost operating point on the Pareto front that has no more than 2 DVs and no more than 3 DVs plus NVs. Based on this criterion, only points to the left of B on the Pareto curve would be acceptable, and since point B has the lowest cost among them, it would be selected. At the same time, the grouping criterion should also take system size into consideration. For example, 3 DVs might be categorized as a reject risk level for a 30-branch system, but as an acceptable risk level for a 1000-branch system.
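One possible encoding of this screening rule is sketched below in Python: keep only Pareto points satisfying the DV and DV+NV limits, then take the cheapest survivor. The solution data are invented and only loosely follow Figure 4-5; in practice the counts would come from the security diagram.

# Hypothetical Pareto-optimal solutions; labels follow Figure 4-5, values are made up.
pareto = [
    {"name": "A", "cost": 100.0, "risk": 9.0, "dv": 2, "nv": 3},
    {"name": "B", "cost": 110.0, "risk": 4.0, "dv": 1, "nv": 2},
    {"name": "C", "cost": 130.0, "risk": 1.5, "dv": 0, "nv": 2},
]

def select(points, max_dv=2, max_dv_plus_nv=3):
    # Lowest-cost point with at most max_dv DVs and max_dv_plus_nv DVs plus NVs.
    ok = [p for p in points if p["dv"] <= max_dv and p["dv"] + p["nv"] <= max_dv_plus_nv]
    return min(ok, key=lambda p: p["cost"]) if ok else None

print(select(pareto))   # -> solution B under these made-up numbers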

4.5.1.2 High Level Information Evaluation

The risk assessment presented so far focuses on overload and low voltage. Although this assessment provides a good first-level indication of system stress, it is possible that one or more contingencies may result in what we refer to as a catastrophic outcome: cascading overload, voltage instability, transient instability, or oscillatory instability, even if the overall risk level is lower than that of SCOPF [90][91]. Such an outcome is unacceptable, and so a selected operating point should be screened for any of these problems that are of concern. We assume

we have a quantitative measure of the catastrophic consequence, which we refer to as the catastrophic expectation index (CEI). Computing this index for each Pareto-optimal solution results in curve II of Figure 4-5. An appropriate criterion could be: select the lowest-cost operating point on the Pareto front satisfying CEI < 1. A very simple, yet reasonable, CEI expression for voltage instability, transient instability, and oscillatory instability is:

    CEI = 0 if the system is stable, 1 if unstable                              (4.1)

In this research, the cascading severity (2-11) introduced in Chapter 2 is used as the catastrophic expectation index (CEI). As shown in curve II of Figure 4-5 (left), CEI is computed for each Pareto-optimal solution. We can see that the SCOPF solution would result in a high CEI; an example showing how this works is given in Chapter 6. Based on the deterministic understanding and the CEI evaluation of the Pareto-optimal set, we categorize all solutions into the following three groups: Acceptable Risk (green), Alert Risk (yellow) and Reject Risk (red), as shown in Figure 4-5 (left). We can then conclude that solution B is the best of the three (A, B, C). Compared with the SCOPF solution (D), it not only belongs to the acceptable risk group but also has a cost advantage.

4.5.2 Architecture of Risk Based Security Control

Since power system operation and control is a complicated process, a friendly and intelligent tool is preferred to avoid human error in the decision-making process. The risk-based decision-making (RBDM) process is designed with this in mind, as shown in Figure 4-6. The main features of RBDM are as follows:
1. Criteria (objectives) can be selected according to the purpose of the investigation. Under different situations, one may wish to find the trade-off relationship between cost and overload risk if low voltage risk is not a concern, or between overload risk and low voltage risk to find a low-risk optimal solution if both are concerns but cost is not an objective, or among all three objectives to find a comprehensive solution.


Figure 4-6: Architecture of risk based decision making process

2. High-level information or additional judgment can be inserted as different evaluation blocks, such as cascading analysis, voltage stability analysis or transient stability analysis. The decision maker can select the necessary blocks to evaluate the candidate points.
3. Trajectory (multiple-time-period) optimization is an option, e.g., for unit commitment, since control variables can be adjusted over a period of time and one may be interested in finding a long-term optimal solution.


4. A friendly graphic interface provides a strong user-computer communication tool, such as the security diagram we developed. This provides an interactive tool for the decision maker to understand the results generated by the algorithms.

4.6 Summary

Traditional decision making based on SCOPF is a process in which the security level is unknown, since a deterministic security criterion is predefined before optimization. RBSCD, by contrast, provides an interactive way for the decision maker to make a comprehensive choice between the conflicting objectives of risk and cost. First, the risk-cost trade-off curve is provided without any preference judgment. In the next step, deterministic understanding, high-level information and visualization tools are used to choose the final solution. Theoretically, RBSCD provides a better security-tracing and control methodology for the operator to maneuver the system into a state that is both secure and economic.


CHAPTER 5: CLASSIC METHODS IMPLEMENTATION FOR RISK BASED MULTI-OBJECTIVE OPTIMIZATION

5.1 Overviews of Classic Multi-objective Methods

Classical multi-objective (MO) optimization methods have been investigated for at least the past four decades. Cohon (1985) classified them into the following two types: 1. generating methods (GM); 2. preference-based methods (PM). In the generating methods, a few non-dominated solutions are generated for the decision maker, who then chooses a solution from the obtained non-dominated solutions; no a priori knowledge of the relative importance of each objective is used. On the other hand, in the preference-based methods, some known preference for each objective is used in the optimization process. GM methods are not further described here since they are, in theory, inferior to evolutionary methods. The classical methods consist of converting the multi-objective (MO) problem into an SO problem by either aggregating the objective functions or optimizing one objective and treating the others as constraints. The SO problem can then be solved using traditional scalar-

valued optimization techniques. These techniques are geared towards finding a single solution and are ideal for cases where preferential information about the objectives is known in advance. It is also possible, by modifying the aggregation parameters and solving the newly created SO problem, to approximate the non-dominated front. We outline a number of classical methods in order of increasing use of preference information, and give more details about the first two methods in this section.

1. Weight Method (WM).

2. ε-constraint Method (CM).


3. Weighted Metric Method.
4. Goal Programming Method.
5. Interactive Method.

5.1.1 Weight Method

5.1.1.1 Principles of Weight Method

We assume a set of nonnegative numbers, not all zero, w1, w2, …, wM is given, which are considered to be the relative importances of the objectives. The multi-objective programming problem is then reduced to the single-objective problem shown in (5-1):

    Minimize F(x) = Σ_{m=1}^{M} w_m f_m(x),
    subject to g_j(x) ≥ 0,  j = 1, 2, …, J;
               h_k(x) = 0,  k = 1, 2, …, K;                                     (5-1)
               x_i^(L) ≤ x_i ≤ x_i^(U),  i = 1, 2, …, n.

For a relatively simple problem, one may solve analytically for the set of efficient (non-inferior) solutions x* and the required set of weights by applying the necessary and sufficient conditions. The drawback of the analytical approach is that it is only practical for a rather limited number of problems. Very often, a numerical solution procedure is required to solve the weighting problem for a given w. When the objective functions and the constraint set have no special properties, it seems natural to proceed in an ad hoc manner. We can consider each

weight wi between 0 and 1 as having a reasonable number of discrete values. Then we solve

the single-objective problem numerically for each combination of values of w1, w2, …, wM. Two theorems regarding the weight method (WM) are given below:
1. The solution to the problem represented by expression (5-1) is Pareto-optimal if the weight is positive for all objectives.


2. If x* is a Pareto-optimal solution of a convex multi-objective optimization problem, then there exists a non-zero positive weight vector w such that x* is a solution to the problem given by equation (5-1).
The WM method is the simplest way to solve an MO problem, and the concept is intuitive and easy to use. For problems having a convex Pareto-optimal front, this method is guaranteed to find solutions on the entire Pareto-optimal set. However, it is clearly not practical to generate the entire set of efficient solutions in this manner. The main difficulties of the WM method are summarized below.
1. In most nonlinear MO problems, a uniformly distributed set of weight vectors need not produce a uniformly distributed set of Pareto-optimal solutions. Since the mapping is not usually known, it becomes difficult to set the weight vector to obtain a Pareto-optimal solution in a desired region of the objective space.
2. Different weight vectors could lead to the same Pareto-optimal solution. The information about which weight vectors correspond to a unique optimal solution of F is not obvious in nonlinear problems, thereby wasting search effort on one dominant solution.
3. Some Pareto-optimal regions will remain undiscovered if the problem is nonconvex. The following subsection gives the details.
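Before turning to the nonconvex difficulty, the Python sketch below illustrates the weight sweep on a toy convex problem with a single decision variable: each strictly positive weight vector is scalarized as in (5-1) and minimized by brute force over a grid, and the resulting points trace part of the Pareto front. The objectives f1 and f2 are stand-ins chosen only for illustration, not the RBMO objectives.

import numpy as np

def f1(x):  # stand-in for one objective (e.g. cost)
    return x ** 2

def f2(x):  # stand-in for a conflicting objective (e.g. risk)
    return (x - 1.0) ** 2

xs = np.linspace(0.0, 1.0, 1001)          # brute-force grid over the decision variable
for w1 in np.linspace(0.1, 0.9, 9):       # sweep strictly positive weights, w1 + w2 = 1
    w2 = 1.0 - w1
    composite = w1 * f1(xs) + w2 * f2(xs) # weighted-sum scalarization as in (5-1)
    x_star = xs[np.argmin(composite)]
    print(f"w1={w1:.1f}: x*={x_star:.3f}, f1={f1(x_star):.3f}, f2={f2(x_star):.3f}")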

5.1.1.2 Difficulties with nonconvex problems

Though very simple and easy to understand, the weighting method has a limitation. If all

w_m are strictly positive, the solutions we obtain are efficient solutions. But not all efficient solutions can be generated through this method; that is, some efficient solutions of the "vector maximum" problem are not solutions to the weighting problem no matter what values of the weights are chosen. The following example shows why some efficient solutions cannot be generated through this method under certain conditions.


To simplify the problem, suppose that we only want to minimize two objectives, f1 and f2. H is the space of the vectors (f1(x), f2(x)), and the corresponding regions are shown in Figure 5-1 and Figure 5-2.


Figure 5-1: Weight methods for a multi-objective decision problem with convex space of objectives

Figure 5-1 corresponds to a convex problem: all solutions on the curve A-C-B can be generated by the weighting method by choosing different weights. Figure 5-2 relates to a concave (nonconvex) problem: the complete set of efficient solutions are those x

corresponding to (f1 (x), f2(x)) on the curve of A-D-C-E-B. But only those efficient solutions on the curve of A-D or the curve of E-B can be generated through the weighting method. The

efficient solutions on the curve D-C-E cannot be generated. This is because for any w1 and

w2, there exists a point on A-D or E-B with a value of w1 f1(x) + w2 f2(x) less than that of any point on the curve D-C-E. It is pointed out that when the space of objectives is convex, all the efficient solutions can be generated by the weighting method using an appropriate w.



Figure 5-2: Weight methods for a multi-objective decision problem with concave space of objectives

5.1.2 Constraint Method

Assume that the objective fμ has the greatest preference and for m≠μ, there are given

numbers εm, which are considered upper bounds for the objectives fm, such that the decision maker does not accept any solution with a value larger than εm in at least one of the objectives

fm. Then the multi-objective programming problem can be reduced to the single objective minimization problem having the form:

    Minimize f_μ(x),
    subject to f_m(x) ≤ ε_m,  m = 1, 2, …, M and m ≠ μ;
               g_j(x) ≥ 0,  j = 1, 2, …, J;                                     (5-2)
               h_k(x) = 0,  k = 1, 2, …, K;
               x_i^(L) ≤ x_i ≤ x_i^(U),  i = 1, 2, …, n.

There are three principal variations of the ε-constraint method: the inequality constraint approach, the equality constraint approach, and the hybrid (weighting-constraint) approach. The hybrid approach is very efficient when we are merely interested in generating efficient solutions numerically. The ε-constraint method is best suited to integration of interactive

multi-objective decision-making, because it not only generates efficient solutions but also furnishes trade-off information at each efficient solution generated. The constraint method (CM) can be used for any arbitrary problem with either a convex or a nonconvex objective space, since different Pareto-optimal solutions can be found by using different εm values. However, the solution to the problem largely depends on the chosen ε vector. Therefore, if there is no experience with the objective values, it is hard to select ε, and a poor choice could lead to an infeasible problem or a solution that is not optimal. In particular, as the number of objectives increases, there are more elements in the ε vector, thereby requiring more information from the user.
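A corresponding ε-constraint sketch, on the same toy objectives as before, is shown below: f1 is minimized subject to f2 ≤ ε for a sweep of ε values, again by brute force over a grid so the example stays self-contained. As noted above, a poorly chosen ε simply yields an infeasible problem.

import numpy as np

def f1(x):
    return x ** 2

def f2(x):
    return (x - 1.0) ** 2

xs = np.linspace(0.0, 1.0, 1001)
for eps in np.linspace(0.0, 1.0, 6):
    feasible = xs[f2(xs) <= eps]                     # enforce f2(x) <= eps as in (5-2)
    if feasible.size == 0:
        print(f"eps={eps:.2f}: infeasible")
        continue
    x_star = feasible[np.argmin(f1(feasible))]
    print(f"eps={eps:.2f}: x*={x_star:.3f}, f1={f1(x_star):.3f}, f2={f2(x_star):.3f}")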

5.2 Risk Based Multi-objective Optimization Problem Implementation Using Linear Programming

5.2.1 Review of Linear Programming Problem, Algorithm and Application

5.2.1.1 Basic Principle of Linear Programming Problem

A linear programming (LP) problem is an optimization problem with the following properties:
1. We attempt to maximize (or minimize) a linear function of the decision variables.
2. The values of the decision variables must satisfy a set of constraints, each of which must be a linear equation or linear inequality.
There are two implications of the above definition:
1. Proportionality assumption: the contribution of each decision variable to the objective is proportional to the value of that variable. Similarly, the contribution of each variable to the left-hand side of each constraint is proportional to the value of the variable.


2. Additivity assumption: the contribution of any variable to the objective function is independent of the values of the other decision variables; therefore, the value of the objective function is the sum of the contributions from the individual variables. The constraints have the same character.
Linear programming problems can be expressed in canonical form:

Maximize cTx
subject to Ax ≤ b, x ≥ 0                                        (5-3)

where x represents the vector of variables, c and b are vectors of coefficients, and A is a matrix of coefficients. The expression to be maximized or minimized is called the objective function (cTx in this case). The equations Ax ≤ b are the constraints, which specify a convex polyhedron over which the objective function is to be optimized. Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound on the optimal value of the primal problem. The corresponding dual problem is:

Minimize bTy
subject to ATy ≥ c, y ≥ 0                                        (5-4)

Geometrically, the linear constraints define a convex polyhedron, which is called the feasible region. Since the objective function is also linear, and hence a convex function, all local optima are automatically global optima. The linearity of the objective function also implies that the set of optimal solutions is the convex hull of a finite set of points, usually a single point. There are two situations in which no optimal solution can be found. First, if the constraints contradict each other (for instance, x ≥ 2 and x ≤ 1), then the feasible region is empty and there can be no optimal solution, since there are no solutions at all; in this case, the LP is said to be infeasible. Alternatively, the polyhedron can be unbounded in the direction of the objective function (for example: maximize x1 + 3x2 subject to x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 10), in which case there is no optimal solution since solutions with arbitrarily high values of the objective function can be constructed. Barring these two pathological conditions (which are often ruled out by resource constraints integral to the problem being represented, as above), the optimum is always attained at a vertex of the polyhedron. However, the optimum is not necessarily unique: it is possible to have a set of optimal solutions covering an edge or face of the polyhedron, or even the entire polyhedron (this last situation would occur if the objective function were constant).
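As a minimal illustration of the canonical form (5-3), the following sketch solves a small assumed LP with scipy.optimize.linprog; because linprog minimizes by convention, the maximization is handled by negating the cost vector. The coefficients are arbitrary and serve only to demonstrate the mechanics.

```python
from scipy.optimize import linprog

# Assumed toy problem: maximize 3*x1 + 5*x2
# subject to x1 + 2*x2 <= 14, 3*x1 - x2 <= 0, x1 - x2 <= 2, x1, x2 >= 0.
c = [-3.0, -5.0]                      # negate because linprog minimizes c^T x
A_ub = [[1.0, 2.0],
        [3.0, -1.0],
        [1.0, -1.0]]
b_ub = [14.0, 0.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal x:", res.x)            # attained at a vertex of the feasible polyhedron
print("optimal objective:", -res.fun)
```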

5.2.1.2 Algorithms for Linear Programming Problems

The simplex method, developed by George Dantzig, solves LP problems by constructing an admissible solution at a vertex of the polyhedron and then walking along edges of the polyhedron to vertices with successively higher values of the objective function until the optimum is reached. Although this algorithm is quite efficient in practice and can be guaranteed to find the global optimum if certain precautions against cycling are taken, it has poor worst-case behavior: it is possible to construct a linear programming problem for which the simplex method takes a number of steps exponential in the problem size. In fact, for some time it was not known whether the linear programming problem was solvable in polynomial time (complexity class P). This long-standing issue was resolved by Leonid Khachiyan in 1979 with the introduction of the ellipsoid method, the first worst-case polynomial-time algorithm for linear programming. It is a specialization of the nonlinear optimization technique developed by Naum Shor, generalizing the ellipsoid method for convex optimization proposed by Arkadi Nemirovski, a 2003 John von Neumann Theory Prize winner, and D. Yudin. Khachiyan's algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm had little practical impact, as the simplex method is more efficient for all but specially constructed families of linear programs. However, it inspired new lines of research in linear programming with the development of interior point methods, which can be implemented as a practical tool. In contrast to the simplex algorithm, which finds the optimal solution by progressing along points on the boundary of a polyhedral set, interior point methods move through the interior of the feasible region. In 1984, N. Karmarkar proposed a new interior point projective method for linear programming. Karmarkar's algorithm not only improved on Khachiyan's theoretical worst-case polynomial bound, but also promised dramatic practical performance improvements over the simplex method. Since then, many interior point methods have been proposed and analyzed. Early successful implementations were based on affine scaling variants of the method. For both theoretical and practical reasons, barrier function or path-following methods are the most common nowadays. The current opinion is that the efficiency of good implementations of simplex-based methods and interior point methods is similar for routine applications of linear programming.

5.2.1.3 Linear Programming Application in Power System

LP solvers are in widespread use for optimization of various problems in the power industry. Alsac et al. describe further developments in LP-based OPF [30]: using coupled linear network models, applying the LP approach to give the same loss-minimization solutions as nonlinear programming, and incorporating contingency constraints for preventive scheduling. They showed that LP has a unique ability to recognize problem infeasibility. In [34], Lobato et al. applied LP-based OPF to transmission loss and generator reactive margin minimization; the value is mainly the capability to deal with inequality constraints. The LP approach also works quite well in the case of separable objective functions, such as the minimization of total generation cost. LP approaches are still attractive due

to their capability to consider integer variables using mixed linear-integer programming techniques. In addition, the representation of contingencies is much easier in terms of the sensitivities of the power flow equations. The method proposed there is an iterative process that linearizes both the objective function and the constraints in each iteration; the objective function is represented by a set of tangent cuts, and the discrete nature of control variables such as transformer taps, shunt reactors and capacitors is modeled by integer variables. Yan et al. present a new approach to security-constrained OPF analysis using an LP method in [92]. The new approach provides an integrated solution method based on linear programming with line flows and bus voltages as decision variables. The general procedure used in LP-based SCOPF solutions is to piecewise-linearize the cost function and to linearize the power flow equations around the operating point using the Jacobian matrix; the inequality constraints and the loss equation are again linearized around the operating point. Rahmouni presents a two-step optimization model in [93] for solving the optimal power flow problem in electric power systems. The result of the initial optimization model is used to start the final optimization, which improves accuracy. An interesting idea is that the objective function is constructed from the marginal cost described by line flows in the initial model, while the cost function is described by the voltage angles and magnitudes in the final model. Both models are solved using linear programming, which is easy to implement and provides a fast solution. When the generation cost functions are similar or identical, traditional LP methods exhibit a very slow convergence rate. In [33], Olofsson et al. provide an extended LP-based OPF method. In essence, the extension consists of the use of second-order sensitivities for the calculation of total system active power losses; moreover, the active power and voltage optimization problems are solved in parallel. In the following sections, we develop an LP-based multi-objective optimization model to minimize both cost and risk, including objective function linearization, equality constraint

linearization and inequality constraint linearization. To simplify the illustration, we use only active power (generation or load) as the control variable, and only overload risk is addressed.

5.2.2 Cost and Risk Objective Linearization

In the electricity market environment, we may not know the generation cost curve but only the bidding curve. Whatever the data source, we can assume the generation cost is represented by a multi-segment piecewise-linear cost function, as shown in Figure 5-3(a). The generator price offer segment slopes are computed as follows:

sGij = [f(PGij+) − f(PGij−)] / (PGij+ − PGij−)        (5-5)

where PGij+ and PGij− are the values of PGi at the ends of the j-th offer curve segment. Figure 5-3(b) describes the load reduction model, where the load is divided into two parts: PLi1 = −PL0 (original load MW) and 0 ≤ PLi2 ≤ PL0. Thus, PLi = −(PLi1 + PLi2). The slope of the linear curve may be interpreted as the price of load shedding, typically set to a relatively high value since load-serving entities (LSEs) typically want to avoid load shedding (the load is inelastic). This load reduction model is equivalent to a linear incremental bid [14]. Other bid mechanisms may be implemented as needed.

Figure 5-3: Generator offer curve and load reduction curve
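As a small sketch of (5-5), the function below builds three equal-width offer segments from a quadratic cost curve f(P) = aP² + bP + c. The three-segment split and the example coefficients (chosen to resemble generator 1 of TABLE 5-2) are illustrative assumptions.

```python
def segment_slopes(a, b, c, p_min, p_max, n_seg=3):
    """Slopes s_Gij of a piecewise-linear offer built from f(P) = a*P**2 + b*P + c."""
    cost = lambda p: a * p * p + b * p + c
    width = (p_max - p_min) / n_seg
    slopes = []
    for j in range(n_seg):
        p_lo = p_min + j * width          # P_Gij-: start of segment j
        p_hi = p_lo + width               # P_Gij+: end of segment j
        slopes.append((cost(p_hi) - cost(p_lo)) / (p_hi - p_lo))   # eq. (5-5)
    return slopes

# Example with coefficients similar to generator 1 of TABLE 5-2 (illustrative):
print(segment_slopes(a=0.00533, b=11.669, c=213.1, p_min=50, p_max=200))
```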


The cost objective function is expressed as the following linear equation:

F1 = Σi=1..m ( f(PGi,min) + Σj=1..3 sGij PGij ) + Σi=1..k sLi PLi2        (5-6)

where m is the number of generator buses and k is the number of load buses. Corresponding to the severity function of Figure 2-6, the overload risk of each contingency scenario is expressed as below:

Sevj = 10(−Plj − 0.9Plmax)/(0.9Plmax),   Plj < −0.9Plmax
Sevj = 0,                                −0.9Plmax ≤ Plj ≤ 0.9Plmax        (5-7)
Sevj = 10(Plj − 0.9Plmax)/(0.9Plmax),    Plj > 0.9Plmax

Based on (5-7), we divide the line flow into three segments in the linear model:

Pl = Pl1 + Pl2 + Pl3        (5-8)

where Pl1 ≤ −0.9Plmax, −0.9Plmax ≤ Pl2 ≤ 0.9Plmax, and 0.9Plmax ≤ Pl3. The severity function provides that only Pl1 and Pl3 contribute to the risk. The risk objective function is expressed as the following linear equation:

F2 = Risk = Σi=1..N+1 Risk(Ei) = Σi=1..N+1 ( Pr(Ei) Σj=1..N, j≠i Sevj )        (5-9)

where Pr(Ei) is the contingency probability and N is the total number of branches (N+1 is the number of system states, which include the normal state and all N-1 states).
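The following is a minimal sketch of the severity measure behind (5-7), written as a plain function of branch flow and rating; the 90% threshold follows the text, while the exact normalization of the reconstructed formula should be checked against the severity function of Figure 2-6.

```python
def overload_severity(p_flow, p_max, threshold=0.9):
    """Severity of a branch flow: zero below the threshold loading, linear above it."""
    margin = threshold * p_max
    if abs(p_flow) <= margin:
        return 0.0
    return 10.0 * (abs(p_flow) - margin) / margin

# Example: a 40 MW line carrying 38 MW (95% loading) versus 30 MW (75% loading).
print(overload_severity(38.0, 40.0))   # positive severity
print(overload_severity(30.0, 40.0))   # 0.0
```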

5.2.3 Power Flow Modeling Using Shift Factor

MW flows for branches l (1~n) are computed from initial flows Pl0 obtained from a solved AC power flow together with generation shifts according to:

Pl = Pl0 + Σi=1..m αli ΔPGi + Σj=1..k αlj ΔPLj        (5-10)

where Pl is the line flow, and αli and αlj are the generation shift factors corresponding to generators and loads, respectively. Power balance is enforced by:


Σi=1..m PGi = Σj=1..k PLj + Ploss        (5-11)

where Ploss is calculated from the AC power flow before optimization. As introduced in Section 5.2.1.3, the loss can also be linearized with respect to the control variables; we do not do so in this work, so the loss is approximated as constant.

5.2.4 Security Constraint Linearization Using Distribution Factor

Security considerations require line flow information under all N-1 contingency states. To avoid solving a power flow for each contingency, we directly use line outage distribution factors, which are computed offline, to calculate the post-contingency flows. The MW flow of branch l in contingency state i is:

Pli = Pl + dli ΔPi        (5-12)

where i (1~n) indicates the outaged line and dli is the line outage distribution factor. Factors

αli and dli are prepared offline, so that fast running speed is achieved. Incorporating (5-10) and (5-12) into (5-9), the risk objective is linearly related to the control variables.
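The sketch below shows how the pre-computed sensitivities are combined: shift factors map control adjustments to base-case flow changes as in (5-10), and line outage distribution factors give post-contingency flows as in (5-12), where the change term is taken here as the flow carried by the outaged branch (the usual LODF convention). All numerical values are made-up placeholders.

```python
import numpy as np

def base_case_flows(p_flow0, alpha, delta_injections):
    """Eq. (5-10): flows after control changes, from initial flows and shift factors."""
    return p_flow0 + alpha @ delta_injections

def post_contingency_flow(p_flows, lodf, monitored, outaged):
    """Eq. (5-12): flow on the monitored branch after outage of another branch."""
    return p_flows[monitored] + lodf[monitored, outaged] * p_flows[outaged]

# Placeholder data: 3 branches, 2 controls (assumed values for illustration only).
p_flow0 = np.array([30.0, 45.0, 20.0])            # MW flows from a solved AC power flow
alpha   = np.array([[0.4, -0.1],
                    [0.3,  0.5],
                    [-0.2, 0.6]])                  # shift factors (branch x control)
lodf    = np.array([[0.0, 0.6, 0.2],
                    [0.5, 0.0, 0.3],
                    [0.1, 0.4, 0.0]])              # line outage distribution factors

flows = base_case_flows(p_flow0, alpha, np.array([10.0, -10.0]))
print(post_contingency_flow(flows, lodf, monitored=0, outaged=1))
```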

5.2.5 Linear Programming Method Implementation

In this section, we give a detailed formulation of the RBMO model corresponding to the Weight Method (WM) and the Constraint Method (CM), respectively. The LP_RBMO model using WM is implemented as follows.
Objective function:

min VTX        (5-13)

V = [sG1 ... sGm1  sL1 ... sLk1  0 ... 0  β]
X = [pG1 ... pGm1  pL1 ... pLk1  pl1 ... pln1  Risk]

where the entries are grouped into m1 generation segments, k1 load segments, n1 line-flow segments (whose cost coefficients in V are zero), and the risk term. Here m1, k1 and n1 are the numbers of segments of generation, load and line flow, respectively, and β is the weight factor of risk relative to cost. For the CM approach, we only minimize cost, so β is set to zero.


Equality constraint:

⎡ I      0      0    ⋯   0     0  ⎤        ⎡ Ploss                           ⎤
⎢ αli    A0     0    ⋯   0     0  ⎥        ⎢ Σi=1..m1+k1 αli Pi0 − Pl0       ⎥
⎢ 0      dl1A0  A0   ⋯   0     0  ⎥  X  =  ⎢ 0                               ⎥        (5-14)
⎢ ⋮      ⋮            ⋱         ⋮  ⎥        ⎢ ⋮                               ⎥
⎢ 0      dlnA0  0    ⋯   A0    0  ⎥        ⎢ 0                               ⎥
⎣ 0      Ar0    Ar1  ⋯   Arn   −1 ⎦        ⎣ 0                               ⎦

where A0 is the n×3n matrix with −1 on the diagonal, and Ari is the 1×3n vector of risk coefficients, which is the product of the contingency probability and the severity coefficients calculated by (5-9) under the pre- and post-contingency states.
Inequality constraints:

X+ = [pG1+ ... pGm1+  pL1+ ... pLk1+  pl1+ ... pln1+  Risk+]
X− = [pG1− ... pGm1−  pL1− ... pLk1−  pl1− ... pln1−  0]        (5-15)

with X− ≤ X ≤ X+, where Risk+ is the upper limit of risk. In the WM approach it can be set to infinity, since risk is part of the objective function; in the CM approach it should be set to a known parameter. The lower limit of risk is set to zero, since risk is non-negative.

5.3 Case Study – Six Bus Test System

In this section, we use a six-bus test system [19] to compare the results of different control problems: (1) security unconstrained OPF, (2) SCOPF and (3) RBMO. The detailed model formulations are given in Chapter 4. The WM and CM approaches implemented in Section 5.2 are each used to solve the RBMO problem. The system configuration is shown in Figure 5-4. Network data and generation cost curves of the six-bus system are summarized in TABLE 5-1 and TABLE 5-2.


Figure 5-4: Six-Bus Test System

TABLE 5-1: Branch data for 6-bus system (Base: 100MVA)

Branch From bus To bus R (pu) X(pu) BCAP(pu) Rating(MW)

1 1 2 0.10 0.20 0.02 40

2 1 4 0.05 0.20 0.02 60

3 1 5 0.08 0.30 0.03 60

4 2 3 0.05 0.25 0.03 60

5 2 4 0.05 0.10 0.01 75

6 2 5 0.10 0.30 0.02 40

7 2 6 0.07 0.20 0.025 60

8 3 5 0.12 0.26 0.025 40

9 3 6 0.02 0.10 0.01 70

10 4 5 0.20 0.40 0.04 40

11 5 6 0.10 0.30 0.03 40


TABLE 5-2: Generator Cost Curve

Gen a ($/MW2) b ($/MW) c ($) Pmin (MW) Pmax (MW)

1 0.00533 11.669 213.1 50 200

2 0.00889 10.333 200.0 37.5 150

3 0.00741 10.833 240.0 45 180

Without loss of generality, we give the N-1 contingency probability data, expressed as an occurrence expectation per year by multiplying by a weight of 8760 h, as below:
Pr = [1.75 2.63 2.63 2.63 3.5 1.75 2.63 1.75 3.5 1.75 1.75]

5.3.1 Security Unconstrained OPF

The model of Problem 1 is applied to obtain a security unconstrained OPF, with the solution indicated in TABLE 5-3. The total generation cost is $2962.4/hr and the risk is 49.05 violations per year. Contingency analysis using a power flow reveals five deterministic violations:
1. Post-contingency overload of line 2 for outage of line 1.
2. Post-contingency overload of line 1 for outage of line 2.
3. Post-contingency overload of line 1 for outage of line 3.
4. Post-contingency overload of line 2 for outage of line 3.
5. Post-contingency overload of line 2 for outage of line 5.
TABLE 5-3: Solution to Problem 1 (Security Unconstrained OPF)

G1(MW) G2(MW) G3(MW) Gen cost ($) Overload risk

125 48.37 45 2962.4 49.05

5.3.2 Security Constrained OPF

The model of Problem 2 is applied to obtain a security constrained OPF (SCOPF), with solution indicated in TABLE 5-4.


TABLE 5-4: Solution to Problem 2 (Security Constrained OPF)

G1(MW) G2(MW) G3(MW) Gen cost ($) Overload risk

82 65.63 70.76 2986.8 6.79

Figure 5-5 indicates that the SCOPF has removed all deterministic violations. As a result, the risk for this condition is 6.8 violations per year, and the operating cost is $2986.8/hr. Although this condition is secure, it is still relatively stressed, since some high-probability circles are located close to the red zone.

Figure 5-5: Security diagram for the solution to Problem 2

5.3.3 Risk Based Multi-objective Optimization Problem

5.3.3.1 WM Approach

In WM, risk is incorporated as one part of the objective function. We adjust the weight β to obtain a series of optimization points, as shown in TABLE 5-5. The weight β reflects the preference between reliability level and economic cost: a higher β means the decision maker desires more system reliability per unit of cost. TABLE 5-5 illustrates a weakness of the WM method: very small changes in β (e.g., from 6120 to 6130) can result in very large changes in risk. Some risk values, for example between


21.9 and 7.796, cannot be obtained, since the WM method is not able to generate solutions on concave or linear portions of the objective space. Also, a change in the weight factor β may not always result in a different risk level and control scheme: for example, as β changes from 6130 to 9300, the risk level stays at 7.796, and so does the generation output. Therefore, the search efficiency of the WM approach for new solutions is not high.
TABLE 5-5: Results with RBMO using WM approach

β ($ per overload risk)   Overload risk   Gen1 (MW)   Gen2 (MW)   Gen3 (MW)   Gen cost ($)

0 49.056 125 48.37 45 2962.4

1700 47.304 124.21 49.15 45 2962.7

2400 44.676 122.45 50.92 45 2963.4

2700 30.66 112.2 61.17 45 2967.5

3000 25.404 107.74 65.63 45 2969.2

4500 22.776 105.52 65.63 47.22 2970.6

5100 21.9 104.3 65.63 48.45 2971.3

6120 21.9 104.3 65.63 48.45 2971.3

6130 7.796 87.89 65.63 64.86 2981.1

9300 7.796 87.89 65.63 64.86 2981.1

9500 7.446 87.5 66.07 64.8 2981.4

13700 7.096 87.07 66.55 64.74 2982

24370 7.096 87.07 66.55 64.74 2982

24380 2.19 76.92 78.17 63.28 2995.6

346650 2.19 76.92 78.17 63.28 2995.6

346700 2.0148 69.79 86.33 62.25 3005.2


Theoretically, to search the entire Pareto-optimal curve, β should be selected from 0 up to a large number. As a result, the WM approach can be very time consuming. In TABLE 5-5, β = 346700 corresponds to the best reliability performance point of Problem 3.

5.3.3.2 CM Approach

The CM approach to Problem 3 is used to search for the complete Pareto-optimal set; we solve the RBMO for a series of risk values. TABLE 5-6 lists the optimization results corresponding to different risk levels.
TABLE 5-6: Results with RBMO using CM approach

Overload risk G1 (MW) G2 (MW) G3 (MW) Gen cost ($)

49.056 125 48.37 45 2962.4

39.42 118.51 54.86 45 2965

26.28 108.27 65.09 45 2969

13.14 94.14 65.63 58.6 2977.3

7.796 87.87 65.65 64.85 2981.1

6.785 86.41 67.32 64.64 2982.9

6.272 85.34 68.54 64.49 2984.3

5.799 84.36 69.66 64.35 2985.6

5.326 83.38 70.78 64.21 2987

4.853 82.40 71.90 64.07 2988.3

4.38 81.42 73.03 63.92 2989.6

3.907 80.44 74.15 63.78 2990.9

3.434 79.46 75.27 63.64 2992.2

2.961 78.47 76.39 63.50 2993.5

2.488 77.49 77.52 63.36 2994.8

2.015 69.79 86.33 62.25 3005.2


5.3.4 Decision Making Using Objective Trade-off Curve

Figure 5-6 plots part of the Pareto-optimal results of the RBMO model, corresponding to risk levels less than 6.79. The operating point from SCOPF is also shown in this figure, indicating the ability of RBMO to achieve operating conditions with both lower risk and lower cost compared with SCOPF. For example, the dark-circle point on the curve has a risk of 6.2 violations per year and an operating cost of $2984/hr.

Figure 5-6: Pareto-optimal curve for RBMO model

Figure 5-7 provides the security diagram corresponding to the dark-circle point in Figure 5-6, from which it can be observed that lower cost and lower risk are achieved by decreasing the severity of high-probability violations (2, 8) while increasing the severity of low-probability violations (1, 5). Although one violation (1) exceeds its deterministic limit, the overall system risk is lower. Of course, Figure 5-6 also provides other, less risky, higher-cost operating conditions, making it an effective decision-support tool that allows decision-makers to make optimal tradeoffs between security level and economy.


Figure 5-7: Security diagram for the solution to RBMO

Risk = 6.2 violations/year

5.4 Summary

An RBMO problem for operational decision-making is formulated with the LP method and solved by two typical classical multi-objective optimization methods. Comparing SCOPF results with RBMO results, we see that RBMO not only outperforms SCOPF by finding operating points lower in both risk and cost, but also provides an effective decision aid that enables efficient security-economy tradeoff analysis. The CM approach is superior to WM since it does not depend on the convexity of the problem. However, both approaches require knowledge and experience to set the relevant parameters of the optimization problem, such as risk constraints and weight factors.


CHAPTER 6: EVOLUTIONARY ALGORITHMS IMPLEMENTATION FOR RISK BASED MULTI-OBJECTIVE OPTIMIZATION

6.1 Introduction

In Chapter 4, two fundamentally different approaches to multi-objective (MO) optimization are introduced: preference-based and ideal MO optimization. In the preference-based approach, a relative preference vector needs to be supplied without any knowledge of the possible consequences. Classical MO optimization methods, which convert multiple objectives into a single objective by using a relative preference vector of objectives, work according to this preference-based strategy. Unless a reliable and accurate preference vector is available, the optimal solution obtained by such methods is highly subjective to the particular user. In the ideal approach, by contrast, problem information is used to choose one solution from the obtained set of trade-off solutions using higher-level information. In this sense, the ideal approach is more methodical, more practical, and less subjective. The evolutionary methods are developed according to this principle.
In Chapter 5, we observed a number of difficulties with some of the most popular classical multi-objective optimization algorithms, particularly if the user is interested in finding multiple Pareto-optimal solutions:
1. Only one Pareto-optimal solution can be expected to be found in one simulation run of a classical algorithm;
2. Not all Pareto-optimal solutions can be found by some algorithms in nonconvex MO problems;
3. All algorithms require some problem knowledge, such as suitable weights, ε values or target values.


Furthermore, we can also classify classical optimization methods into two distinct groups: direct methods and gradient-based methods. In direct search methods, only the objective function and the constraint values are used to guide the search strategy, whereas gradient-based methods use the first- and second-order derivatives of the objective function and constraints to guide the search process. Since derivative information is not used, direct search methods are usually slow, requiring many function evaluations for convergence. For the same reason, they can be applied to many problems without a major change in the algorithm. On the other hand, gradient-based methods quickly converge near an optimal solution, but are not efficient for non-differentiable and discontinuous problems. In addition, there are some common difficulties with most classical direct and gradient-based techniques:
1. The convergence to an optimal solution depends on the chosen initial solution;
2. Most algorithms tend to get stuck at a suboptimal solution;
3. An algorithm efficient in solving one optimization problem may not be efficient in solving a different optimization problem;
4. Algorithms are not efficient in handling problems having a discrete search space;
5. Algorithms cannot be used efficiently on a parallel machine.
As mentioned above, the classical way to solve MO optimization problems is to convert the task of finding multiple trade-off solutions of an MO optimization into one of finding a single solution of a transformed SO optimization problem. However, the field of search and optimization has been changed over the last few years by the introduction of a number of non-classical, unorthodox and stochastic search and optimization algorithms. Of these, the evolutionary algorithm mimics nature's evolutionary principles to drive its search towards an optimal solution. Reference [88] reviews the history of evolutionary algorithm methods and presents a number of algorithms for multi-objective optimization. Evolutionary computing emulates the

biological evolution process. A population of individuals representing different solutions evolves to find the optimal solutions. The fittest individuals are chosen and mutation and crossover operations are applied, thus yielding a new generation (offspring). These methods include genetic algorithms (GA), evolutionary algorithms (EA) and evolutionary strategies (ES), which differ only in the way the fitness selection, mutation and crossover operations are performed. Evolutionary techniques have been successfully applied to all sorts of SO optimization problems, especially those where the objective function is not well-behaved (not differentiable, discontinuous, and/or without an analytical formulation). One of the most striking differences from classical search and optimization algorithms is that EAs use a population of solutions in each iteration, instead of a single solution. Since a population of solutions is processed in each iteration, the outcome of an EA is also a population of solutions. If an optimization problem has a single optimum, all EA population members can be expected to converge to that optimum solution. However, if an optimization problem has multiple optimal solutions, an EA can be used to capture multiple optimal solutions in its final population. This ability of an EA to find multiple optimal solutions in one single simulation run makes EAs unique in solving MO optimization problems. Since step 1 of the ideal strategy for MO optimization requires multiple trade-off solutions to be found, an EA population approach can be suitably utilized to find a number of solutions in a single simulation run. The following factors make the EA method superior to classical methods in power system applications:
1. In power system optimization problems, since nonlinearities and complex interactions among problem variables (P, Q, V, θ) exist, the search space usually contains more than one optimal solution, most of which are undesired locally optimal solutions having inferior objective function values. While solving these problems, when classical methods get attracted to any of these locally optimal solutions, there


is no escape. EAs, on the other hand, can find the global optimal solution (if it exists) or multiple optimal solutions.
2. At the same time, some decision variables, such as transformer taps and capacitor switching, are restricted to take only discrete values. A usual practice of classical methods for tackling such problems is to assume that all variables are continuous during the optimization process; thereafter, an available size close to the obtained solution is recommended. However, there are major difficulties with this approach. First, since infeasible values of a decision variable are allowed in the optimization process, the optimization algorithm spends an enormous amount of time computing infeasible solutions, which makes the search effort inefficient. Secondly, as a post-optimization calculation, the nearest lower and upper available sizes need to be checked for each infeasible discrete variable; for n such discrete variables, a large number of additional solutions need to be evaluated. Thirdly, checking two options for each variable does not guarantee formation of the optimal combination with respect to the other variables. These difficulties are eliminated in EAs, since only feasible values of the control variables are allowed during the optimization process.
3. As discussed in Chapter 4, one of the two tasks in ideal multi-objective optimization is to find widely spread solutions in the obtained non-dominated front. Finding and maintaining multiple solutions in one single simulation run is a unique feature of evolutionary optimization techniques.
4. Parallel computing systems can speed up the solution process. Since most classical methods use a point-by-point approach, where one solution is updated to a new solution in each iteration, the advantage of parallel systems cannot be fully exploited. EAs can easily be implemented on parallel computers, since the crossover and mutation operations can be handled individually for each offspring.


Evolutionary computing techniques are being applied to a variety of power system problems. Paper [39] presented the main applications of EAs in power systems before 1996, which included planning and scheduling, operation optimization, and alarm processing and fault diagnosis. Evolutionary programming applied to the OPF problem is introduced in [40][41][42]. Multi-objective evolutionary approaches for load control strategies, environmental/economic power dispatch and reactive power planning are separately illustrated in [44][45][46]. Paper [89] presents the fundamental knowledge on solving multi-objective optimization problems for power systems. In this chapter, the working principles of EAs are presented in Section 6.2. An overview of EAs is given in Section 6.3. Section 6.4 implements the Non-dominated Sorting Genetic Algorithm, and a corresponding example is provided in Section 6.5. Section 6.6 concludes.

6.2 Working Principles of Evolutionary Algorithms

In this section, we describe the working principles of binary-coded EAs, which deal with discrete search spaces, and of real-parameter EAs, which are ideally suited to problems with continuous search spaces. For each type of algorithm, the basic concepts of selection, crossover, and mutation are introduced. Constraints can be handled much better in EAs than in classical search and optimization algorithms; some approaches for handling constraints in EAs are also presented.

6.2.1 Binary Algorithms

The working principles of EAs are very different from those of most classical optimization techniques. The following nonlinear optimization problem, with a constraint and an objective, is used to illustrate how EAs work. The objective of the design is to minimize the cost of the material of a cylindrical can. The parameter c is the cost of the can material per square centimeter.


Minimize f(d, h) = c (πd²/2 + πdh),
subject to g(d, h) = πd²h/4 ≥ 300,        (6-1)
0 ≤ d ≤ 31, 0 ≤ h ≤ 31.

Solution representation

In order to use EAs to find the optimal decision variables d and h, which satisfy the constraint g and minimize f, we can represent them as binary strings. Each of the two decision variables is coded using 5 bits, with the decision variable bounds considered, thereby making the overall string length equal to 10. As shown in (6-2), the string represents a can of diameter 8 cm and height 10 cm.

01000 01010        (6-2)
  d      h

(6-2) shows that only integer values in the range [0, 31] are considered for the variables. Actually, using the following expression, EAs can be made to use any other integer or non-integer values simply by changing the string length and the lower and upper bounds:

xi = xi(min) + [(xi(max) − xi(min)) / (2^li − 1)] · DV(si)        (6-3)

where li is the string length used to code the i-th variable and DV(si) is the decoded value of the string si.

It is important to know that binary EAs work with strings representing the decision variables, instead of decision variables themselves.

Assign fitness to a solution

Once a string (or solution) is created by the EA operators, it is necessary to evaluate the solution, particularly in the context of the underlying objective and constraint functions. A feasible solution is one for which all constraints are satisfied. The fitness is then made equal to the

objective function value. For example, the fitness of the solution (6-2) is (assuming c = 0.065):

F(s) = 0.065 [π(8)²/2 + π(8)(10)] = 23.

Since the objective of the optimization here is to minimize the objective function, it is to be noted that a solution with a smaller fitness value compared to another solution is better.
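A small sketch tying together the string decoding of (6-3) and the fitness evaluation of the can design (6-1); the 5-bit coding per variable and c = 0.065 follow the text above, and the function names are illustrative.

```python
import math

def decode(bits, x_min, x_max):
    """Eq. (6-3): map a binary substring to a real value in [x_min, x_max]."""
    dv = int(bits, 2)                          # decoded integer value DV(s_i)
    return x_min + (x_max - x_min) / (2 ** len(bits) - 1) * dv

def can_fitness(string, c=0.065):
    """Fitness (material cost) of a 10-bit can design: first 5 bits d, last 5 bits h."""
    d = decode(string[:5], 0, 31)
    h = decode(string[5:], 0, 31)
    return c * (math.pi * d ** 2 / 2 + math.pi * d * h)

# The string of (6-2): d = 8 cm, h = 10 cm, fitness about 23.
print(can_fitness("0100001010"))
```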

Reproduction or selection operator

The primary objective of the reproduction operator is to make duplicates of good solutions and eliminate bad solutions in a population, while keeping the population size constant. This is achieved by performing the following tasks:

1. Identify good (usually above-average) solutions in a population.

2. Make multiple copies of good solutions.

3. Eliminate bad solutions from the population so that multiple copies of good solutions can be placed in the population.

There exists a number of ways to achieve the above tasks. Some common methods are tournament selection, proportionate selection and ranking selection.

Crossover Operator

The reproduction operator cannot create any new solutions in the population; it only makes more copies of good solutions at the expense of not-so-good ones. The creation of new solutions is performed by the crossover and mutation operators. Like the reproduction operator, there exist a number of crossover operators, but in almost all of them two strings are picked from the mating pool at random and some portions of the strings are exchanged between them to create two new strings. In a single-point crossover operator, this is performed by randomly choosing a crossing site along the string and by exchanging all bits on the right side of the crossing site.


Figure 6-1 shows the genotypes (strings) of two solutions (parent solutions) from the new population created by the reproduction operator. The third site along the string length is chosen at random, and the contents to the right of this cross site are exchanged between the two strings. The process creates two new strings (offspring solutions). The fitness of each solution is also shown in the figure. It is important to note that the above crossover operation created a solution (having a cost of 22) that is better in cost than both of the parent solutions.

Figure 6-1: The single-point crossover operator

In order to preserve some good strings selected by the reproduction operator, not all strings in the population are used in crossover. If a crossover probability of pc is used, then 100pc% of the strings in the population are used in the crossover operation and 100(1−pc)% of the population are simply copied to the new population. The concept of exchanging partial information between two strings can also be applied with more than one cross site.
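A minimal sketch of the single-point crossover and the crossover-probability mechanics described above; the example strings and pc value are illustrative.

```python
import random

def single_point_crossover(parent1, parent2):
    """Exchange all bits to the right of a randomly chosen crossing site."""
    site = random.randint(1, len(parent1) - 1)
    child1 = parent1[:site] + parent2[site:]
    child2 = parent2[:site] + parent1[site:]
    return child1, child2

def crossover_population(pairs, pc=0.8):
    """Apply crossover to a fraction pc of parent pairs; copy the rest unchanged."""
    offspring = []
    for p1, p2 in pairs:
        offspring.extend(single_point_crossover(p1, p2) if random.random() < pc else (p1, p2))
    return offspring

random.seed(0)
print(crossover_population([("0100001010", "0110010001")]))
```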

Mutation Operator

Like the crossover operator, the mutation operator is used for the search aspect of EAs. The bitwise mutation operator changes a 1 to a 0, and vice versa, with a mutation probability pm. The need for mutation is to maintain diversity in the population. Figure 6-2 shows how a string, obtained after the use of the reproduction and crossover operators, has been mutated to another string, representing a slightly different solution. Here, the solution (offspring) obtained in the example is better than the original solution (parent).

Figure 6-2: The bit-wise mutation operator


The three operators – reproduction (selection), crossover, and mutation – are simple and straightforward. The reproduction operator selects good strings, the crossover operator recombines good substrings from two good strings to hopefully form a better string, and the mutation operator alters a string locally to hopefully create a better string. Together they constitute the search and optimization principles of EAs.

6.2.2 Real Parameter Algorithms

When binary-coded EAs are used to handle problems having a continuous search space, a number of difficulties arise. One difficulty is that a binary coding causes an artificial hindrance to a gradual search in the continuous search space. The other difficulty is the inability to achieve arbitrary precision in the optimal solution: the higher the required precision, the longer the string length. For long strings, the required population size is also large, thereby increasing the computational complexity of the algorithm. There exist a number of real-parameter EA implementations, where crossover and mutation operators are applied directly to real parameter values. Since real parameters are used directly (without any string coding), solving real-parameter optimization problems is a step easier compared to binary-coded GAs. Unlike in binary-coded EAs, the decision variables can be used directly to compute the fitness values. Since the selection operator works with the fitness value, any selection operator used with binary-coded EAs can also be used in real-parameter EAs. Here, we describe real-parameter crossover operators and mutation operators.

Crossover Operator

Reference [88] lists a number of real-parameter crossover operators, including linear crossover, naïve crossover, blend crossover, simulated binary crossover and so on. Here, we introduce only blend crossover (BLX-α), which is the operator implemented in this research.


For two parent solutions xi(1,t) and xi(2,t) (assuming xi(1,t) < xi(2,t)), the BLX-α operator randomly picks a solution in the range [xi(1,t) − α(xi(2,t) − xi(1,t)), xi(2,t) + α(xi(2,t) − xi(1,t))]. This crossover operator is illustrated in Figure 6-3.

Figure 6-3: The BLX-α operator.

(Parents are marked by filled circles)

Thus, if ui is a random number between 0 and 1, the following is an offspring:

xi(1,t+1) = (1 − γi) xi(1,t) + γi xi(2,t)        (6-4)

where γi = (1 + 2α)ui − α.

If α is zero, this crossover creates a random solution in the range [xi(1,t), xi(2,t)]. In a number of test problems, investigators have reported that BLX-0.5 (α = 0.5) performs better than

BLX operators with any other α value. It is important to note that the factor γi is uniformly distributed for a fixed value of α, which gives BLX-α an interesting property: the location of the offspring depends on the difference between the parent solutions. This becomes clear if we rewrite equation (6-4) as follows:

xi(1,t+1) − xi(1,t) = γi (xi(2,t) − xi(1,t))        (6-5)

If the difference between the parent solutions is small, the difference between the offspring and parent solutions is also small. This property of a search operator allows us to construct an adaptive search: if the diversity in the parent population is large, an offspring population with a large diversity is expected, and vice versa. Thus, such an operator allows searching of the entire space early on (when a random population over the entire search space is initialized) and also allows a focused search when the population tends to converge in some region of the search space.
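A sketch of the BLX-α recombination of (6-4); the parent vectors (interpreted here as generator dispatches in MW) and α = 0.5 are illustrative assumptions.

```python
import random

def blx_alpha(x1, x2, alpha=0.5):
    """Blend crossover: offspring lies in the parents' range expanded by alpha on each side."""
    offspring = []
    for p1, p2 in zip(x1, x2):
        u = random.random()
        gamma = (1.0 + 2.0 * alpha) * u - alpha      # eq. (6-4)
        offspring.append((1.0 - gamma) * p1 + gamma * p2)
    return offspring

random.seed(1)
print(blx_alpha([100.0, 60.0], [120.0, 70.0]))       # e.g. two generator dispatches (MW)
```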

Mutation Operator

If the diversity among the participating parents is large, the crossover can create offspring that are also diverse with respect to each other. For example, with the BLX operator it is possible to achieve adaptively large or small perturbations without any predefined setting of the range of perturbation. However, a local perturbation caused by mutation is also useful in maintaining diversity in a population. Reference [88] mentions some of the most commonly used real-parameter mutation operators, such as random mutation, non-uniform mutation, normally distributed mutation and polynomial mutation. In the following, we introduce only random mutation to show how a mutation operator produces a new solution.

A solution in the vicinity of the parent solution, with a uniform probability distribution (shown with a dashed line in Figure 6-4), can be chosen by the following equation:

yi(1,t+1) = xi(1,t) + (ri − 0.5) Δi        (6-6)

where ri is a random number in [0, 1] and Δi is the user-defined maximum perturbation allowed in the i-th decision variable. Care should be taken to check whether the above calculation takes yi(1,t+1) outside the specified lower and upper limits.

Figure 6-4: The random mutation operator.
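A sketch of the random mutation of (6-6), with clipping to the variable bounds as cautioned above; the perturbation size and bounds are illustrative.

```python
import random

def random_mutation(x, delta, lower, upper):
    """Perturb x uniformly within +/- delta/2 (eq. 6-6) and clip to [lower, upper]."""
    y = x + (random.random() - 0.5) * delta
    return min(max(y, lower), upper)

random.seed(2)
print(random_mutation(150.0, delta=20.0, lower=50.0, upper=200.0))
```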


6.2.3 Constraint-Handling in Evolutionary Algorithms

There are a number of efficient methods for handling constraints. We classify them into five categories, as follows:

1. Methods based on preserving feasibility of solutions.

2. Method based on penalty functions.

3. Methods biasing feasible over infeasible solutions.

4. Methods based on decoders.

5. Hybrid methods.

We briefly describe the first two categories of methods in the following subsections.

6.2.3.1 Methods Based on Preserving Feasibility of Solutions

From the above introduction, we have seen that the crossover and mutation operations are used to create offspring. In order to keep the offspring feasible, we can proceed as follows.

First, one decision variable can be eliminated by using an equality constraint either implicitly or explicitly. In this way, the equality constraint is always satisfied. For example,

for the equality constraint h(x) = x1x2² − 2x3 = 0, the variable x3 can be eliminated by substituting x3 = x1x2²/2 in the objective function and in all other constraints. In this way, an EA can use one decision variable less. This also allows the above equality constraint to be satisfied automatically in all solutions used in the optimization process. Similarly, for K linear simultaneous equalities, K decision variables can be eliminated (leaving only (n−K) decision variables in the optimization), thereby automatically satisfying all K linear equality constraints. In the implicit method of handling an equality constraint h(x), a variable

xk(i) can be calculated from the constraint by finding the root of h(xk, x(i)\(xk)) = 0, where x(i)\(xk) is the (n−1)-dimensional real-parameter vector (without xk) corresponding to the i-th solution.

Therefore, the estimated root of the above equation is used as a value of xk.

Since in most interesting optimization problems the optimal solution lies on the intersection of constraint boundaries, this method is efficient for searching the boundary region between the feasible and infeasible regions.

6.2.3.2 Method Based on Penalty Functions

In this method, an exterior penalty term which penalizes infeasible solutions is used.

Based on the constraint violation gj(x) or hk(x), a bracket-operator penalty term is added to the objective function and a penalized function is formed as below:

F(x) = f(x) + Σj=1..J Rj⟨gj(x)⟩ + Σk=1..K rk⟨hk(x)⟩        (6-7)

where Rj and rk are user-defined penalty parameters. For "less than" inequality constraints, the bracket operator ⟨·⟩ returns the absolute value of the operand if the operand is positive; otherwise, if the operand is non-positive, it returns zero. For equality constraints, the absolute value of the operand is returned.
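The following sketch applies the bracket-operator penalty of (6-7) to the can problem of (6-1), rewriting the volume requirement as a "less than or equal to zero" constraint whose positive violation is penalized; the penalty parameter R is an assumed value.

```python
import math

def bracket(value):
    """Bracket operator: absolute value when the 'less than' operand is positive, else zero."""
    return abs(value) if value > 0 else 0.0

def penalized_cost(d, h, c=0.065, R=10.0):
    """Eq. (6-7) applied to the can problem (6-1): cost plus penalty on volume shortfall."""
    cost = c * (math.pi * d ** 2 / 2 + math.pi * d * h)
    violation = 300.0 - math.pi * d ** 2 * h / 4.0   # g rewritten as 'less than or equal to 0'
    return cost + R * bracket(violation)

print(penalized_cost(8, 10))    # feasible can: volume >= 300, so no penalty is added
print(penalized_cost(4, 10))    # infeasible can: penalty term dominates the cost
```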

6.3 Overviews of Evolutionary Algorithms

EA approaches can be divided into non-Pareto-based and Pareto-based. Here we introduce only the latter, since it is not restricted to a particular type of MO problem. Pareto-based EAs explicitly use Pareto ranking to determine the probability of replication of an individual. The basic idea is to find the set of non-dominated individuals in the population. These are assigned the highest rank and eliminated from further contention. The process is then repeated with the remaining individuals until the entire population is ranked and assigned a fitness value. In conjunction with Pareto-based fitness assignment, a niching mechanism is used to prevent the algorithm from converging to a single region of the Pareto front. A popular niching technique called sharing consists of regulating the density of

solutions in the hyperspace spanned by either the objective vector or the decision vector. The schemes presented below essentially differ in the way the fitness value of an individual is determined prior to the selection step of the EA. Sharing is often used in the computation of the fitness value. Mutation and crossover operations are then performed to get the next generation of individuals.

6.3.1 Multi-objective Genetic Algorithm

A simple and efficient method is the Multi-Objective Genetic Algorithm (MOGA). The fitness value of an individual is proportional to the number of other individuals it dominates (Figure 6-5). Niching can be performed either in the objective space or in the decision space.

Figure 6-5: Illustration of fitness computation for MOGA

6.3.2 Non-dominated Sorting Genetic Algorithm

Non-dominated Sorting Genetic Algorithm (NSGA) uses a layered classification technique, which is shown in Figure 6-6. All non-dominated individuals are assigned the same fitness value and sharing is applied in the decision variable space. The process is repeated for the remainder of the population with a progressively lower fitness value assigned to the non-dominated individuals.



Figure 6-6: Illustration of fitness computation for NSGA

6.3.3 Niched Pareto Genetic Algorithm

In the Niched Pareto Genetic Algorithm (NPGA) instead of bilateral direct comparison, two individuals are compared with respect to a comparison set (usually 10% of the entire population). When one candidate is dominated by the set while the other is not, the latter is selected. If neither or both the candidates are dominated, fitness sharing is used to decide selection. NPGA introduces a new variable (size of the comparison set), but is computationally faster than the previous techniques, since the selection step is applied only to a subset of the population.

6.3.4 Strength Pareto Evolutionary Algorithm

Strength Pareto Evolutionary Algorithm (SPEA) uses an external archive to maintain the non-dominated solutions found during the evolution. Candidate solutions are compared to the archive. A MOGA-style fitness assignment is applied: fitness of each member of the current population is computed according to the strengths of all external nondominated solutions that

dominate it. A clustering technique is applied to maintain diversity.

6.3.5 Multi-objective Particle Swarm Optimization

More recently, swarm intelligence approaches have been developed for MO problems. In particular, in Multi-Objective Particle Swarm Optimization (MOPSO), the global best (towards which the particles flock while exploring the search space) is changed, after a specified number of PSO epochs, to a heuristically selected point of the emerging nondominated front. The selection method is designed to emphasize regions of low density, thus at the same time maintaining diversity. The algorithm also features a mutation operator and a dynamic grid-based Pareto front management mechanism.

6.4 Non-dominated Sorting Genetic Algorithm Implementation

6.4.1 Fitness Assignment

NSGA uses a layered classification technique to classify all solutions, which are divided into three distinct non-dominated sets, as shown in Figure 6-6. Once the classification task is over, it is clear that all solutions in the first set belong to the best non-dominated set in the population. The second best solutions in the population are those that belong to the second set, and so on. Obviously, the worst solutions are those belonging to the final set. Thus, it makes sense to assign the highest fitness to solutions of the best non-dominated front and then assign a progressively worse fitness to solutions of higher fronts. This is because the best non-dominated solutions in a population are nearest to the true Pareto-optimal front compared to any other solution in the population. The fitness assignment procedure begins from the first non-dominated set and successively proceeds to dominated sets. Any solution of the first non-dominated set is assigned a fitness equal to F = N (the population size). This specific value of N is used for a particular purpose. Since all solutions in the first non-dominated set are equally important in

terms of their closeness to the Pareto-optimal front relative to the current population, we assign the same fitness to all of them. With respect to Figure 6-6, solutions 1 to 4 are all assigned a fitness equal to 10 (the population size). Assigning more fitness to solutions belonging to a better non-dominated set ensures a selection pressure towards the Pareto-optimal front. However, in order to achieve the second goal, diversity among solutions in a front must also be maintained. Unless an explicit diversity-preserving mechanism is used, EAs are not likely to ensure diversity. In NSGA, the diversity is maintained by degrading the assigned fitness based on the number of neighboring solutions. In order to explain the need for diversity preservation, let us refer to the solutions in the first front in Figure 6-6. It is clear that the front is not represented adequately by the four solutions 1 to 4: three solutions (2 to 4) are crowded in one portion of the front, whereas there is only one solution representing the entire top-left portion of the front. If solution 1 is not emphasized adequately and gets lost in subsequent genetic operations, the EA has to rediscover this part of the Pareto-optimal front. In order to avoid this problem, we have to make sure that less crowded regions in a front are adequately emphasized. In NSGA, the sharing function method is used for this purpose. The sharing function method is applied front-wise. That is, for each solution i in front 1, the normalized Euclidean distance dij from another solution j in the same front is calculated as follows:

dij = sqrt( Σk=1..P1 [ (xk(i) − xk(j)) / (xk(max) − xk(min)) ]² )        (6-8)

where P1 is the number of solutions in the first front. Sharing distance can be calculated with the decision variables, as well as objective function values. Once these distances are calculated, they are used to compute a sharing function value by using


Sh(d) = 1 − (d/σshare)^α,  if d ≤ σshare
Sh(d) = 0,                 otherwise        (6-9)

The parameter d is the distance between any two solutions, calculated by (6-8). The above function takes a value in [0, 1], depending on the values of d and σshare. If d is zero (meaning that the two solutions are identical or their distance is zero), Sh(d) = 1; this means that a solution has a full sharing effect on itself. On the other hand, if d ≥ σshare (meaning that two solutions are at least a distance σshare away from each other), Sh(d) = 0; two solutions that are at least a distance σshare away from each other do not have any sharing effect on each other. Any other distance d between two solutions has a partial effect on each. Thus, it is clear that in a population a solution may not get any sharing effect from some solutions, may get a partial sharing effect from a few solutions, and will get a full effect from itself. These sharing function values, calculated with respect to all population members (including the solution itself), are added to obtain a niche count nci for the i-th solution, as follows:

nci = Σj=1..P1 Sh(dij)        (6-10)

The niche count provides an estimate of the extent of crowding near a solution. It is important to note that nci is always greater than or equal to one, because the right-hand side includes the term Sh(dii) = Sh(0) = 1. The final task is to calculate the shared fitness value

as Fi' = Fi / nci. Since all over-represented optima will have a large niche count, the fitness of all representative solutions of these optima will be degraded by a large amount. All under-represented optima will have a smaller nci value and the degradation of their fitness values will not be large, thereby emphasizing the under-represented optima. In the context of

Figure 6-6, the shared fitness of solution 1 will be F1' = 10, whereas that of the other three solutions would be close to 10/3, or 3.33. The above procedure completes the fitness assignment of all solutions in the first front. In order to proceed to the second front, we note the minimum shared fitness in the

first front and then assign a fitness value slightly smaller than this minimum shared fitness value. This makes sure that no solution in the first front has a shared fitness worse than the assigned fitness of any solution in the second front. Once again, the sharing function method is applied among the solutions of the second front and the corresponding shared fitness values are computed. This procedure is continued until all solutions are assigned a shared fitness.
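A sketch of the sharing-based fitness degradation of (6-8)-(6-10) for a single front; the front data, σshare and α are illustrative, and distances are computed here over normalized objective values, which the text allows as an alternative to decision variables.

```python
import math

def sharing(d, sigma_share=0.5, alpha=1.0):
    """Eq. (6-9): full effect at d = 0, no effect beyond sigma_share."""
    return 1.0 - (d / sigma_share) ** alpha if d <= sigma_share else 0.0

def shared_fitness(front, fitness, sigma_share=0.5):
    """Degrade a common fitness by each solution's niche count (eqs. 6-8 and 6-10)."""
    result = []
    for xi in front:
        distances = [math.dist(xi, xj) for xj in front]
        niche_count = sum(sharing(d, sigma_share) for d in distances)
        result.append(fitness / niche_count)
    return result

# Illustrative first front (normalized risk, cost): solution 0 isolated, solutions 1-3 crowded.
front = [(0.1, 0.9), (0.8, 0.2), (0.82, 0.19), (0.84, 0.18)]
print(shared_fitness(front, fitness=10.0))
```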

6.4.2 Algorithm Implement and Efficiency Improvement

The NSGA algorithm developed for the RBMO problem proceeds in the following steps:
Step 1: Randomly generate an initial parent population (generator output vectors) and run a power flow to obtain all unknown states and the objective function values (cost, risk) of each solution in the population;
Step 2: Create offspring through crossover and mutation operations and calculate the objective function values of each offspring;
Step 3: With respect to the whole population (parents and offspring), classify all solutions into different fronts using the layered classification technique, and form the new generation according to the assigned fitness. For example, in Figure 6-6, if we select 6 solutions for the next iteration, solutions 1 to 4 in Front 1 are selected first; the other two solutions are in Front 2 and are decided by fitness value;
Step 4: Compare with the result of the prior iteration; if the convergence condition is satisfied, or a pre-specified number of generations has been reached, stop and output the solution, otherwise go back to Step 2;
Step 5: The solutions of Front 1 in the last iteration are the Pareto-optimal solutions for RBMO. In other words, the Pareto-optimal set provides the candidate operating points.
To calculate the cost and risk of each solution, we require the MW outputs of the generator units (to obtain cost) and the line flows and bus voltages (to obtain risk) under normal and contingency states.


Running an AC power flow for each solution is computationally excessive for on-line analysis, especially when the number of contingencies is large. We reduce computation within the EA by utilizing sensitivity information. A full power flow is solved when generating the parent solutions in the first generation. At the same time, sensitivity data for bus voltages and line flows, with respect to the control variables (generation output), are obtained. Therefore, instead of using a full power flow, we can use the sensitivity information of the closest parent to compute the objective function values for each offspring, as shown in Figure 6-7. Since parent solutions are well distributed in the decision space and the sensitivity data of the closest parent is used, we obtain good objective value approximations for the offspring solutions. This simplification saves around 95% of the computing time.
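A sketch of the closest-parent sensitivity approximation described above: each offspring's line flows (and hence the quantities needed for risk) are estimated by a first-order update around the nearest parent instead of a full power flow. The data structures and all numbers are illustrative assumptions.

```python
import numpy as np

def approximate_flows(offspring, parents, parent_flows, flow_sensitivities):
    """Estimate offspring line flows from the closest parent's solved flows and sensitivities."""
    estimates = []
    for child in offspring:
        k = int(np.argmin([np.linalg.norm(child - p) for p in parents]))   # closest parent
        estimates.append(parent_flows[k] + flow_sensitivities[k] @ (child - parents[k]))
    return estimates

# Two parents (generator MW vectors) with solved flows and flow/control sensitivities (assumed).
parents = [np.array([100.0, 60.0]), np.array([80.0, 80.0])]
parent_flows = [np.array([35.0, 50.0]), np.array([30.0, 55.0])]
flow_sensitivities = [np.array([[0.4, -0.1], [0.3, 0.5]]),
                      np.array([[0.4, -0.1], [0.3, 0.5]])]

print(approximate_flows([np.array([95.0, 65.0])], parents, parent_flows, flow_sensitivities))
```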


Figure 6-7: Objective value calculation using sensitivity data

6.5 Case Study – IEEE 24 Bus RTS-96 System

In this section, we use the IEEE 24-bus Reliability Test System (RTS) [94], shown in Figure 6-8, to illustrate risk-based decision-making and compare its results to those of SCOPF. We make the following changes to the data relative to that presented in [94].


Figure 6-8: 24-Bus RTS-96 System

1. The continuous ratings given in ([94], Table 12) are used as long-time emergency ratings and reduced to 80% of their original values. In addition, the emergency ratings of lines 12~23, 13~23 and 14~16 are changed to 400 MVA, 550 MVA and 550 MVA, respectively;
2. 100 MW of load is shifted from bus 18 to buses 1 and 2 (50 MW to each), and 100 MW of load is shifted from bus 15 to bus 19;
3. 155 MW of generation capacity is added at buses 13 and 23;
4. The outage rates of circuits 16~17 and 15~16 are increased to 3.33 outages/year.
Figure 6-9 shows the Pareto front obtained from RBMO, which in this case is a surface since

overload and low voltage risks are given separately. In addition, the single solution produced by the SCOPF is indicated. For comparison, we select one solution from the Pareto front, indicated with a large black dot in Figure 6-9, with the comparison summarized in TABLE 6-1.


Figure 6-9: Pareto optimal solutions space

TABLE 6-1: Solution objective comparison

Objective SCOPF RBMO

Cost($) 451,383 446,420

Overload Risk 1.51 0.84

Low voltage Risk 0.49 0.47

DV number (deterministic violations) 0 1

NV number (near violations) 6 1

Figure 6-10 shows the overload violation distribution for both models using the security diagram (the low voltage risk distribution is very similar for the two solutions and so is not shown in this way). We observe from Figure 6-10 that the SCOPF solution is security constrained by the emergency rating of circuit 1 (12~23) under the C2 (13~23) contingency. Figure 6-10 also shows that the RBMO solution brings a distinct change to the violation distribution: violations under contingencies C4 (15~16), C5 (15~21), and C7 (16~17) disappear, while the violation extent on circuit 1 (12~23) increases (moves toward the red zone) under both contingencies C2 (13~23) and C3 (14~16). Yet TABLE 6-1 indicates that these differences result in a much lower overload risk and a significantly lower cost for the RBMO solution than for the SCOPF solution. The total cost saving is $4,963/hr, a savings realized by a greater use of less expensive generation at the four units of bus 23, as shown in the generation output results of Figure 6-11.


Figure 6-10: Overload risk visualization

(Circle violation: SCOPF solution; Rectangle violation: RBMO solution)

1 : Line 12~23 ; 2 : Line 13~23 ; 3 : 14~16 ; 4 : 15~16 ; 5 : 15~21(I) ; 6 : 15~21(II) ; 7 : 16~17


(Bar chart: generator output (100 MW) by generator number, 1 to 33, for the SCOPF and RBMO solutions)

Figure 6-11: Generation outputs of the SCOPF and RBMO solutions


Figure 6-12: Solution CEI evaluation

Figure 6-12 illustrates the results of catastrophic outcome assessment for cascading. Here, we limit the Pareto solutions tested to the one solution addressed in TABLE 6-1 and Figure 6-10 and identified by the black dot in Figure 6-9. Figure 6-12 shows that the SCOPF solution causes a high CEI of 5.81, mainly because of contingencies 4 (15~16) and 7 (16~17). In contrast, RBMO eliminates the cascading probability for these events, as shown in Figure 6-10. This provides evidence that RBMO generates operating points that do much better than


SCOPF in catastrophic outcome testing, an attribute resulting from the fact that RBMO controls security level uniformly throughout the system, and not just one circuit at a time.

6.6 Summary

This chapter provides a new technique for security and economy trade-off analysis through risk-based multi-objective optimization (RBMO). The system security level, quantified as overload risk and low voltage risk, and cost are used as objectives in RBMO, through which the Pareto-optimal solutions are identified. Compared with the traditional SCOPF method, the RBMO approach identifies solutions lower in both cost and risk. The chapter provides evidence, through the use of catastrophic outcome assessment for cascading overloads, that such RBMO solutions, having more uniform security control, are also less exposed to the occurrence of cascading overloads. It is not unreasonable to expect that testing for voltage instability, transient instability, and oscillatory instability would yield similar observations.


CHAPTER 7: RISK BASED CONGESTION MANAGEMENT AND LMP CALCULATION IN ELECTRICITY MARKET

7.1 Introduction

The Energy Policy Act of 1992 (EPACT 92) initiated competition in the US electric power industry. One of the stated purposes of EPACT 92 was "to use the market rather than government regulation wherever possible both to advance energy security goals and to protect consumers". Based on this Act, the Federal Energy Regulatory Commission (FERC) was authorized to oversee ISOs and RTOs adopting the standard market design (SMD), described on the FERC website [95]. Transmission access is required to be open to all market participants. More discussion of SMD is given in [96][97]. In a deregulated competitive environment, the market's objective of benefit maximization results in various forms of congestion when transmission capacity is insufficient: transmission line overload, low bus voltage, voltage instability [98], and dynamic security problems [99]. Congestion management has been an active research area since the competitive market replaced traditional regulation. It belongs to the reliability function (keeping the system secure), but must be solved within the market process (the day-ahead and real-time markets). Previous works [100]-[103] introduced several congestion management methods. Basically, these methods are divided into two categories: non-market based and market based. In the first category, a transmission loading relief (TLR) [104] procedure is invoked when transactions violate security rules. In contrast, the latter uses market price signals and bidding to avoid power delivery risk and manage congestion [47], especially in markets based on locational marginal prices (LMPs). Other works [106][107] describe congestion management experience in industry, such as PJM, MISO and ERCOT. Industry practice shows TLR is uncommon in systems where LMPs are used as the primary means for congestion management. In this research, we focus on the LMP-based market environment.

Congestion is a key driver of differences in market prices, which determine how much buyers pay and suppliers are paid. LMPs consist of three components: energy, congestion and losses [48][108]. The LMP at a given node of a power system is defined as the incremental cost of supplying power at that node [48][109]. In a lossless system with no active constraints, the LMPs at all nodes are equal to the marginal cost of meeting the last increment of demand. Under constrained conditions, the LMPs reflect the increased cost of delivering energy from marginal generating units to load buses when insufficient transmission capacity exists. The main characteristics of LMPs are described in [110]. As introduced in the preceding chapters, SCOPF is widely used to achieve optimal security control, and it is also the foundation of congestion management in market settlement. The same problem arises here as in security control: decisions based on deterministic assessment may result in either very low risk or unintended high risk. Therefore, deterministic congestion management (DCM) cannot avoid security risk in the market, and it may cause an inadequate understanding of the system and an inadequate level of situational awareness. Currently, LMP systems are in use in several US wholesale power markets, such as the PJM RTO [49]-[52], the New England ISO [53]-[55], and the New York ISO [56]. In these wholesale markets, LMPs are applied to both the day-ahead market and the real-time balancing market. LMPs are the Lagrangian multipliers that arise as byproducts of the optimization problem. In the PJM market, the same single-contingency criteria and transmission equipment ratings are considered in the LMP calculation for the day-ahead and real-time markets, using a least-cost security-constrained economic dispatch program. From a risk point of view, deterministic LMPs (DLMPs), associated with DCM, cannot give efficient price signals to support reliable grid operation and management.


Using risk information properly, decision makers (market coordinators) can move the system to a less risky situation. In Chapters 5 and 6, we developed the risk based multi-objective optimization (RBMO) model and showed that risk based security assessment has unique advantages. In this chapter, risk, the product of contingency probability and consequence, is used as a security index to represent the system congestion level in congestion management. The impact of risk on LMPs is analyzed, and a multi-objective DCOPF model for risk based LMP calculation is developed. The objective of this chapter is to develop the RBCM procedure and provide an effective price signal to mitigate the security impact of market transactions on system operation. The chapter is organized as follows. Section 7.2 describes the differences between deterministic and risk based congestion management. LMP decompositions for the two congestion management methods are formulated and analyzed in Section 7.3. Section 7.4 presents a multi-objective DCOPF model for RBLMP calculation. Study results are shown in Section 7.5. Section 7.6 concludes.

7.2 Different Methods of Congestion Management

7.2.1 Deterministic Congestion Management

Deterministic criteria are usually expressed in terms of "tests" in which the system is required to withstand a predetermined set of disturbances. The deterministic indices are intuitive and physically meaningful, such as the thermal limit of a transmission line, the voltage limit of a load bus, or the load margin requirement for voltage stability. In market based congestion management, all of these limits can be converted to market signals in the congestion-relieving OPF model. The goal of deterministic congestion management (DCM) is to guarantee that system performance does not violate these limits. The mathematical model of DCM is formulated below.


LP1:

$\min\; C = c^{T}P$  (7-1-1)

subject to

$e^{T}(P - L) = Loss(p_2,\ldots,p_n, l_2,\ldots,l_n)$  (7-1-2)

$F_k(p_2,\ldots,p_n, l_2,\ldots,l_n) \le F_k^{1\max}$  (7-1-3)

$F_{k-j}(p_2,\ldots,p_n, l_2,\ldots,l_n) \le F_k^{2\max}, \quad k, j = 1,2,\ldots,m$  (7-1-4)

where
c: generation bid price vector;
P: market generator output vector (p1, p2, ..., pn);
e: vector whose elements are all 1;
L: market demand vector (l1, l2, ..., ln);
Loss: loss function (the sum of nodal injections equals the losses);
n: total number of nodes (node 1 is the reference);
F_k: flow on line k in the normal state;
F_k^{1max}: normal line flow limit;
F_k^{2max}: emergency line flow limit;
F_{k-j}: flow on line k under contingency j (N-1);
m: total number of lines.

Equation (7-1-1) is the objective function of DCM, which can also include load cost if demand bid prices are available. Equation (7-1-2) is the energy balance constraint, in which the reference nodal (node 1) injection has no impact on losses. The transmission line flows under the normal and contingency states are constrained by Equations (7-1-3) and (7-1-4), in which the injection of the reference node does not appear.

The DCM model results in no deterministic congestion; in other words, there is no circle in the red region of the security diagram. All congestions are controlled inside the deterministic boundary. However, the following two weaknesses can be observed in the result of DCM:
1. The extent and number of congestions cannot be identified. The congestion situation of a case with ten near congestions (95%~100% loading) is arguably more serious than that of a case with only one congestion at 100%, yet both cases are treated identically by DCM.
2. The probability of the congestion-incurring contingency is not addressed. In congestion analysis, the various contingency states occur with different probabilities.
Because of these two weaknesses, DCM is only a local congestion relief process, rather than a global one that manages congestion in a uniform way. That is the motivation to develop a quantified congestion metric that indicates the system-wide congestion level and can be used for congestion relief.

7.2.2 Risk Based Congestion Management

To compensate for the drawbacks of DCM, risk based congestion management (RBCM) combines the two key attributes within the philosophy of reliability criteria: the probability and the severity (congestion extent) of disturbances. The expectation, i.e., the product of these two attributes, is called the congestion level, and it quantitatively indicates the congestion situation of the system. In DCM, security limits are treated as hard boundaries and near congestions are not addressed. In RBCM, by contrast, we define severity functions for overload and low voltage congestion, as shown in Figure 2-5 and Figure 2-6, where Pmax and Vmin correspond to the outer boundaries of the security diagram. The linear, continuous severity functions provide the measurement of congestion, increasing linearly as the loading (or voltage depression) approaches the rating.


Incorporating the risk metric in the congestion relief process, we can formulate RBCM as follows.

LP2:

$\min\; C = c^{T}P$

subject to

$e^{T}(P - L) = Loss(p_2,\ldots,p_n, l_2,\ldots,l_n)$  (7-2-1)

$F_k(p_2,\ldots,p_n, l_2,\ldots,l_n) \le F_k^{1\max}, \quad k = 1,2,\ldots,m$  (7-2-2)

$p_0 \sum_{k=1}^{m} Sev(F_k) + \sum_{j=1}^{m} p_j \sum_{k=1}^{m} Sev(F_{k-j}) \le Risk^{\max}$  (7-2-3)

where
p0: probability of the normal state;
pj: probability of N-1 contingency state j;
Sev: congestion severity function;
Risk^max: system congestion level target.
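As a minimal numerical sketch of evaluating the left-hand side of (7-2-3): the piecewise-linear overload severity function below (zero up to 90% of the rating, 1.0 at the rating, and growing linearly beyond it) is an assumed form consistent with the linear severity functions described above, and the flows, ratings and probabilities are illustrative only.

```python
def overload_severity(flow, rating, start=0.9):
    """Assumed piecewise-linear overload severity: 0 below start*rating,
    1.0 at the rating, increasing linearly with loading."""
    loading = abs(flow) / rating
    return max(0.0, (loading - start) / (1.0 - start))

def congestion_level(p0, normal_flows, contingency_flows, p_ctg, ratings):
    """Congestion level (risk): p0 * sum_k Sev(F_k) + sum_j p_j * sum_k Sev(F_k-j)."""
    risk = p0 * sum(overload_severity(f, r)
                    for f, r in zip(normal_flows, ratings))
    for pj, flows_j in zip(p_ctg, contingency_flows):
        risk += pj * sum(overload_severity(f, r)
                         for f, r in zip(flows_j, ratings))
    return risk

# Illustrative numbers: 3 lines, 2 contingencies
ratings       = [100.0, 80.0, 120.0]
normal_flows  = [85.0, 60.0, 100.0]
contingency_flows = [[98.0, 70.0, 110.0],   # flows under contingency 1
                     [105.0, 75.0, 95.0]]   # flows under contingency 2
risk = congestion_level(p0=0.95, normal_flows=normal_flows,
                        contingency_flows=contingency_flows,
                        p_ctg=[0.03, 0.02], ratings=ratings)
print(round(risk, 3))
```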

7.2.3 Two Methods Comparison

In deterministic congestion management, all security limits must be strictly satisfied, as shown in equation (7-1-4). Even though this removes all congestions located in the red region of the security diagram, system awareness is not adequate, since only the directions associated with congested components are pointed out. The congestion information from the yellow region, which is also a stressful area, is not observable. In addition, the contingency occurrence probability, which plays an important role in the expectation of congestion, is not addressed. By contrast, RBCM defines a congestion indicator, risk, that properly takes all congestion factors into account. The comparison of DCM and RBCM is summarized in TABLE 7-1, which shows that RBCM is superior to DCM in several respects. The most important improvement of RBCM is the use of a quantified congestion level as a market reliability lever to maneuver stressful circuits caused by transactions. It uniformly expands a "security volume" whose distance from the center, in each direction, is associated with both deterministic and near congestions, whereas DCM expands only the limited dimensions corresponding to deterministic congestions. In short, DCM is a local congestion relief process, while RBCM relieves congestion globally.

TABLE 7-1: Comparison of DCM and RBCM

Elements Included? DCM RBCM

N-1 Scenario Yes Yes

Congestion likelihood No Yes

Congestion numbers No Yes

Congestion extent No Yes

congestion level No Yes

7.3 LMP Decomposition and Analysis

In the market environment, congestion management is not only an approach to enhancing system security but also a means of converting system limits into market price signals for all participants, who can then form their bidding strategies. Any limit imposes a constraint on the market. In both models (LP1 and LP2), locational marginal prices (LMPs) can be calculated to reflect the cost of delivering one additional MW of electricity to a given location while respecting all system limits in effect, and they are used as clearing prices to ensure market reliability. Another use of LMPs is as an incentive price signal, which provides important financial information for planning and investment decisions. The following subsections present the price signals from DCM and RBCM, respectively.


7.3.1 Deterministic LMP Decomposition

LMPs under the deterministic congestion method are calculated using LP1, in which line flows are deterministically constrained for the normal state and contingencies. The Lagrangian for LP1 is

$\psi = C + \pi\,[\,Loss(p_2,\ldots,l_n) - e^{T}(P - L)\,] + \sum_{k=1}^{m}\mu_k\,[\,F_k(p_2,\ldots,l_n) - F_k^{1\max}\,] + \sum_{k=1}^{m}\sum_{j=1}^{m}\mu_{k-j}\,[\,F_{k-j}(p_2,\ldots,l_n) - F_k^{2\max}\,]$  (7-3)

where
π: shadow price of the energy balance equation;
μk: shadow price of transmission congestion under the normal state;
μk-j: shadow price of transmission congestion under contingency state j.

The Karush-Kuhn-Tucker (KKT) conditions with respect to the nodal loads are

$\dfrac{\partial\psi}{\partial l_1} = \dfrac{dC}{dl_1} - \pi = 0$  (7-4-1)

$\dfrac{\partial\psi}{\partial l_h} = \dfrac{dC}{dl_h} + \pi\Big(\dfrac{\partial Loss}{\partial l_h} - 1\Big) - \sum_{k=1}^{m}\mu_k\dfrac{\partial F_k}{\partial l_h} - \sum_{k=1}^{m}\sum_{j=1}^{m}\mu_{k-j}\dfrac{\partial F_{k-j}}{\partial l_h} = 0, \quad h > 1$  (7-4-2)

We assume there is only one deterministic congestion: the flow on line i violates its limit under contingency q. Then all μk and μk-j are zero except μi-q, and the LMP is

$\dfrac{dC}{dl_1} = \pi$  (7-5-1)

$\dfrac{dC}{dl_h} = \pi\Big(1 - \dfrac{\partial Loss}{\partial l_h}\Big) + \mu_{i-q}\dfrac{\partial F_{i-q}}{\partial l_h}, \quad h > 1$  (7-5-2)

Thus the DLMPs consist of an energy component (π), a loss component (−π ∂Loss/∂l_h), and a congestion component (μi-q ∂F_{i-q}/∂l_h). The energy component equals the LMP at the reference node. The congestion part of every nodal price except that of the reference node depends on μi-q and on ∂F_{i-q}/∂l_h, the sensitivity of the congested line flow with respect to the nodal injection.
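As a purely illustrative, hypothetical numerical example (none of these figures come from the test systems used later): suppose π = 12 $/MWh, ∂Loss/∂l_h = 0.02, μi-q = 5 $/MWh and ∂F_{i-q}/∂l_h = 0.4. Equation (7-5-2) then gives

$DLMP_h = 12\,(1-0.02) + 5\times 0.4 = 11.76 + 2.00 = 13.76\ \$/\mathrm{MWh}$,

with an energy component of 12, a loss component of −0.24, and a congestion component of 2.00 $/MWh.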


7.3.2 Risk based LMP Decomposition

RBCM based LMPs (RBLMPs) are solved using LP2, in which the line flow constraints under contingency states are replaced by the system congestion level indicator. The Lagrangian for LP2 is

$\psi = C + \pi\,[\,Loss(p_2,\ldots,l_n) - e^{T}(P - L)\,] + \sum_{k=1}^{m}\mu_k\,[\,F_k - F_k^{1\max}\,] + \phi\Big[\,Risk^{\max} - p_0\sum_{k=1}^{m}Sev(F_k) - \sum_{j=1}^{m}p_j\sum_{k=1}^{m}Sev(F_{k-j})\,\Big]$  (7-6)

where φ is the shadow price of the risk constraint. The corresponding KKT conditions for the nodal loads are

$\dfrac{\partial\psi}{\partial l_1} = \dfrac{dC}{dl_1} - \pi = 0$  (7-7-1)

$\dfrac{\partial\psi}{\partial l_h} = \dfrac{dC}{dl_h} + \pi\Big(\dfrac{\partial Loss}{\partial l_h} - 1\Big) - \sum_{k=1}^{m}\mu_k\dfrac{\partial F_k}{\partial l_h} - \phi\Big\{p_0\sum_{k=1}^{m}\dfrac{\partial Sev(F_k)}{\partial l_h} + \sum_{j=1}^{m}p_j\sum_{k=1}^{m}\dfrac{\partial Sev(F_{k-j})}{\partial l_h}\Big\} = 0, \quad h > 1$  (7-7-2)

We assume there are two risky congestions in the system: line s loaded above 100% of its rating under contingency q, and line t loaded at 95% of its rating under contingency r. The risk based LMP (RBLMP) is then

$\dfrac{dC}{dl_1} = \pi$  (7-8-1)

$\dfrac{dC}{dl_h} = \pi\Big(1 - \dfrac{\partial Loss}{\partial l_h}\Big) + \phi\,p_q\dfrac{\partial Sev(F_{s-q})}{\partial l_h} + \phi\,p_r\dfrac{\partial Sev(F_{t-r})}{\partial l_h}, \quad h > 1$  (7-8-2)

RBLMPs also have three parts. The energy and loss components are the same as for DLMPs. However, the congestion component (the last two terms of (7-8-2)) is related not only to the deterministically congested line s but also to the near-congested line t. In addition, the probabilities of contingencies q and r, as well as the nodal injection sensitivities, affect the price signal.
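Continuing the same hypothetical numbers for illustration only: suppose φ = 8 (the shadow price of the risk constraint), p_q = 0.3 and p_r = 0.2 (illustrative contingency weights), ∂Sev(F_{s-q})/∂l_h = 0.6 and ∂Sev(F_{t-r})/∂l_h = 0.5. Equation (7-8-2) then gives

$RBLMP_h = 12\,(1-0.02) + 8\times 0.3\times 0.6 + 8\times 0.2\times 0.5 = 11.76 + 1.44 + 0.80 = 14.00\ \$/\mathrm{MWh}$,

so the near congestion on line t raises the price even though no deterministic limit is binding for it.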

7.3.3 LMP Comparison

From equations (7-5-1) and (7-5-2), DLMPs reflect only the deterministic congestions. Consequently, DLMPs release a low price signal in a heavily congested situation in which many near congestions lie in the yellow region of the security diagram. Without an effective price signal, planners would be overly optimistic about the available transmission capacity, and transmission and generation companies would have little incentive to invest in new facilities. On the other hand, DLMPs can produce a relatively high price signal in a lightly congested situation with only one deterministic congestion, few near congestions, and congestions caused only by events of small occurrence probability. In that case, the congested line could be relieved and market efficiency improved by sending the nodal price down. By comparison, RBLMPs contract all dimensions of the "security volume" simultaneously with proper weights (contingency probabilities), whereas DLMPs contract only a limited number of directions without any reasonable weighting. As a result, RBLMPs provide an effective price signal to relieve system congestion and allocate resources.

7.4 Risk Based LMP Calculation Algorithm Implementation

Both AC and DC models can be used in the OPF to solve the economic dispatch and alleviate transmission congestion. A comparison of the AC and DC power flow models for LMP calculation is given in [111]. The AC model is more accurate but time consuming, especially when a large number of N-1 contingencies is considered. The DC model achieves results fairly close to those of the AC model with little loss of accuracy, and it is computationally efficient because linear programming can be used. In this chapter the DC model is adopted for simplicity, so only overload risk is addressed. Expressions (7-9-1) ~ (7-9-9) give the DC model for RBLMP calculation. One objective of the optimization model is to maximize the social benefit; the other objective, system risk, is treated here as a constraint.

Maximize $F = -\sum_{i\in I}\sum_{k\in B_{Gi}} S_{Gik}P_{Gik} + \sum_{i\in I}\sum_{k\in B_{Di}} S_{Dik}P_{Dik}$,

which is equivalent to

Minimize $F = \sum_{i\in I}\sum_{k\in B_{Gi}} S_{Gik}P_{Gik} - \sum_{i\in I}\sum_{k\in B_{Di}} S_{Dik}P_{Dik}$  (7-9-1)

subject to

$\sum_{k\in B_{Gi}} P_{Gik} - \sum_{k\in B_{Di}} P_{Dik} = \sum_{j\in L_i} b_{ij}(\theta_i - \theta_j)$  (7-9-2)

$P_{ij}^{0} = b_{ij}(\theta_i - \theta_j), \quad ij \in L$  (7-9-3)

$P_l^{*} = P_l^{0} + d_{lm}\,P_m^{0}, \quad l, m \in L$  (7-9-4)

$Risk = f(\Pr, P_l^{0}, P_l^{*})$  (7-9-5)

$P_{Gik}^{-} \le P_{Gik} \le P_{Gik}^{+}$  (7-9-6)

$P_{Dik}^{-} \le P_{Dik} \le P_{Dik}^{+}$  (7-9-7)

$-P_l^{0\max} \le P_l^{0} \le P_l^{0\max}$  (7-9-8)

$0 \le Risk \le Risk^{\max}$  (7-9-9)

where
P_Gik: power supplied (k-th segment) by the unit at bus i;
S_Gik: supply bid price (k-th segment) at bus i;
B_Gi: supply bid set at bus i;
P_Dik: demand (k-th segment) at bus i;
S_Dik: demand bid price (k-th segment) at bus i;
B_Di: demand bid set at bus i;
b_ij: transmission line susceptance;
θ: bus angle;
I: set of all nodes;
L: set of all lines;
L_i: set of all lines connected to node i;
d_lm: line outage distribution factor;
Pr: probabilities of the system states (N and all N-1);
P_l^0: line flow in the normal (N) state;
P_l^*: line flow in an N-1 state;
P_l^0max: thermal limit in the normal state;
P_l^*max: contingency-state (emergency) flow limit.

In this model, expression (7-9-2) gives the DC power flow equations. Equation (7-9-4) obtains the transmission line flows of the N-1 scenarios using line outage distribution factors. Equation (7-9-5) calculates the system overload risk from the contingency probabilities and the severity function. Since the objective and the equality constraints are linear, this model is easily implemented with linear programming, as introduced in Chapter 5. The Lagrangian for this RBMO problem can be written as

$L = \sum_{i\in I}\sum_{k\in B_{Gi}} S_{Gik}P_{Gik} - \sum_{i\in I}\sum_{k\in B_{Di}} S_{Dik}P_{Dik}$
$\;\;+ \sum_{i\in I}\lambda_i\Big[\sum_{k\in B_{Di}}P_{Dik} - \sum_{k\in B_{Gi}}P_{Gik} + \sum_{j\in L_i} b_{ij}(\theta_i-\theta_j)\Big]$
$\;\;+ \sum_{ij\in L}\alpha_{ij}\big[P_{ij}^{0} - b_{ij}(\theta_i-\theta_j)\big] + \sum_{l,m\in L}\beta_{lm}\big[P_l^{*} - P_l^{0} - d_{lm}P_m^{0}\big] + \lambda_r\big[Risk - f(\Pr,P_l^{0},P_l^{*})\big]$
$\;\;+ \sum_{i\in I}\sum_{k\in B_{Gi}}\big[\rho_{Gik}^{+}(P_{Gik}-P_{Gik}^{+}) + \rho_{Gik}^{-}(P_{Gik}^{-}-P_{Gik})\big] + \sum_{i\in I}\sum_{k\in B_{Di}}\big[\rho_{Dik}^{+}(P_{Dik}-P_{Dik}^{+}) + \rho_{Dik}^{-}(P_{Dik}^{-}-P_{Dik})\big]$
$\;\;+ \sum_{l\in L}\big[\sigma_{l}^{+}(P_l^{0}-P_l^{0\max}) + \sigma_{l}^{-}(-P_l^{0\max}-P_l^{0})\big] + \sum_{l,m\in L}\big[\sigma_{lm}^{+}(P_l^{*}-P_l^{*\max}) + \sigma_{lm}^{-}(-P_l^{*\max}-P_l^{*})\big]$
$\;\;+ \phi^{+}(Risk - Risk^{\max}) + \phi^{-}(-Risk)$  (7-10)

where λi, αij, βlm, λr, ρGik, ρDik, σl, σlm and φ are the Lagrangian multipliers associated with constraints (7-9-2) ~ (7-9-9).
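Before turning to the optimality conditions, the following is a minimal sketch of how constraints (7-9-4) and (7-9-5) could be evaluated outside the LP: post-contingency flows from line outage distribution factors and the resulting overload risk. The severity function and all numerical values are assumed for illustration and are not part of the model data.

```python
import numpy as np

def post_contingency_flows(P0, lodf, outaged):
    """Flows on all lines after the outage of line `outaged`, using line
    outage distribution factors: P*_l = P0_l + d_{l,m} * P0_m  (cf. 7-9-4)."""
    flows = P0 + lodf[:, outaged] * P0[outaged]
    flows[outaged] = 0.0          # the outaged line carries no flow
    return flows

def severity(flow, rating, start=0.9):
    """Assumed piecewise-linear overload severity (0 below 90% loading,
    1.0 at the rating, increasing linearly beyond it)."""
    return max(0.0, (abs(flow) / rating - start) / (1.0 - start))

def overload_risk(P0, lodf, ratings, p0, p_ctg):
    """Risk = f(Pr, P0, P*) as in (7-9-5): expected overload severity over
    the normal state and every single-line outage."""
    risk = p0 * sum(severity(f, r) for f, r in zip(P0, ratings))
    for m, pm in enumerate(p_ctg):
        flows = post_contingency_flows(P0, lodf, m)
        risk += pm * sum(severity(f, r) for f, r in zip(flows, ratings))
    return risk

# Illustrative 3-line example
P0      = np.array([60.0, 40.0, 55.0])        # normal-state flows (MW)
ratings = np.array([80.0, 60.0, 70.0])
lodf    = np.array([[0.0, 0.5, 0.4],          # d_{l,m}: effect on line l
                    [0.6, 0.0, 0.6],          # of outaging line m
                    [0.4, 0.5, 0.0]])
print(round(overload_risk(P0, lodf, ratings, p0=0.95, p_ctg=[0.02, 0.02, 0.01]), 3))
```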

Once the optimal solution has been determined, the Lagrangian multipliers λi associated with constraints (7-9-2) are the LMPs of the nodes, and φ+ associated with the upper bound in (7-9-9) is the shadow price of the risk constraint. The following expressions hold at the optimal point:

$\dfrac{\partial L}{\partial P_{Gik}} = S_{Gik} - \lambda_i + \rho_{Gik}^{+} - \rho_{Gik}^{-} = 0$  (7-11)


$\dfrac{\partial L}{\partial P_{Dik}} = -S_{Dik} + \lambda_i + \rho_{Dik}^{+} - \rho_{Dik}^{-} = 0$  (7-12)

Therefore, the relation between the RBLMP, the generation and load bids, and the shadow prices of the bid constraints can be written as

$\lambda_i = S_{Gik} + \rho_{Gik}^{+} - \rho_{Gik}^{-} = S_{Dik} - \rho_{Dik}^{+} + \rho_{Dik}^{-}$  (7-13)

7.5 Case Study

We use the 6-bus test system (Figure 5-4) to calculate RBLMPs in the real-time market. TABLE 7-2 gives the generation and demand bid prices and the corresponding bid limits; PG0 and PL0 are the schedules from the day-ahead market. To analyze the impact of demand, we give the price signal and bid results for inelastic and elastic demand, respectively. We use the same network data and contingency probability information as in Chapter 5.

TABLE 7-2: Real Time Market Bid Data

Participant Bid ($/MWh) Pbid (MW) PG0 (MW) PL0 (MW)

G1 12.4 35 50 0

G2 12.25 40 60 0

G3 11.75 30 40 0

L4 14.5 30 0 45

L5 13.5 20 0 50

L6 12.7 10 0 55

7.5.1 Deterministic LMP Result

DCM results in no deterministic congestion in the market. In this example, the only active constraint is line 8 under contingency 9, which gives rise to the different congestion components of the LMPs according to equation (7-5). The LMPs resulting from the DCM model are given in TABLE 7-3, where bus 3 is taken as the reference node. LMP 5 has the highest congestion component because of its strong relationship with the deterministic congestion. The congestion level calculated for this scenario is 7.56.

TABLE 7-3: LMPs Result Using DCM

Bus 1 2 3(Ref) 4 5 6

DLMP ($/MWh) 12.4 12.3 11.75 12.4 12.6 12.4

Congestion component ($/MWh) 0.65 0.55 0 0.65 0.85 0.65

7.5.2 Risk Based LMP Result with Inelastic Demand

Figure 7-1 shows the RBLMP results as a function of the congestion level (risk). When the risk value is larger than 11.4 violations/year, the LMPs at all nodes are the same, 12.25 $/MWh. Figure 7-2 shows the cleared power offers. At risk 11.4, G3 supplies its maximum offer because it has the lowest bid price, while G1 is cleared at zero because it has the highest bid price. G2 is the marginal unit, whose bid price sets the LMP in the market; in this case, φ in equation (7-8-2) is zero. Figure 7-3 visualizes the distribution of congestions with respect to the contingencies that cause them, and the congestion levels of the different scenarios are shown on a scale beneath the security diagram. The circle scenario shows that contingency 9 causes a deterministic congestion on line 8; hence the market cleared at this congestion level is not deterministically secure.

(Plot: LMP ($/MWh) at buses 1–6 versus risk (violations/year))

Figure 7-1: LMP result with inelastic demand


(Plot: market clearing offers (MW) of PG1–PG3 and PL4–PL6 versus risk (violations/year))

Figure 7-2: Power clearing offer with inelastic demand

We manage the system congestion level by reducing risk. First, supply offers are shifted from unit G3 to G2 to mitigate the congestion. As the risk falls below 8.2, G2 reaches its offer limit, and the supply offer shift from G3 to G1 is then applied for congestion relief. The system can be maneuvered inside the deterministic boundary by reducing the risk to 7.4 (square scenario) or to a stricter risk level, as shown in Figure 7-3. Since the demand is inelastic, the system congestion level cannot be reduced further once it reaches 5.3 (diamond scenario). Figure 7-1 also shows the congestion components of the LMPs, which are the differences between the LMPs and LMP 3. Compared with TABLE 7-3, the congestion component of LMP 4 is now the highest, rather than that of LMP 5. The reason is that the RBLMP is related, through the probabilities of the congestion-incurring contingencies, not only to the deterministic congestion but also to the near congestions. From equation (7-8-2), two other near congestions, line 5 under contingency 2 and line 2 under contingency 5, also enter the price signal, and from the network model the demand at bus 4 has the strongest influence on these two congestions, which pushes LMP 4 up. Therefore, RBLMPs can contract all stressful dimensions and send out incentives for investment.


Figure 7-3: Security diagram with inelastic demand

7.5.3 Risk Based LMP Result with Elastic Demand

With elastic demand, the system congestion can be managed at a lower level. As shown in Figure 7-4 ~ Figure 7-6, when the congestion level is larger than 5.3, the LMP and power clearing offer curves are the same as in the inelastic demand case. To ensure a more secure market, the supply offer from G1 and the demand offer from L6 are cut back, which brings the congestion level below 5.3. After the offer from G1 has been used up (risk = 3.8), L4 starts to shed its offer while the offer of L6 increases, which lowers the congestion level further. Finally, all congestions can be maneuvered inside the center region, as shown in Figure 7-6. During the congestion relief process, the LMPs increase at most nodes, especially node 4.


(Plot: LMP ($/MWh) at buses 1–6 versus risk (violations/year))

Figure 7-4: LMP result with elastic demand

(Plot: market clearing offers (MW) of PG1–PG3 and PL4–PL6 versus risk (violations/year))

Figure 7-5: Power clearing offer with elastic demand

Figure 7-6: Security diagram with elastic demand


7.5.4 Social Benefit and Total Transactions

The RBCM model we implemented can also be treated as a multi-objective optimization problem with two objectives: social benefit and the congestion target (risk). Figure 7-7 provides the Pareto optimal curve, which gives the trade-off between benefit and risk. It is evident that the social benefit decreases as market security is enhanced by tightening the congestion level. For the inelastic demand scenario, the social benefit is affected only by the use of more expensive unit offers at a fixed transaction level, as shown in Figure 7-8. For the elastic demand case, demand is also used to mitigate congestion and push the congestion level lower, which reduces the benefit and changes the total transactions.

(Plot: social benefit ($) versus risk (violations/year) for the elastic and inelastic demand cases)

Figure 7-7: Pareto optimal curve between social benefit and congestion level

(Plot: total transactions (MW) versus risk (violations/year) for the elastic and inelastic demand cases)

Figure 7-8: Risk impact on market total transactions

7.5.5 Market Surplus

In the market, generation suppliers receive revenue and demands make payments based on the LMPs shown in Figure 7-1 and Figure 7-4. The balance between them, which we call the market surplus, is not zero if congestion exists. In the RBLMP market, the surplus varies with the congestion management level (risk), as shown in Figure 7-9 and Figure 7-10. When the risk is larger than 11.4, the generation revenue equals the demand payment. As the congestion level target becomes stricter, the surplus increases. For the elastic demand case, even though both the generation revenue and the demand payment decrease after demand is shed for congestion relief, the surplus still grows.

(Plot, inelastic demand: generator revenue, consumer payment and surplus ($) versus risk (violations/year))

Figure 7-9: Market surplus for inelastic demand

(Plot, elastic demand: generator revenue, consumer payment and surplus ($) versus risk (violations/year))

Figure 7-10: Market surplus for elastic demand

7.5.6 The Impact of Contingency Probability on Risk Based LMP

From expression (7-8-2), the contingency probabilities, as well as the violation severity sensitivities, affect the RBLMP price signals, whereas only the flow sensitivities of the deterministically congested line matter for the DLMP. In Figure 7-11 ~ Figure 7-13, we show the results for different probabilities of contingency 5, expressed as the expected number of contingency occurrences per year, with the system congestion level fixed at 2.6 violations per year. Figure 7-11 shows that the LMPs fall as the probability of contingency 5 decreases, especially below 2 occurrences per year. The probability reduction diminishes the influence of the congestion under contingency 5 and allows an increase of the cheaper offers, as shown in Figure 7-12; for instance, the offers from G1 and L6 gradually rise once the expected occurrence of contingency 5 drops below 3. At the same time, Figure 7-13 shows that the total transactions increase and finally reach the maximum demand once the occurrence rate is below 1. Therefore, the contingency probability properly weights the congestion extent and becomes an important factor in congestion management.

(Plot: LMP ($/MWh) at buses 1–6 versus the expected number of occurrences of contingency 5 per year)

Figure 7-11: LMPs variation with contingency probability

(Plot: power clearing offers (MW) of PG1–PG3 and PL4–PL6 versus the expected number of occurrences of contingency 5 per year)

Figure 7-12: Power clearing offer variation with contingency probability


(Plot: total transactions (MW) versus the expected number of occurrences of contingency 5 per year)

Figure 7-13: Total transaction variation with contingency probability

7.6 Summary

DCM enhances market security through a simple criterion: no deterministic violation. However, DCM does not tell how serious the congestion is or to what extent the congestion could be mitigated, so the DLMP cannot give an effective price signal to warn of congestion and provide investment incentives. In contrast, RBCM defines a congestion level, which identifies a hyper-dimensional volume in which greater volume means more congestion. RBCM gives the market coordinator a single lever to adjust that volume uniformly to the desired level, and the RBLMP contracts all of its dimensions simultaneously, weighted appropriately by the state probabilities.


CHAPTER 8: CONCLUSIONS AND FUTURE WORK

8.1 Contributions of this Work

In this research, a new philosophy of security control and congestion management is proposed. Risk, which explicitly indicates the system health level, turns the understanding of security and congestion into a quantitative one; the system is kept in good shape by incorporating risk into the optimization models used to search for the best control and management schemes. The main contributions of this work are summarized below.

1. Risk definition and visualization. The risk index, the product of probability and severity, utilizes complete system information and gives adequate awareness of the system. However, the absence of a tool for understanding risk has made it less attractive for online application. In this work, we developed a security diagram in which three kinds of information are provided: probability sectors, severity circles and security regions. The security diagram is a bridge between deterministic and probabilistic security understanding and is expected to help operators select the best control actions.

2. Online risk calculation. The probabilistic approach has long been restricted to the planning area, since state probabilities have been calculated as long-run averages, which is not appropriate for online use. In the risk calculation, we proposed a regression model to capture the influence of weather and geography on contingency probability. The resulting model can be used to predict the next hour's contingency probabilities, so that real-time risk is computed together with the results of state estimation.

3. Risk based multi-objective optimization (RBMO) model. SCOPF has been widely used in system control, but it is based on a simple security understanding: no deterministic violation. To incorporate the system security level into the control problem, we developed a multi-objective optimization model with conflicting objectives, risk and cost. A trade-off curve is provided to trace the best solutions: low cost and low risk.

4. Risk based security control design. Traditional control based on SCOPF is a prescriptive approach in which security is predefined in the optimization model, and operators have no further say in deciding the system performance. In the risk based security control process, we first apply the RBMO model to find all Pareto-optimal solutions; operators can then use assisting tools, such as the security diagram, together with higher-level information, to choose the final solution. This approach provides an iterative dialogue between the decision maker and the optimization module, and the system can be maneuvered into good shape (low risk, low cost) under human understanding.

5. Classical method implementations for solving RBMO. Two classical methods, WM and CM, are implemented using LP to solve the RBMO problem in this work. The proposed LP algorithm is computationally efficient since all objectives, equality constraints and inequality constraints are linearized. Both methods convert the multi-objective problem into a single-objective one, but the CM method is more robust because it is problem independent.

6. Evolutionary Algorithm implementation for solving RBMO. Classical methods require prior knowledge about the optimization problem and can get stuck in a local optimum. To compensate for these weaknesses, a novel algorithm based on an EA is proposed to solve the RBMO problem. The whole Pareto-optimal set can be found in a single run, and sensitivity information is exploited in the model to improve efficiency.

7. Risk based market pricing and congestion management. The goal of congestion management is to enhance market security and give non-discriminatory price signals. Incorporating risk into congestion management, we can indicate the congestion level and clear the market at a reasonable security level. In addition, a risk based LMP is derived that is strictly related to the security elements: contingency probability, violation numbers and violation extent. Results show that risk based congestion management can provide a market lever to maneuver market transactions and release a price signal based on the system congestion level. We also provide a DCOPF-based LMP calculation model that considers the influence of risk on the price signal.

8.2 Suggestions for Future Works

This work proposes a novel control methodology that helps decision makers maneuver the system toward better performance. It can be incorporated into EMS functions to give additional system information or to support operators in taking preventive and corrective actions, or into market clearing software to manage congestion and compute prices. The following work can be extended from this dissertation.

1. More comprehensive risk analysis. In this work, risk refers only to overload and low voltage problems. The risks of voltage instability and transient instability can also be addressed within this framework, making decisions based on the Pareto-optimal set more comprehensive.

2. Application of the model over a time interval. All simulations in this work refer to a single snapshot. However, the model we developed can be extended to multi-period problems, for example maintenance scheduling and unit commitment.

3. Investigation of detailed risk based security standards. Even though the probabilistic approach is well developed, it is not yet applied in industry practice because corresponding security standards are not available. Therefore, before the risk based methodology can be applied in industry, present operating rules need to be analyzed and risk based security criteria formulated.


REFERENCES [1] North American Electric Reliability Council, “NERC Planning Standards”, September, 1997 [2] IEEE Working Group Report, “Reliability indices for use in bulk power system supply adequacy evaluation,” IEEE Trans. Power Apparatus and Systems, vol. PAS-97, pp. 1097-1103, July/Aug. 1978 [3] CIGRE TF 38.03.12 Report, “Power system security assessment: A position paper,” ELECTRA, no. 175, Dec. 1997 [4] J. McCalley, V. Vittal, “Risk Based Security Assessment,” EPRI Final Report, 1999, Iowa State University [5] P. Kundur, Power System Stability and Control, McGraw-Hill, New York, 1994. [6] P. Kundur, etc. “Definition and Classification of Power System Stability,” IEEE Trans. on Power Systems, Vol. 19, No. 2, MAY 2004, pp 1387-1401 [7] R.N.Allan, R. Billinton, A. M. Breipohl, and C. H. Grigg, “Bibliography on the Application of Probability Methods in Power System Reliability Evaluation,” IEEE Trans. on Power Systems, Vol. 14, No. 1, February 1999, pp 51-57 [8] A.Berizzi, etc, “Security assessment in operation: a comparative study of probabilistic approaches,” Power Tech Conference Proceedings, Vol. 1, June 2003, pp 23-26 [9] J.D. McCalley, V. Vittal, N. Abi-Samra, “An Overview of Risk Based Severity Assessment,” Proc. IEEE Power Engineering Society Summer Meeting, Vol. 1, Edmonton, Alberta, Canada, 18-22 July 1999. [10] H. Wan, J.D. McCalley, V. Vittal, “Increasing Thermal Rating by Risk Analysis”, IEEE Trans. on Power Systems, Vol. 14, No. 3. August 1999, pp. 815-828 [11] J. D. McCalley, V. Vitlal, H. Wan, Y. Dai, N. Abi-Sanua, “Voltage Risk Assessment,” Proc. IEEE Power Engineering Society Summer Meeting, Vol. 1, Edmonton, Alberta, Canada, 18-22 July 1999. [12] M. Ni, J. D. McCalley, V. Villal, T. Tayyib, “Online Risk-Based Security Assessment”, IEEE Trans. on Power System, Vol. 18, No.1, February 2003, pp. 258-265.


[13] E. De Tuglie, M. Diwmto, M. La Scala, P. Scarpellini, “A Static Optimization Approach to Assess Dynamic Available Transfer Capability”, IEEE Trans. on Power Systems, Vol. IS, N. 3, August 2000, pp. 1069-1076

[14] E. De Tuglie, M. Dicamto, M. La Scala, P. Searpellini, “A Probabilistic Approach for Dynamic Available Transfer Capability Evaluation,” Proc CIGRE, Paris, September 2000 [15] J.W.M. Cheng, D.T. McGillis, F.D. Galima, “Power System Reliability in a Deregulated Environment,” Proc. IEEE Canadian Conference on Electrical and Computer Engineering, Vol. 2, Halifax, Nova Scotia, 7-10 May, 2000 [16] J.W.M. Cheng, D.T. McGtllis, F.D. Galiana, “Bilateral Transactions Considered as Interconnections in a Deregulated Environment”, Proc. IEEE Canadian Conference on Electrical and Computer Engineering, Vol. 2, Waterloo, Ontario, 24-28 May, 1998 [17] K. Uhlen, G. H. Kjhlle, G. G. LIVaS, 0. Breidablik, “A Probabilistic Approach Security Criterion for Determination of Power Transfer limits in a Deregulated environment”, Proc. CIGRE, Paris, September, 2000 [18] I. Wangensteen, 0. S. Grande, “Alternative Models for Congestion Management and Pricing Impact on Network Planning and Physical Operation”, Proc. CIGRE, Paris, September, 2000 [19] Allen J. Wood, Bruce F. Wollenberg, Power Generation, Operation, and Control, 2nd ed. pp514-551 [20] Stott. B, Alsac. O, Monticelli A.J, "Security Analysis and Optimization", Proceedings of the IEEE, 1987, pp 1623-1644 [21] H.W. Dommel, W.F. Tinney, “Optimal Power Flow Solutions”, IEEE Trans. on Power Apparatus and Systems, vol. PAS-87, no. 10, Oct. 1968, pp. 1866-1876. [22] O. Alsac, B. Stott, “Optimal Power Flow with Steady-state Security”, IEEE Trans. On Power Apparatus and Systems, vol. PAS-93, no. 3, May/June 1974, pp. 745-751. [23] D. Sun, B. Ashley, B. Bewer, et al, “Optimal Power Flow by Newton Approach”, IEEE Trans. on Power Apparatus and Systems, vol. PAS-103, no. 10, Oct. 1984, pp.2864- 2875.


[24] R.R. Shoults, D.T. Sun, “Optimal Power Flow based on P-Q Decomposition”, IEEE Trans. on Power Apparatus and Systems, vol. PAS-101, Feb. 1982, pp.397-405. [25] Z. Yan, N.D. Xiang, B.M. Zhang, et al, “A Hybrid Decoupled Approach to Optimal Power Flow”, IEEE Trans. on Power Systems, vol. 11, no. 2, May 1996, pp. 947-954. [26] G. Reid, F. Hasdorf, “Economic Dispatch Using Quadratic Programming,” IEEE Trans. on Power Apparatus and Systems, no. PAS-92, no. 3, May/June 1973, pp. 745-751. [27] K. Aoki, A. Nishikori, R.T. Yokoyana, “Constrained Load Flow Using Recursive Quadratic Programming,” IEEE Trans. on Power Systems, vol. 2, no. 1, Feb. 1987, pp.8- 16. [28] R.C. Burchett, H.H. Happ, D.R. Vierath, “Quadratically Convergent Optimal Power Flow,” IEEE Trans. on Power Apparatus and Systems, vol. PAS-103, no. 8, Aug. 1984, pp. 9267-3276. [29] M.A. El-kady, B.D. Bell, V.F. Carvalho, et al, “Assessment of Real-time Optimal Voltage Control,” IEEE Trans. on Power systems, vol. 1, no. 1, Feb. 1986, pp. 99-107. [30] O. Alsac, J. Bright, M. Prais, et al, “Further Developments in LP-based Optimal Powerflow,” IEEE Trans. Power Systems, vol. 5, no. 3, Aug. 1990, pp. 697-711. [31] D.S. Kirschen, H.P. Van Meeteren, “MW/Voltage Control in a Linear Programming based Optimal Power Flow”, IEEE Trans. on Power Systems, vol. 3, no. 2, May 1988, pp. 481-489. [32] B. Stott, O. Alsac, “Experience with Successive Linear Programming for Optimal Rescheduling of Active and Reactive Power”, Paper 194-01, CIGRE-IFAC Symposiumon Control Applications to Power System Security, Florence, Sept. 1983. [33] M. Olofsson, G. Andersson, L. Soder, “Linear Programming Based Optimal Power Flow Using Second Order Sensitivities”, IEEE Trans. on Power Systems, vol. 10, no. 3, Aug.1995, pp. 1691-1697. [34] E. Lobato, L. Rouco, M.I. Navarrete, et al, “An LP-based Optimal Power Flow for Transmission Losses and Generator Reactive Margin Minimization,” Proc. 2001 IEEE Power Tech Conf., Porto, Portugal, Sep. 2001. [35] N. Karmarkar, “A New Polynomial-time Algorithm for Linear Programming,” Combinatorica, vol. 4, no. 4, 1984, pp. 373-395.


[36] P.E. Gill, W. Murray, A. Saunders, et al, “On Projected Newton Barrier Methods for Linear Programming and an Equivalent to Karmarkar’s Projective Method,” Mathematical Programming, vol. 36, pp. 183-209. [37] K. Clements, P. Davis, K. Frey, “An Interior Point Algorithm for Weighted Least Absolute Value Power System State Estimation,” Paper 91-WM 235-2 PWRS, 1991 IEEE PES Winter Meeting, New York, Feb. 1991. [38] Y. Wu, A.S. Debs, R.E. Marsten, “Direct Nonlinear Predictor-corrector Primal-dual Interior Point Algorithm for Optimal Power Flows,” Proc. 1993 IEEE Power Industry Computer Applications Conf., pp. 138-145. [39] D. Srinivasan, F. Wen, C.S.Chang, A.C.Liew, “A Survey of Applications of Evolutionary Computing to Power System,” International Conference on Intelligent Systems Applications to Power Systems, pp35 – 41, Jan 1996. [40] J. Yuryevich, K. P. Wong, “Evolutionary Programming Based Optimal Power Flow Algorithm,” IEEE Trans. on Power Systems, Vol. 14, No. 4, pp 1245-1250,November 1999 [41] L. Shi, G. Xu, Z. Hua, “A New Heuristic Evolutionary Programming and Its Application in Solution of the Optimal Power Flow, Part1 & Part 2,” International Conference on Power System Technology, Vol. 1, pp 762-770, Aug 1998 [42] A.G. Bakirtzis, P. N. Biskas, C. E. Zoumas, V. Petridis, “Optimal Power Flow by Enhanced Genetic Algorithm,” IEEE Trans. on Power Systems, Vol. 17, No. 2, pp 229- 236, May 2002 [43] J. Tippayachai, W. Ongsakul, I. Ngamroo, “Parallel Micro Genetic Algorithm for Constrained Economic Dispatch,” IEEE Trans. on Power Systems, vol. 17, no. 3, Aug. 2002, pp. 790-797. [44] A. Gomes, C. H. Antunes, A. G. Martins, “A Multiple Objective Evolutionary Approach for the Design and Selection of Load Control Strategies,” IEEE Trans. On Power Systems, Vol. 19, No. 2, pp1173 – 1180, May 2004 [45] M. A. Abido, “Environmental/economic Power Dispatch Using Multi-objective Evolutionary Algorithms,” IEEE Trans. On Power Systems, Vol. 18, No. 4, pp1529 – 1537, Nov 2003


[46] K. Y. Lee, F. F. Yang, “Optimal Reactive Power Planning Using Evolutionary Algorithms,” IEEE Trans. On Power Systems, Vol. 13, No. 1, pp101 – 108, Feb 1998 [47] Fernando L. Alvarado, “Converting System limits to Market Signals,” IEEE Trans. on Power Systems, Vol. 18, No. 2, pp. 422-427, May 2003 [48] “Locational Marginal Pricing Overview,” PJM training material, [online]: http://www.pjm.com [49] “Market Settlements,” PJM training material, [online]: http://www.pjm.com [50] J, Tong, “Overview of PJM Energy Market Design, Operation and Experience,” IEEE International Conference on Electric Utility Deregulation, Restructuring and Power Technologies, April 2004 [51] A. L. Ott, “Experience with PJM Market Operation, System Design and Implementation,” IEEE Trans. on Power Systems, Vol. 18, No. 2, pp. 528-534, May 2003 [52] F. Bresler and S. Bresler, “PJM Implementation of FERC Standard Market Design,” IEEE Power Engineering Society General Meeting, Vol. 4, July 2003 [53] K. W. Cheung, “Standard Market Design for ISO New England Wholesale Electricity Market: An Overview,” IEEE International Conference on Electric Utility Deregulation, Restructuring and Power Technologies, April 2004 [54] D. Gan and Q. Chen, “Locational Marginal Pricing-New England Perspective,” IEEE Power- Engineering Society Winter Meeting, Vol. 1, pp. 169-173, Feb. 2001. [55] NEPOOL, “ISO New England Market Rules and Procedures,” (available at http:// www. Iso-ne.com\market rules_and_procedures/documents/). [56] K. Tammer, “Effects of SMD on System Operation NYISO Experience,” IEEE Power Engineering Society General Meeting, Vol. 4, July 2003. [57] Florin Capitanescu and Louis Wehenkel, “Improving the Statement of the Corrective security-constrained Optimal Power Flow Problem,” IEEE Trans. on Power Systems, Vol. 22., No. 2, May 2007, pp. 887-889 [58] X. Chen, "Preventive/corrective Actions for Power Systems in the context of Risk Based Security Assessment", M.S. Thesis, Iowa State University, Ames, 2001


[59] G. Andersson, etc. “Causes of the 2003 major grid blackouts in North America and Europe, and recommended means to improve system dynamic performance,” IEEE Trans. on Power Systems, Vol. 20., Issue 4, Nov. 2005, pp. 1922-1928 [60] X.Wang, J.R.McDonald, “Modern Power System Planning,” McGraw-Hill, c1994 [61] K. Manmandur and G. Berg, “Efficient simulation of line and transformer outage in power systems,” IEEE Trans. on PAS, vol. PAS-101, no.10, October 1982, pp.3733- 3741. [62] G. Ejebe , “An Adaptive Localization Method For Real-time Security Analysis,” IEEE Trans. on Power System, Vol.7, No.2, May 1992. [63] G. Ejebe, “Methods for Contingency Screening and Ranking for Voltage Stability Analysis of Power Systems,” IEEE Trans. on Power System, Vol.11, No.1, February 1996. [64] Zhihong Jia, “Contingency Ranking for On-Line voltage Stability Assessment,” IEEE Trans. On Power System, Vol. 15, No,3, August 2000. [65] E.Vaahedi, “Voltage Stability Contingency Screening and Ranking,” IEEE Tran. On Power Systems, Vol. 14, No.1, February 1999. [66] T. Van Cutsem, C. Moisse, and V. Sermanson, “Determination of secure operating limits with respect to voltage collapse,” IEEE Trans. On Power Systems, Vol. 14, No. 1, February 1999. [67] S. Greene, I. Dobson, F. Alvarado, “Contingency Ranking for Voltage Collapse via Sensitivities from a single Nose Curve,” IEEE Trans. on Power Systems, Vol. 14, No.1, February 1999. [68] S. Greene, I. Dobson, and F. Alvarado, “Sensitivity of the loading margin to voltage collapse with respect to arbitrary parameters” IEEE Trans. Power Syst., vol. 12, pp. 262– 272, Feb. 1997. [69] M. Ni, J. McCalley, V. Vittal, S. Greene, C. Ten, V. Gangula, and T. Tayyib, “Software Implementation of on-line risk-based security assessment,” IEEE Trans. on Power Systems, Vol. 18, No. 3, August 2003, pp 1165-1172. [70] J. Pu, “On-line analysis of cascading risk,” Ph.D. Dissertation, Iowa State University, August 2005


[71] J. Zhang, Fernando L. Alvarado, “A Heuristic Model of Cascading Line Trips,” Proceedings Of International Conference on Probabilistic Methods Applied to Power Systems, Sep. 2004 [72] X. Chen, M. Ni, and J. McCalley, “Use of Multicriterion Techniques for Control-Room Security Economy Decision-Making,” Proc. of the 2002 Probabilistic Methods Applied to Power Systems, Sept. 2002, Naples, Italy [73] Q. Chen and J. McCalley, “Identifying high-risk N-k contingencies for on-line security assessment,” IEEE Trans. On Power Systems, Vol. 20, No. 2, May 2005 [74] A.M. Breipohl, Chmn., P.Albrecht, R. Allan, S.Asgarpoor, M. Bhavaraju, R.Billinton, M. Curley, C.Heising, M. Mazumdar, M. McCoy, R. Niebo, A.D. Patton, N. Rau, R. Ringlee, A. Schneider, C. Singh, L. Wang, M.Warren, P. Wong, “Pooling Generating Unit Data for Improved Estimates of Performance Indices,” IEEE Trans. on Power Systems, Vol. 10, No. 4, November 1995 [75] William A. Mittelstadt, Sudhir K. Agarwal, D. Paul Ferron, Ron E. Baugh, “Outage Probability Evaluation of Lines Sharing a Common Corridor,” Proc. of the 2004 Probabilistic Methods Applied to Power Systems, Sept. 2004, Iowa, USA [76] G. Anders, “Probability Concepts in Electric Power Systems,” John Wiley, New York, 1990 [77] J. Endrenyi, “Reliability Modeling in Electric Power Systems,” John Wiley, New York, 1978 [78] R. Billinton and R. Allan, “Reliability Evaluation of Engineering Systems: Concepts and Techniques,” Second edition, Plenum Press, New York, 1992 [79] H. Saddock (chair), M. Bhavaraju, R. Billinton, C. DeSieno, J. Endrenyi, G. Horgensen, A. Patton, D. Piede, R. Ringlee, and J. Stratton, “Common Mode Forced Outages of Overhead Transmission Lines,” Task Force on Common Mode Outages of Bulk Power Supply Facilities of the IEEE PES Application of Probability Methods Subcommittee, IEEE Trans. on Power Apparatus and Systems, Vol. PAS-95, No. 3, pp. 859-863, May/June 1976


[80] R. Billinton, T. Medicheria, “Station originated multiple outages in the reliability analysis of a composite generation and transmission system,” IEEE Trans. Power Apparatus and Systems, vol. PAS-100, no. 8, pp 3870-3878 [81] G. Landgren, A. Schneider, M. Bhavaraju, and N. Balu, “Transmission Outage Performance Prediction: Unit or Component Approach,” IEEE Trans. on Power Systems, Vol PWRS-1, No. 2, May, 1986, pp. 54-61 [82] A. Schneider, G. Landgren, M. Bhavaraju, “Analysis and Prediction of Transmission Unit Performance Using Advanced Regression Models,” IEEE Trans. On Power Apparatus and Systems, Vol. PAS-104, No.5, May 1985 [83] David G. Kleinbaum, Lawrence L. kupper, Keith E. Muller, Azhar Nizam, “Applied Regression Analysis and Other Multivariable Methods,” 3nd ed., pp111-136 [84] William Q. Meeker, “Statistical Methods for Reliability Data,” New York, Wiley, 1998 [85] J. McCalley, etc. "Probabilistic Security Assessment for Power System Operations", Power Engineering Society General Meeting, Vol.1, June 2004, pp:212 - 220 [86] A.Berizzi, C.Bovo, P.Maranino, M.Innorta, “Multi-objective Optimization Technique Applied to Modern Power Systems,” Proceedings of the IEEE/PES 2001 Winter Meeting, Columbus, January 2001 [87] A.Berizzi, C.Bovo, P.Maranino, “The Surrogate Worth Trade Off analysis for power system operation in electricity markets,” Power Engineering Society Summer Meeting, 2001, IEEE, pp1034-1039 [88] Kalyanmoy Deb, “Multi-Objective Optimization using Evolutionary Algorithms,” 1st Edit, 2001, John Wiley & Sons, LTD [89] P. Ngatchou, A. Zarei, M. A. El-Sharkawi, “Pareto Multi Objective Optimization,” 13th International Conference on Intelligent Systems Applications to Power Systems, pp84 – 91, Nov 2005 [90] Yacov Y. Haimes, Mark R. Leach, “Risk Assessment and Management in a Multiobjective Framework,” Lecture Notes in Economics and Mathematical Systems, 242, 1984


[91] Vladimiro Miranda, L.M. Proenca, “Why Risk Analysis Outperforms Probabilistic Choice as the Effective Decision Support Paradigm For Power System Planning,” IEEE Trans. on Power Systems, Vol. 13, No. 2, May 1998 [92] Yan Ping, Sekar. A, “A New Approach to Security-Constrained Optimal Power Flow Analysis,” IEEE Power Engineering Society Summer Meeting, Vol.3, July 2001, pp:1462 – 1467 [93] Rahmouni, A., “Secure and optimal operation of a power generation and transmission system-application to the Moroccan system,” IEEE Trans. on Power Systems, Vol. 13, No.3, August 1998, pp850-856 [94] IEEE RTS Task Force of APM Subcommittee, “The IEEE Reliability Test System – 1996,” IEEE Trans. on Power Systems, Vol. 14, No. 3, August 1999. [95] Electrical Energy Market Competition Task Force and the Federal Energy Regulatory Commission. [online]. Available: http://www.ferc.gov [96] N. S. Rau, “Issues in the Path Toward and RTO and Standard Markets,” IEEE Trans. on Power Systems, Vol.18, No.2, pp. 435-443, May 2003 [97] X. Ma, D. I. Sun, and K. W. Cheug, “Evolution Toward Standardized Market Design,” IEEE Trans. on Power Systems, Vol. 18, No. 2, pp. 460-469, May 2003 [98] Antonio J. Conejo, Federico Milano, Raquel García-Bertrand, “Congestion Management Ensuring Voltage,” IEEE Trans. on Power Systems, Vol. 21, No. 1, February 2006 [99] L.Y.C. Amarasinghe, B. Jayasekara, U.D. Annakkate, “The effect of Dynamic Security Constraints on the Locational Marginal Prices,” IEEE Power Engineering Society General Meeting, pp. 370-375, June 2005 [100] I. Androcec, M.Sc., I. Wangensteen, “Different Methods for Congestion Management and Risk Management,” 9th International Conference on Probabilistic Methods Applied to Power Systems, KTH, Stockholm, Sweden - June 11-15, 2006 [101] M. Lommerdal, and L. Soder, “Simulation of Congestion Management Methods,” IEEE Power Tech Conference, Bologna, Italy, June 23-26, 2003 [102] R.S. Fang, A.K. David, “Transmission Congestion Management in a Electricity Market,” IEEE Trans. on Power Systems, Vol. 14, No. 3, August 1999


[103] Harry Singh, Shangyou Hao, Alex Papalexopoulos, “Transmission Congestion Management in Competitive Electricity Markets,” IEEE Trans. on Power Systems, Vol. 13, No. 2, May 1998 [104] Roberto Méndez, Hugh Rudnick, “Congestion Management and Transmission Rights in Centralized Electric Markets,” IEEE Trans. on Power Systems, Vol. 19, No. 2, MAY 2004 [105] NERC Standard IRO-006-1 – Reliability Coordination – Transmission Loading Relief, ftp://www.nerc.com/pub/sys/all_updl/standards/rs/IRO-006-1.pdf [106] Bowe, T.T., Mallinger, T., Rodriquez, A.J., Zwergel, D., “PJM-MISO Congestion Management Process,” IEEE PES Power Systems Conference and Exposition, vol.3, Oct. 2004, pp:1582 – 1587 [107] Jun Yu, “ERCOT Zonal Congestion Management,” Power Engineering Society Summer Meeting, Vol. 3, July 2002, pp:1339 – 1344 [108] Eugene Litvinov, Tongxin Zheng, Gary Rosenwald, Payman Shamsollahi, “Marginal Loss Modeling in LMP Calculation,” IEEE Trans. on Power Systems, Vol. 19, No. 2, MAY 2004 [109] J. Bastian, J. Zhu, V.Banunarayanan, R. Mukerji, “Forecasting Energy Prices in a Competitive Market,” IEEE Computer Applications in Power, V. 12, pp. 40-45, July 1999 [110] Z. Li, H. Daneshi, “Some Observations on Market Clearing Price and Locational Marginal Price,” IEEE Power Engineering Society General Meeting, Vol.2, pp. 2042- 2049, 2005 [111] T. J. Overbye, X. Cheng, Y. Sun, “A Comparion of the AC and DC Power Flow Models for LMP Calculations,” Proceeding of the 37th Hawaii International Conference on System Sciences, 2004