
Analysing safety: epistemic uncertainty and the limits of objective safety

N. Möller, Philosophy Unit, Royal Institute of Technology, Sweden

Abstract

Much research has been devoted to studies of safety, but the concept of safety is in itself under-theorised. Often, safety is indirectly defined through processes and classifications vital for practical safety engineering. However, without a substantial understanding of the concept, the subject matter of risk and safety research remains fuzzy. The aim of this paper is to provide a framework for such a substantial understanding, capturing what experts in risk and safety research as well as ordinary laypersons should include in the concept of safety. When safety is directly defined, it is traditionally defined as the inverse of risk: the lower the risk, the higher the safety. I argue that such a definition of safety is inadequate, since it leaves out the crucial aspect of deficiencies in knowledge. In socio-technical contexts, every evaluation of risk is an estimation, and therefore involves a certain amount of epistemic uncertainty. An analysis of safety must consider that complication. Epistemic uncertainty points to the epistemic character of the safety concept. It is concluded that, strictly speaking, an objective safety concept is not attainable. Instead, an epistemic, intersubjective concept is proposed that brings us as close as possible to the ideal of an objective concept.

Keywords: conceptual analysis, safety, risk, uncertainty, objectivity, intersubjectivity.

1 Introduction

Even though much research has been devoted to studies of safety, the concept itself is under-theorised. The actual meaning of ‘safety’ is often entirely taken for granted or is very loosely defined. A typical example comes from the context of nuclear power, where safety is defined in the following way: “Safety is what provides protection, averts danger, fosters confidence.” ([1], ch. 1, p. 6).


Such a vague characterisation may be sufficient for some contexts, especially if used only as a preamble for a deeper discussion. However, even in technical manuals there is rarely any direct deepening of the notion; instead, reference is made only to standards and processes.

My aim in this paper is to provide a conceptual analysis of safety. Three points will be stressed. First, I argue for the importance of a substantial notion of safety. Second, I argue for a more complete understanding of safety than as the inverse of risk, an understanding according to which safety is analysed in the three dimensions of harm, probability and epistemic uncertainty. Third, I argue for an intersubjective notion of safety instead of the unattainable goal of an objective concept.

2 Substantial and procedural definitions

Normally, discussions of safety only indirectly determine the meaning of the concept. From what is brought up for discussion and from the treatment of different themes, an understanding of the subject matter emerges. However, no direct determination is provided. One way of capturing this difference is by invoking the distinction between procedural and substantial definitions. A procedural definition of safety uses no independent criterion, but states that something is safe when a correct procedure is (successfully) applied. In contrast, a substantial definition of safety supplies an independent criterion for when something is safe. Consider, as an analogy, two definitions of ‘warm’ used to determine whether the shower of the two subjects Eva and Niklas is sufficiently heated:

Wp: The water in the shower is warm if and only if neither Niklas nor Eva has any complaints regarding the temperature when entering the shower.

Ws: The water in the shower is warm if and only if the water temperature is above 30 degrees Celsius.
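Purely for illustration, the two definitions can be rendered as a short sketch; the function names are hypothetical, and only the 30 degrees Celsius threshold is taken from Ws:

```python
# Illustrative sketch only: two ways of deciding whether the shower is "warm".

def warm_procedural(complaints):
    """Wp: no independent criterion; the verdict is whatever the procedure
    (letting Niklas and Eva enter and listening for complaints) delivers."""
    return len(complaints) == 0

def warm_substantial(water_temperature_celsius):
    """Ws: an independent criterion (temperature above 30 degrees Celsius),
    regardless of what anyone happens to think or feel about it."""
    return water_temperature_celsius > 30.0

print(warm_procedural([]))             # True: nobody complained
print(warm_procedural(["too cold!"]))  # False: a complaint of any kind
print(warm_substantial(32.5))          # True: criterion met
print(warm_substantial(28.0))          # False: criterion not met
```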

In the first definition, we test whether the shower is warm or not by applying the procedure of having Niklas and Eva enter the shower and observing whether they complain about the temperature. Since they could also complain about the water being too cold (or hot rather than warm), this procedural definition is imperfect in any case; but the important point is that there is no independent criterion to use, only a procedure to follow, in order to judge whether the shower is warm or not. In the second definition, the independent criterion for ‘warm’ is given in terms of a temperature range (>30 ºC). It is thus a substantial definition.

In the context of safety engineering, there is ambivalence between substantial and procedural notions of safety. On the one hand, safety is conceived as something substantial, something having to do with not being harmed. On the other, the primary focus of safety work is procedural. For the most part, no substantial notion of safety is referred to. Instead, reference is normally made to different types of procedures to follow for acceptable safety.


The checklist used by flight captains before takeoff is a paradigm case of such a procedural usage of the safety concept. Naturally, in safety engineering, as in other areas concerned with safety, procedures for reaching a high level of safety are paramount. However, without a substantial grounding of the concept of safety, the subject matter of risk and safety research remains fuzzy. The general idea may be understood using only vague substantial definitions of the kind exemplified above: the aim of providing protection, averting danger and fostering confidence. However, such definitions are insufficient in the local context for making nuanced choices between different alternatives for bringing about safety, and insufficient in the overall context for decision-making. This is especially true in cases of limited resources and/or time frames. If there is no clear overall notion of, say, traffic safety, how is the aim of reaching it to be carried out? If we have no clear concept of what safety means, how are we to judge whether the level of driver safety attained by one safety belt is higher than that of another, when one is better at preventing damage to most parts of the body but more prone to cause whiplash injuries? And how are we, when trying to improve the overall safety of a car, to prioritise among improvements to different parts if we have only a partial understanding of the goal at hand?

Considering that safety is an overall aim, not something that can be dealt with only eclectically, a substantial concept of safety must be developed. The upshot of this paper is an outline of a framework for such a concept.

3 Objective and subjective safety

What type of claim are we making in analyses of safety? An important distinction is that between an objective and a subjective notion of safety. The concept of objectivity tries to capture the intuition that safety is something independent of our opinion or awareness. A full account of the notion of objectivity is a large and controversial matter outside the scope of this paper (for different perspectives, see [2,3]). For our purposes, however, a sufficiently precise mark of objectivity is the existence of a criterion independent of individual beliefs and feelings. That my height is 178 cm and that the snow outside my window is white are examples of propositions whose truth and falsehood are usually said to be a matter of objective fact, fitting such a criterion. Whether I am aware of it or not, these propositions are either true or false depending (in the relevant sense) only on the external world. In contrast, on a subjective interpretation, “X is safe” means that S believes that X is safe, where S is a subject from whose viewpoint the safety of X is assessed.

This is a different distinction from the one between procedural and substantial definitions. However, in this case the above definitions Ws and Wp may also serve as an example of the distinction between the objective and the subjective: Ws defines ‘warm’ in terms of an independent property (i.e. temperature), whereas Wp refers to the reactions of Eva and Niklas. Obviously, in the case of safety engineering the primary interest is in the objective safety concept.


If we only use the subjective safety concept, we will not have a language fit for dealing with the dangers of the real world. The airplane is not safe just because the pilots (and, say, the technicians) believe that it is working properly; rather, the degree of safety depends on whether their belief is justified by how things in fact are: the status of all functional systems of the airplane is what is really important. The objective safety concept constitutes a terminological ideal that may be difficult to realise. If we do not have objective knowledge about all the determinants of safety, it may be impossible to construct a fully objective concept of safety. We will return to this issue after having introduced the three necessary dimensions of the safety concept, extracting the first two from the standard notion of safety in the next section and supplementing this notion with the additional aspect of epistemic uncertainty in Section 5.

Before going into the analysis of a substantial concept of safety, I will, to avoid confusion, point out one further distinction: the distinction between an absolute and a relative concept of safety. According to an absolute concept, safety against a certain harm implies that the risk of that harm has been eliminated. For example, an ordinary kitchen stove may be completely safe as far as ionising radiation is concerned, simply because it is physically impossible for the stove to emit any radioactivity. According to a relative concept of safety, safety means that the risk has been reduced or controlled to a certain (tolerable) level. Even though it is not uncommon in the literature to take an absolute concept of safety for granted [4,5], in general it is problematic, since it represents an ideal that can never be attained. In the context of radiation protection, the National Radiological Protection Board (UK) states: “There is no absolute level of safety and this needs to be acknowledged more readily by scientists, professionals and politicians.” ([6], p. 8) This is a reasonable assumption for most engineering contexts as well, and operationalisations of the safety concept that have been developed for specific technological purposes typically allow for risk levels above zero. Therefore, even though both concepts can be retained, the relative concept will be our starting point, and we may regard “absolute safety” as the limiting case in which there is no risk at all (for certain).

Summing up, our conceptual aim is thus to reach a substantial, objective and relative concept of safety.

4 Safety as the antonym of risk

Risk is an essential concept for understanding safety, and safety is frequently defined as the inverse of risk: the lower the risk, the higher the safety. We may call this the standard notion of safety [7]. This definition is complicated by the fact that risk is in itself not a very clear concept. It may mean the probability of a negative outcome, or a negative outcome itself, or the cause of a negative outcome [8]. Thus, all relevant conceptions of risk refer in some way to negative outcomes and/or some sort of estimate of their likelihood. Even though there are several more specific, technical uses of safety, the final interest of safety engineering is to prevent human harm. Therefore, ‘negative outcome’ will be interpreted mainly as an event harmful to humans, i.e. as concerning human safety (see Section 6 for further discussion).


Since we are interested in a notion sufficient for comparison and decision-making, it seems as though the two aspects of harm and probability, rather than the cause, are of interest. Probability and severity of harm should be considered vital dimensions of safety.

The standard technical approach to quantifying risk is to consider the risk to be the statistical expectation value of the severity of the harmful event [9]. Thus, to get a measure of the risk of a certain activity, for each harmful event we take the best available estimate of its probability, multiply it by the severity of the event, and sum the products to obtain the magnitude of the risk.

The problem with this approach to risk is the very strong premises needed. We must be able to compare the severity of outcomes not only on an ordinal scale, such as grading them from best to worst, but on an interval scale, like the measurement of temperature in Celsius or Kelvin [10]. When we consider only the number of casualties this assumption may seem reasonable, but in more complex situations it is far-fetched. How much worse is, for example, the loss of a leg compared to a whiplash injury? That such effects may be measured on an interval scale is very much a matter of controversy. Even when the unit of measurement is human casualties, the evaluative matters are unclear. If we are comparing two activities, for example, one killing ten people a year for sure and another that may kill no one but may also kill a hundred, is it really the expected value of casualties that should be used when comparing them? The general case for the expected value approach thus looks bleak.

It seems clear, however, that probability and harm are the major components of risk in the context of interest to us. Paul Slovic’s definition of risk as “a blend of the probability and the severity of the consequences” ([11], p. 365) captures this reluctance to reduce risk to the expected value of harm. Everything else being equal, it should be obvious that safety increases as the probability of harm or the severity of harm decreases. A typical example taken from a safety application states: “[R]isks are defined as the combination of the probability of occurrence of a hazardous event and the severity of the consequence. Safety is achieved by reducing a risk to a tolerable level” [12]. We may thus interpret risk as the combination of harm and probability, but make a weaker claim about their aggregation than the expected value. This weaker concept of risk acknowledges that the concept of safety is grounded in severity of harm and probability, without falling into the trap of oversimplifying the relation between the two aspects. However, even this more reasonable, weakened conception is in itself insufficient for the concept of safety. Safety is not attained, as the quotation above has it, simply “by reducing a risk to a tolerable level”. The reason for this is the evident fact of epistemic uncertainty.
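As a purely illustrative sketch of the expectation-value approach just described, the following computes the sum of probability times severity for a set of invented events; the numerical severity scores presuppose exactly the interval-scale assumption questioned above:

```python
# Illustrative sketch of the expectation-value measure of risk:
# risk = sum over harmful events of (probability x severity).
# All numbers are invented, and the severity scores presuppose an
# interval scale, which is what the surrounding text calls into question.

harmful_events = [
    # (description,      probability per year, severity score)
    ("whiplash injury",  1e-3,                 5.0),
    ("loss of a leg",    1e-5,                 60.0),
    ("fatality",         1e-6,                 1000.0),
]

expected_risk = sum(p * s for _, p, s in harmful_events)
print(f"Expectation-value risk: {expected_risk:.4f} severity units per year")

# The weaker, Slovic-style reading keeps probability and severity together
# as a pair ("a blend") instead of collapsing them into a single number:
for name, p, s in harmful_events:
    print(f"{name}: probability {p:.0e}, severity {s}")
```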

5 The additional aspect of epistemic uncertainty

The quantities referred to in a risk or safety analysis, the probabilities in particular, are in most cases not known with certainty and are therefore subject to epistemic uncertainty. This aspect is paramount for the notion of safety, but it is often neglected in analyses of the concept.


The importance of (un)certainty for decision-making has been shown in empirical studies, e.g. by Daniel Ellsberg [13]. Ellsberg shows that people have a strong tendency in certain situations to prefer an option with low uncertainty to one with high uncertainty, even if the expectation value is somewhat lower in the former case. Ellsberg’s empirical study concerns subjective preferences, but the analogy for safety considerations is illustrated by the following example.

Suppose you are a dedicated glider fanatic. On your travels you happen to drive by an airfield where you see a glider and ask the manager of the field if you may go for a spin. Closing in on the glider you notice that it does look old, but the manager assures you that the probability of a crash is just as low as with other gliders, one in a hundred thousand (say), so you have nothing to worry about. You have a general knowledge of the risk involving gliders, but cannot help feeling a bit worried, even though the manager’s estimate is the best one available to you. Compare this situation to one where the passenger travelling with you happens to be a technical expert on gliders. After a full and thorough examination he concludes that in his expert opinion the glider is a bit more risky than the manager thought: a probability of one in fifty thousand for a crash is a better assessment. Even though this gives worse odds, you have reason to feel a lot safer when entering the glider.

An example of this kind indicates that safety should be construed as something that decreases with the probability of harm, with the severity of harm, and with increasing epistemic uncertainty. Different estimates have different types of uncertainty connected with them. A simple construction designed for, and used many times in, the same kind of circumstances as the present one carries a lower degree of uncertainty regarding its performance than a new and complicated one. Insights like these are common engineering know-how and should be included in the concept of safety as well, not only in the processes of trying to increase safety.

The relevance of uncertainty for safety is also evident from other common engineering practices, such as the “life-boat technique”: even if the probability of colliding with an iceberg happens to be very low, adding life-boats just in case is a standard technique (at least since the Titanic disaster). Likewise, it is standard practice to add an extra safety barrier even if the probability that this barrier will be needed is estimated to be extremely low, e.g. in the context of nuclear waste facilities. Yet another technique for handling uncertainty is the practice of designing an assembly part so that it can be attached in only one way. Even if a trained worker would have the knowledge to attach the part the right way, you avoid human error due to unforeseen circumstances if it is simply impossible to “do it wrong”. Similarly, an alcohol interlock is a solution not for improving the functional safety of the car as such, but for eliminating the possibility of its being used by a drunken driver. Even if techniques such as these may be said to lower the probability of unwanted events for the overall system, they are in many circumstances best argued for in terms of the possibility that the probability estimate may be incorrect. Thus, methods for handling uncertainty are very much a part of safety practice. A substantial analysis of the safety concept must take this into consideration.
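To make the role of uncertainty in the glider example concrete, here is a small sketch contrasting the two situations by means of probability intervals of the kind introduced at the end of this section; the point estimates come from the example, while the interval bounds are invented for illustration:

```python
# Illustrative only: the manager's point estimate is lower, but it comes with
# far more epistemic uncertainty than the expert's estimate after inspection.
# The interval bounds are invented to illustrate the point made in the text.

manager_estimate = {
    "point": 1 / 100_000,                      # from the example
    "interval": (1 / 1_000_000, 1 / 1_000),    # wide: little is actually known
}

expert_estimate = {
    "point": 1 / 50_000,                       # from the example: nominally worse odds
    "interval": (1 / 80_000, 1 / 30_000),      # narrow: based on a thorough examination
}

for name, est in (("manager", manager_estimate), ("expert", expert_estimate)):
    low, high = est["interval"]
    print(f"{name}: point {est['point']:.1e}, plausible range {low:.1e} .. {high:.1e}")

# The expert's worst plausible case (about 3.3e-05) is far better than the
# manager's (1.0e-03), which is one way of spelling out why the well-examined
# glider can reasonably be judged safer despite the higher point estimate.
```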


A fundamental question is how epistemic uncertainty should be characterised in more detail. This is a difficult matter, and only for probability estimates have some, albeit controversial, methods been developed. This is not the place for a discussion of different measures (but see [7]); the field will only be briefly sketched. Two major types of measures of incompletely known probabilities have been proposed: binary measures and multivalued measures. Binary measures divide the probability values into two groups, possible and impossible values, the normal approach being to form a “probability range” within which the probability is judged to lie (e.g. “the probability of rain tomorrow is between five and ten percent”). Multivalued measures generally take the form of a function that assigns a numerical value to each probability value between 0 and 1. This value represents the degree of reliability or plausibility of each particular probability value. The most common multivalued measure is second-order probability, which is interpreted as the probability that the (first-order) probability estimate is correct.
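The two kinds of measures can be pictured in a minimal sketch; all numbers are invented for the example:

```python
# Illustrative sketch: two ways of representing an incompletely known probability.
# All numbers are invented.

# Binary measure: a probability range within which the unknown probability is
# judged to lie, e.g. "the probability of rain tomorrow is between 5% and 10%".
rain_tomorrow_range = (0.05, 0.10)

def is_possible_value(p, prob_range):
    low, high = prob_range
    return low <= p <= high

print(is_possible_value(0.07, rain_tomorrow_range))  # True: a possible value
print(is_possible_value(0.20, rain_tomorrow_range))  # False: an impossible value

# Multivalued measure: each candidate first-order probability gets a degree of
# plausibility. Here a discrete second-order probability distribution, read as
# "the probability that the true probability is p".
second_order = {0.05: 0.2, 0.07: 0.5, 0.10: 0.3}
assert abs(sum(second_order.values()) - 1.0) < 1e-9
print(second_order[0.07])  # plausibility assigned to the candidate value 0.07
```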

6 The limits of objective safety

With the aim of elucidating important aspects of the safety concept for technical and scientific use, we have identified three dimensions: severity of harm, probability and uncertainty. As noted in Section 3, our aim is a safety concept that is as close to the objective ideal as possible (in contrast to a subjective understanding of the concept). We are looking for a concept that distinguishes the feeling or belief of a person from the actual safety of a system or an activity. If there is a malfunction in the airplane engine, the safety of the system is lower than without the malfunction, whether anyone knows about it or not. Our concept must reflect this distinction. However, the goal of an objective concept is troublesome, as this section will show.

The first problem concerns the dimension of harm. As should be obvious from the discussion in Section 4, finding an objective, subject-independent criterion may not be possible. The reason for this is that harm is essentially value-laden. Even if we undoubtedly are able to compare a lot of harms, it is hard to understand what in general would be meant by saying that harm X is objectively worse than harm Y. For harms of the same type we may reasonably make this “objective” claim: for example, comparing a broken finger to a broken finger plus a broken leg, the latter is of course more severe. However, in a comparison between two accidents, one of which caused a few serious injuries and the other a larger number of less severe injuries, it might be far from clear which is the more severe accident. Claiming harm to be an objective quality thus faces difficulties on two levels. First, there is the question of measurement: in what unit are we to measure harm? Second, even if a unit of measurement could be agreed upon among candidates such as functional consequences or qualitative measures (such as subjective preferences, number of healthy years or simply Euros), some comparisons still seem undecided. It seems far-fetched to claim an objective status for the comparison of such different types of consequences as death and a broken arm.


Rational persons can disagree about the comparative severity of different harms and have no access to objective means by which their disagreement can be resolved. Therefore, the subjective element cannot be eliminated from assessments of the severity of harm. What we can often achieve, however, is an intersubjective assessment that is based on evaluative judgments that the vast majority of humans would agree on (after due consideration). An intersubjective notion differs from the intuition of objectivity in that it gives up the idea of finding a measure independent of human experiences and values, but at the same time it tries not to make the notion dependent on the beliefs or experiences of any specific individual. Attempts such as QALY (quality-adjusted life years) in medical ethics have been made to solve such issues of comparison [14]. As can be seen from studies of the QALY concept, there is a rather high degree of agreement on a wide range of such judgments. Another way of trying to reach an intersubjective measure is to assign monetary values to harms such as deaths and injuries, a method used in some variants of risk-benefit analysis.

It may be objected that in some engineering contexts harm may refer to something other than human harm, and that it may then be objectively measured. Such a ‘harm’ should perhaps rather be characterised as an ‘unwanted event’. For example, the breakdown of a computer operating system can perhaps be objectively measured using certain stress tests as an (objective) criterion. There are two ways of answering that objection. The first is that it remains an open question exactly which criteria to use, since there seems to be no objective way to settle this matter for any complex system of interest. Even if the ‘safety’ of a very simple switch may possibly be objective in the sense previously specified, this is uninteresting for safety engineering, since the applicability is too limited: most systems are too complex for value-independent criteria of the required type to be argued for. The second answer is to question whether any quality other than human harm may be the final aim of safety engineering. Most artefacts and systems, nuclear power plants as well as lawn mowers, are designed the way they are because of human safety. The functional safety and reliability of parts of the system are secondary to the primary aim of human safety. From a purely functional and economic perspective, safety levels would often be very different from today’s. That is not to say that usages other than human safety are irrelevant to safety engineering: obviously they are not. However, for the primary perspective of human safety there is no objective notion of degree of harm. The relevant notion is thus intersubjective rather than objective.

The second dimension to consider is probability. There is a well-established distinction in the philosophy of probability between subjective and objective interpretations [15]. According to the subjective interpretation, to say that the probability of a certain event is high means that the speaker’s degree of belief that the event in question will occur is high. The direct reference to individuals’ beliefs clearly prevents this interpretation from being a suitable candidate for our concept. According to the objective interpretation, on the other hand, probability is a property of the external world, e.g. the propensity of a coin to land heads up.
For a technological system to be describable in the theoretical models of objective probability, however, it has to be a very simple construction indeed. Some large systems may have parts where the case for an objective probability interpretation might be good, but in total it seems unlikely that many complex societal activities or systems can rely exclusively on such models, especially if they involve human action and decision-making. Even for technological systems with historically known failure frequencies, it may be a stretch to call the probabilities in question objective. In any event, in most cases when a safety analysis is called for, such frequency data are not available, except perhaps for certain parts of the system under investigation. Therefore, (‘objective’) frequency data will have to be supplemented, or perhaps even replaced, by expert judgment. Expert judgments of this nature are not, and should not be confused with, objective facts. Neither are they subjective probabilities in the classical sense, since they do not express (mere) degrees of belief. They are better described as subjective estimates of objective probabilities. However, what we aim at in a safety analysis is not a personal judgment but the best possible judgment that can be obtained from the community of experts. Therefore, this too is essentially an intersubjective judgment.

Finally, concerning the dimension of epistemic uncertainty, procedures for expressing and reporting uncertainties are much less developed than the corresponding procedures for probabilities [16]. However, the aim should be analogous to that of probability estimates, namely to obtain the best possible judgment that the community of experts can make on the extent and nature of the uncertainties involved. Objective knowledge about uncertainties is at least as difficult to attain as objective knowledge about probabilities.

In summary, an objective concept of safety is not attainable. On the other hand, we do not have to resort to an exclusively subjective safety concept that differs from person to person. The closest we can get to objectivity is a safety concept that is intersubjective in two important respects: (1) it is based on the comparative judgments of severity of harm that the majority of humans would agree on after careful consideration, and (2) it makes use of the best available expert judgments on the probabilities and uncertainties involved. This intersubjective concept of safety should be our main focus in technical and scientific applications.
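One very simple way to picture points (1) and (2) is the following sketch of pooling expert probability judgments into an intersubjective estimate; the numbers and the pooling rules (median point estimate, widest plausible bounds) are invented, and many other aggregation schemes are possible:

```python
# Illustrative sketch of an intersubjective estimate: pool several experts'
# probability judgments and uncertainty ranges rather than relying on any
# single individual's belief. Numbers and pooling rules are invented.

from statistics import median

expert_judgments = [
    # (point estimate, (lower bound, upper bound)) for a failure probability
    (2e-5, (1e-5, 5e-5)),
    (3e-5, (1e-5, 8e-5)),
    (1e-5, (5e-6, 4e-5)),
]

points = [point for point, _ in expert_judgments]
lows = [low for _, (low, _) in expert_judgments]
highs = [high for _, (_, high) in expert_judgments]

pooled_point = median(points)            # a "community" point estimate
pooled_range = (min(lows), max(highs))   # a cautious envelope of the uncertainty

print(f"pooled estimate: {pooled_point:.1e}")
print(f"pooled uncertainty range: {pooled_range[0]:.1e} .. {pooled_range[1]:.1e}")
```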

7 Conclusion

When analysing the concept of safety, it is paramount to go beyond the simple view of safety as the antonym of risk, even if risk is understood as including the two dimensions of harm and probability. An understanding of the relevance of epistemic uncertainty is of great importance when discussing safety and safety matters. The analysis in this paper has resulted in a three-dimensional notion of safety, identifying the dimensions of severity of harm, probability and epistemic uncertainty. The analysis supplies a framework important for a common understanding of human safety in engineering as well as in other areas. The exact form in which these three aspects of safety are to be combined has to be developed further for specific contexts, since a precise construal requires special attention to the subject matter at hand and the exact context in which one is working.
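As a minimal sketch only, and not the author’s formalism, such a framework might record all three dimensions explicitly while deliberately leaving their combination to the specific context:

```python
# Minimal sketch: a safety assessment that keeps severity of harm, probability
# and epistemic uncertainty explicit. How the three dimensions are combined is
# deliberately left open, as the text argues it must be settled per context.

from dataclasses import dataclass

@dataclass
class SafetyAssessment:
    harm_severity: str                  # intersubjective comparative judgment
    probability: float                  # best available expert estimate
    uncertainty: tuple[float, float]    # e.g. a probability range around it

glider_after_inspection = SafetyAssessment(
    harm_severity="fatal crash",
    probability=1 / 50_000,
    uncertainty=(1 / 80_000, 1 / 30_000),
)

print(glider_after_inspection)
```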


Only within a common framework, however, can a unified ground for decision-making be possible. On closer scrutiny, the intersubjective character of safety is revealed, for harm as well as for probability and epistemic uncertainty. An understanding of the limits of objectivity helps us focus on what may and may not be done in safety analysis, and points in the direction of further research.

References

[1] Electricité de France, Nuclear Power Plant Operating Safety Handbook, http://www-ns.iaea.org/tutorials/edf-op-safety/edfmaterial/edfhomepage.htm (23 March 2005).
[2] Moser, P., Philosophy After Objectivity, Oxford University Press: New York, 1993.
[3] Rorty, R., Philosophy and the Mirror of Nature, Princeton University Press: Princeton, 1979.
[4] Miller, C.O., System Safety, Human Factors in Aviation, eds. E.L. Wiener & D.C. Nagel, Academic Press: San Diego, pp. 53-80, 1988.
[5] Tench, W., Safety is no Accident, Collins: London, 1985.
[6] NRPB, In Terms of Risk: Report of a seminar to help define important terms used in communicating about risk to the public, 14(4), 2004.
[7] Möller, N., Hansson, S.O. & Peterson, M., Safety is more than the antonym of risk (submitted manuscript). Some ideas developed in the current paper, such as the standard notion of safety and the three safety dimensions, are based on parts of that article.
[8] Hansson, S.O., Philosophical Perspectives on Risk, Techné, 8(1), 2004.
[9] Cohen, B., Probabilistic Risk Analysis for a High-Level Radioactive Waste Repository, Risk Analysis, 23, pp. 909-915, 2003.
[10] Resnik, M., Choices: An Introduction to Decision Theory, University of Minnesota Press: Minneapolis, 1987.
[11] Slovic, P., Do Adolescent Smokers Know the Risks?, The Perception of Risk, Earthscan: London, pp. 364-371, 2000.
[12] Misumi, Y. & Sato, Y., Estimation of average hazardous-event-frequency for allocation of safety-integrity levels, Reliability Engineering & System Safety, 66(2), pp. 135-144, 1999.
[13] Ellsberg, D., Risk, ambiguity, and the Savage axioms, Quarterly Journal of Economics, 75, pp. 643-669, 1961.
[14] Nord, E., Cost-Value Analysis in Health Care: Making Sense out of QALYs, Cambridge University Press: Cambridge, 1999.
[15] Savage, L., The Foundations of Statistics (2nd rev. ed.), Dover: New York, 1972 [1954].
[16] Levin, R., Hansson, S.O. & Rudén, C., Indicators of Uncertainty in Chemical Risk Assessments, Regulatory Toxicology and Pharmacology, 39, pp. 33-43, 2004.
