OVERVIEW OF RISK ANALYSIS METHODS
M. MONTEAU ; M. FAVARO - INRS (Institut National de Recherche et de Sécurité - VANDŒUVRE - FRANCE).
Formerly published in two issues of the INRS "Cahiers de Notes Documentaires" no. 138 and no. 139, dated Q-1 and Q-2 1990. Translated on request by SCHLUMBERGER. Revised by M. FAVARO.
ABSTRACT
This risk analysis review is made up of six parts.
The first three parts review methods ranging from inspection and checking procedures to more complex ones designed mainly for the diagnosis of organisational risks. The increasing complexity of the methods used stems from the need to pinpoint risks which are increasingly random and difficult to detect.
The three following parts cover methods under the general heading "systems safety". In condensed form, they cover the essentials of this predominantly technical approach to occupational risk prevention. As well as a methodological and technical review of the subject, they address the questions raised by implementation, such as data reliability, probability evaluation and the notion of human reliability.
Key words : Risk Analysis/Accident Analysis/Occupational Accident/Systems Safety/Ergonomics/Human Factor/Work Organisation/Methods
INTRODUCTION
"A Priori" Vs "A Posteriori" Methods
We propose to review and examine the practices and methods whose objective is to identify the occupational accident risks before the accident occurs.
Such practices and methods are generally referred to as "a priori", in the sense that they permit a true prevention, that is to say, action before the accident, as opposed to methods of accident analysis that come into effect after the accident has already occurred, such methods being called "a posteriori" (Monteau, 1979).
In practice, this conventional "a priori/a posteriori" distinction is not as clear-cut ; it is quite certain that any preventive action taken after an accident should envisage not only preventing a repeat of the same event but also other, more-or-less comparable accidents (that is, those whose analysis would reveal factors in common with the accident that has just occurred).
We will nevertheless refer to "a priori" methods, in the absence of an accident, to designate the use of this knowledge to identify the risks at the origin of potential accidents or undesired events.
Risk identification is as much concerned with the most simple (and often the most accident-causing) work situations as those where the risks degenerate into catastrophe. The multiplicity of risks at the origin of any accident is such that the methods employed to get rid of them are themselves very diverse, ranging from the most empirical practices to procedures that are very complex.
Evidently, the task of discerning the risk of slipping at a work place will not use the same methods, nor will it require the same level of competence as, for instance, demonstrating the possibility of a dangerous dysfunction of a press or again, calculating the probability of explosion of a chemical reaction vessel.
From this point on, doesn't the essential problem posed by "a priori" risk diagnosis in the company consist of defining "who does what" as regards prevention ? Yes, but such a definition is without doubt only possible if we are conversant with the range of existing methods ; this is the objective of this paper.
As we have just emphasized, "a priori" risk diagnosis encompasses very varied practices, and the literature on this subject, itself very diverse, is dispersed at random among technical reviews, booklets specializing in a given risk, practical manuals and other works. The dissimilarity of these documents, their number and the occasional difficulty of access to them make it hardly possible to envisage establishing a catalogue of existing methods that includes an operational description of each of the procedures to be put into effect.
Classification Of Methods
On the other hand, it is more realistic to conceive of a way of classifying the collection of methods that allows one to place any unreviewed (or future) method in the proposed typology and to compare it with classified and analyzed examples.
In this manner, Figure 1, p. 21, proposes a relative position for each type of method, setting them out along two axes :
- The horizontal axis allows for classification of the methods according to their application in the system's design or in its use.
- The vertical axis corresponds to the principal domain of investigation of the methods reviewed. The "technical" pole is placed opposite to the "organizational" one, although these two aspects may be considered interdependent in certain cases.
This typology, which is itself worth dwelling on, is based on the dominant characteristics of each of the different approaches under review. Thus, for example, Figure 1 presents the inspections and checks as they apply "en masse" to the technical aspects of risk analysis, even though there may equally well exist regulatory requirements involving the company which consequently are the object of inspection.1
1 All tables and figures have been regrouped in 42 pages in Chapter IX.
The two previously defined axes demarcate four Quadrants, inside which the "a priori" risk analysis methods are distributed, however unequally. In fact we can note that the methods most in use today occupy Quadrants II and IV. Consequently, the paper will be dedicated to these.
The other Quadrants remain practically empty. However, there is every reason to believe that the methods shown there are the seed of tendencies that are destined to develop in future.
Firstly, referring to Quadrant III, one can expect a growth in research whose aim is a formal exposition of the technical knowhow related to the use of complex systems. These efforts at modelling knowledge have already led to the creation of expert systems used for diagnosis and inspection tasks. Certain expert systems are now designed to take on a more-and-more active role, in the management of chemical processes in particular. Such systems must be capable, in real time, of updating their knowledge base depending on the state of the process that they are assigned to control. These devices are of local importance to the "a priori" diagnosis of risk to the extent that they enable us to avoid, identify and control the dysfunctions that are prejudicial to safety.
Secondly (referring to Quadrant I), the development of simulation techniques enables us to respond to the concerns of installation designers, and in particular to workshop designers seeking to optimize the location and arrangement of machines and work stations as a function of a defined production objective. Recourse to methods of simulation provides the user with a representation of the objects circulating in the workshop ; the resources (machines, means of transport, people), stocks and queues. Simulation permits us to describe different scenarios of normal or degraded working conditions, as a function of the occurrence of undesirable events (machine breakdowns, bottlenecks etc.). Here again, a good "a priori" knowledge of the effects of these variances and their modes of optimum recovery is important to safety, in that it limits the subsequent risks of improvisation which are often at the root of accidents.
Whether it is a matter of expert systems or of simulation, both types of tools have until now found applications which fulfill the specific needs of users of existing or planned installations. They permit a growth in the reliability of processes and thus their safety, which can therefore turn out to be a fortunate consequence of having an efficient organization.
However it is still premature to estimate the impact of these techniques on the day-to-day risks at the origin of non-specific accidents, which in the case of the French chemical industry account for some 75 % to 80 % of occupational accidents.
I - INSPECTIONS AND CHECKS
I.1 - Notion Of Risk (In The Context Of Inspections And Checks)
Without a doubt, inspections and checks constitute the oldest procedures of "a priori" risk diagnosis. Their aim is clear ; it is a matter of discovering, in an existing work situation, and in relation to regulations or codes of practice, any omissions, anomalies or insufficiencies, in particular those involving technical devices, installations or operational modes.
The risk is equivalent to a lack of application of the rules likely to cause an accident (often very directly). In this case, the notion of risk is frequently very close to that of "danger", danger being by nature an entity incompatible with human presence, so that "injury will necessarily arise from its encounter with man" (Seillan, 1981).2
With the above perspective in mind, risk is defined as the possibility of an encounter between the human and the danger. This notion of risk follows naturally from the concept of an accident according to which the injury is the result of an encounter between the human and the object : machines, technical equipment, etc. (cf. for example, Skiba, 1972). This notion is observed, in particular, in "technical safety notes" of INRS ; for example in "Metal Rolling and Bending Machines" (Mauge, 1986), the risks are identified by their "appearance" (type of injury) and their "causes" (the danger), such as :
- Access to transmission parts.
- Contact with moving elements while dismounting parts.
- Wearing of loose clothing near the drive zone, etc.
The means of prevention are illustrated with regard to their causes and referred to the regulations. From then on, risk detection consists of verifying the presence of stipulated prevention methods and the risks discovered will then indeed be discrepancies from the regulatory norms.
According to this logic, the concern for improving prevention leads to a renewal and development of the regulatory aspects.
I.2 - Levels Of Application
Inspections and checks are by nature made in the context of visits which can apply to :
- A sector of activity.
- A firm or establishment.
- A service, a workshop.
- An installation, a machine or a work place.
- A particular risk.
2 This author emphasizes, moreover, that the "Code du Travail" (French Code of Labor Law) has long ignored the word "risk" : "the Code only recognized the word 'Danger'".
The following examples will permit us to verify that the procedures used are similar to each other in principle if not in form. At the start, the analyst must have at his disposal a reference in the form of a checklist or a questionnaire which reviews in a more-or-less detailed fashion the points that must be observed.
I.2.1 - Inspections And Checks Applied To A Sector Of Activity
Certain sectors of activity are subject to regulations which are numerous but widely dispersed and which can be grouped together for operational purposes in handy-to-use manuals.
The Accident Prevention Service of the Regional Health Insurance Fund of Aquitaine (CRAMA, undated) has in this way, for example, created a "Building Code" where the rules for hygiene and safety applicable to the work site are laid out. This manual has the special feature of directly specifying the prevention actions that must be carried out, if the case arises (104 in total). The task of the inspector is not to look for solutions but to identify their possible absence (Table I, p. 3).
When the objective is not solely the detection of weak links but also the development of a certain level of safety, it is possible to construct an observation grid which will allow establishing a "safety profile" for the company or the work site.
Durand et al. (1986) have proposed in this manner a "work site quality grid" composed of 46 questions concerning the general state of a given work site (27 questions (Q)), the excavation (6Q), steelwork (4Q), concrete (4Q) and infilling work (5Q). Each question has two, three or four possible answers (Table II, p. 4), from which a total observed score is calculated ; this is compared (in %) with the ideal score. This way of scoring is certainly subject to the methodological criticisms already formulated by Faverge (1967a), but we can equally well see in it a proposal for a Safety Index which is of interest to Safety Management.3
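The scoring principle described above can be sketched as follows. The question themes and point values below are invented for illustration (the actual grid of Durand et al. is not reproduced here) ; only the calculation matters : sum the scores observed on site, then express the total as a percentage of the ideal (maximum) score.

```python
# Hypothetical subset of the 46-question grid.
# Each question is a pair: (maximum possible score, score observed on site).
grid = {
    "general_state": [(3, 2), (3, 3), (2, 1)],  # illustrative stand-ins for the 27 questions
    "excavation":    [(2, 2), (3, 1)],          # illustrative stand-ins for the 6 questions
    "steelwork":     [(3, 3)],                  # illustrative stand-in for the 4 questions
}

def safety_profile(grid):
    """Observed score expressed as a percentage of the ideal score."""
    ideal = sum(mx for questions in grid.values() for mx, _ in questions)
    observed = sum(obs for questions in grid.values() for _, obs in questions)
    return 100.0 * observed / ideal

print(f"Safety profile: {safety_profile(grid):.1f} % of the ideal score")
```

The percentage gives the "safety profile" that can be tracked over time or compared between work sites, subject to the reservations about indicator reliability and congruence noted by Faverge.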
I.2.2 - Inspections And Checks Applied To The Company
The inspections and checks related to the company can be carried out by outside inspectors, but above all they correspond to one of the objectives laid out for the establishment's Health Safety and Working Conditions Committee (HSC) : "the committee proceeds to analyze the professional risks to which the employees can be exposed, just like the analysis of the working conditions" (French Law of 23.12.1983). To this end, the HSC in particular carries out periodic inspections : the examination of workstations and workshops and also any aspects of interest to the firm as a whole. At this level, the committee can consider drawing up the following three forms :
3 According to Faverge, an indicator should possess essentially two qualities : reliability and congruence (adequacy for the object being measured).
- Examination of risks common to different workshops.
- Identification of zones or workshops that should become the object of deeper studies.
- A comment on the capacity of the firm to take charge of its own safety.
- The first form consists of a review and examination of the risks common to the different workshops which make up the firm and which merit a comprehensive study.
In practice, this review is often carried out in an empirical fashion, the experience of the members of the HSC compensating for any absence of method. However, observation of the defects in this approach (Monteau, 1979a) prompts us to implement more rigorous practices. So, there exist certain outlines, such as those proposed in the "Safety Officer's Guide" published by CRAM Center-West (CRAMCO, before 1979), where mention is made of the type of information to be collected during visits to the company (Table III, p. 5). These subjects of investigation are likely to be developed in the form of a questionnaire ; Thony et al. (1986), for example, distinguish between six possible risk categories, common to the company :
- Fire risks (discovery of danger zones, vulnerable sectors, cases of possible fires).
- Electrical risks (identification of materials and equipment needing specific prevention measures).
- Risks associated with movement of men and machines (pile drivers, obstacles, state of the ground, lighting).
- Risks linked to handling, stocking of products and risks of pollution.
Each category is the subject of a form, which is presented as a "closed" questionnaire.
The questions enable us to make an inventory (of storage areas, for example) then to discover anomalies (excessive height, bad balance, overloading etc.) considered as being so many specific risks calling for safety measures.
- The second form for possible HSC action at the company level consists of identifying zones (workshops, sets of work stations) whose level of risk justifies a specific examination. These priorities are easily unearthed by examining the geographic spread of incidents or accidents - that is, "a posteriori" - with the help of a plan of the establishment on which there is a precise indication of the place where the undesirable events have occurred, or by publishing the results achieved by workshop or sector (frequency rates of reported accidents, for example).
An "a priori" risk analysis is moreover better targeted when it is based upon "a posteriori" results. In fact, experience shows that, as a general rule, the risk of accident is very unequally distributed in a given establishment (Monteau, 1983).
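This prioritisation of zones by their accident record can be sketched as follows. The workshop names and figures are invented ; the frequency-rate formula used (lost-time accidents per million hours worked) is the usual convention for the French "taux de fréquence", assumed here as the ranking indicator.

```python
# Illustrative accident records per workshop (invented data).
workshops = {
    "assembly": {"accidents": 14, "hours_worked": 210_000},
    "paint":    {"accidents": 3,  "hours_worked": 95_000},
    "press":    {"accidents": 9,  "hours_worked": 80_000},
}

def frequency_rate(accidents, hours_worked):
    """Lost-time accidents per million hours worked (taux de frequence)."""
    return accidents * 1_000_000 / hours_worked

# Rank workshops from highest to lowest frequency rate: the top of the
# list designates the zones justifying a specific "a priori" examination.
ranking = sorted(workshops, key=lambda w: frequency_rate(**workshops[w]), reverse=True)
for name in ranking:
    print(f"{name:10s} Tf = {frequency_rate(**workshops[name]):6.1f}")
```

Note that the workshop with the most accidents in absolute terms is not necessarily the one with the highest rate ; normalising by hours worked is what makes the comparison between zones of different sizes meaningful.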
- The third form is concerned with the capability of the company to take care of its own safety.
Even if we are well aware that risk detection is not an end in itself, we can nevertheless fear that certain visits end merely with a list, whose use for the purpose of prevention remains a dead letter.
The control function, then, is equally concerned with the capacity of the enterprise to transform the acquired knowledge into action. In this context, an attentive review of the HSC's activities, and in particular those of the Safety Department, can have a forecasting value for the future of recommended or projected actions. Thony (op. cit.), for example, proposes an evaluation grid for the Safety Organization, "to make a global assessment of the risks to be averted and the steps to be undertaken to establish priorities, with a view to deciding on an action programme" (Table IV, p. 5).
Finally, the collection of general information about the firm (type of activity, safety results, history) even if it doesn't lead directly to the identification of risks, gives, on the other hand, a knowledge of their context (technical, economic and social) which often turns out to be one of the fundamental determinants of the company's level of risk. For example, Dogniaux (1978) has been able to demonstrate the influence of the social climate, and of certain cultural aspects of the firm, on the likelihood of accidents occurring.
I.2.3 - Inspections And Checks Applied At The Service Or Workshop Level
The implementation of inspections and checks involving a service or workshop can rest upon the same procedures as those adopted at the preceding level, that is :
- the identification of risks common to the set of work stations that make up the workshop,
- the location/discovery of geographical zones, activities or work stations that deserve a more thorough investigation at a later date.
One can then use analysis grids similar to those intended for the review of the company (Thony, op.cit.). However, the service, workshop or sector constitutes, in general, an entity that is sufficiently specific and homogeneous, as far as the risks encountered are concerned, to allow introduction at this level of some measures of the physical surroundings (noise, heat, lighting, etc.). These aspects are the subjects of abundant literature but their study can raise practical difficulties insofar as small firms, in particular, do not have the material resources essential to deal with them.
On the other hand, the identification of the geographic zones or work stations that produce most of the accidents continues to be easily attainable if, on top of it all, one records systematically the location of the undesirable events that have occurred in the workshop concerned.
I.2.4 - Inspections And Checks Applied At The Installation, Machine, Or Work Station Level
After selecting the installations, machines or work stations that produce most of the accidents, the analyst can set up an inventory of risks with the help of grids or questionnaires, the most complete of which is without doubt that published by Renault. "The Ergonomic Memorandum" (RNUR, 1983), is composed of three parts, the first being devoted to "questions related to the risk of accidents" (risk of injury and falls to the person, falling objects, risks of mechanical origin, risk of cutting or puncturing through handling, etc.). These questions allow one to proceed to a detailed analysis of the work place "by examining closely the physical characteristics of the work station that are directly measurable or observable". The "Observation Guide" consists of 24 pages of which one third is concerned with accident risks, the two remaining parts dealing with problems that are more strictly ergonomic : the architecture and layout of the work place, and its physical environment.
The above procedure has the advantage of being applicable to any installation, machine or work station ; however its systematic application in a given company can appear to be tedious. Some practitioners, then, recommend using questionnaires that are more limited in scope but also more or less superficial (Boisselier and Boué, 1980 ; Lefebvre, 1986).
The level of detail analysed can vary to a considerable extent, as shown by the two examples presented in Table V, pp. 6-7. On this point, Damel (1967) appears to provide an extreme example, this being a kind of pitfall which is always possible : the author proposes a questionnaire for examining a sluice project which is made up of 27 questions offering 70 modes of response. As a result, when a complex installation is involved, the inventory of known risks and their precise description can give rise to voluminous technical manuals. Take for example "Safety in the Paper Industries" (INRS pub.) ; starting from a detailed study of the installations and processes, and systematically extracting from these the lessons to be learned from real or plausible accidents, the authors pinpoint the elements in the installations calling for preventive measures. This example shows, moreover, that the knowledge used for an "a priori" analysis of such risks derives to a large extent from accident analyses (by way of illustration, some 56 accidents are analyzed in the volume "Sécheries" of "Safety in the Paper Industries").
In many cases, then, the practitioner has available certain aids (grids, questionnaires, technical manuals) that allow making an "a priori" risk diagnosis through inspections and checks at the place of work or on the installations. These aids differ essentially as to the level of detail of the risks under examination ; which presents the problem of choosing one adapted to the need.
We can then conclude that the pertinent choice depends above all on the level of safety already attained by the unit under examination. Faced with glaringly dangerous situations, a simple aid turns out to be sufficient, at least in the first instance. But, as the situation improves, inspections and checks will have to be more and more precise and exhaustive. However, this progression doubtless has its limits as to feasibility and efficiency ; in effect, it can appear paradoxical to devote more and more time and effort to examining less and less dangerous situations (the example mentioned in the following paragraph is an illustration of this). For all that, this diminishing return does not condemn this type of investigation ; rather, it determines its practical limits.
I.2.5 - Inspections And Checks Related To The "A Priori" Diagnosis Of A Particular Risk
Many activities possess the peculiarity of having certain dominant risks associated with them : risks of falling in the building trade, risks of being cut when making cold laminations, etc. In these cases, it appears quite justifiable to concentrate first of all on the predominant risk, out of concern for efficiency. As we have seen previously, the investigation is equally improved by being focused on the most affected sector of the workshop. This double limitation makes the analyst's or expert's task easier. By way of example, Table VI, p. 8, reviews the "a priori" diagnosis of the purely mechanical risks carried out in three sectors of an automobile factory (Mougeot and Diné, 1980). The mechanical risks are defined as "the resultants of mechanical energy potentials that can exist between a person and fixed or mobile structures". The analysts first draw the production line on a setup diagram (1/100 or 1/200 scale) on which all observation points are indicated. All material elements are taken into account, whether they be fixed, mobile or non-mobile, even if no particular observation can be made about them. This review of possible risks is carried out in the course of a meticulous visit to the workshops (two experts, three days of observations) and gives rise to a Table consisting of five columns :
- One column "Review", specifying the order of inspection for the materials.
- One column "Designation" of the materials.
- One column "Observations".
- One column "Risks".
- One column "Prevention Measures".
At the conclusion of the visit, the analysts state that "in the Risks column, empty spaces are practically always the rule". The authors do not conclude, for all that, that the risk of accident is absent.
With examples to support them, the analysts prove in effect that it is relatively easy to imagine the actions of operators liable to bring about accidents (recovery from incidents, dangerous handling, etc.). But, in the same breath, these examples bear witness to the fact that in work units where there is concern for safety, obvious risks become rare. Some risks still remain, often described as "undefined", these are short-lived and transitory, and inspections and checks detect them poorly.
"We must remark that these "indefinite" risks bring most often into play elements which do not possess any significant potential energy, but which rather result from the movement of people, stacking up of parts in boxes, falling equipment, etc. They result, then, from circumstantial situations which cannot be taken into consideration during a systematic screening but which could become apparent through the analysis of accidents which occurred in these, or in analogous, conditions" (Mougeot and Diné, op. cit.).
I.3 - Management Of Inspections And Checks
In practice, inspections and checks rely essentially upon the use of an arsenal of regulations, whether these be legal dispositions (Code of Labor Law), decrees and decisions, or Standards and Codes of Practice.
These modes of action take on different forms according to the function and objective sought by the analyst. They are carried out either by intervenors from outside the firm (work inspectors, CRAM Accident Prevention Service, approved organizations) or by internal authorities (Safety Service, HSC, etc.).
Many periodic technical inspections are entrusted to approved organizations but when the firm reaches a certain size, the question of inspections can become sufficiently complex to pose a problem of planning and implementation for the services responsible for them. To this end, the (French) National Powder and Explosives Company has conceived a computerized safety inspection system (ISAO) (Didier, 1965). This system is essentially made up of files in which the following items are listed, with concern for completeness :
- The safety rules applicable to the firm concerned.
- The points of application of these rules (equipment to be inspected, procedures to be established, etc.).
- The frequency of these different operations.
This ISAO system was put in place in 1982 in a fireworks firm employing 1,750 people. In this case, it implemented 4,678 "safety rules/points of application" pairings and brought into action some 193 persons charged with application. Along with the Safety Department, these people constitute a real and integrated safety network.
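The ISAO principle described above (pairings of safety rules with points of application, each carrying an inspection frequency) can be sketched as follows. The rule texts, equipment names, dates and periods are all invented for illustration ; the actual ISAO implementation is not documented here, only the scheduling idea.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Check:
    rule: str        # safety rule to verify (invented example text)
    point: str       # point of application: equipment, procedure, zone...
    every_days: int  # inspection frequency
    last_done: date  # date of the last inspection

    def due(self, today: date) -> bool:
        """A check is due once its inspection period has elapsed."""
        return today - self.last_done >= timedelta(days=self.every_days)

# Illustrative register of "safety rule / point of application" pairings.
register = [
    Check("Earthing of metal structures", "Mixing shed B", 90, date(2025, 1, 10)),
    Check("Fire-extinguisher pressure", "Workshop 3", 30, date(2025, 2, 20)),
    Check("Emergency-stop test", "Press line 2", 7, date(2025, 3, 25)),
]

today = date(2025, 4, 5)
due_now = [c for c in register if c.due(today)]
for c in due_now:
    print(f"DUE: {c.rule} @ {c.point}")
```

Scaled up to the 4,678 pairings cited above, such a register becomes the planning backbone of the internal safety network : each of the persons charged with application receives the checks falling due in their sector for the coming period.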
This kind of procedure links up well with one of the forms of Safety Management. More than that, it should equally well allow outside inspectors to make an easier check of the regulatory side of large companies. However, "for the simplest cases a good planning chart and the desire to make use of it could be sufficient" (HO, 1985).
I.4 - Value And Limitations Of Inspections And Checks
As far as technical equipment is concerned, the efficiency of the "a priori" risk diagnosis, in terms of inspections and checks, does not need any further demonstration and the lack of accidents of solely material origin confirms this. On the other hand, it seems that inspections and checks achieve their maximum efficiency in two extreme cases :
- When it is a question of rapidly reducing an excessively high level of risk.
- When it is a question of maintaining a low level of risk.
In the first case, inspections and checks apply to work situations that are very shaky from the safety point of view. The risks are obvious and characterized by visible discrepancies with respect to regulations or standards.4
In the second case, the objective is to avoid any insidious drift in the functioning of a process, an installation or indeed the organization style in use (drift being to the organization what wear is to material). This concept of prevention appears to be predominant in the large U.K./U.S. corporations. In this manner, the good safety results obtained by high-risk firms (petrochemicals, for example) are doubtless greatly indebted to the institution of quasi-permanent internal technical audits (Walters, 1983).
One must, however, note here that this type of practice represents only one part of a prevention policy (not taken into account in the first case). A prevention policy is effectively characterized first and foremost by detailed and very strict (and consequently controllable) definition of the tasks, requirements and responsibilities at all levels. This is the price to be paid for the feasibility and efficiency of internal inspections. According to Gilardi and Tarondeau (1987), this rigor is evident in highly automated systems where it becomes necessary to define production tasks with progressively greater precision.
This evolution would not follow from a "command philosophy" but rather from the requirements of the material (being produced), from which one cannot hope to obtain full value except by strictly following the procedures for start-up and use.
If these two extreme situations are indeed observed, there also exists a range of intermediate ones characterizing companies where the most obvious risks have disappeared but where sporadic and fleeting ones do emerge. In such situations, internal inspection visits progressively lose their initial efficiency without being able to be replaced by internal controls whose deployment implies sufficiently structured tasks.
As we will see in the next part, the work station survey will seek precisely that, namely, to structure the tasks to be carried out ; but only the ergonomic approach will be able to get hold of the transient risks.
II - SOCIO-TECHNICAL CONCEPTS AND METHODS
II.1 - From Scientific Work Organization To Ergonomics
With the dominance of Taylorism, the systematic study of the work station develops essentially into a process of rationalizing the work. From their viewpoint, the "preventers" discover in the occupational accident statistics a major reason to focus their efforts on the study of the work stations. "If the number of potential accidents are classified according to their technological causes, we note that 60 to 80 % of these accidents stem from causes classified as Physical Elements 01 through 04"5. These statements show that for the bulk of accidents that do occur, the environment of the work station, and the work station itself, are involved (CRAMCO, op. cit.).
4 However, one can note with Lievens (1976) that the normative reference sometimes constitutes an obstacle to technical progress : for example, according to this author, "too strict a regard for the previous regulations would have condemned the development of modern aviation".
The effort of the organizers, in the perspective of work rationalization, is going to be translated into the development of a prolific algebra of elementary operations destined to simplify and formalize the tasks to be carried out. The most efficient movements must also conform to a certain number of "rules of economy" and the work stations must be laid out ergonomically. Safety is not ignored, but rather one takes the view that, to the degree that the machine is reliable and production conditions stable, the most efficient movement is also the safest ; in short, the prescribed movement incorporates safety.
Certain observations (private documents) appear to reinforce this point of view. For example, of more than 5,000 accidents that occurred in 1973 in an automobile manufacturing plant which was very "Taylorized" at that time, we note that :
- 80 % of the accidents happened at the work station,
- 70 % of the accidents involved activities that were not timed.
Faverge (1976b) also quotes statistical results obtained in a transformer factory where 60 % of the accidents occurred "outside the usual working circumstances, at moments that occupied about 5 % of the operator's time".
Such observations lead practitioners to conclude that the risk of an accident is lessened in proportion to the degree in which the real work approaches the prescribed formal task. Prevention then consists of reducing this possible discrepancy. However, as to the way of achieving this objective, points of view and practices are going to evolve. So, we can distinguish in succession two kinds of "a priori" approach to the risks :
1. Analysis of risks in terms of "dangerous conditions and actions",
2. The ergonomic approach to the man-machine system.
II.2 - Analysis In Terms Of Dangerous Conditions And Actions
II.2.1 - Origin Of This Type Of Analysis
During the decade of the fifties, the accident analysis diagram proposed by Heinrich (1950), then made widespread by Lateiner (Monteau, 1979b), became predominantly used in firms, in such a way that risks are analyzed in the same way, along the same grid : dangerous actions correspond to dangerous conditions (see Table VII, p. 9).
501 : Location of the work and surface areas of circulation (case of accidents occurring on a given level) 02 : Location of the work and surface areas of circulation (case of accidents comprising a fall with a difference in level) 03 : Objects in course of handling 04 : Objects in course of manual transportation.
Theoretically, the "dangerous action" has the status of an intermediate variable ; it can be the consequence of a "dangerous condition" or of a "human deficiency", and it has three possible causes :
- "A lack of knowledge of the work or ignorance of a non-hazardous method of work",
- "Poor attitude",
- "A deficiency or lack of adaptation that is physical, intellectual or mental".
In practice, the feeling that real progress has been made in the technical field has motivated prevention people to concentrate their attention on the "human factor" from now on. "We know that the discovery and elimination of risks caused by dangerous conditions is relatively easy and it is certainly efficient. Technical prevention methods make progress every day because, in this case, each risk eliminated is done away with for good. The same is not true for dangerous acts which are risks taken by the person at work" (Rousseaux, 1965).
In the field, it is certainly the supervisory staff who must unearth the risks with the help of a checklist which the company may adapt to its needs (Table VII, p. 9). Detection can take the form of instantaneous observations (Table VIII, p. 10) or of "campaigns" limited to a specific risk (handling or the wearing of individual protection, in particular).
II.2.2 - Topicality Of This Type Of Analysis
The interest accorded to this type of analysis is not solely historical, far from it. Three observations confirm the topicality of the subject :
1. Experience shows that today's "a priori" risk diagnostic practices remain widely underlain by dichotomies ; for example, "human factors/technical factors", "dangerous actions/dangerous conditions". For an example, see "Method of Instantaneous Observations with User Forms", published by the (French) Association for Accident Prevention and Improvement of Working Conditions (APACT, 1976).
2. Starting from the list of dangerous actions we can affirm that the risk (possibility of an encounter between the person and the danger) is connected with a discrepancy (judged unacceptable) between the observed behavior and the stipulated or "virtual" procedure. This behavior can result from an inadequate or dangerous fitting out of the work place (failure external to the person) or from human deficiency (internal failure). Today, it is as if certain practitioners had in some way transferred the definition of "dangerous action" to that of "human error", as the following definition given by Nicolet and Celier (1985) demonstrates :
"Human error is :
- A behavior which exceeds acceptable limits.
- A discrepancy between what has been done/perceived/understood and what should have been done".
An analogy exists here, not only in the definition, but also in certain of the practical implications, because if it is true that these authors recommend the tuning of the interface between the person and the system, they do no less than rehabilitate personnel selection6 with the aid of psychometric tests whose efficiency had, however, been judged doubtful until then : "In spite of the progress achieved in the last two decades in matters of selection, much remains to be done in order to be in a position to guarantee, for a given "person/job" pairing, the required level of reliability. In this, there lies a whole field of investigation for human reliability".
On the other hand, even if Leplat (1985) notes that individual characteristics play a role in generating errors "so evident that sometimes one wants to impute the errors to these characteristics alone", as regards personality testing he emphasizes that human errors "and even more, accidents" are very varied in origin and they can be related to activities of very different types, in such a way that the same character trait can sometimes have positive effects and sometimes negative ones.
Complex tasks in particular can be carried out correctly with various different procedures, from which we can surmise that the relationships involving such and such a personality trait are not constant.
For all that, the debate on the role of individual factors in risk generation is far from being closed : "Few studies, if any, have attempted to find a direct connection between risk-taking behavior and one or more individual characteristics" (Cuny, 1978a). As regards these individual aspects, then, risk diagnosis reflects the convictions of the analyst more than a reference to a properly established reality.
3. The classification of dangerous actions (Table VII, p. 9) which is apparently still compiled in an empirical fashion, is at the origin of numerous checklists of the same type.
Ramsey and coll. (1986), for example, establish a taxonomy of dangerous behavior based upon about 18,000 observations taken during 14 months of investigations in a metal fabricating firm (at a rate of 60 instantaneous observations per day). Those authors observe that 10 % of the behavior is dangerous and that this is attributable to the operator himself (or herself) (73 %), to the use of tools, material or equipment (22 %) or to the use of handling equipment (5 %) (Table IX, p. 11). According to these authors, this approach demonstrates the feasibility of "a priori" safety measures.
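As a rough illustration of how such instantaneous-observation data can be tallied, the sketch below computes the share of dangerous behaviors and their attribution. The category names and counts are invented for the example; they only echo the proportions reported above and are not Ramsey's actual data.

```python
from collections import Counter

# Hypothetical instantaneous observations: each entry is (dangerous?, attribution).
# The attribution labels and counts are invented for illustration.
observations = (
    [(False, None)] * 900
    + [(True, "operator")] * 73
    + [(True, "tools/material/equipment")] * 22
    + [(True, "handling equipment")] * 5
)

dangerous = [attr for is_dangerous, attr in observations if is_dangerous]
share_dangerous = len(dangerous) / len(observations)
by_attribution = Counter(dangerous)

print(f"dangerous behavior: {share_dangerous:.0%} of observations")
for attribution, count in by_attribution.most_common():
    print(f"  {attribution}: {count / len(dangerous):.0%}")
```

With these invented counts the tally reproduces the 10 % / 73 % / 22 % / 5 % breakdown quoted in the text.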
A comparison of Tables VII, p. 9 and IX, p. 11, shows that the list of dangerous behavior established by Ramsey and coll., using safe behavior as a reference, is very close to the one made by Heinrich thirty years before.
6On this subject, de Montmollin (1972) declared : "it is necessary then, in the majority of cases, to abandon personality testing" and more recently Goguelin (1987) observes the development of selection by co-option in companies and sees in expert systems a tool capable of perfecting recruitment techniques.
Thus it could be concluded that a good number of "dangerous behaviors" have been listed for some time now and that the problem lies rather with the clarification of their "raison d'être". Besides, according to Faverge (1982) "the adoption of the criterion of a safe behavior (in order to define, by comparison, a dangerous behavior) has more to do with attitude to warning signs and discipline than with safety itself ; it even penalizes the experienced person who knows how to measure risks and to differentiate between them".
II.3 - Implementation Of The Ergonomic Approach
II.3.1 - Safety And Ergonomics
Up to this point, we have seen that in order to judge the (possibly dangerous) discrepancy between the real job and the stipulated one, the analyst was able to adopt different references :
- Regulations, standards.
- The "state of the art".
- The operating mode as defined by the organizer.
- Safe behavior.
On this subject, HO (1974) observes that "recourse to a reference is unavoidably necessary and establishing a reference and developing it indeed are the fundamental problems in the field of accident prevention".
In the first place, the recording of a discrepancy with respect to a norm or reference can create a "normative" attitude in the observer which is motivated, if the case arises, by the critical character and urgency of the situation. However, Ombredane and Faverge (1955) considered, on a more general plane, that the analyst commits "a fundamental error" when "taking a normative point of view, he sets out what the operator must do or is supposed to do instead of what he really is doing". In short, do we not risk stopping ourselves from discovering what is, in the name of what should be ?
So, with the ergonomic perspective under discussion here, a dangerous behavior is first considered to be the symptom of the lack of adaptation of the man/machine pairing, whose origins may be found in the defective arrangement of the work station, production demands that are not compatible with safety, organizational problems, etc. Inversely, the man/machine system functions normally when its behavior conforms to that expected of it, that is, when it guarantees the required production without harming the integrity of its component parts, namely :
- The human (absence of accidents or excessive tiredness, stress)
- The technical (absence of incidents, breakdowns, breakages).
One sees then that safety is only one of the objectives of the ergonomic approach, and that the adaptation of the work station to the person - a prerequisite for safety - implies taking into account technical, organizational and ambient factors, etc. Moreover, in a great many cases, profound accident analysis demonstrates the accident-producing nature of these factors (CECA, 1967 ; Monteau, 1973 ; De Keyser, 1979 ; Moyen and coll., 1980).
II.3.2 - Industrial Practices
The field of "a priori" risk analysis is, then, considerably enlarged. If this enlargement does not seem to raise any serious problem of feasibility in research work, the same is not true for common industrial practice. Here, a technique is viable only to the extent that its yield is judged satisfactory : when all the improvements obtained are in keeping with the time (hence the cost) devoted to their application. In practice, though, it is always very difficult to estimate the improvement in safety - the accidents avoided - that "a priori" action can bring.7
In this way, all the "a priori" analyses used in companies seek to optimize the cost/benefit ratio by various means and more or less explicitly. Without pretending to be complete, we can list the following seven processes :
1. Standardize to the maximum the work station analysis.
2. Provide a minimum (written) guide.
3. Extend the importance of the analysis by using it as a training aid.
4. Select the problematic work stations.
5. Start from an existing accident analysis.
6. Apply general rules in order to work out the risks from them.
7. Draw up and have at one's disposal a checklist.
Standardize To The Maximum The Job Analysis
Appearing for the first time in 1974, Renault's "Ergonomic Memorandum" (RNUR, op. cit.) corresponds quite well to this option. As Lucas indicates in the preface, the aim is to put at the disposal of the users "a method of fault diagnosis in matters of safety and working conditions and a guide which indicates the rules and standards that are essential to eliminate these anomalies, whether at the design stage or, if it is too late for that, the implementation stage".
The importance effectively given to rules and standards allows one to use the first part of this guide in the context of inspection visits. In contrast, the second part, dedicated "to the architecture and layout of the work station" already presupposes that an analysis of the work station has been made that takes into account the operator's activity and the characteristics of the means of information and intervention at his disposal. Lastly, the third part (questions related to the job environment) requires making measurements of the physical characteristics of the work station environment (noise, heat, pollution).
Applied in its entirety, this method does appear to be tedious despite the extreme degree of standardization. However it is still quite suitable for use on jobs which are well defined and unlikely to develop rapidly.
7This improvement can be estimated by the reduction in accident probability, as calculated with systems safety methods.
Provision Of A Minimum Guide
In this case, the objective is to provide the analyst with a guide accompanied by a questionnaire whose application should be able to guarantee a good relationship between the time devoted to the analysis and the importance of the knowledge derived.
APACT (op. cit.), for example, has drawn up a work-study guide which is sufficiently simple to be used in common by many companies (Table X, p. 12).
The questionnaire that accompanies it provides a lot of space for the opinions of those involved (supervisor, operator). In other respects, the guide justifiably emphasizes the need to arrive at some concrete conclusions, an essential point which gives the process credibility with the personnel concerned.
Extending The Significance Of The Analysis By Using It As A Training Aid
The significance of a work station analysis is notably increased when one can equally well make use of its lessons for personnel training. The example presented in Table XI, p. 13, is an illustration of this. The work analysis form, established by working groups composed of foremen, technicians and operators, details the frequency of operations, the risks encountered, and their origin, in regard to which the techniques and means of prevention to put into effect are listed.
Selecting The Problematic Work Stations
Since they appeared some ten years ago, "Evaluation grid for working conditions" procedures can be used to compare the work stations with each other, to select rapidly the most critical ones or to take a "before and after" measure of the modifications stemming from a rearrangement.
The best known grids are those developed by the (French) Laboratory of Work Economy and Sociology (Guelaud, coll., 1975), by Renault and SAVIEM (AVISEM, 1977).
These grids are all based upon the same principle : namely, of observing certain elements of the work conditions that are considered to be determinant factors. In this regard, the above-mentioned grids are very similar. The information gathered on each element of work serves as a base for the evaluation with the aid of a ranking system composed of 10 (Guelaud, coll. op. cit.) or 5 levels (AVISEM, op. cit.) which allows for placing each observation on a scale whose extremes range from "satisfactory" to "serious nuisance". The operators' opinions are not directly taken into account. These procedures have the advantage of obliging the analyst to examine a large enough set of factors which determine the working conditions. In this way they avoid giving priority to elements that are most easily modified and avoid ignoring aspects sometimes considered as inevitable "a priori".
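The principle of such a grid can be sketched in a few lines of code. The element names, the 5-level scale cut-off, and the ratings below are invented for illustration and do not reproduce any of the grids cited:

```python
# Hypothetical working-conditions grid: each work station is rated on a
# 5-level scale (1 = satisfactory ... 5 = serious nuisance) for each element.
ratings = {
    "press P1":   {"layout": 2, "physical load": 4, "noise": 5, "heat": 3, "time pressure": 2},
    "lathe L3":   {"layout": 1, "physical load": 2, "noise": 3, "heat": 2, "time pressure": 3},
    "welding W2": {"layout": 3, "physical load": 3, "noise": 4, "heat": 5, "time pressure": 4},
}

def critical_stations(ratings, threshold=4):
    """Select stations with at least one element rated at or above the threshold."""
    return {
        station: [e for e, level in scores.items() if level >= threshold]
        for station, scores in ratings.items()
        if any(level >= threshold for level in scores.values())
    }

print(critical_stations(ratings))
```

Comparing stations on the same scale is what allows the rapid "before and after" use mentioned above; the cost, as Montmollin notes, is that a single number per element can mask the critical components of the work.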
According to Piganiol (1978) : "the use of an analysis grid for working conditions allows one to go beyond partial and partisan views and can serve as a base for social dialogue with those persons affected and their representative authorities". On the other hand, this author emphasizes that the grids cited present "several fundamental defects which make other types of analysis necessary" : a conventional choice of elements to observe, a rigid ranking system corresponding to the stable reactions of an average individual, absence of measurement of the effects of disturbances upon the individual... With greater hindsight, Montmollin (1986) considers that it is a matter of procedures which are "often very useful in practice, account being taken of the speed of implementation" but adds that "their necessarily "speedy" character and their reliance on subjective appreciations alone do not permit these grids to replace an analysis and they can sometimes even mask the critical components of the work".
It is without a doubt difficult to estimate the impact of these techniques in industry but one can observe that they are still generally put forward as practical examples in a sector such as the construction industry (OPPBTP, 1985).
Starting From An Accident Analysis
The application of the preceding techniques is easily conceived of for work stations that are, if not fixed in position, at least geographically limited, and where the activity of the operator is fairly repetitive. So, the grids let the work of the adjuster, the maintenance man or the supervisor escape through their meshes.
While on this point, moreover, we observe that the traditional idea of a work station - a man, a machine - is on the way out in a number of companies. In consequence, "a priori" risk analysis must find points of reference other than the work station alone, for instance a series of operations that involve the transformation of a product. To do this, Dumaine (1977, 1986) has developed a "mixed" procedure (HO, 1976) ; this is "mixed" in the sense that it starts with an "a posteriori" analysis of incidents or accidents, but ends up nonetheless with an "a priori" identification of potential accident factors.
This process is founded on a concept which is very fruitful in practice, although debatable on the theoretical plane (Leplat, 1982) ; a concept according to which incidents, near-misses and accidents are the result of very comparable processes and factors, the possibility of an injury or its gravity being practically random.8 Thus, according to Dumaine (1977) : "very often, incidents are only serious because of their conceivable consequences". In concrete terms, on the basis of the analysis of a "benign" accident (that is, an incident), the working group, having already established a real causal tree, makes an effort to complete it by grafting onto it some imaginable "anomalies" which would have aggravated the material or human consequences ; this new causal tree then becomes the "worst-case" scenario. The new facts taken into account (the dysfunctions) are gathered from among the personnel on the basis of their working experience ; this gives a lot of realism to the process and consequently bestows a strong credibility on this type of analysis.
8According to Leplat (1982) there are no grounds for assimilating the incident into the accident by considering the incident as a particular mode of the accident (an accident without injury), and so "certain types of accidents reveal dysfunctions which are also expressed by incidents, but not all incidents reveal dysfunctions that lead to accidents".
The "worst-case" scenario thus presents several advantages :
- Its starting point is work already carried out (the accident analysis).
- It is addressed to participants who have already been sensitized.
- It is an opportunity to take the operator's experience into account.
- Finally, the debate about plausible causes is such as to blur the problems of responsibility (because these are fictitious situations being discussed).
However, we fear that reality may exceed fiction, and that the memory and imagination of the operators may inevitably be too limited to cover the whole spectrum of probable scenarios. Dumaine (1985) proposes to shore up our imagination with an accident model linked to a checklist. The nature of these two tools is such as to give a systematic aspect to the whole process.
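A minimal sketch of the "grafting" step might represent the real causal tree as nested nodes and attach imagined aggravating anomalies to it. All node labels here are invented for illustration; they stand in for the dysfunctions a working group would gather from its own experience:

```python
# A causal tree node: an event/factor and the antecedents that produced it.
def node(label, antecedents=(), imagined=False):
    return {"label": label, "antecedents": list(antecedents), "imagined": imagined}

# Real tree reconstructed from a benign incident (labels are illustrative).
tree = node("hand struck by part", [
    node("part slips from fixture", [node("worn clamp")]),
    node("operator's hand in trajectory", [node("manual repositioning of part")]),
])

def graft(tree, anomaly_label):
    """Graft an imagined aggravating anomaly onto the tree -> worst-case scenario."""
    tree["antecedents"].append(node(anomaly_label, imagined=True))
    return tree

worst_case = graft(tree, "safety guard removed for cleaning")

def count_imagined(t):
    """Count the fictitious nodes, i.e. how far the scenario goes beyond the facts."""
    return int(t["imagined"]) + sum(count_imagined(a) for a in t["antecedents"])

print(count_imagined(worst_case))  # -> 1
```

Marking the grafted nodes as imagined keeps the "a posteriori" facts distinct from the "a priori" additions, which is precisely what makes the procedure "mixed" in HO's sense.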
Application Of General Rules For Deducing Risks
In the "worst-case" scenario, the subject of the previous paragraph, an important step consists of establishing a list of anomalies (dysfunctions) already observed or in any case those that can be thought of. This list grows more and more complete in the course of the analysis, so much so that the central concept of an anomaly, while difficult to specify "a priori", is somehow found to be defined "in extension" (through progressive accumulation of concrete examples). We can equally well define the concept of an anomaly "in intension", that is, by conceiving rules whose application may make some plausible risks evident, without having recourse to a preliminary list of dysfunctions. This is precisely the case of the HAZOP method (Hazard and Operability Studies, British Chemical Industry Safety Council, 1974), whose application permits the detection of deviations by referring to the normal functioning of a process, with the aid of only seven key words (Table XII, p. 14).
In principle, the method consists first of a detailed description of the normal functioning of the process, breaking it down into a series of predicted operations from which we then envisage possible deviations by applying the key words (not, more, less, etc.). Each key word describes a type of discrepancy : absence, quantitative excess, undesirable concomitant effect, etc. This analysis leads to the creation of a table where the possible causes of the discrepancies are indicated, along with their consequences and of course the actions required or the technical modifications necessary to guarantee the safety or proper working of the system : moreover certain deviations can cause undesirable effects from a production point of view without having, for all that, any disastrous safety consequences. So, the HAZOP method can be used to advantage, and even more clearly than the methods already examined, as much for safety improvements as for production purposes.
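In spirit, the deviation-generation step can be sketched as crossing each predicted operation with the key words. The operations, the parameters, and the reduced key-word list below are invented for illustration (the method itself uses seven key words, per Table XII):

```python
# Hypothetical HAZOP-style deviation generation: cross each predicted
# operation's parameter with key words to enumerate deviations to examine.
GUIDE_WORDS = ["no", "more", "less", "reverse"]  # illustrative subset only

operations = [
    {"step": "fill reactor", "parameter": "flow"},
    {"step": "heat mixture", "parameter": "temperature"},
]

deviations = [
    f"{op['step']}: {word} {op['parameter']}"
    for op in operations
    for word in GUIDE_WORDS
]

for d in deviations:
    print(d)
# Each deviation ("fill reactor: no flow", "heat mixture: more temperature", ...)
# is then examined for possible causes, consequences and required actions.
```

The systematic cross-product is what lets the method surface deviations without any preliminary list of dysfunctions: the rules generate the candidates, and the working group judges their plausibility.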
In other respects, the list of key words suggests a possible taxonomy for "human errors", applicable from the instant when the operation under analysis ceases to be completely automated.
However, the simplicity of the principles underlying this method should not disguise the rigor needed for its implementation. Moreover, the reference document (op. cit.) insists heavily on this aspect : define the objective of the study, establish a preparatory file, create and motivate a working group, plan its activities, present the results in a synthetic document, choose which prevention measures have priority and follow up on their realization - these are just so many stages needed for the success of the operation.
Drawing Up A Checklist And Using It
Of all the techniques examined in this second chapter, risk analysis using a checklist can appear, in the first instance, to be one of the simplest ; in principle it is enough to collate the accident factors already observed, and to establish from them a structured list (for example, classify the factors according to their type, Monteau, 1975 ; Faverge, 1977), then observe their presence in the workplace, in order to take action.
In fact, when it is a matter of uncovering technical failures, in particular, in the context of visits or inspections (cf. Chapter I), the checklist can turn out to be an adequate basis for an investigation. Certain regulatory aspects, in particular obligations as to "means", are particularly suited to this type of practice.
So, as a general rule, the checklist is an efficient aid to tracking down risk only to the extent that it is composed of potential accident factors that are as directly observable as possible.
This demand implies firstly that the potential accident factors concern aspects of the work situation that are practically permanent ; it is easier to point out the absence of a set of guards on a machine than to notice a dangerous tool whose use is occasional and short-lived. The risk and the manner in which it is expressed must be, as a result, sufficiently concrete ; formulations that are too general, or concepts that are too abstract, lose all practical interest. But we equally well come across the inverse stumbling-block : interminable checklists made up of detailed and meticulous elements, which quickly become tiresome to use.
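Reduced to its data, such a checklist is simply a structured list of directly observable factors checked off during a visit. The groups and items below are invented examples, not drawn from any published list:

```python
# Hypothetical checklist of potential accident factors, grouped by type.
# Each item states a safe condition that is directly observable as present/absent.
checklist = {
    "machine guarding": [
        "guards fitted on all moving parts",
        "emergency stop reachable from the work position",
    ],
    "circulation": [
        "traffic aisles marked and unobstructed",
        "stair handrails in place",
    ],
}

# A visit records, for each item, whether the safe condition was observed.
visit = {
    "guards fitted on all moving parts": False,
    "emergency stop reachable from the work position": True,
    "traffic aisles marked and unobstructed": True,
    "stair handrails in place": False,
}

anomalies = [
    (group, item)
    for group, items in checklist.items()
    for item in items
    if not visit.get(item, False)  # items not observed safe are flagged
]
print(anomalies)
```

Note that every item concerns a practically permanent feature of the situation; a dangerous tool whose use is occasional and short-lived would slip through exactly as the text warns.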
The identification of potential accident factors with the help of a checklist is sometimes only a step in a more ambitious procedure. Thus Barthod (1985) states that many potential factors remain unnoticed as long as they do not combine to cause an accident (many detail anomalies are not very dangerous in themselves). The author then proposes to combine the observed potential factors by constructing plausible causal trees. This procedure, very akin to the "worst-case" scenario (cf. para "Starting From An Accident Analysis"), allows the operator to anticipate the accident : "to put together the mechanism of the accident, imagining such events at his place of work, is to acquire a necessary defensive reflex in work situations similar to the scenarios that were put together" (Barthod, op. cit.).
Finally the checklist can be conceived of, not only as a support listing the potential factors to be observed in the field, but rather as a kind of "aide-memoire" destined to guide the thought process of a work group : for example Dumaine (1983) reports on the work of a "development and safety circle" concerning the risk of falling in stairways. The group lists the possible causes of falling, starting from the experience of each participant. At first, these are classified in fairly large columns (Table XIII, p. 15), then in more detailed ones using the now famous Ishikawa diagram whose objective is to show the relationship of cause to effect, unifying the potential accident factors.
Dumaine is really describing an introspective procedure which allows one to construct a checklist whose significance is to suggest in an intuitive fashion the multiple causes of a given category of accidents, and the complexity of their mechanisms.
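The classification step described above can be sketched as grouping candidate causes under the branches of an Ishikawa-style diagram pointing at one effect. The branch names and causes below are invented for illustration and are not taken from Dumaine's circle:

```python
# Hypothetical Ishikawa-style grouping: causes of falls in stairways,
# classified under broad branches converging on a single effect.
EFFECT = "fall in stairway"

branches = {
    "environment": ["poor lighting", "wet steps"],
    "equipment": ["worn step nosing", "missing handrail"],
    "task": ["carrying a load two-handed", "hurrying between floors"],
}

def render(effect, branches):
    """Render the diagram as an indented text outline (branch -> causes -> effect)."""
    lines = [f"EFFECT: {effect}"]
    for branch, causes in branches.items():
        lines.append(f"  {branch}:")
        lines.extend(f"    - {cause}" for cause in causes)
    return "\n".join(lines)

print(render(EFFECT, branches))
```

The value of the structure is exactly what the text suggests: each participant's experience lands in a branch, and the completed outline hints at the multiple, combinable causes of one category of accident.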
Despite their diversity all the techniques or methods previously examined (cf. para "Industrial Practices") have one important point in common : their implementation is always a group affair, whether this be a "Safety Committee", a "Development Circle", a "Work Team" or the HSC. Risk diagnosis is from then on the result of a common train of thought which is as much of interest to the operator as to the expert. The accent placed upon operator participation responds, moreover, to a need : many undefined risks are of a "fleeting" character and so they are better known by those who build up the daily work routine (Laumont and Crevier, 1986). Further, we see that many operators are in a position to propose solutions to the problems found (de Keyser, 1987) and, on the other hand, the company is aware that a problem is never so well resolved as when the solution has been worked out by the people who have to apply it. However, it is obvious that we can hardly ask for operator participation without first putting together a certain number of favorable conditions ; and so, essentially the credibility given to these participative methods depends upon the improvements resulting from their application.9
Certainly, implementing participative risk analysis methods consists of a series of steps (choice of objectives, creation and motivation of a work group, follow-up on its achievements), whose definition and articulation demand from the very start a minimum of strategic thought involving the company's executive authorities right up to the top manager.
III - SYSTEMS ERGONOMICS CONCEPTS AND METHODS
III.1 - Four Statements To Justify The Approach
9"The managers who set up, precisely as their principal aim, an important participative system as an end in itself, run the risk of missing their goal. Rather, they should aim to create more efficient activity and to improve working conditions in such a way that those involved may be able to sense the improvement" (Swedish Employers' Federation, 1977).
Earlier, we stated that regulations and norms, useful and justifiable though they may be, do not do away with the need for an analysis of the work station. Regulatory and standards data appear, then, above all as indicators to guard against errors but they are no longer an end in themselves. So, different techniques, inspired more-or-less directly by ergonomics, are used to diagnose risk, essentially at the work station. However, the approach centered on the job is quickly going to appear as necessary but not sufficient. This inadequacy follows in fact from four statements which are very widely verified in industry :
- Risks appearing at the work station have outside origins.
- Work station risks are more and more often known locally.
- The traditional organization - a man and a machine - is tending to disappear.
- Safety is the result of the firm's overall way of working.
Risks Appearing At The Work Station Have Outside Origins
This is by now a matter of common observation : the majority of accident analyses effectively show that the injury results from a network of factors, some of which can involve elements very far removed from the work station implicated in the accident. In other words, the appearance of a risk does not imply a judgement as to its origins. This statement does not invalidate investigating the work station as long as it is a question of identifying local failure effects or deficiencies of various origins. On the other hand, this kind of analysis turns out to be insufficient when the optimum solution should be sought elsewhere than at the work station. According to Liu (1983), only a team procedure, in the context of a social and technical approach, is capable of avoiding this pitfall.
This approach "refuses to consider a local solution to be a general one, and rejects confusing a solution which is efficient at a given level with one which gives answers to problems of another level" (op. cit. p. 85). As a general rule, then, the question of the relevant level of action (work station ? team ? shop ?) must always be asked.
In principle, certainly, the reply is obvious ; it is better to eliminate the risks than to have to compensate for their effects, but in real life the solutions adopted in the firm are more often the result of compromise between what one would like and what is possible (van Daele, 1987).
Work Station Risks Are More And More Often Known Locally
At the individual level, de Keyser (1984) notes that "a good many risks are known in the field, even if they can be ignored by higher management levels". In studying the perception of risks by the operators in an industrial boiler-making works, Laumont and Crevier (1986) observed that 77 % of the risks associated with machine tools, handling jobs or work station layout are correctly perceived. In this case, however, the risks inherent in noise, dust and other emanations continue to be less well-known (39 %).
These statements relating to the individuals' knowledge of risk seem to be equally true on the collective level. Lievin and Pham (1980) studied the case of a wire-mill where the majority of the risks, identified early on by the HSC, were finally rediscovered in the accident analysis. Kjellen (1982) makes observations of the same sort regarding the Health and Safety Committees of different industrial firms. Dumaine (1983) tells us about the capacity of "development circles" for establishing a thorough list of the risks of falling on stairways and of those associated with traffic flow in a steel mill.
So all this leads us to believe that accident risks are being progressively recognized by those who live with them, even if this knowledge is sometimes biased or incomplete.
From then on, the risk diagnosis can no longer be limited to a list ; it must equally respond to a series of questions whose object is to determine what knowledge of the risks the operators have, the ability of the works authorities to take up this knowledge and to prepare plans of action, and the capability of the firm to change work situations that are at fault.
The Traditional Organization - A Man And A Machine - Is Tending To Disappear
For some time now, automation of continuous processes (energy production, petrochemicals) has made the traditional work station lose ground, if not disappear entirely.
More recently, automation has been progressively extended to discontinuous processes in the fields of mechanical engineering (machining), main assembly (parts welding), handling, and, in the near future, precision control and detail assembly will not escape this evolutionary process.
This transformation of modes of production profoundly changes the role of the operators of whom the majority are more and more removed from operations that they control at a distance. The complexity of the installations is becoming such that the risks associated with process intervention (adjustment, fault diagnosis or recovery from incidents) can no longer be analyzed without considering them in their totality. As a result, it is becoming more difficult to talk of a "work station" in the sense understood by classical ergonomists. Moreover, the work organization expresses this evolution in such a way that the distinction between maintenance and fabrication activities, for instance, becomes blurred in favor of an operating concept where the two activities are more or less mixed together (Decoster, 1988).
Faced with this, the systems approach comes to the fore ; the safety of an installation can no longer be taken into account as a separate item, rather it must be considered jointly with production concerns, reliability (reducing breakdowns), maintainability (speed of restarting following a breakdown) and quality.
"Safety Is The Result Of The Firm's Overall Way Of Working" (Leplat and Cuny, 1974)
The idea of safety being a product of the entire company is, doubtless, not new ; Cavé (1977), for example, emphasizes that : "production and safety are not opposed to each other but rather they are effectively two inseparable elements of good management policy on the part of the firms".
In reality, the inseparable nature of safety and good management became evident thanks to the introduction of the concept of "systems ergonomics" (by Faverge, 1965) and that of the "socio-technical system" by the Tavistock Institute of Human Relations.
So according to Cuny (1972), the transition from the ergonomics of one system (man/machine) to the ergonomics of multiple systems has first of all permitted us to have "a new perspective of industrial work" out of which have come concepts that are today classic, such as co-activity, frontier zones between production units, process intersection... But it is in the socio-technical approach that Cuny (op. cit.) sees a process "suitable for the integration and specification of different ergonomic approaches" proceeding from the overall to the elementary, from the factory to the man/machine system10.
Lastly, Leplat and Cuny (1974) consider the socio-technical perspective as the generalization of the man/machine system. "Elementary systems can themselves become elements of assemblies that are more or less complex. We will speak, then, of man/machine systems (M x M) or more generally of socio- technical systems" (Leplat and Cuny, 1974).
In short, we will retain the idea that safety will, in future, appear as one of the sometimes very indirect consequences of the way the complex system that makes up the firm works. We will hardly be astonished, then, that such-and-such a recruitment policy, such-and-such a commercial service or such-and-such an economic constraint can have effects on safety. On the other hand, this kind of dysfunction will be a great deal more difficult to establish ; we are far from the discovery of deviations from norms ; the risk will be expressed in terms of incompatible demands between subsystems, internal contradictions, means not adapted to their ends, etc.
III.2 - A Continuum Of Methods
All of the methods brought up in what follows adopt, then, the socio-technical perspective, but the fullness of this approach is equally the source of methodological difficulties ; the scope offered for analysis is in effect sufficiently vast to cause us to lose our way. We could also distinguish between risk diagnosis methods by virtue of the number of aspects, and the degree of depth to which these methods propose to explore them. In fact, each method is underpinned by a distinct and more-or-less explicit conception of the accident phenomenon which largely determines the mode of application. In this way, it appears possible to distinguish between two opposing concepts as regards the need to base the diagnosis of risk on an accident model : in one case, one economizes in some way on an accident model, while at the other extreme, the model plays a fundamental role in determining the relevant questions.
10 Cf., for example, Cooper and Foster (1971), Emery and Trist (1978), Liu (1981), Pasmore and coll. (1982).
According to the first concept, accidents are the consequence of a set of defective working conditions, each observed accident only representing a particular combination of these. The accident, essentially changeable in form, is one phenomenon among others, such as material damage and incidents. So the "a priori" diagnosis must take the set of working conditions into account, even if this means subsequently selecting those which call for urgent action.
At the opposite extreme, there exist methods which rely upon the use of an accident model. It is then enough to "make the model work", and this supplants the analyst's imagination or experience to produce useful questions. In other words, the model produces its own "a priori" question algorithm.
Between these two extremes, we will find methods for which the accident model plays a more or less auxiliary role to an extent where it most often is only an aid to thought.
So one can divide up the different methods according to the situation they occupy between these two poles. The following examples mark out this continuum in such a way that it is always possible to add future methods or those which have not yet been reviewed.
III.2.1 - Diagnostic Safety Form - DSF (Tuttle and coll., 1974)
The Diagnostic Safety Form - DSF - is the very example of a method that does not rely on any accident model. It has as its objective the location of a set of deficiencies in a series of fields which, according to the authors, determine the total "Safety Performance". This diagnosis results directly in "Action Modules" whose aim is to reduce the number of the identified deficiencies.
The DSF is presented in the format of a "closed" list of questions tackling nine themes of investigation (organizational characteristics, physical work environment, tools, machinery and equipment, training activities, etc.) which group together fifty questions. Each question is followed by a series of items, these being propositions for a response whose importance or frequency the respondent must assess. In this way, for example, the supervisor or operators concerned are asked to evaluate the importance that they give to :
- The fact that the firm has an explicit safety policy
- Providing adequate rewards for safe job performance
In these examples the persons concerned have the choice of five modes of reply, from "much below average importance" to "much above average importance", ranked from 0 to 5 respectively.
The application of the DSF is made up of four stages :
- The choice of the type of work station or activity ; the DSF is devised in order to detect problems common to a set of work stations, or to the same type of activity (mechanical maintenance, for example). However, the questions are not limited to the work station's problems or those of the particular activity investigated, but equally well touch upon some general themes (reception of new employees, role of managers in safety).
- The second stage consists of identifying the persons concerned : this being not only the operators assigned to the station in question or those carrying out a defined type of task but also their supervisors or the safety engineer, and finally the person responsible for training.
- The questionnaire is then distributed to the respondents, with certain questions addressed specifically to the safety engineer or to the person responsible for training.
- Pre-diagnosis and in-depth analysis.
The fourth stage corresponds to the development of results, according to the following principles :
- First are grouped together those items which define the same problem. Adopting a medical analogy, one would say that the replies to these items make it possible to identify those symptoms whose specific combination validates the diagnosis of an illness requiring treatment. In this particular case, the "illness-treatment" pairing is called an "action module". For example, there exists a module called "increasing motivation to work safely" whose score is calculated by taking the mean of the results of items 1 to 17, 20 and 26.
- The score is then determined for each module, which allows one to establish a classification for the problems to be dealt with. This classification is, in some sense, a pre-diagnosis.
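As a rough illustration, the module scoring and ranking just described might be sketched as follows. The item numbers (1 to 17, 20 and 26) and the 0-5 response scale come from the text ; the individual item scores, and the assumption that a low mean score signals a priority problem, are invented for the example (the method defines 33 modules in total, only one is shown).

```python
# Sketch of the DSF pre-diagnosis scoring (illustrative assumptions only).
from statistics import mean

# Invented example responses: item number -> score on the 0-5 scale.
item_scores = {i: 2 for i in range(1, 18)}  # items 1..17
item_scores[20] = 4
item_scores[26] = 1

# A module is a named group of items that point at the same problem.
modules = {
    "increasing motivation to work safely": list(range(1, 18)) + [20, 26],
    # ... the method defines 33 such modules in total
}

def module_score(items, scores):
    """Mean of the scores of the items making up one module."""
    return mean(scores[i] for i in items)

# Pre-diagnosis: rank modules by score (here, lowest mean first, on the
# assumption that a low score flags a problem area to treat in priority).
ranking = sorted(
    ((name, module_score(items, item_scores)) for name, items in modules.items()),
    key=lambda pair: pair[1],
)
```

The ranking obtained in this way corresponds to the "pre-diagnosis" described in the text ; the modules at the top of the list are those retained for in-depth analysis.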
An action module corresponds to a type of problem that is as defined and specific as possible. Thus there exist 33 modules, very varied, and the actions they suggest can require the skills of an outside consultant (for example : put together a system of personnel selection, make the work more rewarding, devise a system of safety awards...).
Starting from the pre-diagnosis, the modules which will be the object of an in-depth analysis are retained. Each module is presented in the form of a digest which gives an overview of the problem encountered and provides a path that is likely to lead rapidly to a solution. For example, the module already referred to ("increasing motivation to work safely") is made up of the following six points :
1. "Identify specific safe and unsafe behavior".
2. "Conduct interviews with workers and supervisors to identify specific circumstances, conditions, and practices which may interfere with people's attempts to work safely".
3. "Identify desirable and undesirable outcomes for individual employees"
4. "Determine what really happens to workers and what workers expect to happen as a result of safe vs. unsafe behavior (is there any difference ? which are believed to lead to more positive outcomes ?)"
5. "Modify organizational conditions to insure that a. workers are able to perform safe behaviors b. job outcomes that workers desire are provided only for behaviors that are safe"
6. "Allow workers to participate in deciding what changes are necessary and how to bring them about".
As we can see, this "in-depth" phase mostly makes use of the interview technique supported in the example shown by data from a questionnaire (for instance, dangerous behavior can have as a cause the poor condition of the equipment).
In conclusion, the DSF has first of all the advantage of being a procedure that is sufficiently standardized to be applied by a practitioner, such as the firm's Safety Engineer. The DSF makes it possible to carry out a pre-diagnosis of the issue in question, on the basis of which it is possible to call upon the services of different specialists, so as to resolve clearly the problems identified. The application of the DSF is a means of going beyond daily practices, by clearing the way for some priority prevention routes.
On the other hand, the DSF is incontestably a cumbersome procedure, insofar as there exists a great variety of work stations and activities inside the firm. Besides that, the "Action Modules" involve a certain number of options (selection and safety awards notably) which are difficult to transfer into other contexts "as is".
III.2.2 - Diagnosis Of Working Conditions (Piotet and Mabile, 1984)
In the course of the last decade, the focus on problems of working conditions has given rise to a multitude of concrete examples in companies, supported by an abundance of documentation. However, practical methods that are intended for local social participants who want to develop a process for improvement of their working conditions, remain rare. The work cited above, published by the (French) ANACT (National Association For Working Conditions Improvement), is certainly the most complete example.
Its objective is to fill a gap ; according to the authors, the principal obstacle to the improvement of working conditions is effectively not indifference, nor is it ill-will, but rather "the difficulty encountered by a group of partners to identify problems in their working conditions, and to master the relationships which they maintain not only with each other but also with the whole management system of the company" (op. cit.).
The objective, then, is to produce a tool which is as simple as possible and which allows us to carry out an evaluation of the firm's working conditions. The proposed solution consists of five steps :
1. Get a general view of the state of the different sectors or services making up the firm, especially as regards working conditions.
This "global vision" will serve to select the sector or sectors where the in-depth analysis is essential. To do this, the analysis group (a special commission of the Works Committee, for example) first lists the sectors, then describes their interconnections ("dependence analysis"), then extracts the constraints existing in each sector that create major problems (accidents, sickness, stress, etc.). This summary of the state of the workplace relies equally upon quantitative data ("tension and dysfunction indices") such as absenteeism, personnel turnover, internal transfers and the existence of serious conflicts.
2. Discover the sectors which are problematic and which it would be useful to get to know better.
This selection, which is referred to as the "pre-diagnosis", is carried out by drawing up a summary of the comparative state of the sector.
3. Make an in-depth analysis, with the objective the highlighting of strong and weak points of the sector under study.
This stage is carried out with the help of a set of "Elementary Evaluation" tables (Table XIV, p. 16), touching on nine themes of investigation which group 63 questions together. Each question is composed of one to three items (176 items in total) which in fact correspond to associated sub-questions calling for the analysts' evaluation. For example, under the heading of "Work Content", Question A1 is concerned with "the suitability of tools", and an evaluation is made of the three following aspects :
- The state of the tools (good, average, bad)
- Their suitability for the work (good, average, bad)
- Breakdowns (never, sometimes, often).
This "Elementary Evaluation" makes it possible to construct a questionnaire adapted to the case being studied, by selecting the relevant questions. If it appears, for example, that the suitability of the tools poses a serious problem, this question will be put to those involved (operators and supervisors) in the form given in Table XV, p. 17.
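This selection step can be sketched minimally as follows, under the assumption that a question is retained for the follow-up questionnaire whenever one of its items receives the worst rating on its scale. Question A1 and its three items are taken from the text ; the second question and all the ratings are invented for illustration.

```python
# Sketch of the "Elementary Evaluation" question-selection step.
# The selection rule (retain a question if any item gets the worst rating)
# is an assumption, not a rule stated by Piotet and Mabile.

BAD = {"bad", "often"}  # worst level of each rating scale (assumption)

questions = {
    "A1 suitability of tools": {            # question from the text
        "state of the tools": "average",
        "suitability for the work": "bad",
        "breakdowns": "sometimes",
    },
    "A2 work pace (hypothetical)": {        # invented second question
        "pace sustainable": "good",
    },
}

def needs_follow_up(items):
    """Retain the question when any of its items is rated at the worst level."""
    return any(rating in BAD for rating in items.values())

# Questions selected for the questionnaire put to operators and supervisors.
selected = [q for q, items in questions.items() if needs_follow_up(items)]
```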
4. Set the terms for diagnosis within the sector.
As the authors emphasize, and rightly, this step is vital but difficult because "the establishment of a viable diagnosis is a matter of method and art" (op. cit.). So above all they provide "suggestions" as to the method and procedure to be respected.11 The diagnosis is thought of as a special occasion for debating the causes of the problems found : "we must take time for discussion because the detection of causes is not always as obvious as we may think" (op. cit.). This diagnosis can put to the fore problems which stem as much from working conditions as from safety, as testified to by the analysis summarized in Table XVI, p. 17.
5. Draw up a plan of action.
This phase is presented, with some realism, as a delicate stage, as much from the point of view of technique (generally, there are several possible solutions) as from the psychological one : there are no miracle solutions but "the best answer is always the one that comes from negotiation, that is, from confrontation between points of view and expert evaluation" (op. cit).
Finally, the authors underline the need to follow up on and evaluate the actions taken.
In conclusion, the method examined is of special interest in that it is an attempt to develop tools intended essentially for local social participants. It undoubtedly is still open to improvement as regards the use which can be made of it, but it appears difficult to simplify its operational mode much without running the risk of gravely distorting it. For all that, the analytical method seems demanding (in terms of time and effort), but our conclusion is that a real improvement in working conditions comes at this price.
On the methodological plane, we will note numerous convergences with the DSF examined in the previous paragraph : - Absence of reference to an accident model in both cases - The same kind of use of the questionnaire - The same progressive approach to the problems (pre-diagnosis, in-depth study) - Often common themes of investigation - Same cautiousness regarding the expert (the study of the working conditions is first carried out by the firm itself).
However, in both cases, the lack of a reference to an accident model does not seem to be without methodological consequences. Everything effectively takes place as if this absence were made up for by an implicit reference to what would be the ideal firm, to which the unit under study is then compared. This reference is generally common to both approaches regarding the most recognized accident-producing factors (notably, bad condition of materials and tools, physical nuisances). On the other hand, notable differences are observed for factors less directly linked to the accident. In this way, the first example (DSF) lets some conceptions stemming from the behaviorist trend show through, particularly in certain in-depth models. The second example, for its part, takes up themes of investigation such as "liberty of expression", "solidarity of interest with the firm", "participation in decisions", which bring up the values embodied in "industrial democracy".12
11 The analysts make use of a "little library" which furnishes a digest of the different themes of investigation.
III.2.3 - Safety Diagnosis Questionnaire - SDQ (Bernhardt and coll., 1984)
This method relies upon a concept of accident-causing situations, according to which the risk of accident appears when the operator can no longer predict the progress of the work or regulate the level of risk, that is to say, when the risk reaches such a level that the operator is no longer able to guarantee control of it by himself (or herself).
So, the safety objective is compromised :
- (Quite obviously) when there are multiple risks.
- If the operator is confronted with several simultaneous risks (for example, obstacles to movement, sparks projected from a welding job).
- When the technical or organizational conditions are found to be incompatible with the requirements needed to carry out the work safely.
These possible incompatibilities are called "Critical Constellations" (Fig. 2, p. 18) and the principal aim of the "a priori" diagnosis is precisely to detect the possibility of an accident in a work situation. This is accomplished with the help of a questionnaire consisting of 250 items, divided into eight units of investigation :
1. Structure And Implementation Of Work Safety Programs, 2. Formal Organization Of Work, 3. Environmental Influences, 4. Possibilities Of Hazards And Safety In The Work System, 5. Presentation And Processing Of Information, 6. Execution Of The Work Task, 7. Communication And Cooperation, 8. Acting In Safety-Critical Situations.
The questionnaire is filled in by "safety experts" (op. cit.) but many questions are addressed to the operators. It has been created in order to study work stations or jobs in their real organizational and technical context. The replies to the questionnaire allow us to define their "Critical Constellations" which are, in effect, potential and complex accident factors (Fig. 2 is a diagram of the principle involved) ; for example, "danger due to lack of cooperation" corresponds to the following configuration (for welding work) :
12 According to Weiss (1978), the expression "industrial democracy" can be used conventionally "to identify a non-exclusive contractual system of industrial relations, in which workers and unions find themselves to be, in some manner - and in a different way - involved in the firm's running and decisions".
Demand :
- Communication with changing cooperation partners
- Agreement about utilization of space
- Verbal communication

Incompatible Condition :
- Risk area
- Time pressure
- Contradictory verbal instructions
- (No) contact with work safety personnel
- Working under unfavorable acoustic conditions
- No communication with foreigners
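The principle of such a "Critical Constellation" might be sketched as follows, under the assumption that a constellation is flagged when a work situation imposes all of its demands while exhibiting at least one incompatible condition. The demand and condition labels are taken from the welding example above ; the detection rule itself is an assumption about how the questionnaire replies would be combined, not the SDQ's published procedure.

```python
# Hedged sketch of detecting one "Critical Constellation" (SDQ).

constellation = {
    "name": "danger due to lack of cooperation",
    "demands": {
        "communication with changing cooperation partners",
        "verbal communication",
    },
    "incompatible_conditions": {
        "time pressure",
        "working under unfavorable acoustic conditions",
    },
}

def constellation_present(work_situation, constellation):
    """Flag the constellation when the situation imposes every demand and
    exhibits at least one condition incompatible with meeting them safely."""
    demands_met = constellation["demands"] <= work_situation
    clash = bool(constellation["incompatible_conditions"] & work_situation)
    return demands_met and clash

# A work situation is described here as the set of features observed in it.
situation = {
    "communication with changing cooperation partners",
    "verbal communication",
    "time pressure",
}
flagged = constellation_present(situation, constellation)
```

The point the sketch makes is the one argued in the conclusion below : neither "verbal communication" nor "time pressure" is alarming in isolation ; it is their conjunction that constitutes the potential accident factor.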
In conclusion, the accident model which underlies the SDQ (for all that it is very simple) corresponds nonetheless to a well-established reality : most accidents actually result from the synthesis of elements which can appear to be fairly insignificant from a safety point of view when they are considered in isolation. Such syntheses of accident factors have been extracted from work accidents (Lievin and Pham, 1980) or road accidents (Fleury and coll., 1987). These syntheses, or "Critical Constellations", are in some way "minimum scenarios" sufficiently general to be found in several kinds of work situations but also sufficiently specific not to remain undiscovered during an "a priori" analysis ; effectively, minimum scenarios inevitably occur less frequently than their component elements, but for this very reason, such situations can attract the attention of interested parties by virtue of their unusual or unexpected character.
III.2.4 - Management Oversight And Risk Tree - MORT (Johnson, 1975)
In its essence, MORT is an accident analysis method, hence an "a posteriori" method and it is already listed as such (Monteau, 1979b).
As a matter of interest, the accident is first defined as "an unwanted transfer of energy that produces injury or damage to persons and property or degradation of an ongoing process. It occurs because of a lack of barriers and/or controls. It is preceded by sequences of planning errors and operational errors that produce failures to adjust to changes in human or environmental factors. These errors lead directly to unsafe conditions and unsafe acts that arise out of the risk in an activity" (Johnson, 1975).
Consequently, the accident is the result of a set of omissions or inadequacies which are logically combined and which one can then single out in a systematic fashion by following the diagram reproduced in Fig. 3, p. 19.
The accident analysis is oriented, then, in the following three directions :
- The search for "specific" factors such as oversights involving protection, for example.
- The search for "assumed" risk factors, that is, those tolerated because of their rarity, or because it is impossible to find an answer, or because their prevention is too costly.
- The search for factors linked to the firm's "general characteristics of the management system", which have contributed, directly or not, to the occurrence of the accident.
This investigation is carried out with the help of 300 questions, many of which are of the "open" type (that is to say, there is no standardization of replies).
For example, the "Maintenance" sub-problem is made up of the following questions (Johnson, op.cit.) :
• Was the maintenance plan LTA (Less Than Adequate) ? Was there a failure to specify a plan ?
a) Was maintainability LTA ?
b) Was the schedule LTA ?
c) Was competence LTA ?
• Was there a failure to analyze failures in order to determine causes ?
• Was execution LTA ?
(Etc.)
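The systematic, branching character of this questioning can be illustrated, very schematically, as a walk over a tree of questions that collects every branch judged LTA. The "Maintenance" questions are quoted from the text ; the tree structure and the LTA judgements are invented for the example.

```python
# Illustrative sketch of walking a MORT-style question tree.
# The judgements ("lta": True/False) are invented example data.

maintenance = {
    "question": "Was the maintenance plan LTA?",
    "lta": True,
    "children": [
        {"question": "Was maintainability LTA?", "lta": False, "children": []},
        {"question": "Was the schedule LTA?", "lta": True, "children": []},
        {"question": "Was competence LTA?", "lta": False, "children": []},
    ],
}

def collect_lta(node, path=()):
    """Depth-first walk returning the path to every branch judged LTA."""
    found = []
    here = path + (node["question"],)
    if node.get("lta"):
        found.append(here)
    for child in node.get("children", []):
        found.extend(collect_lta(child, here))
    return found

findings = collect_lta(maintenance)
```

Used "a posteriori", such findings point at contributing factors of an accident ; used "a priori", the same walk serves as an audit checklist, which is precisely the dual use discussed below.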
The questionnaire can be used in an "a priori" manner in two ways :
- The author himself envisages its application to the examination of quality and to the relevance of a safety plan. In a certain way, the questionnaire becomes a guide to prevention that is usable in the framework of an audit of the function. Suokas (1988) notes that, "in this case the aim of the search is to identify accident contributors at a higher level and to direct the use of other methods of safety analysis to examine more closely the problems identified".
- Hendrick and Benner (1987) observe that not all the questions asked during an accident analysis are necessarily relevant in every case. So certain questions can be found to be superfluous in the specific context of the accident under analysis, as they are not relevant ; we can, for example, discover a maintenance failure without this having played an active role in the accident under study. Furthermore, the fact that the analyst has to make a judgement on a given problem in terms of "satisfactory" or "unsatisfactory", leads (in the best of cases) to an in-depth examination of the work situation. In practice, then, systematic application of the MORT method combines (without having intended to) the "a posteriori" and "a priori" approaches. This confusion can be perceived as being a source of enrichment or tediousness, according to one's point of view and immediate objectives.
In conclusion, the MORT method is an example of an "a posteriori" method whose use is extended to the "a priori" diagnosis of risk. It is, in a way, an example of a link between "a posteriori" and "a priori" methods. There is, then, no sharp break between these two types of approach.
Many accident analysis methods can undoubtedly be used in a dual way like this, to the extent that they are expressed in the form of a questionnaire, or rely upon an accident model that directly suggests some precise questions ("Algorithmic Methods", cf. Monteau, 1979b).
Such would be the case, for example, for the models proposed by the Swedish Work Environment Fund (cf. Corlett and Carter, 1982), by Kjellen and Larson (1981), Périlhon (1985) or Tuominen and Saari (1982).
Moreover, this example brings to mind that one of the most immediate means of putting "a priori" actions into gear consists of analyzing not only undesirable events other than the accident, such as incidents and near-misses, but also dangerous situations, false maneuvers or errors in the execution of a prescribed task. Safety practitioners have long been convinced of the importance of analyzing all events. Nevertheless, these types of practices remain fairly rare, so much so that an incident (by definition, an event without materially important consequences) is often ignored, indeed often covered up, when those who caused it feel that it resulted from willful disobedience of formal rules.
III.2.5 - An Example Of Systemic Accident Modelling (Dumaine, 1985)
This last mode of "a priori" analysis is based upon the use of an accident model shown in Table XVII, pp. 20-21. The model proposed is of itself valid as a generalized description of the accident mechanism.13 The accident, or more precisely the injury, appears as the result of a logical chain of sequences which themselves are so many areas for possible accident prevention.
In principle, then, this "a priori" risk analysis consists of progressively constructing a degraded work situation by identifying :
- The potential dangers (energies, locations incompatible with the presence of human beings, etc.).
- The processes ("undermining", "destructive" and "triggering") by which the dangers can become a reality.
13 Strictly speaking, we can consider that the term "model" is employed in an abusively extensive fashion because, as Lliboutry (1985) recalls, "one should call a model the description of reality which provides scope for calculation". The notion of "model" as used here approaches that of the "scope model" proposed by Rouanet (1983), which corresponds to the "concrete" idea of a "device" constructed with the aim of putting into operational form the questions asked about the phenomena under study. So, according to Rouanet, the "scope model" "furnishes a context in which to interpret the acquired data".
- Factors originating triggering of these processes ("generating factors", "antecedent causes").
So the author proposes a grid for classifying antecedent causes destined to facilitate their identification, while pointing out that "the classification of causes is not of primary importance" (Dumaine, 1985).
In contrast, details of the accident are correctly located within the prevention phase proper, which results from the analysis and which is the last stage of it.
The proposed solution thus has four essential characteristics :
- It puts into practical form an affirmed didactic intention. In effect, the accident model, while remaining relatively simple, nonetheless clarifies the fundamental properties of the accident that we need to know about, namely : multiple causes of the phenomenon ; existence of causes over and above that of the injury (in depth) ; coming together of independent processes, etc.
- The adopted model above all furnishes a guide to analysis which replaces, and to good effect, a closed questionnaire which would risk being overcrowded with questions and which would soon become tiresome to use.
- Implementing this solution above all involves participation and calls upon the expertise of all those involved (operators and their supervisors), implying "visits shop by shop, work station by work station and preferably accompanied by those responsible and most senior in that sector" (Dumaine, op. cit.).
- The solution is primarily aimed at the prevention of serious accidents ; "there is a particular therapy for serious accidents, even if human behavior plays a role in the majority of accident configurations and remains a frequent, if not systematic, component" (Dumaine, op. cit.).
However, De Keyser (1987) suggests that the serious risks are known and that the origin of accidents, rather, has its roots in the lack of adaptation of the applied means of prevention. "Dumaine assimilates the latent risk into the technical risk, which is well known and for which, for want of being able to eliminate it, passive safety measures are activated ; that is, a set of conditions favorable to the elimination of, or limitation of, the effects of accident. This type of passive safety includes : safety features inherent in the machine, individual or collective types of protection available, emergency procedures, non-material barriers etc. However, not a day goes by without some accident revealing that the protection was neutralized because it got in the way of the work, safety systems were put out of action, etc."
Elsewhere, HO (1987) underlines the increasingly non-material character of some potential accident factors as, for example, the omission of some information vital to the operator or an inadequate mental representation of the technical process that he is controlling.
Finally, according to Cuny (1987b), the solution shown here represents "appreciable progress and furnishes us with a tool whose application should be encouraged in industry" but he adds that this mode of "a priori" risk research "lacks a functional technical structure that offers the possibility of achieving in this research, if not exhaustiveness, at least an efficiency which approaches it".14
Ultimately, we consider that only an extensive application of this procedure would permit the safety practitioner to establish its advantages and limits for the firm.
III.3 - Value And Limitations Of Systems Ergonomics Methods
As one would expect, the use of the methods examined in the previous paragraphs assumes that several conditions are fulfilled.
From a general point of view, we can observe that deepening our knowledge of the risks - an objective aimed at by all the methods discussed - demands a double effort :
• A technical effort to adapt the method to the specific problems of the firm; even the most formal methods (the DSF, for instance) demand a reasoned selection of the questions and themes of investigation, as a function of the firm's technical characteristics (type of activity), organizational characteristics (existence of functional structures) and psychological ones (motivation of those involved regarding safety problems). This adaptation effort requires knowledge (technical, ergonomic, etc.) which is not necessarily possessed by the group responsible for implementing the adopted procedure, and this comment is even more true for the application phase. Several solutions have therefore been proposed to resolve this difficulty:
- Give the group preliminary training in the method used (case of the MORT method).
- Have the group train itself with the help of the training aids associated with the method (DSF, "Diagnosis Of Working Conditions").
- Choose members of the group from among those who have the best knowledge of the work under examination ("Accident Modelling").
- Have recourse to "Safety Experts" (SDQ).
• An effort to put in place rigorous logistics. Application of the previous methods should involve drawing up a detailed programme, without which we fear the action risks being scuttled. This risk is doubtless increased if the method used brings significant results only after its complete implementation (case of the DSF, notably). On this point, some methods are more "flexible" than others, in that they allow preventive actions to be discovered during the course of applying the method (case of the SDQ and "Accident Modelling").
14. Cuny (1987b) sketches out the four principles which those subscribing to the HAZOP method (already discussed in Chapter II.3, Para. "Application Of General Rules For Deducing Risks") should not fail to note, namely: start from the real work sequence; list all the variations which the system elements can encounter; cross the elements with each other and note their possible interactions; identify risks starting from the preceding grid.
This double effort of technical adaptation and action planning presupposes the availability and involvement of the local social partners; it also demands that the method (chosen by the group which puts it into operation) be truly suitable.
Finally, one must remember, if necessary, that knowledge of the risks uncovered by the application of "a priori" methods is only an intermediate objective, and that the actors will participate in the action only as long as they remain convinced that it will lead to real modifications in their working conditions. The improvement obtained must therefore be worth the efforts agreed to by the analysts, and the existence of a balance between these two terms (technical adaptation and plan of action) determines, to a great extent, the continuation of the action.
400 - SYSTEMS SAFETY CONCEPTS
401 - BIRTH AND DEVELOPMENT
Fault tree methods (1960's)
Systems safety has been built up as a discipline as the result of an increasing need to master new and little-known risks. The complexity of high-risk technical systems (weapons systems, electro-nuclear, aeronautical) has actively contributed to the development of a systemic approach to installations.
The "systemic approach" designates a series of analytical steps designed to deal with the complex problem of characterizing numerous forms of organization (biological, social, socio-technical, etc.). Examination of their interactions has a central place in that process.
It is in fact the capacity for considering interaction phenomena between the elements of a technical system (functional units, components) that characterizes the methodological originality of systems safety.
But, at the beginning of this approach, people were concerned in a more restricted way with the reliability of the techniques that had been put into practice. Lack of experience as regards the existence of certain failure risks naturally led those persons responsible for complex technical programmes to concentrate their efforts on perfecting a method, or rather an analytical tool, which would allow them to proceed towards a systematic examination of risks.
The Fault tree analysis method, developed by the Bell Telephone Company for the U.S. Air Force, would meet this first requirement, which is to control the risk in an analytical fashion. This method was first tried out at the beginning of the 1960's for evaluating the safety of Minuteman ICBM firing systems.
Development of Fault tree analysis would be pursued by the Boeing Company, which was also developing the "Preliminary risk analysis" method. During 1965, Boeing, with Washington University, would co-sponsor a symposium on Systems Safety in Seattle: dissemination of the method had begun.
It would find an important area of application some ten years later, with the appearance of the "Rasmussen Report" (1975), alias the "Wash-1400 Report", concerning the safety of American nuclear power stations. Started at the beginning of the 1970's, this study represented the first exhaustive risk analysis of nuclear power stations. It would contribute to the development of probabilistic risk analysis by deliberately giving importance to quantifying risk, using the Fault tree method. Another analytical method would be developed during this study : the "Event tree" method, presented as being complementary to Fault tree analysis.
Debates
The polemic which followed the publication of this study resulted in part from that attitude. What was missing was the use of a "coherent methodology organized at the level of a complete system, that takes into account, on one hand, its environment (technological, natural, human) and, on the other, its lifetime (commissioning, maintenance, etc...)", (Deschanels and Lavedrine 1984, p. 32).
The American Institute of Physics effectively leveled a certain number of methodological and technical criticisms at the "Wash-1400 Report", and that reaction led to the development of a second version, which would itself be called into question by the "Lewis Report" (1978), drawn up at the behest of the United States Congress. This document, reviewing the achievements and limits of "Wash-1400", sets forth various criticisms, bearing notably on the validity of the basic data and upon the methodological and statistical means used. Finally, it emphasizes the need for more reliable methods for the quantification of uncertainty.
These conclusions contributed to the decision taken at the beginning of 1979 by the Nuclear Regulatory Commission (NRC) to reject the results of Wash-1400.
Such a situation almost condemned at birth the nascent techniques of probabilistic risk quantification and, in consequence, threatened the existence of the Fault tree method, which gave importance to pursuing that objective.
The accident that occurred in March of that same year at the Three Mile Island nuclear power plant stopped that movement in its tracks. The accident scenario, as reconstructed after the event, had effectively been foreseen by the Rasmussen Report (although the latter asserted that the damaged core would melt, and even though the calculated probabilities, notably those related to human error, appeared retrospectively to be devoid of any foundation). The Three Mile Island accident would contribute to relaunching interest in probabilistic analyses of risks in the nuclear industry.
Nonetheless, in parallel with these events systems safety analysis methods developed rapidly. They found various applications, as much on the scale of important industrial sites (Safety analysis of the Petrochemical Complex on Canvey Island, 1978 and 1981) as for the operational safety of industrial products such as aircraft (Wanner, 1969).
Developmental factors
Systems safety progressively developed as a discipline according to its own logic, that of the creation and improvement of analytical tools, designed from the beginning to improve the reliability of systems that were technically complex and carried high risks for the safety of personnel.
Further, the development and dissemination of these new methods and techniques have been reinforced by the evolution of a nuclear industry generating ever greater risks of major accidents.
Figure 1, p. 24, is a schematic of the different elementary contributions that account for the importance given to systems safety methods today.
Besides the essentially historical factors already discussed in the previous paragraph (complex and unreliable technologies, tools available), two other factors must be considered:
- The impact of industrial catastrophes,
- The development of large industrial sites.
Industrial catastrophes
Industrial catastrophes (Feyzin, Flixborough, Three Mile Island, etc.) have undeniably contributed to favouring the development of analytical methods for risk prediction. The human and material losses consequent on these accidents have brought with them social consequences (unacceptability of major risks) and economic ones (accumulated safety demands on the part of insurance companies) which made it necessary to find efficient solutions quickly. The development of the SEVESO EC Directive and, for France, the law on "classified installations" are regulatory responses which, in formulating obligations as to results (but not providing the means to achieve them), lead industrial firms to seek out and adopt efficient procedures for risk analysis.
The methods that stem from systems safety can then meet their expectations.
The SEVESO EC Directive of 24 June 1982 puts an obligation on European Community member states to take all necessary measures to ensure that those responsible for certain industrial activities (we note that military and industrial installations are excluded here) can prove that they have determined "the existing risks of major accidents and have taken the appropriate safety steps and informed, trained and equipped all their personnel working on site in order to assure their safety" (Article 4 of the Directive).
In France, it is through the legislation on "classified installations" (Law of 19 July 1976 and application decree of 21 September 1977) that this Directive is applied. This legislation submits installations that can present dangers or inconveniences to "authorization or declaration according to the gravity of the dangers or inconveniences that their use can present" for the well-being of the neighborhood, or for public health and safety. As regards classified installations, see Ferange (1984) and Charbonneau (1987).
The development of large industrial sites
Serious risks are implied, notably as a result of the (nuclear and chemical) energies brought into play. Besides, accelerating technical development reduces more and more the amount of relevant technical experience already acquired; new and unfamiliar risks appear (emission of harmful products, for example). This lack (or even total absence) of experience engenders the need to create or develop tools that allow us to control the risk, provisionally at least.
Existing methods tried out in very specialized or "frontier" (state-of-the-art) industrial sectors arouse the interest of other sectors, such as the chemical and petrochemical industries. Most often, those responsible will have to adapt these tools to their needs and possibilities, particularly economic ones. Thus the implementation of methods of systems safety for very varied industrial needs is often relatively removed from the tedious and complex procedures applied in the specific fields of weaponry, nuclear energy or, again, the aeronautical industry.
402 - RISK CRITERIA
Criteria for defining the "acceptable risk threshold"
The fundamental objective for a study of systems safety is to attain a level of safety judged to be satisfactory. It rests consequently on a comparison between an evaluated level of safety and the standard.
The definition of a safety standard for a complex technical system in industrial use is carried out by fixing the maximum allowable risk of accident, and this measure should be the subject of a certain amount of preliminary agreement.
Formal definition of risk
The idea of risk, whether it is a matter of risks affecting the means of operation, the environment or the individual, is characterized by the pairing of "possibility of occurrence" and "gravity of the consequences" as applied to a dreaded event.
From a theoretical point of view, a curve of risk acceptability can then be defined (Fig. 2, p. 25).
This curve allows us to distinguish between acceptable and non-acceptable risks. A feared event can thus be represented by a point defined by two coordinates, "probability" (X-axis) and "gravity" (Y-axis) : event A, with serious consequences but low probability of occurrence, would represent an acceptable risk ; vice-versa, event B, of lesser gravity but more probable, would correspond to a risk that we would judge to be unacceptable.
This way of representing risk raises two questions:
- How does one evaluate the probability of occurrence of an event?
- What are the criteria that permit us to delineate the "frontier of the acceptable"?
The product "Gravity x Probability" is only constant from a theoretical point of view (inversely proportional values characterizing the curve of the equilateral hyperbola, in the form Y = a/X). However, we might need to ask ourselves whether Event B on Fig. 2, p. 25, an event whose gravity and rarity are not extreme, is really the concern of systems safety.
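The theoretical trade-off just described can be sketched in a few lines of code. This is only an illustration of the iso-risk curve Y = a/X, with an arbitrary acceptability constant and hypothetical event coordinates, not values drawn from the text:

```python
# Sketch of the theoretical acceptability curve Y = a/X (equilateral
# hyperbola): an event is acceptable when its "gravity x probability"
# product stays below the constant a. The constant A and the sample
# events below are hypothetical illustrative values.
A = 1e-4  # assumed acceptability constant

def is_acceptable(probability, gravity, a=A):
    """True when the event lies below the iso-risk curve Y = a/X."""
    return probability * gravity < a

# Event A: serious consequences but very low probability -> acceptable
print(is_acceptable(probability=1e-7, gravity=100))   # True
# Event B: lesser gravity but far more probable -> unacceptable
print(is_acceptable(probability=1e-2, gravity=10))    # False
```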
Probability of occurrence of an event
This is generally evaluated from statistical estimates. Thus, to determine the probability of failure of a simple component (a mechanical or electronic part, for example), it is possible to use statistical data resulting from laboratory trials or data from operational use, established from acquired experience. For example, we will attribute to such-and-such an electronic component a failure probability of 1/1000th per hour of operation (10**-3/h).
Difficulties arise when very low probabilities have to be evaluated, in other words, probabilities which are not amenable to statistical operations. For example, the evaluations might no longer be based upon the reliability of a given element but, as Lievens (1976) suggests, upon the reliability of a given method: "Thus, the total number of flight hours made by the set of all airplanes permits us to say that the method of dimensioning aerofoil longerons is sufficiently reliable for us to evaluate the probability of rupture of that element as less than 10**-9/h" (Lievens, op. cit., p. 53). However, as a general rule, it is certain that when we are referring to undesirable events that are very rare (probability criterion), the evaluations can be marred by a large amount of uncertainty, due, in particular, to the arbitrary nature of the imagined scenarios and the accumulation of margins of error.
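The simple statistical estimate mentioned above (failures observed over accumulated hours of operation) can be sketched as follows. The counts are hypothetical, chosen only so that the rate matches the 10**-3/h order of magnitude quoted in the text:

```python
# Hypothetical sketch: estimating a component's hourly failure
# probability from operational experience, as described in the text
# (observed failures divided by accumulated hours of operation).
failures = 12          # assumed number of observed failures
hours = 12_000         # assumed accumulated operating hours

failure_rate = failures / hours   # failures per hour of operation
print(f"{failure_rate:.0e} per hour")  # prints: 1e-03 per hour
```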
This statement is also valid for very serious accidents (gravity criterion), whose social and financial consequences are difficult to evaluate in advance.
Delineating the "Frontiers of the acceptable"
This comes from a decision that is political in nature. Certain other considerations can, moreover, contribute to fixing the threshold. The two evaluation methods usually called upon to provide a foundation for decisions, and to justify them if the need arises, are the following:
- Evaluation from the standpoint of the value of human lives, which leads us to translate into monetary terms the advantages obtained from reducing the risk. This evaluation method (for example, discounting the total of the individuals' net future income, this value reflecting their contribution to the country's GNP) will allow us to allocate resources in such a way that, given a set of budgetary constraints, we can maximize the number of lives saved.
- Evaluation from the standpoint of comparing different risks that are already "accepted". This approach is then no longer concerned with cost but with level of risk.
To the best of our knowledge, the first work of importance to have been devoted to these questions is that of Starr (1969). This author distinguishes between two types of activity:
- Voluntary activities, for which an individual makes use of his own scale of values to take decisions (for example, tobacco smoking).
- Involuntary activities, whose choice, control or mastery generally lies outside the individual's reach. They must consequently be considered as being imposed by society (for example, the use of electricity).
This author has also evaluated the "annual mean benefit" obtained from these different activities:
- For voluntary activities, the benefit is measured by the average of the amounts which the individual is disposed to spend on them.
- For involuntary (imposed) activities, the associated benefit corresponds to the contribution of each of them to the annual mean revenue per individual.
Finally Starr established a correlation between the "fatal accident probability per hour of exposure" and this "annual mean benefit".
The results obtained led the author to propose the following two interpretations:
- For an equivalent benefit, the individual will accept a risk about one thousand times higher when it results from a voluntary activity than when it is linked to an involuntary one.
- For each category of activity, the risk increases much faster than the advantage associated with it. Thus, a doubled economic advantage will bring with it a risk that is eight times as great.
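Starr's second interpretation (a doubled advantage bringing an eightfold risk) amounts to saying that the accepted risk grows as the cube of the benefit. A minimal numerical check, with an arbitrary proportionality constant k:

```python
# If risk = k * benefit**3 (k an arbitrary constant), doubling the
# benefit multiplies the risk by 2**3 = 8, matching the eightfold
# increase reported by Starr.
def risk(benefit, k=1.0):
    return k * benefit ** 3

print(risk(2.0) / risk(1.0))  # 8.0
```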
Villemeur (1988) observes however that "the study of the relationships between risks and expected benefits - the average annual benefit - has given rise to numerous discussions, given the difficulties of assessing the expected benefits".
Starr's work also showed that the mean probability of death caused by illness (around 10**-7 per person per hour) represents a limit value for risk acceptability. This value is effectively often used as a limit for accepting the probability of occurrence of a very grave situation.
Classes of gravity and probability
The levels of probability and gravity that characterize the different levels of risk are rarely presented in a continuous manner. Rather, practitioners define classes for each level. Here, by way of illustration, are the thresholds fixed in the field of civil aviation (Lievens, op. cit.):
Consequences - Classes of gravity :
- Minor: there is no perceived deterioration of the system's performance, nor interruption of the mission, nor injury to persons, nor notable harm to property or to the system itself.
- Significant: there is a perceived deterioration in the performance of the system which can cause interruption of the mission, but there is no injury to persons nor notable damage to property or the system.
- Critical: there can be injury to persons and/or notable damage to property or the system.
- Catastrophic: the system is destroyed and/or many persons are seriously injured or killed.
Classes of event probability :
- Frequent: an event whose probability of appearance is above 10**-3 per hour.
- Less frequent: an event whose probability of appearance lies between 10**-5 and 10**-3 per hour.
- Rare: an event whose probability of appearance lies between 10**-7 and 10**-5 per hour.
- Extremely rare: an event whose probability of appearance lies between 10**-9 and 10**-7 per hour.
- Extremely improbable: an event whose probability of appearance is below 10**-9 per hour.
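The civil-aviation probability classes quoted above amount to binning an hourly probability against fixed thresholds. A minimal sketch, using the thresholds after Lievens as given in the text:

```python
# Binning an hourly probability of appearance into the civil-aviation
# probability classes quoted in the text (thresholds after Lievens).
def probability_class(p_per_hour):
    if p_per_hour > 1e-3:
        return "Frequent"
    if p_per_hour > 1e-5:
        return "Less frequent"
    if p_per_hour > 1e-7:
        return "Rare"
    if p_per_hour > 1e-9:
        return "Extremely rare"
    return "Extremely improbable"

print(probability_class(1e-4))   # Less frequent
print(probability_class(1e-8))   # Extremely rare
```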
The overall system safety objectives can be expressed in the form of a grid like that shown in Table I, p. 26. Simpler risk evaluation grids, which do not rely on the notion of quantified probability, are also often used. The French Union of Chemical Industries (UCI) suggests one in its "Safety Note No. 4".
403 - INDUCTION AND DEDUCTION
The application of systems safety calls for reasoning by induction and by deduction. This terminology designates the complementary procedures of identification and hazards analysis, which are expressed in concrete terms by the use of particular techniques. The best known are "Failure Modes and Effects Analysis" (FMEA) for the inductive procedure and "Fault Tree Analysis" (FTA) for the deductive one.
Inductive procedure
Starting from causes identified at the very beginning, this consists of portraying the different sequences of events likely to lead to one or several effects prejudicial to the system.
The inductive procedure "descends" from cause to effect. It is also called the "direct method", an expression which successfully conveys the sense of direction of the investigation, from the causes towards the effects.
Deductive procedure
This consists of working back to the root causes of the failures, which are given "a priori", by reconstructing the course of events likely to lead to these failures. The deductive step "ascends" from the effects towards the causes, justifying the use of the equivalent term "inverse method".
Specific and complementary nature of both procedures
It is interesting to note that, contrary to the inductive process, which is characteristic of numerous analytical methods, such as FMEA but also Preliminary Hazard Analysis (PHA), Event Tree Analysis (ETA), etc., the deductive process is generally associated with a single method, namely Fault Tree Analysis (FTA). In reality, this last method has no exclusive hold over the deductive process: "None of the so-called inductive methods is exclusively so; the deductive process, which comes naturally to any person aware of the finality of his work, is never absent from it" (Lievens, op. cit., p. 262).
The principle of FTA is the following : "starting from the undesired event which is unique and well defined, FTA consists of identifying and logically representing the combinations of primary events that lead to the fulfillment of the undesired event." (Signoret and Leroy, 1986, p. 1602).
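The FTA principle just quoted can be sketched as a Boolean combination of primary events leading to a single, well-defined top event. The events and gate structure below are entirely hypothetical, chosen only to illustrate AND and OR gates:

```python
# Minimal illustrative fault tree (hypothetical primary events): the
# undesired top event occurs when primary failures combine through
# AND/OR gates, as in the FTA principle quoted from Signoret and Leroy.
def top_event(pump_fails, backup_fails, alarm_fails):
    # Loss of cooling requires BOTH the pump and its backup to fail
    # (AND gate) ...
    loss_of_cooling = pump_fails and backup_fails
    # ... and the top event occurs if cooling is lost OR a pump failure
    # goes unannounced because the alarm has also failed (OR gate).
    return loss_of_cooling or (pump_fails and alarm_fails)

print(top_event(True, True, False))   # True : both pumps down
print(top_event(True, False, False))  # False: backup holds, alarm works
```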
The commonly used expressions "inductive" and "deductive" are convenient because they permit us to make a distinction between two procedures for analysis of a technical system : progression from the general to the particular and vice versa.
But in the event, a more attentive examination of the criteria permitting us to make this distinction highlights an ambiguity as to just what is being referred to : the mode of reasoning intrinsic to each method or the mode of examining the technical system ?
For example, an Event Tree and a Fault Tree both require a deductive mode of reasoning: given the state of the system at a particular level or instant, deduce from it the following state. In contrast, the mode of examination of the system will be inductive in the first case (progression from the elementary events towards the final undesirable ones) and deductive in the second (inverse progression). Moreover, the types of logic used are themselves different: binary on the one hand, and Boolean on the other.
Figure 3, p. 27, locates both procedures in relation to the mode of system analysis and to the flow of events in time. The inductive (or direct) method, which starts from primary events and leads to the identification of undesired ones, is "in phase" with the virtual progress of these events. On the contrary, the deductive (or inverse) method, which reconstructs the logic of the chains between primary events and undesirable ones, is in "phase opposition" with respect to their progress in time.
This characteristic allows us to carry out simulations of dysfunctions for various technical devices: studies of control circuit failures, non-material barriers, programmed logic systems, etc. (cf. for example, Schweitzer and Gerardin, 1984).
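Such a simulation of dysfunctions can be illustrated, in a much reduced form, by enumerating the branches of a small event tree: starting from an initiating event, each safety barrier either works or fails, and the branches are followed "in phase" with the progress of events. The initiating event and barrier probabilities below are hypothetical:

```python
from itertools import product

# Illustrative event tree (hypothetical initiating event and barrier
# success probabilities): each branch probability is the initiating
# frequency multiplied by the success or failure probability of each
# barrier along the branch, independence being assumed throughout.
p_init = 1e-4                                   # assumed initiating-event frequency
barriers = {"alarm": 0.99, "sprinkler": 0.95}   # assumed success probabilities

sequence_probs = {}
for outcome in product([True, False], repeat=len(barriers)):
    p = p_init
    for works, p_ok in zip(outcome, barriers.values()):
        p *= p_ok if works else (1.0 - p_ok)
    sequence_probs[outcome] = p

for outcome, p in sequence_probs.items():
    print(outcome, f"{p:.3e}")
```

By construction the branch probabilities sum back to the initiating-event frequency, which is a useful sanity check on any event tree.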
404 - STAGES OF A STUDY
Implementing a systems safety study consists of carrying out four principal phases (after Signoret and Leroy, 1986):
- Defining the system under study
- Identifying the risks posed by the system
- Modelling the logic of operation and failures of the system
- Making qualitative and quantitative analyses.
Numerous authors have proposed a description of the implementation stages of a systems safety study (Barbet and Guyonnet, 1984; Sutter and Troexler, 1987). The process presented by Signoret and Leroy (1986) has been chosen as it summarizes fairly well the essence of the different positions.
Definition of the system under study
Knowledge of the system :
This requires collecting together documentation about production and safety objectives, the regulations in force, technical complexity (schematic diagrams, detailed plans, specialists' explanations).
Description of the system's environment, especially as regards interactions with the outside world (inflow of primary materials and outflow of finished products).
Knowledge of the safety objectives which the system must respect: definition of abnormal use (conditions of use outside specifications), supply of information about undesirable system events, etc.
This aspect presents, in particular, some problems of division of responsibilities in the case of a dysfunction or destruction of the system (for further information, cf. Deschanels and Lavedrine, 1984). We would add here that specifications related to maintenance can directly affect safety. The example quoted in a collective work devoted to "technological accidents" (French CNPP - AFNOR, 1988) is illustrative in this regard. This concerns the accident that occurred in Chicago in 1979: while taking off, an aircraft had one of its engines fall off; it lost equilibrium and crashed at the end of the runway. "The inquiry discovered why the strut between wing and engine pylon had given way. This link is worked a lot, which makes periodic dismounting, inspection and remounting necessary. The procedure foreseen by the manufacturer was first to detach the engine from the pylon, then the pylon from the wing, and to do the reverse on re-assembly. But the operator had decided, with economy in mind, not to disassemble the pylon from the engine. In that way, a mass of seven tons, instead of 600 kilos, was detached from the wing and then remounted on it; some error in manipulation distorted the wing - a light and pliable part - and made the link fragile" (p. 44).
Identification of the hazards posed by the system
The specialists' knowledge of hazards is rarely exhaustive (Signoret and Leroy, op. cit.), and it is even less so when the technical system under consideration is new or very complex. Methods that permit a more systematic identification of the hazards have consequently been developed. The best known are:
- Preliminary Hazards Analysis - PHA
- Failure Modes and Effects Analysis - FMEA
- Event Tree Analysis - ETA.
These three methods are the subject of a detailed presentation in Article 500, "Systems Safety Methods".
Modelling the system's operational and failure logic
Implementation of inductive methods for hazards identification essentially calls for a deductive process that allows us to reconstruct the sequences of events that are to all intents and purposes involved in the appearance of the undesired dysfunctions. "It is a question of establishing a model to correctly represent the causality between these (dysfunctions) and the primary events - failure of a given component, external events etc." (Signoret and Leroy op. cit., p. 1602).
The most used method of presentation is that of Fault Tree Analysis, whose history has been mentioned and which is described in Chapter 504.
Qualitative and quantitative analysis
Beginning with the models, this last stage consists of evaluating the probability of occurrence of the hazards envisaged (Signoret and Leroy, op. cit., p. 1599). A qualitative analysis (classification of the undesirable events as a function of their relative importance, their mode of appearance, etc.) and a quantitative one (allocation of probabilities of occurrence to each of these same events) both lead to an evaluation of the level of risk inherent in the system.
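The quantitative side of this stage can be sketched as follows, assuming independent primary events with hypothetical probabilities: AND gates combine by product, OR gates by inclusion-exclusion:

```python
# Hedged sketch of the quantitative stage: once probabilities have been
# allocated to the primary events (hypothetical values below), the model
# combines them -- AND gates by product, OR gates by inclusion-exclusion
# -- under an assumption of independent events.
p_a, p_b = 1e-3, 2e-3            # assumed primary-event probabilities

p_and = p_a * p_b                # both primary events must occur
p_or = p_a + p_b - p_a * p_b     # at least one primary event occurs

print(f"AND gate: {p_and:.1e}")  # AND gate: 2.0e-06
print(f"OR gate : {p_or:.2e}")
```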
The results obtained will be a valuable aid once a decision has been made to proceed, if necessary, with modifications of the system, in order to improve safety.
The authors indicate additionally that Hazard Analysis allows those responsible for a project (and from the design stage) if not entirely to suppress the risks posed by an installation, "at least, to anticipate them, and so minimize their consequences by modifying the characteristics initially intended for the system". (Signoret and Leroy, op. cit., p. 1599).
This remark is important because it puts the accent on the importance of making a system safety study in the design stage. In this sense, the four stages just presented describe, in a certain manner, an exhaustive and idealized process. In practice, we have some difficulty in imagining that such a complex procedure can always be suitable in the course of operation of a technical system, because of evident feasibility and cost reasons.
Fig. 4, p. 28, summarizes the different stages presented by the authors.
405 - PRINCIPLES AND USES
The expression "systems safety" serves to designate very dissimilar applications, as much as regards the objectives assigned as in the means - technical, human, financial - employed to achieve them. What is there effectively in common between a study of system safety applied on the one hand to a large industrial complex and on the other to a machine tool's control circuit? Three comments will bring a shade of realism to the implementation of the method.
Respect for the fundamental objectives
The comparative examination of "acceptable" versus "evaluated" risk most often characterizes only certain very thorough studies: either those dealing with industrial activities that imply very high risks to the population (the problem of major technical hazards), or those which concern privileged sectors of activity with considerable technical resources (large budgets, weak or absent competition, highly developed research, etc.).
Weapons systems provide an illustration of this last point. For example, Louet and Barrault (1986) presented a safety study of a submarine's on-board missile system. This study conforms perfectly to the basic principles of systems safety : a comparative examination of accepted and evaluated risks, and decisions made (Fig. 5, p. 29).
Nature of the system studied
The majority of studies are concerned with more or less complex closed systems, not with systems exposed to the environment. Closed systems entail few or only simple exchanges with the outside world, in the form of information or energy. They are not very capable of reacting to outside changes and are even less able to change their behavior according to circumstances; essentially, most traditional technical mechanisms fall into this category. These studies, although motivated most often by the concern for general prevention (that is, being limited to the safety of industrial personnel), use analytical methods (FMEA and FTA above all) that were developed in the field of systems safety.
For a general presentation of systems safety applied more specifically to closed systems, such as machine systems, cf. HO (1976).
Here, the expression designates the use of tools rather than the implementation of a methodology properly speaking. In this context the analyses generally remain qualitative, or semi-quantitative for some applications.
Criteria for choosing the method
Chapter 403, "Induction and deduction", provided us with the opportunity of nuancing the formal distinction between inductive and deductive procedures. Lievens (op. cit.), on the other hand, is of the opinion that the choice of method(s) also depends on the degree of knowledge of the system under study. For example, at the end of a comparison between deductive and inductive methods, he points out that some of them depend on acquired experience of the risks.
In summary, the rational distinction between inductive and deductive tools is partly blurred in favor of more practical considerations (even if, elsewhere, this distinction is sanctified by use).
500 - SYSTEMS SAFETY METHODS
Carrying out a systems safety study requires using different analytical tools which vary in performance, field of application and complexity. Only the essential characteristics of the most common methods (along with the technical and operational criteria that permit us to distinguish between them) will be presented here. For a detailed technical presentation of the full set of methods currently in use, cf. Villemeur (1988).
The following four methods will be examined : Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Event Tree Analysis (ETA), and Fault Tree Analysis (FTA).
501 - PRELIMINARY HAZARD ANALYSIS - PHA
Presentation
PHA has, as its objective, the identification of risks presented by a system and after that "to define the design rules and procedures that will permit elimination or control of dangerous situations and the potential accidents that are thus made evident" (Lievens, op. cit., p. 124).
Various methods currently in use have comparable objectives ; see, for example, the HAZOP method, presented in the first part of this review of the methods of predictive hazard analysis (INRS Documentary Note ND 1768-138-90).
Essentially, it is at the design stage (and, according to the author, when the system puts into practice little-known technologies) that this type of analysis is of most interest. PHA must consequently be periodically updated during the design and development phases and finally in the course of industrial use.
The description of the results obtained is made by means of various modes of presentation :
- Column Tables
- Logical Trees
- finally, more synthetic summary documents (recaps).
Column Tables
These allow us to put the information into order as a function of previously defined concepts.
The following list of titles for each column can be proposed (Lievens, op. cit.) :
1) Subsystem or Function : identification of the assembly under study, whether a subsystem or a functional assembly.
2) Phase : identification of the phase or mode of utilization of the system during which certain elements can generate a hazard.
3) Dangerous elements : identification of subsystem or functional assembly elements associated with an intrinsic hazard.
4) Event causing a dangerous situation : identification of conditions, undesirable events, breakdowns or errors that can transform a dangerous element into a dangerous situation.
5) Dangerous situation : identification of dangerous situations resulting from the interaction of the dangerous element and the system assembly, following an event as described in (4) above.
6) Event causing a potential accident : identification of conditions, undesirable events, breakdowns or errors that can transform a dangerous situation into an accident.
7) Potential accident : identification of the possibilities of an accident resulting from dangerous situations, following an event as described in (6).
8) Consequences : identification of the consequences associated with potential accidents, when the latter occur.
9) Gravity : qualitative measure of the gravity of the preceding consequences, within the scope of the MIL-STD-882 classification : minor, serious, critical, catastrophic (cf. Chapter 402 "Risk Criteria").
10) Preventive measures : collection of information touching on the efficiency of the proposed measures and their introduction into the system or its operating procedures.
This table is filled in by a specialist who has a good knowledge of the system, taking into account the dynamic relations that exist between the different stages of the analysis.
Thus, for a dangerous element (Col. 3), for example a lathe, certain conditions must be fulfilled to bring about a dangerous situation, for example, the appearance of unexpected vibrations (the "Event" of Col. 4).
Similarly, a dangerous situation does not necessarily lead to a potential accident (Col. 7). Another event or supplementary condition (Col. 6) will have to appear ; for example, an operator will have to be nearby. In this respect, the author specifies that "the joint application of the inductive and deductive approaches" is naturally brought into play as soon as the analyst makes an effort to identify dangerous situations, that is, when he asks himself the question, "in what way can the dangerous element lead to a potential accident ?".
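The ten-column structure lends itself naturally to a record type. The following sketch (field names are our own rendering of Lievens's headings, and the row content is invented around the lathe illustration above) shows one possible encoding :

```python
from dataclasses import dataclass

@dataclass
class PHARow:
    """One line of a PHA column table (fields paraphrase Lievens's ten headings)."""
    subsystem: str             # 1) subsystem or functional assembly under study
    phase: str                 # 2) phase or mode of utilization of the system
    dangerous_element: str     # 3) element carrying an intrinsic hazard
    situation_trigger: str     # 4) event turning the element into a dangerous situation
    dangerous_situation: str   # 5) the resulting dangerous situation
    accident_trigger: str      # 6) event turning the situation into a potential accident
    potential_accident: str    # 7) the possible accident
    consequences: str          # 8) consequences if the accident occurs
    gravity: str               # 9) MIL-STD-882 class: minor/serious/critical/catastrophic
    preventive_measures: str   # 10) measures, their efficiency and their introduction

# Hypothetical row built on the lathe example from the text
row = PHARow("machining cell", "normal operation", "lathe",
             "appearance of unexpected vibrations", "workpiece may be ejected",
             "operator standing nearby", "operator struck by workpiece",
             "injury", "critical", "chuck guard ; vibration monitoring")
```

Such a record makes the dynamic relations between columns explicit : each trigger field (4 and 6) documents the transition to the next stage of the accident process.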
The distinction proposed between "dangerous element", "dangerous situation" and "potential accident" implicitly takes us back to a certain conception of the accident, which can be represented on a schematic such as Fig. 6, p. 30.
This representation of the process leading to the (potential) accident has some heuristic value, that is, it is an aid to conceptualizing ; it is effectively a useful guide for the analyst in the task of breaking down the system into its elements, so as to make the inherent risks evident.
Logical Trees
A different presentation of the data produced by the PHA can be obtained by using a "Logical Tree" type of structure (Fig. 7, p. 31).
For each subsystem studied, this tree breaks down the chain of circumstances leading to the unwanted event, and it includes a mention of the seriousness of the consequences in the case of an accident. The concepts used are unchanged with respect to the presentation in the form of tables ; the only difference is in the appearance, which better highlights the causal links.
Another mode of description
The (French) Union of Chemical Industries (Notebook No. 1, 1981) proposes a use of PHA based upon the distinction between products and processes, a distinction which is particularly well adapted to the chemical industry that the UIC represents (concerning hazard analysis in the process industries, cf. also Abribat et coll., 1988).
"Product Sheets" and "Process Sheets" (Table II, p. 32) permit the analyst to take into account systematically, and according to the context, the whole set of safety data. By means of an indexing system, these sheets give access to a Safety File which groups together all the information corresponding to each heading.
502 - FAILURE MODES AND EFFECTS ANALYSIS - FMEA
Presentation
FMEA is the most widely used analytical tool and, among the set of inductive techniques, one of the most efficient at our disposal. Lievens (op. cit., p. 130-131) indicates that it is a question "of a procedure very commonly used in reliability studies to : - Analyse the consequences of failures which can affect an equipment item or a system - Identify the failures that have important consequences, with regard to different criteria such as : success of the mission, availability, maintenance loads, safety, etc.".
The application of this tool for safety purposes results from work carried out by various authors and firms (McDonnell Douglas Co., in particular). See Table VII for the distinction between "reliability" and "safety".
Principal objectives and implementation of FMEA
Objectives and implementation can be described in seven points : - Definition of the system to be studied - Identification of failure modes - Research into the causes of appearance of the failure modes - Analysis of failure effects - Examination of possibilities of compensating for failure effects - Evaluation of the risk associated with each failure mode - Proposal for corrective actions and preventive measures.
Definition of the system to be studied
This step leads to "breaking down the system into elements or components for which one has information judged to be sufficient" (Barbet and Guyonnet, op. cit., p. 44).
This stage is most often put into effect by means of a block diagram which allows us to highlight the set of functions that should be performed by the system.
The authors point out that the degree of breaking down is not definable "a priori". "It depends on the system to be studied, on the extent of the analysis to be carried out, and on the information available, according to the phase of the design that is being addressed" (op. cit., p. 44). This point is confirmed by Lievens (op. cit.) indicating that it is convenient in practice to choose a level that makes it possible to obtain sufficient data about each failure mode. The choice of the "splitting up" level, from which the FMEA should be carried out, "is a matter of timeliness, and not one of principle" (p. 132).
Identification of failure modes likely to damage correct system operation
The establishment of failure modes comes down to determining "the effect by which a failure is observed" (Barbet and Guyonnet, op. cit., p. 44) or, in other words, asking oneself the question "How ?" : "how can the functions to be carried out be themselves affected ?" (Lievens, op. cit., p. 132).
The author (p. 133) proposes six categories of failure modes :
- Output blocked at level 0 (or 1) : this is the case of a broken connection, or the failure of an electronic component in short circuit (or open circuit).
- Damaged output : this is, for instance, the case of a loss of flow in a hydraulic circuit that could bring lower cooling efficiency, excessive delay in braking, insufficient lubrication, etc. The general methods of drift analysis can be applied to the case of damaged outputs.
- Intermittent failures : we can cite the example of an electronic component in a logical system whose output oscillates permanently between 0 and 1. Such failures are among the most difficult to analyze, as much using FMEA as any other safety analysis method.
- Excessive output : for instance, the case of a liquid whose temperature would be too high following failure of a thermostat.
- Unexpected output : for instance, the case of an alarm which goes off at a moment when it has no reason to do so.
- Undesirable output : we can cite the example of electrical equipment that carries out the functions for which it has been designed but which puts out excessive heat and can, for this reason, endanger the proper operation of neighboring equipment.
The UIC proposes another definition of the concept of failure mode (Notebook, 1981) : "a disturbance in the operation or performance of a subsystem, assembly or component". INRS points out, on the other hand, that it is possible to analyze the behavior of a subsystem from the point of view of : - A functional failure (example : premature operation, unexpected failure, etc.). - Performance failures (examples : on a drive motor, speed variations, vibrations, sparks, etc.).
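Lievens's six failure-mode categories can be sketched as a simple enumeration (the identifiers and short descriptions below are our own paraphrase of his labels, for illustration only) :

```python
from enum import Enum

class FailureMode(Enum):
    """Lievens's six failure-mode categories (identifiers are our paraphrase)."""
    OUTPUT_BLOCKED = "output stuck at level 0 (or 1)"
    DAMAGED_OUTPUT = "degraded output (drift)"
    INTERMITTENT_FAILURE = "output oscillates unpredictably"
    EXCESSIVE_OUTPUT = "output beyond its normal range"
    UNEXPECTED_OUTPUT = "output produced when it should not be"
    UNDESIRABLE_OUTPUT = "normal function with a harmful side effect"

# Example classification: a thermostat failure letting a liquid overheat
mode = FailureMode.EXCESSIVE_OUTPUT
```

A fixed enumeration of this kind helps keep an FMEA worksheet consistent : every recorded failure must fall into exactly one of the six categories.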
Search for causes of appearance of failure modes
The search for the possible causes for a failure to appear is made at the same time as the failure modes are identified (Damien, 1985, p. 50). This author, whose procedures were inspired by the UIC's work, points out that the causes should be sought :
- In the material itself (example : mechanical and electrical breakage, deformation, wear, seizing-up, etc.), - In the inputs (example : energy, connection to the preceding component, lubrication, etc.).
Analysis of failure effects
The analysis of failure effects should be carried out on a case-by-case basis, after specifying the failure modes (UIC). This stage generally points out :
• Local effects, for example : - subsystem : motor drive, - local effect : drive stopped.
• System effects : in the preceding example, if the motor drives an agitator, the effect on the system can be a rise in temperature, and indeed an acceleration of the reaction. If it is a pump, the effect can be a reversal of its direction of rotation (examples taken from UIC Notebook No. 4).
Examination of possibilities of compensating for failure effects
Damien (op. cit., p. 50) presents three ways of reducing failure effects :
- Reduction of the probability of failure (example : safety devices such as contact keys, written procedures, preventive maintenance, etc.). - Reduction of failure propagation (example : doubling of sensors, alarms, etc.). - Reduction of failure gravity (example : fire-door, screens, etc.).
Evaluation of the risk associated with each failure mode
Risk evaluation is generally carried out using scales for gravity and probability (applied first to each failure mode).
By way of example, the scales proposed by the UIC are shown in Tables I and II, pp. 32-33.
• Gravity Evaluation : the levels are defined by the consequences of the failures or faults in the operation of a subsystem, or of components.
• Probability Evaluation : it is often difficult to obtain the failure probabilities and we proceed to a semi-quantitative evaluation. The failures are classed at six probability levels.
• Risk Evaluation is expressed by means of a 2-digit number, combining previously defined levels of Gravity and Probability (ex. : gravity 3, probability 2 = Risk 32).
The set of analytical results is then carried over to a "Criticality grid", having Gravity on the X-axis and Probability on the Y-axis.
A number, characteristic of the risk, is given to each cell ; consequently this varies from 11 (minimum) to 66 (maximum risk).
It is important to note here that this mode of representation (shading) gives priority to Gravity over Probability. This is expressed by the manner of reading the number, in the direction "column - line". Table III, p. 33 (UIC example), thus permits us to state that priority is given to Cell 63 (a "major" and "rare" event) in comparison to Cell 36 (a "serious" and "very frequent" event).
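The two-digit risk code and the "column - line" reading order can be sketched as follows (a minimal illustration of the UIC convention ; the function names are ours) :

```python
def risk_code(gravity: int, probability: int) -> int:
    """UIC-style two-digit risk number: gravity digit first, probability second."""
    assert 1 <= gravity <= 6 and 1 <= probability <= 6
    return 10 * gravity + probability

def priority_key(code: int):
    """'Column - line' reading order: gravity dominates probability."""
    gravity, probability = divmod(code, 10)
    return (-gravity, -probability)

# Cell 63 ("major", "rare") outranks cell 36 ("serious", "very frequent")
cells = sorted([risk_code(3, 6), risk_code(6, 3)], key=priority_key)
```

Sorting with this key reproduces the priority rule of the text : among the two cells above, 63 comes first even though its probability digit is lower.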
Depending on the needs and possibilities, risk evaluation can also be envisaged in a strictly quantitative manner, by considering the probability of occurrence of each failure mode. The analysis then is called Failure Mode Effects and Criticality Analysis (FMECA).
Proposal for corrective actions and preventive measures to eliminate and control hazards detected
Identifying situations liable to deteriorate into accidents should allow us to decide upon the acceptability or non-acceptability of the corresponding risks. In the latter eventuality (unacceptable risk), the study will lead us to carry out corrective actions or to propose preventive measures.
• Corrective actions will lead for example to a specification for the following (Lievens, op. cit.) : - Existing alarms - Knowledge of other failures that trigger those alarms - Possibility of failure identification by personnel responsible for running and maintaining the system - Immediate corrective actions and long-term ones - Parameters to watch out for - etc.
• The preventive measures taken should be reviewed and periodically updated to check their efficiency as time goes by. The preventive measures labeled by Lievens (op. cit.) as Primary Safety (Secondary and Tertiary Safety being concerned with controlling the consequences of accidents) put into effect different techniques. By way of example, the author picks out the following measures :
- Hazard elimination (ex. : use of non-inflammable materials)
- Limitation of dangerous parameters (ex. : use of low voltages - 12/24 V - for manual electrical tools, continuous monitoring and automatic control of critical working parameters)
- Isolation devices ; blocking and banning devices (ex. : making inflammable liquids inert, locking of electrical equipment, operation of a car's starter motor contingent on fastening safety belts)
- Safety following failure (fail-safe) (ex. : use of fuses and circuit-breakers)
- Reduction of probabilities of failure or error (ex. : over-dimensioning of important elements, use of redundancy)
- Recovery (ex. : careful cleaning of the ground after accidental spreading of an inflammable liquid).
Presentation of results
The results of an FMEA are presented in the form of column tables whose structure can vary as a function of context and needs (adding or suppressing certain information). All of them, however, respect as a whole the seven stages described above.
An extract of the results of an FMEA conducted by the "National Safety Group" of the National Powders and Explosives Company (SNPE), dealing with the safety of a pyrotechnic installation, is shown in Table IV, p. 34 (study presented by Damien, op. cit.).
503 - EVENT TREE ANALYSIS - ETA
This procedure permits us to picture the set of possibilities that result from different combinations of events.
The development of the tree is done starting with an initial event and progressing according to a binary logic (yes/no, 1/0). From the point of view of the implied reasoning, this mode of investigation allows us to clearly distinguish this method from PHA and FMEA. As each event leads to the identification of two possible successor states, n events taken into consideration will lead to 2**n possible paths and consequently as many final possibilities.
An illustration of this procedure will borrow the example proposed by Barbet and Guyonnet (op. cit.), which is concerned with examining the risks due to possible failure combinations of the main functions of a smoke removal system, in case of fire (Figs. 8 and 9 p. 35).
A first solution envisages triggering a sequence consisting of the following five functions :
Solution A : - Blowing down stairway (BS) - Blowing down airlock (BA) - Smoke Extraction from airlock (EA) - Blowing of air circulation (BC) - Extraction of air circulation (EC).
A second solution retains the following sequence (the order is unchanged but two functions are suppressed) :
Solution B : 1. Blowing down stairway (BS) 2. Blowing down airlock (BA) 3. Extraction of air circulation (EC).
Each of these functions represents an event. Both solutions lead to highlighting the set of possible undesirable consequences, presented below :
- Class 1 (Cl 1) : this class groups together the failure combinations which do not change the smoke removal system sufficiently to ensure the protection of the stairway and satisfactory sweeping of the air circulation.
- Class 2 (Cl 2) : this class groups together all the cases where only the stairway protection is guaranteed. Sweeping of the air circulation is no longer carried out correctly.
- Class 3 (Cl 3) : this class groups the breakdown cases where only the satisfactory sweeping of the contaminated air circulation is assured. Stairway protection is no longer achieved.
- Class 4 (Cl 4) : this class groups the failure combinations that achieve neither a satisfactory circulation sweep nor any protection of the stairway well. In practice, this is equivalent to an absence of smoke removal.
- Class 5 (Cl 5) : here we group the failures that, during a fire, would bring about a configuration more dangerous than a situation where smoke is simply not removed.
- Class 0 (Cl 0) : a sixth class which represents the normal state of operation of the installation.
Solution A leads to the event tree presented in Fig. 8, p. 35. It presents eight cases leading to Class 5 (the most dangerous) consequences.
Solution B leads us to examine the situations given in Fig. 9, p. 35. No Class 5 consequence appears in this case.
This example illustrates a possible use of the Event Tree technique. Through systematic examination of different system dysfunction modes it allows us to identify the safest solution.
The Event Tree lends itself fairly well to quantitative analysis : assignment of probabilities of failure and/or proper operation to each event (complementary probabilities) and calculation of the resulting probabilities. The Accident Sequence Precursor (ASP) methodology is an illustration of the quantitative process associated with the Event Tree method (cf. Cooke and coll., 1987).
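As a minimal sketch of this quantitative process, the following code enumerates the 2**n paths for the three functions of Solution B and assigns each path a probability, assuming independent failures ; the failure probabilities are hypothetical, not taken from the study :

```python
from itertools import product

# Solution B functions with hypothetical failure probabilities (not from the study)
failure_prob = {"BS": 0.01, "BA": 0.02, "EC": 0.05}

def path_probability(states):
    """Probability of one path; states maps function -> True (works) / False (fails).
    Failures are assumed independent (complementary probabilities per event)."""
    p = 1.0
    for name, works in states.items():
        pf = failure_prob[name]
        p *= (1 - pf) if works else pf
    return p

names = list(failure_prob)
paths = [dict(zip(names, combo)) for combo in product([True, False], repeat=len(names))]
total = sum(path_probability(s) for s in paths)      # the 2**n paths are exhaustive
p_nominal = path_probability({"BS": True, "BA": True, "EC": True})
```

Because the paths are exhaustive and mutually exclusive, their probabilities sum to 1 ; grouping paths by consequence class (Cl 0 to Cl 5) would then give the probability of each class.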
504 - FAULT TREE ANALYSIS - FTA
Presentation
This is a matter of making a logic diagram representing the combinations of events that lead to the production of an undesirable output defined "a priori", within the framework of a system intended to guarantee a specific mission. It is formed of "successive levels in such a way that each event can be generated starting from events on the lower levels by means of various logical operators (or "gates")" (Pages and Gondran, 1980, p. 49). The deductive process allows us to develop the tree starting from the event at the top, continuing until we arrive at a series of basic events.
Some confusion has arisen between the Fault Tree technique (whether "failures" or "faults") and that of the Causal Tree. Most authors in fact use both expressions without distinction to designate the predictive method of analysis presented here.
It is useful to remember that the method of Causal Trees developed by the INRS aims at representing the set of facts that caused an accident and not at making an "a priori" accident risk analysis. On the other hand, this method tends to highlight accident factors that are predominantly socio-technical and organizational in nature and not essentially technical, the latter type of factor being characteristic of the FTA method (for more details, cf. Leplat, 1985, pp. 55-59 and Guillermain and coll., 1989, p. 5).
The basic, or elementary, events are characterized by the three following criteria : - They are independent of each other - Their probabilities of occurrence can be calculated or estimated - The specialists concerned do not feel the need to break them down into simpler events.
Fig. 10, p. 36, illustrates the appearance of a Fault Tree (cf Table V, pp. 36-37, for a recap of the principal symbols used).
The criterion "independent of each other" means that an elementary event results neither directly nor indirectly from another event of the same level (that is, the probability of failure of component A does not include the probability of failure of component B).
Principles of application
To be carried out efficiently, a Fault Tree Analysis must keep to the following stages : - Knowledge of the system - Definition and choice of the undesirable event to be studied - Construction of the Tree - Use of the Tree.
Knowledge of the system
Those responsible for the study must have a very good knowledge of the system (its design, fabrication and use) to be able to analyze its possible failures. Besides, the objectives very often require mobilizing multi-disciplinary teams.
Definition and choice of the undesirable event to be studied
This stage, the definition of the main event, is essential, since it conditions the relevance and scope of the analysis.
"The main event can be defined a priori as being a potential known event that is characteristic of the system under study and made obvious by experience, in particular by analysis of incidents that have occurred on the system studied or on equivalent systems." (Barbet and Guyonnet, op. cit., p. 45).
It is useful to complete this definition by remembering that the knowledge of this event can be obtained at the end of a general Hazard Identification phase (and calling upon, if the case arises, a method such as PHA or FMECA).
A first estimate of these risks, in frequency and gravity, will make easier the choice of the event or events that will have to be the subject of an FTA.
Construction of the Tree
The construction of the tree is made according to a deductive logic and its graphical formulation makes use of various symbols.
This stage generally calls upon various abilities in order to permit as complete a development as possible. In this regard, Lievens (op. cit.) specifies that FTA is not easy to handle, which justifies, from his point of view, recourse to multidisciplinary teams : "the Tree must be constructed and used by multidisciplinary teams, putting together a large number of specialists of various backgrounds. Each of them sees the events related to his speciality dispersed among several points on the Tree, drowned in the middle of other events that are of less concern to him" (p. 88).
Use of the Tree
The use of a Fault Tree can remain qualitative or be extended if necessary by a quantifying stage, when there exist sources of data about the failure rates of different components.
An intermediate solution consists of carrying out a semi-quantitative analysis, using a procedure of classifying elementary events in terms of risk levels (a UIC type of classification).
Qualitative use consists of analyzing the tree in the context of its logical structure. Thus, examination of different possible scenarios leads us to identify the events or sets of basic events connected directly with the main event (the "minimal section", or minimal cut set, idea). For example, a basic event tied to the main one by a succession of OR gates leads logically to the principal event because no intermediate combinations exist.
On the other hand, the logical structure of a Fault Tree allows us to use Boolean algebra, that is, it allows us to express this structure by means of logical equations. Two important consequences result from this : - The possibility of simplifying the Tree's logical structure by clarifying false redundancies. The reduction of these (Boolean reduction) involves simplifying certain Boolean expressions and consequently the structures that they represent.
- The possibility of making simulations on the computer. These will be qualitative simulations whose objective concerns the Tree's structure and not quantitative data.
These simulations permit us to examine the different combinations existing and to summarize the Tree into the set of minimal sections. Each Fault Tree can in fact be described by means of a finite number of "minimal sections", linking elementary events to the undesirable output. Such simulations can also serve, if necessary, to test various proposals for system modification (by gate substitution, for example).
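A minimal sketch of such a qualitative simulation, assuming a fault tree encoded as nested AND/OR gates (the tree used here is hypothetical), computes the "minimal sections" (minimal cut sets) by Boolean expansion and reduction :

```python
def cut_sets(tree):
    """Cut sets of a fault tree given as nested tuples:
    ('OR', child, ...), ('AND', child, ...) or a basic-event name (str)."""
    if isinstance(tree, str):
        return [frozenset([tree])]
    gate, *children = tree
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                       # any child's cut set suffices
        return [s for sets in child_sets for s in sets]
    combined = [frozenset()]               # AND: merge one cut set per child
    for sets in child_sets:
        combined = [acc | s for acc in combined for s in sets]
    return combined

def minimal(sets):
    """Boolean reduction: drop any cut set containing another as a proper subset."""
    return [s for s in sets if not any(t < s for t in sets)]

# Hypothetical tree: top event if A fails, or if B fails together with (A or C)
tree = ("OR", "A", ("AND", "B", ("OR", "A", "C")))
mcs = minimal(cut_sets(tree))              # the false redundancy A and (A or C) is removed
```

On this toy tree the expansion yields {A}, {A, B} and {B, C} ; the reduction step eliminates {A, B}, an example of the "false redundancies" mentioned above.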
Quantitative use requires having probabilistic data on the events (especially component reliability). These probabilities most often come from statistical estimates made from trials or from data gathered during previous operation of similar systems, and this boils down to highlighting the "de facto" linkage between the "a priori" and "a posteriori" processes : starting from the probabilities of occurrence of primary events (estimated statistically from the results of past operations) we will subsequently calculate the probability of occurrence of unwanted events. In other words, "it is from system component reliability data which already exists that we will evaluate the risk presented by new systems" (Signoret and Leroy op. cit., p. 162).
Knowledge of the probability densities for the appearance of each basic event (that is, the laws of distribution of failure probabilities) allows us to : - Determine the overall probability of appearance of the main event - Determine the most critical paths, in other words the most probable of the combination of events likely to bring about the main one.
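Under the usual assumptions (independent basic events, no event repeated under several branches), the overall probability of the main event can be sketched as follows ; the gate structure and the probabilities are illustrative only :

```python
def top_event_probability(tree, p):
    """Probability of the top event of a fault tree given as nested tuples,
    assuming independent basic events, none repeated under several branches."""
    if isinstance(tree, str):
        return p[tree]
    gate, *children = tree
    probs = [top_event_probability(c, p) for c in children]
    if gate == "AND":                       # all children must occur
        result = 1.0
        for q in probs:
            result *= q
        return result
    survive = 1.0                           # OR: 1 minus product of complements
    for q in probs:
        survive *= 1 - q
    return 1 - survive

# Illustrative tree and basic-event probabilities (not from any real study)
p_top = top_event_probability(("OR", "A", ("AND", "B", "C")),
                              {"A": 0.01, "B": 0.1, "C": 0.2})
```

The same recursion, applied path by path, also identifies the most probable combination of events leading to the main one, i.e. the most critical path.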
Illustration
This illustration repeats the principal elements of a safety study presented by Chereau and coll. (Lievens, op. cit., pp. 275-294), which refers to a relatively simple technical system : a gas generator for a car safety airbag. This pyrotechnic system is designed to function only once per unit. The device protects the driver and/or passenger in case of a frontal collision. The system is composed of : - A collision shock detector (accelerometer) - A pyrotechnic device generating cold gases
- An inflatable bag.
The system's intended operation is as follows : - To be non-functional at a collision velocity < 25 km/hr, - To be functional above this velocity (serious collision zone).
Analysis of the safety consequences of different dysfunction modes shows that inadequate or untimely operation of the system introduces an important risk in all three situations (serious collision, light collision, and no collision) examined by the authors.
The authors made a systematic study of the undesired "gas generator bursting" event which leads to the Fault Tree structure shown in Fig. 11, p. 38.
A simulation of the different combinations of events (known possibilities) made it possible to identify a first critical path :
Bursting Gas Generator
. . . Chamber Overpressure
. . . Non-functional Safety System
. . . Chamber Pressure Too Strong (p > 120 bar)
. . . Combustion Speed Too High
. . . Deficient Jet Geometry
. . . Inadequate Mechanical Strength
The event with the largest influence is the "non-functional safety system".
"Influence" in this context is defined as the ratio of the corresponding "a posteriori" probability to that of the unwanted event ("a posteriori" probability = probability that the state in question occurs and, simultaneously, the unwanted event takes place). Here the "influence" of the event "non-functional safety system" is : p "a posteriori" / p "unwanted event" = 0.776 (cf. Lievens, op. cit., p. 286).
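The "influence" so defined is, in effect, a conditional probability : that of the basic event given that the unwanted event has occurred. A sketch with a deliberately trivial tree and hypothetical probabilities estimates it by simulating the system state many times :

```python
import random

def leaves(tree):
    """Yield the basic-event names of a nested ('AND'/'OR', ...) tuple tree."""
    if isinstance(tree, str):
        yield tree
    else:
        for child in tree[1:]:
            yield from leaves(child)

def occurs(tree, state):
    """Evaluate the tree against a dict mapping basic event -> occurred (bool)."""
    if isinstance(tree, str):
        return state[tree]
    gate, *children = tree
    results = (occurs(c, state) for c in children)
    return all(results) if gate == "AND" else any(results)

def estimate_influence(tree, p, basic, trials=100_000, seed=1):
    """Monte-Carlo estimate of P(basic event occurred | unwanted event occurred),
    i.e. the 'a posteriori' / 'unwanted event' probability ratio."""
    rng = random.Random(seed)
    names = sorted(set(leaves(tree)))
    top = joint = 0
    for _ in range(trials):
        state = {n: rng.random() < p[n] for n in names}
        if occurs(tree, state):
            top += 1
            joint += state[basic]
    return joint / top if top else 0.0

# Trivial tree: theoretically, the influence of A is P(A) / P(A or B) = 0.5 / 0.75 = 2/3
influence = estimate_influence(("OR", "A", "B"), {"A": 0.5, "B": 0.5}, "A")
```

The estimate converges on the theoretical ratio ; on a real tree the same procedure ranks basic events by their influence on the unwanted output.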
The modifications should therefore address this part of the system in order to lessen the initial probability of failure.
A new simulation, carried out after introducing a probability five times smaller for the event under consideration (from 0.5 x 10^-3 to 10^-4), causes a new critical path to appear, which is notably simpler :
Bursting Gas Generator
. . . Metal Housing Too Weak
. . . Metallurgical Failure
This simulation allows us to highlight the need for improving the generator's safety system. This illustration of the use of FTA on a simple, closed technical system already gives an idea of the complexity of the process (this being an example of its use concerning the "nature of the system under study" described in Chapter 405).
The reader will picture without difficulty the degree of complexity that can be reached when such a tool is applied to systems or subsystems that are very complex and ramified (state-of-the-art technologies, process management, etc.).
600 - SYSTEMS SAFETY ISSUES
601 - RELIABILITY AND SAFETY
The numerous analogies between the two approaches have contributed to some confusion, in particular about the respective objectives being pursued. This situation has historical roots, to the extent that the development of systems safety studies has relied mainly upon techniques progressively developed by reliability specialists.
Reliability theory dates from the thirties ; it became an entirely separate discipline following the evolution of the primitive idea of the "failure rate" which, from a measuring instrument for comparing two past events (that is, working with frequency rates), became transformed into an instrument capable of providing predictive results (introduction of probabilities).
From the 1960s onwards, the extension of the principles of reliability studies from electronics to mechanical, hydraulic or electrical systems would lead to the development or adaptation of methods of systematic risk analysis such as Fault and Event Trees, etc.
The identical nature of the tools used by reliability and safety specialists, to which must be added the common use of statistical and probability calculation techniques, strengthens the feeling of being in the presence of one and the same discipline designated by two different names.
Table VI, p. 39, recalls the essential differences between reliability and safety. In particular it allows us to state that the two disciplines are not located on the same plane, as regards both objectives and the means of implementing them ; to such a degree that these objectives can come into conflict, particularly when the analyst finds himself dealing with systems for which the probability of mission success (reliability) must come after the (safety) requirement of not having a catastrophic event before or during that same mission.
However, we must not lose sight of the fact that the results of reliability studies furnish a quantity of data (elementary component failure rates in particular) that are indispensable for conducting quantitative studies of the dysfunction scenarios developed during systems safety studies.
602 - THE ORIGIN AND RELIABILITY OF DATA SOURCES
Two sources of information allow us to feed data to the quantitative risk analysis models :
Data Banks
These provide the user with "the most probable values of failure rates as a function of a certain number of constraints applied to the component: climatic and mechanical constraints (humidity, vibrations, pressure, lightning strikes), thermal constraints, and electrical ones (voltage, current, power)" (Chapouille 1972, pp. 52-53).
Operational data
These have the advantage of a certain realism regarding contextual factors - (environmental characteristics, operation beyond the specifications, maintenance faults, etc.). Besides, the observed dysfunctions are still amenable to statistical manipulation. However some warnings should be given about the reliability of these data.
Limits of data originating in technical literature
Based on the results of laboratory trials, these data run the risk of underestimating the constraints actually encountered in the real world.
Lievens (op. cit., p. 54) indicates, for example, that "the frequency and consequences of maintenance errors, which contribute significantly to overall failure rates, cannot be appreciated (in the laboratory)". This remark is taken up again by Cooke and coll. (op. cit.). The authors express, in a general way, their reservations about the confidence which should be given to data that are silent as to the exact origin of the indicated failure rates, in particular when maintenance characteristics must be considered "as a significant cause of failure".
On the other hand, the weighting coefficients used for the calculation of failure rates vary over a very wide range, which leads to assigning fairly different values depending on the parameters chosen (Laplace, 1987, p. 40). Does the user always have at his disposal adequate criteria to allow him to make a fully confident decision about the coefficient to be applied?
By way of illustration, Laplace (op. cit., p. 40) indicates the calculation mode used for determining the failure rate:

λ = πE . πQ . πL . λb

where:
- πE = 0.04 to 22 (range of variation 1 to 550): depends on the environment in which the component is used (environment without any special constraint = 0.04, missile launch environment = 22)
- πQ = 1 to 300: represents the quality of the component
- πL = 1 to 10: represents the "learning factor" for the manufacturer (a component that has been a long time in production is more reliable than a new one)
- λb = C1 . πT . πV + (C2 + C3) . πE

where:
- C1 depends on the number of logic gates
- πT depends on the working temperature of the component
- πV depends on the supply voltage
- C2 depends on the complexity of the component (same nature as C1)
- C3 depends on the number of interconnections to the outside world and the type of enclosure.
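A parametric model of this kind is easy to sketch in code. The sketch below assumes the MIL-HDBK-217-style structure quoted above; every numerical coefficient is a hypothetical placeholder chosen within the quoted ranges, not a value taken from Laplace.

```python
# Illustrative sketch of a pi-factor failure-rate model of the kind
# quoted from Laplace (1987).  All coefficient values are hypothetical
# placeholders chosen inside the ranges given in the text.

def base_rate(c1, pi_t, pi_v, pi_e, c2, c3):
    """Base failure rate: lambda_b = C1.piT.piV + (C2 + C3).piE."""
    return c1 * pi_t * pi_v + (c2 + c3) * pi_e

def failure_rate(pi_e, pi_q, pi_l, lambda_b):
    """Component failure rate: lambda = piE.piQ.piL.lambda_b."""
    return pi_e * pi_q * pi_l * lambda_b

# A benign ground environment versus a missile-launch environment,
# everything else held equal:
lam_b = base_rate(c1=0.01, pi_t=1.5, pi_v=1.0, pi_e=0.04, c2=0.005, c3=0.002)
benign = failure_rate(pi_e=0.04, pi_q=10, pi_l=1, lambda_b=lam_b)

lam_b_hard = base_rate(c1=0.01, pi_t=1.5, pi_v=1.0, pi_e=22, c2=0.005, c3=0.002)
harsh = failure_rate(pi_e=22, pi_q=10, pi_l=1, lambda_b=lam_b_hard)

# The environment factor alone spans several orders of magnitude.
print(harsh / benign)
```

The spread between the two results illustrates the point made above: the choice of coefficients dominates the final figure.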
Limits of operational data
These data are tainted with uncertainties which result principally from the widespread use of the two simplifying hypotheses that follow:
- constancy of failure rates ;
- independence of failures.
The hypothesis of constant failure rates boils down to considering only the "normal" working life, in other words excluding the teething period (debugging time in the engineer's vocabulary) and wear-out. If this hypothesis is justified for electronic components, which have a very long useful lifetime (up to 100,000 hours), it is much less justified for mechanical ones.
According to Lievens (op. cit., p. 54), this rate can vary as a function of environmental and operational constraints. He recalls especially that the constancy hypothesis "is only valid if the elements concerned are withdrawn from service before the phenomena of wear and deterioration appear" (p. 157).
The failure independence hypothesis boils down to considering that at any instant "t" the failure rate of any given element is independent of the failures which show up or have shown up on other elements.
In reality, this hypothesis comes to grief over three difficulties (Lievens, op. cit., p. 255):
- "It is extremely difficult to conceive of perfectly redundant assemblies, without any common element whose failure would bring about the failure of the assembly."
- "The failure of one element often brings with it, whether at the moment it is produced or with time, more elevated constraints at the level of other elements."
- "The constraints generated by the system's environment induce, at the level of various elements, other constraints which are not independent of one another. Thus at the moment when an element has the greatest probability of breakdown, the same will be true, in all likelihood, for the neighboring elements."
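The practical weight of the independence hypothesis can be illustrated numerically. The sketch below uses the classic "beta factor" treatment of common-cause failures, which is not discussed in the source; all figures are arbitrary.

```python
# Two nominally redundant elements, each with failure probability p.
# Under the independence hypothesis the pair fails with probability p**2;
# a common-cause contribution (the "beta factor" idea, an assumption of
# this sketch, not a model from the source) breaks that estimate.

p = 1e-3          # failure probability of one element (arbitrary)
beta = 0.1        # fraction of failures assumed to strike both at once

independent = p ** 2                              # the optimistic estimate
common_cause = beta * p + ((1 - beta) * p) ** 2   # beta-factor estimate

print(independent, common_cause)
```

Even a modest common-cause fraction raises the estimated pair-failure probability by about two orders of magnitude, which is why the third difficulty above matters so much in practice.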
603 - INTERPRETATION OF PROBABILITIES
Concept of a "rare event"
The use of the mathematical concept of probability to estimate the rate of occurrence of undesirable events on the scale of an industrial complex requires delicate interpretation. What, for example, is the meaning, in the nuclear field, of a probability of observing a serious accident expressed as 10**-7 (one ten millionth) per reactor and per year ?
Is such an event likely to be produced on average once every 10 million years ("length of time" interpretation), or is it rather more convenient, if we take into consideration a theoretical number of 10 million reactors in service, to envisage on average one accident per year ("quantitative" interpretation)?
Neither of these interpretations is legitimate. They are nonetheless fairly spontaneous as it is natural for one's mind to prefer an image for the representation of a number as small as 0.0000001 !
In reality, the measure of the probability of occurrence of an event does not in itself provide any information as to the event's position along the time axis. Such information is itself of a probabilistic nature (it refers to the distribution), and obtaining it requires making hypotheses about the behavior of the phenomenon under study (theoretical notions of density and law of probability) and/or having empirical observations at one's disposal.
A random event, improbable though it may be, can perfectly well take place in a second, a millennium, or never.
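This point can be made concrete under the standard assumption (implicit in figures of this kind) of a constant occurrence rate, for which the waiting time to the first event is exponentially distributed. The rate below matches the 10**-7 per reactor-year figure; the simulation itself is only illustrative.

```python
import random

# For a constant occurrence rate lam (per year), the waiting time to the
# first event is exponential with mean 1/lam.  The rate matches the
# 10**-7 per reactor-year figure discussed in the text.
lam = 1e-7
random.seed(0)  # fixed seed so the illustration is repeatable

samples = [random.expovariate(lam) for _ in range(10_000)]
samples.sort()

mean_wait = sum(samples) / len(samples)   # close to 1e7 years
earliest = samples[0]                     # can be far sooner than the mean

print(mean_wait, earliest)
```

The mean waiting time hovers around ten million years, yet the earliest sampled occurrence is dramatically sooner: the measure of probability says nothing about where on the time axis a particular realization will fall.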
Two principal modes of probability interpretation
In order to resolve this kind of interpretation ambiguity, we must distinguish separately between the mathematical concept of probability and the use which is likely to be made of it. Morlat (1983) recalls that objects in the real world, of which probability (conceptually defined) can be a model, are :
• Frequency
This is observed in particular in the field of inferential statistics, that is, when the probabilistic modelling of a natural phenomenon is confronted with the results of observation or experience. The interpretation of the measure of probability in frequency terms is characterized by the notion of failure rates, evaluated most often through statistical estimates (cf. Chapter 403). However, the user should question the practical significance of this type of interpretation when dealing with very rare events and (the possibility of) referring to accidents that are never observed.
• Subjective probability
This refers less to a technical definition of the concept of random events than to the measure of an individual's degree of conviction about a certain situation. The elementary event probability for the throw of a die is 1/6. This is a probability for which the degree of conviction is based upon the idea of equiprobability (P(1) = P(2) = ... = P(6)) and upon that of limiting frequency (for a large number of throws, the frequency of an elementary event tends towards 1/6). In contrast, two geologists will not have the same degree of conviction as to the existence of a petroleum reservoir underneath a given terrain. Here it is a question of a situation in which the same event arouses different degrees of conviction from one individual to another (cf. Tricot and Picard, 1969). To these authors, subjective probability is a measure of the confidence which a person can assign to an uncertain proposition.
Probability and decision making
Estimating the probability of occurrence of dysfunctions and, in particular, of serious accidents can scarcely have any objective other than to assist in making decisions on safety matters. That is why it is reasonable to think that calculated probabilities should be interpreted in the sense of "subjective probabilities", even if the language used is often that of "frequencies" (Morlat, op. cit.). A failure probability of 10**-5 has no absolute significance; on the contrary, compared to a probability of 10**-4 it signifies better safety, or, when faced with a value of 10**-6, worse.
Subjective probability and expert judgements
Estimation of subjective probabilities is carried out on occasion through expert judgements, as illustrated by the four examples below:
• Signoret and Leroy (op. cit., pp. 1606-1607) recall that "when we do not have the results of previous use, we can use expert judgement (whose systematized application is known by the name of "Delphi") to estimate the order of magnitude of the probabilities being sought". This method consists principally of questioning specialists in the relevant technical field about the frequency of occurrence of the highlighted primary events and of carrying out a statistical calculation on the set of responses. For the results to be statistically significant, the number of experts must be sufficient and their responses must be independent. The Delphi method has also been used to establish failure data for electrical and electronic material, registered in an IEEE (Institute of Electrical and Electronics Engineers) document (IEEE Std-500, 1977).
• Wanner, cited by Lievens (op. cit., pp. 201-202) uses a workload evaluation procedure that consists of associating accident probabilities with various kinds of scores, awarded in a conventional fashion starting with the responses given by pilots and aircrews to the various questions asked of them (Table VII, p. 40).
• Sarrat and N'Kaoua (1986, p. 33) specify, in a study of the systems safety of a reinforcing wall, "that in the absence of any collection of probabilistic data comparable to those of reliability data collections, the information has been obtained by relying on the knowledge and experience acquired in these fields... these data have been established with the following in mind:
- common event: probability of occurrence = 1.10**-3
- rare event: probability of occurrence = 1.10**-6
- extremely rare event: probability of occurrence = 1.10**-9".
• One last illustration, taken from Cooke and coll. (op. cit., p. 17), concerns data furnished by 14 different expert sources with the aim of estimating the failure probability of a pipe. The range of the responses varies from 5.10**-6 to 1.10**-10 (Table VIII, p. 41). The illustrations put forward demonstrate the extreme variability of expert judgements. Taking into account the specific use which can be made of the results obtained, these methods raise the question of their legitimacy. The confidence that we can give to the estimation of a risk by an individual recognized as being competent is one thing; the conversion of this estimate into a quantified measure of failure probability is another.
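The statistical treatment mentioned in these examples, combining widely spread expert estimates into a single figure, can be sketched as follows. The expert values are invented, and the use of a geometric mean is a common convention for order-of-magnitude data, not a procedure prescribed by the authors cited.

```python
import math

# Hypothetical expert estimates of a primary-event occurrence probability.
expert_estimates = [3e-5, 1e-4, 5e-5, 2e-4, 8e-5, 4e-5, 1e-4]

# Probabilities spanning orders of magnitude are usually combined on a
# log scale: the geometric mean is less distorted by one extreme answer
# than the arithmetic mean is.
geometric_mean = math.exp(sum(math.log(p) for p in expert_estimates)
                          / len(expert_estimates))
arithmetic_mean = sum(expert_estimates) / len(expert_estimates)

print(geometric_mean, arithmetic_mean)
```

With responses spread over four orders of magnitude, as in the Cooke illustration, the two means can differ enormously, which is one concrete face of the legitimacy question raised above.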
In fact, in the latter eventuality, nothing then prohibits carrying out arithmetical and logical operations on such data, just as surely as if starting from a measure of the failure mode of a component established through laboratory tests or through statistical observations compiled as rigorously as possible. Such a process encourages us to be prudent, and it leads Villemeur (op. cit.) to state, for example, regarding the quantification of errors and their introduction into Trees: "it is generally recommended to evaluate the sensitivity of results to a change in the probabilities of the human errors which have the greatest weight. In fact the elevated uncertainty of the probabilities given by today's models causes a final result to be of little significance unless it is accompanied by a sensitivity study or an error factor" (p. 431).
604 - THE INTRODUCTION OF HUMAN RELIABILITY
Technical and human reliability
The problem raised by the use of expert judgements is only the particular expression of the difficulties that accompany attempts at quantification that are not founded upon the observation of strictly technical phenomena. The translation of an expert's judgement into a measure of probability thus boils down to measuring the importance which the specialist attributes to a given risk (subjective probability).
Such a process presupposes that we support the hypothesis according to which the picture of the risk which the expert has constructed corresponds reasonably well to the real one. That hypothesis remains unverifiable to the extent that the real risk cannot be evaluated technically (at least with sufficient guarantees), except "a posteriori", in other words, from a theoretical point of view, in the case of sufficiently numerous accidents. In short, the expert's judgement rests upon confidence, which, for want of anything better, permits us to obtain numerical estimations where none could otherwise be assigned.
In an analogous fashion, the importance given to the risks likely to be generated by the operators' activity has been put in concrete form by the introduction of techniques inspired directly by analytical methods of technical reliability.
The measurable character of technical failure results essentially from the possibility of breaking down a system into its elementary or "functional" units. Similarly, human reliability techniques propose to describe, from an analytical point of view, the set of activities of individuals, in order to highlight certain classes of error that are likely to create risks for the system.
The overall system reliability is then conceived of as being the resultant of two subsystems : the reliabilities of (a) the technical components and (b) the human ones.
Human reliability methods
Depending on the type of analysis practised on the system, these are generally differentiated into qualitative and quantitative approaches. For each of these, the best known are, respectively, the "Rasmussen Arch" (Rasmussen, 1986) and THERP (Technique for Human Error Rate Prediction) (Swain and Guttman, 1983).
The analytical method of operator behavior developed by Rasmussen allows us to distinguish different phases during which errors can be produced. It can, for example, be a matter of an error of attention in the first "activities" phase, or again, of omission of a step during the "procedural choice" phase. This model, more generally, distinguishes between three levels of operator behavior, namely : behavior based upon knowledge ("cognitive"), upon the rules ("procedural"), and upon ability ("machine" behavior). (Cf. also Villemeur, 1988, p. 416).
Numerous other predominantly qualitative, and generally simpler, classifications are used by some analysts. For example, Lievens (op. cit., pp. 132-134) picks out, in parallel to the modes of technical failure, some "error modes" such as forgetfulness, wrong moves, and imprecise or sudden actions.
The THERP method can be considered to be an extension of technical reliability methods to the human component in the system. The principle of application is the following:
- definition of the system failure to be studied ;
- identification of all the actual human operations ;
- prediction of the error rates for each operation ;
- determination of the effect of the error on the failure rate of the system ;
- recommendation for modification of the system to bring its error rate to an acceptable level.
Two basic values are used by THERP:
- the probability (Pi) that an operation leads to an error of class 'i' ;
- the probability (Fi) that an error of class 'i' brings about a failure of the system.
The product Fi x Pi is the probability that an error produced in a given operation will bring about a system failure.
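Sketched over a handful of invented operations, this THERP bookkeeping might look as follows; the independence treatment across error classes is an assumption of the sketch, not a prescription of the method.

```python
# Minimal THERP-style bookkeeping: each operation carries the probability
# P_i of committing an error of class i, and the conditional probability
# F_i that such an error brings about a system failure.  All figures are
# invented for illustration; treating the classes as independent is an
# assumption of this sketch.

operations = {
    "read gauge":        (1e-3, 0.5),   # (P_i, F_i)
    "select valve":      (3e-3, 0.8),
    "confirm alignment": (5e-4, 0.2),
}

# Per-operation contribution: F_i x P_i.
per_op = {name: p * f for name, (p, f) in operations.items()}

# System-failure probability from the human component, assuming the
# error classes are independent.
p_no_failure = 1.0
for contribution in per_op.values():
    p_no_failure *= 1.0 - contribution
p_system_failure = 1.0 - p_no_failure

print(per_op, p_system_failure)
```

The contributions being small, the combined figure is close to (but slightly below) the simple sum of the Fi x Pi products.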
Other methods, oriented towards the quantification of tasks and operator dysfunctions, have been developed. Hannaman and coll. (1984) review 16 models. For example, Carnino (1987) cites the OAT (Operator Action Tree) model, which is concerned more specifically with the detection and quantification of the "decisions and actions" type of error: "the Action model is an Event Tree linked to the detection, interpretation, diagnosis and the required reactions". The "probability of failure" curve for the operators, as a function of the time taken to think and diagnose, then gives us the probability sought (Carnino, op. cit., p. 74).
Two modes of approach to human error
The idea of human error holds a central place in the "human reliability" approach. Although this is an old concern that has given rise to numerous works, in particular in the fields of work psychology and ergonomics, human error has not always been associated with the risk of accidents, as the following passage testifies; this is an extract from the introduction to the documentary study Human Factors and Safety (CECA, 1967): "Going even further, some people have imagined that there could be some parallelism between the error and the accident mechanisms. If it were admitted as valid that, to reach these mechanisms, we should study the production of errors in laboratory conditions, we would have a powerful method of analyzing the various factors influencing the production of accidents; but the "ifs" implied by this technique weighed too heavily on us for it to appear desirable to transpose error research into this report" (p. 18).
In fact, it is important to notice that the objectives pursued by systems safety and in particular human reliability give priority to the technological accident risk. In other words, personnel safety is considered to be a by-product of the overall system safety. In this context, the place of human reliability studies cannot be defined in a unique fashion.
Its most evident but also its most arguable contribution concerns the detection of human errors likely to worsen the level of overall system safety. Human error is considered to be the functional equivalent of the various classes of failure (mechanical, electrical, electronic...) which affect the system. In other words, its methodological status is that of an input variable.
Another trend in human reliability is closer to the systems ergonomics approach. It confers upon the error the status of an output variable, that is, a dysfunction indicator for the "man x task" system, whose performance it would, should the case arise, be advisable to improve, particularly through ergonomic adjustments. Here, human error and accident can be reconciled in an original way.
Both will in fact be considered to be just so many symptoms of a dysfunction which is not exclusively attributed to the operator (aptitudes, vigilance, etc.) but also to the characteristics of the work station or, more generally, the working environment. For a more detailed presentation of this approach, cf. Guillermain and coll. (op. cit.).
The limits of human reliability
Ergonomic studies dedicated to safety have demonstrated the need to break away from an approach that is too exclusively operator-centered. The "socio-technical" type of approach has thus replaced unicausal studies, dominated as they were by the idea of a person's predisposition to accidents.
In other words, the need to expand the field of investigation has progressively been asserted, particularly with the highlighting of accident-producing phenomena such as interference between tasks (coactivity, intersection, succession, frontier zones) or recovery activities, all of these being phenomena which result in the first place from organizational dysfunctions. These are phenomena that Faverge (1970) qualifies precisely as "reliability black spots". Cf. also Monteau and Pham (1987) and the first part of this review (ND-4768-138-90), in particular the last chapter.
Within the perspective of operator activity analysis, it would then be appropriate to refer to the "socio-technical" system and not to the technical system alone. However, it clearly appears that, in the context of systems safety, the concepts that most influence the research dedicated to evaluating human errors had hardly, until then, taken the aforementioned factors into account.
Numerous critical reports have consequently been addressed to the methods of quantification of human error :
• The most serious and profound critique deals with the excessive "reductionism", and the consequent distancing from reality, of an approach echoing the position of "mechanicism", which "ignored the specific particularities of the working of living organisms, and of man in particular, showing a tendency to consider the operator, within management systems, as an inanimate mechanism" (Léontiev and coll., 1961). Thus Leplat (1985, p. 98) recalls that the transfer of technical reliability methods to human reliability is delicate to the extent that, in contrast to a mechanical device, man adapts and learns, pursues multiple goals, often treats information in a global fashion and has the "capacity to control himself and correct his errors" (Léontiev and coll., 1961).
• A critical review, more directly centered upon the practical importance of human error quantification studies, has been carried out by the Human Factors Society of America, on the occasion of a report commissioned by the U.S. NRC following the accident at the Three Mile Island nuclear power station (Snyder and coll., 1982). The main points of this review are recalled by Cooke and coll. (op. cit., p. 28):
- The data furnished by the Swain and Guttman report (op. cit.) are inadequate. This is the "Handbook of Human Reliability" drawn up by Swain and Guttman on behalf of the U.S. NRC. This document furnishes, in particular, some probabilistic estimates of errors for various tasks, estimates accompanied by confidence intervals. For example, the probability of error associated with the readout of a digital display would be p = 10**-3, with a confidence interval between 5.10**-4 and 5.10**-3.
- Such data, which have never been validated, can be used in an overconfident way by those who have had little warning of their inadequacies.
- They have had virtually no impact on the design of nuclear power stations.
- A series of ergonomic adjustments, properly carried out, would of itself bring a notable improvement of the situation in the nuclear field.
• A last critical review will be borrowed from Fadier and Guillermain (op. cit.). These authors lay stress on the risk of too strong a dependence on "the analyst's intuition", associated with a lack of serious ergonomic analyses of the work directed towards making evaluations. They recall besides that "these quantitative methods are difficult to transfer to more traditional enterprises, taking account of the duration and cost of their use. In fact the stakes in a "high-risk" firm have, from a financial point of view, nothing in common with "low-risk" industries. Furthermore, man's place in the traditional industries is less confined and less subject to strong variations (in time, and as regards task distribution). It is, then, necessary to rework these methods if we want to generalize them to more traditional systems". The "reworking" in question refers to the "man/machine reliability process", a less ambitious but doubtless more realistic method, which incorporates the ergonomic aspect of the activity (task analysis, in particular).
CONCLUSION
Checks And Socio-Technical Methods
Without pretending to be exhaustive in this review, we can state that :
- There is no lack of "a priori" methods ; they are even fairly numerous and, in any case, very varied.
- Their respective fields of application extend from the work station to the whole firm and so the methods observed are complementary rather than competitive, as Table XVIII, pp. 22-23, shows. This table recaps the methods, their objectives, and their favored areas of application.
This complementarity permits us to believe that the suitable application of each method depends as much upon the level of safety already reached as the complexity of the target for analysis (work station, shop, firm).
We can moreover point out that the development of "a priori" methods (that is to say, their growing complexity) reflects very closely the evolution of the techniques and modes of organization of the firm.
From the start, in the simplest work situations, and as much from the technical point of view as from the organizational one, the possible risks generally show themselves to be of a permanent and material character. For this reason, they are very often directly observable : machines without protection, slings in deteriorated condition, makeshift scaffolding, dangerous stacking, for example, are the common lot of the most precarious situations as far as safety is concerned. In these cases, the efficiency of inspections and checks is no longer in need of demonstration ; it is a matter of the oldest and commonest "a priori" procedure.
Prevention progresses first through the identification and suppression of the most dangerous risks, which are notably the target of the arsenal of regulations, but this action places the safety practitioner face-to-face with a new situation: the risks progressively lose their material character and become more and more complex and sporadic. For example, a machine is properly equipped with a protective device, but this is periodically taken off to make a particular adjustment during operation; the sling is no longer in a poor condition, but a new employee does not know how to use it correctly; the rolling scaffold does not attract comment, but somebody moves it for a moment too close to an electric power line. In short, then, the "a priori" diagnosis should concentrate on the work station and, for example, examine the equipment available, its conditions of use, and the defined operational modes versus those really being used, and make an inventory of possible incidents and their cures.
Studies of the work station are vital; ergonomics defines the methods used, and participation by the operators strengthens them. But people long ago (Faverge, 1965) ceased thinking of the firm as a mosaic of isolated work stations. From then on, it has been likened to a system composed of elements (work stations, shops, services) liable to suffer disturbances in their operation and to transmit the effects of these to other elements, but also capable of recovering more or less efficiently from these degraded situations. This concept, which is summed up in the expression "system ergonomics", is translated into more general analyses of work situations, uncovering risks whose appearances are expressed by concepts such as coactivity, frontier zones, recovery, etc. Tracking down such risks no longer consists of pointing out such-and-such a discrepancy with respect to a norm, but rather of identifying the operational modes of systems (or subsystems) which are prejudicial to the well-being of their components.
We point out that risk analyses are commonly translated into drawings and models of accident-producing situations, at first limited and conditional, which are then extended by accident models of which at least some make it their aim to be general. Some models have the virtue of suggesting a systematic and logical set of questions that can be expressed in the form of questionnaires, but the obsession with being complete can condemn them to be of useless size and weight !
However, these methods all retain, beyond their variety, the same ultimate objective; they are in fact part of a general procedure whose aim is the complete suppression of risks, and in which risk identification is only a preliminary stage. But as the eradication of risk - zero risk - is recognized as being often too unrealistic an objective, new methods have appeared that allow a quantitative approach to risk. These are designated by the generic term "Methods of Systems Safety" and, essentially, they are applied to the technical aspects of work situations (components, machines, installations). Besides their specific interest, they have introduced, as we will show in the following paragraph, the idea of rational control of risk.
Systems Safety Methods
Table IX, p. 42, gives an overview of the set of methods, objectives and favored levels of application of systems safety. A quick examination will confirm that it is an approach essentially centered on the technical aspects of safety.
Placed in the context of the different methods of predictive risk analysis, systems safety represents in fact an option which deliberately confines its field of investigation to a precisely defined system, within which we seek out the failures which lead to, or are at the origin of, the unwanted events.
Applied to a limited range of technical equipment, such as, for example, the control circuit of a press, these methods are capable of suppressing design faults which could, for instance, produce the repetition of the cycle, the operation of the press without a protective sensor, the possibility of starting with a single command, etc. These "systemic" safety analyses present the undeniable merit of reducing or suppressing certain types of risk before their effects are observed.
We should remember, however, that their use has been extended with some priority to systems which could experience events whose consequences would be very serious, indeed catastrophic. Applications to such complex systems as aircraft, nuclear plants, and refineries show (if this is necessary) that the absolute safety of any activity really is a myth. So "safety is always relative and any statement to the contrary is only an incantation with no operational value" (Goliger and Lievens, 1976). The concept of reasoned control of risk, and consequently an implicitly assumed lack of safety, takes precedence from then on.
To conclude this general presentation of systems safety methods, it seems important to us to bring up two questions which are not primarily technical or methodological :
The Threshold Of Unacceptability Of Risk :
In theory, the formal representation of risk to which reference is generally made (cf. III.2) demands that an event which is fairly frequent but of minor gravity should be rejected (risk not accepted) under the same heading as a very improbable event that has unfortunate consequences (Risk = Probability x Gravity). In fact, the hyperbola of risk masks an evident reality: systems safety has not been designed and developed with the reduction of low-gravity risks in mind. The methods it advocates are accordingly all the more essential the more they concern high-risk systems, for which the experience of the accident, by the very fact of its potential gravity, is not easy to envisage. "In short, the question of its possibility overshadows its probability" (Lagadec, op. cit., p. 1148) (15).
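The iso-risk reasoning above can be made concrete with two invented events lying on the same Risk = Probability x Gravity curve:

```python
# Risk = Probability x Gravity: a frequent minor event and a rare severe
# one can land on the same iso-risk hyperbola.  Units and values are
# invented for illustration only.

frequent_minor = {"probability": 1e-2, "gravity": 1.0}   # e.g. a minor injury
rare_severe    = {"probability": 1e-6, "gravity": 1e4}   # e.g. a catastrophe

def risk(event):
    """Formal risk measure: probability multiplied by gravity."""
    return event["probability"] * event["gravity"]

# Both products come out at 10**-2: formally the same risk, yet systems
# safety methods were built with the second kind of event in mind.
print(risk(frequent_minor), risk(rare_severe))
```

The formal equality of the two products is exactly what the hyperbola masks: the methods' real target is the rare, severe event.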
Cost/Efficiency Ratio :
The development of risk analysis methods in the framework of systems safety also raises the question of their transfer to "traditional" activities, in particular "low-risk" activities. Supposing such a development could be observed, the question of cost would become of primary importance, particularly when deciding whether to undertake a study.
Moreover, it does not seem reasonable to us to hypothesize a linear relation between costs (not only financial ones but also those expressed in terms of personnel mobilized, competence required, etc.) and the efficiency expected (the higher the costs, the greater the efficiency). The level of techniques and competence committed, and the time invested in this kind of study, lead us to believe that an intervention conducted with all the rigour and seriousness one would hope for would sit badly with significant economic constraints.
Conversely, it would doubtless be useful to ask about the limits of ever-increasing complexity, or of a still greater extension of the supposed field of validity of systems safety.
Taking into account ever more sophisticated and sometimes flimsy parameters (in particular in the field of expert judgement and of a certain trend in human reliability studies), implementing complex dysfunction modelling programs [16], and using "incoherent" [17] or "augmented" [18] fault trees are not approaches likely to result in concrete applications outside "frontier" sectors [19].
15. The study of potential accident gravity is an entirely separate scientific activity. See, for example, Escande and Lannoy (1989) for a presentation of the phenomena of dispersion of inflammable products.
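A toy illustration (ours, not from the report) of why "incoherent" fault trees need the special treatment discussed in the footnotes: once NOT gates are admitted, the TOP event can vanish when an additional component fails, so the structure function is no longer monotone and ordinary minimal-cut-set algorithms must give way to prime-implicant methods such as the one proposed by Kumamoto and Henley.

```python
from itertools import product

def top(a: bool, b: bool, c: bool) -> bool:
    """TOP = (A AND NOT B) OR (B AND C): a non-coherent (non-monotone) structure."""
    return (a and not b) or (b and c)

def is_coherent(structure, n: int) -> bool:
    """A fault tree is coherent if failing one more component never
    makes the TOP event disappear (the structure function is monotone)."""
    states = list(product([False, True], repeat=n))
    for x in states:
        for y in states:
            dominates = all(xi <= yi for xi, yi in zip(x, y))
            if dominates and structure(*x) and not structure(*y):
                return False
    return True

# The NOT gate breaks monotonicity: TOP(A=1, B=0, C=0) is true,
# but adding the failure of B (A=1, B=1, C=0) extinguishes it.
noncoherent = not is_coherent(top, 3)
```

An ordinary AND/OR tree, e.g. `lambda a, b, c: a or (b and c)`, passes the same monotonicity check, which is why classical cut-set algorithms suffice for it.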
Elsewhere, contributions are appearing that propose simplified versions of common systems safety techniques (cf. Capps, 1984; Agostini, 1986). They quite obviously respond to the need to make these tools financially and technically accessible.
Over and above a certain cost (which can be contemplated in the research field), is there not a risk, in practice, of reaching the threshold of "diminishing returns" so familiar to economists?
16. Cf., for example, the technique of Petri nets (illustrations in Signoret and Leroy, op. cit.; Barbet et coll., 1987).
17. Introduction of complex logic gates. See, for example, Kumamoto and Henley (1978) for a proposed algorithm for the treatment of such trees, and Locks (1980) for a critical discussion of it.
18. Introduction of a failure propagation modelling technique inspired by artificial intelligence methods (formalizing the rules of knowledge production, in particular). See Narayanan and Wiswanadham (1987).
19. A number of industrial catastrophes (Seveso, Bhopal, Flixborough, etc.) highlight gross dysfunctions at the most general organizational levels. What is the value of explanations formulated in terms of human error or of sophisticated chains of failures, in the face of statements such as this: "So, for the Flixborough accident, (we see) a change in the production process and a tripling of the factory's capacity without any redefinition of the safety system; the maintenance engineer's post vacant; the place of the safety engineer uncertain; a decision to put in place a provisional length of pipe without any preliminary study and without any testing; no attention to leaks, some of which 'diminished gradually by themselves', etc." (Lagadec, 1987, p. 35).
REFERENCES
• ABRIDAT J.C. et coll. - Introduction à l'analyse du risque technologique dans les procédés chimiques. Cahiers de Notes Documentaires, 1988, 131, pages 265-276, ND 1675.
• AGOSTINI J. - Exploitation rapide d'arbres de défaillances. Actes du 5ème Colloque international de fiabilité et de maintenance. Biarritz, ADERA, 1986, pages 300-303.
• APACT (Association for Accident Prevention and Improvement of Working Conditions) - Méthode des observations instantanées, 1976.
• AVISEM - Techniques d'amélioration des conditions de travail dans l'industrie. Suresnes. Editions Hommes et Techniques, 1977.
• BARBET J.F., GUYONNET J.P. - Les méthodes d'analyse de la sécurité des systèmes. Revue Générale de Prévention, 1987, 30, pages 42-50.
• BARBET J.F., JOUBERTON D., HOGNON B., MATHEZ J. - Approche probabiliste de la sécurité incendie dans les E.R.P. Revue Générale de Sécurité, 1987, 63, pages 45-50.
• BARTHOD P. - Checklist d'observation pour l'analyse d'un poste de travail. Une méthode de recherche de facteurs de risques a priori. Revue des Conditions de Travail, 1985, 16, pages 28-38.
• BERNHARDT V., HOYOS C.G., HAUKE G. - Psychological safety diagnosis. Journal of Occupational Accidents, 1984, 6, pages 61-70.
• BOISSELIER J., BOUE G. - Guide des comités d'hygiène et de sécurité, Paris, Les Editions d'organisation, 1980.
• BRITISH CHEMICAL INDUSTRY SAFETY COUNCIL - A guide to hazard and operability studies, 1974.
• CAPPS J.H. - System safety for plant safety specialists. Professional Safety, 1984, 29, 6, pages 22-26.
• CARNINO A. - La fiabilité humaine. Enjeux, 1987, 81, pages 73-76.
• CAVE J.M. - Le patronat et la prévention des accidents du travail. La sécurité intégrée. Arts et Manufactures, 1977, 288, pages 11-20.
• CECA - Les facteurs humains et la sécurité. Luxembourg, Etudes de physiologie et de psychologie, 1, 1967.
• CHAPOUILLE P. - La fiabilité. Paris, PUF, 1972.
• CHARBONNEAU S. - L'étude de danger des installations classées. Actes du Colloque AITASA, Bordeaux, IUT, 1987.
• CNPP-AFNOR - Les accidents technologiques. Paris, 1988.
• COOKE R.M., GOSSENS L.H.J., HALE A.R., HORST (VAN DER) J. - Accident sequence precursor methodology. Delft, Technische Universiteit, 1987.
• COOPER R., FOSTER M. - Sociotechnical systems. American Psychologist, 1971, 26, 5, pages 467-474.
• CORLETT E., CARTER F. - Travail posté et accidents. Dublin, Fondation européenne pour l'amélioration des conditions de vie et de travail, 1982.
• CRAMA (French Regional Health Insurance Fund Of Aquitaine/Bordeaux) - Code Bâtiment. Bordeaux, Service Prévention de la Caisse régionale d'assurance maladie d'Aquitaine (without date).
• CRAMCO (French Regional Health Insurance Fund Of Center- West/Limoges) - L'hygiène et la sécurité dans le travail. Guide pratique de l'animateur de sécurité. Limoges, Service prévention de la Caisse régionale d'assurance maladie du Centre Ouest (issued before 1979).
• CUNY X. - L'étude de la structure des relations de travail et son application sur le terrain. Le Travail Humain, 1972, 35, 2, pages 261-266.
• CUNY X. - Les comportements de prise de risque dans le travail. Revue de Psychologie Appliquée, 1987a, 37, 1, pages 1-11.
• CUNY X. - Nouvelles perspectives pour l'étude des risques professionnels dans l'industrie chimique. L'Actualité Chimique, octobre 1987b, pages 318-321.
• DAMEL R. - Examen d'un projet de vanne. Introduction des concepts ergonomiques dans une étude technologique. Cahiers de Notes Documentaires, 1967, 46, ND 526, pages 65-68.
• DAMIEN M. - Analyse des modes de défaillance des installations pyrotechniques. Revue Générale de Sécurité, 48, 1985, pages 46-52.
• DECOSTER F. - Ergonomie, fiabilité, disponibilité des systèmes automatisés en processus discontinu de fabrication, Revue des Conditions de Travail, 1987, 33, pages 2-19.
• DE KEYSER V. - La démarche participative en sécurité. Bulletin de Psychologie, 1979, 33, 334, pages 479-491.
• DE KEYSER V. et coll. - L'apport de l'analyse pluridisciplinaire des accidents à l'action de prévention. Le Travail Humain, 1984, 47, 3, pages 244-247.
• DE KEYSER V. - Ergonomie et sécurité. Paris, Encyclopédie médico- chirurgicale, intoxications 16800 A 10, 2, 1987.
• DESCHANELS J.L., LAVEDRINE P. - Maîtrise des risques techniques. Revue Générale de Sécurité, 1984, 35, pages 31-35.
• DIDIER R. - Inspection sécurité assistée par ordinateur (Progiciel ISAO). 3ème Colloque aquitain d'hygiène et de sécurité, Bordeaux, 1985.
• DOGNIAUX A. - Approche quantitative et qualitative d'un problème de sécurité industrielle. Journal of Occupational Accidents, 1978, 1, 4, pages 321-330.
• DUMAINE J. - Analyse des "quasi-accidents". Méthode Usinor-INRS. Informations Sécurité Usinor, octobre 1977.
• DUMAINE J. - Cercles de progrès et sécurité et conditions de travail. Rapport annuel d'Usinor, 1983, pages 23-30.
• DUMAINE J. - La modélisation du phénomène accident. Sécurité et Médecine du Travail, 1985, 71, pages 11-22.
• DUMAINE J. - La stratégie de la prévention. Diagnostic de sécurité et plan à long terme. Sécurité et Médecine du Travail, 1986, 73, pages 24-27.
• DURAND P., RICARD S., LAN A. - Qualité chantier. Grille de questions. Programme de sécurité et ingénierie. Montréal, IRSST, 1986.
• EMERY F., TRIST E. - Les systèmes socio-techniques. Hermès, 1986, 2, pages 28-30.
• ESCANDE J., LANNOY A. - Les risques chimiques industriels. La Recherche, 1989, 207, pages 280-290.
• FAVERGE J.M. - L'ergonomie des systèmes. Bulletin du CERP, 1965, 14, 1- 2, pages 19-24.
• FAVERGE J.M. - Psychosociologie des accidents du travail, Paris, PUF, 1967a.
• FAVERGE J.M. - Les critères de sécurité. In : CECA - Les facteurs humains et la sécurité. Luxembourg, Etudes de physiologie et de psychologie du travail, 1967b, 1, pages 29-38.
• FAVERGE J.M. - L'homme agent d'infiabilité et de fiabilité du processus industriel. Ergonomics, 1970, 13, 3, pages 225-261.
• FAVERGE J.M. - Analyse de la sécurité du travail en termes de facteurs de risque. Revue d'Epidémiologie et Santé Publique, 1977, 25, pages 229-241.
• FERAUGE C. - Rapport au Ministère de l'Environnement du groupe de travail sur la prévention des risques industriels. Paris, Conseil supérieur des installations classées, 1984.
• FLEURY D., FLINE C., PEYTAVIN J.F. - Les accidents de poids lourds : analyse du dossier de l'EDA. Recherche, Transports, Sécurité, juin 1987, pages 31-39.
• GILARDI J.C., TARONDEAU J.C. - Technologies flexibles et organisation du travail. Revue Française de Gestion, 1987, 63, pages 62-72.
• GOGUELIN P. - Vers une nouvelle psychologie du travail. Evolution ou mutation ? Revue de Psychologie Appliquée, 1987, 37, 2, pages 139-174.
• GOLIGER J., LIEVENS C. - La sécurité des systèmes. Le Monde, 14 juillet 1976, page 10.
• GUELAUD F., BEAUCHESNE M.N., GAUTRAT J., ROUSTANG G. - Pour une analyse des conditions du travail ouvrier dans l'entreprise, Paris, Armand Colin, 1975.
• GUILLERMAIN H., FADIER E., NEBOIT M. - Ergonomie cognitive et fiabilité humaine : proposition d'une méthodologie commune appliquée à une situation de contrôle de processus discontinu. In : DE KEYSER V., VAN DAELE A. - L'Ergonomie de conception. Bruxelles, De Boeck Wesmael, 1989, pages 205-210.
• HANNAMAN G.W., SPURGIN A.J., LUKIC Y.D. - Human cognitive reliability model for PRA analysis. Palo Alto, document NUS-4531, 1984.
• HENDRICK K., BENNER L. - Investigating accidents with STEP. New York, Marcel Dekker inc., 1987.
• HO M.T. - La sécurité des systèmes : aperçu sur les principales méthodes d'analyse - Applications à l'étude des possibilités d'occurrence d'événements non désirés et leur prévention. Vandoeuvre-les-Nancy, INRS, rapport n° 154/RI, 1974.
• HO M.T. - Réflexions sur l'analyse de la sécurité des systèmes, ses méthodes et ses problèmes. Cahiers de Notes Documentaires, 1976, 85, ND 1037, pages 571-580.
• HO M.T. - Examen de quelques aspects et problèmes liés à l'application des concepts et méthodes de sécurité des systèmes au cas de l'entreprise. Compte rendu du 1er séminaire européen sur la sécurité des systèmes, Bordeaux, juin 1980.
• HO M.T. - Note sur le Système ISAO, Paris, INRS, note interne du 16.06.85.
• HO M.T. - Accident analysis and information system failure analysis. In : WISE, DEBONS (eds) - Information systems : failure analysis. Berlin, NATO ASI Series, vol. F 32, pages 73-78.
• IEEE - Guide to the collection and presentation of electrical, electronic and sensing component reliability data of nuclear power generation stations, IEEE Std-500, 1977.
• JOHNSON W.G. - Sequences in accident causation. Journal of Safety Research, 1973, 5, 2, pages 54-57.
• JOHNSON W.G. - MORT : the management oversight and risk tree. Journal of Safety Research, 1975, 7, 1, pages 4-15.
• KJELLEN U. - An evaluation of safety information systems at six medium- sized and large firms. Journal of Occupational Accidents, 1982, 3, pages 273-288.
• KJELLEN U., LARSON T. - Investigating accidents and reducing risks - a dynamic approach. Journal of Occupational Accidents, 1981, 3, pages 129-140.
• KRAWSKY G., MONTEAU M., CUNY X. - Méthode pratique de recherche de facteurs d'accidents. Application expérimentale et résultats. Vandoeuvre-les-Nancy, INRS, rapport n° 77/RI, 1973.
• KUMAMOTO H., HENLEY J.E. - Top-down algorithm for obtaining prime implicants sets of non-coherent fault trees. IEEE Transactions on Reliability, 1978, R-27, 4, pages 242-249.
• LAGADEC P. - Le risque technologique majeur. Paris, Pergamon, 1981.
• LAGADEC P. - Défaillances technologiques majeures et grandes situations d'urgence. 1ère partie. Travail et Méthodes, 1987, 450, pages 33-37.
• LAPLACE R. - Critères de fiabilité des systèmes électroniques. Electronique Applications, 1987, 54, pages 37-41.
• LAUMONT B., CREVIER H. - Perception des risques dans une entreprise de chaudronnerie industrielle. Archives des Maladies Professionnelles, 1986, 47, 7, pages 550-551.
• LEFEBVRE H. - Sécurité et conditions de travail. Guide pratique de la hiérarchie. Grenoble. Société alpine de publications, 1986.
• LEGRAND (French firm, without author) - La sécurité en fiches. Travail et Sécurité, 1984, 6, pages 311-312 & 337-339.
• LEONTIEV K., LERNER A., OCHANINE D. - Sur quelques tâches dans l'investigation du système "homme-machine automatique". Voprosy Psikhologii, 1961, 1, pages 13-21.
• LEPLAT J. - Fiabilité et sécurité. Hommage à Faverge. Le Travail Humain, 1982, 45, pages 101-108.
• LEPLAT J. - Erreur humaine, fiabilité humaine dans le travail. Paris, Armand Colin, 1985.
• LEPLAT J., CUNY X. - Les accidents du travail, Paris, PUF, 1974.
• LEWIS H.W. et coll. - Risk assessment review group. Springfield, U.S. Nuclear Regulatory Commission. National Technical Information Serie, NUREG/CR-0400, 1978.
• LIEVENS C. - Sécurité des systèmes. Toulouse, CEPADUES, 1976.
• LIEVIN D., PHAM D. - La sécurité dans une usine de production de fil textile artificiel. Vandoeuvre-les-Nancy, INRS, rapport n° 482/RE, 1980.
• LIU M. - Technologie, organisation du travail et comportements des salariés. Revue Française de Sociologie, 1981, 22, pages 205-221.
• LIU M. - Approche socio-technique de l'organisation. Paris, Les Editions d'organisation, 1983.
• LOCKS M.O. - Fault trees, prime implicants and noncoherence, IEEE Transactions on Reliability, 1980, R-29, 2, pages 130-135.
• LOUET M., BARAULT P. - La sécurité des systèmes, un exemple d'application. Revue Générale de Sécurité, 1986, 54, pages 27-34.
• LLIBOUTRY L. - Modèles et révolution des sciences de la terre. La Recherche, 1985, 16, pages 272-278.
• MAUGE M. - Fiche technique de sécurité - Machines à rouler et cintrer les métaux. Paris, INRS, edition ED 691, 1986.
• MONTEAU M. - Essai de classement des risques professionnels et des actions de prévention. Cahiers de Notes Documentaires, 1975, 74, ND 900, pages 255-262.
• MONTEAU M. - La sécurité dans une usine de production de fil textile artificiel. Vandoeuvre-les-Nancy, INRS, rapport n° 456/RE, 1979a.
• MONTEAU M. - Bilan des méthodes d'analyse d'accidents du travail. Vandoeuvre-les-Nancy, INRS, rapport n° 456/RE, 1979b.
• MONTEAU M. - Quelques problèmes méthodologiques posés par le diagnostic de sécurité en entreprise. In : Psychologie du travail : perspective 1990, Paris, EAP, 1983, pages 632-640.
• MONTEAU M. - Comment gérer la sécurité dans l'entreprise ? Compte rendu du colloque AIIT 1986. Travail et Sécurité, 1986, 9-10, pages 524- 527.
• MONTEAU M., PHAM D. - L'accident du travail : évolution des conceptions. In : LEVY-LEBOYER C., SPERANDIO J.C. - Traité de psychologie du travail. Paris, PUF, 1987, pages 703-727.
• MONTMOLLIN (de) M. - Les systèmes hommes-machines. Paris, PUF, 1967.
• MONTMOLLIN (de) M. - Les psychopitres. Paris, PUF, 1972.
• MONTMOLLIN (de) M. - L'ergonomie. Paris, La Découverte, 1986.
• MORLAT G. - Grands risques et probabilités. Culture Technique, 1983, 11, pages 103-107.
• MOUGEOT B., DINE R. - Recherche systématique des risques en atelier par une méthode a priori. Vandoeuvre-les-Nancy, INRS, rapport n° 496/RI, 1980.
• MOYEN D., QUINOT E., HEIMFERT M. - Exploitation d'analyses d'accident du travail à des fins de prévention. Le Travail Humain, 1980, 43,2, pages 255-274.
• NARAYANAN H.N., WISWANADHAM N. - A methodology for knowledge acquisition and reasoning in failure analysis of systems. IEEE Transactions on Systems, Man and Cybernetics, 1987, SMC-17, 2, pages 274-288.
• NICOLET J.L., CELIER J. - La fiabilité dans l'entreprise. Paris, Masson, 1985.
• OMBREDANE A., FAVERGE J.M. - L'analyse du travail. Paris, PUF, 1955.
• OPPBTP (French Occupational Risk Prevention Organisation For The Building And Civil Engineering Industries) - Médecine du travail. Sécurité et conditions de travail - Réflexions sur le malaxeur-projeteur. Cahiers des Comités de Prévention du Bâtiment et des Travaux Publics, juillet-août 1985, pages 9-20.
• PAGES A., GONDRAN M. - Fiabilité des systèmes. Paris, Eyrolles, 1980.
• PASMORE W., FRANCIS C., HALDEMAN J., SHANI A. - Sociotechnical systems : a North American reflection on empirical studies of the seventies. Human Relations, 1982, 35, 12, pages 1174-1204.
• PERILHON P. - Approche de l'accident et de la prévention. Revue Générale de Sécurité, 1985, 42, pages 48-53.
• PIGANIOL C. - Conditions de travail : les grilles mesurent-elles l'essentiel ? La Revue de l'Entreprise, 1978, 20, pages 50-55.
• PIOTET F., MABILLE J. - Conditions de travail, mode d'emploi. Paris, ANACT (National Association For Working Conditions Improvement), Collection outils et méthodes, 1984.
• RAMSEY J.D., BURFORD C.L., BESHIR M.Y. - Systematic classification of unsafe worker behavior. International Journal of Industrial Ergonomics, 1986, 1, pages 21-28.
• RASMUSSEN J. - Information processing and human-machine interaction. New York, Amsterdam, London, North-Holland, 1986.
• RASMUSSEN N. et coll. - Reactor safety study. Springfield, US Nuclear Regulatory Commission, WASH-1400, NUREG-75/014, 1975.
• RNUR (Régie Nationale des Usines Renault) - Aide-mémoire d'ergonomie, 1983.
• ROUANET H. - Modèles en tous genres et pratiques statisticiennes. In : Comprendre l'homme, construire des modèles. Les modèles implicites et explicites en psychologie. Paris, Société française de psychologie, Actes du colloque, mai 1983, pages 55-64.
• ROUSSEAU M. - Une application de la méthode des "observations instantanées" à la sécurité. Travail et Méthodes, 1965, pages 73-74.
• SAF (Swedish Employers' Federation) - Expériences suédoises de gestion participative des ateliers - Bilan de 500 cas de réorganisation des tâches. Suresnes, Editions Hommes et techniques, 1977.
• SARRAT P., N'KAOUA M. - La sécurité des systèmes appliquée en conception : l'étude d'une paroi. Revue Générale de Sécurité, 1986, 54, pages 31-34.
• SCHWEITZER A., GERARDIN J.P. - Méthodes permettant d'améliorer le niveau de sécurité des systèmes à logique programmée. Electronique, Techniques et Industries, 1984, 12-13, pages 51-56 et pages 39-49.
• SEILLAN H. - Réflexions sur la notion juridique d'hygiène et sécurité. Sécurité et Médecine du Travail, 1981, 58, pages 5-8.
• SIGNORET J.P., LEROY A. - La prévision du risque technologique. La Recherche, 1986, 183, pages 1596-1607.
• SKIBA R. - Grundlagen, Methoden und Grenzen der Gefährdungsanalyse - eine Übersicht (Principles, methods and limits of risk analysis - an overview). Sicher ist sicher, 1972, 23, 10, pages 484-490.
• SNYDER H.L. et coll. - Critical human factors issues in nuclear power regulation and recommended comprehensive human factors long-range plan. Springfield, US Nuclear Regulatory Commission, NUREG/CR-2833, 1982.
• SPERANDIO J.C. - La psychologie en ergonomie, Paris, PUF, 1980.
• STARR C. - Social benefit versus technological risk. Science, 1969, 165, 3899, pages 1232-1238.
• SUOKAS J. - The role of safety analysis in accident prevention. Accident Analysis and Prevention, 1988, 20, 1, pages 67-85.
• SUTTER A., TROXLER R. - Modèle d'analyse de la sécurité des systèmes à la conception et en fonctionnement. Actes du Colloque AITASA. Bordeaux, IUT, 1987.
• SWAIN A.D., GUTTMAN H.E. - Handbook of human reliability analysis with emphasis on nuclear power plant applications. Albuquerque, Sandia National Laboratory, NUREG/CR-1278, SAND 80-0200, 1983.
• THONY C., VIEUX N. - Pratique des visites d'entreprises et des études de postes. Paris, France-Sélection, 1986.
• TRICOT C., PICARD J.M. - Ensembles et statistique. Montréal, McGraw-Hill, 1969.
• TUOMINEN R., SAARI J. - A model for analysis of accidents and its application. Journal of Occupational Accidents, 1982, 4, pages 263-273.
• TUTTLE T., WOOD G., GRETHER C., REED D. - Psychological-behavioral strategies for accident control. A system for diagnosis and intervention. Behavioral Safety Center, Westinghouse Electric Corporation, Technical Report BSC-3, 1974.
• UNION DES INDUSTRIES CHIMIQUES (UIC) - Cahier de sécurité n° 1 - L'analyse préliminaire des risques. Cahier de sécurité n° 4 - L'analyse des modes de défaillance, des effets et des probabilités. Paris, UIC, 1981a and 1981b.
• VAN DAELE A. - Les politiques de sécurité des entreprises : réflexion sur quelques influences heuristiques organisationnelles, stratégiques et idéologiques. Revue des Conditions de Travail, 1987, 29, pages 20-33.
• VILLEMEUR A. - Sûreté de fonctionnement des systèmes industriels. Paris, Eyrolles, 1988.
• WALTERS N.K. - Politique et programmes progressifs dans l'entreprise : les éléments essentiels de politiques et programmes efficaces. Rapport 10ème Congrès mondial de prévention des accidents du travail et des maladies professionnelles, AISS, Ottawa, 1983.
• WANNER J.C. - Etude de la sécurité des aéronefs en utilisation (ESAU), Service technique aéronautique, 1969.
• WEISS D. - La démocratie industrielle : cogestion ou contrôle ouvrier ? Paris, Les Editions d'organisation, 1978.
TABLES AND FIGURES
The tables and figures of the INRS risk analysis review are contained in the following 42 pages which do not have the standard header.
They can be extracted from the binder to enable simultaneous reading of the text and observation of the relevant tables and figures.
The 42 pages should be filed behind this page (Risk Analysis 700-1).