I Think We Are at a Point, Due to the Nature of Some of These Mishaps, and Through Reflection


I think we are at a point, given the nature of some of these mishaps and through reflection, where we know we can improve the way we work and the way we take decisions, improving both operations and safety by using newer technologies now available to us that allow better and faster communication and decision making. Some of us have been continuously raising these points in different lights, drawing on our experience and diversity. What is important to know and do to make constructive progress in this direction is to map human expertise from the various disciplines into systems that can recognize, identify, signal, and distribute the parameters that potentially lead to risk, so that developing risks may be handled partly by the system and ultimately by the experts, and so that any developing risk may be handled via a process that redirects operations into a manageable-risk scenario. Although many good steps have been implemented in certain capacities, it is clear that there is plenty left to do, and much is left to be desired in current configurations of real-time and collaborative decision processes.

We were discussing an example on another thread, but what is important to keep in mind here is that we can only automate to a certain level, and that humans need to be ultimately responsible for resolving any risk or danger that develops. Technology can really only help us not to overlook an event that is developing on several fronts, where it would normally take more than one person to notice, by bringing the information together from those fronts and monitoring the combination of parameters, so that when a risk or danger develops it is automatically recognized by the system, which then generates and distributes alerts and alarms to the "team", whether they are together or distributed, to handle the situation. There are scenarios where such systems can be very useful in gaining time: where humans may still be squabbling and explaining things to each other, the system can deliver an undeniable flag to wave. There are cases of total system takeover and action, such as a fire-extinguishing system, where not only is the danger detected but the action decision is also automated. At the well or reservoir level, much more complicated scenarios may develop, and because much depends on subsurface conditions that we cannot normally see anyway, for many safety processes we are far more dependent on system detection. It should be said that regulators are a different kettle of fish than operators. A single operator, through its own experience, may develop a process and system that works very well but is not in the business of sharing that development. Regulators visiting such an operator may realize, "oh, we should be doing this everywhere," and can then recommend or develop standards.
I find that these standards and regulating bodies are very instrumental in helping to assure levels of quality on an industry-wide basis, but they will always lag a bit behind the operators as proprietary developments are often used competitively.
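To make the idea of such a system concrete, here is a minimal sketch of a monitor that combines readings from several "fronts" and raises an alert to a distributed team when the combination, rather than any single reading, indicates a developing event. The parameter names, thresholds, and alerting logic are all hypothetical, chosen only to make the idea runnable, not drawn from any real system:

```python
from dataclasses import dataclass

# Hypothetical parameters and alarm thresholds (illustration only).
THRESHOLDS = {
    "pit_gain_bbl": 10.0,
    "flow_out_minus_in_gpm": 25.0,
    "standpipe_drop_psi": 150.0,
}

@dataclass
class Reading:
    name: str
    value: float

def combined_risk(readings):
    """Flag when the *combination* of parameters is anomalous, even if
    no single reading would alarm a lone observer on one front."""
    exceed = [r.name for r in readings
              if r.value >= THRESHOLDS.get(r.name, float("inf"))]
    near = [r.name for r in readings
            if 0.7 * THRESHOLDS.get(r.name, float("inf")) <= r.value
            < THRESHOLDS.get(r.name, float("inf"))]
    # Two "almost" signals together are treated as a developing event.
    if exceed or len(near) >= 2:
        return exceed + near
    return []

def broadcast(flags, team):
    # Stand-in for alarm distribution to a co-located or distributed team.
    return [f"ALERT to {member}: check {', '.join(flags)}" for member in team]

readings = [Reading("pit_gain_bbl", 8.0),
            Reading("flow_out_minus_in_gpm", 20.0)]
flags = combined_risk(readings)
alerts = broadcast(flags, ["driller", "company man", "onshore ops"])
```

Note that neither reading alone crosses its threshold; the point is that the system notices the pairing and delivers the "undeniable flag" to everyone at once.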

One common theme, after reading through this thread, is that the discussion is distracted mainly by the "How?", and these questions are minor in my opinion. Once it is clear to a critical mass of people exactly what the most critical missing piece in the assessment and communication of risk is, then, and only then, will providence move to make this happen. Garry was discussing this with me and many others a year or more ago, when most present were in agreement about the missing piece, and now Garry is bringing the notion back to the discussion, and it still isn't something everyone really wants to "stomach", because it is difficult, and difficult things get "skipped over" until all of the easy things are proven to still lack the proper solution. The missing piece in these most complex and dangerous drilling operations is ubiquitous access to summations of risk from individual "risk silos". Peter, many comments back, mentioned Montara, and if anyone wants to go back to the Montara summary report produced and presented by PTTEP, it is plain that PTTEP agrees, made recommendations, and presented the need for a similarly functioning system. Some on this thread are confusing Process Safety with Ella's original premise, which was Drilling Process Safety. While the concept is basically the same, the unknowns in drilling make things more interactively complex, as the earth model is tightly coupled with the containment system. This doesn't make things easier. Let's not redo the discussions that have already been had on that front. They are well documented, and much has been written, by myself included, on the subject. There are companies with individuals who are sold on the concept, and yet there are many more that have not implemented any system that fills the gap many see clearly exists. It is good to see Garry writing about the need for exactly this here a year later, and frankly the state of the industry in these regards is exactly as anticipated.
Every easy idea and schema will be developed before the more difficult needed ones will be adopted.

Ella, let me answer your question. There are many very good comments that describe the process of assessing and accounting for risk, the proper procedures, and the need for diligence in doing so; yet how the individual risk assessments, the "a priori" and "dynamic" risks, are neurally connected to a "master" risk assessment in "real time" is what is still lacking. The main reason is that it is "too difficult". Look, the most important element in any enterprise is diligence and patience. All of the latest rounds of regulations are actually attempts to regulate "diligence" by prescribing actions that ensure it, yet there will always be "loopholes" in those strivings. Over the last year I've had the great opportunity to work with many people and companies on exactly this subject, and I've seen that almost everyone agrees this is the one most important thing: it lends situational awareness, the ability to spot anomalies against pre-planned expectations, and the ability to respond safely to the dynamic conditions we encounter regularly. We do not know all that companies do privately, yet publicly the efforts to this point lack ubiquitous, dynamic risk level assessments and broadcasting. Also, one comment on the commercial motive that tends toward haste: it must be controlled. The same sentence in a book of ethics and morality says, "diligence leads to profit and haste leads to waste and ruin", and we can all see that play out in most accidents, from a child stumbling to get to a toy to corners cut to get finished in industry. The point is that regulators have known this for decades, through many rounds of disasters, and try to prescribe formulae and red tape to ensure diligence and prevent haste; yet a better way is to mix in "goal orientation", and the goal should be to alert people at all times to the danger level, because people tend to understand and take more caution when a "red flag" is raised, and it likewise prevents complacency during times of a "green flag".
Also, gathering this information supplies formality to the type of communication and risk assessment needed on a constant, dynamic basis in any enterprise. This simply remains lacking in ubiquitous forms. My suggestion is to go back and read Garry's 1st and 3rd comments. Garry repeated his 1st comment because everyone skipped over what he was saying, and the same goes for his third comment. My first comment says the same as what Garry is saying. My suggestion is to focus on that: eliminating haste and ensuring diligence.

Prescribing how this is done, once it is accepted and focused on, will lead many to understand the need for the situational awareness provided by a summation of risk silos. With knowledge of this, people tend to self-regulate their haste and motivate themselves to diligence, focused on the most important thing at every moment in time.

Wayne, my heart goes out to you. You mention "commercial interests", yet let's distinguish that not all commercial interests are necessarily bad; they are only a problem when they detract from diligence and lead to haste. Okay, so once we agree that the main focus should be on ensuring diligence and eliminating haste, we might also note that by definition diligence means a lack of haste, of acting before the needed precaution, etc., so let's focus on mandating diligence. There are companies doing very well in this regard on their own, using screening tools to identify complex wells and checklists to ensure that due diligence is performed. While there is still room for individuals to cut corners, this is less likely, and performance controls on individual task completion may also be utilized. In terms of mandating this, as of yet the new CFRs and the BSEE Final Rule get mixed reviews, perhaps a failing grade on a few items yet a passing grade on the well containment screening tool, etc.

For example, consider a regulation that would have ensured diligence at Macondo. We might have many choices of specific guidelines to require due diligence, yet let's pick the one that is most specific and say that more diligent design, monitoring, and assessment of the negative pressure test would have ensured that the engineer who designed the test also had to monitor the test, assess the pressures, and ultimately certify that the test passed or failed. Right? That is diligence in a nutshell. Yet four years after Macondo, the regulations relating to that specific action read as follows. According to the Final Rule, 30 CFR 250.423(c) reads:

(c) You must perform a negative pressure test on all wells that use a subsea BOP stack or wells with mudline suspension systems. The BSEE District Manager may require you to perform additional negative pressure tests on other casing strings or liners (e.g., intermediate casing string or liner) or on wells with a surface BOP stack.

And 30 CFR 250.423(c)(1) reads:

(1) You must perform a negative pressure test on your final casing string or liner.

So the mandate from BSEE doesn't ensure diligence; it mandates that the test be performed, and without ensured diligence the test itself remains dangerous. Also, there is no mandate on how to test the shoe: an operator may choose to test the liner without having tested the shoe first, or simply place a bridge plug on top of the shoe without any due diligence into how far above the shoe a bridge plug might be safe to drill out, not knowing whether pressure from the shoe had leaked, risen, and brought gas pressure below the bridge plug. Of course, oral mandates might prevent that, yet the most effective way to manage diligence in this case is to require the test itself to be designed, monitored, assessed, and certified by one engineer who has immediate access to every bit of information, preferably the operations engineer assigned to construct the well.

Why the resistance? No resistance? Then why the oversight? No oversight? Then explain why it isn't important to regulate diligence in these straightforward and less prescriptive ways, which really are goal-oriented regulations: simply stating a goal, safe testing, and ensuring it happens. Keep in mind that these issues are in addition to the glaring lack of ubiquitous "risk silo" summations. These same exact things are repeats from the last four years of discussions on these same topics. Not complaining, yet let's keep it real.

Peter: your comment elaborates the daunting task of communication and competence assurance among teams that are "risk silos", if you will. It is a good thing to focus on, as you are obviously well aware. The most important thing to communicate, in my opinion, is the risk level that each compartmental operation (each risk silo, i.e., mud loggers, drilling crew, subsea engineering team, onshore design and operations teams, each piece of equipment or wellbore barrier, etc.) contributes to the maintenance of the barrier. Competence within each "risk silo" is assessed as part of its risk level. Communicating the risk level of each "silo" is achievable. Summing the contributions into an overall risk level that is communicated back to each "silo" is true communication and most helpful to situational awareness, in which competency levels can then be tailored to match the current risk level within each individual risk silo, triggering higher-competence overall supervision and decision support as well. Otherwise, when the least competent individual is on the hitch during the highest risk, you will see why his or her decisions are subject to a higher probability of error. In a word, not tying competency levels to dynamic risk levels is "stupid". It should be done before the project begins, so the vacation and well control school schedules of all can be synchronized with the pre-planned estimate of the risk levels during the project. The actual versus pre-planned estimate makes a good anomaly detection device as well.
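As a rough sketch of what summing "risk silo" contributions into a master risk level could look like, with competency tiers tied to that level and a simple actual-versus-pre-planned anomaly check: the silo names, weights, 0-10 scale, bands, and tolerance below are all invented for illustration, not any industry standard:

```python
# Hypothetical silo names and weights (illustration only; weights sum to 1).
SILO_WEIGHTS = {"mud_logging": 0.2, "drilling_crew": 0.3,
                "subsea": 0.25, "onshore_design": 0.25}

def master_risk(silo_levels):
    """Weighted sum of per-silo risk levels (each 0..10) into one master level."""
    return sum(SILO_WEIGHTS[silo] * level for silo, level in silo_levels.items())

def anomaly(actual, planned, tolerance=1.5):
    """Actual vs pre-planned risk level as a simple anomaly detector."""
    return abs(actual - planned) > tolerance

def required_competency(level):
    # Tie competency/supervision tier to the current dynamic risk level.
    if level >= 7:
        return "senior engineer on supervision"
    if level >= 4:
        return "experienced hand on tour"
    return "standard crew"

# Each silo reports its own current risk level back to the summation.
levels = {"mud_logging": 3, "drilling_crew": 6, "subsea": 5, "onshore_design": 4}
m = master_risk(levels)
```

The same master level is then broadcast back to every silo, so that the least competent individual is never left alone on the hitch during the highest risk without anyone noticing.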

Human factors is classified by experts under the term ergonomics, which lay people know as posture at their desk and how well we see our computer screens. An example of ergonomics at work is the use of the term "cockpit error" to replace "pilot error", since a study of WWII pilots concluded that a small change in the position of an instrument in the cockpit was causing even the air force's best pilots to crash. Similarly, the focus on giving individuals vital information in ubiquitous fashion, in our case the current level of risk, is akin to a well-positioned gauge on an instrument panel, essentially a "decision support panel" that all of us on the team are technically capable of understanding.

Ella: I'm not a fan of a bridging document in the same breath as a ubiquitous, all-encompassing barrier risk level. A bridging document is usually only a statement of assignment of responsibilities, while a risk level is an assessment of the current condition of the barrier. You and others have mentioned Montara root causes, and I mentioned that the PTTEP final report noted the need for a ubiquitous summation of risk silos. Here is the link to the report: http://www.au.pttep.com/media/18761/building%20on%20the%20lessons%20of%20montara.pdf
The "Line of Sight" tool is their version of the "master" risk assessment of the barrier I'm referring to, so it cannot be said that envisioning the need for this is beyond any one company and must be developed as a "cross-industry initiative". Let it be clear that, in discussions independent of the Montara report and before any knowledge of its conclusions, the need for a ubiquitous summation of the risk level of the barrier was noted as primarily vital. The PTTEP report simply became another confirmation of the same conclusion that almost every report hints at yet doesn't quite hit the nail on the head.
Of course, "In all labor there is profit, but mere talk leads only to want", so saying this is needed is different from actually doing anything. There are companies that have looked at the complexity of such a system and abandoned its development due to difficulty. It is human nature to abandon difficult tasks unless they are clearly recognized as vitally necessary, unless a vision is created. Many times this isn't recognized until all the less vital, less difficult ideas are developed and found to still lack the effect of the more difficult and vital one. Clearly the "how?" is the needed process and is the difficulty, and the "why" is the needed outcome. The difference between diligence and haste is that diligence focuses on both the process and the outcome, while haste focuses only on the outcome. That sole focus on outcome causes accidents. Performance controls in commercial business management theory are usually only outcome-based, and in drilling these are the days-versus-depth curves. A "balanced" performance control in engineering management must include metrics for the process, and the most vital of these is the health of the barriers, since the process of drilling itself is simply the removal of natural barriers to the flow of hydrocarbons and their replacement with man-made conduits of flow control. With that in mind, the risk level before spud is essentially the risk of the natural barrier failing just under the rig, practically zero barring a major earthquake opening the earth precisely there; yet once the well is spudded, in deepwater riserless drilling for instance, the risk level grows steadily, drops, increases, stays the same, etc. over the project timeline. This is a process performance metric, not an outcome performance metric.
Just the term "process" is a step forward, however, since it signals that focus on "outcome" must be balanced with a focus on "process", yet "outcome" is equally important and in our case here the outcome we need must not be abandoned simply because the process is too difficult.

Good stuff, Wayne; you persevere, stick to your guns, and keep ignoring detractors. Saying "the issue is bigger than 'process safety'" as a comment on a thread titled "Process Safety" defines you, and I like you for that specific panache. Being a staunch proponent of reason and protocol, I will go on record as saying the issue is process safety, and particularly ensuring that focus on outcome is always balanced with diligence and due process. Google "due process" and it is defined as: "1) NOTICE, generally written, but some courts have determined, in rare circumstances, other types of notice suffice. Notice should provide sufficient detail to fully inform the individual of the decision or activity that will have an effect on his/her rights or property or person." If I am working on a drilling rig, this NOTICE would be a RED FLAG if the risk was extremely high, and this would give me the right as a human being to: A. Make sure that the most competent people are on the job at that moment in time. B. Heighten my awareness. C. Use equipment of the highest standards. D. Run off (or catch the next boat in) if I think the people or equipment on the job are not up to the task of operating at the high level of risk. Due process: do not forget that term for this issue in our industry. The broadcast of a ubiquitous summation of risk silos is exactly due process, yet the assessments and summation must be done before any ubiquitous notice of "due process" can be given, so we are already a couple of steps behind. Another term is "due diligence", and this strikes close to your stated battle against the profit motive; yet I tell you capitalism isn't bad if there is due process and due diligence. Commercial interests in capitalism lead to a focus on profit and thus on getting things done fast. Books of wisdom say that "a man quick in his work will stand before kings!", yet the same book of wisdom says that "diligence leads to profit and haste to disaster".
So really the issue goes back to the difference between haste and "quickness in work", or alacrity. Clearly the quickness must be coupled with diligence or else it is haste. We all know this and see it in our everyday lives. We have an outcome in mind yet must focus diligently on the steps of the process in order to reach our goals. In fact, alacrity may then be defined as a diligent focus on the job at hand, the process, if you will. So we come full circle to the need for performance metrics to assess diligence. Google "due diligence" and you get a lot of legal terminology and this definition: "The theory behind due diligence holds that performing this type of investigation contributes significantly to informed decision making by enhancing the amount and quality of information available to decision makers and by ensuring that this information is systematically used to deliberate in a reflexive manner on the decision at hand and all its costs, benefits, and risks." You see now my determination to steer the discussion in the direction of "due process" and "due diligence"?

Peter Aird says: In a 7-year deepwater development study of more than 100 wells, the process safety failure facts, based on loss control and not H&E, were as follows. Project facts: No one was hurt (zero). There were no spills (zero). There were no major traditional well-related problems in terms of well control (no kicks, few losses), little wellbore instability, and very little stuck pipe. In fact, no strings had to be backed off in any of these wells. So where were the issues? What were the safety-related incidents/accidents? Well, there were 191 major non-injury significant drilling-operations-related incident/accident events, with every well normalized and its data managed in exactly the same way. That is almost 1.5 years of operating incident/accident loss. (Big safety stuff!) How many such reports does the current management process capture, do we think?

The above illustrates the need for TRUE INCIDENT reporting to combat the OUTCOME BIAS.

It has to be clear that process safety is ONLY well containment and barrier quality, and that overall safety might need to be more clearly described as process safety, personal safety, and project safety, since if the derrick "craters" this is neither a personal safety nor a process safety issue, yet it is a group safety issue.

Peter: You're right, of course, that measurements of key safety metrics don't apply to well containment and barrier issues; yet focusing on the keys of the process, with a dedicated process engineer on duty every tour, was an idea presented at last year's offshore process safety conference. Andrew Hopkins, in his book, used the term "slacker", meaning this person's only duty was the barrier. This would be a "barrier engineer" (BE), responsible for summing the health of the barrier (the present risk level) at every moment in time and communicating that status back as a ubiquitous risk level, giving further situational awareness to everyone on the project.
One response from Transocean after Macondo was that if they had been given clear notice of the increased risk, they might have heightened awareness and responded with more rigor and vigilance; this was also the conclusion of PTTEP at Montara, leading to their devising the "Line of Sight" tool as the solution. Incidentally, a BE and "BROADCAST" system answers the conspicuously absent discussion, raised by Peter, of what is owed to the personnel on the rig. In my opinion, what is owed is indeed to help them and not burden them with RED TAPE perhaps designed simply to ease the "consciences" of regulators and other shore personnel, and, if due process is the standard, to allow for an accurate accounting and NOTICE of present risk, presented in a ubiquitous, consistent, and accurate manner. All the talk of standards in construction misses the point: standards must be kept, yet in construction any and all plans are executed with varying levels of success, equipment isn't always up to spec after installation, and this affects the overall health of the barrier to flow. A "wet shoe" is an example of this. On the simplest operations the current health of the barrier is easily communicated across tours, crew changes, and stages of the procedure, yet formalizing this as the primary duty of a BE is the key to due process; it actually adds an element to an operation that helps, rather than simply adding layers of red tape that do not. The BE would be responsible for assessing the barrier, and if a further test were needed to make that assessment, that too would be the BE's duty. The risk level would be "ubiquitous": broadcast to the entire team and present on the morning drilling report as a color, plus a number for precision. In the most complex operations, service companies could "tailor" personnel, equipment, and procedures with that information, matching the risk level with competence levels, equipment standards, and procedural caution.
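A trivial sketch of how such a "ubiquitous" risk level might appear on a morning report, as a color plus a number for precision. The 0-10 scale and the color bands are assumptions for illustration only, not a proposed standard:

```python
def flag_color(level):
    """Map a 0-10 barrier risk number to a broadcast flag color.
    The band boundaries here are illustrative, not an industry standard."""
    if level >= 7.5:
        return "RED"
    if level >= 5.0:
        return "AMBER"
    return "GREEN"

def morning_report_line(well, level):
    # One line on the morning drilling report: color for at-a-glance
    # awareness, number for precision.
    return f"{well}: barrier risk {level:.1f} ({flag_color(level)})"

line = morning_report_line("Well A-12", 6.3)
```

The color gives everyone on location the at-a-glance "flag" while the number lets service companies tailor competence levels and equipment standards against a precise value.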

Ella: There is nothing "singular" or "unidirectional" about summing risk and sharing its sum. The summation is "neural" and drawn from every component of the project, so there is nothing less singular and unidirectional than that; it's all-encompassing. The missing component is actually encompassing the components of risk and sharing them with the project as a whole. Eventually this will be imposed if it isn't embraced. Simply look at the fallout from the deaths of the family in the Texas Panhandle from the H2S well that blew out. Rigs now must employ a system very nearly the same as a BROADCAST with a BE, i.e., flags of symbolic color must be flown on the rig that give DUE PROCESS and notify all on location, or even those who might drive up unsuspecting, of the level of clear and present danger, along with a dedicated H2S engineer. The right to risk information is not a new concept, nor is the energy industry exempt or able to resist the trend, because it adds safety. Really, the problem is that not enough of us have a sufficient grasp of the history of the industry to see the parallels to our current issues. Let's simply ignore the statement about "spreading risks", because the ultimate "spreading of risks" is socialization of risks, and ignoring risks and due process is simply an unacceptable form of socialization of risks. The due process of BROADCASTING the risk level is a "goal-oriented" prescriptive regulation: how the risk is summed is not prescribed. It was argued long ago by Immanuel Kant that good will is required; perhaps you can start a discussion on how to enforce goodwill, and that is your answer to how it is enforced. The current additional processes are "process" prescriptions. This is red tape at its absolute worst! Consider the barrier a "dam". We're only talking about one engineer monitoring the status of the health of the "dam". Go to the Hoover Dam and ask them how many engineers they have on staff doing this exact thing.
If there is a town built downstream, you will see just how possible this is, that it is taken seriously, and, more importantly, that it is done. If you live below that "dam", you look up and see the color of the flag they are waving. This is due process, and while we could debate whether every human is owed it, we might all agree that people serving components of the bigger job benefit from situational awareness, so the project itself is safer because it shares risk information ubiquitously. I'm not sure how this cannot be accepted completely.

Way to go, Peter! Everything you said needs to be carefully considered. You said it: "everything can be better measured but we choose not to". I'm still not sure how days-versus-depth data translates into anything other than the outcome performance controls we're used to, or why the thinking that led to problems will now lead to solutions. Rushmore offset data doesn't record barrier data, so how do we measure performance of control over the barrier when we are only reporting and measuring days versus depth, plus comments on a drilling report that give superficial and often wrong indicators of the health of the barrier, e.g., "bumped plug; cement job a success!"? Yet again, most are once more pointing to the fact that in hundreds of wells there is not one well control incident. Low incidence, high consequence is the nature of the beast. It's so easy to discount the danger and to overestimate the utility of processes, declaring them successful and useful from the top end of the triangle, when the issue is low incidence and high consequence. It's the same logic as Las Vegas: the truth is hidden in the obscurity of probability, versus the propensity of constant streams of data from everyday operations. Hidden, still, are the brief moments when little things go awry and never make it onto the morning report. Peter Aird talks about treating incidents as vital instead of just accidents, since in focusing only on accidents we suffer the "outcome bias": incidents suggest poor performance that simply avoided an "accident", or consequence, due to luck or the percentages. I've worked as a company man as well, Peter, and yes, unless one has, the subtle challenges are not always appreciated while engineering designs and operations in the office. If the subject were not so serious, the lack of recognition of the essence and vitality of constant, dynamic barrier assurance, and, in the most complex operations, perhaps a dedicated barrier engineer (BE), would be humorous.
It isn't so much that barrier assurance isn't on the minds of key personnel as that it should be communicated ubiquitously and dynamically to everyone on the team, to heighten awareness; and in the most complex operations this begins to exceed the capability of informal assessments in the mind of one person who has other duties. This is situational awareness 101, the human factors of cognitive overload, and the due process of rights to risk information rolled into one. If the barrier assurance is thorough enough, it may be summed into a risk level and combined with competency levels, so that in times of highest risk the highest levels of competency and equipment QA may be utilized. Look, formalizing the barrier assurance measures will itself begin gathering the data necessary to learn more about the weak links and to focus diligent attention where it is needed.

That is exactly it. That is the essence of all of the preaching. On the simplest projects this is all that is needed. On the most complex projects, the "risk silos" relevant to the barrier need to be formally gathered, perhaps overseen by a dedicated BE, summed, and reported ubiquitously back on the morning report and everywhere, to everyone on the project, in real time. The more this is done, the better the data gathered, and it improves. Triggers and alarms can be dealt with by the appropriate SMEs, eventually automatically (or not), as Garry was speaking of (which was esoteric to most at the time he stated it). When, where, and whether a BE is needed, or whether any or all of this is automated and perhaps artificial intelligence added, is beside the point. This process starts, as you say, by getting this metric formally onto the morning report. The quality and completeness of the data will improve as the significance of this metric is seen. This has been done "between the ears" and needs to be formalized in the most complex operations and during changeovers, like tour changes and crew changes, so why not formalize it and begin gathering data that will impact well containment and barrier quality during well construction?

Good night. It is nice to see this understanding in others. Of course, the limited definition of process safety is mainly due to its inception in refineries, where the only moving part that interacts with people is the fluid itself (with minor exceptions), and it is contained, until it isn't. On a drilling rig, moving parts are everywhere, and the safety of the "process" then also becomes personal safety, well control, major hazards, etc. It is still assumed that we are talking about well control when we discuss process safety in drilling, and if that is the definition and scope, the barrier assurance metric is the right discussion and focus. Human factors are entangled in this metric in complex ways, as are equipment quality, the risk in assessing barrier quality, and "commercial" resistance to the costly testing (due diligence) that would reduce its uncertainty. The first step, however, is "funneling" the focus to this precise point. Once the focus is there, the situational awareness is there that supports better decisions and competency levels. Then personal safety and major hazards, the other components that result in more injuries, deaths, and lost time, as Peter A. alludes to, can be advanced in similar ways by gathering the right kind of information and sharing it. Once a metric is used and focused on, people of "good will" begin reporting and gathering it, and not before then. It's a bit of a "Catch-22", yet it's true, and achievable.

Ella: process flow diagrams are not something seen a lot in drilling, and if there is any benefit in them, it is perhaps that they would identify transitions (which harbor the maximum and minimum values of risk derivatives). I won't speak for everyone's opinion on the relationship between the company man and the OIM, yet since the contractor and operator have different procedures and ideas/opinions on well control procedures, BOPE configurations, etc., this relationship can be strained at the moments of highest risk, when a definitive decision and consensus are most needed. While bridging documents hope to mitigate the majority of issues, they don't address the dynamic nature of this relationship or the disparate risk silos and communication lines that exist between the two autonomous organizations. This is just one opinion; mine.

Well, yes, and supporting the decisions of the company man with supervisory levels dictated by the dynamic risk level of the barrier, like requiring the engineer who designs the negative pressure test (very high risk potential) to "sign off" on it. Of course this is "pre-emptive" prevention, while the assignment and clarification of duties during well control is "reactive" mitigation; the two form the sides of a "bow-tie" safety diagram that still has room for improvement.

If anyone designs a negative test that, as you say, tests both the float shoe and the hanger seal in the wellhead housing at the same time, as they did at Macondo, they're in trouble eventually, and that isn't competent. They all, in a certain organization, did it like that to save time before, and yet it was never competent. It's fine as long as everything holds, and yet that isn't really the point of a test, now is it? To do both tests at once, the drill pipe is miles off bottom. If that makes sense to anyone as the engineer, then yes, it would be better not to have the engineer involved, and there does need to be "a whole lot of discussion", because "help" has already passed. As for regulators, who would involve them in structuring our operational team and lines of communication? That was never suggested. The point is in supporting decisions better, not intrusively. And yet at moments when the guys on the rig are scratching their heads and need support (pretend this never happens all you want), the drilling engineer who designs and complicates a test that puts the barrier at immediate high risk is neither observing the operations nor checking to make sure the guys on the rig understand and know what they are doing? The engineer who designed the test should know; right? If they don't, who does? Does the engineering manager know? Yes, there does need to be a whole lot of discussion, and yet with people who know what they are doing.

Ella: I watched it and provide my notes here in two parts, in service. My apologies for using the term "guy" generically; I did this fast and didn't catch their names. There remains confusion between Process Safety (Well Containment) and focusing on process performance controls in safety (which includes personal safety, major hazards, and well containment). I think there needs to be one single focus on Drilling Process Safety as Well Containment, and one single focus on process performance controls for major hazards and personal safety. The confusion is palpable watching the panel, and in the industry in general, and it is stifling progress. Gaming safety measures: middle managers with a commercial focus use performance controls that are commercial-outcome based rather than balanced with process performance controls and diligence-based metrics. If the metrics were truly effective there would be no "gaming", except by people harboring the dumbest of mindsets. LTIs based on injury rates affect reporting. Process Safety Indicators instead of LTIs? Yet they suggest none. At 20:00: "I'm not convinced that we will not have another Macondo... I'm not convinced that a lot has changed... We need to get down to the ground and make people understand what the risks are and how we manage them, and engage them to avoid these events. The people…" Look, here we go again: guys stating the need clearly and yet not hearing the answer that myself and BROADCAST, Garry, and now Peter see clearly as the answer: DUE PROCESS. Let's look again at the definition of due process in the panelist's own words: Gather the risks of the risk silos on the project = "what the risks are". Assess and measure the risks = "how we manage them". Ubiquitously broadcast the risk level to create situational awareness, heightened awareness, and matching competencies, supervisory levels, and equipment standards = "engage them to avoid these events." At ~22:00: "I'm not sure we can legislate culture.
I think we can legislate symptoms of culture." I agree that lawmakers cannot legislate culture unless they realize that certain formalities and actions create culture. Only then can the actions and formalities be required. 23:55: personalizing consequences. The speaker implies that people in the office are too immune to consequences. This will continue as long as frontline workers take the blame for office laxities and informalities that offer them no decision support, due diligence, or due process.

One guy suggests that realizing the consequences will change things. I think he implies that things will change simply from the enormity of the disaster and its publicity as "what can happen". Yet we've seen this get pushed from consciousness and ignored, with people hoping it will fall out of the spotlight, like the LUSI mud volcano, Lake Peigneur, etc. These are not studied as learnable lessons, yet almost as laughable entertainment, years removed, or even as a dark family secret never to be discussed. Deviance from procedure is only viewed through the consequences that stem from it. This is the outcome bias. Clearly true. This is what Peter Aird rightly speaks of often, thankfully. 30:15: Leaders are swamped with commercial interests. "Must educate middle managers from the top as to the benefits of safety measures." Wow! This is a very telling statement. Everybody feels compelled to cheat because of the outcome bias that doesn't punish deviance, owing to the low incidence of high consequence per deviance. Again, this is outcome bias. 33:20: "must develop procedures that allow for human error." Yes! Just Culture also means you cannot punish human error! "Creating the widest possible system". Is he talking about a Just Culture? 35:00: Multitasking. What pressures were on those workers? Cognitive load that leads to cognitive errors in decisions. We must measure cognitive load and include it in the summation of risk silos. In this case each person is a "risk silo". Each individual's cognitive load increases that silo's risk level. When summed, this is communicated back. Individual cognitive loads are measured in the process and can then be managed by splitting cognitive loads, adding competency, etc. First it must be seen as important, gathered expertly, assessed, measured, summed, and ubiquitously broadcast back to "the people". DUE PROCESS. 40:00: Major accident hazard management must be focused on. Nothing is ever finished and nothing is ever beyond criticism. Baloney!
BROADCAST and due process are beyond criticism. They obviously have heard of neither. 42:00: "Complacency." Complacency is mitigated by BROADCASTING risk. Isn't this the reason for the Terror Alert System? And the forest fire risk system? Yes! 45:00: Organizational change. It is the responsibility of those managing the organization to make sure the processes are in place. 48:00: different metrics for personal and process safety. Competency. Design. People. Procedures. Equipment flaws. He thinks they are inherently linked. They overlap. I agree. LTIs can be a gross filter for overall and personal cognitive load, yet might be used against people in an Unjust Culture. 51:45: Make the people in the office accountable. True, so true. Yet prosecuting company men when every expert points to organizational, systemic failure is not such a good start.

These discussions, and safety in complex operations on the rig in general, suffer from not seeing the forest for the trees. A good question to ask is "what is the one concept that would make the biggest impact and should be enacted first?" The question has already been answered on this thread, yet it is not universally seen because of the myriad of other ideas, concepts, and comments.

Let's take that further and say that the one action that would prevent unbridled commercial interest from encroaching on the safe decision space, and satisfy the due process of protecting crews by giving them notice of the current levels of risk that may affect them, is pulling performance metrics from the risk of the process. The diligence needed to do this is the "due diligence", and the act itself is the "due process"; both were missing in the latest tragedies. If you already understand the one subject I've discussed for three years then you do not need to read on, because it is the same thing. A good question to ask is "what is the one thing that would make the biggest impact, that should be worked on first?" In answering this, the issues that have been brought up incessantly can be recalled and the one thing that makes the most difference applied. The list includes using process performance controls in addition to commercial performance controls, protecting crews, heightening situational awareness, assessing risks carefully (due diligence) and giving notice to crews (due process), assessing, measuring, and managing the risk of the engineering design to below reasonable levels, compliance with the "right to risk information", appropriating competency to the risk level of the project, and being able to regulate this behavior. First the question: "how can this be done?" In essence, by assessing the process for risk and then broadcasting this neurally gathered risk information back as a ubiquitous risk level. The forest (the overall risk of failure of the barrier) cannot be seen because of the trees (the individual silos of risk: every individual comprising the human barrier, every tangible component comprising the wellbore, the geomechanical reality of the earth, the deviation of the risk assessed in planning from the risk that exists after implementation, the disparate experiential data from tours, crew changes, etc., and the decisions being made on this disintegrated vastness of data).
We do this in less complicated situations and operations, between our ears, in many areas of the rig and in life in general. The difference is that in some projects these "risk silos" extend beyond the view and perception of one person and therefore must be "summed" and returned to the individual. I'm not an HSEQ professional but a drilling engineer; I'm more interested in getting clients to utilize this concept than in developing it and profiting from it, because it will make things safer, and we have many children as well. The actual concept is that of a "forest", the ubiquitous summation of risk silos, and the "trees", the risk silos themselves.

We do in fact need to "DO" something, because at this point it is only talk, and that is really nothing! The answer to each of your questions, as to "gaming", why no one has process safety indicators, and why the regulators "have not even contemplated this", is that this tool doesn't exist except between the ears of those of us who have considered it. Human beings devise solutions that match the tools we have in our toolbox. This is the great practical nature of human beings. If a tool isn't appropriate or does not work, and yet is "required", it will be "gamed". If the most appropriate tool to solve an issue doesn't exist, then many less appropriate tools will be thought of and utilized in its place, with lesser results. This is THE human factor, and it is based on the great human attribute of action, NOW!

Let's also consider that compliance with standards, procedures, best practices, or even better judgment suffers from "drifting" into poor practices because of the nature of low-incidence, major-consequence events, which only happen a small percentage of the time they are possible, because humans cover imperfections, perfectly, as a team, a large percentage of the time. A lot of the reason for this is that the crew and others involved suffer from the numbing effect of staying alert over the course of a long project, because of the long extended periods spent at an actually low risk. This is in essence the effect of "crying wolf". The incessant calls to safety from the C office, universally and in our industry in general, create this "false alarm effect" over time, since these accidents never happen on most projects or to most crews and are, by definition, low-incidence/high-consequence. Read: Cry Wolf: The Psychology of False Alarms by S. Breznitz. It basically negates the desired effect of keeping crews, and even engineers, G&G, supervisors, and upper-level managers, alert to the moments when risk IS highest. There needs to be a distinction between moments of low risk and high risk, or we get the deadening insensitivity to moments of highest risk, the "false alarm effect", because of the lack of a ubiquitous broadcast of current risk levels. Garry: QA requires QC, and QC requires measuring quality. Q of commercial performance gives QA/QC of commercial risk, and Q of well containment gives QA/QC of the barriers that assure well containment. Just making sure the focus remains on "what" to measure in process safety. To be clear, it is the risk to the barriers that constitutes their quality and the risk to "well containment". This is the most vital measurement of the process of disturbing and replacing a perfectly good natural barrier to hydrocarbon blowout with our man-made one called a well, via the process called drilling and constructing a well.
Risk is the one parameter that MUST be dynamically assessed during this process and then shared back, ubiquitously, to every component. Look for a more important parameter if you must, yet you will eventually find your way back. Also, behind the focus on the "false alarm effect" is the psychology of Signal Detection Theory (SDT). This theory is honored by the true due process and due diligence of well containment that the broadcast of the ubiquitous summation of myriad risk silos offers. Here it is on Wikipedia, in service to this focused discussion: http://en.wikipedia.org/wiki/Detection_theory Peter: Key word, "overridden". If the CEO adopts this measure there is no need to override. Influence is by definition needed. Who will lead the way, the "leaders" or the regulators? If I were the CEO, I would definitely not enjoy being overridden, I think. Peter: Yes! Thanks for the support, and allow me to refine your great thought that we need to replace "what if" questions with "what do you have in place if": we need to complete that question as "what is the summation of risks of what we have in place if". We must assess the landscape and THEN MEASURE, yet not measure indiscriminate parameters; specifically, measure the risk to the barriers. And not only measure it but then communicate it back, which, as you completely agree with and understand, is the due process and situational awareness that appropriates further decisions across many parameters in a self-reinforcing, positive feedback loop of refinement and improvement.
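Signal Detection Theory makes the "cry wolf" argument quantitative: sensitivity (d') is the separation between the signal and noise distributions, and a system that alarms constantly drives the false-alarm rate up without improving the hit rate. A minimal sketch, with illustrative (not measured) rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """SDT sensitivity: z(hit rate) - z(false-alarm rate), i.e. the
    separation of signal from noise in standard-deviation units."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# A discriminating broadcast: alarms mostly fire when risk is real.
print(d_prime(0.90, 0.10))  # high sensitivity, roughly 2.56

# Incessant "cry wolf" alarms: same hit rate, yet alarm fatigue from a
# high false-alarm rate collapses the crew's effective sensitivity.
print(d_prime(0.90, 0.70))  # low sensitivity, roughly 0.76
```

This is the sense in which broadcasting a *current* risk level, rather than a constant call to safety, keeps the false-alarm rate low and the signal detectable.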

Garry: first, in regard to your previous comment, a good sign of a "coherent" conceptual system is that its components match the overall system in design. In this BROADCAST system the overall FLAG bears exactly the same data and due diligence as each component "FLAG" of the system. It displays each parameter of assessment of a coherent, "scalable" system. The overall due diligence matches the due diligence of each component, and broadcasting the assessment IS due process, situational awareness, and a positive feedback loop of learning, situational competency, and improvement. In regard to tectonics, of course this is true, and we are painfully aware of the community reactions to the Youngstown induced seismicity and the deadly induced earthquake in Russia years back; the reverse is also true in subjectively blaming an earthquake for the LUSI mud volcano that is ongoing. This points to a lack of due diligence and process, and also of good will. If an engineer brought up earthquake induction as a line-item risk, he or she would probably be packing the office into boxes before a geomechanic could be summoned; has that lack of knowledge, understanding, and due diligence changed at all? Peter: perhaps good teamwork starts with Kantian good will, then builds with knowledge, wisdom, understanding, focus, and scope, and stays together with the courage to face these types of problems; not earthquakes, yet ill will, lack of focus, narrow scope, and fear. Most often, though, it is probably because people are too busy, dutifully and extremely competently working in each of these component systems, to spend the time it takes to understand these concepts and the due process that would enhance their designs and executions. Low-incidence events allow selfish attitudes that suggest "this isn't my problem; let the people alive in 100 years deal with the 100-year event".

It's scalable and transferable to major hazards, personal safety, and almost every industry (a sign of a general lack of patterns of communication, especially within interactively complex and tightly coupled systems, and of its appropriateness). The fact that it uniquely meets standards of due diligence and due process, and its inherent premise of "the right to risk information" (standards in other industries), is another sign that it is a lack that should be rectified now, internally, rather than imposed and thus "overriding" corporate management and its sovereignty. Like traveling to the moon, "not because it is easy, but because it is hard", we should pursue this "because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to" succeed at (besides making things safer and providing several basic "Just Culture" concepts/human rights along the way). Thank you for your continuing consultation and wisdom.

Peter once again is making the sense that we all should on this issue of really defining risk, and there is no room to agree with you more on the need to change the corporate mindset of being locked into reporting useless or incomplete numbers; desiring, even expecting, someday to have an epiphany of change, but not willing to change the mindset or the culture of locked-in reporting to achieve it, nor willing to ask the hard questions in order to uncover what must be done. If you look carefully at exactly what the distinction between "incidents" and "accidents" is, you can begin to understand that incidents are "triggers" to consequences that may or may not be mitigated. Those numbers can be used to build conclusions about the ratios of accidents to incidents, according to a pattern developed by Frank Bird, as you've related before. Also, the subtle distinction in looking at incidents before accidents is that this extracts the random, probabilistic nature of consequences from the overall "risk" equation and focuses closely on it. I've pointed this out here before, and you are zooming in on it: the need to understand the equation of risk more closely. The actual formula for risk. Most people miss the opportunity to learn and utilize better metrics by skipping this simple act of zooming into the details of a big, billion-dollar subject. Peter, the reason it isn't done is probably velleity and a lack of hands-on experience. HSE isn't used to drilling hands getting involved in designing actual reported metrics in a big way.
RISK = EXPOSURE x VULNERABILITY x PROBABILITY. Because CONSEQUENCES = EXPOSURE x VULNERABILITY, the equation is often seen as RISK = PROBABILITY x CONSEQUENCES, and it is wrongly spoken of ("probably won't happen"), and even thought of, as simply RISK = PROBABILITY. Clearly, because the reporting of numbers useless for hazard recognition is so universally omnipresent, breaking this formula down and really knowing the details is rarely done. If it were done, the focus would be on EXPOSURE and VULNERABILITY, which is exactly what Peter Aird, an actual drilling person, continues to point out, with less response to its validity than there should be. A system that gathers information related to the EXPOSURE and VULNERABILITY of the barrier will gather "true" risks not clouded by OUTCOME BIAS. Reporting the exposure and vulnerability of the barrier (and of other hazards in general HSE and MAE) is the start; summing this and broadcasting it back to the small teams working on components of the bigger project is due process, and it adds the situational awareness that makes the rig a safer place to work. That is what defines Process Safety, and summing the exposure and vulnerability to HSE and major hazards can be done in the same way as the broadcasting of risk levels to well containment that I've been proposing. The information people need is their exposure and vulnerability to hazards; the "right to risk information" is something many industries think of as due process and human rights, and situational awareness experts think of as common sense in mitigating risk. First things first: in order to analyze exposure and vulnerability to hazards, the incidents need to be reported, and not simply the ones that lead to accidents of the level currently reported.
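The decomposition above can be written out directly. The numbers below are illustrative placeholders on 0-to-1 scales, chosen only to show how a low probability can hide large exposure and vulnerability when only outcomes are reported:

```python
def consequences(exposure, vulnerability):
    # CONSEQUENCES = EXPOSURE x VULNERABILITY
    return exposure * vulnerability

def risk(exposure, vulnerability, probability):
    # RISK = EXPOSURE x VULNERABILITY x PROBABILITY
    #      = CONSEQUENCES x PROBABILITY
    return consequences(exposure, vulnerability) * probability

# Outcome bias in one line: exposure and vulnerability are both severe,
# yet the small probability makes the outcome-based number look benign.
print(risk(exposure=0.9, vulnerability=0.8, probability=0.05))  # -> 0.036
```

Reporting only the product (or only the outcomes) discards exactly the two factors, exposure and vulnerability, that a crew can actually act on before an event.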

Garry: It isn't difficult to distinguish between a high-risk area for induced seismicity and a low-risk one, as you know. The formula is simple and involves the rigidity (shear modulus) of the rock, the slip length (fault movement), and the rupture area. All of the above are limited in sedimentary basins, because rigidity is very small in sandstone and claystone compared with the granite that we do not drill. Also, slip length and rupture area are large in granite and small in sedimentary basins, because large rupture areas are possible in granite and not in buried sedimentary rock. And the potential for large tectonic movements is relegated to seismic hotspots that align with larger crustal plate tectonics. Generally, as engineers, we are in a "sandbox" devoid of these types of "seismic hazards", yet the due diligence must be done initially, and shaking Grandma's chandelier is going to stir up interesting discussions that we must foresee and not mitigate with "pizza coupons" while thinking we are winning friends and influencing people. Piezoelectricity isn't studied in the realm of our petroleum business because we don't drill the granite that contains quartz in massive quantities. Some extremely open-minded individuals (a euphemism?) ask a question similar to whether "lightning" strikes down or up: does earthquake light originate in the granite and move to the heavens, or vice versa? Any further layer of discussion becomes unacceptable to less robust mental and emotional structures. Are they, and our petroleum basins, easily disturbed? Yes, in some cases, like drilling in Gazli or on the "Hayward" fault (double entendre intended) or another seismic hotspot. This is a general question only valid in specifics. Not giving a project at least a cursory scan of its seismic potential is reckless. When thinking of seismicity of a magnitude that disturbs, this must include the most sensitive souls living among us, especially if their fears are unfounded.
Many oil companies have learned this lesson the hard way, dealing with the present state of opposition presented by escalating protests against "minor" induced seismicity. A well was drilled in the GOM that induced fault movement, and that case isn't the only one; it happens, yet a fault in sand isn't the same as one in granite, and plate tectonics does not have a movement "seam" in the GOM, so there is a small limit to the potential "slip" length. It can, however, broach a conduit for flow to the surface and loss of well containment in specific cases. Ella: Clearly the "bad players" with ill will and the intent to circumvent rules, laws, codes, and regulations that work need to be "overridden". A focus on performance controls based only on the traditional commercial interests taught at MBA schools in due time leads to a motive for "gaming" the system and to staffing drilling teams with people who will focus on TASK and not PEOPLE, on the bottom line and ease of reporting, rather than on diligence that costs money. Goal orientation by regulators would be the right first step, not prescriptive measures, mainly because how can regulators step up and claim they know the right prescriptions after centuries of not knowing, understanding, or proclaiming the need to communicate the value of measuring exposure and vulnerability to hazards and the due process of summing these risks and broadcasting them back to all the people on the project? The answer is they cannot, and proclaiming oneself an amateur is not a strength of either the government or the leaders of industry. We have problems admitting that we are less than expert. This is no way to begin to improve. We need an open and humble mindset and the will to move beyond mere velleity. Signal Detection Theory says noise prevents the detection of signals.
The process case is counterintuitive, since the noise is "the trees" and the signal is "the forest", the risk of the sum of individual risks; due process is eliminating the "noise" of "silence" and "complexity".
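The rigidity-slip-rupture argument above can be put into numbers. A hedged sketch: the standard seismic-moment relation is M0 = mu x A x D, where mu is the shear modulus (rigidity, the quantity this relation conventionally uses), A the rupture area, and D the average slip; moment magnitude is then Mw = (2/3)(log10 M0 - 9.1) with M0 in newton-metres. The rock parameters below are illustrative order-of-magnitude values, not measurements from any specific field:

```python
import math

def moment_magnitude(shear_modulus_pa, rupture_area_m2, slip_m):
    """Seismic moment M0 = mu * A * D (N·m), converted to moment
    magnitude via Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = shear_modulus_pa * rupture_area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Soft sedimentary rock: low rigidity, small rupture area, small slip.
# (Illustrative parameters, chosen only for order of magnitude.)
print(round(moment_magnitude(5e9, 1e5, 0.05), 2))

# Crystalline basement (granite): high rigidity, larger rupture, larger slip.
print(round(moment_magnitude(30e9, 1e7, 0.5), 2))
```

All three factors shrink together in a buried sedimentary section, which is the quantitative content of the "sandbox" point: the magnitude of an induced event there is bounded well below what the same process could produce in basement rock.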

Bassey, it is agreed that the scope of process safety can be extended to any process in the life cycle of petroleum delivery to our customers, yet my expertise extends only to the drilling process, and that is the most common association at this time, focused on the drilling disasters; it is noticed, though, that these concepts transfer to the bigger science and the lesser disciplines as well. So, with regard to any definition of process safety, it is time to re-engineer the metrics of risk and, more importantly, to communicate in terms of the details of risk, to make situational awareness omnipresent and accessible to all who have the right to risk information according to due process. This isn't about human rights; it's about acknowledging that people make our systems safer, and that giving people the right information will make our systems safer still. Furthermore, the distinction lies in the myriad quality and quantity of information on a project that might be thought important to communicate between the individuals, teams, units, companies, etc. working on it. Risk IS the key information, containing the most vital information available to share. The confusion begins because of the multitude of information that is mistaken as equal to or more important than communicating risk. The metrics of risk are re-engineered by studying exposure and vulnerability, which equal consequence only a percentage of the time, because humans are perfect a certain percentage of the time. This is the part of human error that people do not understand. We do not understand that our drilling systems are imperfect and only become perfect part of the time because of the efforts of people.
People are adaptable and react to solve the problems that our systems create all of the time, because our systems are constantly in a state of wear and tear and disrepair, unable to detect problems, and yet we make them work perfectly most of the time despite the 50% incident rate Peter represents as the truth. Because of human intervention, our EXPOSURE to opportunities for injury, accident, and damage, Wayne, is constant, yet our VULNERABILITY is limited, and it can be minimized even further if we report, study, and respond to this reality (this is Peter A.'s assertion). There is a built-in probability that normal human error will prevent a person from stepping in and stopping a failed physical component, and people are more VULNERABLE at certain times. During vulnerable times of high cognitive load and complexity, adding support in decision making can decrease the probability of bad decisions. So metrics that determine those moments would in fact be support metrics for decision making. This is artificial intelligence's place in the human barrier preventing uncontrolled flows, and also in personal safety and major hazards. RISK to the human barrier is indeed about decision making and the details of vulnerability and exposure to cognitive loads, just as risk to the physical barriers is about vulnerability and exposure to loads from the earth. The world is better off with money, Wayne, because it allows us to communicate and trade with others and have dominion in the world. These types of innovations are better developed in the free market, because competitive influences refine better than well-funded projects without the constraints of competitive pressure.
The government regulators, Ella, simply must keep the market truly free and fair to prevent the socialization of risks. Once this practice is firmly established and carried forward for many years, corporations will innovate these "better" systems for commercial reasons, and therefore good will and loftier motivations become irrelevant. In that case, Ella, CEOs can choose the systems they adopt, yet all costs of all risks will be internalized, and an inextricable profit motive can be the driver. No "overriding" prescriptive regulations needed. Earthquake risk is internalized, Garry.

Bassey: It's clear not everyone is paying attention to the degree necessary to understand how we manage the status quo, yet there are many who are expressing the desire to know more. By measuring vulnerability and exposure and creating situational awareness of current, dynamic risk levels (the status quo), each individual involved manages his or her own risk, and the team's appropriate competence, supervisory levels, and equipment are matched to the levels of vulnerability and exposure to hazards. It's like the US Forest Service's National Fire Danger Rating System (NFDRS). The NFDRS is a system that allows fire managers to estimate today's or tomorrow's fire danger for a given area. It combines the effects of existing and expected states of selected fire danger factors into one or more qualitative or numeric indices that reflect an area's fire protection needs. It links an organization's readiness level (or pre-planned fire suppression actions) to the potential fire problems of the day. Knowledge of these levels can help forest visitors decide whether or not to have a campfire or ride their OHV in a grassy area. Homeowners may choose to postpone burning a debris pile if they are aware of the fire danger level that day. Contractors working in the forest may consider extra precautions when using equipment that might produce sparks. In some cases, the National Forest may even restrict certain activities based on the fire danger levels. Key words: "It links an organization's readiness level to the potential fire problems of the day". For our most interactively complex and tightly coupled projects, this would mean first assessing the potential problems of the day and broadcasting them ubiquitously to everyone on the project, who then tailor their readiness level (situational awareness) to the current level of risk. In this system, the diligent assessment and communication of vulnerability and exposure to hazards becomes routine and thus managed.
This is DUE PROCESS. This same type of system is MANDATORY for H2S because of the drilling in the Texas panhandle that killed an entire family. We need to study and learn from our industry's vast history and integrate the lessons learned into dangerous operations. It's interesting that otherwise brilliant people in our industry run like their hair is on fire from new things, even the most simple, effective, and clever concepts that quickly add value to the status quo. The English translation of the Latin "status quo" is "current situation". Before we can change the status quo, we must assess it and measure it; then we can manage to improve it. We assess the vulnerabilities and exposures of every component comprising the status quo; measure; sum; then present this back, clearly and omnipresently, to all, as "the status quo" or "current situation". This is situational awareness, or status quo management 101. We are in essence making the status quo apparent to all, so we can either make changes or realize none are necessary. People do not really protect the status quo; we are often simply not sufficiently aware of its scope, breadth, and components, and people are rightfully careful not to change things they do not sufficiently know and understand.
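The NFDRS analogy, linking a readiness level to the potential problems of the day, reduces to a lookup from a summed risk index to a broadcast rating and a pre-planned posture. The thresholds, labels, and actions below are invented for illustration; they are not taken from the actual NFDRS or from any operator's procedures:

```python
# (upper threshold, broadcast rating, pre-planned readiness action)
# Hypothetical values: every entry here is illustrative.
LEVELS = [
    (0.2, "LOW",       "routine operations"),
    (0.4, "MODERATE",  "heightened watchkeeping on flagged silos"),
    (0.6, "HIGH",      "SME review before proceeding"),
    (0.8, "VERY HIGH", "supervisor sign-off required"),
    (1.0, "EXTREME",   "suspend operation, convene barrier review"),
]

def readiness(risk_index):
    """Link the current summed risk level to a pre-planned posture."""
    for threshold, rating, action in LEVELS:
        if risk_index <= threshold:
            return rating, action
    return LEVELS[-1][1], LEVELS[-1][2]  # clamp anything above 1.0

rating, action = readiness(0.65)
print(f"{rating}: {action}")  # -> VERY HIGH: supervisor sign-off required
```

The design point mirrors the NFDRS key words: the broadcast number is useless on its own unless each level is tied in advance to a concrete readiness action, so that "VERY HIGH" means something specific to everyone who receives it.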

The solution has not changed in years, and yet the presentation of any solution is always the challenge and a learning process. More than one operator has already embraced the postulates; one did so more than two years ago. Nothing more needs to be done than presenting a system to an operator, because the government cannot "make" a company do anything well; companies balk like donkeys. The regulators should not get involved in prescriptions, because adding process and red tape makes things more dangerous: it adds to the workload indiscriminately, since each operator is different and knows best how to operate within its leases with its personnel and particular issues. All regulators need to do is internalize the risks on each operator and ensure the operator has the means to be accountable for every risk it takes. The solutions to manage complexity and its risks will develop and improve to the extent that regulators stay out of prescriptions and simply do the one thing they should: internalize risks, which, for the detail oriented, means every consequence. Insurance was invented 5,000 years ago, and insurers "encourage" operator accountability through preventative due diligence and due process, or else they go out of business. The largest organizations likely self-insure, and national companies can do practically anything they determine suits them best, yet they are accountable to their countrymen and to neighboring countries if consequences extend beyond their boundaries. Sorry to disappoint anybody who sees adventure, or utility, in spending money on academic research, government, and NGOs, yet this isn't quantum physics; this is simple reasoning and collecting the interest on principal invested in people we already work with and employ. The basic concept is easily custom-tailored to any company, and if anyone needs help doing this (my rate is fair) they can certainly contact me, because explaining this is becoming easier and easier.

The one thing in common that will do the most is responsibility and accountability, and the best way to ensure this is to hold operators accountable. Government has a mandate, handcuffs, and guns for those that refuse, yet it doesn't use the handcuffs before the accidents.

We want to know the regulator's role in process safety. We see ambivalent motivations as the reason process risks are ignored, and this is the truth. We see the motive to be safe at all costs as the proper motivation, and we see the competing motive to make a profit as the problem, and again this is true, UNLESS the motive to be safe is merged into the profit motive. Integrating these disparate motivations is the job of the government, and its only job that must be done. This is the only way to resolve ambivalent motivations, and it is integrity. The one thing every project has in common must be the only focus of government regulators: every project has risk of consequences, and every company that expects to keep the profit of the best consequences must be forced to pay the cost of the worst. Allowing even one company to declare bankruptcy and avoid paying consequences IS FAILURE TO REGULATE, because it allows companies to SOCIALIZE RISKS AS MITIGATION FOR WORST CASE SCENARIOS and incentivizes a "loophole." Government regulators have courts, laws, judges, prosecutors, law enforcement officers, guns, handcuffs, and the duty to use this power to ensure that any company doing business in any country pays for, or has the means to pay for, these worst-case-scenario consequences. They do not have the expertise to tell operators how to drill wells step by step with prescriptions, because they will never know any operator's business better than the people operating that business. This is common sense that gets distorted by not breaking it down carefully, looking at it, and thinking about it. The reason that not all of the risks were mitigated, and that such a poor accounting was done in the past of the status quo of risks in operations related to the barrier to the worst case scenario, is the fault of the government in allowing companies to present the worst-case-scenario risks as ridiculously low and portraying them in a mockingly negligent light.
This is well documented in the "Buffalo Report" and every competent study on the tragedy of Macondo, because this is the root cause. Free market forces work, and yet markets are not free if all of the costs are not appropriated; taxpayers are left to pay the bill and families to mourn the dead. Delivering petroleum involves complexity and risk, and yet we do make imperfect systems work perfectly in terms of containing the well most of the time, because these worst-case events are very low incidence yet high consequence. Companies are in business because of their ability to accept risk, yet given a "loophole," MANY WILL TAKE IT. An operator will mistake low probability for low risk and think, "well, if it blows out, we're out of business, yet it 'probably won't happen.'" The government regulator must enforce the worst case scenario and enforce the means to pay for it. An operator accepts risk and can be nearsighted enough to socialize the consequences of the very tiny chance that an event would completely bankrupt the company. The regulator's ONLY JOB is to ensure this is not allowed to happen. If this one thing is done, and the regulator stops prescribing every step and additional processes, our business would spend the money to innovate, develop, and mitigate. The government needs to do only the main thing it must do: hold people accountable to the whole of society and hold companies accountable to the whole of their consequences. It could do this in many ways, yet it didn't do it before Macondo. Clarke (1999) called the permits to drill offshore before Macondo "fantasy documents": they listed walruses, sea otters, sea lions, and seals under "sensitive biological ecosystems" that might be affected by the worst case scenario, and yet listed the risk as very small. The probability was very small, yet not the consequences, and therefore the risk was huge. The government HAD ONE JOB TO DO.
Macondo was the consequence of government regulator failure to do the one job it had to do.

The government ensured ignorance by allowing risks to be socialized, and it allowed risks to be socialized by not regulating that upper limits on possible consequences be set properly, with the means to pay the costs for them enforced. Ignorance socializes risk if the government allows it to happen. Socialization of risks means having the poor pay for the mistakes of the rich. Socialization of risks is allowing ignorance instead of mitigation; it is a loophole that puts ignorance in place of mitigation. Ignorance as a mitigation stops innovations, development, and improvements before they ever have a chance to begin. Socializing risks is a crime just like robbery, only on a bigger scale. The government exists and is given its mandate to enforce laws to prevent this. This is the one job government regulators MUST do. People with the courage, morality, and mental energy to actually do the right thing and stand up to proclaim what the government must do are, as Einstein said, "...met with violent opposition...".

There is a time for everything, even a time to point a finger, and in the discussion of what the regulator needs to do and does not need to be involved in, it has to be said that the regulator has one thing it must do. While we may be the government and the government may be us, Bassey, this still must be said clearly and with courage, not in the spirit of blame but clinically, to clarify the mechanism of change that a clear message from the regulator sends. If the regulator mandates accountability and responsibility, then accountability and responsibility will be the rule in the industry. If the regulator ignores the operators presenting the worst case in a "fantasy document," then ignorance will be perpetuated in the industry, and this is exactly the reality of the last twenty years. The logic is simply that the message was one of ignorance. We do agree that the latest tragedy was responded to correctly by both the company and the government. The responsibility shown by the oil company involved was the kind of accountability we need more of, and, to be clear, it is required of every company that wants the license to operate. The logic of ignorance, however, was set in motion by allowing ignorance to exist instead of accountability, and we must clinically point the finger to see exactly how this happened, and not be squeamish for a moment. It is necessary. When anybody stands up on his or her hind legs and says the one thing that must be done, the natural reaction of those still on all fours is to claim that this person is trying to make a name for him or herself; yet this is the reaction most prophets receive at first, even as they really are giving information to save society. We must stand upright with courage and speak with the spirit of the 5th amendment. It must be clear that there are still things that must be said before action is taken. The government must continue to ensure accountability, undiluted with unrelated prescriptions.
To be clear, do not misconstrue my comments as implicating a particular president in the government of the US. The truth is the current administration started out by announcing an intent to allow drilling off Florida! The disaster put an end to that quickly, because the current administration was not aware of how lax the MMS was and didn't understand the "fantasy" that the companies were allowed to present as the worst case scenario; yet it acted quickly to ensure that the one job the government must do in any viable country, enforcing accountability and justice, was and is still being done. We cannot be squeamish in looking at this issue and getting this one thing right. Let's not pretend this was gotten right in the past, or is a sure thing to keep getting right in the future, and let's not discount its importance or cower at the implications of how important being honest and accountable is to innovation and safety in our processes. If we do not get this right now, we will be chasing after less vital things, with no success, forever.

This is the one thing that needs to be said, and yet small operators cringe at hearing it. Think of other industries: what if we allowed, say, toxic waste operators to operate in our communities, near our children, without the means to pay for contaminating our children? Not ensuring they have insurance or the means to pay worst-case consequences means allowing them to "ignore" this risk and consequence altogether. This is allowing them to operate without accountability. This is not holding people to consequences, and this is criminal. This concept is JUSTICE 101.

This is the most common concept that no one wants to talk about, and I proclaimed it; others will try to point the finger at me and say I'm just being critical or trying to make a name for myself, because they are afraid of the implications of accountability. The truth is accountability leads to lower costs, yet people are afraid to be held accountable. People want others to stop at the stop sign, yet want the police to let us roll right through without ticketing us. It's selfish human nature at work, and we need to rise above it. We cannot let this message be diluted. Justice and accountability are a four-way stop. We must, and all others must, stop. This is for the common good. These are the basics we must get right, yet people want to skip over this foundation of justice because it's uncomfortable. Eleven people just died, and that was really uncomfortable. Wake up.

Med Seghair: Precisely! That summarizes my last comments, except that the regulator has to be accountable and responsible if there is any hope of regulating accountability and responsibility.

The problem with regulators reaching out to industry for help is precisely that they are then in a problematic relationship. This is the reason the mode of regulator dialogue has to be in certain terms, and the terms must be exactly what the regulator is managing. The reason the regulator must communicate solely in risk, and not in prescribing step-by-step procedures, is that otherwise exactly the wrong relationship with the operators is created: one in which the regulator must be subservient to the higher knowledge and understanding the operator will always have of its own properties. The regulator must be measuring exposure and vulnerability to hazards, which is precisely the status quo of risk, and only this, if it has any hope of actually regulating. Only then will the regulator avoid the relationship where it is relegated to 'ask the industrialist' for help. Great point, Med.

Vince: That is exactly it! The difference between a Just Culture mentality and the mindset you complain of is that human understanding and humility know the exact opposite is true. FRONT LINE WORKERS ARE NOT BUMBLING IDIOTS, as they are so often portrayed in the wrong culture, and yet our drilling rigs, equipment, and systems are IMPERFECT, and they only BECOME PERFECT BECAUSE OF THE HUMANS THAT ADAPT and make these systems perfect. When mistakes happen, they usually point to a new improvement needed in the imperfect system, to be added by the people working in that system; if we simply blame the people without looking situationally at the imperfect system they were working in, we do not adapt the system to improve it.
The difference is that the wrong culture fails because of the fundamental attribution error: accidents may point to failures in their beloved management systems, which they would like everyone to see as perfect, yet they indeed aren't. This is disgusting. We need to judge everyone according to situational attribution, not fundamental attribution. This is common sense in human factors engineering. Before we improve workers' interactions with imperfect and rigid systems, we must first hold operators completely accountable and responsible for them and ENSURE THE RISKS ARE PROPERLY MEASURED AND THE OPERATOR HAS THE MEANS TO PAY FOR THEM, SO THAT COMMERCIAL VIABILITY LOGS THE TRUE RISK AND THEREFORE ASSIGNS ENGINEERS TO TRULY MITIGATE IT FOR COMMERCIAL REASONS. The means can be self-insurance, outside insurance, or even escrow. No company wants to hear this, and most people will deny it needs to be done, because we go many decades even without these low-incidence, high-consequence disasters; yet if we truly want companies to assign assets to recognize, measure, and mitigate/manage risks, there is nothing better than putting the potential consequences in jeopardy for all of the commercial managers to see. No one wants to do this, even me as a business owner, because our wealth is in jeopardy; yet in instances where we jeopardize the wealth of others, as in the fisheries of the GOM, this is justice and the right thing to do, and it also makes our industry spend money on the engineers who need to mitigate these risks through engineering, science, and innovation. If not, we will be spending money on employing people who will simply keep their mouths shut, let us all believe the fantasy, and periodically throw front-line people to the sharks; that is disgusting, and not why any of us got college degrees, a heart, and a soul for doing the right things and making a true career, rather than simply feeding at the "slop" like hogs. Wayne: STAY FOCUSED.
What is beyond me is how we all acknowledge we need to manage risk better and then quickly scatter our minds into prescribing other things. Simply measure risk.

Peter: The proper role of the regulator in a system that works is completely understood, and there is no difference between previous suggestions and your statement above, as long as the details of the "safety case" are exposures and vulnerabilities summed into risk along the timeline of the project activities. Calling the projection of risk at every point in time of the drilling and completion process, done before the well is drilled in the planning and permitting phase, the "safety case" is nothing different from previous descriptions of the need for BROADCAST and FORECAST of the dynamic status quo of risk. The point is that the "safety case" must be "risk based" and not "mitigation based"; the former is like treating the disease, and the latter is like treating only symptoms. The "safety case," or pre-drill summation of risks projected on a timeline, is the part of the BROADCAST system, explained for the last two years, called FORECAST. FORECAST is compared against the actual dynamic summation of the individual components of risk, the BROADCAST, and variances between the risk FORECAST and the actual BROADCAST will suggest whether the risks are under the level suggested in pre-planning, or above it, in which case perhaps the regulator can step in. The actual comparison of the two has been called SIMULCAST. How is the regulator to know whether the risk is in or out of control if the dynamic risk status quo is not measured and broadcast to everyone, including the regulator? The added benefit is in broadcasting this back to EVERYONE, and not simply back to the regulator, as this adds situational awareness. Yet keep in mind that people's ideas of the "SAFETY CASE" can vary. If the regulator is just analyzing the operator's list of risks and mitigations, saying "this is safe," and simply agreeing, without the risk being summed from individual risk components along the timeline of the project activities, then the variance will not be detected in real time.
If it is, then this is doing exactly what a FORECAST and BROADCAST of the status quo of risk accomplishes. The subtle difference between a useful "safety case" and a worthless, even distracting, "safety case" is the absence of levels of risk as a function of time, in both the pre-drill and operational phases. It is okay to call for the safety case only if the safety case forecasts the risk along the timeline, and then risk is summed and broadcast ubiquitously during the project to everyone on the team AND the regulator, and the broadcast is SIMULCAST to compare against the forecast during the event. There is now plenty of agreement on the basic presentation of the need to broadcast the due process and due diligence (broadcast of the status quo of risk, diligently measured and summed from every component of vulnerability and exposure in the system) and to compare these to those forecasted at the beginning of the project, as in "safety case" regulating: FORECAST, BROADCAST, and SIMULCAST. Still, how diligently this is done will be determined by how strongly accountability to possible consequences is enforced, and by how the operator perceives it will be enforced. That is the role of the regulator that has been worse than misunderstood; it has been absent. With all of that said, we do owe Peter recognition for bringing the Safety Case nomenclature into the discussion of the role of the regulator, because using the model of ubiquitously broadcast summations of risk for the operational phase, and comparing this to the FORECAST of summations of risk along the projected timeline (which is the SAFETY CASE), the regulator can spot variances in the only parameter that matters in terms of the possibility of prevention: measured exposures and vulnerability, and variances from the mitigation plans developed in the design phase by the design engineers, GnG, etc.
This is exactly "Safety Case Regulation," and therefore there is no problem whatsoever in calling it this from here forward. Thanks.
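The FORECAST/BROADCAST/SIMULCAST comparison described above can be sketched in a few lines: at each step of the project timeline, compare the pre-drill forecast of summed risk against the actually summed risk, and flag variances beyond a tolerance. The risk numbers and the tolerance below are illustrative assumptions.

```python
# Minimal sketch of SIMULCAST: compare the pre-drill risk FORECAST at
# each step of the project timeline against the dynamically summed
# actual risk (BROADCAST), and flag steps where the variance exceeds
# a tolerance. Values and tolerance are illustrative assumptions.

def simulcast(forecast: list[float], broadcast: list[float],
              tolerance: float = 0.1):
    """Return (step, forecast, actual, variance) for each out-of-band step."""
    alerts = []
    for step, (f, b) in enumerate(zip(forecast, broadcast)):
        variance = b - f
        if abs(variance) > tolerance:  # off-plan in either direction
            alerts.append((step, f, b, round(variance, 3)))
    return alerts

forecast = [0.2, 0.3, 0.5, 0.4]    # risk summed along the planned timeline
broadcast = [0.2, 0.45, 0.5, 0.7]  # risk summed from live components
alerts = simulcast(forecast, broadcast)
# alerts -> [(1, 0.3, 0.45, 0.15), (3, 0.4, 0.7, 0.3)]
```

In practice the out-of-band steps would be broadcast to everyone on the team and to the regulator, not merely logged, so that situational awareness is shared.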

Part 2: Now, the part of the "safety case" that is not sufficient without added detail is diligence in assessing risk. The discussion around "safety case" regulation is that the risk is created by the operator and thus must be managed by the operator. YES! How could anyone disagree! Furthermore, the regulator can compare reality to claims made in the "safety case." Yes, yet when does the regulator audit and compare? Shouldn't this be done dynamically, all of the time? YES! How could anyone disagree? Yet the operator must have incentive to go to all of that trouble, which by definition is DILIGENCE. The part of the "safety case" that isn't discussed is this: the diligence of the assessment of the risk created by the operator is determined by whom? The regulator? The regulator, we may all begin to recognize and agree, naturally lacks all of the details that go into assessing risk for the specifics of specific projects, for instance, the degree of certainty of a pore pressure analysis through the hydrocarbon-bearing zone assessed by the operator's GnG squad. So the proficiency of the "safety case," without the added detail of "due diligence," is limited, much like the state of our safety before Macondo: the true risk of the process can be ignored if the operator thinks it can "game" the system of rating risk, and it can, and it has proved this for as long as our industry has existed. Fantasy documents? Self-assessing risk to ecosystems as minor and then realizing billions of dollars in damage? There were many more doing the same, yet only one that suffered the genuinely low-probability consequence. That point must be well understood. The point is that due diligence in the risk assessment must be done, and the question is how this can be regulated.
We do not have to go back through the last 10 comments about Due Diligence being a STRONG FUNCTION of ACCOUNTABILITY to the WORST CASE CONSEQUENCES, yet let's repeat it: DUE DILIGENCE IS A STRONG FUNCTION of ACCOUNTABILITY to the WORST CASE CONSEQUENCES! The "Safety Case" is meaningless without the "Due Diligence" that must be enforced by every operator knowing that EVERY ADVERSE CONSEQUENCE WILL BE PRIVATIZED and NOT EVEN PARTIALLY SOCIALIZED. Add to this the fact that the Macondo consequences are in the billions of dollars and perhaps might reach the hundred-billion range: is the government going to enforce this on operators unlike BP? BP did this and should be given every amount of praise for integrity in honoring its accountability; and yet before Macondo, was every operator thinking this level of accountability would be enforced? NO. If you do not understand this, then this is a missing piece of the overall lessons to learn. Does anyone know for sure that the government isn't now engaging in "LEMON SOCIALISM," "CORPORATE CRONYISM," and "RISK SOCIALISM" by allowing operators with less than $100 billion in assets to operate in situations that could possibly result in consequences as bad as those the one operator was able to pay for? We all cry bloody murder about BAIL OUTS and TOO BIG TO FAIL and this lemon socialism, rightfully so, since due diligence is a strong function of accountability to the sum of all possible bad consequences. This is the due diligence that must be balanced with the dynamic, risk-based "safety case" if we are to be any safer at all than under the safety measures that have come before. This isn't easy, yet getting only part of this right is how we've gotten it completely wrong in the past. Partly right is completely wrong. Einstein said, "The modes of thought that brought on these problems will not solve them." Due diligence AND due process are required.
Due process alone isn't going to work, and due diligence alone isn't going to work as well as both of them together; in my book due diligence is more important than due process, yet both are needed.


Diligence is directly proportional to accountability: adding accountability adds diligence. Due diligence and due process must be balanced together. Adding due process for everyone enables due diligence by everyone; without due process, even the most diligent will be unaware of the total risk of the system, which consists of all of its components. With this being clear: we must have due diligence; to have due diligence we must have due process; and before anyone has an incentive for either, they must first be accountable to the worst possible consequence. Focus on "means testing," which in its most strict form is "means in escrow" or "means in insurance," and then due diligence and due process: that is the Mighty Triumvirate. A simple, close-to-home example of means testing and insurance is making sure that a moving company moving the contents of your household has the "means" to pay for the replacement of its entirety if the trucks are stolen or burn in a fire. The difference from moving insurance is that catastrophic well containment events jeopardize hundreds of lives, which are worth more than the contents of each person's home, and thousands of families that exist because of the invaded ecosystem that is having risk added to it, in order to enrich a sole operatorship that may consist of several operators who will not share the good consequences of profit with those same families subsisting in the ecosystem they invade with risk.
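The "means testing" idea above reduces to a simple check, sketched here under illustrative assumptions: before permitting, verify that the operator's combined means (escrow, insurance, self-insured assets) cover the estimated worst case consequence cost. The figures and field names are hypothetical, not any regulator's actual test.

```python
# Minimal sketch of "means testing": permit only if the operator's
# combined means cover the full worst case consequence cost. All
# names and numbers are illustrative assumptions.

def has_means(escrow: float, insurance: float, assets: float,
              worst_case_cost: float) -> bool:
    """True only if the operator can pay the full worst case consequences."""
    return escrow + insurance + assets >= worst_case_cost

# Hypothetical operator vs. a Macondo-scale worst case, in $billions
permit_ok = has_means(escrow=2.0, insurance=10.0, assets=25.0,
                      worst_case_cost=60.0)
# permit_ok -> False: as proposed, the project should not be permitted
```

The point of the check is incentive, not bookkeeping: once the worst case cost sits visibly against the operator's means, commercial managers have a reason to fund the engineers who reduce it.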

Vince: You don't need phase-out, you need due process, and this is organizational failure, not human failure. Heinrich's Safety Triangle suffers from generality where specificity is vital, and yet it highlights the need to look at the precursors to consequences, probability, and risk, and that is vulnerability and exposure to hazards. Heinrich's triangle takes all of the incident information from the industry in every region, with all of the myriad nuances and differences, and rolls out the statistics on the number of incidents that lead to minor accidents, that lead to accidents, etc. This is not the level of diligence we are looking for, yet it does show that there is a relationship between vulnerability (V) and exposure (E) (incidents), consequences (C) (accidents), and probability (P), because 20 comments back we discussed the exact formula C = E × Hazard × V, and how this relates to Risk = P(x) × C, where P(x) = P(E) + P(V). These are the details of risk that can only be assessed and measured with diligence, along with the bigger-picture safety of well containment and its hazard, and the V and E of each individual on the overall team at the rig and of the people living and surviving on the ecosystem this rig is adding risk to. In the case of drilling and completion process safety, the Vs and Es to hazards are beyond the capacity and capability of any one individual; therefore each individual needs these vulnerabilities and exposures diligently gathered, assessed, monitored, summed, and measured, and then broadcast back to them, because each individual is able to adapt to make the system resilient if given this risk information. In fact, in many industries the individual has the legal right to this risk information, and the due process laws afford it to them. In other words, it's a complex function that needs diligence.
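The formulas quoted above can be written out directly. The input numbers below are illustrative assumptions for a single component of the system; in practice each component's risk would be computed, summed, and broadcast.

```python
# The stated relationships, sketched directly:
#   C    = E * Hazard * V       (consequence)
#   P(x) = P(E) + P(V)          (probability, as given in the comment)
#   Risk = P(x) * C
# All numeric inputs are illustrative assumptions for one component.

def consequence(exposure: float, hazard: float, vulnerability: float) -> float:
    """C = E * Hazard * V."""
    return exposure * hazard * vulnerability

def probability(p_exposure: float, p_vulnerability: float) -> float:
    """P(x) = P(E) + P(V), per the formula quoted in the discussion."""
    return p_exposure + p_vulnerability

def risk(p: float, c: float) -> float:
    """Risk = P(x) * C."""
    return p * c

c = consequence(exposure=0.8, hazard=100.0, vulnerability=0.5)  # 40.0
p = probability(p_exposure=0.05, p_vulnerability=0.05)          # 0.1
r = risk(p, c)                                                  # 4.0
```

Note that the quoted P(x) = P(E) + P(V) is the discussion's own simplification; the point being illustrated is only that both exposure and vulnerability feed the probability term.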
As you're pointing out that individual safety where you work has been using the "safety case," the well containment hazard and MAE can benefit from this as well; yet depending on the degree to which the details are diligently assessed according to vulnerabilities and exposures, this safety case will either be useful or simply distracting, adding to complacency by giving assurance where there should be alarm. The new BSEE rules enforce recognition of worst case consequences, and this is definite improvement. The well design must withstand shut-in, or cap and flow, of this worst case discharge; yet the Macondo well did withstand that, and it wasn't the problem. The problem was that the vulnerabilities and exposures to the hazard were not summed, assessed, and broadcast back to the various teams; had they been, that would have added heightened awareness and more diligent scrutiny to the tests, procedures, etc. Enforcing recognition of worst case consequences is only the beginning, and it doesn't ensure diligence in operational awareness of dynamic risk levels unless the commercial risk is in jeopardy, so that the commercial managers allocate the financial resources to make sure the operations team assigns a person or persons to gather all of the risk silos, sum them, and broadcast them back. Privatized consequences are political, not guaranteed. If the regulator doesn't study the "safety case" diligently enough to notice walruses, sea lions, and sea otters in the GOM and balk, then who is to protect all of us from the lack of diligence or competence in the government? Yet if the regulator simply "means tests" the operator's ability to pay all of the possible worst case scenario consequences, then even if the regulator cannot legislate nor enforce the "good will," "honesty," "intent," or "competence" it takes to get correct estimates of risk from the operator and proper handling of operations, it can simply transfer sufficient funds from escrow to clean up the mess, compensate the harmed, and repair the damage.
If the possible damage ever gets to the point that it cannot be repaired, isn’t that a “RED FLAG” to not permit the project?

The safety case is the best way to go, yet only if it is built on vulnerabilities; it isn't the way to go if it is built on mitigations of issues that arise. This is a well-known philosophy in sports performance psychology: the quality of decisions, focus, and situational awareness added by process orientation, versus the detriment to cognitive function of the outcome orientation associated with a focus on mitigating adverse outcomes, called "the movie effect." Also, a focus on mitigating outcomes is akin to the "cheese" of the Swiss Cheese Model, while a focus on vulnerabilities in building the safety case is akin to focusing on "the holes in the cheese." As is always the case with regulations in the US, they seem to get part of it right, yet if only part of the safety case is right then all of it is wrong. Anyone care to challenge this assertion?

An example of the subtle difference between the parameter regulators must use in "safety case" regulating and the ones they use now is illustrated by your comment, Vince. You said that now the regulator asks the operator to:

• Show how you determined what is required to complete this safely.
• Show me your procedure for xxx…
• How did you learn this?
• What do you do if it goes wrong?

This use of the "safety case" is "mitigation based," and it must be "risk based"; in further detail, it must be "vulnerability and exposure based." The first bullet point: "Show how you determined what is required to complete this safely." The definition of a mitigation is 'what is required to complete this safely,' and keeping track of mitigations is not "vulnerability and exposure, and thus risk, based." This won't work. The second bullet point: "Show me your procedure for xxx…" Again, a procedure is only a mitigation, and tracking mitigations communicates no useful dynamic parameter; only conveyance of risk-based vulnerability and exposure does. The third bullet point: "How did you learn this?" What? Are they simply trying to see if you had an accident? If the safety case were built around pre-drill diligence in identifying the vulnerabilities and exposures that comprise risk, this question would become irrelevant and distracting. We are more interested in precursors to incidents, which are precursors to accidents. The fourth bullet point: "What do you do if it goes wrong?" This simply suggests we "pile" one mitigation on top of another: an extreme, mitigation-based safety case that is not effective in the least at detecting precursor signals (vulnerabilities and exposures) to incidents. It treats the right side of the "bow tie," is reactionary, and is not helpful in improving and adding robust innovations to our processes; it is responsible for piling mitigations on top of mitigations, processes on top of processes, and red tape on top of red tape.
The victim is DUE PROCESS, because the rig workers ARE NOT NOTIFIED OF THE UNDERLYING VULNERABILITIES AND EXPOSURES that the layer upon layer of mitigations is hiding. This is trying to keep the well from blowing out by piling Swiss cheese on top of it. Keep in mind that while this "mitigation based" "safety case" may work better for personal safety issues than for process safety issues, both are much better when they are "risk based" "safety cases" that monitor vulnerability & exposure levels, rather than a general accounting of mitigations along a project activity timeline with no way to notice when vulnerabilities and exposures have overlapped at any moment in time during operations. All mitigations, in that case, could be judged "100%", yet the vulnerabilities and exposures to hazards of every component in the system can unexpectedly overlap, leading to unsafe levels even though the quality and quantity of mitigations is at "safety case" levels. An example of a "vulnerability and exposure based safety case" is the "yellow hat", which evolved because there was no mitigation better than "due process": giving all of the fellow workers notice of their exposure to this vulnerability. Each worker sees the "yellow hat" (due process) and creates a unique mitigation specific to the exact moment in time and the myriad of activities that could be happening. Prescribing multiple mitigations on top of multiple mitigations is a recipe for prescriptive cognitive overload. This example holds true in the broader cases. We need the due process afforded by vulnerability and exposure based dynamic safety case regulating, and the due diligence created by ensuring accountability: means-testing operators against the cost of all of the worst possible consequences (which well designs are now prescribed to mitigate). This protects the vulnerable and creates an automatic, dynamic mitigation afforded to each individual through heightened awareness of the hazard.
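The overlap point above can be sketched in code: even when every individual mitigation looks adequate, exposures can stack at the same moment in time. This is a hypothetical sketch; the component names, levels, and the alert threshold are illustrative assumptions, not anything specified in this thread.

```python
# Sketch: track per-component vulnerability exposures over a project
# timeline, sum them at each moment, and "broadcast" an alert whenever
# the combined level crosses a threshold. All values are illustrative.
from dataclasses import dataclass

@dataclass
class Exposure:
    component: str
    start: int      # start of exposure window (e.g. hour of operation)
    end: int        # end of exposure window (exclusive)
    level: float    # assessed vulnerability level, 0..1 (assumed scale)

def broadcast_alerts(exposures, horizon, threshold):
    """Return (time, summed level) for moments exceeding the threshold."""
    alerts = []
    for t in range(horizon):
        total = sum(e.level for e in exposures if e.start <= t < e.end)
        if total > threshold:
            alerts.append((t, round(total, 2)))
    return alerts

exposures = [
    Exposure("mud weight margin", 0, 6, 0.4),
    Exposure("cement curing", 4, 8, 0.3),
    Exposure("crew handover", 5, 7, 0.2),
]
# Each exposure is individually tolerable; hours 4-5 overlap and exceed 0.6.
print(broadcast_alerts(exposures, horizon=10, threshold=0.6))
# -> [(4, 0.7), (5, 0.9)]
```

The design choice here is the thread's own: the system does not prescribe a mitigation, it only measures and broadcasts the combined vulnerability level so that the people involved can design the mitigation for that moment.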

PeterXPeter: It's okay to disregard my comments, since they basically address prevention by measuring vulnerability, while your comments basically address noticing incidents/accidents/failures/stresses after the fact. In a proper safety program both sides of the "bow tie" are needed in the mix. One comment, however: getting this mitigation data voluntarily is encouraged, yet even if that impossibility (regulating while the incentive to "game" or not disclose numbers is the status quo) is achieved on an industry scale, it's data that can only be reacted to, not prevented; i.e. symptoms and not the cause, which is unawareness of high vulnerabilities and exposures. Sure, do the mitigation side as well as the awareness side, yet the mitigations must originate from the awareness afforded. Using the Swiss Cheese Model, it's my understanding that your focus is on what gets through the holes of the cheese, while my focus is on the locations and sizes of the holes, the interactions between the holes, and the vulnerability, exposure and risk this creates. Mitigations follow metrics of the cumulative vulnerabilities. This is again mainly with well containment in mind, which is what I assume process safety means here, not slips, trips and falls. Using unrelated indicators to suggest the level of vulnerability to well containment events is wishful thinking, as they are likely unrelated, and again mitigating rather than preventing. This isn't disparaging mitigation; it is only continuing a balanced focus on a mix of prevention and mitigation concepts.

Vince: "Reducing the exposure to that risk." should read "reducing the exposure to that hazard". There is a subtle difference between hazard and risk: a hurricane is a hazard, yet one about to hit South Carolina isn't a risk to Houston. The nature of risk is your true question, since you ask what is acceptable and unacceptable, and we answered this a while back. Ask a guy if it is okay that he will lose an arm yet be highly paid! Due process is informing him that he will be vulnerable to this hazard so he can either try another job or heighten his/her awareness and likely be safer because of it; yet unless a guy trips over a gold brick in the road, he will likely have to add risk in exchange for a reward, even if the only risk is wasting time. In terms of risk to others living in the areas around our projects: the worst possible consequence must be diligently and honestly assessed, and if this consequence exceeds the capacity or capability of the operator to mitigate financially, then the risk is too great, because further damage is spread to taxpayers in the form of socialization of risk. Also, if the worst-case consequences to individual workers are beyond their willingness to accept, then again... The key point is that the public and the individual workers have the right to the risk information; whether the operator has even thought of this information in regards to well containment, whether it is being measured diligently, and how to enforce that, is the topic we discuss here. There is no reward without risk unless you discover the La Brea tar pits on your land like Jed Clampett. We go to a spot on earth where we can drill through the natural barriers to blowout and disaster, and remove them. We add risk to every ecosystem we work in, usually where there originally was none. That is the oil business in safety terms. That is likely every business in safety terms.
We are mitigators of the very risk we create, and my duty is to point out that we must broadcast the status quo of our vulnerabilities as we ourselves change them, because the group as a whole doesn't have the risk information that each individual has, while the individual does not have the risk information of the system as a whole unless it is actively provided. The first step is to realize this adds value.

Peter: Weaknesses would be classified as a form of vulnerability that is exposed to some degree of stress, be it physical, mechanical, or chemical, over some period of time. And yes, the "holes" in the "cheese" must be located, tracked, measured, summed and broadcast in order to add the due process that gives the human preventative of heightened awareness necessary to design actions according to levels of vulnerability, awareness that is missing without these measures. Mitigations are not prescribed, yet are designed dynamically by those involved who know exactly what they are doing and now have the information they need: knowledge of the key parameter of the system at every moment in time. This is like the "yellow hat". The government doesn't tell us step by step what to do around less experienced workers that expose us to their foibles, yet it gives us the risk information we need to deliberate over the proper mitigation at that moment, considering our current activity and cognitive load. The "yellow hat" isn't the "hole in the cheese"; the "yellow hat"'s situation in the context of the current activity, in relation to each individual that is "entangled" with him/her, that is the "hole in the cheese".

This treats the worker as a "situational attribute" and not a "fundamental attribute"; otherwise the "yellow hat" is simply an invitation to mock the fellow instead of carefully observing the situations the "yellow hat" works and moves through. This same pattern of logic, intuition, due process and situational attributional focus holds for process safety and the vulnerability of the barrier in well containment, just as it does for personal safety in the "yellow hat" example.

Risk is when we know the range of possible consequences and the probabilities of hazards occurring; uncertainty is when we don't. Uncertainty is a lack of any of the components necessary to assess anything. Hazards are best defined by "stress, perturbation or uncertainty". Vulnerability is thus exposure to hazards. Risk is the probability of the occurrence of a hazard x barrier vulnerability (resilience, or the lack thereof, makes vulnerability a variable) x exposure to the hazard (as a function of the control system) / resilience and reliability of the control system (0 = failure, 1 = success). This is a very complicated differential equation. The problem is that in the geomechanical models we infer from seismic there is uncertainty that makes it impossible to assess risk accurately. Yet we can still project the worst-case consequences necessary to insure against socializing them, and because we drill through moments of uncertain risk, we must assess, measure, sum, and broadcast another parameter, vulnerability, in order to factor in uncertainty without the loss of quality that risk suffers from the lack of the probability variable during moments of uncertainty.

RISK = f(P, V, E, ReRe) = (P x V x E) / ReRe

Note: The “control system” is the control barrier, i.e. the BOP system, which consists of the human control and the BOPE; control begins with detection of an uncontrolled pressure differential (aka loss of the barrier to flow).
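The relation above can be written as a small function. This is a minimal sketch, assuming all four quantities are normalized to dimensionless values; the 0-to-1 scaling and the argument names are my assumptions, not fixed anywhere in the thread.

```python
# Sketch of RISK = (P x V x E) / ReRe, where P is the probability of the
# hazard occurring, V the barrier vulnerability, E the exposure to the
# hazard, and ReRe the resilience/reliability of the control system
# (0 = failure, 1 = success). The normalization is an assumption.

def risk(p: float, v: float, e: float, rere: float) -> float:
    """Composite risk score; ReRe approaching 0 drives the score up."""
    if not 0 < rere <= 1:
        raise ValueError("ReRe must be in (0, 1]: 0 = failure, 1 = success")
    return (p * v * e) / rere

# Example: a moderately probable hazard (0.3), a fairly vulnerable
# barrier (0.5), full exposure (1.0), a reliable control system (0.9).
print(round(risk(0.3, 0.5, 1.0, 0.9), 3))  # -> 0.167
```

Note how the division captures the author's point: the same hazard, vulnerability, and exposure yield a much larger risk score as the control system's reliability degrades toward zero.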

One more thing needs to be said, brought to light by the last several comments: assessing the risk, and thus assuring the safety, of our drilling process is hampered by its complexity and by an inadequate understanding of the components of risk. This misunderstanding makes it impossible to order the components of risks and hazards, truly assess our vulnerabilities, and measure them, and thus risk becomes “unmanageable”.

Words like:

process, safety, risk, mitigation, control, prevent, hazard, harm, reducing, tolerable, acceptable, level, implementing, effective, loss, control, safety, culture, appropriate, strategies, measures, services, reasonably practical, identification, organizational, systems, operational, environment, determine, effects, assessment, probability, occurrence, severity, analyzed, assessed, magnitude, acceptability, determined, minimize, remove, system, unacceptable, control, introduced, potential, consequences, reducing, optimum, solution, vary, depending, local, circumstances, urgency, situation, concept, account, unpredictable, anomalies, natural, disasters, human, fallibility, influence, personal, opinion, experience, taking, catastrophic, failure, necessarily, bad, thing, every, job, task, undertake, chance, unwanted, unavoidable, consequence, obviously, fall, same, category, safer, life-threatening, execute, hurt, kill, numerous, accounts, people, engage, dangerous, activities, climbing, definitely, doing, know, willing, ability, help, achieve, goal, skills, rules, risking, lives, seriously, damaging, public, property, experience, clear, mind, knowing, do, able, evaluate, all, relevant, aspects, instance, well, trained, precautions, accident, opinion, irresponsible, eliminate, futile, accept, prosper, fundamental, part, business

Leaders lead and consequences follow! There is often "public outcry" over consequences stemming from our much-needed industry and our beloved profession as petroleum engineers. Why the "public outcry"? Because people that otherwise dwell securely feel vulnerable to things "that didn't need to happen". What is the definition of "things that didn't need to happen"? It is best defined by pointing out the exact opposite: things beyond our control that were not chosen, such as "hurricanes, other acts of nature or 'force majeure', etc.". Still, there is an outcry if our city was more vulnerable to these acts of nature than we were led to believe. This segues into the word "lead". The word for "character" in many of the myriad languages of our world is also used to denote "measurement". Our leaders must lead with character and they must internalize all consequences. If the two consequences are profit and loss, then both must be internalized to the exact same extent. We know that poor consequences are a function of exposure and vulnerability to hazards, and that we add risk to the areas we think hold the consequences we seek. We might entertain a fantastic notion of only profit, yet every single project in history has both profit and loss; we only count on making the sum of the two "lean toward" profit, and both consequences must be accountable to the one initiating them, good and bad. We add risk to the areas we explore? Yes! Miles of earth barrier hold down the oil and gas reserves we hope to uncover, so technically we replace a perfectly good barrier with a new one that is "risky" compared to mother nature's. We are the hazard if we do not internalize every consequence, and the word consequence is defined as "that which follows" the lead of our executives who initiate removing the barrier to well-reservoir flow. Executives are risk creators; they start the events that follow, the consequences, both good and bad.
All people are vulnerable to the consequences of our projects, yet most of them are only exposed to the bad ones. This must never be the case; it is not just wrong, it is bad for business and can lead to longer-term legal problems, shorter-term public relations problems, and the revocation of our license to operate. Who is to protect from this, and thus protect the commercial interest of the corporation as well? The leader. Another word for "leader" is "ruler", a leader that measures. The goal-oriented version of that title is "to measure" and the prescriptive or tyrannical version is "one who makes rules". We need the leaders that measure. We do not need to hear that "it is too complex" or that it "costs too much", because that cost is borne either by those making the profit and able to pay it, or by the vulnerable who are uninvited to the good consequences, and a leader must diligently protect the company and others from this, because the reason a ruler measures is to protect from vulnerabilities, or to "manage" risks. The executive that sees this cost as only benefiting the outside interests of the innocently endangered is being shortsighted; the added diligence in measuring barrier vulnerability improves our ability to replace natural barriers with our designed and constructed ones that produce our profits and reduce our costs. The term "risk" is NEVER used to measure "good" consequences, only "bad" ones, so "risk management" is about executives executing protective "measures". Again, these are not "rules" but actual work measuring vulnerability in order to protect themselves, their organizations, innocent people, and their private property. In order to relate the terms of risk to risk itself, and then be able to define each term so we can assess it, measure it, manage it, and internalize it, we must first define the terms management, leaders, executives and rulers.
Frontline workers are a very far distance away from being rulers, yet they are often blamed for accidents that experts deem organizational. "Too big to fail" means those organizations are reaping rewards by taking on more risk than they are accountable for. Does our industry have an accountability problem like others have had?

If any leader reads this and wants more responsibility and help in managing risk, I'm available. It will not be freedom from consequences yet will add diligence to measure and manage the vulnerabilities that determine them and increase good ones. Testimonials are available.

FORECAST, BROADCAST and SIMULCAST are managing components of consequence that exist before risk is ever born within tolerances during delivery.

The word 'leader' is believed to date from the 14th-century Middle English 'leder', a quality of character. The suffix '-ship' is also derived from Middle English, meaning 'to shape'. So the word leader-ship literally means 'to shape character'.

Focusing on outcomes is “what” we are doing and want to achieve, and it ignores the process and “how” we are doing it, so detecting changes in the components of “how” we are doing isn’t possible if we are focusing on an outcome. This is change blindness 101. Neither a thousand MOCs nor SEMS process accounting can reverse the blindness to change we have if our focus is not on means and “how” we are doing things. The outcomes actually deteriorate by focusing on them. Therefore focusing on “kicks” will ensure more happen. To decrease kicks we must focus on “how” kicks happen: the precursors of “kicks” and therefore the components of the consequential kick. Those components are our vulnerability and exposure in well control. This may seem pedantic to some, and unnecessarily complex, yet they would be wrong. Our drilling fluid system is a membrane that is vulnerable to stress, and our membrane is “exposed” to earth stresses in the form of fluid and gas pressures as well. We must focus on this interaction of vulnerability and exposure and measure here. If the focus is here we notice subtle changes, and if it isn’t we are blind to changes at that boundary. This is well known as “change blindness”. Even geniuses do not notice changes in places they are not focused. This is human nature, the capacity of all brains, and the human factors we need to know about in order to avoid expecting things of the human barrier to failure that our drilling systems rely on.

HSE outcomes can be changed by a careful and diligent study of the principles of quantum mechanics. One of those principles is the Uncertainty Principle. The theory contends that the mere act of observing behavior changes it, because we are observing outcomes and not the precursors of outcomes that lie at the origins of atoms and in the DNA of their nucleus. Our instruments of observation do not allow this; yet the nucleus of our operations in the oil and gas industry we do have the instruments to observe and measure, and we do not.

Only by studying the detailed characteristic components of risk can we get to the core of the components of the precursors that can be changed by observation and the uncertainty principle of quantum mechanics. An example is in looking at various hazards and how different parameters and measures mean different things. Take water as an example. Is a cup of water a hazard? Distinctions of units of measure are critical: volume, temperature, locus, and even distinctions of locus like dynamic position, as in a hurricane, where water can reach high velocities, or in a water jet that can etch metal. One cup of water is good for us, yet if we drank 10 gallons we would die. Water in a cup is harmless, yet water on the floor can cause us to slip and fall and kill us by breaking our neck or by hitting our head on the floor. That same cup of water thrown in our face is harmless, yet if it is scalding hot it is harmful. So the components of consequence are vulnerabilities and exposures, and the components of the attributes of what we are exposed to make all of the difference. Focusing on the big picture can cause us to be blind to changes in the details. We must diligently break components down into details, assess the details that are most important to measure changes in, and focus on the nucleus of the essential element of the precursor to consequence that exists prior to the actual existence of the consequence. This circumvents the “actor-observer bias” and manages the precursors to risk by taking advantage of the Heisenberg Uncertainty Principle of quantum mechanics. The mere act of observing the precursors of consequence changes them. Measuring them is only important in order to broadcast dynamic situational awareness.

I'm not sure if you're suggesting "de-evolution", or missing my attempted point that the precursors to risk are the components of consequence, or something else? My suggestion is clearly that the very definition of leadership is to measure, diligently, the differentiated characteristics of consequence, and if this better matches the descriptions of leaders of ancient history then we need to devolve in that way. Look how precisely the stones in the pyramids scattered around the world were cut! The height of arrogance is believing those were "primitive" cultures that engineered those marvels, since scientists today cannot fathom how they did it. Risk defined by the probability of outcomes cannot be measured nor managed, and in fact the act of observing incidents and accidents creates more risk and worse outcomes. This is quantum mechanics, proven by Heisenberg in presenting his landmark Uncertainty Principle. Since risk involves probability, the act of observing the components of probability alters them, rendering them useless and hazardous to rely on. Probability is a hopelessly futile pursuit. The only way to improve outcomes is by observing the precursors of consequence and ignoring lagging statistical probabilities; not by leaning on our own understanding, but by simply focusing. This can be proven logically, intuitively and experimentally, and success has been documented using this theory in other disciplines. Identifying and shaping the characteristics of consequence is leadership. We must lead and not lag.

Proverbs 17:24 Wisdom is directly in front of an understanding man, but the eyes of a fool are at the end of the earth.

This proverb seems to confirm the uncertainty principle, change blindness, and the need to focus on the precursors of consequence. Workers game outcome-based safety programs because they do not believe in them and know they do not work. Workers know that outcome-based safety programs order Sisyphean tasks that end up resulting in worse consequences than if they had not rolled all of the incident numbers up the Hadean hill, which then rolls them over in the form of INCs or other safety-data disincentives. Every day they are asked to roll that stone up the hill once again. A cruel game with a cruel outcome, and one that is continually gamed. The game erodes confidence until there is none left, when confidence is really the most important thing of all to observe, monitor and maintain; not the outcome-based indicators.

Respectfully, yes, we disagree on the need to "find what motivates each and every single individual" on our teams, yet not on assuring they ARE motivated to locate their focus properly. My 1/7,000,000,000-diluted opinion is that we need to communicate using a clear and concise mode that matches the complexity of our specific situations, one that conveys the precursors of consequence and absolutely leaves out the noise of outcomes and motivations, except when we are vulnerable to them. This is motivating all to focus on the means with confidence in the outcome. The same principle applies to every position of leadership, down to the individual with no charges and a shovel.

Kevin Lacy and I see very closely eye to eye, and it's spooky since we've come to our conclusions independently of one another. Here are the notes I made while reading his article: http://www.drillscience.com/dps/KevinLacyArticleNotes.docx

The part that impressed me the most was that our notions of a positive reinforcement cycle that becomes culture are so similar, even though we developed them from completely different frameworks of thought, since mine came from a mindset coached to professional athletes. I'm not exactly sure how Kevin Lacy came up with his steps.

Mine is:

1. Leadership 2. Process 3. Confidence

This makes a culture and maintains it.

Leadership directs focus and locus of control onto the process and the precursors of risk, which include uncertainty, and puts up barriers to drifting into even brief lapses of outcome focus. Leadership directs focus to "how" and away from "what" the system is doing. The "what" is in the subconscious of the competent. This is vital because the low-incident nature of well containment tends to lend itself to drifting into an outcome bias that causes shortcuts to the process to seem "successful", and thus might consistently erode "confidence" in the process unless management measures confidence in the system.

Process empowers others who best know "how" at any moment in time, with dynamic assessments of the current level of vulnerability and exposure to hazards to the barrier and control systems. Focusing on "how" the system is doing with regard to vulnerability and exposure of the barrier and control is facilitated by a BROADCAST hazard level that adds situational awareness for each individual and eliminates the false-alarm effect. It is also key to the resilience of the control system, and to supervisory-level, redundancy and competency-level assignments.

Confidence is the gauge and measure of the leadership and process; it determines the effectiveness of the processes and is the performance control for process safety. Confidence in the process's handling of uncertainty is not the same as the kind of confidence in personal assessments that can lead to poor assessments of the probability of outcomes. By not measuring probability, the process is geared to more observant measures that preclude judgment and focus on vulnerabilities and exposures alone, so that the focus is constantly on "how" the system is doing and not "what" is happening at any moment in time.

Command styles that constantly ask "what" are distracting and debilitating to the “quick cognition” mindsets of frontline teams, and also dismantle the situational awareness required in complex and highly interdependent operations.

One culture is so focused on success and “what it looks like” that they truly do not understand that it looks different every single time, and that the only way to have success is to focus on “how to succeed”; afterward, the biggest part of the celebration is to look closely at the unique beauty of the success for the first time. The best celebrations of success are about "how we did it!", because the "what" is always different yet the "hows" usually relate to our focus and efforts, the true precursors of consequence. Celebrations of "what we did" always leave us feeling empty. This processing of "how" is situational, because it applies to all situations and leaves us more focused on our unique situation and how to achieve in it; it is the best way to process every outcome. We must feel the "how this happened" fully, understand it, know the lessons learned, and then get motivated for another goal, now with more expertise in how to achieve it. It is truly tragic that we fail because we either “want too much” or “want too little”. The conscious focus must be on the process, yet we must turn our wish (too little) for the best outcome into motivation (just right) and not focus on it (too much). Focus on an outcome is abnormal psychology, commonly called an “obsession”; if you’ve ever been in an enterprise so focused on costs and cutting them, with performance controls that measure time in minutes and minutes in thousands of dollars with no other controls, obsession is the best description. Leadership shapes the circumference of an organization by directing focus onto the precursors of consequence.

The greatest safety story of our generation has to be Sully Sullenberger’s Miracle on the Hudson, and yet the outcome is our focus, not the “how” related to vulnerabilities. Why? Because we celebrate the “what” and not the “how”. True celebrations are all this and that concerning “how we did it”. We put our arms around each other and talk of “hey, remember how I was facing this and did that, and how much was on the line, and how everyone was worried, and how I dug in and gave it my best effort yet!” Yet one culture celebrates the “what”. Kitty Hawk. What? Man’s first flight. What? Orville and Wilbur Wright. How? There is less emphasis there, or else we would talk of how the Wright brothers’ flight was also the first recorded incident of a “bird strike”, and we would be discussing “how” this affected the flight, “how” the brothers responded, and “how” vulnerable we are to bird strikes even today. Sully was; he studied precursors of consequence and “how” to deal with them best at UC Berkeley, and had a copy of “Just Culture” in the cockpit on the day he landed his plane on the Hudson after a “bird strike”. That is the “how” celebration, after a “how”-trained pilot had to deal with “how” to land (an ambivalent term here) on water. He didn’t know “what” it looked like, yet he had thought of “how” to do it, and did. “What” is a stifling term that actually makes us stop in our tracks, confused, while “how” is an empowering word that launches us into action. “What” is an accountability term that forces us to move our wishes into motivations. We must move our “whats” into “hows”. We must focus on “whats” as leaders and decide if this is “what” we want to do. Once the leaders, the decision makers, decide this is “what” we want to do, then the focus must be directed firmly to “how”. “What” can only be kept in the subconscious if there is confidence in “how” diligently the “what” to do was arrived at. Again, this is confidence that reinforces how the culture operates properly.

Leadership, Process, and Confidence build as long as diligent decisions of “what” to do and “how” to do it are made. “Made” implies “create”, and we do create our consequences, yet this is done prior to the decision; thus they are precursors of consequence, and that is where the focus must always be.

Summarizing: “what” to do starts the process, then is quickly subjugated to a motivation to do it and directed to the subconscious. The focus then is kept (managed) on “how” to proceed. The leadership is in “deciding”, “motivating”, “directing” and “managing” the know-how that comprises our processes. Confidence amid uncertainty in the outcome builds, yet relying on statistics isn’t confidence; that is “hedging”. It’s a distraction.

We need to see the character of a hazard, the precursor of consequence, because this describes it in terms of the situations in which the hazard exists. Hazard is always, at least in every situation I’ve been able to conceive of, in the form of an interstitial component: vulnerability to an exposure of something, anything, over time.

Ella: Yes, it is the “condition/environment” that is a function of “time”, or as I’ve referred to it repeatedly here, “the project timeline”. (“Time is what keeps all things from happening at once.” –Einstein)

Ella: “but the outcome(s) play(s) a very distinct role because there are result-led processes where the desired outcome defines the process.” Yes! In fact nearly all processes are designed with the outcome in mind; as an exercise we might look for a case where this is not so and perhaps find none. Yet the point is important that the process goes through stages of development. Take drilling. The first time anyone drilled, they were completely focused on the outcome they wanted, then asked “how”, and came up with the “cable tool” concept. Each time that cable tool went up and down, were they in a “mantra” saying “oil, oil, oil, oil, oil”? Perhaps in the subconscious, yet this was the motivator. This has been my assertion all along, yet the focus is on “how”. In terms of well control, though, and let’s not lose sight of this topic in an SPE group, the original focus was simply on getting a hole deeper and not so much on containment. In fact, at the time of Spindletop the common method to contain and produce oil was to dig a pit and a containment berm around the drill site and simply let the well blow out. I’m looking at patterns in coming to conclusions on where the focus needs to be, and at the pattern that creates a culture that becomes a reinforcing feedback loop of improving process safety: Leadership (directing a “conscious” focus on the precursors of consequence), Process (the “how” we do this, which drifts in and out of the subconscious level as experience and mastery increase), and Confidence (basically the very act of belief in the process that creates the “ability” to focus on the precursors of consequence and not the outcome). The only process I know of that benefits from a fixed outcome focus is drawing a straight line, yet there is the distinction that in drawing a straight line we do not focus on the outcome; we simply focus on the point we are going to, and so it is a precursor still.

The consequence of focusing on the precursors is that the line is straighter. Focusing on “where we are going” is different from the outcome we are discussing here of “cutting costs” and the MBA-style performance controls that only look at a financial bottom line.

“separating the operations into sub units” regards the issue of contractor responsibility, which isn’t central to the level of debate here with the people commenting at this time. Clearly, with heels dug in over whether risk includes probability, and with no agreement that uncertainty means we are “uncertain” about the range of possibility necessary to define risk, we have a show stopper; moving on to contractor responsibility is like studying calculus while arguing over multiplication. Besides, the BSEE in the US ensured contractor responsibility two years ago; is anyone involved in this discussion not aware of that fact?

In regards to concepts that are “replicable across other processes/operations”: that is exactly the methodology of finding components of consequence and risk, what I call precursors of consequence, that apply across other processes and operations.

We live in a world with oceans of human factors and waves of information, where some individuals are capsized by “rogue waves”, then labeled “drunken sailors” and summarily dismissed. We now have documented proof of “rogue waves”, although scientists and formulas disproved them before, and the sailors who bore witness to them were slandered and dismissed. Similarly, my opinion is that a “rogue wave” of human factors hit the company men at Macondo. Similar to the theories, now, of the formation of rogue waves in the open ocean: is it possible that oceans of communicated and observed information, compressed into a decreasing time-space continuum by cost-focused managers in highly complex and interdependent operations, actually create the “rogue wave” of cognitive errors that are then blamed on the front line workers instead of on the precursors of the consequential cognitive errors?

We need to see the character of hazard, the precursors of consequence that exist in uncertainty, not consequentially derived probability, because this describes the situations in which hazard is born. Hazard is only a derivative of an impending “epiclipse” of the envelopes of vulnerability and exposure, and our margins are the intervening spaces; interstices. We operate in the intervening interstitial precursors of hazard. Any specific example proves this. Casing has vulnerability to stress, the earth stresses expose the casing, and our drilling fluid hydrostatics or hydrodynamics (operation dependent) are the interstice. This pattern repeats and lends itself to the environment SPE works in.

Ella: “exposure becomes a trigger only when the vulnerability is not addressed”, and furthermore, vulnerability isn’t a hazard unless the exposure exists. The focus is now on the “triggers” that are precursors of consequence and not the consequence itself. Yes, it is the “condition/environment” that is a function of “time”, or “the project timeline”. (“Time is what keeps all things from happening at once” –Einstein) We decide to do things based on the outcome we want: to deliver a needed commodity to society and make a profit in doing so. Profit is the outcome, not the focus; the goal is to drill the well (outcome: at a profit). The point is to make a profit, yet we don’t focus on it during the process. If we focus on how we are achieving it (constructing the well), the outcome (profit) will be good. Einstein’s definition of time is exactly the point. We want to focus on how we drill the well without the pressure of people who have financial reasons for it to “happen all at once”. Alacrity, yes; constant pressure in the space-time continuum, no. Looking at patterns in coming to conclusions on where the focus needs to be, it is the pattern that creates a culture that becomes a reinforcing feedback loop of improving process safety.
Leadership (directing a “conscious” focus on precursors of consequence), Process (the “how” we do this that drifts in and out of the subconscious level as experience and mastery increase), and Confidence (basically the very act of belief in the process that creates the “ability” to focus on the precursors of consequence and not the outcomes we want; profit). So, the focus issue does benefit from carefully distinguishing the difference between objective and adjectives. The objective is to construct a well and produce it, and the adjective to avoid focusing on is the time/cost element, like “extremely profitable” well, or “extremely costly” well, or “on budget” well, or “grossly over budget” well. It’s the obsession with outcome, in terms of “time” that is inextricably linked to “money”, that is suggesting we shove a “closet full” of “project tasks” into a “suitcase” of “time”. This, simply and clearly, creates a hazard, by exposure to our vulnerability, to cut corners and ignore the “how” that others may not see the need for and that defenders of the best practice are not there on the frontline to defend.

“separating the operations into sub units” was meant to express that while accountability is central to the responsibility that motivates the diligence necessary in complex, dependent operations, the issue of contractor responsibility was addressed by the BSEE in the US on August 15, 2012: http://www.bsee.gov/uploadedFiles/Issuance%20of%20an%20Incident%20of%20Non%20Compliance%20to%20Contractors.pdf

In regards to concepts that are “replicable across other processes/operations”: that is exactly the methodology of discovering precursors of hazard.

As the envelopes of vulnerability and exposure intersect, hazard develops and resides within the intersection if they are two dimensional, or the intervolume if they are three dimensional or more. This intersection, in well containment, must be controlled by BOPE if it occurs in the hydrodynamic mode of the drilling fluid system, or by hydrodynamics if it occurs in hydrostatic mode.
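The load-versus-resistance picture above can be sketched in a few lines of code. This is only an illustrative toy, not anything from the discussion: the function names, the psi units, and the 500 psi alarm margin are all my own assumptions. The point it illustrates is that the "interstice" is the margin between exposure and vulnerability, and the precursor of consequence is that margin shrinking, flagged well before the envelopes actually intersect.

```python
# Toy illustration only: names, units, and thresholds are assumptions.

def interstice(load_psi: float, resistance_psi: float) -> float:
    """Margin between exposure (load) and vulnerability (resistance limit).
    A positive value is the intervening space; at zero the envelopes meet."""
    return resistance_psi - load_psi

def hazard_emerging(load_psi: float, resistance_psi: float,
                    alarm_margin_psi: float = 500.0) -> str:
    """Classify the envelope pair: the leading indicator is the shrinking
    margin, flagged before the "epiclipse" (intersection) actually happens."""
    margin = interstice(load_psi, resistance_psi)
    if margin <= 0:
        return "HAZARD"      # envelopes have intersected
    if margin < alarm_margin_psi:
        return "PRECURSOR"   # interstice shrinking: the leading indicator
    return "NOMINAL"

# Casing example from the text: earth stress loads the casing while the
# drilling-fluid hydrostatic head keeps the interstice open.
print(hazard_emerging(load_psi=9600.0, resistance_psi=10000.0))  # PRECURSOR
```

The design choice worth noting is the middle state: a risk-and-probability view only distinguishes "okay" from "failed", whereas the precursor view makes the narrowing margin itself an alarm condition.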

Now, back to Ella's attempt at summarizing things: let's be clear on the fact that the intensity of the focus, and more importantly the location of the focus, is the key. Also, calling this local focus the precursors of consequence is only the tip of the iceberg, because the general categories within both exposure and vulnerability also have state properties. The tip of the iceberg extends beyond the general categories and involves both the tangibles of our engineered well bore and the intangibles of the human factors of how our teams operate, interact (internally and externally in the system) and make decisions. We also need the big picture of the summations of precursors of consequence so we may focus locally on the precursors of our individual task within the framework of the system's status. The complexity and interdependence can be seen in "how" details interact, as alluded to by Captain Haji's examples. To illustrate this more clearly, here are just general categories of vulnerabilities that apply to human factors and casing design alike (tailoring required), one side of the two-sided coin of Precursors of Consequence.

Part 1: Vulnerability
• Sensitivity: degree to which a system is affected by or responsive to internal and external stimuli (note that sensitivity includes responsiveness to both problematic stimuli and beneficial stimuli).
• Susceptibility: degree to which a system is open, liable, or sensitive to internal and external stimuli (similar to sensitivity, with some connotations toward damage).
• Vulnerability: degree to which a system is susceptible to injury, damage, or harm (one part, the problematic or detrimental part, of sensitivity).
• Impact Potential: degree to which a system is sensitive or susceptible to internal and external stimuli (essentially synonymous with sensitivity).
• Impulse Potential: degree to which a system is sensitive or susceptible to internal and external stimuli over a period of time.
• Looming Stress: trending lagging indicators add stress to a system tracking those indicators. The panic in human factors distracts attention from noticing the precursory details. Pressure build-up tests and indicators of yield points are a tangible example.
• Deflating Stress: trending lagging indicators subtract stress from a system tracking those indicators. The complacency in human factors distracts attention from noticing the precursory details. Negative pressure tests are an example of tangible deflating stressors.
• Stability: degree to which a system is not easily moved or modified.
• Robustness: strength; degree to which a system is not given to influence.
• Resilience: degree to which a system rebounds, recoups, or recovers from a stimulus.
• Resistance: degree to which a system opposes or prevents an effect of a stimulus.
• Flexibility: degree to which a system is pliable or compliant (similar to adaptability, but more absolute than relative).
• Coping Ability: degree to which a system can successfully grapple with a stimulus (similar to adaptability, but includes more than adaptive means of "grappling").
• Responsiveness: degree to which a system reacts to stimuli (broader than coping ability and adaptability, because responses need not be "successful").
• Adaptive Capacity: the potential or capability of a system to adapt to (to alter to better suit) internal and external stimuli or their effects or impacts.
• Adaptability: the ability, competency, or capacity of a system to adapt to (to alter to better suit) internal and external stimuli (essentially synonymous with adaptive capacity).

The point is that the focus is everything, and once the focus is correct the discipline will advance in the correct realm, and that is the realm of leading indicators = precursors of consequence.
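To make the "tailoring required" point concrete, the category list above could be carried as a simple scored profile, so the same template applies to a casing string and a night crew alike. This is a rough sketch; the 0-to-1 scoring scale, the 0.7 threshold, and every name here are my own illustration, not anything prescribed in the discussion:

```python
from dataclasses import dataclass, field

# The general vulnerability categories listed above, as machine-checkable keys.
CATEGORIES = {
    "sensitivity", "susceptibility", "vulnerability", "impact_potential",
    "impulse_potential", "looming_stress", "deflating_stress", "stability",
    "robustness", "resilience", "resistance", "flexibility",
    "coping_ability", "responsiveness", "adaptive_capacity", "adaptability",
}

@dataclass
class VulnerabilityProfile:
    """One scored profile per subject, human or hardware (tailoring required)."""
    subject: str                                # e.g. "production casing" or "night crew"
    scores: dict = field(default_factory=dict)  # category -> 0..1 severity rating

    def rate(self, category: str, value: float) -> None:
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.scores[category] = value

    def leading_flags(self, threshold: float = 0.7) -> list:
        """Categories rated at or above threshold: the local precursors to watch."""
        return sorted(c for c, v in self.scores.items() if v >= threshold)

profile = VulnerabilityProfile("production casing")
profile.rate("looming_stress", 0.8)   # e.g. pressure build-up trending upward
profile.rate("resilience", 0.3)
print(profile.leading_flags())  # ['looming_stress']
```

The same profile filled in for a crew would rate intangibles (say, susceptibility to fatigue) on the identical template, which is the "one side of a two-sided coin" idea expressed as data.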

We must increase synergies and eliminate antagonists in our systems. External components of the system's internal interactions must be confronted and assessed individually and appropriately: precursorily assessed internally, gathered neurally, precursorily assessed synergistically, and cumulatively expressed, to impress situation-dependent component controls, eliminate antagonistic detection interactions, and add synergy, integrity, unity and situational awareness. Antagonistic interactions add noise that detracts from signal detection.

Yes, and reinforced with confidence! We agree to re-engineer the metrics, and yet the subtlety is that leadership must direct the focus. With the proper focus the metrics will be appropriately and uniquely developed. With a focus on the end we always wind up chasing our tails, yet with a focus on precursors we effect better outcomes through innovative and unconventional means, because we are operating within "uncertainty" and not "risk". Outcomes are motivational, not focusing. We get the "how" and "what" mixed up, and doom is on the horizon if we lock focus there; panic further worsens already doomed situations. "What" motivates! That isn't a question, that is the truth. "How" focuses! Order is important. Decide "what" we want (motivation); that is the "heart" of our operation that effort springs from. Focus on "how" to do it (diligence); that is the direction of effort springing from the motivation, the "mind" of our operation. The efforts constitute a process that, once mastered, moves the "how" (savvy) to the subconscious, along with the motivation, and our consciousness focuses on the precursors of hazards to the process (preemption); that energizes our "decision space" where assessments and determinations of "right and wrong" and "good and bad" are made. Our performance control is the "confidence" in the system, which must be monitored and maintained by redirecting focus, "tweaking" details of the process, and comparing anticipated outcomes, based on confidence in the process, with actual outcomes, which either reinforces confidence in the process or initiates "tweaks". This constitutes a positively reinforcing process with "confidence" as the performance metric. We don't focus on "known" hazards derived from "consequences", but on the "unknown" hazards that emerge from the "precursors" of hazards not yet seen. We only notice change within the scope of our focus, so if we focus on the known, we do not notice the emerging "unknown" hazard.
Outside of this scope, in the focus on outcomes, is the realm of deception, delusions, cognitive error and magic, which uses sleight of hand and change blindness to ensure the observer does not detect changing conditions. Inside the scope of precursors is where the magician operates, and this is where emerging hazards are born and the place we must focus to lead our projects away from, prevent, and detect emerging hazards. Seems like common sense, and yet it really isn't commonly delineated, and we rarely sense this is missing. By the way, this is a model known as the "Tactical Triangle", used in high performance coaching circles. It directs focus away from outcome to vulnerability and exposure, which energizes action that prevents an epiclipse of hazard, and it self-assesses, tweaks, and reinforces, in terms of the ultimate value of "confidence" in the process.

Rex: Yes! Thanks for directing me to Frank Wright; it was a pleasant surprise to see how strongly his definition of risk corresponded with my own. We should never fear new opinions, since they will either alter our own or refine them, and that is the "process" we should be confident in.

Ella: I completely understand the counter-intuition of directing our focus to the precursors of consequence. If it were ubiquitously understood and universally accepted, why would I feel it necessary to continually "harp" on this? I'm well aware that 98% of the rest of us are focused on reactive, mitigation-based focuses, and in fact 98% of the tactical triangle I'm espousing may itself be mitigation based, yet the most important part, and the part that I get asked about by the best safety experts, is "what are the leading indicators", and this is my focus. My explanation of the tactical triangle is that the base is founded on the process of mitigation of foreseen hazards, and the very top (2%?) is the focus on vulnerability and exposures where unforeseen hazards emerge. The logic that thinks it's impossible to focus on hazards emerging from the epiclipse of hazard is the anchoring bias, focusing illusion, and focalism on outcomes as the first piece of information, which is still leading some of us astray, apparently. "The difference between knowledge and lack thereof"? My coach a long time ago taught me the very definition of a closed and open mind: "We cannot learn things we think we already know". This is, in essence, the concise antivenom of focalism. The top 2% is where uncertainty is being forced into an equation of risk that discounts consequences, all in the name of "knowing"? This is an absurd delusion, since innovation lies in the domain of uncertainty, and it is the height of error to push uncertainty into nicely presentable probability simply in order to "look tidy"; nor does it work. Once again, if our first piece of information is based on past hazards, our search for new ones is anchored to the old ones; that is the base of the triangle, the mitigation-based part, and the search for new, evolving hazards must always be at the top of our focus.

Ron: What fence? That is a big part of the ethics problem: we do not build fences around the situations we get ourselves into and the focus we drift out of. We want to make it too simple and external, and thus inappropriate.

Dave: These discussions cannot be any simpler nor more universal in their applicability, and it is an internal locus of control. That does not mean it IS simple, because it isn't; it means it cannot be simplified further. If it is esoteric, then it is. The "tactical triangle" model of a positive feedback loop builds on the internal locus of control concept, and uses that simple focus and the concept of confidence in the system as a control. This creates a culture, and it also puts a professional athlete in "the zone". The fence is confidence. Kevin Lacy just wrote an article in which it is clear we both simply see the patterns emerge and the concepts and structures that underlie competent focus:
http://www.DrillScience.com/DPS/KevinLacey-TheRoadToHighReliability.pdf
http://www.DrillScience.com/DPS/KevinLacyArticleNotes.docx
No one can control any other's logic nor moral compass, nor is there an "easy button" where we can simply go to people's driving records and magically identify and filter bad actors that we can "blame" for systemic failures, or "eliminate" them and all risk subsequently. If there were a test to see who has sold their soul at the door, that might have more bearing on the "attitude problem" that distracts focus and corrodes confidence in true leadership. There are actually more people connecting the dots of the logic and concept, agreeing with the simple focus methodology and the "tactical triangle" concept of process safety culture, and connecting with me to develop this. In my opinion a process requires "faith" in the system in order to maintain the correct locus of control, and yet this doesn't have to be a morality test or anything religious; it can be tested and measured in no uncertain terms.
I'm reading a paper from the UK government about creating leading indicators, and the theme is very closely adhered to there as well, except for perhaps less recognition of the simplicity of the underlying details of the general topic of leading by affecting conclusions and tying all of this together into a self-reinforcing culture. "Epiclipse" of vulnerability and exposure (my invented anglicized Greek) may confuse you, so let me refer you to the terms "doghouse plot" and "flight envelope" that precision (>Mach speed) jet pilots use to describe the exact concept of the epiclipse of hazard, with the "coffin corner" at its farthest extreme. Go here for federal regulations pertaining to "flight envelopes":
http://www.ecfr.gov/cgi-bin/text-idx?type=simple;c=ecfr;cc=ecfr;sid=a8f38006e777ba46ba8000f7c2fe6641;region=DIV1;q1=23.335;rgn=div8;view=text;idno=14;node=14%3A1.0.1.3.10.3.70.8
Consider that the flight envelope of an F15 does not resemble a basic Cessna's, nor does the safe operating epiclipse of a Mercedes AMG resemble that of a Honda Fit, nor does the well containment epiclipse of a land operation have everything in common with a subsea deepwater operation. Determining accountability and internal locus of control from the lagging indicators of people's driving records would be business as usual, and worse than useless; it would be distracting. BP has always had an extremely vigorous driver safety program, and no one dared drive over 5 mph in a parking structure of theirs, nor take a "safety moment" lightly, for this would be on their permanent record and used as an indicator of their respect, or lack thereof, for risk. This doesn't translate to the leadership quality of internal locus of control, focus on precursors, and respect for the uniqueness of the epiclipse of hazard, if you will, or the "flight envelope", to use an already widespread term for the exact same concept. We shouldn't dismiss the subject of this discussion without completely understanding it.
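For readers unfamiliar with the doghouse plot, here is a toy numeric sketch of the idea; both boundary curves and every constant are invented for illustration, not real aircraft data. The stall boundary rises with altitude, the buffet/Mach boundary falls, and the "coffin corner" is the altitude at which the interstice between them closes to zero.

```python
# Invented, linearized boundaries purely to show the shape of the concept.

def stall_floor(altitude_ft: float) -> float:
    """Minimum safe airspeed (kt): climbs with altitude (hypothetical curve)."""
    return 180.0 + 0.004 * altitude_ft

def buffet_ceiling(altitude_ft: float) -> float:
    """Maximum safe airspeed (kt): falls with altitude (hypothetical curve)."""
    return 520.0 - 0.004 * altitude_ft

def inside_envelope(airspeed_kt: float, altitude_ft: float) -> bool:
    """True while the aircraft state sits between the two boundaries."""
    return stall_floor(altitude_ft) < airspeed_kt < buffet_ceiling(altitude_ft)

def envelope_width(altitude_ft: float) -> float:
    """Remaining airspeed margin; the coffin corner is where this reaches zero."""
    return max(0.0, buffet_ceiling(altitude_ft) - stall_floor(altitude_ft))

# At low altitude the envelope is wide; approaching the corner it vanishes.
print(envelope_width(10_000))  # 260.0 kt of room
print(envelope_width(42_500))  # 0.0: the coffin corner
```

The analogy to well containment is direct: swap airspeed and altitude for load and resistance terms, and "coffin corner" becomes the operating state where exposure and vulnerability leave no interstice at all.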

Ella: Of course you're right that "knowledge" has to be there. No one has suggested that we send people to the rig who are not expert on the specifics of the components that comprise the whole operation; it is simply "how things are done" that we discuss now. My thoughts on process safety, which add to already existing measures, form a structure that guides actions, thoughts, communications and, most importantly, focus. First of all, it's not a "risk assessment" that is new to any of this discussion, and what's new isn't a suggestion that any of the normal mitigations of known hazards be discontinued. It's that at the top of a triangle of process safety tools is the need to direct an internal locus of control and communicate in precursors of consequence and NOT in terms of risk. At the bottom of the process safety triangle, fine, mitigations are mitigations; yet at the top of the focus we must be innovative, focus on detecting hazards emerging from within uncertainty, within the operating envelope, and communicate in strict abbreviated terms of only the most useful precursory information. Like a "flight envelope" for >Mach speed aircraft. The operational envelopes consist of vulnerability and exposure terms, not statistics of incidents of consequential episodes in past history. The designers of the plane know this stuff. No historian needed in the cockpit. In our case the engineers have this for the tangible portion of the well bore that we operate within. It is, and always has been, an operational envelope, and yet we never communicated in those terms before Macondo. We agree tracking that amount of data and communicating in those terms would be way too much data and impossibly incoherent, yet pilots find communicating in terms of the "flight envelope" and "doghouse plot" a breeze. It simply is the correct way to communicate in complex and interdependent systems operating in potentially hazardous situations.
BROADCASTING summations of individual operational envelope tolerances (precursors of consequence) is in fact due process, E PLURIBUS UNUM: maintaining individual internal loci of control by internalizing external factors, using abbreviated and densely vital precursors of consequence. Communicating in abbreviated terms that are dense with only vital information is the key to avoiding diluted communications of huge volumes of useless information, and even harmful confirmations of wishful thinking, that result from not putting bounds and strict rules on what is communicated. The US Navy nuclear aircraft carrier uses "comm brevity", and my suggestion is that we adopt stricter forms of communicating that convey vital information quickly and efficiently, and that actually internalize external operational envelope information, giving each component of the whole project situational awareness and adjustments that improve functions within each component. The human factor components of the barrier and control systems have their own individual and group operational envelopes. Just as the tangible loads and strengths of the well bore comprise the operational envelope of its exterior components, the human factor has its own precursors of consequence, and they would consist of cognitive loads and individual cognitive strengths and weaknesses. Since the external factors are internalized, the human operational envelope can be maintained within limits. Communicating in abbreviated and internalized terms across the external silos of complex and interdependent operational envelopes is key. It's the actions, thoughts, communication and focus that are important to get right, and the details of each unique individual operational envelope must be developed and "tailored" appropriately. This concept is transferable, scalable, and more importantly vital and doable. Thanks for persevering, Ella, and perhaps my presentation of these ideas is improving?
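A toy sketch of what "comm brevity" over operational envelope margins might look like. The component names, the 0-to-1 margin scale, and the 25% watch threshold are all my own assumptions: the point is that each component broadcasts one dense line containing only the margins worth attention, rather than a raw data dump.

```python
# Illustrative only: names, codes, and thresholds are invented.

def brevity_report(component: str, margins: dict, watch_below: float = 0.25) -> str:
    """Compress a component's envelope margins (0 = at the boundary,
    1 = full margin) into one dense line: only items worth attention survive."""
    flagged = {k: v for k, v in margins.items() if v <= watch_below}
    if not flagged:
        return f"{component}: NOMINAL"
    # Tightest margin first, reported as a percentage of remaining room.
    items = ", ".join(f"{k} {v:.0%}"
                      for k, v in sorted(flagged.items(), key=lambda kv: kv[1]))
    return f"{component}: WATCH {items}"

# Team-wide situational awareness = the concatenation of one-liners,
# covering tangible (well bore) and intangible (human) envelopes alike.
print(brevity_report("ANNULUS", {"pressure": 0.12, "temp": 0.80}))
print(brevity_report("CREW", {"fatigue": 0.40, "workload": 0.55}))
```

Anything above the watch threshold is deliberately silenced: that is the bounds-and-strict-rules idea, trading completeness for signal density.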

Corrupt Culture:

• Operating outside of the “safe operating envelope”.
• Contempt towards circumspection of “how things are done”.
• Closed minded obstinance against examining “how things are done”.
• Corruption hides “how things are done”, deceptively hiring "lackeys" and firing "whistleblowers".
• Rationalizing against any investigation and becoming "hidebound" in the corruption.

Strong Culture:

(1) Operating within the “safe operating envelope”. (Micah Endsley, Air Force pilot)
(2) Encouragement to learn, based on open minded acceptance that things can be improved.
(3) Open minded efforts at examining “how things are done” in learning situations. (Carol Dweck)
(4) Adoption of measures to discover and/or expose operations outside of the “safe operating envelope”, lead "actions", and share lessons learned.
(5) Straight, honest talk and focus on precursors of consequence in communicating situational awareness and internalizing its externalities, and in assessing consequences to redraw the boundaries of the “safe operating envelopes”.

Ella: First, let me disagree with "comes down to people or their willingness to follow the prescribed processes", because this doesn't seem correct. People seem infinitely smarter than the "prescribed processes" and "game" the bad ones, so we need better ones; people are not the problem, but the solution. Process safety needs to be focused on the precursors of processes that create strong and positively reinforced processes.

My statement was in "context", and the complete statement was: "We are applied scientists as engineers and as such we should be able to discern how our structures will withstand all of the operational loads from installation in construction through production after completion. We might have a lot of 'history based' engineering, aka 'cut and paste' engineering going on and this is a problem in the most complex and interdependent operations."

The point in saying "should" and not "are" is, as many people have mentioned before now, that "cut n paste", or "history based", engineering is a problem. History based engineering reuses a successful well design from another location, and the problem is that the reason for this might be to avoid the diligence necessary to carefully develop the operational envelope of exposures and vulnerabilities. While the "cut n paste" program may very well work in most cases, the common language we need to focus on and internalize is never even acknowledged. This is where a language of "will probably be okay" and "who cares" about the details gets dangerously introduced into the vernacular of complex and interdependent operations. We must operate on that precursory level of "knowing", not on the risk-and-probability level of "it will probably be okay since it worked for someone else" and "who cares" about the "uniqueness" of the details and the precursors of consequence.
The better vernacular is in terms of the "operational envelope", and thus the precise location our focus must be in order to detect emerging hazards, because hazard emerges from the epiclipse of the operating envelope of load (exposure) versus resistance (vulnerability). We must utilize exposure and vulnerability based engineering, not only as a "nicety" of formal protocol, but because it creates the very vernacular that our human based operations must use to communicate internally.
