“Differences challenge assumptions.” Anne Wilson Schaef
There is a common misunderstanding about the nature of Process Hazard Analyses (PHAs): that they are objective studies.
By objective, we mean that different teams, working at different times and at different places, but looking at the same process, will get the same results. There is a perception that PHAs are like arithmetic: 4 + 7 = 11, no matter who does the sum, or where or when they do it.
In fact, PHAs are not objective. Their virtue is not in their objectivity, but in the discipline they impose on a review team and in the resulting documentation of hazards produced during the PHA.
There are several reasons that PHA teams may get different results for nominally identical processes. They can be divided into:
- Structural differences in methodology
- Actual differences in “identical” systems
- Differences in training, experience, and preferences of team members
Structural Differences in Methodology
Before a team even convenes, there may be differences in how their organization directs them to consider hazards. These differences are structural and do not depend on the team. They can include the PHA methodology used, how it is used, as well as the organizational standards directing the values used in analysis.
Methodology: The Occupational Safety and Health Administration acknowledges six methodologies as potentially appropriate for the PHA of a particular process: What-If, Checklist, What-If/Checklist, Hazard and Operability Review (HazOp), Failure Mode and Effects Analysis, and Fault Tree Analysis. Each has different strengths and weaknesses, so they are likely to produce different results. The first question to answer when trying to explain differences in PHA results is whether the same methodologies were used.
But let’s say that two PHAs both use the HazOp methodology. Several schools of thought on how best to conduct a HazOp have developed since the methodology was originally developed by engineers from ICI in 1963 and introduced as a class in 1974. As a result, even PHAs that all use a methodology called “HazOp” may be different.
Nodes: The key feature of HazOps is their basis on segments of the process called “nodes,” and one of the differences between HazOps can be how they divide the process into nodes. Some HazOps consider very small segments of process equipment as nodes; say, from one connection to another. Other HazOps consider entire units as nodes. The difference in granularity can result in differences in the hazards a team identifies and how the team analyzes those hazards.
Most HazOps use nodes that fall somewhere between pipe sections on the one hand and whole units on the other. Judgment on the part of the facilitator goes into dividing the process into nodes. Because judgment is required, the division of a process into nodes, even when using the same rules for dividing the process into nodes, can result in differences from one facilitator to another.
Process deviations: When a HazOp team considers a node, they do it in terms of a defined list of process deviations. Process deviations are a combination of parameters, e.g. temperature, pressure, flow, and composition, and guidewords, e.g. too high, too low, reverse, and other than. One of the differences between HazOps can be the list of process deviations the facilitator uses. There is a core list of process deviations upon which all facilitators agree, but most facilitators have an additional list of process deviations they believe are also important and worth the time it takes the team to consider them.
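The combination described above can be sketched in a few lines. This is a minimal illustration, not a standard or complete set: the parameters and guidewords shown are examples, and a real facilitator’s list would differ.

```python
# Illustrative sketch: a HazOp deviation list is the cross product of
# process parameters and guidewords. These lists are examples only.
parameters = ["flow", "pressure", "temperature", "composition"]
guidewords = ["too high", "too low", "reverse", "other than"]

deviations = [f"{p} {g}" for p in parameters for g in guidewords]

# Not every combination is physically meaningful (e.g. "temperature
# reverse"), so in practice the facilitator prunes the raw list.
print(len(deviations))  # 16 combinations before pruning
```

The cross product explains why facilitator preferences matter: adding even one parameter or guideword to the core list multiplies the number of deviations the team must sit through.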
Risk tolerance criteria: Another feature of HazOps is that they include a qualitative assessment of risk, which the team then compares against the organization’s Risk Tolerance Criteria (RTC). RTC establish the frequency at which events with a certain impact severity are tolerable. It is not the severity of the event—the fire, the explosion, the toxic release—that teams consider, but the severity of the impact on a variety of vectors: personnel safety, community safety, environment, assets, or some combination of the four.
RTC usually rely on categories rather than specific values, and the categories are typically separated by orders of magnitude. The more severe the impact, the lower the frequency that the team can consider tolerable.
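The relationship between severity and tolerable frequency can be made concrete with a small sketch. The category names and frequency values below are hypothetical; as the next paragraph notes, each organization defines its own.

```python
# Hypothetical Risk Tolerance Criteria: severity categories mapped to
# the maximum tolerable event frequency (events per year). Note the
# order-of-magnitude steps between categories.
rtc = {
    "minor":        1e-2,  # tolerable up to about once per 100 years
    "serious":      1e-3,
    "severe":       1e-4,
    "catastrophic": 1e-5,
}

def risk_tolerable(severity: str, event_frequency: float) -> bool:
    """Compare an assessed event frequency against the RTC."""
    return event_frequency <= rtc[severity]

print(risk_tolerable("severe", 1e-5))  # True: rare enough to tolerate
print(risk_tolerable("severe", 1e-3))  # False: risk reduction needed
```

Two organizations using different category definitions or different tolerable frequencies will reach different conclusions about the same hazard, which is exactly the structural difference the text describes.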
In the United States, each organization is responsible for establishing its own RTC. There can be differences based on which vectors the organization has determined it must consider, on how the categories of impacts for those vectors are defined, and on the tolerable frequencies assigned to each category. As a result, there are many opportunities for RTC to be the source of structural differences between PHAs.
Initiating cause frequency: As a HazOp team considers each process deviation of a node, its first task is to determine potential causes for the deviation and the initiating frequency at which those causes occur. An organization’s HazOp procedures often spell out the standard frequency to use for various types of causes. These standards are usually based on published values, such as those in Guidelines for Initiating Events and Independent Protection Layers in Layer of Protection Analysis, published by the Center for Chemical Process Safety. That said, organizations sometimes substitute other values or identify additional causes to include in their corporate HazOp procedures. Other organizations don’t address initiating cause frequency in their corporate HazOp procedures at all, leaving it to the HazOp team to decide.
Safeguard reliability: When estimating the risk of a particular hazard, it is the final event frequency that matters, not the initiating cause frequency. The final event frequency depends on the initiating cause frequency, of course, but it also depends on the reliability of the safeguards that are in place. Just as organizations spell out the standard frequency to use for various types of causes, they may also spell out the reliability to use for various types of safeguards.
Again, these standards are usually based on published values, but organizations may also substitute other values or include additional safeguards in their HazOp procedures or simply hope their HazOp teams figure them out.
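The arithmetic behind the two preceding paragraphs is a Layer of Protection Analysis (LOPA)-style calculation: the final event frequency is the initiating cause frequency multiplied by the probability of failure on demand (PFD) of each independent safeguard. The values below are illustrative only; an organization would take its values from published sources or its own corporate standards.

```python
# LOPA-style sketch with illustrative values. The final event frequency
# is the initiating cause frequency times the PFD of each independent
# safeguard in the protection layers.
initiating_frequency = 1e-1   # e.g. a control loop failure, per year

safeguard_pfds = {
    "independent high-level alarm with operator response": 1e-1,
    "safety instrumented function":                        1e-1,
}

final_frequency = initiating_frequency
for pfd in safeguard_pfds.values():
    final_frequency *= pfd

print(f"{final_frequency:.0e} events per year")  # prints "1e-03 events per year"
```

This is why safeguard reliability standards are a structural source of difference: two organizations that assign different PFDs to the same alarm will calculate different final event frequencies and may reach different conclusions against their RTC.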
Actual Differences in “Identical” Systems
Nothing makes a project engineer more uncomfortable than being told “this process is identical to the one you worked on before, so it shouldn’t be much more than a copy-and-paste job.” Even if the second process is an exact duplicate of the first process as it was built, personnel working on the first process have been modifying it to deal with issues or to make improvements. One of the differences that frequently occurs in “identical” systems is that they do not have identical safeguards in place.
If, somehow, processes are truly physical duplicates of one another, including the layouts of the processes and their relationship to the rest of their facilities, there is the potential for non-physical differences that still affect safety. Operating and maintenance staffing levels, how often operators go on rounds and how long those rounds last, and production rates and inventory levels are a few.
Differences in Training, Experience, and Preferences of Team Members
PHAs are reviews conducted and led by people, and people are different. Each person has different training, different experience, and, based on those differences, different preferences. OSHA and most organizational PHA procedures address this by requiring a PHA team made up of people from different backgrounds. Even a team with the same roster has changed since the last time it met.
No two teams are alike.
Differences in team composition can lead to differences in the PHAs the teams produce, even when everything else is the same. There are three areas that seem especially vulnerable to differences in the training, experience, and preferences of team members: identifying causes, identifying safeguards, and assessing consequences.
Identification of causes: Members of a PHA team may consider some causes more credible than others. They may overlook some causes entirely, or some team members may be especially sensitive to certain causes or hazards.
Identification of safeguards: Different teams may be more or less likely to suggest features in the process as safeguards. While the discipline of the PHA review will help dismiss those features that are not actually safeguards, the discipline of the review will not force a PHA team to identify safeguards that it has overlooked.
Assessment of consequences: Different teams may assess consequences differently. Even when they agree on the event that results from a process deviation, they may assess the impact severities of that event differently. The guidance to “consider the worst case” is not particularly helpful: no matter how bad the case considered, there is always a worse case, so “worst case” becomes meaningless. Most teams are left to trust their judgment about how bad it could be.
“Identical” is a Myth
It is unlikely that any two PHAs of “identical” facilities will produce identical results. Differences in the facilities, as well as differences in the PHA teams, are bound to show up in the results, even if structural differences in the PHA methodology are completely eliminated. For the most part, these differences are of no consequence.
The concern, however, should not be whether PHAs of similar units at different facilities are the same. The concern should be whether significant hazards are being overlooked. PHA teams should not worry about generating PHAs that match those developed by their counterparts at other facilities. Instead, they should focus their attention on doing a thorough job of identifying the hazards in their own facility.