One of my biggest gripes about safety management over the past 20-plus years is the lemming-like fascination with “indicators”.
Notoriously, major inquiries around the globe have found that when organisations focus on “lag” indicators (typically personal injury rates) they miss, or become blinded to, more significant risks and catastrophic events often result.
Most recently, this was succinctly articulated by the Pike River Royal Commission which stated:
The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents. … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries. … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.
I have long feared that we would head down the same path under the guise of “lead” indicators, and it appears we are. A recent study described in the Queensland Government’s eSafe newsletter found serious shortcomings in using traditional lag indicators for measuring safety.
Nothing surprising there!
Apparently, the study went on to note a range of leading indicators that helped to deliver good personal injury performance. These indicators included fairly commonplace practices such as:
- subcontractors being selected based (in part) on safety criteria.
- subcontractors submitting approved, site-specific safety programs.
- the percentage of toolbox meetings attended by supervisors and managers.
- the percentage of planning meetings attended by jobsite supervisors and managers.
- the percentage of negative test results on random drug tests.
- the percentage of safety compliance on jobsite safety audits (inspections).
And so on.
I am not saying that any of these indicators are not good safety practices. They are, and they should be measured as markers of good safety practice – but they are not a measure of a safe workplace. They are not an indicator of risks being controlled.
The problem with any general “indicator” approach, lead or lag, is it does not actually give us any insight into whether the risks in the business are being controlled. It simply perpetuates the illusion of safety.
In other words, I have a bunch of indicators. The indicators are being met. Therefore, the risks in my business are being controlled.
Think of a potential fatal risk in your business. Take confined spaces as an example.
What do any of the indicators described above tell you about whether that risk is being controlled? Typically nothing.
What are the crucial systems in your business?
How do you prove that they are effective?
5 thoughts on “Lead indicators: Reinforcing the illusion of safety”
So you offer that lead indicators are a measure of good safety practice but not an effective measure of a safe workplace … what is a good measure of a safe workplace? I do not disagree with what you have said, but simply want your view on what does work rather than simply what doesn’t work.
Thanks for your comment Glenn.
I guess I would answer this in two parts. First, I think that much of what has been set out in the research about managing “personal injury” does work, insofar as it does manage personal injuries.
I think that there are a series of strategies that can be applied to legitimately drive down the level of personal injury in a workplace.
That being said, the level of personal injuries and the “metrics” that might define the relevant strategies are not indicators that a workplace is safe, or that the safety management system works, or that critical risks are being controlled.
To adopt the words of the Pike River Royal Commission, they do not “prove” the “effectiveness” of “crucial” systems.
One of the “truths” of safety management that I am not sure receives enough attention is that we may not actually be able to determine what “works”. If safety management is a classic “wicked problem” as defined by research in the 1970s (see: Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4, 155–169), which I believe it is, it may not be possible to say that the “problem” is ever “solved”.
Wicked problems have no “stopping rule”; safety management requires a value judgement that we have “done enough” (Rittel & Webber, 1973, p. 162):
The planner terminates work on a wicked problem, not for reasons inherent in the “logic” of the problem. He stops for considerations that are external to the problem: he runs out of time, or money, or patience. He finally says, “That’s good enough,” or “This is the best I can do within the limitations of the project,” or “I like this solution,” etc.
There is no test of the solution (Rittel & Webber, 1973, p. 163).
In my view, this is the role of (and intellectual challenge for) the health and safety manager. I believe that it is the health and safety manager’s responsibility to identify the “crucial systems”, determine what criteria should be applied to “prove their effectiveness”, and provide assurance to the relevant levels of management that the risks are being controlled.
It is a narrative, not a number. A conversation and not a metric. It requires a value judgement by a health and safety professional to say “these are the critical risks in our business, these are appropriate processes to manage them and in my professional view the critical risks are being adequately controlled”.
Food for thought?
I still have the safety award I received (a Leatherman multi-tool) two weeks before the Moura No. 2 underground mine exploded.
The direct result of that disaster was the need to identify Principal Hazards in the Coal Mining Safety and Health Act.
Hasn’t helped much though; the mines are still driven by lag indicators.
Having recently read an account of the Pike River Mine disaster, “Tragedy at Pike River Mine”, there would seem to have been multiple indicators of the potential for disaster; these were, as is often the case, ignored or not acknowledged.
Many of the warning indicators were no doubt raised by the HSE, WHS and QSE crew on site, and by the miners, as can be seen by the terrific turnover of staff.
If we don’t measure lead indicators, what do we use as a metric? How do we justify to boards that we are valuable as safety professionals?