Lead indicators: Reinforcing the illusion of safety

One of my biggest gripes about safety management over the past 20-plus years is the lemming-like fascination with “indicators”.

Notoriously, major inquiries around the globe have found that when organisations focus on “lag” indicators (typically personal injury rates) they miss, or become blinded to, more significant risks, and catastrophic events often result.

Most recently, this was succinctly articulated by the Pike River Royal Commission which stated:

The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents.  … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries.  … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.

I have long feared that we are heading down the same path under the guise of “lead” indicators, and it appears that we are. A recent study described in the Queensland Government’s eSafe newsletter found serious shortcomings in using traditional lag indicators for measuring safety.

Nothing surprising there!

Apparently, the study went on to note a range of leading indicators that helped to deliver good personal injury performance. These indicators included fairly commonplace practices such as:

  • subcontractors being selected based (in part) on safety criteria.
  • subcontractors submitting approved, site-specific safety programs.
  • the percentage of toolbox meetings attended by supervisors and managers.
  • the percentage of planning meetings attended by jobsite supervisors and managers.
  • the percentage of negative test results on random drug tests.
  • the percentage of safety compliance on jobsite safety audits (inspections).

And so on.

I am not saying that any of these indicators are not good safety practices. They are, and they should be measured as markers of good safety practice – but they are not a measure of a safe workplace. They are not an indicator of risks being controlled.

The problem with any general “indicator” approach, lead or lag, is that it does not actually give us any insight into whether the risks in the business are being controlled. It simply perpetuates the illusion of safety.

In other words, I have a bunch of indicators. The indicators are being met. Therefore, the risks in my business are being controlled.

Nonsense.

Think of a potential fatal risk in your business. Take confined spaces as an example.

What do any of the indicators described above tell you about whether that risk is being controlled? Typically nothing.

What are the crucial systems in your business?

How do you prove that they are effective?

Contractor safety management series Part 4: The Queen v ACR Roofing

The Queen v ACR Roofing involved a fatality at a construction site, when a worker was electrocuted after a crane contacted overhead power lines. The worker was employed by a sub-contractor engaged by a third party, and had no contractual relationship with ACR, the company that was prosecuted.

The case explores a number of interesting concepts, including whether a sub-contractor can be “engaged” when there is no contractual relationship. The case also explores the ongoing issue of “control” in a contracting relationship, and considers what role the relative “expertise” of the parties has in determining who has control.

You can access a video presentation about the case here.

Contractor safety management series Part 2: Stratton v Van Driel Limited

Stratton v Van Driel Limited is the second case in our contractor safety management series.

It is a somewhat older decision, having been handed down in 1998, but useful in that it looks at a narrow issue that is very important in the context of contractor safety management: Control.

In 1995 Mr Baum, a roof plumber, was seriously injured when he fell from a ladder. Mr Baum was employed by Signal & Hobbs, who in turn had been engaged by Van Driel Limited to do work on the new Dandenong Club in Dandenong, Victoria.

The essence of the charges against Van Driel was that it had not done everything reasonably practicable to provide a safe system of work, in that it had not managed the risks associated with working on the roof.

Van Driel defended the charges on the basis that it did not have relevant control over the way an independent contractor did its work.

You can access the video presentation of the case here.

Delphic motherhood statements part 2 – safety documents that nobody can understand

A little while ago I did a post looking at the complexity of documented safety management systems, and the role that documentation has played in undermining effective safety management. You can review the post here.

I was recently sent an article (you can access it here) which underscores the potential negative impact safety documentation has on safety performance.

The New Zealand research found that:

  • Two thirds of employees did not fully understand information contained in health and safety documents, including safety procedures;
  • 80% of employees were not able to accurately complete hazard report forms; and
  • Safety documents were highly complex and used vocabulary that employees did not understand.

A fascinating aspect of the research is that it provides a list of words that were unfamiliar to and confused employees. Some of those words included “significant hazards”, “competence”, “accountabilities” and “not adversely affect” – all words that reflect the requirements of legislation and guidance material but have little place in the day-to-day comprehension of workers.

From my own perspective, I have to say that this research is entirely consistent with my study of major accident events going back 30 years. Every major accident inquiry that I have ever researched has identified that in some way the documented safety management systems undermined effective safety performance. Typically, they are too complex for the people who have to implement them to understand.

Based on my experience I would add two further phrases to the list of unfamiliar words: “reasonably practicable” and “root cause”. These two phrases are ubiquitous throughout safety management documents in Australia, yet whenever I am conducting obligations or investigation training there is no common (much less “correct”) understanding of what they mean.

There are two things that I find professionally embarrassing as a person who has spent the last two decades specialising in safety and health management. The first is our continued reliance on lost time injury data as a measure of safety performance, in light of the overwhelming evidence that it adds no value to our understanding of the management of risk.

The second is that, despite at least 30 years of “reminders” that our documented safety processes add little to the management of safety risks, almost universally we continue to do the same thing, in the same way, but somehow expect a different result. I think Einstein had something to say about that.

I have recently been working with a senior executive in an organisation who confronted a safety consultant with the following:

“If you can’t explain it to me easily, then you don’t understand it yourself.”

An interesting test to apply to our safety documents?

Safety risk and safety data: Exploring management line of sight

I have recently done a video presentation on a fatality at the Adelaide Desalination plant, which you can find by following this link.

Recently, I was reading some of the transcript of the South Australian Senate Inquiry into the desalination plant (which you can find by following this link), and was struck by one manager’s description of all of the activity undertaken in the name of safety:

We start with the inductions when new staff join the project. So, at 6.30am, usually three times a week—I attend probably two of them; I was in one yesterday—we induct new staff onto the job. The first thing I point out is the list of non-negotiables. The second thing I point out is for each person to look after their mate. It starts there. We then have a standard list of documents. I will read from this list, because it’s quite a large list. There is the HSC risk register, task specific for each job. There is a construction execution plan. There is a JSA, task specific.

We have daily start cards for each area, which is another thing I introduced. I am not sure if we gave you a copy, but it’s a small easily-filled-in card where a work team can assess the risks of adjacent trades, etc. So, that is a specific thing. We have a pre-start meeting every day. There are SafeWork instruction notices posted at each of the work areas. We toolbox the job weekly, because the pace of this job changes. You can go out there in two-day gulps and the whole access can change, so we need to make sure people see that. We have the non-negotiables in place. We have site and work-front specific inductions, which is what I told you about. Again, I attended one yesterday.

I have regular safety walks. I have trained all of my management team and the two layers beneath that to go on safety walks. We have our OHSC risk register. There is a just culture model in place. So, if I need to address an incident and it turns out that this person needs retraining or perhaps needs to be disciplined or work outside the fence somewhere, we use this just culture model for that. We have all been trained in that. There are safety KPIs for management. There is a safety enhancement committee, which is a mixture of workers and staff. I actually chair a weekly safety leadership team, and that’s improving safety over and above. We are looking to refresh it all the time. And so it goes on. I have two pages of this stuff.

Now, there may have been far more information that sat behind all of this activity, but it seemed to me to be a typical approach to safety management – and one that typically gives no insight into whether the risks in the business are actually being managed.

One of my particular areas of interest in the context of safety management is “management obligations”, and more particularly how managers (at all levels) get assurance that the health and safety risks in their business are being effectively managed. It is a concept that I have referred to before and written about (Smith, 2012) as “management line of sight”.

An area of speciality for me is management obligations training; courses that are designed to help managers understand their legal obligations for safety and health, and how their behaviour – what they “do” – contributes to effective safety management.

Over the last 3 or 4 years I have put the following scenario to the various courses:

Who here knows about a risk in their business or area of responsibility that could kill someone?

Invariably, most hands go up.

Who has safety information that comes across their desk on a regular basis?

Again – most hands go up.

OK. What I would like you to do is to think about the risk. Then I want you to think about the data that you have looked at in the past 3 months.

Pause ……

What does that data tell you about how well the risk is being controlled?

And then the lights come on, with the realisation that their organisations spend inordinate amounts of time and resources producing volumes of information that tell them nothing about whether risks in the business are actually being controlled.

This “gap” was most recently highlighted in the Royal Commission into the Pike River Coal Mine Disaster (Pankhurst et al., 2012), in which 29 men died in an underground coal mine explosion in New Zealand. The Royal Commission noted the following:

The statistical information provided to the board on health and safety comprised mainly [LTI rates]. The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries.

… The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation. (My emphasis).

Typically, in a training course discussion there is no meaningful consensus on what the “crucial systems” are in a business, much less how we prove that they are effective.

What we can say with a high degree of certainty is that traditional measures of safety performance do not prove the effectiveness of crucial systems – certainly LTI and other personal injury rates do not, and we have known that for at least 25 years. However, other indicators are equally poor at creating insight into the control of crucial systems. The number of management site visits does not enlighten us; nor does the number of audit actions that have been closed out, the number of “behavioural observations”, the number of people trained, the number of corrective actions completed, the number of JHAs or “take 5s” done – and so on it goes.

These things are all indicators of activity, which are designed to ensure that the safety management systems are effective, but ultimately, they leave us in no better position as far as understanding the effectiveness of crucial systems.

There is another interesting challenge that falls out of exploring management line of sight, and that is, what should I be looking at?

Historically, and as I touched on above, we typically consider safety in the context of harm and risk: what can hurt people and how likely is it that they will be hurt? But line of sight and assurance demands a wider gaze than hazards and risks.

The Royal Commission (2012, volume 2, p. 176) also stated:

Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands. Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that health and safety management was not taken seriously enough at Pike. (my emphasis)

“Crucial systems” means more than gas monitoring or ventilation. They are more than the control of physical risks. They incorporate broader organisational systems around hazard identification and risk assessment, contractor safety management, management of change, incident investigation and so on – all elements that are designed to work together so that the “system” as a whole is effective in managing risk.

If organisations are weak when it comes to “proving” that physical risks are being controlled, the reporting, assurance and line of sight needed to prove that these other “crucial” systems are effective are almost non-existent.

When was the last time you received a report “proving the effectiveness” of your incident investigations, for example?

What are the “crucial systems” in your business, and how would you “prove” that they were effective? Food for thought.

References

Pankhurst, G., Bell, S., & Henry, D. (2012). Royal Commission on the Pike River Coal Mine Tragedy. Wellington, New Zealand.

Smith, G. (2012). Management Obligations for Health and Safety. CRC Press, Boca Raton.

25 Years on: Remembering Piper Alpha

In the past few weeks I have been asked to do presentations and share my views about the legacy of Piper Alpha in this, the 25th anniversary year of the disaster.

For me, the positive legacy is the advancement in safety regulation, engineering and “safety in design” that has seen the improvement of the physical safety of high hazard workplaces. Safety in design has also improved the “survivability” of disasters, so that when accidents do occur, their consequences are better mitigated.

The ongoing disappointment, however, is the persistent failure of management oversight and assurance to properly understand if health and safety risks are being managed. This is a failure that has played out in every major accident inquiry since Piper Alpha and continues to undermine effective safety management.

You can see a video presentation of these ideas and concepts here.