Everything is Green: The delusion of health and safety reporting

Over the past 12 months, I have been engaged in a significant amount of health and safety “assurance” work, helping organisations understand whether the health and safety risks in their businesses are effectively managed. Perhaps the most enduring impression from that work is of the misleading and dangerous assumptions people make based on health and safety reports.

Often, when people look to criticise health and safety reporting, they point to lag indicators such as injury rates. I do not want to talk about injury rates as a measure of health and safety performance in this article, so I will put that issue to bed with this observation from the Pike River Royal Commission:

“The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries. … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.”

Let’s be clear.  There are no major accident enquiries that have identified personal injury rates as a legitimate measure of the effectiveness of health and safety management.  Personal injury rates are a measure of how many personal injuries have occurred – no more.  Any organisation that assumes personal injury rates are a measure of the effectiveness of their safety management system is misguided.  We have known this for decades. 

The challenge is not managing personal injury rates. The challenge is proving the effectiveness of our crucial systems for managing health and safety risk. If our systems effectively manage health and safety risks, improved safety performance should follow. However, the reverse does not hold true, and countless major accident enquiries have identified fundamentally flawed safety management systems disguised by good and improving personal injury rates.

We have seen over time the development of so-called “lead” indicators as a counterpoint to traditional lag indicators. Lead indicators are supposedly designed to provide insight into the effectiveness of safety management systems.

But do they? 

Overwhelmingly, lead indicators are nothing more than measures of activity, and the fact that those measures show 100% compliance – they are all green – creates a dangerous illusion of safety.

A popular “lead” indicator for safety and health is the number of management interactions. These might be variously described as safety conversations, safe act observations, behavioural observations, safety interactions, management walk-arounds and so on. Inevitably, they show up in a health and safety report as part of a table of lead indicators or a dashboard of “traffic lights”. These indicators, or traffic lights, are usually coloured red if the indicator has not been met, amber if it has only partially been met and green if it has been met.

Other typical indicators might include:

· Corrective actions closed;
· Audits completed;
· Training completed;
· Hazards identified; or
· Pre-start or “Take 5” cards completed.

No doubt there are countless more. 

The difficulty with all of these indicators is that they are measures of “activity”. Invariably, they tell us whether things have been “done”.

They tell us nothing about the quality or effectiveness of the activity. 

They tell us nothing about the influence of the activity on safety. 

They make no contribution to proving the effectiveness of our crucial systems. 

One of the phenomena that I have observed about health and safety management over the years is the notion of the “safety paradox”. The safety paradox supposes that everything we do in the name of health and safety can both improve and undermine safety in the workplace. A very good example is a frontline risk assessment tool such as the JHA (job hazard analysis).

The JHA is a ubiquitous frontline risk assessment tool implemented by organisations all over the world.  At its best, it can be an effective mechanism to help frontline workers identify the risks associated with their work and develop suitable controls to manage those risks. 

Conversely, it can also disengage the workforce from the safety management message of the organisation and drive a complete “us and them” mentality.

Research suggests that significant numbers of workers see frontline risk assessment tools like the JHA as a backside covering exercise, designed to protect managers from legal risk in the event of an accident.  In most workplaces, this idea does not come as a complete shock, and there is a general acceptance of the limits or weaknesses inherent in the JHA.  However, the use of the JHA continues to roll on without analysis, without thought, without critical thinking and certainly without any reporting at a managerial level about its effectiveness. 

I have never seen a health and safety report that provides information to the executive about the effectiveness of the JHA system in the business.  Given that the JHA is one of the most critical tools used by organisations for the management of high-risk activities (including working at heights, confined space entry, lifting operations and so on) I find this extraordinary. 

I am not going to argue for or against organisations using a JHA. But I cannot see any reasonable argument for an organisation not knowing whether its use of the JHA is improving safety or undermining it.

What about incident investigations? Are incident investigations an important system in safety management?

Whenever I ask this question in training workshops, everybody immediately tells me that incident investigations are very important. If I ask the question:

On a scale of 1 to 10, with 1 being unimportant and 10 being critically important, how important are incident investigations?

Inevitably the answer is 9 or 10. 

But does your system of incident investigation work? If it is such a crucial system, how do you prove its effectiveness? This was an issue that the Pike River Royal Commission wanted to understand, and it examined the question as follows:

“The workers reported many incidents and accidents. The commission analysed 1083 reports and summarised a selection of 436 in a schedule. … there were problems with the investigation process … Incidents were never properly investigated.”

If you are interested, an extract of cross-examination from the Royal Commission asking questions about incident investigations is available here.  It provides an interesting insight into the sorts of issues managers need to address when their safety management systems are being critically analysed. 

Again, I have never seen a health and safety report that provides information to the executive about the effectiveness of an incident investigation process. Would it be so unreasonable to expect, given the apparent criticality of incident investigations, that once or twice a year somebody would prepare a report for executive management summarising incident investigations and forming a view about their quality and effectiveness?

Finally, let’s go back and consider management interactions.  As I indicated above, these are a very common lead indicator for safety management, but they are also very limited, often nothing more than a measure of how many interactions have been done.  There is typically no measure or analysis about whether interactions were done well or whether they have added value to safety management. 

It seems universally assumed that management interactions around health and safety are a good thing, but are they?

What is their purpose, what are they designed to achieve and how do we know that they are achieving that purpose?  Is there a risk that management interactions could be undermining safety in your workplace? 

How do you know your managers are not just wandering around practising random acts of safety, reinforcing unsafe behaviours and generally just pissing everybody off?

A green traffic light in a health and safety report, indicating that everybody who should have done a management interaction has done one, is misleading and fuels the illusion of safety that underpins so many catastrophic workplace events.

When was the last time anybody provided health and safety reporting that made any meaningful contribution to proving the effectiveness of crucial systems? Have any leaders ever received a report showing a detailed analysis of lifting operations over a 10-month period, with a formal, concluded view about the effectiveness of the safety management system to control the risks associated with lifting? What about dropped objects? What about working at height?

What about the efficacy of your permit to work system?  Has that ever been analysed and reported on, other than on a case-by-case “reactive” basis following an incident? 

For your next health and safety “reporting” meeting, try this: Scrap your traditional health and safety report, pick a critical risk such as working at heights, and ask your safety manager to provide you with a presentation about whether, or to what extent, the risk has been managed as far as reasonably practicable.

What is your health and safety reporting really telling you, as opposed to the assumptions you choose to make?  Is there any evidence that it is proving the effectiveness of your crucial systems?

10 thoughts on “Everything is Green: The delusion of health and safety reporting”

  1. I do like and agree with the arguments you put forward. The issue with the JSA/JHA is that it places more responsibility on the worker to see the noted issues and take corrective action. On one audit I undertook at a confined space entry (CSE) on a jetty, the guys seemed to have completed a pre-start inspection of the site; all the boxes had been ticked, in their eyes and the supervisor’s. I asked what was dripping from the overhead pipe at the entry to the CSE. “We do not know” was the response. In fact, a number of pipes were in place above the CSE entry: some carried fuel, others crude oil, another acid. Fortunately, the leaker carried water. How did a JSA/JHA help these people, and what could have minimised the risk? In my book, they should have been accompanied by an expert in the area, as all these people were subcontractors who had not visited the job site previously.

    Food for thought on your interesting questions.

    Regards, John Bell

  2. Great blog, Greg. The ideology of measuring and reporting drives its own culture, and this is evident in the discourse of zero harm organisations. When zero is your goal, you must count rather than think about what ‘counts’. Even when such organisations undertake interactions, they want to count them, and so undermine the very dynamic of interactions.

    1. Sorry Rob, zero is a goal (just as winning is to a sports team). It drives safety, just like the desire to win in sport drives the effort to be the best. You only see it as a number.

      Most of Greg’s posts are about meeting complete compliance… that could be seen in the same context as zero… as in zero failure to comply… and let’s face it, all that you sell is a means to reduce harm (ultimately), so your motivation is a goal to lower rates…

  3. Greg, an informative blog.
    I would ask the Operations manager (not the Safety Manager) to “provide you with a presentation about whether, or to what extent, the risk has been managed as far as reasonably practicable”. The Ops manager owns the risk and should be intimate with the controls. I would also ask the safety manager their views, in order to assess who knows what they are talking about.

    1. Don’t you think both should have an informed view? Some of my best friends are safety managers (I know how that sounds), but if they do not have a concluded view about the systems they develop, surely something is missing?
