Over the past 12 months, I have been engaged in a significant amount of health and safety “assurance” work, helping organisations understand whether the health and safety risks in their business are effectively managed. Perhaps the most enduring impression from those 12 months is of the misleading and dangerous assumptions people make based on health and safety reports.
Often, when people look to criticise health and safety reporting, they point to lag indicators such as injury rates. I do not want to talk about injury rates as a measure of health and safety performance in this article, so I would like to put that issue to bed with this observation from the Pike River Royal Commission:
“The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries. … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.”
Let’s be clear. There are no major accident enquiries that have identified personal injury rates as a legitimate measure of the effectiveness of health and safety management. Personal injury rates are a measure of how many personal injuries have occurred – no more. Any organisation that assumes personal injury rates are a measure of the effectiveness of their safety management system is misguided. We have known this for decades.
The challenge is not managing personal injury rates. The challenge is proving the effectiveness of our crucial systems for managing health and safety risk. If our systems are effective at managing health and safety risks, improved safety performance should follow. However, the reverse does not hold true, and countless major accident enquiries have identified fundamentally flawed safety management systems disguised by good and improving personal injury rates.
We have seen over time the development of so-called “lead” indicators as a counterpoint to traditional lag indicators. Lead indicators are supposedly designed to provide insight into the effectiveness of safety management systems.
But do they?
Overwhelmingly, lead indicators are nothing more than measures of activity, and the fact that those measures show 100% compliance – they are all green – creates a dangerous illusion of safety.
A popular “lead” indicator for safety and health is the number of management interactions. These might be variously described as safety conversations, safe act observations, behavioural observations, safety interactions, management walkarounds and so on. Inevitably, they show up in a health and safety report as part of a table of lead indicators or a dashboard of “traffic lights”. These indicators, or traffic lights, are usually coloured red if the indicator has not been met, amber if it has been partially met and green if the requirement has been met.
Other, typical indicators might include:
· Corrective actions closed;
· Audits completed;
· Training completed;
· Hazards identified; or
· Pre-start or “Take 5” cards completed.
No doubt there are countless more.
The difficulty with all of these indicators is that they are measures of “activity”. Invariably, they tell us whether things have been “done”.
They tell us nothing about the quality or effectiveness of the activity.
They tell us nothing about the influence of the activity on safety.
They make no contribution to proving the effectiveness of our crucial systems.
One of the phenomena that I have observed about health and safety management over the years is the notion of the “safety paradox”. The safety paradox supposes that everything we do in the name of health and safety can both improve and undermine safety in the workplace. A very good example is a frontline risk assessment tool such as the JHA (job hazard analysis).
The JHA is a ubiquitous frontline risk assessment tool implemented by organisations all over the world. At its best, it can be an effective mechanism to help frontline workers identify the risks associated with their work and develop suitable controls to manage those risks.
Conversely, it also can disengage the workforce from the safety management message of the organisation and drive a complete “us and them” mentality.
Research suggests that significant numbers of workers see frontline risk assessment tools like the JHA as a backside covering exercise, designed to protect managers from legal risk in the event of an accident. In most workplaces, this idea does not come as a complete shock, and there is a general acceptance of the limits or weaknesses inherent in the JHA. However, the use of the JHA continues to roll on without analysis, without thought, without critical thinking and certainly without any reporting at a managerial level about its effectiveness.
I have never seen a health and safety report that provides information to the executive about the effectiveness of the JHA system in the business. Given that the JHA is one of the most critical tools used by organisations for the management of high-risk activities (including working at heights, confined space entry, lifting operations and so on) I find this extraordinary.
I am not going to advocate whether organisations should use a JHA. But I don’t think there could be any reasonable argument for an organisation not to know whether their use of the JHA is beneficial to safety or is undermining it.
What about incident investigations? Are incident investigations an important system in safety management?
Whenever I ask this question in training workshops, everybody immediately tells me that incident investigations are very important. If I ask the question:
On a scale of 1 to 10, with 1 being unimportant and 10 being critically important, how important are incident investigations?
Inevitably the answer is 9 or 10.
But does your system of incident investigation work? If it is such a crucial system, how do you prove its effectiveness? This was an issue that the Pike River Royal Commission wanted to understand, and they looked at it as follows:
“The workers reported many incidents and accidents. The commission analysed 1083 reports and summarised a selection of 436 in a schedule. … there were problems with the investigation process … Incidents were never properly investigated.”
If you are interested, an extract of cross-examination from the Royal Commission asking questions about incident investigations is available here. It provides an interesting insight into the sorts of issues managers need to address when their safety management systems are being critically analysed.
Again, I have never seen a health and safety report that provides information to the executive about the effectiveness of an incident investigation process. Would it be so unreasonable to expect, given the apparent criticality of incident investigations, that once or twice a year somebody would prepare a report for executive management summarising incident investigations and forming a view about their quality and effectiveness?
Finally, let’s go back and consider management interactions. As I indicated above, these are a very common lead indicator for safety management, but they are also very limited, often nothing more than a measure of how many interactions have been done. There is typically no measure or analysis about whether interactions were done well or whether they have added value to safety management.
It seems universally assumed that management interactions around health and safety are a good thing, but are they?
What is their purpose, what are they designed to achieve and how do we know that they are achieving that purpose? Is there a risk that management interactions could be undermining safety in your workplace?
How do you know that your managers are not just wandering around practising random acts of safety, reinforcing unsafe behaviours and generally just pissing everybody off?
A green traffic light in a health and safety report, indicating that everybody who should have had a management interaction has done one, is misleading and fuels the illusion of safety which underpins so many catastrophic workplace events.
When was the last time anybody provided health and safety reporting that made any meaningful contribution to proving the effectiveness of crucial systems? Have any leaders ever received a report showing a detailed analysis of lifting operations over a 10-month period with a formal, concluded view about the effectiveness of the safety management system to control the risks associated with lifting? What about dropped objects? What about working at height?
What about the efficacy of your permit to work system? Has that ever been analysed and reported on, other than on a case-by-case “reactive” basis following an incident?
For your next health and safety “reporting” meeting, try this: scrap your traditional health and safety report, pick a critical risk such as working at heights, and ask your safety manager to provide you with a presentation about whether, or to what extent, the risk has been managed so far as is reasonably practicable.
What is your health and safety reporting really telling you, as opposed to the assumptions you choose to make? Is there any evidence that it is proving the effectiveness of your crucial systems?