I approach this article with some trepidation.
I was recently sent a copy of Safe Work Australia’s report, Measuring and Reporting on Work Health & Safety, and subsequently saw a post on LinkedIn dealing with the same report. I made some observations on the report in response to the original post which drew the ire of some commentators (although I may be overstating that, and I apologise in advance if I have), but I did promise a fuller response, and in the spirit of a heartfelt desire to contribute to the improvement of health and safety in Australia – here it is.
I want to start by saying that I have the utmost respect for the authors of the report, and nothing in this article is intended to diminish the work they have produced. I also accept that I am writing from a perspective heavily influenced by my engagement with health and safety through the legal process.
I also need to emphasise that I am not dismissing what is said in the report; some of the structures and processes it proposes are valid and valuable. But I do think the report’s emphasis on numerical and graphical information has the potential to blind organisations to the effectiveness of crucial systems.
I also want to say that I have witnessed over many years – and many fatalities – organisations that can point to health and safety accreditations, health and safety awards, good personal injury rate data, good audit scores and “traffic lights” all in the green. Then a serious accident or workplace fatality exposes those same “good” safety management systems as riddled with systemic failure – long-term systemic departures from the requirements of the system that had not been picked up by any of the health and safety measures or performance indicators.
I am not sure how many ways I can express my frustration at watching executive leadership hold a sincere belief that they have excellent safety management systems in place, only to discover that those systems do not even begin to stand up to the level of scrutiny they come under in a serious legal process.
In my view, there is a clarity to health and safety assurance that has been borne out in every major accident enquiry, a clarity that was overlooked by the drafters of WHS Legislation and a clarity which is all too often overlooked when it comes to developing assurance programs. With the greatest respect possible to the authors of this report, I fear it has been overlooked again.
The report, as I read it, perpetuates the measurement of activity over assurance, and reinforces the idea that conclusions can be drawn from measures of activity when those conclusions are simply not valid.
Before I expand on these issues, I want to draw attention to another point in the report. At page 38 the report states:
“Each injury represents a breach of the duty to ensure WHS”
To the extent that this comment is meant to represent in some way the “legal” duty, I must take issue with it. There is no duty to prevent all injuries, and an injury does not represent, in and of itself, a breach of any duty to “ensure WHS”. The Full Court of the Western Australian Supreme Court made this clear in Laing O’Rourke (BMC) Pty Ltd v Kirwin [2011] WASCA 117 [31], citing with approval the Victorian decision, Holmes v RE Spence & Co Pty Ltd (1992) 5 VIR 119, 123 – 124:
“The Act does not require employers to ensure that accidents never happen. It requires them to take such steps as are practicable to provide and maintain a safe working environment.”
But to return to the main point of this article.
In my view, the objects of health and safety assurance can best be understood from comments of the Pike River Royal Commission:
“The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries. … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.”
I have written about this recently and do not want to repeat those observations (see: Everything is Green: The delusion of health and safety reporting), so let me try to explain this in another way.
Whenever I run obligations training for supervisors and managers we inevitably come to the question of JHAs – job hazard analyses – and I am assuming that readers will be familiar with that “tool”, so I will not explain it further.
I then ask a question about how important people think the JHA is. On a scale of 1 to 10, with 1 being the least important and 10 being the most, how important is the JHA?
Inevitably, the group settles on a score of somewhere between 8 and 10. They all agree that the JHA is “critically important” to managing health and safety risk in their business. They all agree that every high hazard activity they undertake requires a JHA.
I then ask: what is the purpose of the JHA? Almost universally, groups agree that the purpose of the JHA is something like:
- To identify the job steps;
- To identify hazards associated with those job steps;
- To identify controls to manage the hazards; and
- To help ensure that the work is performed having regard to those hazards and the controls.
So my question is this: if the JHA is a “crucial system” or “critically important”, and a key tool for managing every high-risk hazard in the workplace, is it unreasonable to expect that the organisation would have some overarching view about whether the JHA is achieving its purpose?
They agree it is not unreasonable, but such a view does not exist.
I think the same question could be asked of every other potentially crucial safety management system including contractor safety management, training and competence, supervision, risk assessments and so on. If we look again to the comments in the Pike River Royal Commission, we can see how important these system elements are:
“Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands. Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that health and safety management was not taken seriously enough at Pike.”
But equally, the same question can be asked of high-risk “hazards” – working at heights, fatigue, psychological wellbeing etc.
What is the process to manage the hazard, and does it achieve the purpose it was designed to achieve?
The fact that I have 100% compliance with closing out corrective actions tells me no more about the effectiveness of my crucial systems than the absence of accidents.
The risk of performance measures that are really measures of activity is that they can create an illusion of safety. The fact that we have 100% compliance with JHA training, that a JHA was done every time it was required to be done, or that a supervisor signed off every JHA that was required to be signed off – these are all measures of activity; they do not tell us whether the JHA process has achieved its intended purpose.
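To make the distinction concrete, here is a minimal sketch in Python – with invented field names and made-up numbers, purely for illustration – of how a dashboard built entirely on activity measures can show green across the board while telling us nothing about effectiveness:

```python
from dataclasses import dataclass

@dataclass
class ActivityMetrics:
    """Measures of activity: easy to count, and easy to turn green."""
    jhas_required: int
    jhas_completed: int
    jhas_signed_off: int
    training_required: int
    training_completed: int

def activity_dashboard(m: ActivityMetrics) -> dict:
    # Every figure here measures whether something was done,
    # not whether it achieved its purpose.
    return {
        "JHA completion": m.jhas_completed / m.jhas_required,
        "supervisor sign-off": m.jhas_signed_off / m.jhas_completed,
        "JHA training": m.training_completed / m.training_required,
    }

# A hypothetical reporting period: every activity measure is 100% "green"...
print(activity_dashboard(ActivityMetrics(120, 120, 120, 45, 45)))
# ...yet nothing above tells us whether any JHA identified the right hazards
# or controls, or whether anyone worked in accordance with them.
```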
So, what might a different type of “assurance” look like?
First, it would involve a very conscious decision about the crucial systems or critical risks in the organisation, and a focus on those. Before I am called out for ignoring everything else: I do not advocate ignoring everything else. By all means, continue to use numerical and similar statistical measures for the bulk of your safety reporting, but when you want to know that something works – when you want to prove the effectiveness of your crucial systems – make a conscious decision to focus on them.
If I thought that the JHA process was a crucial system, I would want to know how that process was supposed to work. If it is “crucial”, I should understand it to some extent.
I would want a system of reporting that told me whether the process was being managed the way it was supposed to be. And whether it worked. I would like to know, for example (a rough sketch in code follows the list):
- How many JHAs were done;
- How many were reviewed;
- How many were checked for technical compliance, and what was the level of technical compliance? Were they done when they were meant to be done? Were they completed correctly?
- How many were checked for “quality”, and what was the quality of the documents like? Did they identify appropriate hazards? Did they identify appropriate controls? Were people working in accordance with the controls?
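As promised above, here is a minimal sketch of what a reporting record along these lines might capture. The field names, and the idea of sampling JHAs for quality, are my own illustrative assumptions, not a prescription – the point is that the effectiveness questions sit alongside, and separate from, the raw activity counts:

```python
from dataclasses import dataclass, field

@dataclass
class JHAQualityCheck:
    """A sampled review of one JHA against its purpose, not just its existence."""
    identified_appropriate_hazards: bool
    identified_appropriate_controls: bool
    work_followed_controls: bool

@dataclass
class JHAAssuranceReport:
    # Activity counts: necessary, but not sufficient.
    jhas_done: int
    jhas_reviewed: int
    checked_for_technical_compliance: int
    technically_compliant: int
    # Effectiveness sampling: did the process achieve its purpose?
    quality_checks: list[JHAQualityCheck] = field(default_factory=list)

    def effectiveness_summary(self) -> dict:
        n = len(self.quality_checks)
        if n == 0:
            # Activity counts alone prove nothing about effectiveness.
            return {"sampled": 0, "warning": "no quality sampling done"}
        achieved = sum(
            1 for c in self.quality_checks
            if c.identified_appropriate_hazards
            and c.identified_appropriate_controls
            and c.work_followed_controls
        )
        return {"sampled": n, "achieved_purpose_rate": achieved / n}
```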
I would also want to know what triggers were in place to review the quality of the JHA process itself. Was our documented process a good process? Have we ever reviewed it internally? Do we ever get it reviewed externally? Are there any triggers for us to review our process, and was it reviewed during the reporting period? If, for example, we are alerted to a case where an organisation was prosecuted for failing to implement its JHA process, does that cause us to go and do extra checks of our own systems?
We could ask the same questions about our JHA training.
I would want someone to validate the reporting. If I am being told that our JHA process is working well – that it is achieving the purpose it was designed for – I would like someone (from time to time) to validate that. To tell me, “Greg, I have gone and looked at operations and I am comfortable that what you are being told about JHAs is accurate. You can trust that information – and this is why …”.
As part of my personal due diligence, if I thought JHAs were crucial, that is what I would check when I went into the field. I would validate the reporting for myself.
I would want some red flags. Most importantly, I would want a mandatory term of reference in every incident investigation requiring the JHA process to be reviewed – not whether the JHA for the job was a good JHA, but whether our JHA process achieved its purpose in this case, and if not, why not.
If my reporting is telling me that the JHA process is good, but all my incidents are showing that the process did not achieve its intended purpose, then we may have systemic issues that need to be addressed.
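As a sketch of what such a red flag might look like in reporting terms – the threshold and names here are made up for illustration, not drawn from the report:

```python
def systemic_issue_flag(reported_status: str,
                        incidents_investigated: int,
                        jha_failed_purpose: int,
                        threshold: float = 0.25) -> bool:
    """Flag a possible systemic issue when routine reporting says the JHA
    process is healthy, but incident investigations keep finding that it
    failed to achieve its purpose. The threshold is arbitrary."""
    if incidents_investigated == 0:
        return False
    failure_rate = jha_failed_purpose / incidents_investigated
    return reported_status == "green" and failure_rate >= threshold

# A hypothetical period: the dashboard is green, but 4 of 10 investigated
# incidents found the JHA process did not achieve its intended purpose.
print(systemic_issue_flag("green", 10, 4))  # True: look for systemic issues
```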
I would want to create as many touch points as possible with this crucial system to understand if it was achieving the purpose it was intended to achieve.
My overarching concern, personally and professionally, is to structure processes that ensure organisations can prove the effectiveness of their crucial systems. I have had to sit in too many little conference rooms, with too many managers whose audits, accreditations, awards and health and safety reports made them think everything was OK, when they had a dead body to deal with.
I appreciate the attraction of traffic lights and graphs. I understand the desire to find statistical and numerical measures to assure safety.
I just do not think they achieve the outcomes we ascribe to them.
They do not prove the effectiveness of crucial systems.