I approach this article with some trepidation.
I was recently sent a copy of Safe Work Australia’s report, Measuring and Reporting on Work Health & Safety, and subsequently saw a post on LinkedIn dealing with the same. I made some observations on the report in response to the original post which drew the ire of some commentators (although I may be overstating it, and I apologise in advance if I have). But I did promise a fuller response, and in the spirit of a heartfelt desire to contribute to the improvement of health and safety in Australia – here it is.
I want to start by saying that I have the utmost respect for the authors of the report, and nothing is intended to diminish the work they have produced. I also accept that I am writing from a perspective heavily influenced by my engagement with health and safety through the legal process.
I also need to emphasise that I am not dismissing what is said in the report, nor saying that some of the structures and processes proposed by the report are not valid and valuable. But I do think the emphasis in the report on numerical and graphical information has the potential to blind organisations to the effectiveness of crucial systems.
I also want to say that I have witnessed over many years – and many fatalities – organisations that can point to health and safety accreditations, health and safety awards, good personal injury rate data, good audit scores and “traffic lights” all in the green. At the same time, a serious accident or workplace fatality exposes those same “good” safety management systems as riddled with systemic failure – long-term systemic departures from the requirements of the system that had not been picked up by any of the health and safety measures or performance indicators.
I am not sure how many ways I can express my frustration when executive leadership hold a sincere belief that they have excellent safety management systems in place, only to realise that those systems do not even begin to stand up to the level of scrutiny they come under in a serious legal process.
In my view, there is a clarity to health and safety assurance that has been borne out in every major accident enquiry, a clarity that was overlooked by the drafters of the WHS legislation, and a clarity which is all too often overlooked when it comes to developing assurance programs. With the greatest respect possible to the authors of this report, I fear it has been overlooked again.
In my view, the report perpetuates activity over assurance, and reinforces the idea that conclusions can be drawn from measures of activity when those conclusions are simply not valid.
Before I expand on these issues, I want to draw attention to another point in the report. At page 38 the report states:
“Each injury represents a breach of the duty to ensure WHS”
To the extent that this comment is meant to represent in some way the “legal” duty, I must take issue with it. There is no duty to prevent all injuries, and an injury does not represent, in and of itself, a breach of any duty to “ensure WHS”. The Full Court of the Western Australian Supreme Court made this clear in Laing O’Rourke (BMC) Pty Ltd v Kirwin [2011] WASCA 117 [31], citing with approval the Victorian decision, Holmes v RE Spence & Co Pty Ltd (1992) 5 VIR 119, 123 – 124:
“The Act does not require employers to ensure that accidents never happen. It requires them to take such steps as are practicable to provide and maintain a safe working environment.”
But to return to the main point of this article.
In my view, the objects of health and safety assurance can best be understood from the comments of the Pike River Royal Commission:
“The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries. … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.”
I have written about this recently, and do not want to repeat those observations again (See: Everything is Green: The delusion of health and safety reporting), so let me try and explain this in another way.
Whenever I run obligations training for supervisors and managers, we inevitably come to the question of JHAs – and I am assuming that readers will be familiar with that “tool”, so I will not explain it further.
I then ask a question about how important people think the JHA is. On a scale of 1 to 10, with 1 being the least important and 10 being the most, how important is the JHA?
Inevitably, the group settles on a score of somewhere between 8 and 10. They all agree that the JHA is “critically important” to managing health and safety risk in their business. They all agree that every high hazard activity they undertake requires a JHA.
I then ask: what is the purpose of the JHA? Almost universally, groups agree that the purpose of the JHA is something like:
- To identify the job steps;
- To identify hazards associated with those job steps;
- To identify controls to manage the hazards; and
- To help ensure that the work is performed having regard to those hazards and the controls.
So, my question is, if the JHA is a “crucial system” or “critically important” and a key tool for managing every high-risk hazard in the workplace, is it unreasonable to expect that the organisation would have some overarching view about whether the JHA is achieving its purpose?
They agree it is not unreasonable, but such a view does not exist.
I think the same question could be asked of every other potentially crucial safety management system including contractor safety management, training and competence, supervision, risk assessments and so on. If we look again to the comments in the Pike River Royal Commission, we can see how important these system elements are:
“Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands. Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that health and safety management was not taken seriously enough at Pike.”
But equally, the same question can be asked of high-risk “hazards” – working at heights, fatigue, psychological wellbeing etc.
What is the process to manage the hazard, and does it achieve the purpose it was designed to achieve?
The fact that I have 100% compliance with closing out corrective actions tells me no more about the effectiveness of my crucial systems than the absence of accidents.
The risk with performance measures that are really measures of activity is that they can create an illusion of safety. The fact that we have 100% compliance with JHA training, that a JHA was done every time it was required to be done, or that a supervisor signed off every JHA that was required to be signed off – these are all measures of activity. They do not tell us whether the JHA process has achieved its intended purpose.
So, what might a different type of “assurance” look like?
First, it would make a very conscious decision about the crucial systems or critical risks in the organisation and focus on those. Before I get called out for ignoring everything else: I do not advocate ignoring everything else. By all means, continue to use numerical and similar statistical measures for the bulk of your safety program, but when you want to know that something works – when you want to prove the effectiveness of your crucial systems – make a conscious decision to focus on them.
If I thought that the JHA process was a crucial system, I would want to know how that process was supposed to work. If it is “crucial”, I should understand it to some extent.
I would want a system of reporting that told me whether the process was being managed the way it was supposed to be. And whether it worked. I would like to know, for example:
- How many JHAs were done;
- How many were reviewed;
- How many were checked for technical compliance, and what was the level of technical compliance? Were they done when they were meant to be done, were they completed correctly, and so on?
- How many were checked for “quality”, and what was the quality of the documents like? Did they identify appropriate hazards? Did they identify appropriate controls? Were people working in accordance with the controls?
I would also want to know what triggers were in place to review the quality of the JHA process. Was our documented process a good process? Have we ever reviewed it internally? Do we ever get it reviewed externally? Are there any triggers for us to review our process, and was it reviewed during the reporting period? For example, if we are alerted to a case where an organisation was prosecuted for failing to implement its JHA process, does that cause us to go and do extra checks of our own systems?
We could ask the same questions about our JHA training.
I would want someone to validate the reporting. If I am being told that our JHA process is working well – that it is achieving the purpose it was designed for – I would like someone (from time to time) to validate that. To tell me, “Greg, I have gone and looked at operations and I am comfortable that what you are being told about JHAs is accurate. You can trust that information – and this is why …”.
As part of my personal due diligence, if I thought JHAs were crucial, that is what I would check when I went into the field too. I would validate the reporting for myself.
I would want some red flags – most importantly, I would want a mandatory term of reference requiring the JHA process to be reviewed in every incident investigation: not whether the JHA for the job was a good JHA, but whether our JHA process achieved its purpose in this case, and if not, why not.
If my reporting is telling me that the JHA process is good, but all my incidents are showing that the process did not achieve its intended purpose, then we may have systemic issues that need to be addressed.
I would want to create as many touch points as possible with this crucial system to understand whether it was achieving its intended purpose.
My overarching concern, personally and professionally, is to structure processes to ensure that organisations can prove the effectiveness of their crucial systems. I have had to sit in too many little conference rooms with too many managers whose audits, accreditations, awards and health and safety reports had made them think everything was OK, when they had a dead body to deal with.
I appreciate the attraction of traffic lights and graphs. I understand the desire to find statistical and numerical measures to assure safety.
I just do not think they achieve the outcomes we ascribe to them.
They do not prove the effectiveness of crucial systems.
Greg, I think you are being too harsh on the research report, but I support you absolutely about the need to have an “effective” safety management system. This has underpinned much of my safety advice over the last 8 years or so, and I would propose that OHS professionals should apply an “effectiveness test”, which is the question “does this work?”. Does the control work? Does the communication method work? Does the audit program work?
OHS professionals can make strong arguments for the reduction of business costs by asking (or demanding) to see evidence that what is being done, or what was thought to improve safety, has had the desired effect.
This assesses the effectiveness of various strategies and OHS interventions, but it should also help in procurement. Planning for an effective safety program or product at the procurement phase allows effectiveness to be designed in from the start. It should also filter out those OHS programs that promise a lot but cannot support the promises with evidence – I would put some wellness, mental health and resilience programs in this category.
The Safe Work Australia report may seem a bit out-of-date or off-centre, but it provides data and information that business and many in the OHS profession have been asking for. The unsuitability of lag indicators has been the bane of the OHS profession – businesses and government contracts have insisted on lag indicators, even though the OHS profession has long pointed out their unhelpfulness.
Now Dr O’Neill and Karen Wolfe have provided those new measures with the authority and endorsement of Safe Work Australia. The report is part of the solution and your article is another important part, but the journey/struggle continues.
The difficulty I have, Kevin – and I do not want to belittle the report – is that it creates an impression that the indicators discussed can somehow give a level of assurance, which they simply cannot. Given how long it has taken the safety industry to move beyond injury rates, despite decades of contrary evidence, I think we are setting ourselves up for another chapter of equally unhelpful reporting. I am more than happy to be proven wrong, but I do not see the evidence, and everything in my experience runs counter to what is being proposed.
Good and accurate overview, Greg. The safety profession has a long way to go to understand and unlearn the fallacies of the past.
The fascination with numbers and traditional management principles, which comes from various disciplines associated with accounting, engineering and many others, does not do any favours to the management of occupational health and safety.
Management of HSE risk is a different beast and requires different thinking, as it is vastly different from managing financial risk. This is the point which really needs to be emphasised and understood by directors and managers. We need to stop thinking about numbers and indicators and start thinking about people, leadership, operational decision making and proactive management of risks. If we take care of this as a business input, the performance will improve. So how do we measure leadership, balanced operational decision making and the utilisation of people as a solution rather than as a source of problems?
The problems we have are many. Firstly, we have people running organisations who are looking for data to inform them of the state of culture and the management of risks, mostly because this is the only way they can understand it. Can data measure culture, or climate as described in the report? Many safety practitioners would disagree. Furthermore, individual perceptions are not descriptive of a safety ‘climate’ but rather of the safety culture itself. Culture is really not about values, assumptions and beliefs, but rather about collective practices in operational decision making – a set of visible actions, and actions which the board clearly needs to be aware of when seeking assurance on the management of risks. The focus in this particular space between the board and management should be the same, contrary to the report findings.

Decisions themselves do not produce safe, healthy and productive work. Senior management practices do. If there is one organisational KPI which needs to be included in the management of risk, it is the observation of those practices at the management level, by the board members. We have invested countless millions over the years in observing workers via various BBS programs, but which organisation has a system where the board observes the safety-related practices of senior management? What is the mechanism for that? Board reports with TRIFR and LTIFR?
There are also two dimensions to the due diligence concept. The first is strictly related to the legislation, and this is well covered. But what about the moral and ethical aspects of due diligence? How are they being met in practice, and what do they mean? We need to think beyond compliance, legislation and the Corporations Act, as we are dealing with people’s lives. The legal framework and practices in this space are often dehumanising. We need to move from thinking of a ‘duty’ under the Act towards a duty to a fellow human, first and foremost. There is a big difference there, and sadly the gap is not getting smaller. It is disappointing that safety material, including this report, does not emphasise those points. How do the suppression of, and limitations placed on, internal accident investigations in some organisations line up with the concept of due diligence and ethics?
Organisational maturity is a complex subject; it is not about the data an organisation monitors but rather about how it understands its people, along with numerous other practices. Here are some of them:

The organisational ‘risk picture’ comes from trying to understand uncertainty, and the most effective method for this is through evaluation of critical controls, consultation and group projection of possible scenarios. Hazard identification is only the start of this process, and audits and reviews of past incidents are useful but very limited and, in some cases, a completely misleading indicator.
I think that, overall, the report is a good resource, provided that people – especially directors, managers and safety professionals – understand its limitations and relatively narrow envelope.
Interesting. I have just read a JHA of 169 pages. Absolute rubbish, but accepted by so-called safety professionals. We have a problem!