Just following on from the feedback from my last post about process v outcome in safety management, below is a short video I did a little while ago that explains some of the concepts further.
My social media feeds have been abuzz recently following the release of Safe Work Australia’s report, Measuring and Reporting on Work Health & Safety. In part (or perhaps wholly) it is my fault for suggesting the report focussed on activity over assurance and could be problematic in that regard. (see for example LinkedIn, Measuring and Reporting on Work Health & Safety, Everything is Green: The delusion of health and safety reporting).
In several comments and emails, I have been asked to provide some “practical” examples. While it is difficult to provide something that will satisfy everyone, below I offer a few questions that might be useful to interrogate the efficacy of health and safety reporting in your organisation.
I would preface the examples below with an observation on due diligence.
Despite what several commentators and marketing campaigns might have you believe, due diligence cannot be satisfied with a checklist, or by attending a WHS training session. The concept of due diligence existed long before WHS legislation, and it has been examined by courts and tribunals in many areas of business. One of the underpinning concepts of due diligence is “independent thought”.
It is incumbent on an individual who is charged with exercising due diligence to exercise a level of independence to understand the “thing” they are required to be diligent about. If that thing is safety, due diligence requires more than passively accepting a monthly WHS report. Due diligence requires independent thought and challenge to understand what you need to know about health and safety and whether the report is informing you about what you need to know.
So, in the spirit of that inquiry, what questions might you ask?
What is the purpose of health and safety reporting?
It might seem trite, but I think it is a legitimate question to start with. After all, if we do not start with a purpose, how do we judge effectiveness?
To many, the purpose of health and safety reporting might seem obvious, but if the history of workplace health and safety has taught us nothing else in the last 30 years, it has taught us about the dangers of assumptions. Do not assume to know the purpose of anything in health and safety – actually know the purpose and test against that purpose.
In many organisations, health and safety reporting is sold as a legal requirement, so in keeping with that theme, perhaps the purpose of health and safety reporting might be:
To demonstrate the extent to which our health and safety risks are managed so far as reasonably practicable.
But before the comments start flowing about legal expectations being our minimum standards (sigh!), perhaps we can agree on something like:
To demonstrate the extent to which our health and safety risks are managed.
For those of you who aspire to “zero”, I will leave it to you to come up with your own purpose statement for health and safety reporting. Good luck.
What is the purpose and relevance of an element of health and safety reporting?
Health and safety reports might be filled with all sorts of statements and data. But what purpose do they serve?
A very popular health and safety reporting metric is the number (or percentage) of corrective actions closed out following an incident investigation.
On its face, that statistic is nothing more than a measure of activity – how many things have been done against how many things should have been done. On its face, and at its highest, it might be a measure of “operating discipline” – we are good at doing the things we said we would.
But if the purpose of health and safety reporting is to demonstrate the extent to which our health and safety risks are managed, it does not seem to add much value at all.
Another way to think about a statistical set of action items being closed out is to consider them as an indicator of the effectiveness of incident investigations. After all, the quality of incident investigations is very important to the overall quality of health and safety management, and something that inquiries are likely to look at in the event of an accident (See for example Everything is Green: The delusion of health and safety reporting).
Perhaps if people had spent more time asking this question about injury rate data over the past 25 years, it would not have pride of place in safety management today.
What assumption do we have to make if an element of health and safety reporting is going to have value?
If we argue that the number (or percentage) of corrective actions closed out following an incident investigation tells us something about the quality of incident investigations, what assumptions do we have to make?
If 100% of corrective actions from incident investigations have been closed out, and I have a sense of comfort from that, I am making several assumptions. I am making assumptions:
- that the incidents were reported and investigated in the first place;
- that the investigations were of sufficient quality to identify the real causes;
- that the corrective actions actually address those causes; and
- that “closed out” means the actions were implemented and effective, not merely marked complete in a system.
None of these issues are revealed by the number (or percentage) of corrective actions closed out following an incident investigation.
Indeed, a health and safety report could show 100% of corrective actions from incident investigations have been closed out without any of the assumptions above being true.
And if these assumptions are not valid, and if a major accident happens, and if it is found that incidents have never been properly investigated, how can it be said that an organisation and its management was serious about health and safety and exercising due diligence?
I have always believed, at its core, health and safety management is about controlling health and safety hazards.
To some extent, I do not care how organisations say they manage health and safety – safety 1, safety 2, safety differently, visible felt leadership, rules, procedures, prescription, discretion, people are the problem, people are the solution etc., etc., etc. – prove to me that it works. Prove to me that what you do controls the health and safety hazards in your business.
If two people die in an electrical incident at your workplace, nobody cares what your last safety culture survey reveals. You need to demonstrate how the risk of electrocution was managed in your organisation, and whether it was managed effectively.
No one cares what your TRIFR is, no one cares how many action items have been closed out, no one cares how many safety interactions your managers have, no one cares how many hazards have been reported, no one cares how many pre-start meetings you have conducted ….
The relevant issue is whether health and safety hazards have been effectively managed.
The things we do in the name of health and safety only matter to the extent that they have a role to play in managing health and safety hazards.
If the number of action items closed out after an incident investigation is important to how hazards are managed, we should be able to explain how and demonstrate the relationship.
Health and safety reporting only matters if it gives us an insight into how well we manage health and safety.
What does your health and safety reporting really tell you?
I approach this article with some trepidation.
I was recently sent a copy of Safe Work Australia’s report, Measuring and Reporting on Work Health & Safety, and subsequently saw a post on LinkedIn dealing with the same. I made some observations on the report in response to the original post which drew the ire of some commentators (although I may be overstating it and I apologise in advance if I have), but I did promise a fuller response, and in the spirit of a heartfelt desire to contribute to the improvement of health and safety in Australia – here it is.
I want to start by saying, that I have the utmost respect for the authors of the report and nothing is intended to diminish the work they have produced. I also accept that I am writing from a perspective heavily influenced by my engagement with health and safety through the legal process.
I also need to emphasise that I am not dismissing what is said in the report, nor saying that some of the structures and processes proposed by the report are not valid and valuable. But I do think the emphasis in the report on numerical and graphical information has the potential to blind organisations to the effectiveness of crucial systems.
I also want to say that I have witnessed over many years – and many fatalities – organisations that can point to health and safety accreditations, health and safety awards, good personal injury rate data, good audit scores and “traffic lights” all in the green. At the same time, a serious accident or workplace fatality exposes that the same “good” safety management systems are riddled with systemic failure – long-term systemic departures from the requirements of the system that had not been picked up by any of the health and safety measures or performance indicators.
I am not sure how many ways I can express my frustration when executive leadership hold a sincere belief that they have excellent safety management systems in place, only to realise that those systems do not even begin to stand up to the level of scrutiny they come under in a serious legal process.
In my view, there is a clarity to health and safety assurance that has been borne out in every major accident inquiry, a clarity that was overlooked by the drafters of WHS legislation and a clarity which is all too often overlooked when it comes to developing assurance programs. With the greatest respect possible to the authors of this report, I fear this has been overlooked again.
In my view, the report perpetuates activity over assurance, and reinforces that assumptions can be drawn from the measure of activity when those assumptions are simply not valid.
Before I expand on these issues, I want to draw attention to another point in the report. At page 38 the report states:
“Each injury represents a breach of the duty to ensure WHS”
To the extent that this comment is meant to represent in some way the “legal” duty, I must take issue with it. There is no duty to prevent all injuries, and injury does not represent, in and of itself, a breach of any duty to “ensure WHS”. The Full Court of the Western Australian Supreme Court made this clear in Laing O’Rourke (BMC) Pty Ltd v Kiwin  WASCA 117 , citing with approval the Victorian decision, Holmes v RE Spence & Co Pty Ltd (1992) 5 VIR 119, 123 – 124:
“The Act does not require employers to ensure that accidents never happen. It requires them to take such steps as are practicable to provide and maintain a safe working environment.”
But to return to the main point of this article.
In my view, the objects of health and safety assurance can best be understood from comments of the Pike River Royal Commission:
“The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries. … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.”
I have written about this recently, and do not want to repeat those observations again (See: Everything is Green: The delusion of health and safety reporting), so let me try and explain this in another way.
Whenever I run obligations training for supervisors and managers, we inevitably come to the question of JHAs – and I am assuming that readers will be familiar with that “tool”, so I will not explain it further.
I then ask a question about how important people think the JHA is. On a scale of 1 to 10, with 1 being the least important and 10 being the most, how important is the JHA?
Inevitably, the group settles on a score of somewhere between 8 and 10. They all agree that the JHA is “critically important” to managing health and safety risk in their business. They all agree that every high hazard activity they undertake requires a JHA.
I then ask, what is the purpose of the JHA? Almost universally, groups agree that the purpose of the JHA is something like: to identify the hazards in a task and ensure that they are controlled before the work starts.
So, my question is, if the JHA is a “crucial system” or “critically important” and a key tool for managing every high-risk hazard in the workplace, is it unreasonable to expect that the organisation would have some overarching view about whether the JHA is achieving its purpose?
They agree it is not unreasonable, but such a view does not exist.
I think the same question could be asked of every other potentially crucial safety management system including contractor safety management, training and competence, supervision, risk assessments and so on. If we look again to the comments in the Pike River Royal Commission, we can see how important these system elements are:
“Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands. Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that health and safety management was not taken seriously enough at Pike.”
But equally, the same question can be asked of high-risk “hazards” – working at heights, fatigue, psychological wellbeing etc.
What is the process to manage the hazard, and does it achieve the purpose it was designed to achieve?
The fact that I have 100% compliance with closing out corrective actions tells me no more about the effectiveness of my crucial systems than the absence of accidents.
The risk of performance measures that are really measures of activity is that they can create an illusion of safety. The fact that we have 100% compliance with JHA training, a JHA was done every time it was required to be done, or that a supervisor signed off every JHA that was required to be signed off – these are all measures of activity; they do not tell us whether the JHA process has achieved its intended purpose.
So, what might a different type of “assurance” look like?
First, it would make a very conscious decision about the crucial systems or critical risks in the organisation and focus on those. Before I get called out for ignoring everything else, I do not advocate ignoring everything else – by all means, continue to use numerical and similar statistical measures for the bulk of your safety management, but when you want to know that something works – you want to prove the effectiveness of your crucial systems – make a conscious decision to focus on them.
If I thought that the JHA process was a crucial system, I would want to know how that process was supposed to work. If it is “crucial”, I should understand it to some extent.
I would want a system of reporting that told me whether the process was being managed the way it was supposed to be. And whether it worked. I would like to know, for example:
- whether a JHA was done every time our process required one;
- whether the JHAs being produced met the standard our process demands; and
- whether anyone had reviewed a sample of completed JHAs during the reporting period to test whether they achieved their purpose.
I would also want to know what triggers were in place to review the quality of the JHA process – was our documented process a good process? Have we ever reviewed it internally? Do we ever get it reviewed externally? Are there any triggers for us to review our process and was it reviewed during the reporting period – if we get alerted to a case where an organisation was prosecuted for failing to implement its JHA process, does that cause us to go and do extra checks of our systems?
We could ask the same questions about our JHA training.
I would want someone to validate the reporting. If I am being told that our JHA process is working well – that it is achieving the purpose it was designed for – I would like someone (from time to time) to validate that. To tell me, “Greg, I have gone and looked at operations and I am comfortable that what you are being told about JHAs is accurate. You can trust that information – and this is why …”.
As part of my personal due diligence, if I thought JHAs were crucial, when I went into the field, that is what I would check too. I would validate the reporting for myself.
I would want some red flags – most importantly, I would want a mandatory term of reference in every investigation requiring the JHA process to be reviewed for every incident – not whether the JHA for the job was a good JHA, but whether our JHA process achieved its purpose in this case, and if not, why not.
If my reporting is telling me that the JHA process is good, but all my incidents are showing that the process did not achieve its intended purpose, then we may have systemic issues that need to be addressed.
I would want to create as many touch points as possible with this crucial system to understand if it was achieving the purpose it was intended to achieve.
My overarching concern, personally and professionally, is to structure processes to ensure that organisations can prove the effectiveness of their crucial systems. I have had to sit in too many little conference rooms, with too many managers who have audits, accreditations, awards and health and safety reports that made them think everything was OK when they have a dead body to deal with.
I appreciate the attraction of traffic lights and graphs. I understand the desire to find statistical and numerical measures to assure safety.
I just do not think they achieve the outcomes we ascribe to them.
They do not prove the effectiveness of crucial systems.
For anyone trying to work their way through due diligence in the context of occupational safety and health, I have put together a short eight-and-a-half-minute primer with a few ideas.
I hope you find it useful.
On 6 April 2016 I will be facilitating a due diligence masterclass in conjunction with IFAP from 8.00am until 3.00pm at the Esplanade Hotel in Fremantle, Western Australia.
The program is suitable for all industries and sizes of business.
Drawing on legal precedents and major accident investigations from all around the world, I will consider due diligence in the context of health and safety legislation, including harmonised WHS legislation and “accessorial liability” provisions in Western Australia, Victoria and the offshore oil and gas industry.
The program will focus on the practical and legal expectations on managers to control health and safety risks in their business, and what day-to-day application of those principles might look like.
Places are limited and the program is already 50% subscribed.
I am not a fan of the language of “zero“, either as an aspiration or as a stated goal. It has never sat well with me, and seems so disconnected from day-to-day reality in both society and the workplace that people cannot help but become disconnected from, or dismissive of, the message behind the term. My view has always been that the language of zero actually often undermines the objectives it is trying to achieve (see this case for example).
If you are interested in this topic (and if you are involved in safety you should be) there are far more passionate, learned and articulate critics of the language of zero than me – See for example, anything by Dr. Robert Long.
However, recently I have been asked to do quite a bit of work around psychological harm in the context of occupational safety and health. In particular, how the legal risk management of psychological harm in the context of safety and health might differ from the Human Resources (HR)/employee relations context.
WHS legislation around Australia expressly includes “psychological” health within its remit and the Western Australian Department of Mines and Petroleum has acknowledged that they regard “health” as including “psychological” health, even though it is not expressly described in the State’s mining legislation.
What has emerged, at least to my mind, is the extent to which our policy, procedure and policing approach to safety and health, far from alleviating psychological harm in the workplace, might be contributing to it.
Safety management might be part of the problem.
In an ongoing Western Australian inquiry into the possible impact of fly-in/fly-out work on “mental health”, the Australian Medical Association identified that the way health and safety is managed can contribute to a “distinct sense of entrapment” (page 43):
The AMA also expressed its concerns about this issue, noting that “[o]nerous rules, safety procedures and focus on achievement of production levels have been shown to create a distinct sense of entrapment in FIFO workers.”
The inquiry drew, in some measure, on an earlier report, the Lifeline WA FIFO/DIDO Mental Health Research Report 2013, which also appeared to note the adverse impact of safety and health management on psychological well-being. For example, “[a]dhering to on-site safety rules” was identified as a workplace stress (page 77). Interestingly, the Lifeline report noted a sense of “intimidation” brought on by the number of rules and regulations associated with work on a mine, and:
“This sense of intimidation was further mirrored in the outcomes of mining safety regulations which in theory were designed to care for workers but in practice led to inflexible regulation over genuine safety concerns” (page 81).
Examples from the Lifeline report include:
… a participant recalled a situation in which a worker handling heavy loads required an adhesive bandage but was unable to ask someone to get them for him because he had to fill out an accident report first (which he was unable to do mid-job); hence he had to carry on working without attending to his cuts. Alternatively, another example of the application of safety rules in an inflexible manner was illustrated when a group of workers were reprimanded for not wearing safety glasses on a 40 degree day even though they could not see from them due to excessive sweating. Hence, safety rules themselves were accepted as a necessary part of work but their implementation in an inflexible uniform manner created stress as workers felt their impact hindered their ability to conduct basic work tasks safely and/or without attracting rebuke. Hence, site rules and regulations could translate into arbitrary and punitive forms of punishment, which undermined participants’ ability to fulfil jobs to their satisfaction and left them feeling insecure with their positions (page 81).
It seems, then, that we need to think beyond our own perceptions of what might contribute to workplace stress and understand the impact that our efforts to manage health and safety might actually be having. Again, as the Lifeline research noted:
… although past research has shown that site conditions and cultures, such as isolation and excessive drinking are problematic, this research shows that the regimented nature of working and living on-site also takes a toll on mental health and wellbeing. From the responses of many participants, it was apparent that following site safety rules (either under pressure of internal monitoring or in the perceived absence of adequate safety precautions by co-workers and supervisors) was a significant stressor. Participants felt unable to apply self-perceived common-sense judgments and also reported feeling vulnerable to intensive scrutinising, intimidation and threats of job loss (page 82) [my emphasis added].
The common criticisms of the language of “zero” seem to me to go directly to the factors that have been identified in this research as contributing to psychological harm in the workplace. The pressure to comply with rules, fear about reporting incidents, the inability to exercise individual judgement on how to manage risk and the inflexible application of process are all side-effects of the language of “zero“.
Up until this point the debate around “zero harm” and its utility (or otherwise) as the headline for safety management has been relatively benign. Apart from the advocacy of people like Dr Robert Long, “zero harm” seems to have been perceived as a relatively neutral strategy, insofar as people believe that it “does no harm“, and “what’s the alternative?”.
It seems, in fact, that much harm may be perpetuated in the name of “zero“, and at some point the behaviours that it drives will be found to be unlawful.
It is also going to be interesting to see how health and safety regulators, often the champions of “zero harm”, oversee its potential impacts on psychological harm in the workplace. Indeed, it would be very useful to see what risk assessments, research or other measures were taken by regulators prior to introducing “zero harm” style campaigns or messages to understand the potential effects of their interventions, or any subsequent research to understand the potential harm they may have done.
Comcare v Transpacific Industries  FCA 500 is an interesting case that looks at the liability of an employer for the death of a non-employee in a motor vehicle accident. In February 2011 a Transpacific employee driving a garbage collection truck ran into a vehicle, killing the driver. Subsequent investigations revealed that the truck had faulty brakes.
The case provides some very interesting insights into the “illusion of safety” where it appears that, notwithstanding regulator approval and a routine maintenance regime, the high risk of poorly maintained brakes on a garbage truck was not identified.
There is also an interesting point raised in the case about the extent to which an employer should monitor the work of an employee who has been issued a warning for safety related breaches. Should an employer monitor the employee until they are satisfied that they are working in accordance with the safety requirements?
A short video presentation about the case is available here.
You can access a copy of the case here.