Everything is Green: The delusion of health and safety reporting

Over the past 12 months, I have been engaged in a significant amount of health and safety “assurance” work, helping organisations to understand whether the health and safety risks in their business are effectively managed.  Perhaps the most enduring impression from those 12 months is of the misleading and dangerous assumptions people make based on health and safety reports. 

Often, when people look to criticise health and safety reporting, they point to lag indicators such as injury rates.  I do not want to talk about injury rates as a measure of health and safety performance in this article, so I would like to put that issue to bed with this observation from the Pike River Royal Commission: 

“The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries. …  The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.” 

Let’s be clear.  There are no major accident enquiries that have identified personal injury rates as a legitimate measure of the effectiveness of health and safety management.  Personal injury rates are a measure of how many personal injuries have occurred – no more.  Any organisation that assumes personal injury rates are a measure of the effectiveness of their safety management system is misguided.  We have known this for decades. 

The challenge is not managing personal injury rates.  The challenge is proving the effectiveness of our crucial systems for managing health and safety risk.  If our systems are effective in managing health and safety risks, improved safety performance should follow.  However, the reverse does not hold true, and countless major accident enquiries have identified fundamentally flawed safety management systems disguised by good and improving personal injury rates. 

We have seen over time the development of so-called “lead” indicators as a counterpoint to traditional lag indicators.  Lead indicators are supposedly designed to provide insight into the effectiveness of safety management systems. 

But do they? 

Overwhelmingly, lead indicators are nothing more than a measure of activity and the fact that our measures of activity have 100% compliance – they are all green – creates a dangerous illusion of safety. 

A popular “lead” indicator for safety and health is the number of management interactions.  These might be variously described as safety conversations, safe act observations, behavioural observations, safety interactions, management walkarounds and so on.  Inevitably, they show up in a health and safety report as part of a table of lead indicators or a dashboard of “traffic lights”.  These indicators, or traffic lights, are usually coloured red if the indicator has not been met, amber if it has been partially met and green if it has been fully met. 

Other typical indicators might include: 

·    Corrective actions closed;

·    Audits completed;

·    Training completed;

·    Hazards identified; or

·    Pre-start or “Take 5” cards completed.

No doubt there are countless more. 

The difficulty with all of these indicators is that they are measures of “activity”.  Invariably, they tell us whether things have been “done”. 

They tell us nothing about the quality or effectiveness of the activity. 

They tell us nothing about the influence of the activity on safety. 

They make no contribution to proving the effectiveness of our crucial systems. 

One of the phenomena that I have observed about health and safety management over the years is the notion of the “safety paradox”.  The safety paradox supposes that everything we do in the name of health and safety can both improve and undermine safety in the workplace.  A very good example is a frontline risk assessment tool such as the job hazard analysis (JHA). 

The JHA is a ubiquitous frontline risk assessment tool implemented by organisations all over the world.  At its best, it can be an effective mechanism to help frontline workers identify the risks associated with their work and develop suitable controls to manage those risks. 

Conversely, it also can disengage the workforce from the safety management message of the organisation and drive a complete “us and them” mentality. 

Research suggests that significant numbers of workers see frontline risk assessment tools like the JHA as a backside covering exercise, designed to protect managers from legal risk in the event of an accident.  In most workplaces, this idea does not come as a complete shock, and there is a general acceptance of the limits or weaknesses inherent in the JHA.  However, the use of the JHA continues to roll on without analysis, without thought, without critical thinking and certainly without any reporting at a managerial level about its effectiveness. 

I have never seen a health and safety report that provides information to the executive about the effectiveness of the JHA system in the business.  Given that the JHA is one of the most critical tools used by organisations for the management of high-risk activities (including working at heights, confined space entry, lifting operations and so on) I find this extraordinary. 

I am not going to advocate whether organisations should use a JHA.  But I don’t think there could be any reasonable argument for an organisation not to know whether their use of the JHA is beneficial to safety or is undermining it. 

What about incident investigations?  Are incident investigations an important system in safety management? 

Whenever I ask this question in training workshops, everybody immediately tells me that incident investigations are very important.  If I ask the question: 

On a scale of 1 to 10, with 1 being unimportant and 10 being critically important, how important are incident investigations? 

Inevitably the answer is 9 or 10. 

But does your system of incident investigation work?  If it is such a crucial system, how do you prove its effectiveness?  This was an issue that the Pike River Royal Commission wanted to understand, and they looked at it as follows: 

“The workers reported many incidents and accidents.  The commission analysed 1083 reports and summarised a selection of 436 in a schedule. …  there were problems with the investigation process … Incidents were never properly investigated.” 

If you are interested, an extract of cross-examination from the Royal Commission asking questions about incident investigations is available here.  It provides an interesting insight into the sorts of issues managers need to address when their safety management systems are being critically analysed. 

Again, I have never seen a health and safety report that provides information to the executive about the effectiveness of an incident investigation process.  Would it be so unreasonable to expect, given the apparent criticality of incident investigations, that once or twice a year somebody would prepare a report for executive management summarising incident investigations and forming a view about their quality and effectiveness? 

Finally, let’s go back and consider management interactions.  As I indicated above, these are a very common lead indicator for safety management, but they are also very limited, often nothing more than a measure of how many interactions have been done.  There is typically no measure or analysis about whether interactions were done well or whether they have added value to safety management. 

It seems universally assumed that management interactions around health and safety are a good thing, but are they? 

What is their purpose, what are they designed to achieve and how do we know that they are achieving that purpose?  Is there a risk that management interactions could be undermining safety in your workplace? 

How do you know that managers are not just wandering around practising random acts of safety, reinforcing unsafe behaviours and generally just pissing everybody off? 

A green traffic light in a health and safety report, indicating that everybody who should have had a management interaction has done one, is misleading and fuels the illusion of safety which underpins so many catastrophic workplace events. 

When was the last time anybody provided health and safety reporting that made any meaningful contribution to proving the effectiveness of crucial systems?  Have any leaders ever received a report showing a detailed analysis of lifting operations over a 10-month period with a formal, concluded view about the effectiveness of the safety management system to control the risks associated with lifting?  What about dropped objects?  What about working at height? 

What about the efficacy of your permit to work system?  Has that ever been analysed and reported on, other than on a case-by-case “reactive” basis following an incident? 

For your next health and safety “reporting” meeting, try this: Scrap your traditional health and safety report, pick a critical risk such as working at heights, and ask your safety manager to provide you with a presentation about whether, or to what extent, the risk has been managed as far as reasonably practicable.

What is your health and safety reporting really telling you, as opposed to the assumptions you choose to make?  Is there any evidence that it is proving the effectiveness of your crucial systems?


All care no responsibility: What is the role of health and safety management?

I was recently involved in some discussion on LinkedIn about the liability of health and safety managers under health and safety legislation (link here).

Several responses seemed to suggest that because health and safety managers had no authority, they should not have any accountability either – that safety managers only provide advice and have no authority over others.

I must say this is a proposition I find extraordinary, but let’s explore it.

I wrote some time ago that safety managers should have the same level of accountability as company officers; after all, they are the architects of the safety management system and should have some idea about how well it is working (Are health and safety managers company officers and should they be?)

Many enquiries around the world have identified that a focus on personal injury rates is not a good indicator of the effectiveness of the health and safety management system, and on occasions, the focus on personal injury rate management can distract an organisation from managing the critical health and safety risks in its business.

Quite rightly, enquiries criticise organisations where the focus on personal injury rates has undermined the effectiveness of their health and safety management system.  But surely the health and safety manager also has an accountability?

While it might be correct to criticise a chief executive officer of such an organisation, it cannot be correct to suggest that the chief executive officer has a greater understanding of personal injury rates and their influence on safety management than the safety manager.

While the safety manager might not have “authority”, surely they have an accountability to advise the organisation about the perils of overreliance on personal injury rates, and to monitor the safety management system to ensure that any reliance on personal injury rates is not undermining safety management.  It must be correct that a safety manager should continue to warn or flag any concerns whenever they believe that the focus on personal injury rates damages effective safety management.

Is a chief executive officer’s culpability for failing to enquire about the effectiveness of personal injury rates as a measure of safety any less than a safety manager’s culpability for failing to advise?

What about the positive obligations of due diligence imposed on company officers under WHS Legislation?  Is the health and safety industry trying to argue that those standards should not apply to health and safety managers?

Under WHS Legislation, due diligence includes taking reasonable steps to achieve several stated outcomes. 

To acquire and keep up-to-date knowledge of work health and safety matters.

To gain an understanding of the nature of the operations of the business or undertaking of the person conducting the business or undertaking and generally of the hazards and risks associated with those operations.

I do not think anybody could suggest that health and safety managers do not need to meet these expectations. And if they fail to meet these expectations, surely, they should be liable in the same way as company officers.

To ensure that the person conducting the business or undertaking has available for use, and uses, appropriate resources and processes to eliminate or minimise risks to health and safety from work carried out as part of the conduct of the business or undertaking.

To ensure that the person conducting the business or undertaking has appropriate processes for receiving and considering information regarding incidents, hazards and risks and responding in a timely way to that information.

To ensure that the person conducting the business or undertaking has, and implements, processes for complying with any duty or obligation of the person conducting the business or undertaking under this Act.

While it may be true that health and safety managers cannot “ensure”, in the sense that they do not authorise capital expenditure or budgets to achieve this outcome, it is true in my view that health and safety managers are responsible for “ensuring”, or at the very least knowing, if:

  • The resources and processes are suitable;
  • The resources and processes are implemented; and
  • The resources and processes are effective to manage the health and safety risks in the business. 

It cannot be right to say that a chief executive officer or another company officer who does not meet these obligations is any more liable than a health and safety manager.

The only possible, reasonable defence that a safety manager could have is that they advised, and continued to advise, executive management on the suitability, implementation and effectiveness of the health and safety management resources and processes, but that the advice was ignored.

If health and safety managers do not have accountability to ensure (or at the very least have an intimate knowledge of) the suitability, implementation and effectiveness of the health and safety management resources and processes, then what is the purpose of having safety managers at all?

To verify the provision and use of the resources and processes referred to in [the above] paragraphs.

It could not possibly be arguable that a health and safety manager has a lesser responsibility than company officers to ensure that the resources and processes for managing health and safety risk in the business are used.

Indeed, I can think of no higher duty placed on a health and safety manager than to provide ongoing verification to the business that the resources and processes to manage its health and safety risks are in place, implemented and effective to control those risks.

Several years ago, I was at a safety conference, and there were three safety managers on stage talking through their latest success and innovation.  When it came time for questions I asked:

Who is responsible for knowing if these things work?

The question seemed to stump the participants, and I got various, unsatisfactory answers ranging from descriptions of reporting processes through to abrogation of responsibilities to “risk owners”.

While everybody in an organisation has responsibilities for safety and health, surely it is the accountability of the health and safety manager, the person engaged to “manage” health and safety, to know if health and safety management actually “works”?

If I want to know if my organisation is managing its health and safety risks as low as reasonably practicable, surely the person best placed to answer that question is the health and safety manager?  If they do not have oversight of the effectiveness of health and safety management in the organisation – when that is what they are employed to do – why should anybody else?

Why should the chief executive officer’s obligation to understand the effectiveness of health and safety management in their organisation be any greater than the health and safety manager’s?

It seems to me that until the health and safety industry is prepared to take ownership of health and safety management and stop hiding behind “line management responsibility” and a lack of “authority”, recognition as a “profession” is a long way away.

Until health and safety managers can clearly articulate how well the health and safety management system works in their organisation, and explain what they rely on to form that view, is the position really worthy of the title manager?


Welcome to the intellectual vacuum that is political comment on WHS

Today (29 October 2016) the ABC had an article on the ongoing coverage of the tragic loss of lives at Dreamworld in Queensland.

I have commented before about the disconnect between the loss of life in this workplace accident and the near weekly loss of life in Australian workplaces that the coverage of this incident highlights. That disconnect was underscored by a picture of the Federal Opposition Leader, Bill Shorten, laying flowers outside Dreamworld. I do not begrudge Mr Shorten the opportunity to express his condolences (or advance his political position depending on your level of cynicism), but I cannot recall too many times political leaders have given similar public displays of solidarity when people die at our construction, mining, agricultural or any other workplaces.

But what has prompted this article is the simplistic, reactive, leaderless response that politicians trot out in the face of these types of events.

The ABC Article reports Queensland Premier Annastacia Palaszczuk as saying:

“It is simply not enough for us to be compliant with our current laws, we need to be sure our laws keep pace with international research and new technologies,”

“The audit will also consider whether existing penalties are sufficient to act as deterrents, and whether these should be strengthened to contain provisions relating to gross negligence causing death.

“Because we all know how important workplace safety is and how important it is to have strong deterrents.

“That’s why Queensland has the best record in Australia at prosecuting employers for negligence – and we are now examining current regulations to see if there are any further measures we can take to discourage unsafe practices.”

The idea that it is not enough “to be compliant with our current laws” is both a nonsense and a failure of policymakers to properly accept the findings of the Robens Report, published in the mid-1970s. The reason our laws cannot keep pace with “international research and new technologies” is that governments continue to insist on producing highly prescriptive suites of regulation, which in most cases are adopted by organisations as the benchmark for “reasonably practicable”.

For most businesses, particularly small and medium-sized businesses, technical compliance with regulation is the high-water mark of safety management – an approach reinforced by the “checkbox” compliance mentality of many regulators.

WHS legislation is a leading example of this failure of policy, in so far as it increased the number of regulations in most of the jurisdictions where it has been implemented.

Flexible, innovative safety management requires a regulatory framework that promotes it, not one that limits or discourages it.  How can a regulator have any credibility when it calls on industry to keep pace with “international research”, while it continues to define safety performance through the publication of lost time injury rates and other lag indicators?

Ms Palaszczuk then adopts the standard “tough on safety” call to arms, without taking the time to recognise the inherent contradictions in what she is saying. She boasts that “Queensland has the best record in Australia at prosecuting employers for negligence”, but hints at tougher penalties still.

If the considerable penalties under the WHS Legislation and the “best record” of prosecuting employers are not a sufficient deterrent, why would “tougher” and “better” be any different?

I have written about these types of matters before, and would just ask that before policymakers go charging off in pursuit of higher penalties and more prosecutions, we stop and take the time to see if this tragedy can provide the opportunity lost during harmonisation and introduction of WHS legislation.

That lost opportunity was a chance to stop and consider the way that we regulate and manage health and safety in this country.

And can we start with the question of whether criminalising health and safety breaches and managing safety through a culture of fear driven by high fines and penalties is the best way to achieve the safety outcomes we want?

What is the evidence proving high penalties and prosecutions improve safety outcomes?

Are there ways that we can regulate safety to provide significant deterrents and consequences for people who disregard health and safety in the workplace, but at the same time foster a culture of openness, sharing and a willingness to learn and improve?

Can we redirect the time, money, expertise and resources that are poured into enforcement, prosecution and defending legal proceedings in a way that adds genuine value as opposed to headline value?

This is a chance to stop and think. This is a chance for the health and safety industry to stand up, intervene and take a leadership role in health and safety.

If we do not, the intellectual vacuum will continue to be filled by the historical approaches that have brought us to where we are today.

Reflections on Safety: Reasonably Practicable

In August 2016, I wrote a WHS Update about the High Court decision, Deal v Father Pius Kodakkathanath [2016] HCA 31, which considered the legal test of Reasonably Practicable in the context of Australian health and safety legislation. Shortly after that, one of my connections on LinkedIn posted an article about Reasonably Practicable. The article offered an engineering perspective on “As Low as Reasonably Practicable” (ALARP), stating:

… recent developments in Australian workplace health and safety law place proactive responsibilities on senior personnel in organisations, so they must be fully informed to make proper decisions

This sentiment seemed similar to an earlier engineering publication which argued that ALARP and “So Far as is Reasonably Practicable” (SFARP) were different, and that this difference was, in part at least, a result of “harmonised” WHS legislation.

In both cases, I believed the articles were misaligned with the legal construct of Reasonably Practicable and misrepresented that there had been a change in the legal test of Reasonably Practicable prompted by changes to WHS legislation.

This background caused me to reflect again on the notion of Reasonably Practicable and what it means in the context of legal obligations for health and safety.

To start, I do take issue with the suggestion that changes to WHS legislation have resulted in a shift in what Reasonably Practicable means. The basis of this idea seems to be an apparent change in terminology from ALARP to SFARP.

The term SFARP was in place in health and safety legislation before the introduction of WHS legislation, and jurisdictions that have not adopted WHS legislation still use the term. For example, the primary obligations under the Victorian Occupational Health and Safety Act 2004 are set out in section 20, which states:

To avoid doubt, a duty imposed on a person by this Part or the regulations to ensure, so far as is reasonably practicable, health and safety requires the person …

Indeed, the architects of WHS legislation[1] specifically retained the term Reasonably Practicable because it was a common and well-understood term in the context of Australian health and safety legislation:

5.51 Reasonably practicable is currently defined or explained in a number of jurisdictions. The definitions are generally consistent, with some containing more matters to be considered than others. The definitions are consistent with the long settled interpretation by courts, in Australia and elsewhere.

5.52 The provision of the Vic Act relating to reasonably practicable was often referred to in submissions (including those of governments) and consultations as either a preferred approach or a basis for a definition of reasonably practicable.

5.53 We recommend that a definition or section explaining the application of reasonably practicable be modelled on the Victorian provision. We consider that, with some modification, it most closely conforms to what would be suitable for the model Act.  [My emphasis added]

In my view, it is unarguable that the concept of Reasonably Practicable has been well-settled in Australian law for a considerable period, and the concept has not changed with the introduction of WHS legislation.

If we accept that Reasonably Practicable has been consistently applied in Australia for some time, the next question is, what does it mean?

Reasonably Practicable is a defined term in most health and safety legislation in Australia.  Section 20(2) of the Victorian Occupational Health and Safety Act 2004, for example, states:

(2) To avoid doubt, for the purposes of this Part and the regulations, regard must be had to the following matters in determining what is (or was at a particular time) reasonably practicable in relation to ensuring health and safety—

 (a) the likelihood of the hazard or risk concerned eventuating; 

 (b) the degree of harm that would result if the hazard or risk eventuated; 

 (c) what the person concerned knows, or ought reasonably to know, about the hazard or risk and any ways of eliminating or reducing the hazard or risk; 

 (d) the availability and suitability of ways to eliminate or reduce the hazard or risk; 

 (e) the cost of eliminating or reducing the hazard or risk.

In the High Court decision, Slivak v Lurgi (Australia) Pty Ltd [2001] HCA 6, Justice Gaudron described Reasonably Practicable as follows:

The words “reasonably practicable” have, somewhat surprisingly, been the subject of much judicial consideration. It is surprising because the words “reasonably practicable” are ordinary words bearing their ordinary meaning. And the question whether a measure is or is not reasonably practicable is one which requires no more than the making of a value judgment in the light of all the facts. Nevertheless, three general propositions are to be discerned from the decided cases:

  • the phrase “reasonably practicable” means something narrower than “physically possible” or “feasible”;
  • what is “reasonably practicable” is to be judged on the basis of what was known at the relevant time;
  • to determine what is “reasonably practicable” it is necessary to balance the likelihood of the risk occurring against the cost, time and trouble necessary to avert that risk.[2] [my emphasis added]

Another High Court decision, Baiada Poultry Pty Ltd v The Queen [2012] HCA 14, emphasised similar ideas.

The case concerned the death of a subcontracted worker during forklift operations.  Baiada was the Principal who had engaged the various contractors to perform the operations, and in an earlier decision the court had concluded:

it was entirely practicable for [Baiada] to require contractors to put loading and unloading safety measures in place and to check whether those safety measures were being observed from time to time ((2011) 203 IR 396 at 410)

On appeal, the High Court framed this finding differently.  They observed:

As the reasons of the majority in the Court of Appeal reveal by their reference to Baiada checking compliance with directions it gave to [the contractors], the question presented by the statutory duty “so far as is reasonably practicable” to provide and maintain a safe working environment could not be determined by reference only to Baiada having a legal right to issue instructions to its subcontractors. Showing that Baiada had the legal right to issue instructions showed only that it was possible for Baiada to take that step. It did not show that this was a step that was reasonably practicable to achieve the relevant result of providing and maintaining a safe working environment. That question required consideration not only of what steps Baiada could have taken to secure compliance but also, and critically, whether Baiada’s obligation “so far as is reasonably practicable” to provide and maintain a safe working environment obliged it: (a) to give safety instructions to its (apparently skilled and experienced) subcontractors; (b) to check whether its instructions were followed; (c) to take some step to require compliance with its instructions; or (d) to do some combination of these things or even something altogether different. These were questions which the jury would have had to decide in light of all of the evidence that had been given at trial about how the work of catching, caging, loading and transporting the chickens was done.[3] [my emphasis added]

In light of these and other decided cases, it is possible to form a practical test of what is Reasonably Practicable.  In my view, it is necessary for an organisation to demonstrate that they:

  • Have “Proper Systems” to manage the health and safety risks in their business; and
  • Exercise “Adequate Supervision” to ensure that the Proper Systems are implemented and effective to manage the risks.

What constitutes Proper Systems and Adequate Supervision is a judgement call that needs to be determined with regard to the risks.  It requires an organisation to balance the risk against the cost, time and trouble of managing it.[4]

It is also worth noting at this point that Reasonably Practicable is, generally speaking, an organisational obligation.  It is not an individual obligation,[5] and in particular, it is not an employee obligation.

I often see, when working with clients, safety documents that employees are required to sign stating that risks have been controlled to “ALARP”.  This is not the employee’s responsibility, and the extent to which an employee does or does not control the risk to ALARP does not affect an employer’s obligations.

In broad terms, it is the organisation’s (PCBU or employer) obligation to manage risks as low as, or so far as is, Reasonably Practicable.  The employee obligation is to do everything “reasonable”.  This includes complying with the organisation’s systems.

It is the organisation’s obligation to identify the relevant health and safety risks and define how they will be controlled, ensuring that the level of control is “Reasonably Practicable”.  It is the employee’s obligation to comply with the organisation’s requirements.

So, what might Reasonably Practicable look like in practice?

I recently defended a case that involved a worker who was seriously injured at work.  Although the injury did not result from a fall from height, the prosecution case against my client was based on failure to meet its obligations about working at heights.

My client had, on any measure, a Proper System for managing the risk of work at heights.  They had a documented working at height Standard and Procedure both of which were consistent with industry best practice and regulator guidance material.  All work at height above 1.8 m required a permit to work and a JHA.  The documented procedures prescribed appropriate levels of supervision and training.

In the three years before the relevant incident, my client had not had a working at height incident of any sort; indeed, they had not had a health and safety incident of any kind.  Based on all of our investigations in preparing the case, there was nothing to suggest that this incident information was not legitimate.

The activity which was being performed at the time of the incident was conducted routinely, at least weekly, at the workplace.

In looking to construct a Reasonably Practicable argument to defend the case, what would we be trying to do?  In essence, I would be trying to establish that the incident was an aberration, a “one-off departure” from an otherwise well understood, consistently applied system of work that was wholly appropriate to manage the risk of working at heights.

In practice, that would mean:

  • Producing statements from all of the workers who performed this task describing how it was performed in practice, and demonstrating that their understanding of the way the work was performed in practice was consistent with the requirements of the organisation’s documented systems;
  • Producing completed documentation from when the job had been performed previously, and demonstrating that the documentation was consistent with the organisation’s requirements, and completed correctly;
  • Producing statements from supervisors who approved the documentation and oversaw the performance of the work and demonstrating that their understanding of the way that the work was performed in practice was consistent with the requirements of the organisation’s documented systems;
  • Producing completed documentation from when the injured worker had performed the work previously and demonstrating that the documentation was consistent with the organisation’s requirements and completed correctly.

There may be other information that we would seek, but in broad terms, the information outlined above helps to build a case that there was a proper system that was effectively implemented and that:

  • All the workers understood the system,
  • All the workers understood how it ought to have been applied, and
  • It was applied in practice.

What happened?

Rather than be able to demonstrate that the incident was a one-off departure from an otherwise effective system, the evidence revealed a complete systemic failure.  While the documented system was a Proper System and complied with all relevant industry standards and guidelines, it was not implemented in practice.

Most compelling was the fact that, despite this being a weekly task, there was not a single instance of the working at height Standard and Procedure being complied with.  We could not produce a single example where either the injured worker, or indeed any worker who had performed the task, had done so under an approved permit to work with an authorised JHA.

All of the workers gave evidence that the primary risk control tool on site was a Take 5.  The Take 5 is a preliminary risk assessment tool, and a JHA was only required if that risk assessment scored 22 or above.  The task in question was always assessed as 21.  The requirement for a JHA, in the minds of the workforce, was never triggered, and none of them understood the requirements of the Standard or Procedure.
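The trigger logic the workers described reduces to a single comparison, which makes the failure mode easy to see: a task habitually scored even one point below the threshold means the documented controls are never invoked.  This is an illustrative sketch only; the 22-point trigger and the score of 21 come from the case described above, but the function and variable names are hypothetical.

```python
# Hypothetical sketch of the Take 5 trigger logic described above.
# Only the threshold (22) and the habitual score (21) come from the case.

JHA_TRIGGER_SCORE = 22  # documented threshold: a Take 5 score of 22 or above requires a JHA


def jha_required(take5_score: int) -> bool:
    """A JHA (and permit to work) is only triggered at or above the threshold."""
    return take5_score >= JHA_TRIGGER_SCORE


# The task in question was always assessed at 21, so the documented
# Standard and Procedure were never triggered in practice:
assert jha_required(21) is False
assert jha_required(22) is True
```

The point is not the code but the structure: where a numeric score is the sole gate to a critical control, assurance needs to test how the scores are actually being assigned, not just whether the gate exists on paper.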

To me, this case is entirely indicative of the fundamental failure of Reasonably Practicable in most workplaces.  In the vast majority of cases that I have been involved in over the last 25 years, organisations have systems that would qualify as Proper Systems.  They are appropriate to manage the risks that they were designed to manage.

Equally, organisations cannot demonstrate Adequate Supervision.  While there may be audits, inspections, checking and checklists – there is no targeted process specifically designed to test and understand whether the systems in place to manage health and safety risks in the business are in fact implemented and are effective to manage those risks.

In my experience, most organisations spend far too much time trying to devise the “perfect” Proper System.  We spend far too little time understanding what needs to be done to confirm that the System works, and then leading the confirmation process.

Reasonably Practicable has not changed.

Reasonably Practicable is not a numeric equation.

Reasonably Practicable changes over time.

Reasonably Practicable is an intellectual exercise and a judgement call to decide how an organisation will manage the health and safety risks in its business.

Reasonably Practicable requires an organisation to demonstrate that they:

  • Have “Proper Systems” to manage the health and safety risks in their business; and
  • Exercise “Adequate Supervision” to ensure that the Proper Systems are implemented and effective to manage the risks.

What constitutes Proper Systems and Adequate Supervision is a judgement call that needs to be determined with regard to the risks.  It requires an organisation to balance the risk against the cost, time and trouble of managing it.

[1] See the National Review into Model Occupational Health and Safety Laws: First Report, October 2008.

[2] Slivak v Lurgi (Australia) Pty Ltd [2001] HCA 6 [53].

[3] Baiada Poultry Pty Ltd v The Queen [2012] HCA 14 [33].

[4] See also: Safe Work NSW v Wollongong Glass P/L [2016] NSWDC 58 and Collins v State Rail Authority of New South Wales (1986) 5 NSWLR 209.

[5] There are some exceptions to this where an individual, usually a manager or statutory officeholder will be required to undertake some action that is Reasonably Practicable.

This article is a general discussion about Reasonably Practicable and related concepts.  It should not be relied on, and is not intended to be, specific legal advice.

Systems as Imagined v Systems in Practice

The recent NSW Supreme Court decision, Attorney General of New South Wales v Tho Services Limited (in liquidation) (ACN 000 263 678) [2016] NSWCCA 221 is another in a long line of decisions that highlight the disconnect between safety management systems as they are documented, and what occurs in practice.

Documented safety processes are important.  They provide guidance on how safety is managed and evidence that an organisation is meeting its obligations.  However, where an accident reveals long-term, systemic non-compliance with obvious safety expectations, documented safety processes do not provide a defence; often they do not provide mitigation, and in cases such as this they are an aggravating circumstance.  As the Court noted:

The vast range of induction and supervising protocols adopted by the respondent or in force at its premises serves not to relieve the respondent of its responsibility for safety but on the contrary powerfully reinforces the extent to which the respondent failed to put them into practical effect.

For documented safety processes to add value they must:

  1. Be consistent with the organisation’s risks and obligations;
  2. Be completed correctly; and
  3. Reflect what actually happens in practice.

All too often, documented safety management systems are one of the biggest contributors to the illusion of safety: the gap between the management of health and safety risk as we imagine it and what actually occurs in practice.

These are concepts that I have explored in my recent book, Risky Conversations: The Law, Social Psychology and Risk, and its accompanying video.

You can access a more detailed article about the case here.

 

Work as it is actually performed: investigating when nothing happens

There have been some discussions and commentary in various online forums recently looking at the issue of “positive” incident investigations.  Although there seems to be a variety of nuances in the descriptions of positive investigations, they focus on “what went right”.

Some of these investigation models have also incorporated a broader management technique of “appreciative enquiry”, which, as I understand it, came to prominence in the late 1980s (see HERE for examples and information about appreciative enquiry).

The discussion about these frameworks describes the “what went right” philosophy as a positive view of investigations. It is a philosophy that does not focus on blame, but promotes discussion:

The benefit of that approach is that the conversation with witnesses is an entirely positive one. It is not about what could have happened. Not about the doom and gloom narrowly averted. Rather, it is about their heroic act, well designed process or lucky event that allowed us to avoid the adverse outcome. People love talking about positive things particularly if they had something to do with them. (https://www.linkedin.com/pulse/investigate-your-serious-near-misses-positive-way-michael-tooma?trk=prof-post)

 In my view, when organisations are not mature enough to talk about issues in a non-judgmental way, without attribution of blame, the “what went right” enquiry may present a risk.  It may be seen as a contrivance, with the facilitator spending a lot of their time saying things like “remember this is not about blame“.

In “mature” organisations the need to construct a system of enquiry to focus on the positive and avoid discussion of blame is largely redundant because the participants are aligned with and support the goals of the organisation.  Their desire to support the goals of the organisation overrides any petty, personal concerns about individual praise or blame.

If you have ever been privileged enough to work with high-performance sporting teams or elite military forces, you will understand this idea.

A precondition of belonging to these groups is the willingness to say and hear things that support the group’s objectives without personal agendas or taking personal affront.  The newest member of the team has a license to speak frankly about the performance of the most senior, and the most senior is expected to accept that conversation, not in the context of them personally, but in the context of the overall objectives of the team.

The extent to which organisations have to contrive a system whereby participants are corralled by a “what went right” narrative says a lot about the culture of an organisation and the “buy in” that people have to team objectives.

That is not to say that appreciative enquiry or investigating “what went right” does not have a place in organisations, nor that it could be an important building block along the way to developing something like an elite performing team.  But as a word of caution, you should also understand some of the paradoxes involved.

The Safety Paradox supposes that any initiative done in the name of health and safety has the potential to both improve and damage health and safety in a workplace.

Having sat through appreciative enquiry “management brainstorming sessions” and incident investigations, I am left with a strong sense of a “flavour of the month” initiative, as well as an even stronger sense of avoided accountability: an overriding impression of a process delivered without context or explanation (why this, and why now?).  The end product is a wall of butcher’s paper populated with sweeping motherhood statements and management speak, completely absent any meaningful desire to manage known problems.

The pendulum, it seemed, had swung too far the other way.

Again, that is not to say the idea should not be explored and applied.  But it needs context.  It needs explanation; it needs skilful facilitation; and it needs, perhaps most importantly, dedicated and meaningful follow-up and implementation.  Otherwise?  Well, we have all been in “those” types of sessions.

Another aspect of the “what went right” investigations is the requirement for something to have occurred.  There needs to be an incident or near miss to trigger the enquiry.

A risk in the “what went right” enquiry (without more) is that it can contribute to the illusion of safety.

The illusion of safety is the gap between safety management as we imagine it in our organisation and what happens in practice.  Incident investigations can be a powerful tool in exposing the illusion of safety because they have the potential to illustrate the disconnect between what we think happens and what is happening.  By just focusing on “what went right“, particularly in near miss incidents, we may fuel the illusion of safety and create a narrative that our systems are working to protect us from these incidents – effectively papering over the cracks in the edifice.

While avoiding blame and promoting open discussion is important, so too is avoiding sugar-coating the situation.  Again, balance, transparency and genuine enquiry ought to be the goal.

I would like to suggest something different – investigating work as it is performed; investigating when nothing happens.

An investigation framework that I find useful uses systems as opposed to causal analysis.

It supposes that organisations have systems and processes in place to prevent certain things from happening and tries to understand:

  1. What should have happened: how should these systems and processes have been applied in the particular case to prevent the particular thing from happening; and
  2. What happened: how was the work performed in the particular case.

From there, we identify and try to explain the “gap” between what should have happened and what did happen.

This framework is not concerned with “causation“.  All identified gaps are given equal attention and analysis, regardless of their potential causal relationship with the incident.  They are all important because they all represent a potential systemic weakness in safety management which, given a different factual matrix, could be causal.
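The gap analysis described above can be sketched mechanically: list the steps the documented system required, list the steps actually performed, and surface every difference without weighting any of them by causation.  This is a hypothetical illustration; the step names are invented for the example and are not drawn from any real case.

```python
# Hypothetical sketch of the systems-analysis framework described above:
# compare what the documented system says should have happened with what
# was actually done.  Step names are illustrative only.

should_have_happened = [
    "permit to work approved",
    "JHA completed and authorised",
    "supervisor sign-off obtained",
    "harness inspected before use",
]

what_happened = [
    "harness inspected before use",
]

# Every gap gets equal attention, regardless of whether it was "causal":
# each one represents a potential systemic weakness.
gaps = [step for step in should_have_happened if step not in what_happened]

for gap in gaps:
    print(f"GAP (potential systemic weakness): {gap}")
```

Crucially, nothing in this comparison requires an incident to have occurred; the same analysis can be run on work that was completed “successfully”, which is what allows it to expose weaknesses before anything goes wrong.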

The attractiveness of this framework is that it can help you identify systemic weakness when nothing has happened.

A few years ago I was involved in a matter leading to the prosecution of a client following a working at heights incident.  The incident and the various investigations that followed revealed the usual list of suspects:

  •  Training not followed;
  •  Procedures not followed;
  •  Risks not identified;
  •  Lack of supervision;
  •  Documentation not completed properly, and so on.

As part of working with that client, we applied the systems analysis framework to a range of other, similar high-risk work, including:

  •  Examples where the same task had been performed;
  •  Examples of different working at heights tasks; and
  •  Examples of other high-risk work tasks, including lifting operations and confined space entry.

In every case, the work had been performed “successfully“, without incident or near miss.

However, the analysis of the gap between how the work should have been performed and how it was performed demonstrated the same types of “failures” in the way that work was ordinarily performed as when the incident occurred.

In other words, even when work was “successful”, procedures were not followed, risks were not identified as well as they could have been, training was not complied with, documentation was not completed and so on.

The systemic weaknesses were not just present at the time of the incident.  They were characteristic of the way work was performed in the days and months previously.

The incident was not a one-off departure from an otherwise “good” system – it was simply evidence of otherwise broader, systemic failures.

Moreover, this system analysis approach highlighted weaknesses hidden by the traditional safety metrics – injury rates, action items closed out, hazards reported, management site visits, etc. – all of which were “green“.

I have applied this method of review from time to time over the years where I have been able to convince clients of its value.  On every occasion it brings to light the gap between the safety as imagined and safety in practice, lifting the veil on the illusion of safety.

In the Pike River Royal Commission, the Commission carefully examined Pike River’s system of incident investigation to understand if it “worked“.  They reviewed 1083 incident investigations and did a detailed examination of 436 of them.  Managers were subject to examination of their understanding of the investigation process, and ultimately the Commission found that “incidents were never properly investigated“.

You can see an example of the examination of management HERE.

Weakness in incident investigations, amongst other important systems elements, formed the basis of significant criticism of Pike River and its management:

 Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands.  Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that the health and safety management was not taken seriously enough at Pike.

 What do your philosophy and implementation of incident investigations say about you?

Post incident conduct: Is it relevant?

You can download a PDF version of this update HERE

A recent Victorian decision, Australian Box Recycling, received some attention on health and safety social media sites because of comments about the lack of remorse shown by the company.

The case involved a workplace fatality, but prior to the prosecution the company applied to deregister and did not take part in the proceedings.  The deregistration was unsuccessful (although the owners of Australian Box Recycling had started a new company doing similar things), and the case against the company proceeded.

The prosecution argued that the actions by Australian Box Recycling showed a lack of remorse for the incident, and the court agreed describing their conduct as “contemptible“, imposing an $800,000 fine:

Their attempt to wash their hands of their responsibilities by shutting down the company once charges were laid, refusing to take part in court proceedings, and starting up a similar company just nine months after their employee died is utterly contemptible and should be condemned

The conduct of a company after a workplace accident can have a significant bearing on the outcome of legal proceedings. Very often, the actions taken by a company can be used to argue mitigation and reduce any penalty the Court might otherwise impose.

A notable example is when a company pleads guilty at the earliest opportunity. Although there are no hard and fast rules about the value of an early guilty plea, it is often associated with discounts on penalties of up to 25%.

Other “post-accident” factors include:

  • The steps taken by the company to improve health and safety;
  • The level of support shown for any injured personnel and their families; and
  • The level of remorse shown by an organisation – often evidenced by the factors above.

Conduct that evidences a lack of remorse or a lack of cooperation can have significant, adverse consequences for a company.

One case where this played out was the prosecution of Esso Australia Pty Ltd following the Longford Gas Plant Explosion in 1998.

In the sentencing hearing following prosecution, the Court was very critical of Esso’s ongoing failure to accept responsibility for the accident:

However, before imposing sentence on Esso it is unfortunately necessary to examine the litigious conduct of Esso in these proceedings. It is necessary both of itself and as an incident of sentencing – remorse and rehabilitation being relevant to that end.

“Esso and its senior officers have expressed remorse for the tragic loss of life and injury … I have no doubt that that personal remorse is genuine … I acknowledge that genuine remorse. However, personal expressions of remorse need to be translated into reality. In the present case, they have not been. There are three matters which militate against corporate remorse.” [40 – 43]

The factors that militated against corporate remorse were:

  • The litigious treatment of the employees;
  • The conduct of the defence in the trial, which was described as “one of obfuscation – designed not to clarify, but to obscure” [45]; and
  • The “lamentable failure of Esso to accept its responsibility for these tragic events“. [46]

In another example, a company, Ferro Con (SA) Pty Ltd was heavily criticised following a workplace fatality when it relied on an insurance policy:

“In my opinion Mr Maione and Ferro Con have taken positive steps to avoid having to accept most of the legal consequences of their criminal conduct as determined by the course of justice. This has occurred through Mr Maione successfully calling on an insurer to pay his fine.” [78]

“In my opinion Mr Maione’s actions are so contrary to a genuine acceptance of the legal consequences of his criminal offending that they dramatically outweigh the benefits to the justice system of the early guilty plea and statement of remorse. Accordingly it would be entirely inappropriate to grant any reduction of penalty to Mr Maione or Ferro Con in these circumstances.” [81]

But it is not just in prosecutions that post-incident behaviour by a company and its officers can influence the outcome of a legal process, or lead to criticism.

A striking example comes from the Montara Commission of Inquiry.

The Inquiry was established following the uncontrolled release of hydrocarbons from an offshore drilling platform off the coast of North West Australia in August 2009.

During the Inquiry, one of the key participants, PTTEPAA, was heavily criticised for its conduct, to the point that the Inquiry recommended that the Australian Government review PTTEPAA’s licence to operate in Australia:

“The Inquiry considers that the manner in which PTTEPAA approached the National Offshore Petroleum Safety Authority (NOPSA), the NT DoR and the Inquiry itself provides further evidence of the company’s poor governance. PTTEPAA did not seek to properly inform itself as to the circumstances and the causes of the Blowout. The information that it provided to the regulators was consequently incomplete and apt to mislead. Its dealings with this Inquiry followed a similar pattern.

The Inquiry recommends that the Minister for Resources and Energy review PTTEPAA’s licence to operate at the Montara Oilfield. At this juncture the Inquiry has little confidence in PTTEPAA’s capacity to apply principles of sensible oilfield practice.” (page 12)

There is nothing in these cases that should discourage a business from understanding and acting on any legal rights they have following a workplace accident. However, these rights and any legal strategy need to be carefully balanced as part of an overall response.

 

The Safety Paradox and the challenge of health and safety assurance

I am currently working on a new book on practical health and safety assurance, which I hope to have out by the end of the year, but I recently came across an article published through LinkedIn entitled Six Mistakes H&S Managers Make with Occupational Health & Safety.

I do not want to comment on the article itself, although it is worth a read. It was the following paragraph that caught my attention, and goes to the heart of what I am trying to explore in the context of health and safety assurance.

Habits are what save us when our mind is not consciously on the job. Many of the health and safety systems we use (such as Take-5s, prestart talks, and health and safety observations) are aimed at creating habits in people’s minds so that they are constantly aware of hazards in the work environment, and can react when they see something that is about to hurt them. Each little action and health and safety discussion might not prevent an incident itself, but they all add together to create valuable health and safety habits. Do not think that you are repeating this training or talk for the millionth time and that you are wasting time and money. When the crisis hits it will probably be these repetitive sessions that will prevent great harm or loss.

First, let me explain what I mean by the Safety Paradox. The Safety Paradox is my theory that all health and safety initiatives have within them the potential to both improve and undermine safety, and one of the significant ways that safety initiatives undermine safety is by contributing to the Illusion of Safety.

The Illusion of Safety is characterised by the Gap between the safety system as we imagine it, and the system in practice, and it is often caused by activity: Because we are doing a lot of stuff for safety, it must all be good and positive and lead to a good safety outcome.

We know that not all safety initiatives are always good, and that safety initiatives can undermine safety.

Research into JHAs and other frontline risk assessment tools shows how they can disengage the workforce from the organisation’s health and safety message, but at the same time create an unfounded sense of comfort in management that workers have – and are using – appropriate tools to manage risk (See for example: D. Borys, Exploring risk awareness as a cultural approach to safety: Exposing the gap between work as imagined and work as actually performed).

The Baker Panel Review into the BP Texas City Refinery Explosion referred to “initiative overload”, identifying that many well intentioned safety initiatives may have overloaded refinery personnel to the detriment of safety.

To my mind, the assumption that because we are doing something in the name of health and safety it must be good, and must be achieving the purpose for which it is intended, is one of the foundational building blocks of the Illusion of Safety, and must be challenged.

So, in this case when the author says:

Many of the health and safety systems we use (such as Take-5s, prestart talks, and health and safety observations) are aimed at creating habits in people’s minds so that they are constantly aware of hazards in the work environment, and can react when they see something that is about to hurt them

Health and safety assurance requires us to understand whether this outcome, this purpose (“creating habits in people’s minds so that they are constantly aware of hazards in the work environment”), is actually being achieved.  The assumption that the purpose is being achieved flies in the face of the Safety Paradox, contributes to the Illusion of Safety, and undermines safety and health in the workplace.

Health and safety assurance requires us to understand the potential negative outcomes of these safety activities.  For example, to what extent does the constant requirement to fill out a piece of paper before every job (i.e. a Take 5) desensitise the workforce to risk, trivialise risk, or make the workforce think that management doesn’t trust them?  To what extent does the workforce believe that these pre-job processes and signature collections are management’s attempt to, adopting the language of the Borys article above, “cover their arse”?

It is wholly insufficient for the safety industry to say that these safety initiatives are theoretically good processes without understanding their potential negative outcomes, or investing the time and energy to understand whether they are achieving their intended purposes.

And when the author goes on to say:

Do not think that you are repeating this training or talk for the millionth time and that you are wasting time and money. When the crisis hits it will probably be these repetitive sessions that will prevent great harm or loss.

surely there must be some onus to confirm that this thing that has been done for the “millionth time” is not having a negative effect?  I can think of nothing more damaging for health and safety in the workplace than doing something for the “millionth time” and not knowing whether it is achieving its purpose or, more damaging still, undermining its intended purpose.

The safety industry must be accountable for its initiatives, and management must hold the safety industry accountable.  It is simply unacceptable to continue to pump initiatives and processes into organisations on the theoretical assumption that they are “good” for safety without being able to demonstrate that those initiatives and processes are achieving the purpose for which they were designed.

By the way, your injury rate performance is not a measure of whether your health and safety initiatives are achieving their purpose.

The logic of safety penalties and prosecutions

Recently, the Victorian Government made some noise about increasing penalties for breaches of health and safety legislation, with maximum fines increasing to $3M.  This is consistent with most jurisdictions in Australia, and the drive for significant personal and corporate penalties for breaches of health and safety legislation has always been a feature of the safety industry in Australia.

Recently I wrote about the value of safety prosecutions ($450,000: is this what we want from safety prosecutions?), and argued for an alternative model:

What if, rather than prosecutions, organisations that have had accidents could opt in to a safety learning program?  In this case, for example, a detailed investigation and research project to understand all of the factors influencing the incident: not just the role of the employers and workers, but also the regulator, the way safety information is made available, and the best ways to help small and medium-sized businesses implement a safety program.

  • The project would be paid for by the employer – so there is still a financial penalty.
  • Both the incident and the research could be highly publicised to add to the deterrent value.
  • Valuable lessons would be available within months, as opposed to meaningless factual statements after years.

Prosecutions can, and should, still be reserved for the worst classes of offence, but these would be very limited. [my emphasis added]

It has been reported by the ABC today (9 July 2016) that a Queensland electrical contractor has been charged with manslaughter following a fatality at a building site.

Leaving aside the merits of the individual case, this action does beg the question: why all the fuss about penalties in health and safety legislation if they are not used for the most serious offences?  What does it say about health and safety legislation as an avenue for legitimate criminal prosecutions?

More importantly, would health and safety legislation be a better tool for improving health and safety in workplaces if it were not based on prosecution and fear, but had legitimate mechanisms for the promotion and sharing of lessons about workplace safety?

Is there an argument that says health and safety legislation should focus on workplace health and safety, and it should be left to the criminal law processes to deal with serious failures of workplace safety? Alternatively, if health and safety legislation is an appropriate tool to punish individuals and organisations, shouldn’t it be used when the opportunity arises?

If not, then what was all that harmonisation fear-mongering about?