Work as it is actually performed: investigating when nothing happens

There has been discussion and commentary in various online forums recently about “positive” incident investigations.  Although there are various nuances in how positive investigations are described, they all focus on “what went right”.

Some of these investigation models have also incorporated a broader management technique of “appreciative enquiry”, which, as I understand it, came to prominence in the late 1980s (see HERE for examples and information about appreciative enquiry).

The discussion about these frameworks describes the “what went right” philosophy as a positive view of investigations. It is a philosophy that does not focus on blame, but promotes discussion:

The benefit of that approach is that the conversation with witnesses is an entirely positive one. It is not about what could have happened. Not about the doom and gloom narrowly averted. Rather, it is about their heroic act, well designed process or lucky event that allowed us to avoid the adverse outcome. People love talking about positive things particularly if they had something to do with them. (https://www.linkedin.com/pulse/investigate-your-serious-near-misses-positive-way-michael-tooma?trk=prof-post)

In my view, when organisations are not mature enough to talk about issues in a non-judgmental way, without attribution of blame, the “what went right” enquiry may present a risk.  It may be seen as a contrivance, with the facilitator spending much of their time saying things like “remember, this is not about blame”.

In “mature” organisations the need to construct a system of enquiry to focus on the positive and avoid discussion of blame is largely redundant because the participants are aligned with and support the goals of the organisation.  Their desire to support the goals of the organisation overrides any petty, personal concerns about individual praise or blame.

If you have ever been privileged enough to work with high-performance sporting teams or elite military forces, you will understand this idea.

A precondition of belonging to these groups is the willingness to say and hear things that support the group’s objectives, without personal agendas or taking personal affront.  The newest member of the team has a licence to speak frankly about the performance of the most senior, and the most senior is expected to accept that conversation, not as a comment on them personally, but in the context of the overall objectives of the team.

The extent to which organisations have to contrive a system whereby participants are corralled by a “what went right” narrative says a lot about the culture of an organisation and the “buy in” that people have to team objectives.

That is not to say that appreciative enquiry or investigating “what went right” has no place in organisations, or that it could not be an important building block along the way to developing something like an elite performing team.  But, as a word of caution, you should also understand some of the paradoxes involved.

The Safety Paradox supposes that any initiative done in the name of health and safety has the potential to both improve and damage health and safety in a workplace.

Having sat through appreciative enquiry “management brainstorming sessions” and incident investigations, I was left with a strong sense of a “flavour of the month” initiative, and an even stronger sense of accountability being avoided.  There was an overriding impression of a process delivered without context or explanation – why this, and why now?  The end product was a wall of butcher’s paper populated with sweeping motherhood statements and management speak, completely absent any meaningful desire to manage known problems.

The pendulum, it seemed, had swung too far the other way.

Again, that is not to say the idea should not be explored and applied.  But it needs context.  It needs explanation; it needs skilful facilitation; and it needs, perhaps most importantly, dedicated and meaningful follow-up and implementation.  Otherwise?  Well, we have all been in “those” types of sessions.

Another aspect of the “what went right” investigations is the requirement for something to have occurred.  There needs to be an incident or near miss to trigger the enquiry.

A risk in the “what went right” enquiry (without more) is that it can contribute to the illusion of safety.

The illusion of safety is the gap between safety management as we imagine it in our organisation and what happens in practice.  Incident investigations can be a powerful tool in exposing the illusion of safety because they have the potential to illustrate the disconnect between what we think happens and what is happening.  By just focusing on “what went right“, particularly in near miss incidents, we may fuel the illusion of safety and create a narrative that our systems are working to protect us from these incidents – effectively papering over the cracks in the edifice.

While avoiding blame and promoting open discussion is important, so too is avoiding sugar-coating the situation.  Again, balance, transparency and genuine enquiry ought to be the goal.

I would like to suggest something different – investigating work as it is performed; investigating when nothing happens.

An investigation framework that I find useful relies on systems analysis as opposed to causal analysis.

It supposes that organisations have systems and processes in place to prevent certain things from happening and tries to understand:

  1. What should have happened: how these systems and processes should have been applied in the particular case to prevent the particular thing from happening; and
  2. What happened: how the work was actually performed in the particular case.

From there, we identify and try to explain the “gap” between what should have happened and what did happen.

This framework is not concerned with “causation”.  All identified gaps are given equal attention and analysis, regardless of their potential causal relationship with the incident.  They are all important because they all represent a potential systemic weakness in safety management which, given a different factual matrix, could be causal.

The attractiveness of this framework is that it can help you identify systemic weakness when nothing has happened.
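For readers who think in code, the following is a minimal, illustrative sketch of the gap analysis described above. It is not a tool used in any of the investigations referred to in this post, and the task steps and findings are hypothetical. The point it tries to capture is that every gap between how work should have been performed and how it was performed is recorded and given equal weight, whether or not anything went wrong.

```python
from dataclasses import dataclass

@dataclass
class Step:
    description: str              # what the procedure or system requires
    performed_as_required: bool   # what the review actually found
    observation: str = ""         # what was seen or reported in practice

def gap_analysis(task: str, steps: list[Step]) -> list[str]:
    """Return every gap between what should have happened and what did happen.

    No attempt is made to rank gaps by causal contribution to an incident;
    each gap is treated as a potential systemic weakness in its own right.
    """
    gaps = []
    for step in steps:
        if not step.performed_as_required:
            gaps.append(f"{task}: expected '{step.description}', "
                        f"observed '{step.observation or 'not done'}'")
    return gaps

# Hypothetical review of routine, incident-free working at heights
steps = [
    Step("Permit to work approved before starting", False,
         "permit signed off after work had begun"),
    Step("Harness inspection recorded", False,
         "no inspection record for the shift"),
    Step("Supervisor verified the exclusion zone", True),
]

for gap in gap_analysis("Working at heights - roof repair", steps):
    print(gap)
```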

A few years ago I acted for a client who was prosecuted following a working at heights incident.  The incident, and the various investigations that followed, revealed the usual list of suspects:

  •  Training not followed;
  •  Procedures not followed;
  •  Risks not identified;
  •  Lack of supervision;
  •  Documentation not completed properly, and so on.

As part of working with that client, we applied the systems analysis framework to a range of other, similar high-risk work, including:

  •  Examples where the same task had been performed;
  •  Examples of different working at heights tasks; and
  •  Examples of other high-risk work tasks, including lifting operations and confined space entry.

In every case, the work had been performed “successfully“, without incident or near miss.

However, the analysis of the gap between how the work should have been performed and how it was performed demonstrated the same types of “failures” in the way that work was ordinarily performed as when the incident occurred.

In other words, even when work was “successful”, procedures were not followed, risks were not identified as well as they could have been, training was not complied with, documentation was not completed and so on.

The systemic weaknesses were not just present at the time of the incident.  They were characteristic of the way work was performed in the days and months previously.

The incident was not a one-off departure from an otherwise “good” system – it was simply evidence of broader, systemic failures.

Moreover, this systems analysis approach highlighted weaknesses hidden by the traditional safety metrics – injury rates, action items closed out, hazards reported, management site visits, etc. – all of which were “green”.

I have applied this method of review from time to time over the years, where I have been able to convince clients of its value.  On every occasion it has brought to light the gap between safety as imagined and safety in practice, lifting the veil on the illusion of safety.

In the Pike River Royal Commission, the Commission carefully examined Pike River’s system of incident investigation to understand whether it “worked”.  It reviewed 1083 incident investigations and examined 436 of them in detail.  Managers were examined on their understanding of the investigation process, and ultimately the Commission found that “incidents were never properly investigated”.

You can see an example of the examination of management HERE.

Weakness in incident investigations, amongst other important systems elements, formed the basis of significant criticism of Pike River and its management:

 Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands.  Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that the health and safety management was not taken seriously enough at Pike.

 What do your philosophy and implementation of incident investigations say about you?

When does the language of “zero harm” become unlawful?

I am not a fan of the language of “zero”, either as an aspiration or as a stated goal. It has never sat well with me, and it seems so disconnected from day-to-day reality, in both society and the workplace, that people cannot help but become disconnected from, or dismissive of, the message behind the term. My view has always been that the language of zero often undermines the very objectives it is trying to achieve (see this case for example).

If you are interested in this topic (and if you are involved in safety, you should be) there are far more passionate, learned and articulate critics of the language of zero than me – see, for example, anything by Dr Robert Long.

However, recently I have been asked to do quite a bit of work around psychological harm in the context of occupational safety and health. In particular, how the legal risk management of psychological harm in the context of safety and health might differ from the Human Resources (HR)/employee relations context.

WHS legislation around Australia expressly includes “psychological” health within its remit and the Western Australian Department of Mines and Petroleum has acknowledged that they regard “health” as including “psychological” health, even though it is not expressly described in the State’s mining legislation.

What has emerged, at least to my mind, is the extent to which our policy, procedure and policing approach to safety and health, far from alleviating psychological harm in the workplace, might be contributing to it.

Safety management might be part of the problem.

In an ongoing Western Australian inquiry into the possible impact of fly-in/fly-out work on “mental health”, the Australian Medical Association identified that the way health and safety is managed can contribute to a “distinct sense of entrapment” (page 43):

The AMA also expressed its concerns about this issue, noting that “[o]nerous rules, safety procedures and focus on achievement of production levels have been shown to create a distinct sense of entrapment in FIFO workers.”

The inquiry drew, in some measure, on an earlier report, the Lifeline WA FIFO/DIDO Mental Health Research Report 2013, which also appeared to note the adverse impact of safety and health management on psychological well-being. For example, “[a]dhering to on-site safety rules” was identified as a workplace stressor (page 77). Interestingly, the Lifeline report noted a sense of “intimidation” brought on by the number of rules and regulations associated with work on a mine, and:

This sense of intimidation was further mirrored in the outcomes of mining safety regulations which in theory were designed to care for workers but in practice led to inflexible regulation over genuine safety concerns (page 81).

Examples from the Lifeline report include:

… a participant recalled a situation in which a worker handling heavy loads required an adhesive bandage but was unable to ask someone to get them for him because he had to fill out an accident report first (which he was unable to do mid-job); hence he had to carry on working without attending to his cuts. Alternatively, another example of the application of safety rules in an inflexible manner was illustrated when a group of workers were reprimanded for not wearing safety glasses on a 40 degree day even though they could not see from them due to excessive sweating. Hence, safety rules themselves were accepted as a necessary part of work but their implementation in an inflexible uniform manner created stress as workers felt their impact hindered their ability to conduct basic work tasks safely and/or without attracting rebuke. Hence, site rules and regulations could translate into arbitrary and punitive forms of punishment, which undermined participants’ ability to fulfil jobs to their satisfaction and left them feeling insecure with their positions (page 81).

It seems, then, that we need to think beyond our own perceptions of what might contribute to workplace stress and understand the impact that our efforts to manage health and safety might actually be having. Again, as the Lifeline research noted:

… although past research has shown that site conditions and cultures, such as isolation and excessive drinking are problematic, this research shows that the regimented nature of working and living on-site also takes a toll on mental health and wellbeing. From the responses of many participants, it was apparent that following site safety rules (either under pressure of internal monitoring or in the perceived absence of adequate safety precautions by co-workers and supervisors) was a significant stressor. Participants felt unable to apply self-perceived common-sense judgments and also reported feeling vulnerable to intensive scrutinising, intimidation and threats of job loss (page 82) [my emphasis added].

The common criticisms of the language of “zero” seem to me to go directly to the factors that have been identified in this research as contributing to psychological harm in the workplace. The pressure to comply with rules, fear about reporting incidents, the inability to exercise individual judgement on how to manage risk and the inflexible application of process are all side-effects of the language of “zero“.

Up until this point, the debate around “zero harm” and its utility (or otherwise) as the headline for safety management has been relatively benign. Apart from the advocacy of people like Dr Robert Long, “zero harm” seems to have been perceived as a relatively neutral strategy, insofar as people believe that it “does no harm”, and ask “what’s the alternative?”.

It seems, in fact, that much harm may be perpetuated in the name of “zero“, and at some point the behaviours that it drives will be found to be unlawful.

It is also going to be interesting to see how health and safety regulators, often the champions of “zero harm”, oversee its potential impacts on psychological harm in the workplace. Indeed, it would be very useful to see what risk assessments, research or other measures were taken by regulators prior to introducing “zero harm” style campaigns or messages to understand the potential effects of their interventions, or any subsequent research to understand the potential harm they may have done.


Lead indicators: Reinforcing the illusion of safety

One of my biggest gripes about safety management over the past 20-plus years is the lemming-like fascination with “indicators”.

Notoriously, major inquiries around the globe have found that when organisations focus on “lag” indicators (typically personal injury rates) they miss, or become blinded to, more significant risks and catastrophic events often result.

Most recently, this was succinctly articulated by the Pike River Royal Commission which stated:

The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents.  … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries.  … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.

I have long feared, and it now appears, that we are heading down the same path under the guise of “lead” indicators. A recent study described in the Queensland Government’s eSafe newsletter found serious shortcomings in using traditional lag indicators for measuring safety.

Nothing surprising there!

Apparently, the study went on to note a range of leading indicators that helped to deliver good personal injury performance. These indicators included fairly commonplace practices such as:

  • subcontractors being selected based (in part) on safety criteria.
  • subcontractors submitting approved, site-specific safety programs.
  • the percentage of toolbox meetings attended by supervisors and managers.
  • the percentage of planning meetings attended by jobsite supervisors and managers.
  • the percentage of negative test results on random drug tests.
  • the percentage of safety compliance on jobsite safety audits (inspections).

And so on.

I am not saying that any of these indicators are not good safety practices. They are. They should be measured as indicators of good safety practice – but they are not a measure of a safe workplace. They are not an indicator of risks being controlled.

The problem with any general “indicator” approach, lead or lag, is that it does not actually give us any insight into whether the risks in the business are being controlled. It simply perpetuates the illusion of safety.

In other words, I have a bunch of indicators. The indicators are being met. Therefore, the risks in my business are being controlled.

Nonsense.

Think of a potential fatal risk in your business. Take confined spaces as an example.

What do any of the indicators described above tell you about whether that risk is being controlled? Typically nothing.

What are the crucial systems in your business?

How do you prove that they are effective?

Contractor safety management series Part 2: Stratton v Van Driel Limited

Stratton v Van Driel Limited is the second case in our contractor safety management series.

It is a somewhat older decision, having been handed down in 1998, but useful in that it looks at a narrow issue that is very important in the context of contractor safety management: Control.

In 1995 Mr Baum, a roof plumber, was seriously injured when he fell down a ladder. Mr Baum was employed by Signal & Hobbs, who in turn had been engaged by Van Driel Limited to do work on the new Dandenong Club in Dandenong, Victoria.

The essence of the charges against Van Driel was that it had not done everything reasonably practicable to provide a safe system of work, in that it had not managed the risks associated with working on the roof.

Van Driel defended the charges on the basis that it did not have relevant control over the way an independent contractor did its work.

You can access the video presentation of the case here.

Contractor safety management series: Introduction

I have just finished finalising a presentation for a case involving the death of a worker employed by a subcontractor that was two companies removed from the Principal. The case involved the prosecution of the Principal in respect of the fatality.

Earlier this year I prepared a post and presentation on the Hillman v Ferro Con (SA) decision, which also involved the death of a worker employed by a contractor. You can access the blog post and video presentation here.

Contractor safety management seems to be an ongoing struggle for a lot of businesses, so I thought that I would do a series looking at a number of cases that examine the issues around contractor safety management. At the end of the series I will try to bring together a number of the issues raised to see if we can’t structure some key guiding principles.

At this stage, I am planning a series of 10 or 11 video presentations looking at some of the key cases across a number of jurisdictions over the last few years.

The first case in the series is Nash v Eastern Star Gas, a recent decision of the New South Wales Industrial Court which was handed down on 6 September 2013. You can access the blog post and video presentation here.

I hope you enjoy the series, and I look forward to any comments or feedback.

Delphic motherhood statements part 2 – safety documents that nobody can understand

A little while ago I did a post looking at the complexity of documented safety management systems, and the role that documentation has played in undermining effective safety management. You can review the post here.

I was recently sent an article (you can access it here) which underscores the potential negative impact safety documentation has on safety performance.

The New Zealand research found that:

  • Two-thirds of employees did not fully understand information contained in health and safety documents, including safety procedures;
  • 80% of employees were not able to accurately complete hazard report forms; and
  • Safety documents were highly complex and used vocabulary that employees did not understand.

A fascinating aspect of the research is that it provides a list of words that employees found unfamiliar and confusing. Some of those words included “significant hazards”, “competence”, “accountabilities” and “not adversely affect” – all words that reflect the requirements of legislation and guidance material, but have little place in the day-to-day vocabulary of workers.
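As an aside, readability formulas offer one rough, automatable way to flag documents that are likely to be too complex for their readers. The sketch below is purely illustrative – it is not part of the New Zealand research, and the naive syllable counter and the sample procedure wording are my own assumptions – but it shows how the standard Flesch Reading Ease formula could be applied to a safety procedure.

```python
import re

def count_syllables(word: str) -> int:
    # Very rough approximation: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher is easier; below roughly 50 is hard going."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Hypothetical procedure wording, using the kinds of terms the research flagged
procedure = ("Personnel must ensure that significant hazards are assessed so that "
             "operational activities do not adversely affect the competence "
             "requirements or accountabilities of the undertaking.")

print(f"Reading ease: {flesch_reading_ease(procedure):.1f}")
```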

From my own perspective, I have to say that this research is entirely consistent with my study of major accident events going back 30 years. Every major accident inquiry that I have ever researched has identified that, in some way, the documented safety management systems undermined effective safety performance. Typically, they are too complex for the people who have to implement them to understand.

Based on my experience, I would add two further phrases to the list of unfamiliar words: “reasonably practicable” and “root cause”. These two phrases are ubiquitous throughout safety management documents in Australia, yet whenever I conduct obligations or investigation training there is no common (much less “correct”) understanding of what they mean.

There are two things that I find professionally embarrassing as a person who has spent the last two decades specialising in safety and health management. The first is our continued reliance on lost time injury data as a measure of safety performance, in light of the overwhelming evidence that it adds no value to our understanding of the management of risk.

The second is that, despite at least 30 years of “reminders” that our documented safety processes add little to the management of safety risks, almost universally we continue to do the same thing, in the same way, but somehow expect a different result. I think Einstein had something to say about that.

I have recently been working with a senior executive in an organisation who confronted a safety consultant with the following:

“If you can’t explain it to me easily, then you don’t understand it yourself.”

An interesting test to apply to our safety documents?

Safety risk and safety data: Exploring management line of sight

I have recently done a video presentation on a fatality at the Adelaide Desalination plant, which you can find by following this link.

Recently, I was reading some of the transcript of the South Australian Senate Inquiry into the desalination plant (which you can find by following this link), and was struck by one manager’s description of all of the activity undertaken in the name of safety:

We start with the inductions when new staff join the project. So, at 6.30am, usually three times a week—I attend probably two of them; I was in one yesterday—we induct new staff onto the job. The first thing I point out is the list of non-negotiables. The second thing I point out is for each person to look after their mate. It starts there. We then have a standard list of documents. I will read from this list, because it’s quite a large list. There is the HSC risk register, task specific for each job. There is a construction execution plan. There is a JSA, task specific.

We have daily start cards for each area, which is another thing I introduced. I am not sure if we gave you a copy, but it’s a small easily-filled-in card where a work team can assess the risks of adjacent trades, etc. So, that is a specific thing. We have a pre-start meeting every day. There are SafeWork instruction notices posted at each of the work areas. We toolbox the job weekly, because the pace of this job changes. You can go out there in two-day gulps and the whole access can change, so we need to make sure people see that. We have the non-negotiables in place. We have site and work-front specific inductions, which is what I told you about. Again, I attended one yesterday.

I have regular safety walks. I have trained all of my management team and the two layers beneath that to go on safety walks. We have our OHSC risk register. There is a just culture model in place. So, if I need to address an incident and it turns out that this person needs retraining or perhaps needs to be disciplined or work outside the fence somewhere, we use this just culture model for that. We have all been trained in that. There are safety KPIs for management. There is a safety enhancement committee, which is a mixture of workers and staff. I actually chair a weekly safety leadership team, and that’s improving safety over and above. We are looking to refresh it all the time. And so it goes on. I have two pages of this stuff.

Now, there may have been far more information that sat behind all of this activity, but it seemed to me to be a typical approach to safety management – and one that typically gives no insight into whether the risks in the business are actually being managed.

One of my particular areas of interest in the context of safety management is “management obligations”, and more particularly how managers (at all levels) get assurance that the health and safety risks in their business are being effectively managed. It is a concept that I have referred to before and written about (Smith, 2012) as “management line of sight”.

An area of speciality for me is management obligations training; courses that are designed to help managers understand their legal obligations for safety and health, and how their behaviour – what they “do” – contributes to effective safety management.

Over the last three or four years I have put the following scenario to participants in those courses:

Who here knows about a risk in their business or area of responsibility that could kill someone?

Invariably, most hands go up.

Who has safety information that comes across their desk on a regular basis?

Again – most hands go up.

OK. What I would like you to do is to think about the risk. Then I want you to think about the data that you have looked at in the past 3 months.

Pause ……

What does that data tell you about how well the risk is being controlled?

And then the lights come on, with the realisation that their organisations spend inordinate amounts of time and resources producing volumes of information that tell them nothing about whether risks in the business are actually being controlled.

This “gap” was most recently highlighted in the Royal Commission into the Pike River Coal Mine Disaster (Pankhurst et al., 2012), in which 29 men died in an underground coal mine explosion in New Zealand. The Royal Commission noted the following:

The statistical information provided to the board on health and safety comprised mainly [LTI rates]. The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries.

… The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation. (My emphasis).

Typically, in a training course discussion there is no meaningful consensus on what the “crucial systems” are in a business, much less how we prove that they are effective.

What we can say with a high degree of certainty is that traditional measures of safety performance do not prove the effectiveness of crucial systems – certainly LTI and other personal injury rates do not, and we have known that for at least 25 years. However, other indicators are equally poor at creating insight into the control of crucial systems. The number of management site visits does not enlighten us; nor does the number of audit actions closed out, the number of “behavioural observations”, the number of people trained, the number of corrective actions completed, or the number of JHAs or “take 5s” done – and on it goes.

These are all indicators of activity – activity designed to ensure that the safety management system is effective – but ultimately they leave us no better placed to understand the effectiveness of crucial systems.

There is another interesting challenge that falls out of exploring management line of sight, and that is, what should I be looking at?

Historically, and as I touched on above, we typically consider safety in the context of harm and risk: what can hurt people and how likely is it that they will be hurt? But line of sight and assurance demands a wider gaze than hazards and risks.

The Royal Commission (2012, volume 2, p. 176) also stated:

Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands. Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that health and safety management was not taken seriously enough at Pike. (my emphasis)

“Crucial systems” means more than gas monitoring or ventilation. They are more than the control of physical risks. They incorporate broader organisational systems around hazard identification and risk assessment, contractor safety management, management of change, incident investigation and so on – all elements designed to work together so that the “system” as a whole is effective in managing risk.

If organisations are weak insofar as they cannot “prove” that physical risks are being controlled, the reporting, assurance and line of sight to prove that these other “crucial” systems are effective is almost non-existent.

When was the last time you received a report “proving the effectiveness” of your incident investigations, for example?

What are the “crucial systems” in your business, and how would you “prove” that they were effective? Food for thought.

References

Pankhurst, G., Bell, S., & Henry, D. (2012). Royal Commission on the Pike River Coal Mine Tragedy. Wellington, New Zealand.

Smith, G. (2012). Management Obligations for Health and Safety. Boca Raton: CRC Press.

25 Years on: Remembering Piper Alpha

In the past few weeks I have been asked to do presentations and share my views about the legacy of Piper Alpha in this, the 25th anniversary year of the disaster.

For me, the positive legacy is the advancement in safety regulation, engineering and “safety in design” that has improved the physical safety of high hazard workplaces. Safety in design has also improved the “survivability” of disasters, so that when accidents do occur, their consequences are better mitigated.

The ongoing disappointment, however, is the persistent failure of management oversight and assurance to properly understand if health and safety risks are being managed. This is a failure that has played out in every major accident inquiry since Piper Alpha and continues to undermine effective safety management.

You can see a video presentation of these ideas and concepts here.

Are health and safety managers “company officers” and should they be?

This post has been prompted by recent activity on various blogs and safety discussion boards about whether a health and safety manager could be a Company Officer for the purposes of recently adopted health and safety legislation.

For those of you who follow this blog outside of Australia, part of this post is particular to recent legislative developments in Australia, although part of the discussion also looks at the broader accountabilities of health and safety managers.

Since about 2008, Australia has been engaged in a discussion about a legislative change agenda commonly referred to as “harmonisation”. The object of harmonisation was to achieve nationally consistent health and safety legislation across all jurisdictions in Australia. Although due to commence in 2013, and despite “harmonised” laws having been implemented in a number of jurisdictions, to date, the objectives of harmonisation have not been achieved.

You can read more about harmonisation here.

One of the key elements of harmonisation is a positive obligation of “due diligence” imposed on “company officers”.

Previously, under Australian law, Company Officers could be held personally liable for breaches of safety legislation where offences occurred due to the company officer’s consent, connivance or neglect. A recent example was the Western Australian decision Fry v Keating, and you can see a presentation of the case here.

The due diligence obligations mean that relevant individuals must demonstrate positive actions to be satisfied that health and safety risks are being effectively controlled. So for example, the “model bill” used to frame harmonised legislation provides that due diligence includes taking reasonable steps:

  • to acquire and keep up-to-date knowledge of work health and safety matters; and
  • to gain an understanding of the nature of the operations of the business or undertaking of the person conducting the business or undertaking and generally of the hazards and risks associated with those operations; and
  • to ensure that the person conducting the business or undertaking has available for use, and uses, appropriate resources and processes to eliminate or minimise risks to health and safety from work carried out as part of the conduct of the business or undertaking; and
  • to ensure that the person conducting the business or undertaking has appropriate processes for receiving and considering information regarding incidents, hazards and risks and responding in a timely way to that information; and
  • to ensure that the person conducting the business or undertaking has, and implements, processes for complying with any duty or obligation of the person conducting the business or undertaking under this Act; and
  • to verify the provision and use of the resources and processes referred to above.

Given that harmonisation is about legislation aimed specifically at managing health and safety risks, it does suggest two important questions: could health and safety managers be company officers for the purposes of the due diligence obligations, and should they be?

In my view, the answers are “probably not”, and “yes”.

Although health and safety managers are often “senior managers”, they are not by default company officers. The term “officer” of a corporation is defined by s 9 of the Corporations Act 2001 and, relevantly for this post, includes a person who makes, or participates in making, decisions that affect the whole, or a substantial part, of the business of the corporation.

A relatively recent case looking at the issue of who may be a company officer was Shafron v Australian Securities and Investments Commission [2012] HCA 18, which you can access here. The case was one of a series of cases concerning the prosecution of a number of company officers and executive managers of James Hardie, arising out of disclosures by the company about its ability to fund potential asbestos liabilities.

Mr Shafron was both Company Secretary and the General Legal Counsel, and the relevant arguments turned on whether Mr Shafron could be a company officer in his capacity as General Legal Counsel.

Part of the argument run by Mr Shafron was that he could split the two roles, Company Secretary and General Counsel, so that when he was acting in his capacity as Company Secretary he was a Company Officer, but not when acting in his capacity as General Counsel.

The majority of the High Court “greatly doubted” that the capacities could be split in that way, but, usefully for this discussion, went on to consider whether Mr Shafron was a Company Officer when acting in his capacity as General Counsel.

In forming the view that Mr Shafron was a person who makes, or participates in making, decisions that affect the whole, or a substantial part, of the business of the corporation, the High Court made a number of observations.

First, Mr Shafron was a senior officer, the second or third most senior executive in the company.

Second, Mr Shafron was one of a small group of three people who were “responsible for formulating” the relevant proposals.

Third, Mr Shafron’s “participation” went beyond merely providing advice – he played a large and active part (along with two others) in putting together the proposal that they chose should be put to the Board and adopted.

What is clear from the decision is that, in some circumstances, whether a person is a Company Officer is situational – it is not fixed. A person may be regarded as a Company Officer when making (or participating in making) some decisions, but not others.

On the face of the reasoning of the High Court, it is difficult to envisage too many circumstances where a health and safety manager would be likely to be found to be a Company Officer.

In my experience, health and safety managers are not typically amongst the senior echelon of executive managers, nor do they put proposals directly to the Board. To the extent that health and safety management proposals are put before a Board, they often come via a CEO or “sustainability” manager who puts their own imprimatur on the proposal.

So, to answer the first question – could health and safety managers be company officers for the purposes of the due diligence obligations? In my view, I cannot rule it out, but probably not.

As interesting (or otherwise) as this discussion might be, the more fundamental question is whether health and safety managers should be regarded as company officers – or at least have equivalent obligations of due diligence under safety legislation.

By way of comparison, there was, and continues to be, ongoing debate about how the mining industry in some parts of Australia will implement the principles of harmonisation.  At one point, a draft set of what were referred to as “non-core” mining regulations was prepared and, without going into the rationale behind or the operation of the non-core regulations, they did propose:

  1. The appointment of a senior person on a mine site who would be responsible for safety under the regulations – the Site Safety Executive, or SSE; and
  2. That the SSE would be “deemed” a company officer for the purposes of the health and safety regulations.

In doing this, the regulations were clear that the positive obligations of due diligence would apply to that position.

There seems to me to be no reason in principle why a similar approach could not be adopted in relation to health and safety managers. And if you look at the due diligence obligations as set out above, there is no reason that I can think of why you would not expect a health and safety manager to be across all of those requirements.

So, even if a health and safety manager may not be a company officer, there is no reason why they should not have the positive obligations of due diligence. After all, where would we expect the company officers to get the information to discharge their obligations, if not from the health and safety manager?

Transpacific Industries: Disciplinary action as a safety control

This is a case I have looked at before, and often use in management training to help explain the concept of reasonably practicable, and the relationship between reasonably practicable and the hierarchy of controls.

I was prompted to post it following the release of Safe Work Australia’s guidance material on reasonably practicable.

The case involved the prosecution of Transpacific Industries following a fatality in 2009. In an earlier, almost identical incident, Transpacific had responded to a breach of its procedures with what the Court described as “robust disciplinary action”. When the repeat incident occurred in 2009, the question argued was whether the earlier disciplinary action was a “sufficient response”: was it reasonably practicable? You can access the video discussion of the case here, and a copy of the case here.

To receive future updates and case studies, please subscribe to the blog, or follow me on Twitter.

Regards.

Greg Smith