When does the language of “zero harm” become unlawful?

I am not a fan of the language of “zero”, either as an aspiration or as a stated goal. It has never sat well with me, and it seems so disconnected from day-to-day reality, in both society and the workplace, that people cannot help but become disconnected from, or dismissive of, the message behind the term. My view has always been that the language of zero often undermines the very objectives it is trying to achieve (see this case for example).

If you are interested in this topic (and if you are involved in safety, you should be), there are far more passionate, learned and articulate critics of the language of zero than me – see, for example, anything by Dr Robert Long.

However, recently I have been asked to do quite a bit of work around psychological harm in the context of occupational safety and health. In particular, how the legal risk management of psychological harm in the context of safety and health might differ from the Human Resources (HR)/employee relations context.

WHS legislation around Australia expressly includes “psychological” health within its remit and the Western Australian Department of Mines and Petroleum has acknowledged that they regard “health” as including “psychological” health, even though it is not expressly described in the State’s mining legislation.

What has emerged, at least to my mind, is the extent to which our policy, procedure and policing approach to safety and health, far from alleviating psychological harm in the workplace, might be contributing to it.

Safety management might be part of the problem.

In an ongoing Western Australian inquiry into the possible impact of fly-in/fly-out work on “mental health”, the Australian Medical Association identified that the way health and safety is managed can contribute to a “distinct sense of entrapment” (page 43):

The AMA also expressed its concerns about this issue, noting that “[o]nerous rules, safety procedures and focus on achievement of production levels have been shown to create a distinct sense of entrapment in FIFO workers.”

The inquiry drew, in some measure, on an earlier report, the Lifeline WA FIFO/DIDO Mental Health Research Report 2013, which also appeared to note the adverse impact of safety and health management on psychological well-being. For example, “[a]dhering to on-site safety rules” was identified as a workplace stressor (page 77). Interestingly, the Lifeline report noted a sense of “intimidation” brought on by the number of rules and regulations associated with work on a mine:

This sense of intimidation was further mirrored in the outcomes of mining safety regulations which in theory were designed to care for workers but in practice led to inflexible regulation over genuine safety concerns (page 81).

Examples from the Lifeline report include:

… a participant recalled a situation in which a worker handling heavy loads required an adhesive bandage but was unable to ask someone to get them for him because he had to fill out an accident report first (which he was unable to do mid-job); hence he had to carry on working without attending to his cuts. Alternatively, another example of the application of safety rules in an inflexible manner was illustrated when a group of workers were reprimanded for not wearing safety glasses on a 40 degree day even though they could not see from them due to excessive sweating. Hence, safety rules themselves were accepted as a necessary part of work but their implementation in an inflexible uniform manner created stress as workers felt their impact hindered their ability to conduct basic work tasks safely and/or without attracting rebuke. Hence, site rules and regulations could translate into arbitrary and punitive forms of punishment, which undermined participants’ ability to fulfil jobs to their satisfaction and left them feeling insecure with their positions (page 81).

It seems, then, that we need to think beyond our own perceptions of what might contribute to workplace stress and understand the impact that our efforts to manage health and safety might actually be having. Again, as the Lifeline research noted:

… although past research has shown that site conditions and cultures, such as isolation and excessive drinking are problematic, this research shows that the regimented nature of working and living on-site also takes a toll on mental health and wellbeing. From the responses of many participants, it was apparent that following site safety rules (either under pressure of internal monitoring or in the perceived absence of adequate safety precautions by co-workers and supervisors) was a significant stressor. Participants felt unable to apply self-perceived common-sense judgments and also reported feeling vulnerable to intensive scrutinising, intimidation and threats of job loss (page 82) [my emphasis added].

The common criticisms of the language of “zero” seem to me to go directly to the factors that have been identified in this research as contributing to psychological harm in the workplace. The pressure to comply with rules, fear about reporting incidents, the inability to exercise individual judgement on how to manage risk and the inflexible application of process are all side-effects of the language of “zero“.

Up until this point, the debate around “zero harm” and its utility (or otherwise) as the headline for safety management has been relatively benign. Apart from the advocacy of people like Dr Robert Long, “zero harm” seems to have been perceived as a relatively neutral strategy, insofar as people believe that it “does no harm” and ask, “what’s the alternative?”.

It seems, in fact, that much harm may be perpetuated in the name of “zero“, and at some point the behaviours that it drives will be found to be unlawful.

It is also going to be interesting to see how health and safety regulators, often the champions of “zero harm”, oversee its potential impacts on psychological harm in the workplace. Indeed, it would be very useful to see what risk assessments, research or other measures regulators undertook before introducing “zero harm” style campaigns or messages, to understand the potential effects of their interventions, and whether any subsequent research examined the harm they may have done.


Boal v BHP – Zero Tolerance: Are your “systems” commensurate with your attitudes?

I am not a fan of the language of “zero”, either as an aspiration or as a stated goal. It has never sat well with me, and it seems so disconnected from day-to-day reality, in both society and the workplace, that people cannot help but become disconnected from, or dismissive of, the message behind the term. My view has always been that the language of zero often undermines the very objectives it is trying to achieve.

If you are interested in this topic (and if you are involved in safety, you should be), there are far more passionate, learned and articulate critics of the language of zero than me – see, for example, anything by Dr Robert Long.

However, as a lawyer who specialises in workplace safety and health, I was very interested in a recent decision of the Fair Work Commission that demonstrates how an employer’s attitude of zero can be used against it.

In Mr Shannon Boal v BHP Coal Pty Ltd (U2014/5272), Mr Boal was dismissed for breaching mobile phone usage requirements when his mobile phone was found in the cabin of the truck he had been operating.

While the Fair Work Commission found that there was a valid reason to terminate Mr Boal’s employment, it found that the termination was unfair for a number of procedural reasons. In part, the Fair Work Commission relied on the level of training and information that Mr Boal had been provided about the relevant procedure.

The training documentation provided did not clearly demonstrate that employees were trained in this new procedure and signed accordingly, or that it was given a significant roll-out to employees commensurate with their ‘zero tolerance’ attitude to incidents of breaches, given how this case has been pursued (my emphasis added).

Whatever your view about zero as appropriate language for managing workplace health and safety issues, there is no doubt that it is strong, absolutist language. If you are going to frame your approach to safety in the rhetoric of zero, you need to be sure that your actions at work match the rhetoric. If you don’t, your workers will see your safety messages as nothing but “window dressing“, designed to look good but basically meaningless, and lawyers will use the term to undermine the efficacy of your systems.

Lead indicators: Reinforcing the illusion of safety

One of my biggest gripes about safety management over the past 20-plus years is the lemming-like fascination with “indicators”.

Notoriously, major inquiries around the globe have found that when organisations focus on “lag” indicators (typically personal injury rates) they miss, or become blinded to, more significant risks and catastrophic events often result.

Most recently, this was succinctly articulated by the Pike River Royal Commission which stated:

The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents.  … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries.  … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.

I have long feared that we are heading down the same path under the guise of “lead” indicators, and it appears that we are. A recent study described in the Queensland Government’s eSafe newsletter found serious shortcomings in using traditional lag indicators to measure safety.

Nothing surprising there!

Apparently, the study went on to note a range of leading indicators that helped to deliver good personal injury performance. These indicators included fairly commonplace practices such as:

  • subcontractors being selected based (in part) on safety criteria.
  • subcontractors submitting approved, site-specific safety programs.
  • the percentage of toolbox meetings attended by supervisors and managers.
  • the percentage of planning meetings attended by jobsite supervisors and managers.
  • the percentage of negative test results on random drug tests.
  • the percentage of safety compliance on jobsite safety audits (inspections).

And so on.

I am not saying that any of these indicators are not good safety practices. They are, and they should be measured as markers of good safety practice – but they are not a measure of a safe workplace. They are not an indicator of risks being controlled.

The problem with any general “indicator” approach, lead or lag, is that it does not actually give us any insight into whether the risks in the business are being controlled. It simply perpetuates the illusion of safety.

In other words, I have a bunch of indicators. The indicators are being met. Therefore, the risks in my business are being controlled.


Think of a potential fatal risk in your business. Take confined spaces as an example.

What do any of the indicators described above tell you about whether that risk is being controlled? Typically nothing.

What are the crucial systems in your business?

How do you prove that they are effective?

Be careful what you wish for: Is the safety “profession” being sidelined?

Let me start with a few caveats and/or disclaimers.

First, this post has been inspired by a number of articles and discussions online about the “practice” of safety and the safety “profession”. I only qualify “profession” because of the amount of disagreement about whether people who advise about safety are professionals, and about what their role is.

Second, this is both an outside-looking-in discussion (I am a lawyer by training, and have practised for the last 20+ years) and an inside-looking-out one (I run a combined law firm and safety consultancy, specialise in safety management and have been a senior health and safety adviser in the oil and gas industry).

Finally, this is a top-down view. Most of what I do these days is designed to help senior executives and boards understand whether their health and safety risks are being controlled. Part of this means that I spend a lot of my time challenging safety managers to show that what they do works.

With all of that on the table, the point of this post is to raise an observation: It seems to me that more and more safety managers and advisors are being excluded from the table.

When it comes to decisions and strategies that organisations genuinely care about, the safety manager is simply not present.

This first struck me about 10 years ago. At the time I was involved in a depressingly high number of matters involving fatalities, and in every case the decision-makers dealing with the fatality did not include the senior safety advisor. General managers, CEOs, HR managers, CFOs – they were all there, but not the safety manager.

Thinking about this further, in more recent times (about the past 5 years) I have noticed two key trends.

One is the due diligence debate.

In Australia a number of jurisdictions have introduced legal obligations on “company officers”, requiring them to demonstrate personal due diligence with respect to the management of health and safety risks.

It seems to me that this “debate” has been hijacked by lawyers (apologies to my friends in the law) and that safety advisors are completely ill-equipped to deal with the issue. If you look online at the articles, commentary and observations about due diligence in Australia, it is a discussion dominated by lawyers.

My personal experience is that I am often called into an organisation only because the senior safety advisor cannot explain the due diligence obligations to the executive managers, nor clearly define a way to manage those obligations.

What this (it seems to me) has led to is:

  1. Safety advisors are not driving key executive management initiatives that impact safety; and
  2. Safety management has become little more than “legal risk management”, which may or may not drive good safety performance.

The other observation comes from the oft-quoted phrase “safety is a line management responsibility”.

It seems that a number of businesses are embracing that philosophy and using it as a driver to reduce the health and safety “head count“.

Again, my personal experience is a high level of frustration from businesses about the poor quality of service from health and safety advisors. It is one thing to push responsibility for safety onto line management; it is another thing altogether to use that as an excuse for not doing your job – a perception with which the safety profession has been burdened.

While senior executive accountability and line management responsibility are cornerstones of effective safety performance, the safety profession must define its role and place in that framework.

We must also add value.

Hiding behind mountains of paper, unrealistic and unachievable aspirational safety objectives, jargon and an inability to explain the relationship between what we do and how it adds value all contribute to marginalising safety advisors from management and the workforce.

What value do we add?

Delphic motherhood statements part 2 – safety documents that nobody can understand

A little while ago I did a post looking at the complexity of documented safety management systems, and the role that documentation has played in undermining effective safety management. You can review the post here.

I was recently sent an article (you can access it here) which underscores the potential negative impact safety documentation has on safety performance.

The New Zealand research found that:

  • Two-thirds of employees did not fully understand information contained in health and safety documents, including safety procedures;
  • 80% of employees were not able to accurately complete hazard report forms; and
  • Safety documents were highly complex and used vocabulary that employees did not understand.

A fascinating aspect of the research is that it provides a list of words that were unfamiliar to, and confused, employees. Some of those words included “significant hazards”, “competence”, “accountabilities” and “not adversely affect” – all words that reflect the requirements of legislation and guidance material but have little place in the day-to-day comprehension of workers.

From my own perspective, I have to say that this research is entirely consistent with my study of major accident events going back 30 years. Every major accident inquiry that I have ever researched has identified that, in some way, the documented safety management system undermined effective safety performance. Typically, such systems are too complex for the people who have to implement them to understand.

Based on my experience, I would add two further phrases to the list of unfamiliar words: “reasonably practicable” and “root cause”. These two phrases are ubiquitous throughout safety management documents in Australia, yet whenever I am conducting obligations or investigation training there is no common (much less “correct”) understanding of what they mean.

There are two things that I find professionally embarrassing as a person who has spent the last two decades specialising in safety and health management. The first is our continued reliance on lost time injury data as a measure of safety performance, in light of the overwhelming evidence that it adds no value to our understanding of the management of risk.

The second is that, despite at least 30 years of “reminders” that our documented safety processes add little to the management of safety risks, almost universally we continue to do the same thing, in the same way, but somehow expect a different result. I think Einstein had something to say about that.

I have recently been working with a senior executive in an organisation who confronted a safety consultant with the following:

“If you can’t explain it to me easily, then you don’t understand it yourself.”

An interesting test to apply to our safety documents?

Transpacific Industries: Disciplinary action as a safety control

This is a case I have looked at before, and often use in management training to help explain the concept of reasonably practicable, and the relationship between reasonably practicable and the hierarchy of controls.

I was prompted to post it following the release of Safe Work Australia’s guidance material on reasonably practicable.

The case involved the prosecution of Transpacific Industries following a fatality in 2009. In an earlier, almost identical incident, Transpacific had responded to a breach of its procedures with what the Court described as “robust disciplinary action”. When the repeat incident occurred in 2009, the question argued was whether the earlier disciplinary action was a “sufficient response”: was it reasonably practicable? You can access the video discussion of the case here, and a copy of the case here.

To receive future updates and case studies, please subscribe to the blog, or follow me on Twitter.


Greg Smith

Unfair dismissal, delphic motherhood statements and other observations on safety documentation

Delphic adj. ambiguous, enigmatic, obscure. Also Delphian. [L Delphicus, from Gk Delphikos, of Delphi (the ancient city in central Greece, famed for its oracle of Apollo, which was noted for giving ambiguous answers)] (The Macquarie Dictionary)

Let me apologise for the somewhat ‘delphic’ nature of the title of this blog, but it is an accurate description of a recent case and other findings, which have led to this observation on safety documentation. Although I might say it is less an observation and more an update on, or restatement of, a long-running concern that I have had about how safety documentation continues to actively undermine our efforts to create safer workplaces.

First are the recent unfair dismissal proceedings in Paul McGrath and Maitland Hayward v Sydney Water Corporation t/as Sydney Water [2013] FWC 793.

The case involved two workers who had their employment terminated after apparently breaching their company’s ‘lock out/tag out’ (LOTO) procedures. In the end, the termination was overturned and they were reinstated to their original positions. A number of the reasons for the decision turned on matters particular to Australian unfair dismissal laws, and included things such as:

• The long period of service of the two workers.

• The long and unblemished safety records of the two workers.

• The workers’ age and work history.

• The impact of the termination on the workers.

• The workers’ remorse.

However, the quality of the LOTO procedure was also a factor. The tribunal dealing with the claim noted:

• The LOTO procedure did not expressly detail the steps required to be taken to isolate power sources.

• The LOTO procedure required formal training every two years, which did not occur.

The tribunal noted that there was “some attraction” in a submission that Sydney Water was itself in breach of its own policy.

• Evidence of experienced electricians was that the LOTO procedure was, at best, unclear, and at worst, confusing.

The tribunal noted that Sydney Water seemed to acknowledge this problem by undertaking extensive retraining of its employees on the procedure, because employees were concerned that they could also be subject to disciplinary action for a breach of the procedure.

The issue of the quality of safety documents in a safety context was also explored in the Royal Commission into the Pike River Coal disaster. The Commission noted in that case:

By November 2010 there were over 398 documents in the electronic system. Of these 227 were in draft as they were not signed off by two managers, although they were still used in the meantime. The number, and length, of the documents posed a challenge to the credibility of the system.

Although many of the documents were helpful, there were problems, not only with the sheer volume of material, but also with some of its content. For example, in 2010 two consultants and a Pike manager assessed the ventilation management plan and concluded it needed a complete review. (Volume 1, page 73)

Unfortunately, there is nothing surprising in this. The quality of safety documentation has been implicated in most major disasters for the past 25 years. And again, unfortunately, the response of the safety profession (and others) seems to be to keep doing the same thing and expect a different result. A few examples should suffice to make the point:

Longford Royal Commission: Fire and explosion at Esso’s gas plant in Longford, Australia. Two fatalities.

Esso’s [safety management system], together with all the supporting manuals, comprised a complex management system. It was repetitive, circular, and contained unnecessary cross referencing. Much of its language was impenetrable. These characteristics made the system difficult to comprehend both by management and by operations personnel. (Page 200)

Montara Commission of Inquiry: Uncontrolled release of hydrocarbons off the north-west coast of Australia on 21 August 2009. No fatalities.

A number of aspects of PTTEPAA’s Well Construction Standards were at best ambiguous and open to different interpretations. The fact that a number of PTTEPAA employees and contractors interpreted aspects of the Well Construction Standards differently illustrates the ambiguity and inappropriateness of the Well Construction Standards. (Page 9)

The Deepwater Horizon: Fire, explosion and uncontrolled release of hydrocarbons in the Gulf of Mexico in April 2010. 11 fatalities.

If you look at the [Transocean’s safety] manual, you’re really impressed by it. It’s a safety expert’s dream. Everything anybody could ever imagine is in there. …because as one looks at it, everything under the sun is covered. It’s hard to see at a particular place somebody saying symptoms of that or this. If you see that, do this. This is not said by way of criticism. People have tried like hell in this manual to get it right. But it may be that when time is short, there might have been different ways to make clear exactly what should have been done in a short period of time. (Page 168-169)

I do not have any firm evidence about why this continues to be a problem, but I do have a number of observations based on my experiences over the past couple of decades.

Some of the issues appear to be systemic. For example, it does not seem to me that many health and safety professionals receive training in writing quasi-legal documents – which is, ultimately, what safety management documentation is.

Another issue is the continuous “layering” of the safety documentation. This is often evident after an incident where the automatic response appears to be to amend or write a new procedure. More often than not, this is done without actually understanding why the initial procedure failed. Over time, this builds a volume of safety documentation incapable of being implemented.

However, the biggest concern I have observed in the last three or four years in particular is the ubiquitous “thumb drive” or USB stick. More and more, we are seeing safety documentation that has not been developed for the business or the particular risk at hand, but rather has been cut and pasted from some other organisation’s documents with no real regard for its application.

When you consider the quality of safety documentation in a general sense, it is not unreasonable to conclude that nobody is really reading these documents with any care or attention. I find it extraordinary how often I have to deal with safety management plans and other documents that contain the name of a totally unrelated company. Clearly the document is nothing more than a cut-and-paste from some other plan, yet it has been signed off by four, five or more different managers – and even something as basic as the wrong company name in the documentation is not being picked up. If the people responsible for developing and approving these documents are not reading them, why on earth would we expect the workforce to?

OK. So what does this have to do with the oracle of Apollo? It is taken from the Montara Commission of Inquiry:

The Inquiry also considers that (i) the Hazard Identification (HAZID) workshops which were conducted between PTTEPAA and Atlas to identify and manage risks at Montara; and (ii) the Safety Case Revisions/SIMOPS Plans which were produced by both entities, were pitched at far too great a level of generality. For instance, the workshops and documents did not deal in any specific way with management of barriers. Moreover, the SIMOPS documents were replete with delphic ‘motherhood’ statements, such as the following:

Safety management in the field is primarily the responsibility of the Vessel Masters/Superintendents, FPSO OIM, Rig OIM and WHP Person In Charge (PIC). The prioritisation of all activities in the Montara field is the responsibility of the PTTEPAA Project Manager. However, control of the individual activities during the field development remains with the relevant supervisors.

All parties in the Montara field development shall have clear structuring of HSE interfaces to ensure that there is no confusion as to: approval authority; roles and responsibilities of personnel; organisational structures, management of HSE; operating procedures; reporting structures; and SIMOPS. (page 135)

In the end, if our safety documentation does not provide good guidance about how the health and safety risks in the business are to be managed, what value does it add? And if it cannot be understood by the people expected to implement it, if it creates ambiguity and confusion, it is not overly pessimistic to think that it could be undermining our efforts to create safer workplaces.

Does safety culture undermine safety?

Like all safety initiatives, “safety culture” has within it the capacity to be both an enabler and an underminer of safety management and good safety performance. It seems to me that more and more of the initiatives undertaken in the name of safety culture, far from enabling our safety objectives, are actively undermining them.

It is perhaps worth starting the discussion with some definition of safety culture. Interestingly, it is a term bandied around in safety circles quite freely but without much evidence that everybody is talking about the same thing. For this discussion I will use the term safety culture in the context described by Hopkins (2005, p. 11), that is a “culture of safety” or an organisation that is focused on safety.

In this context, not all organisations have a safety culture; it is a conscious decision and something that you strive for.

This can be distinguished from “safety climate“. All organisations have a safety climate, and the safety climate may be weak or strong, good or bad and so on.

If we go back to the source, the term safety culture was first used in the International Nuclear Safety Advisory Group’s (1986) Summary Report on the Post-Accident Review Meeting on the Chernobyl Accident. There, safety culture was described as:

“That assembly of characteristics and attitudes in organizations and individuals which establishes that, as an overriding priority, nuclear plant safety issues receive the attention warranted by their significance.”

This is at the heart of what Lord Cullen (1990, p. 300) described in his inquiry into the Piper Alpha disaster:

“it is essential to create a corporate atmosphere or culture in which safety is understood to be and is accepted as, the number one priority”

Most recently, this notion of safety culture is implicit in the Royal Commission into the Pike River Coal Mine Disaster in New Zealand (Pankhurst et al., 2012, Volume 2, p. 176):

Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands. Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that health and safety management was not taken seriously enough at Pike.

It does seem however, that when many organisations talk about safety culture, in fact they are talking about safety climate, and whether the “environment” of the organisation is conducive to good safety performance.

What I have observed over the past 20 or so years is that initiatives bundled under the heading of safety culture do not help safety receive the attention warranted by its significance. Rather, they often divert attention away from safety and, in many cases, serve as excuses for not paying proper attention to serious health and safety risks.

Over and above this, the “window dressing” that often masquerades as safety culture contributes significantly to the “illusion of safety” (see for example Borys, 2009), creating an impression that health and safety risks are being controlled, when in fact there is no evidence to support that. Executive management see significant time and resources committed to initiatives branded as “safety culture”, and they see shifts in perception surveys which are somehow interpreted as indicators of safety culture, and more dishonestly as indicators of good safety performance.

This, unsurprisingly, creates the impression that safety is being effectively managed. The reality is seldom so, with no effective assurance that the health and safety risks in the business are actually being controlled.

Unfortunately, like many safety concepts, safety culture has:

  1. Become commercialised, as something that organisations have to purchase;
  2. Become commoditised, as a product that organisations can buy off the shelf;
  3. Been perceived as a silver bullet for all of our safety concerns. This is particularly apparent in the safety profession where the use of the term safety culture to describe underlying problems in safety management has become ubiquitous to the point of being embarrassing.

As a result, I have increasingly seen organisations, led by their safety managers, blindly pursuing the holy grail of safety culture (with no clear picture of what it even looks like) while significant health and safety risks remain unchecked.

Some key themes that I have seen emerge in the pursuit of safety culture are:

Safety culture as a product of perception surveys: The relentless pursuit of perception surveys in no way represents safety culture, nor does it represent an organisation giving safety the serious attention that it deserves.

The difficulty that I have with perception surveys as a measure of safety culture is that they provide no insight into whether or not risks are being controlled – they are after all no more than perceptions. Yet, somehow they are sold as an indication of a good safety culture and then, by some extraordinary leap of logic, proof of an effective safety management system.

They are neither. At best, they may give an insight into an organisation’s safety climate.

Perceptions can change (and can be changed) but this does not mean that the organisation is doing anything differently; it does not mean that the organisation has a culture focussed on safety, and it in no way means that an organisation’s health and safety risks are being controlled.

I am reminded of a lesson that I was taught by a very wise academic about 6 years ago. At the time, I was teaching a University program on accident prevention and had got my first report from my students on the amount of “feedback” I had provided to them during the semester. To my surprise, I had been marked very low in this area.

My friend asked me what I wrote to students when I sent back their papers, and I showed her. It was generally something like:

Please find attached a copy of your paper marked up with my comments ……

Her suggestion? Change it to:

Please find attached a copy of your paper marked up with my comments and feedback……

I did. Nothing else changed but my “rating” for providing feedback moved to over 90% approval.

The power of perception over action.

Glorified behavioural based safety programs as safety culture: The majority of programs that I have seen in the past that purport to be safety culture programs are nothing more than trumped up behavioural based safety programs. Whatever your view on the efficacy of these programs or their place in a good safety management program (and in my view they do have one), they do not represent safety culture.

One of the clearest indicators that these programs have nothing to do with safety culture is that they are directed almost exclusively at the workforce. Very seldom are middle management involved, much less senior management or executive management – and not a board member to be seen. To borrow from Carolyn Merritt[1]:

Thus when we talk about safety culture, we are talking first and foremost about how managerial decisions are made…

When these programs are described as something that they are not (safety culture) rather than what they are (targeted modification of workers’ behaviour), it is my view that they actively disengage the workforce from the organisation’s safety effort, and undermine any perception that might have existed that management was committed to safety.

An excuse or distraction from the real work of safety management – understanding that risks are being controlled: This is perhaps the most disconcerting aspect of the way that safety culture is being touted in organisations. It is held out as a safety catchall while the difficult work of understanding whether risks are actually being controlled is lost amid the management speak and motherhood statements that now define safety culture in practice.

Two examples spring to mind.

In the first, a worker discovered some fibrous materials at a worksite and was concerned that they were asbestos. In breach of all documented policies and procedures, the worker put the fibres into an empty plastic Coke bottle, put the Coke bottle into a yellow inter-office envelope and dropped it into the internal mail. To describe the investigation into the incident as superficial would be generous, but the ultimate conclusion was that despite all of these policy and procedure breaches the outcome was a good one because the worker, by raising his concerns, had acted in accordance with the expectations of the [insert name of commercial program] which demonstrated a good safety culture.

In an organisation with anything even remotely approaching a safety culture, I cannot imagine this incident being viewed as anything but an unmitigated failure of the safety management system and a failure of management to properly supervise and oversee that system in every important regard: incident investigation, hazard identification, training and competence, supervision and communication.

The mere fact that safety culture can be used to paper over such a fragile safety management system shows how far we have managed to move away from its original intention.

In the second case, a review of a contractor’s safety performance identified that the principal did not have a traffic management plan for vehicle movement in an open pit mine. This was described by the principal’s safety manager as a problem of safety culture. How absurd.

Rather than dress this failure up in some amorphous notion of culture (for which he offered no solution) the safety manager should have faced the reality that it was a complete failure by him to identify a significant risk, and then asked the obvious question: What other risks have I missed?

For such a fundamental control to be missing in a high hazard environment can only be regarded as a complete breakdown of the safety management system. It should also call into question the competence of the safety manager, but once again, safety culture was used as an excuse to avoid doing the hard work of understanding how the safety management system came to be in such a state of disrepair.

I have no doubt that safety culture properly understood by the highest levels of executive management and pursued at that level will help drive excellence in safety performance. The nonsense that we dress up as safety culture will not. It undermines our aspirational statements about health and safety, it disengages the workforce from the safety message of the organisation, it contributes to the illusion of safety and distracts us from the genuine hard work that needs to be done to understand whether the risks in our businesses are being controlled.


Borys, D. 2009. Exploring risk-awareness as a cultural approach to safety: Exposing the gap between work as imagined and work as actually performed. Safety Science Monitor, Issue 2, Volume 13.

Cullen, Lord. 1990. The public inquiry into the Piper Alpha disaster. Department of Energy. London: HMSO.

Hopkins, A. 2005. Safety, culture and risk: The organisational causes of disasters. Australia: CCH.

International Nuclear Safety Advisory Group. 1986. Summary report on the post-accident review meeting on the Chernobyl accident. Vienna: International Atomic Energy Agency. (See also http://www-pub.iaea.org/books/IAEABooks/3598/Summary-Report-on-the-Post-accident-Review-Meeting-on-the-Chernobyl-Accident)

Pankhurst, G., Bell, S., Henry, D. 2012. Royal Commission on the Pike River Coal Mine Tragedy. Wellington, New Zealand.

[1] Chairman and CEO of the US Chemical Safety and Hazard Investigation Board. Statement for the BP Independent Safety Review Panel on 10 November 2005, into the Texas City Refinery Explosion.

Zero harm and reasonably practicable: Inherently contradictory?

There is a lot of discussion in safety circles around the concept of zero harm and its place in safety management (see the following example).

On one side of the debate proponents of zero harm say that it is an “aspiration” rather than a hard target, and that aiming to prevent all injuries is the right thing to do.

On the other side, the argument suggests that the use of zero harm actively disengages the workforce from safety: “They” do not believe in zero harm and it is just another corporate slogan used as a substitute for addressing the workforce’s genuine safety concerns.

My personal view is that slogans like zero harm probably do undermine safety, at least insofar as an organisation’s conduct is inconsistent with its statements.

I might leave that part of the debate to people far better placed to argue it than me, but recently I was involved in a discussion with a client about zero harm in a slightly different, but related context.

Under its safety management system the client strove to achieve Zero Harm by managing all risks as far as “reasonably practicable”. This seems to be the stated aim of most organisations, to a greater or lesser degree.

What the client did not understand was that reasonably practicable is a legal concept that inherently recognises that accidents do happen.

The law is clear that it is not the responsibility of the employer to ensure that accidents never happen, rather an employer must do everything practicable to ensure that its employees are not exposed to hazards.

So, if we genuinely believe in zero harm, and want to do more than the minimum “legal” requirements, then surely we should adopt a philosophy or measure of safety performance that is consistent with what we want to achieve? Surely there is no place for the concept of reasonably practicable in an organisation that believes in zero harm?