Work as it is actually performed: investigating when nothing happens

There have been discussions and commentary in various online forums recently looking at the issue of “positive” incident investigations.  Although there seems to be a variety of nuances in the description of positive investigations, they all focus on “what went right“.

Some of these investigation models have also incorporated a broader management technique of “appreciative enquiry“, which, as I understand it, came to prominence in the late 1980s (see HERE for examples and information about appreciative enquiry).

The discussion about these frameworks describes the “what went right” philosophy as a positive view of investigations. It is a philosophy that does not focus on blame, but promotes discussion:

The benefit of that approach is that the conversation with witnesses is an entirely positive one. It is not about what could have happened. Not about the doom and gloom narrowly averted. Rather, it is about their heroic act, well designed process or lucky event that allowed us to avoid the adverse outcome. People love talking about positive things particularly if they had something to do with them. (https://www.linkedin.com/pulse/investigate-your-serious-near-misses-positive-way-michael-tooma?trk=prof-post)

 In my view, when organisations are not mature enough to talk about issues in a non-judgmental way, without attribution of blame, the “what went right” enquiry may present a risk.  It may be seen as a contrivance, with the facilitator spending a lot of their time saying things like “remember this is not about blame“.

In “mature” organisations the need to construct a system of enquiry to focus on the positive and avoid discussion of blame is largely redundant because the participants are aligned with and support the goals of the organisation.  Their desire to support the goals of the organisation overrides any petty, personal concerns about individual praise or blame.

If you have ever been privileged enough to work with high-performance sporting teams or elite military forces, you will understand this idea.

A precondition of belonging to these groups is the willingness to say and hear things that support the group’s objectives, without personal agendas or taking personal affront.  The newest member of the team has a licence to speak frankly about the performance of the most senior, and the most senior is expected to accept that conversation, not in the context of them personally, but in the context of the overall objectives of the team.

The extent to which organisations have to contrive a system whereby participants are corralled by a “what went right” narrative says a lot about the culture of an organisation and the “buy in” that people have to team objectives.

That is not to say that appreciative enquiry or investigating “what went right” has no place in organisations, or that it could not be an important building block along the way to developing something like an elite performing team.  But as a word of caution, you should also understand some of the paradoxes involved.

The Safety Paradox supposes that any initiative done in the name of health and safety has the potential to both improve and damage health and safety in a workplace.

Having sat through appreciative enquiry “management brainstorming sessions” and incident investigations, I have been left with a strong sense of a “flavour of the month” initiative, as well as an even stronger sense of avoided accountability.  The overriding impression is of a process delivered without context or explanation – why this, and why now?  The end product is a wall of butcher’s paper populated with sweeping motherhood statements and management speak, completely absent any meaningful desire to manage known problems.

The pendulum, it seemed, had swung too far the other way.

Again, that is not to say the idea should not be explored and applied.  But it needs context.  It needs explanation; it needs skilful facilitation; and it needs, perhaps most importantly, dedicated and meaningful follow-up and implementation.  Otherwise?  Well, we have all been in “those” types of sessions.

Another aspect of the “what went right” investigations is the requirement for something to have occurred.  There needs to be an incident or near miss to trigger the enquiry.

A risk in the “what went right” enquiry (without more) is that it can contribute to the illusion of safety.

The illusion of safety is the gap between safety management as we imagine it in our organisation and what happens in practice.  Incident investigations can be a powerful tool in exposing the illusion of safety because they have the potential to illustrate the disconnect between what we think happens and what is happening.  By just focusing on “what went right“, particularly in near miss incidents, we may fuel the illusion of safety and create a narrative that our systems are working to protect us from these incidents – effectively papering over the cracks in the edifice.

While avoiding blame and promoting open discussion is important, so too is avoiding sugar-coating the situation.  Again, balance, transparency and genuine enquiry ought to be the goal.

I would like to suggest something different – investigating work as it is performed; investigating when nothing happens.

An investigation framework that I find useful uses systems analysis as opposed to causal analysis.

It supposes that organisations have systems and processes in place to prevent certain things from happening and tries to understand:

  1. What should have happened: how should these systems and processes have been applied in the particular case to prevent the particular thing from happening; and
  2. What happened: how was the work performed in the particular case.

From there, we identify and try to explain the “gap” between what should have happened and what did happen.

This framework is not concerned with “causation“.  All identified gaps are given equal attention and analysis, regardless of their potential causal relationship with the incident.  They are all important because they all represent a potential systemic weakness in safety management which, given a different factual matrix, could be causal.

The attractiveness of this framework is that it can help you identify systemic weakness when nothing has happened.
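For readers who think in more concrete terms, the gap analysis above can be sketched as a simple comparison between the work as specified and the work as performed. This is an illustrative sketch only – the task names are hypothetical examples, not part of any formal methodology, and a real review would of course involve interviews and observation rather than a checklist:

```python
# Illustrative sketch of the "gap" analysis: compare how work should have
# been performed (per the documented system) with how it was actually
# performed. All task names below are hypothetical examples.

should_have_happened = {
    "permit to work issued": True,
    "harness inspected": True,
    "anchor points verified": True,
    "supervisor sign-off": True,
}

what_happened = {
    "permit to work issued": True,
    "harness inspected": False,       # step skipped in practice
    "anchor points verified": False,  # step skipped in practice
    "supervisor sign-off": True,
}

# Every gap is recorded, regardless of whether it "caused" an incident:
# each one represents a potential systemic weakness.
gaps = [step for step, required in should_have_happened.items()
        if required and not what_happened.get(step, False)]

for step in gaps:
    print(f"Gap identified: {step}")
```

The point of the sketch is that the comparison can be run whether or not anything went wrong – the inputs are simply "work as specified" and "work as done", not an incident report.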

A few years ago I was involved in the prosecution of a client following a working at heights incident.  The incident and the various investigations that followed revealed the usual list of suspects:

  •  Training not followed;
  •  Procedures not followed;
  •  Risks not identified;
  •  Lack of supervision;
  •  Documentation not completed properly, and so on.

As part of working with that client, we applied the systems analysis framework to a range of other, similar high-risk work, including:

  •  Examples where the same task had been performed;
  •  Examples of different working at heights tasks; and
  •  Examples of other high-risk work tasks, including lifting operations and confined space entry.

In every case, the work had been performed “successfully“, without incident or near miss.

However, the analysis of the gap between how the work should have been performed and how it was performed demonstrated the same types of “failures” in the way that work was ordinarily performed as when the incident occurred.

In other words, even when work was “successful”, procedures were not followed, risks were not identified as well as they could have been, training was not complied with, documentation was not completed and so on.

The systemic weaknesses were not just present at the time of the incident.  They were characteristic of the way work was performed in the days and months previously.

The incident was not a one-off departure from an otherwise “good” system – it was simply evidence of broader, systemic failures.

Moreover, this system analysis approach highlighted weaknesses hidden by the traditional safety metrics – injury rates, action items closed out, hazards reported, management site visits, etc. – all of which were “green“.

I have applied this method of review from time to time over the years, where I have been able to convince clients of its value.  On every occasion it brings to light the gap between safety as imagined and safety in practice, lifting the veil on the illusion of safety.

In the Pike River Royal Commission, the Commission carefully examined Pike River’s system of incident investigation to understand if it “worked“.  They reviewed 1083 incident investigations and did a detailed examination of 436 of them.  Managers were subject to examination of their understanding of the investigation process, and ultimately the Commission found that “incidents were never properly investigated“.

You can see an example of the examination of management HERE.

Weakness in incident investigations, amongst other important systems elements, formed the basis of significant criticism of Pike River and its management:

 Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands.  Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that the health and safety management was not taken seriously enough at Pike.

 What do your philosophy and implementation of incident investigations say about you?

Risky Conversations, The Law, Social Psychology and Risk

New book by Dr Rob Long, Greg Smith and Craig Ashhurst

It is with pleasure that I can announce the publication of my new book, Risky Conversations, The Law, Social Psychology and Risk, which has been produced in conjunction with Dr Robert Long and Craig Ashhurst.

The book is also the 5th in Dr Long’s series on the Social Psychology of Risk.

Risky Conversations

The book is the result of three days of conversations between myself, Dr Long and Craig in February 2016, when we gathered together with Rick Long of InVision Pictures and recorded conversations on twenty-three topics in risk and safety. The recorded conversations were transcribed by Max and Sylvia Geyer, and then we wrote commentary into the margins of the book (see an example below).

The book is 160 pages, and the $49.95 price includes access to all the videos. In addition, a talking book of all the conversations can be purchased for $10.

The book can be purchased here: http://cart.humandymensions.com/?product_cat=books&paged=1

A sample of the Introduction and Chapter 1 can be downloaded here: Risky Conversations Chapter 1

You can see a sample of one of the videos here: https://vimeo.com/162034157

Perth Book Launch: A full launch will be held in Perth on 11 August, where all three authors will be present, in conjunction with a training day on the Social Psychology of Risk (to be held in conjunction with IFAP). Details to be announced soon.

Melbourne Book Launch: Kevin Jones (safetyatworkblog) will be launching the book in Melbourne on 27 July (lunch time on day two of the SEEK program). Places for the launch are strictly limited to 30 and can be secured by email to admin@humandymensions.com. Download the SEEK flyer here: http://cart.humandymensions.com/wp-content/uploads/2016/05/SEEK-Program-Human-Dymensions.pdf. All people participating in the SEEK program receive a complimentary copy of the new book.

Lead indicators: Reinforcing the illusion of safety

One of my biggest gripes about safety management over the past 20-plus years is the lemming-like fascination with “indicators“.

Notoriously, major inquiries around the globe have found that when organisations focus on “lag” indicators (typically personal injury rates) they miss, or become blinded to, more significant risks and catastrophic events often result.

Most recently, this was succinctly articulated by the Pike River Royal Commission which stated:

The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents.  … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries.  … The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.

I have long feared, and it appears, that we are heading down the same path under the guise of “lead” indicators. A recent study described in the Queensland Government’s eSafe newsletter found serious shortcomings in using traditional lag indicators for measuring safety.

Nothing surprising there!

Apparently, the study went on to note a range of leading indicators that helped to deliver good personal injury performance. These indicators included fairly commonplace practices such as:

  • subcontractors being selected based (in part) on safety criteria.
  • subcontractors submitting approved, site-specific safety programs.
  • the percentage of toolbox meetings attended by supervisors and managers.
  • the percentage of planning meetings attended by jobsite supervisors and managers.
  • the percentage of negative test results on random drug tests.
  • the percentage of safety compliance on jobsite safety audits (inspections).

And so on.

I am not saying that any of these indicators are not good safety practices. They are. They can be measured as indicators of good safety practice – but they are not a measure of a safe workplace. They are not an indicator of risks being controlled.

The problem with any general “indicator” approach, lead or lag, is it does not actually give us any insight into whether the risks in the business are being controlled. It simply perpetuates the illusion of safety.

In other words, I have a bunch of indicators. The indicators are being met. Therefore, the risks in my business are being controlled.

Nonsense.

Think of a potential fatal risk in your business. Take confined spaces as an example.

What do any of the indicators described above tell you about whether that risk is being controlled? Typically nothing.

What are the crucial systems in your business?

How do you prove that they are effective?

Delphic motherhood statements part 2 – safety documents that nobody can understand

A little while ago I did a post looking at the complexity of documented safety management systems, and the role that documentation has played in undermining effective safety management. You can review the post here.

I was recently sent an article (you can access it here) which underscores the potential negative impact safety documentation has on safety performance.

The New Zealand research found that:

  • Two thirds of employees did not fully understand information contained in health and safety documents, including safety procedures;
  • 80% of employees were not able to accurately complete hazard report forms; and
  • Safety documents were highly complex and used vocabulary that employees did not understand.

A fascinating aspect of the research is that it provides a list of words that were unfamiliar to and confused employees. Some of those words included “significant hazards”, “competence”, “accountabilities” and “not adversely affect” – all words that reflect the requirements of legislation and guidance material but have little place in the day-to-day comprehension of workers.

From my own perspective, I have to say that this research is entirely consistent with my study of major accident events going back 30 years. Every major accident enquiry that I have ever researched has identified that, in some way, the documented safety management systems undermined effective safety performance. Typically, they are too complex for the people who have to implement them to understand.

Based on my experience, I would add two further phrases to the list of unfamiliar words: “reasonably practicable” and “root cause”. These two phrases are ubiquitous throughout safety management documents in Australia, yet universally, whenever I am conducting obligations or investigation training, there is no common (much less “correct”) understanding of what these things mean.

There are two things that I find professionally embarrassing as a person who has spent the last two decades specialising in safety and health management. The first is our continued reliance on lost time injury data as a measure of safety performance, in light of the overwhelming evidence that it adds no value to our understanding of the management of risk.

The second is that, despite at least 30 years of “reminders” that our documented safety processes add little to the management of safety risks, almost universally we continue to do the same thing, in the same way, but somehow expect a different result. I think Einstein had something to say about that.

I have recently been working with a senior executive in an organisation who confronted a safety consultant with the following:

“If you can’t explain it to me easily, then you don’t understand it yourself.”

An interesting test to apply to our safety documents?

Safety risk and safety data: Exploring management line of sight

I have recently done a video presentation on a fatality at the Adelaide Desalination plant, which you can find by following this link.

Recently, I was reading some of the transcript of the South Australian Senate Inquiry into the desalination plant (which you can find by following this link), and was struck by one manager’s description of all of the activity undertaken in the name of safety:

We start with the inductions when new staff join the project. So, at 6.30am, usually three times a week—I attend probably two of them; I was in one yesterday—we induct new staff onto the job. The first thing I point out is the list of non-negotiables. The second thing I point out is for each person to look after their mate. It starts there. We then have a standard list of documents. I will read from this list, because it’s quite a large list. There is the HSC risk register, task specific for each job. There is a construction execution plan. There is a JSA, task specific.

We have daily start cards for each area, which is another thing I introduced. I am not sure if we gave you a copy, but it’s a small easily-filled-in card where a work team can assess the risks of adjacent trades, etc. So, that is a specific thing. We have a pre-start meeting every day. There are SafeWork instruction notices posted at each of the work areas. We toolbox the job weekly, because the pace of this job changes. You can go out there in two-day gulps and the whole access can change, so we need to make sure people see that. We have the non-negotiables in place. We have site and work-front specific inductions, which is what I told you about. Again, I attended one yesterday.

I have regular safety walks. I have trained all of my management team and the two layers beneath that to go on safety walks. We have our OHSC risk register. There is a just culture model in place. So, if I need to address an incident and it turns out that this person needs retraining or perhaps needs to be disciplined or work outside the fence somewhere, we use this just culture model for that. We have all been trained in that. There are safety KPIs for management. There is a safety enhancement committee, which is a mixture of workers and staff. I actually chair a weekly safety leadership team, and that’s improving safety over and above. We are looking to refresh it all the time. And so it goes on. I have two pages of this stuff.

Now, there may have been far more information that sat behind all of this activity, but it seemed to me to be a typical approach to safety management – and one that typically gives no insight into whether the risks in the business are actually being managed.

One of my particular areas of interest in the context of safety management is “management obligations”, and more particularly how managers (at all levels) get assurance that the health and safety risks in their business are being effectively managed. It is a concept that I have referred to before and written about (Smith, 2012) as “management line of sight”.

An area of speciality for me is management obligations training; courses that are designed to help managers understand their legal obligations for safety and health, and how their behaviour – what they “do” – contributes to effective safety management.

Over the last 3 or 4 years I have put the following scenario to the various courses:

Who here knows about a risk in their business or area of responsibility that could kill someone?

Invariably, most hands go up.

Who has safety information that comes across their desk on a regular basis?

Again – most hands go up.

OK. What I would like you to do is to think about the risk. Then I want you to think about the data that you have looked at in the past 3 months.

Pause ……

What does that data tell you about how well the risk is being controlled?

And then the lights come on, with the realisation that their organisations spend inordinate amounts of time and resources producing volumes of information that tell them nothing about whether risks in the business are actually being controlled.

This “gap” was most recently highlighted in the Royal Commission into the Pike River Coal Mine Disaster (Pankhurst et al., 2012), in which 29 men died in an underground coal mine explosion in New Zealand. The Royal Commission noted the following:

The statistical information provided to the board on health and safety comprised mainly [LTI rates]. The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries.

… The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation. (My emphasis).

Typically, in a training course discussion there is no meaningful consensus on what the “crucial systems” are in a business, much less how we prove that they are effective.

What we can say with a high degree of certainty is that traditional measures of safety performance do not prove the effectiveness of crucial systems – certainly LTI and other personal injury rates do not, and we have known that for at least 25 years. However, other indicators are equally poor at creating insight into the control of crucial systems. The number of management site visits does not enlighten us, nor does the number of audit actions closed out; the number of “behavioural observations” does not help, nor does the number of people trained, the number of corrective actions completed, the number of JHAs or “take 5s” done, and on it goes.

These things are all indicators of activity, which are designed to ensure that the safety management systems are effective, but ultimately, they leave us in no better position as far as understanding the effectiveness of crucial systems.

There is another interesting challenge that falls out of exploring management line of sight, and that is, what should I be looking at?

Historically, and as I touched on above, we typically consider safety in the context of harm and risk: what can hurt people and how likely is it that they will be hurt? But line of sight and assurance demands a wider gaze than hazards and risks.

The Royal Commission (2012, volume 2, p. 176) also stated:

Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands. Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that health and safety management was not taken seriously enough at Pike. (my emphasis)

“Crucial Systems” mean more than gas monitoring or ventilation. They are more than the control of physical risks. They incorporate broader organisation systems around hazard identification and risk assessment, contractor safety management, management of change, incident investigation and so on. All elements that are designed to work together so that the “system” as a whole is effective to manage risk.

If organisations are weak insofar as they cannot “prove” that physical risks are being controlled, the reporting, assurance and line of sight to prove that these other “crucial” systems are effective is almost non-existent.

When was the last time you received a report “proving the effectiveness” of your incident investigations, for example?

What are the “crucial systems” in your business, and how would you “prove” that they were effective?  Food for thought.

References

Pankhurst, G., Bell, S., & Henry, D. (2012). Royal Commission on the Pike River Coal Mine Tragedy. Wellington, New Zealand.

Smith, G. (2012). Management Obligations for Health and Safety. CRC Press, Boca Raton.

25 Years on: Remembering Piper Alpha

In the past few weeks I have been asked to do presentations and share my views about the legacy of Piper Alpha in this, the 25th anniversary year of the disaster.

For me, the positive legacy is the advancement in safety regulation, engineering and “safety in design” that has seen the improvement of the physical safety of high hazard workplaces. Safety in design has also improved the “survivability” of disasters, so that when accidents do occur, their consequences are better mitigated.

The ongoing disappointment, however, is the persistent failure of management oversight and assurance to properly understand if health and safety risks are being managed. This is a failure that has played out in every major accident inquiry since Piper Alpha and continues to undermine effective safety management.

You can see a video presentation of these ideas and concepts here.

Unfair dismissal, delphic motherhood statements and other observations on safety documentation

Delphic adj. ambiguous, enigmatic, obscure. Also Delphian [L Delphicus, from Gk Delphikos of Delphi (the ancient city in central Greece, famed for its oracle of Apollo, which was noted for giving ambiguous answers.)] (The Macquarie Dictionary)

Let me apologise for the somewhat ‘delphic’ nature of the title of this blog, but it is an accurate description of a recent case and other findings, which has led to this observation on safety documentation. Although, I might say, it is less an observation and more an update on, or restatement of, a long-running concern that I have had about how safety documentation continues to actively undermine our efforts to create safer workplaces.

First are the recent unfair dismissal proceedings in Paul McGrath and Maitland Hayward v Sydney Water Corporation t/as Sydney Water [2013] FWC 793.

The case involved two workers who had their employment terminated after apparently breaching their company’s ‘lock out/tag out’ (LOTO) procedures. In the end, the termination was overturned and they were reinstated to their original positions. A number of the reasons for the decision turned on matters particular to Australian unfair dismissal laws, and included things such as:

• The long period of service of the two workers.

• The long and unblemished safety records of the two workers.

• The workers’ age and work history.

• The impact of the termination on the workers.

• The workers’ remorse.

However, the quality of the LOTO procedure was also a factor. The tribunal dealing with the claim noted:

• The LOTO procedure did not expressly detail the steps required to be taken to isolate power sources.

• The LOTO procedure requires formal training every two years, which did not occur.

The tribunal noted that there was “some attraction” in a submission that Sydney Water was itself in breach of its own policy.

• Evidence of experienced electricians was that the LOTO procedure was, at best, unclear, and at worst, confusing.

The tribunal noted that Sydney Water seemed to acknowledge this problem by undertaking extensive retraining of its employees on the procedure, because employees were concerned that they could also be subject to disciplinary action for a breach of the procedure.

The issue of the quality of safety documents in a safety context was also explored in the Royal Commission into the Pike River Coal disaster. The Commission noted in that case:

By November 2010 there were over 398 documents in the electronic system. Of these 227 were in draft as they were not signed off by two managers, although they were still used in the meantime. The number, and length, of the documents posed a challenge to the credibility of the system.

Although many of the documents were helpful, there were problems, not only with the sheer volume of material, but also with some of its content. For example, in 2010 two consultants and a Pike manager assessed the ventilation management plan and concluded it needed a complete review. (Volume 1, page 73)

Unfortunately, there is nothing surprising in this. The quality of safety documentation has been implicated in most major disasters for the past 25 years. And again, unfortunately, the response of the safety profession (and others) seems to be to keep doing the same thing and expect a different result. A few examples should suffice to make the point:

Longford Royal Commission: Fire and explosion at Esso’s gas plant in Longford, Australia. Two fatalities.

Esso’s [safety management system], together with all the supporting manuals, comprised a complex management system. It was repetitive, circular, and contained unnecessary cross referencing. Much of its language was impenetrable. These characteristics made the system difficult to comprehend, both by management and by operations personnel. (Page 200)

Montara Commission of Inquiry: Uncontrolled release of hydrocarbons off the north-west coast of Australia on 21 August 2009. No fatalities.

A number of aspects of PTTEPAA’s Well Construction Standards were at best ambiguous and open to different interpretations. The fact that a number of PTTEPAA employees and contractors interpreted aspects of the Well Construction Standards differently illustrates the ambiguity and inappropriateness of the Well Construction Standards. (Page 9)

The Deepwater Horizon: Fire, explosion and uncontrolled release of hydrocarbons in the Gulf of Mexico in April 2010. 11 fatalities.

If you look at the [Transocean’s safety] manual, you’re really impressed by it. It’s a safety expert’s dream. Everything anybody could ever imagine is in there. …because as one looks at it, everything under the sun is covered. It’s hard to see at a particular place somebody saying symptoms of that or this. If you see that, do this. This is not said by way of criticism. People have tried like hell in this manual to get it right. But it may be that when time is short, there might have been different ways to make clear exactly what should have been done in a short period of time. (Page 168-169)

I do not have any firm evidence about why this continues to be a problem, but I do have a number of observations based on my experiences over the past couple of decades.

Some of the issues appear to be systemic; for example, it does not seem to me that many health and safety professionals receive training in writing quasi-legal documents – which is, ultimately, what safety management documentation is.

Another issue is the continuous “layering” of the safety documentation. This is often evident after an incident where the automatic response appears to be to amend or write a new procedure. More often than not, this is done without actually understanding why the initial procedure failed. Over time, this builds a volume of safety documentation incapable of being implemented.

However, the biggest concern I have observed in the last three or four years in particular is the ubiquitous “thumb drive” or USB stick. More and more we are observing safety documentation that has not been developed for a business or a particular risk, but rather has been cut and paste from some other organisation with no real regard for its application.

When you consider the quality of safety documentation in a general sense, it is not unreasonable to conclude that nobody is really reading these documents with any care or attention. I find it extraordinary how often I have to deal with safety management plans and other documents that contain the name of a totally unrelated company. Clearly the document is nothing more than a cut and paste from some other plan, but it has been signed off by four, five or more different managers – yet even something as basic as the wrong company name in the documentation is not being picked up. If the people responsible for developing and approving these documents are not reading them, why on earth would we expect the workforce to?

OK. So what does this have to do with the oracle of Apollo? It is taken from the Montara Commission of Inquiry:

The Inquiry also considers that (i) the Hazard Identification (HAZID) workshops which were conducted between PTTEPAA and Atlas to identify and manage risks at Montara; and (ii) the Safety Case Revisions/SIMOPS Plans which were produced by both entities, were pitched at far too great a level of generality. For instance, the workshops and documents did not deal in any specific way with management of barriers. Moreover, the SIMOPS documents were replete with delphic ‘motherhood’ statements, such as the following:

Safety management in the field is primarily the responsibility of the Vessel Masters/Superintendents, FPSO OIM, Rig OIM and WHP Person In Charge (PIC). The prioritisation of all activities in the Montara field is the responsibility of the PTTEPAA Project Manager. However, control of the individual activities during the field development remains with the relevant supervisors.

All parties in the Montara field development shall have clear structuring of HSE interfaces to ensure that there is no confusion as to: approval authority; roles and responsibilities of personnel; organisational structures, management of HSE; operating procedures; reporting structures; and SIMOPS. (page 135)

In the end, if our safety documentation does not provide good guidance about how the health and safety risks in the business are to be managed, what value does it add? And if it cannot be understood by the people expected to implement it, if it creates ambiguity and confusion, it is not overly pessimistic to think that it could be undermining our efforts to create safer workplaces.