NAT and HRO: NAT

Over the following three weeks, we will elaborate on and evaluate the theoretical debate between the two dominant schools of thought on the origins of accidents and reliability: Normal Accident Theory (NAT) and High Reliability Theory (HRT). This week we start with NAT.

NAT

Background

Charles Perrow’s initial formulation of what has come to be known as Normal Accident Theory (NAT) was developed in the aftermath of the accident at the Three Mile Island nuclear power plant in 1979. Perrow introduced the idea that in some technological systems accidents are inevitable, or “normal”. He defined two related dimensions, interactive complexity and loose/tight coupling, which he claimed together determine a system’s susceptibility to accidents.
Interactive complexity refers to the presence of unfamiliar, unplanned, or unexpected sequences of events in a system that are either not visible or not immediately comprehensible. A tightly coupled system is one that is highly interdependent: each part of the system is tightly linked to many other parts, so a change in one part can rapidly affect the status of other parts. Tightly coupled systems respond quickly to perturbations, but this response may be disastrous. Loosely coupled or decoupled systems have fewer or less tight links between parts and can therefore absorb failures or unplanned behavior without destabilization.

According to the theory, systems, companies, and organizations that combine interactive complexity with tight coupling will experience accidents that cannot be foreseen or prevented. Perrow calls such accidents system accidents.

Why is this important?

But how do we get from a nuclear meltdown to offshore or construction projects? Once you start to dissect them, some of the projects we manage exhibit both of these features in abundance.

Our industry constantly deals with risk analysis scenarios, foul weather, temporary infrastructure and communications, staff unfamiliar with their role or location, large-scale deployment of team members with low levels of training, contractors, supply chains and much more. Any single one of these elements has the potential to suffer a failure that might interact unexpectedly with another part of the system. There are so many variables for a project team to deal with, especially when you add the unpredictability of humanity into the mix, that it is easy to imagine the interaction of dozens of potential accident scenarios over the course of just a single project day. This is Perrow’s “interactive complexity” in a nutshell.

Perrow’s conclusion, however, that accidents are inevitable in these systems, and that systems whose accidents would have extremely serious consequences therefore should not be built, is overly pessimistic. The argument advanced is essentially that efforts to improve safety in interactively complex, tightly coupled systems all involve increasing complexity and therefore only render accidents more likely. The flaw in the argument is that redundancy is the only route to improved safety he considers. An alternative is to design organizations in which decisions are taken at the lowest appropriate level and coordination happens at the highest necessary level. In such systems, staff at all relevant levels are trained and, just as importantly, empowered and supported to take the potentially cost-saving and life-saving decisions.

Of course, this is easy to write and far harder to achieve: such is the complexity of the systems in which we operate that decisions by individuals throughout the chain can have far-reaching effects, themselves adding to the problems we seek to resolve. Through planning and training, however, key roles and positions can be identified where the fast pace of a tightly coupled system can be matched by equally fast decision-making.

Next week we will elaborate on and evaluate High Reliability Organizations (HRO), so stay tuned!

Sources:

Shrivastava, S., Sonpar, K. & Pazzaglia, F. (2009): “Normal accident theory versus high reliability theory: a resolution and call for an open systems view of accidents”

Marais, K., Dulac, N. & Leveson, N.: “Beyond normal accidents and high reliability organizations: The need for an alternative approach to safety in complex systems”, MIT

About the Author

Julie Hviid

jh@rocconsult.eu

