Relational coordination


When cooperating across sections of an organization, a range of potential problems can arise, mainly concerning communication. These problems result from clashes between different areas of expertise, levels of authority and cultural differences. In relation to this, Jody Hoffer Gittell, a professor in the field of management, has developed her theory of relational coordination. The theory is mainly focused on the public sector, but it is still applicable to international private organizations. Used as a tool, it can help analyse the interpersonal processes that could potentially be barriers to optimal efficiency. The theory has furthermore been the foundation for several Danish consultants, who have added their own contributions to it. Consultants such as Carsten Hornstrup argue that the definition of a good relationship is subjective, and that a given relationship can therefore be seen in two completely opposite ways. A relevant factor here is the individual's position of authority within the hierarchy of the organization, as leaders will often have a more positive outlook on the relationship.

Jody Hoffer Gittell has set up a positive and a negative spiral to illustrate what characterizes a positive and a negative relationship. The reason it is illustrated as a spiral is that the relationship builds on the communication and vice versa. There is therefore no real 'starting point'; one should try to improve one of the following aspects in order to strengthen the next, until the spiral comes full circle.

The theory of relational coordination is based on two dimensions: relations and communication. The quality of these dimensions is assessed as follows:

Relations:

  1. Mutual goals: A shared interpretation of the mission objective within the organization, where a task is solved based on a set of common, clarified goals. This also ties into the organization's vision, so it is crucial that everyone is on the same page regarding the overall goal.
  2. Mutual knowledge: To what degree are the different groups familiar with each other's professional fields and competences? This is not only about performing one another's lists of duties, but also about knowing and understanding them.
  3. Mutual respect: Whether the different groups feel acknowledged for their contribution to solving the common task. Here, higher-placed personnel may show a lack of respect for other groups, which ultimately affects the common engagement in a negative way.

Communication:

  1. Frequent and timely: This indicator concerns whether communication is timed correctly, happens often enough and is interpreted in a meaningful way. The overall coordination suffers if the communication is too frequent, too rare or timed incorrectly.
  2. Precise and problem-solving: Is the communication constructive, practical and relevant? The task needs to be presented in a way that is comprehensible to the receiver and needs to address the actual issue at hand.

Business Impact Analysis


There will often be many moving parts within an organization. Some may be critical to the organization's infrastructure, while others are less essential to the survival of the company. When conducting a Business Impact Analysis (BIA), one needs to consider what actually brings value to the company. A company's wealth and value are determined not only by its monetary value but also by its cultural and social values. First, we need to establish 'what value are we creating?' and thereafter 'who do we create value for?' in order to get an idea of the organization's output and paint a picture of the overall process.

By working through the following steps, we can systematically review the elements relevant to our company's value creation. The steps are as follows:

  1. Value creation: Who are we creating value for? To understand the business model, we also need to identify potential hazards that could disrupt our operations. In this step, models such as Porter's Value Chain and the Business Model Canvas can be used.
  2. Identification of critical activities: In this step we pool a number of processes which together constitute an activity. For example, the production line creates value for us, so we need to recognize where potential disruptions within this production line would be critical for our operations.
  3. Mutual dependencies: Which activities rely on each other to function? Here it is also relevant to consider how dependent we are on our suppliers. Do we have an alternative supplier in case our Tier 1 supplier is unable to perform their part?
  4. The robustness of critical activities: How do we test our robustness? In this step we test the minimal operative level. For example, if the power is out, can we still keep an overview of our logistics on paper rather than electronically? The system's robustness is defined by its ability to absorb disruptive events while keeping our operative integrity. An analysis can be conducted by:
     • Identifying vulnerabilities/minimum operational levels.
     • Identifying where an increase in resources can strengthen our robustness.
     • Running different types of exercises (e.g. testing contingency plans).
  5. Internal and external resources: The resources that the company relies on, such as:
     • Infrastructure: roads, stand-alone systems.
     • Physical resources: storage/inventory, equipment.
     • Intellectual resources: skills, employees' educational backgrounds, capabilities.
  6. Maximum Tolerable Downtime (MTD): MTD describes the point at which an organization can no longer maintain its operational integrity after a disruptive event (post-crisis); beyond this point, the costs of restoration are so high that recovery would not be worth it.
  7. Recovery Time Objective (RTO): RTO describes when management wants an activity to be back up and running. Meeting the RTO requires resources and therefore an allocation of economic funds. The RTO can be influenced by mitigating interventions, for instance by making Risk Management an integral part of the organization.

This figure can help illustrate what MTD and RTO mean during a disruptive event.
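To make the relationship between MTD and RTO a bit more concrete, here is a minimal sketch in Python. The activity names, the hour values and the check_recovery_targets function are purely hypothetical and chosen for illustration; the point is simply that a recovery plan is only acceptable when each critical activity's RTO is shorter than its MTD.

```python
# Hypothetical activities with illustrative MTD/RTO values in hours.
activities = {
    "production line": {"mtd_hours": 48, "rto_hours": 24},
    "order system":    {"mtd_hours": 12, "rto_hours": 16},
    "logistics":       {"mtd_hours": 72, "rto_hours": 36},
}

def check_recovery_targets(activities):
    """Flag activities whose planned recovery time (RTO) exceeds
    the maximum tolerable downtime (MTD)."""
    for name, a in activities.items():
        ok = a["rto_hours"] <= a["mtd_hours"]
        status = "OK" if ok else "AT RISK: RTO exceeds MTD"
        print(f"{name}: RTO {a['rto_hours']}h / MTD {a['mtd_hours']}h -> {status}")

check_recovery_targets(activities)
```

In this made-up example the order system would be flagged, which would mean either allocating more resources to shorten its RTO or increasing its robustness so that the MTD becomes longer.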

NAT and HRO; Summary

The NAT and HRO theories both simplify the causes of accidents. HRO underestimates the problems of uncertainty, while NAT recognizes the difficulty of dealing with uncertainty but underestimates and oversimplifies the potential ways to cope with it. Both theories treat redundancy as the only way to handle risk.

Limitations of Both NAT and HRO

Perrow's contribution with NAT was to identify interactive complexity and tight coupling as critical factors that should not be discounted. His top-down, system view of accidents versus the bottom-up, component-reliability view of the HRO theorists is critical to understanding and preventing future accidents. While the HRO theorists do offer more suggestions, most of them are inapplicable to complex systems or oversimplify the problems involved.

A top-down, systems approach to safety

First, it is important to recognize the difference between reliability and safety. HRO researchers talk about a “culture of reliability” where it is assumed that if each person and component in the system operates reliably, there will be no accidents.

Highly reliable systems are not necessarily safe and highly safe systems are not necessarily reliable. Reliability and safety are different qualities and should not be confused. In fact, these two qualities often conflict. Increasing reliability may decrease safety and increasing safety may decrease reliability.

Reliability in engineering is defined as the probability that a component satisfies its specified behavioral requirements over time and under given conditions. If a human operator does not follow the specified procedures, then they are not operating reliably. In some cases that can lead to an accident. In other cases, it may prevent an accident when the specified procedures turn out to be unsafe under the circumstances.

If the goal is to increase safety, then we should be talking about enhancing the safety culture, not the reliability culture. The safety culture is that part of organizational culture that reflects the general attitude and approaches to safety and risk management. Aircraft carriers do have a very strong safety culture, and many of the aspects of this culture observed by the HRO researchers can and should be copied by other organizations. However, labeling these characteristics as "reliability" is misleading and can lead to misunderstanding what is needed to increase safety in complex, tightly coupled systems.

Safety is an emergent or system property, not a component property. Determining whether a plant is acceptably safe is not possible by examining a single valve in the plant (although conclusions can be reached about the valve’s reliability). Safety can be determined only by the relationship between the valve behavior and the other plant components and often the external environment of the plant—that is, in the context of the whole. A component and its specified behavior may be perfectly safe in one system but not when used in another.

Sources:

Shrivastava, S., Sonpar, K. & Pazzaglia, F. (2009). "Normal Accident Theory versus High Reliability Theory: A resolution and call for an open systems view of accidents".

Marais, K., Dulac, N. & Leveson, N. "Beyond Normal Accidents and High Reliability Organizations: The need for an alternative approach to safety in complex systems", MIT.

NAT and HRO; HRO

As promised, this post is about High Reliability Organizations. If you haven't read the previous post about Normal Accident Theory (NAT), we recommend that you do, by following this link.

Around the same time Perrow articulated his Normal Accident Theory (NAT), another stream of research emerged. Scholars from the Berkeley campus of the University of California came together to study how organizations that operate complex, high-hazard systems manage to remain accident-free for long periods. This group and other scholars later became what we know as the HRO school.

High reliability organizations operate in complex, high-hazard domains for extended periods without serious accidents or catastrophic failures.

Just as in NAT, HROs are characterized by the two dimensions interactive complexity and loose/tight coupling, which Perrow claimed together determine a system's susceptibility to accidents.
Interactive complexity refers to the presence of unfamiliar, unplanned or unexpected sequences of events in a system that are either not visible or not immediately comprehensible. A tightly coupled system is one that is highly interdependent: each part of the system is tightly linked to many other parts, and a change in one part can therefore rapidly affect the status of other parts. Tightly coupled systems respond quickly to perturbations, but this response may be disastrous. Loosely coupled or decoupled systems have fewer or less tight links between parts and can therefore absorb failures or unplanned behavior without destabilization. An HRO is hypercomplex: it combines an extreme variety of components, systems and levels with very tight coupling, meaning a reciprocal interdependence across many units and levels.

Why reliability?

Although HRT scholars have abandoned attempts to explicitly define reliability, they appear to agree that reliability is the ability to maintain and execute error-free operations. 

HRO scholars report that HROs emphasize the following conditions as being necessary, but not sufficient, for ensuring reliability: a strategic prioritization of safety, careful attention to design and procedures, a limited degree of trial-and-error learning, redundancy, decentralized decision-making, continuous training (often through simulation), and strong cultures that encourage vigilance and responsiveness to potential accidents.

Why is this important?

It is important to recognize that standardization is necessary but not sufficient for achieving resilient and reliable health care systems. High reliability is an ongoing process, or an organizational frame of mind, rather than a specific structure. Examples of such organizations include health care organizations aiming to become highly reliable in their practice. Other examples include air traffic control systems, nuclear power plants and NASA. In each case, even a minor error can have catastrophic consequences.

Your organization

Even though your organization might not hold lives in its hands, all organizations still face risks to profits, customer satisfaction, and reputation. The behaviors of HROs can be very instructive for those trying to figure out how to error-proof processes, avoid surprises, and deliver the desired outcome every single time.

Sources:

Shrivastava, S., Sonpar, K. & Pazzaglia, F. (2009). "Normal Accident Theory versus High Reliability Theory: A resolution and call for an open systems view of accidents".

Marais, K., Dulac, N. & Leveson, N. "Beyond Normal Accidents and High Reliability Organizations: The need for an alternative approach to safety in complex systems", MIT.

NAT and HRO; NAT

Over the following three weeks, the theoretical debate between two dominant schools of thought on the origins of accidents and reliability, Normal Accident Theory (NAT) and High Reliability Theory (HRT), will be elaborated and evaluated. This week we start with NAT.

NAT

Background

Charles Perrow's initial formulation of what has come to be known as Normal Accident Theory (NAT) was developed in the aftermath of the accident at the Three Mile Island nuclear power plant in 1979. Perrow introduced the idea that in some technological systems, accidents are inevitable or "normal". He defined two related dimensions – interactive complexity and loose/tight coupling – which he claimed together determine a system's susceptibility to accidents.
Interactive complexity refers to the presence of unfamiliar or unplanned and unexpected sequences of events in a system that are either not visible or not immediately comprehensible. A tightly coupled system is one that is highly interdependent: Each part of the system is tightly linked to many other parts and therefore a change in one part can rapidly affect the status of other parts. Tightly coupled systems respond quickly to perturbations, but this response may be disastrous. Loosely coupled or decoupled systems have fewer or less tight links between parts and therefore can absorb failures or unplanned behavior without destabilization. 

According to the theory, systems, companies and organizations characterized by both interactive complexity and tight coupling will experience accidents that cannot be foreseen or prevented. Such accidents are called system accidents.

Why is this important?

But how do we get from nuclear meltdowns to offshore or construction projects? Once you start to dissect them, some of the projects we manage exhibit both of these features in abundance.

Our industry constantly deals with risk analysis scenarios, foul weather, temporary infrastructure and communications, staff unfamiliar with their role or location, large-scale deployment of team members with low levels of training, contractors, supply chains and much more. Any single one of these elements has the potential to suffer a failure that might interact unexpectedly with another part of the system. There are so many variables for a project team to deal with – especially when you add the unpredictability of humanity into the mix – that it is easy to imagine the interaction of dozens of potential accident scenarios over the course of a single project day. It is Perrow's "complexity" in a nutshell.

Perrow's conclusion, however, that accidents are inevitable in these systems, and that systems for which accidents would have extremely serious consequences therefore should not be built, is overly pessimistic. The argument advanced is essentially that efforts to improve safety in interactively complex, tightly coupled systems all involve increasing complexity and therefore only render accidents more likely. The flaw in the argument is that the only solution he considers for improving safety is redundancy. An alternative is an organization in which decisions are taken at the lowest appropriate level and coordination happens at the highest necessary level. In such systems, staff at all relevant levels are trained and – just as importantly – empowered and supported to take the potentially cost- and life-saving decisions.

Of course, this is easy to write and far harder to achieve: such is the complexity of the systems in which we operate that the impacts of decisions by individuals throughout the chain can have far-reaching effects, themselves adding to the problems we seek to resolve. Through planning and training, however, key roles and positions can be identified where the fast pace of tight coupling can be matched.

Next week we will elaborate on and evaluate High Reliability Organizations, so stay tuned!

Sources:

Shrivastava, S., Sonpar, K. & Pazzaglia, F. (2009). "Normal Accident Theory versus High Reliability Theory: A resolution and call for an open systems view of accidents".

Marais, K., Dulac, N. & Leveson, N. "Beyond Normal Accidents and High Reliability Organizations: The need for an alternative approach to safety in complex systems", MIT.

Risk Society; part 2

In the last post, we began to hear about what Ulrich Beck calls the "risk society", which we will elaborate on further today.

To sum up, the risk society is defined by Beck as "a systematic way of dealing with hazards and insecurities induced and introduced by modernization itself". To continue from last week, here is some further vocabulary taken from Beck's work, to guide us through the risk society in a more positive way.

Risk community
“Risk is not, in other words, the catastrophe, but the anticipation of the catastrophe. It is not a personal anticipation, it is a social construction. Today, people are aware that risks are transnational, and they are starting to believe in the possibility of an enormous catastrophe, like radical climate change or a terror attack. For this sole reason we find ourselves tied to others, beyond borders, religions, cultures. In one way or the other, risk produces a certain community of destination and, perhaps, even a worldwide public space”.

Culture of uncertainty
“What we need is a “culture of uncertainty”, which must clearly be distinguished from “culture of residual risk” on one hand, and the culture of “risk-free” or “security” on the other. The key to a culture of uncertainty is in the availability to openly talk of how we face risks, in the availability to recognize the difference between quantitative risks and non-quantitative uncertainties: in the availability to negotiate between various rationalities, rather than engaging in mutual condemnation; in the availability to raise modern taboos on rational bases; and – last but not least – in recognizing the importance of demonstrating to the collective will that we are acting in a responsible way in terms of the losses that will always happen, despite all precautions.

A culture of uncertainty that will no longer recklessly talk of “residual risk” because every interlocutor will recognize that risks are only residual if they happen to others, that the aim of a democratic community is to take on a common responsibility. The culture of uncertainty, however, is also different compared to a “culture of security”. By this I mean a culture where absolute security is considered a right towards which all society should lean. Such a culture would choke any innovation in its cage”.

Risk globalization
“Within the reflection of the modernization processes, productive forces have lost their innocence. The growth of technological-economic “progress” is ever more obscured by the production of risks. “Initially, they might be legitimized as “hidden side effects”. But with their universalization, with the criticism carried out by the public opinion and the (anti)scientific analysis, risks definitely emerge from latency and acquire a new and central meaning for social and political conflict.”

By this, we need to accept insecurity as an element of society.

For further reading

  • Beck, Ulrich (1992). Risk Society: Towards a New Modernity. Translated by Ritter, Mark. London: Sage Publications. ISBN 978-0-8039-8346-5.
  • Beck, Ulrich (2020) “Review of Risk Society: Towards a New Modernity”

Risk Society

We face risks everywhere we go in our everyday lives. Many of these are technological risks that, because they are produced by society, are considered preventable; a contrast to natural risks, which have traditionally been studied by hazards researchers. That said, both types of risk are affected by social, economic, political and cultural systems. In this way they are both "social" in form.

According to the German sociologist Ulrich Beck, we are all exposed to risk because we live in the risk society. Not because we live in a more dangerous world than before, but because risk has become the center of public debate.
This post is about the risk society we live in, and why we need to understand it, according to Ulrich Beck. Beck's research implies that we can only anticipate danger and overcome fears if we understand that risk has now become the center of life for each and every one of us. We live in a time where risk is on the global horizon, and companies as well as individuals are guided by it.
According to Beck, this is not only about acting responsibly; it also implies a strategic advantage. The ability to anticipate a risk means not turning emergencies into social panic, or fears into catastrophes.

To guide us through the risk society, Beck teaches us to make positive changes in the ways and practices of strategic decision-making.

Anticipating
“From a sociological perspective, the concept of risk is always a matter of anticipation. Risk is the anticipation of disaster in the now, in order to prevent that disaster from happening or even worse. Anticipating a risk means putting the potential danger in perspective. The anticipation of the disaster puts a strain on the most steadfast certainties, but offers everyone the chance to produce significant changes, kickstarting new energies”.

An unexpected event
“Even if there is no catastrophe, we find ourselves in the middle of a social development in which the expectation of the unexpected, the waiting for possible risks increasingly dominates the scene of our lives: individual risks and collective risks. A new phenomenon that becomes a stress factor for the institutions of law, finance, for the political system and even for families’ everyday lives. Being able to live in the risk society means anticipating the unexpected”.

Freedom within risk 
Risk is a constructive part of the social insecurity we live in. We cannot, however, define it in absolute terms: "Every insecurity is always relative to the context and the concrete risks that a person or a society needs to deal with".

Beck concludes: “we need to accept insecurity as an element of our freedom. It might seem paradoxical, but this is also a form of democratization: it is the choice, that is continually renewed, between various possible options. The change stems from this choice”. 

In the next post, we’ll learn even more about the risk society according to Ulrich Beck!

For further reading:

– Beck, Ulrich (1992). Risk Society: Towards a New Modernity. Translated by Ritter, Mark. London: Sage Publications. ISBN 978-0-8039-8346-5.

– Beck, Ulrich (2020) “Review of Risk Society: Towards a New Modernity”

Risk and culture

Culture is based on the unique human ability to classify experiences, codify such classifications and pass such abstractions on. Cultural theory aims to understand why different people and social groups fear different risks. More specifically, the theory claims that this is largely determined by social aspects and cultural adherence. The basis of cultural theory is the anthropologists Mary Douglas and Michael Thompson's grid-group typology (1978).

Why is this important? 

Cultural theory draws focus away from concepts such as risk and safety and towards social institutions. To deal with risk in a reasonable manner, you must understand the underlying mechanisms.
You can use this theory and model to understand cultures in countries and companies, and hence decide how to influence them. Also seek to understand your own culture, which may be different, as well as the multiple cultures that may at times come into conflict.

The grid/group typology 

Grid 
Grid refers to the degree to which individuals' choices are circumscribed by their position in society. At one end of the spectrum, people are homogeneous in their abilities, work and activity and can easily interchange roles. This makes them less dependent on one another. At the other end of the spectrum there are distinct roles and positions within the group, with specialization and differing accountability. There are also different degrees of entitlement depending on position, and there may be a different balance of exchange between individuals. This makes it useful to share and organize together.

Group  
Group refers to the degree of solidarity among members of the society and how strongly people are bonded together. On one hand, there are separated individuals who perhaps have a common reason to group together, though with less of a sense of unity and connection. On the other hand, some people have a connected sense of identity and relate more personally to one another.

The grid/group model
If the two dimensions are placed in a two-axis system, from low to high, four outcomes occur. These represent four different kinds of social environments, or biases, so to speak. They are termed the individualistic, egalitarian, hierarchical and fatalistic worldviews, and each has a self-preserving pattern of risk perception.

Individualistic

Individualists experience low grid and low group. They value individual initiative in the marketplace and fear threats that might obstruct their individual freedom. In general, individualists see risk as an opportunity, as long as it does not limit their freedom. Self-regulation is a critical principle here: if one person takes advantage of others, power differences arise and a fatalistic culture can develop.

Egalitarian

Egalitarians experience low grid and high group. The good of the group comes before the good of any individual, because everyone is equal. They fear developments that may increase inequality among people. Furthermore, they tend to be skeptical of expert knowledge, because they suspect that experts and strong institutions might misuse their authority.

Hierarchic

Hierarchists experience high grid and high group. A hierarchist society has a well-defined role for each member. Hierarchists believe in the need for a well-defined system of rules, and fear social deviance (such as crime) that disrupts those rules. Hierarchists have a great deal of faith in expert knowledge.  

Fatalistic

Fatalists experience high grid and low group. Fatalists take little part in social life, yet they feel tied to and regulated by social groups they do not belong to. This makes the fatalist quite indifferent about risk – what the person fears, and does not fear, is mostly decided by others. The fatalist would rather be unaware of dangers, since they are assumed to be unavoidable anyway.

These four worldviews, or ways of life, make up the central part of the cultural theory. 
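As a compact way of summing up the typology, the sketch below maps the two dimensions to the four worldviews described above. The dictionary and the classify function are our own illustrative construction, not part of the theory itself.

```python
# Map (grid, group) combinations to the four worldviews of cultural theory.
WORLDVIEWS = {
    ("low", "low"):   "individualistic",
    ("low", "high"):  "egalitarian",
    ("high", "high"): "hierarchical",
    ("high", "low"):  "fatalistic",
}

def classify(grid: str, group: str) -> str:
    """Return the worldview for a given grid/group combination ('low' or 'high')."""
    return WORLDVIEWS[(grid.lower(), group.lower())]

print(classify("high", "high"))  # -> hierarchical
print(classify("low", "high"))   # -> egalitarian
```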

Note: In addition to the four worldviews described above, cultural theory also describes a fifth group, the hermit, who withdraws from social involvement and does not fit this pattern.

See also 

  • Thompson, M., Ellis, R. & Wildavsky, A. (1990). Cultural Theory. Westview Press, Boulder, CO.

Why we are not very good at risk assessment

Intro

People, or at least most people, are not objective in their decision-making. We rely on our intuition to make decisions easier for us. Emotion is one bias we rely on to make decisions easier; past experience is another. Intuition is not a bad thing: it has arguably enabled us to survive in this world for a long time. But when it comes to making decisions regarding the safety of others or assessing risk situations, using our intuition, and the biases that follow, is not always a good thing. Let me explain…

But first a shoutout to the inspiration, and source, for this post: Marie Helweg-Larsen, a professor of psychology at Dickinson College. I saw Marie's talk and presentation at the annual risk management conference held by IDA, the Danish Society for Engineers. That talk inspired me to make this post, where I give my perspective on the subjects Marie presented.

How do we assess risk?

… And why is it important to understand?

I assess risk in order to help people do their job as safely as possible. But that's just one way I assess risk. In my personal life I also make risk assessments, rather frequently actually. Those are not exactly about helping me do my job as safely as possible; I have a desk job, so there is not much risk, at least not physical risk. No… I assess my risk in terms of health, my distant future, my not so distant future, etc… because risk assessments matter in terms of changing my mind, at least to some degree.

It is important to understand how people assess their risk if you want to change their mind about something. Thinking you are not at risk, based on your own assessment, makes you behave as if you are not at risk. If I then want to change your mind regarding your safety, I have to understand why you feel safe (not at risk), make my own risk assessment, present the results to you, and hopefully change your mind, at least to some degree.

This is difficult. Just look at the anti-vaccination and anti-mask movements in current times. Historically it has been difficult too… just look at seatbelts when they were first introduced. Smoking is another great example, and one Marie uses in her arguments as well. It is hard to persuade people to change their ways and change the way they assess their risks. But why?

Why are we not good at assessing risk?

Marie argues that this comes down to, at least, five biases.

  1. Optimistic bias
  2. Confirmation bias
  3. Affect bias
  4. Base rate fallacy
  5. Overconfidence bias

Allow me to explain…

1. Optimistic bias – the rose-coloured glasses

People, or individuals, have a tendency to believe they are less at risk than other people doing the same thing or in the same scenario. If we use smokers as an example: all smokers who smoke 15-20 cigarettes a day are more or less equally at risk of various diseases later in life. But when asked to rate their own risk, they rate it as lower than that of others. This is because we tend to focus more on the optimistic results or data when making decisions for ourselves than we do on the "negative" results. Why this is the case is beyond my understanding of the psychological science field, so I suggest reading some of Marie's research (linked in the Sources).

2. Confirmation bias

We tend to search for information that supports our own view or opinion. This is perhaps the most well-known bias, not only when dealing with risk, but in general, and a lot of research has been done on this tendency. Researchers make a great effort not to do this when writing papers, but they are not always equally successful.
Now, biased information leads to biased interpretation. This means we can interpret some risks as more substantial than others when we have found more information on one than on another.

An example Marie gave in her talk was the claim that women are worse drivers than men. But men choose to only acknowledge when women drive badly, and overlook when other men drive badly. This is the essence of confirmation bias: we overlook certain data or information in order to support our own opinion. Whether or not women are worse drivers than men, I will not discuss…

Marie argued that social media is a recent amplifier of this bias. Due to "the algorithm", people see more of the same stuff online in their social media group and are not exposed to the other side as much. Another amplifier is individuals' tendency to be strongly persuaded by groups. I won't go deeper into this right now, as I am not familiar with the specifics.

3. Affect heuristic

A mental shortcut that relies on emotions.

Sometimes we rely on our emotions to make certain decisions easier. When facing a hard decision, it can be easier to rely on emotions than on data. If we don't have the mental energy or surplus to look at all the data, analyse it, and then make a decision based on facts and reliable information, it is much easier to rely on one's emotions to make a decision. This works decently well in one's private and personal life. But in a professional setting, where other people's health, and in the worst case lives, may depend on your decision, you should not rely on your emotions. Instead you should rely on data and information gathered by experts or others in a similar situation.

Worries and fears increase perceived risk and vice-versa.

4. Base rate fallacy

We rely a lot on our previous experiences when assessing risk. Therefore a risk we have encountered more often is perceived as greater. Relying on previous experience is not a bad thing to do; it is arguably the reason humans still exist on this planet. We learn from bad decisions, or most of us do anyway, and it is what makes us able to adapt and survive.

We kind of covered this, and why it is a problem when assessing risk, in our Subjective risk perception post.

But when dealing with risk, previous experiences are both good and bad. You have experienced risky scenarios for a reason, and what that reason is, is the important question. If you have experience with software update failures and resulting data corruption, you are probably cautious when updating now. But the question of why you experienced the failure is an important one. Maybe you didn't do your preparations well enough (i.e., you slacked off when assessing the risks), or maybe you didn't do a back-up, and now the consequence is perceived as A LOT greater than if you had a back-up.

We should learn from our mistakes, to prevent risk scenarios in the future. But we should not base risk assessments on mistakes that could have been avoided and that we should have learned from.

5. Overconfidence bias

We tend to ignore experts and act as experts ourselves. The essence of this bias is the Dunning-Kruger effect, which many probably know: the tendency of people with low ability at a task to overestimate their own ability at that task. Whether or not many people do this in terms of risk assessment I don't know. Marie argued that it is a problem, and I tend to believe her research. But I have no experience with this bias myself, yet.

I suggest reading about the Dunning-Kruger effect to learn more.

There is more…

Another reason people are not very good at risk assessments is mathematics and numbers. Most people understand math at a basic level: addition, subtraction, multiplication and division. But not many people understand the math behind probability and statistics. This is another reason for relying on the biases mentioned above. We simply cannot always grasp the mathematics of risk assessments and the probability of risk scenarios. Of course this is not true of everyone… but it is true for most people who are not studying, or have not studied, math to some degree. I myself had to take a statistical mathematics course during my education to become a risk manager, and to be perfectly honest… I don't fully understand probability math either…
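To illustrate the kind of probability math that trips most of us up, here is a small worked example of the classic base rate problem. The numbers are invented purely for illustration: a rare failure with a 1% base rate and a monitoring check that sounds quite accurate.

```python
# Invented numbers, for illustration of how base rates affect probabilities.
base_rate = 0.01       # P(failure): 1% of systems actually fail
true_positive = 0.95   # P(alarm | failure)
false_positive = 0.05  # P(alarm | no failure)

# Bayes' theorem: P(failure | alarm)
p_alarm = true_positive * base_rate + false_positive * (1 - base_rate)
p_failure_given_alarm = true_positive * base_rate / p_alarm

print(f"P(failure | alarm) = {p_failure_given_alarm:.1%}")  # about 16%
```

Even though the check sounds accurate, most alarms turn out to be false because the base rate is so low; this is exactly the kind of result our intuition tends to get wrong.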

now…

What can we do then?

Can we do anything to combat these tendencies? Or are we S**t out of luck?… Of course we can do something!

We can start by accepting that we are biased by nature. When we know our own biases, we are much more capable of dealing with them and not being affected as much by them. This naturally starts with a lot of self-awareness and getting to know oneself. But once that is done, or in progress, you should start seeing results.

Another thing we can do is trust data and experts. If risk assessments are important to us, we should seek out valid information on the subject. Combine your experiences with data and statistics instead of just using one variable to make decisions.

Then we should dial down risk estimations. As mentioned, numbers are not very effective for most people. Therefore we should use more gist-based estimations, or figure out what works in our specific organisations or situations.

Last but not least we should just do the right thing.

In conclusion

Bias has a tendency to make people do dumb things. We tend to focus too much on the optimistic data or results when making decisions for ourselves. We tend to search for information that supports our opinion. We tend to be ruled by our emotions. We don't believe experts, and act as experts ourselves. And we overestimate our own abilities when doing tasks where we have little ability. To overcome these biases we have to gain self-awareness and acknowledge our biased views. We should believe experts and statistical data. We should dial down number-based risk estimations and use gist-based estimations to change people's minds. And we should always do the right thing.


That is all from me at this time. Feel free to comment on this post if you have questions or just want to express your opinion.


Sources:

Marie’s research: https://helweglarsen.socialpsychology.org/publications.

Dunning-Kruger effect: https://en.wikipedia.org/wiki/Dunning–Kruger_effect.

Seatbelts: https://www.wpr.org/surprisingly-controversial-history-seat-belts.

Where is my Bulldozer?!

From PMI.org: Have you seen my bulldozer? – Why integrating the execution of risk and quality processes is critical for a project!

Case 

This case study – which is said to be true – is about a project manager hired by a mining company to build a 3-mile road to the mining site. Upon completing the first 1.3 miles, stage 1, the company wants to celebrate and "officially" open the road, not with scissors and a ribbon, but with a bulldozer and a ribbon.

The bulldozer drives through the ribbon, breaking ground for stage 2, but to the project manager's surprise, she hears her colleagues yell "The dozer is sinking! The dozer is sinking!" Within six minutes of driving through the ribbon onto new ground, it has sunk out of sight…

Now some questions have to be answered: 

  1. What happened to the bulldozer, and how do we retrieve it? 
  2. How did this happen?
  3. How could this have been avoided? 
  4. How can we mitigate the impact on the project? 

Now, according to the case study, these questions were answered by the project manager, but a valuable exercise in risk management is reflection and creativity when discussing both previous scenarios and possible risk scenarios. Therefore, the first exercise in this case study is to discuss these four questions. Take note of the answers generated, as they might be useful later (and might be just the same as the real case answers).

Now for the answers to questions 1 and 2.

1) What happened to the bulldozer, and how do we retrieve it? 

It drove onto an unidentified/unmarked muskeg pocket (bog/quicksand). The crust covering the pocket cracked under the weight of the dozer, and it sank into the pocket, completely out of sight. A few different methods were attempted to retrieve the dozer, which was 50% self-insured by the company:  

  1. Sonar was attempted, but could not pinpoint the location of the dozer given all the other solid objects in the muskeg pocket. 
  2. Drag lines using a concrete pylon were attempted, but they did not locate the dozer. 
  3. The company sought permission to drain the pocket, but the environmental agency denied permission. 

2) How did this happen? The project manager knew that this was addressed in the risk management plan and that the quality plan addressed the possibility as well. After questioning the lead geological engineer, the project manager learned that soil samples had in fact been taken every 10th of a mile per the project plan, but that the lead geological engineer had not actually performed the core test on the samples, as he had completed a flyover of the landscape at the initiation of the project in the company helicopter and believed the terrain to be stable. 

Now for questions 3 and 4.

3) How could this have been avoided? 4) How can we mitigate the impact on the project? 

There are no definitive answers to these questions, as they differ from company to company and should be discussed internally before a project. Our recommendation is to have one person from each of the teams that are vital to the project answer and react to this case study together with a risk manager or a facilitator who understands the risk management perspective of the case.

By planning correctly and doing drills (preferably RoC drills) this scenario could have been avoided. Furthermore, mitigation, i.e. planning and exercises, is much less expensive than responding to accidents, and a lot of stress can be avoided.

Rehearse before!