Why we are not very good at risk assessment

Intro

People, or at least most people, are not objective in their decision making. We rely on our intuition to make decisions easier for us. Emotion is one bias we rely on to make decisions easier. Past experience is another. Intuition is not a bad thing; it has arguably enabled us to survive in this world for a long time. But when it comes to making decisions regarding the safety of others, or assessing risk situations, relying on our intuition and the biases that follow is not always a good thing. Let me explain…

But first a shoutout to the inspiration, and source, for this post: Marie Helweg-Larsen, a professor of psychology at Dickinson College. I saw Marie’s talk and presentation at the annual risk management conference held by IDA, the Danish Society for Engineers. That talk inspired me to write this post, where I give my perspective on the subjects Marie presented.

How do we assess risk?

… And why is it important to understand?

I assess risk in order to help people do their jobs as safely as possible. But that is just one way I assess risk. In my personal life I also make risk assessments, rather frequently actually. Those are not about doing my job as safely as possible; I have a desk job, so there is not much risk, at least not physical risk. No… I assess risks related to my health, my distant future, my not-so-distant future, and so on, because risk assessments matter when it comes to changing my mind, at least to some degree.

It is important to understand how people assess their risk if you want to change their mind about something. Thinking you are not at risk, based on your own assessment, makes you behave as if you are not at risk. If I then want to change your mind regarding your safety, I have to understand why you feel safe (not at risk), make my own risk assessment, present the results to you, and hopefully change your mind, at least to some degree.

This is difficult. Just look at the anti-vaccination and anti-mask movements in current times. Historically it has been difficult too… just look at seatbelts when they were first introduced. Smoking is another great example, and one Marie uses in her arguments as well. It is hard to persuade people to change their ways and change the way they assess their risks. But why?

Why are we not good at assessing risk?

Marie argues that this comes down to, at least, five biases.

  1. Optimistic bias
  2. Confirmation bias
  3. Affect heuristic
  4. Base rate fallacy
  5. Overconfidence bias

Allow me to explain…

1. Optimistic bias – the rose-coloured glasses

People have a tendency to believe they are less at risk than other people doing the same thing or in the same scenario. If we use smokers as an example… All smokers who smoke 15-20 cigarettes a day are more or less equally at risk of various diseases later in life. But when asked to rate their own risk, they rate it as lower than that of other smokers. This is because we tend to focus more on the optimistic results or data when making decisions for ourselves than we do on the “negative” results. Now, why this is the case is beyond my understanding of the psychological science field, so I suggest reading some of Marie’s research (linked in the Sources).

2. Confirmation bias

We tend to search for information that supports our own view or opinion. This is perhaps the most well-known bias, not only when dealing with risk, but in general, and a lot of research has been done on this tendency. Researchers take great care not to do this when writing papers, but they are not always equally successful.
Now, biased information leads to biased interpretation. This means we can interpret one risk as more substantial than another simply because we have found more information on one than on the other.

An example Marie gave in her talk was the claim that women are worse drivers than men. Men who hold this view tend to notice only when women drive badly and overlook when other men drive badly. This is the essence of confirmation bias: we overlook certain data or information in order to support our own opinion. Whether or not women are worse drivers than men, I will not discuss…

Marie argued that social media is a recent amplifier of this bias. Due to “the algorithm”, people see more of the same content online within their own social media groups and are not exposed to the other side as much. Another amplifier is individuals’ tendency to be strongly persuaded by groups. I won’t go deeper into this right now, as I am not familiar with the specifics.

3. Affect heuristic

A mental shortcut that relies on emotions.

Sometimes we rely on our emotions to make certain decisions easier. When facing a hard decision, it can be easier to rely on emotions than on data. If we don’t have the mental energy or surplus to look at all the data, analyse it, and then make a decision based on facts and reliable information, it is much easier to rely on our emotions to make the decision. This works decently well in one’s private and personal life. But in a professional setting, where other people’s health, and in the worst case their lives, may depend on your decision, you should not rely on your emotions. Instead you should rely on data and information gathered by experts or by others in a similar situation.

Worries and fears increase perceived risk and vice-versa.

4. Base rate fallacy

We rely a lot on our previous experiences when assessing risk, so a risk we have encountered more often is perceived as greater, and our own vivid experiences get more weight than the underlying statistics. Relying on previous experience is not a bad thing in itself. It is arguably the reason humans still exist on this planet, because we learn from bad decisions, or most of us do anyway… It is what makes us able to adapt and survive.

We briefly covered this, and why it is a problem when assessing risk, in our Subjective risk perception post.

But when dealing with risk, previous experience is both good and bad. You have experienced risky scenarios for a reason, and what that reason is, is the important question. If you have experienced software update failures and the resulting data corruption, you are probably cautious when updating now. But why you experienced that failure is an important question. Maybe you didn’t prepare well enough (i.e., you slacked off when assessing the risks), or maybe you didn’t make a back-up, so the consequence is now perceived as A LOT greater than if you had a back-up?
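To make the point concrete, here is a small, purely illustrative sketch of how personal experience can drown out the broader statistics. The numbers (ten personal updates, two failures, a 0.5% industry-wide failure rate) are made-up assumptions, not data from any source.

```python
# Illustrative only: made-up numbers comparing a personal failure rate
# with an assumed broader base rate for the same kind of software update.

personal_updates = 10     # updates you have performed yourself
personal_failures = 2     # failures you personally experienced
base_rate = 0.005         # assumed industry-wide failure rate (0.5%)

experienced_rate = personal_failures / personal_updates
print(f"Experienced failure rate: {experienced_rate:.0%}")  # 20%
print(f"Assumed base rate:        {base_rate:.1%}")         # 0.5%

# The vivid 20% from personal experience easily outweighs the 0.5% base
# rate in our heads, which is the essence of the base rate fallacy.
```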

We should learn from our mistakes to prevent risk scenarios in the future. But we should not base our risk assessments on mistakes that could have been avoided and that we should already have learned from.

5. Overconfidence bias

We tend to ignore experts and act as experts ourselves. The essence of this bias is the Dunning-Kruger effect, which many probably know: the tendency of people with low ability at a task to overestimate their own ability at that task. Whether or not many people do this in terms of risk assessment, I don’t know. Marie argued that it is a problem, and I tend to believe her research. But I have no experience with this bias myself, yet.

I suggest reading about the Dunning-Kruger effect to learn more.

There is more…

Another reason people are not very good at risk assessment is mathematics and numbers. Most people understand math at a basic level: addition, subtraction, multiplication, and division. But not many people understand the math behind probability and statistics. This is another reason for relying on the above-mentioned biases. We simply cannot always grasp the mathematics of risk assessments and the probability of risk scenarios. Of course this is not true of everyone… But it is true for most people who are not studying, or have not studied, math to some degree. I myself had to take a statistical mathematics course during my education to become a risk manager, and to be perfectly honest… I don’t fully understand probability math either…
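As a small illustration of how unintuitive probability can be, here is a sketch with assumed numbers: a risk with a 1% chance of occurring on any single occasion, and 100 occasions of exposure. Both figures are picked purely for the example.

```python
# A small illustration of why probability is unintuitive: a risk with a
# 1% chance per occasion feels negligible, yet repeated exposure adds up.
# The numbers here are made up for illustration.

p_per_event = 0.01   # assumed 1% chance of an incident on a single occasion
occasions = 100      # assumed number of times you are exposed to the risk

# Probability of at least one incident over all occasions
p_at_least_one = 1 - (1 - p_per_event) ** occasions
print(f"{p_at_least_one:.0%}")  # roughly 63%, higher than intuition tends to suggest
```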

Now…

What can we do then?

Can we do anything to combat these tendencies? Or are we S**t out of luck?… Of course we can do something!

We can start by accepting that we are biased by nature. When we know our own biases, we are much more capable of dealing with them and of not being affected as much by them. This, naturally, starts with a lot of self-awareness and getting to know oneself. But once that is done, or at least in progress, you should start seeing results.

Another thing we can do is trust data and experts. If risk assessments are important to us, we should seek out valid information on the subject. Combine your experiences with data and statistics instead of just using one variable to make decisions.

Then we should dial down numerical risk estimations. As mentioned, numbers are not very effective for most people. Therefore we should use more gist-based estimations, or figure out what works in our specific organisations or situations.

Last but not least we should just do the right thing.

In conclusion

Bias has a tendency to make people do dumb things. We tend to focus too much on the optimistic data or results when making decisions for ourselves. We tend to search for information that supports our opinion. We tend to be ruled by our emotions. We don’t believe experts and instead act as experts ourselves. And we overestimate our own abilities when doing tasks where we have almost no ability. To overcome these biases we have to gain self-awareness and acknowledge our biased views. We should believe experts and statistical data. We should dial down number-based risk estimations and use gist-based estimations to change people’s minds. And we should always do the right thing.


That is all from me at this time. Feel free to comment on this post if you have questions or just want to express your opinion.


Sources:

Marie’s research: https://helweglarsen.socialpsychology.org/publications.

Dunning-Kruger effect: https://en.wikipedia.org/wiki/Dunning–Kruger_effect.

Seatbelts: https://www.wpr.org/surprisingly-controversial-history-seat-belts.

Subjective risk perception

Back in week 46 we briefly went over how subjective risk perception can affect collaboration between organisations. As promised, we will delve further into why that is in this blog post. Let us give you a short recap…

The Theory

Subjective risk perception (Paul Slovic, Baruch Fischhoff & Sarah Lichtenstein) can be defined as an individual’s way of understanding or making sense of risks.

But individuals don’t necessarily always have access to statistical data regarding risks. Therefore they base their conclusions regarding a risk on other factors, such as:

  • Availability – we rate a risk according to the information we have about it.
  • Overconfidence – we see ourselves as “better” than others in various situations where risk is present (driving a car, for example).
  • “It won’t happen to me” – we have a tendency to think things don’t happen to us.

This heuristic way of dealing with risk has its advantages and disadvantages. Heuristic thinking in regard to risk often leads to bias, and bias can lead to inaccurate risk assessments. Therefore you should always be aware of risk bias and risk perception when working with risk management!

To gain a better understanding of this theory, I suggest reading the material linked at the end of this blog post.

1 Effects on cooperation

When two or more organisations are working together on a project, they have to work with a shared set of rules; call it common ground. When you have gained common ground, everyone on the project knows what to do and what not to do (at least in theory). The issue arises when one organisation either has its own agenda or in some other way seeks to gain control of the common ground. By doing this (consciously or unconsciously), it skews the cooperation towards its own interests. This also happens when dealing with risk.

This is why risk bias is especially important to understand!

1.1 Risk bias

Risk bias is essentially risk management decisions made from a personal set of empirical data instead of a common or well-known set. For example, imagine the owner of an IT project who previously experienced an error while updating a large system of PCs, an error that shut down his servers and effectively erased a lot of data. This project owner is probably biased towards that specific risk whenever another update comes along. This is a rather harmless bias of course, but the example can be translated to every other project, whether it is in the construction sector, the offshore sector, etc.

For the project manager, and all other decision makers on the project, this means that they have to be very careful when communicating risk and when managing risk. Otherwise a lot of resources will be used to mitigate (lower) the wrong risks, and a risk with very harmful potential might get overlooked!

1.2 How to handle risk bias

Quick disclaimer… We do not have explicit answers to every risk bias scenario. But in our experience, and from what we have gathered from interviews, one of the best methods is as follows.

1.2.1 Communication

Frequent, respectful, and precise communication between the consulting organisation and the project organisation is key to healthy cooperation. This is especially true when talking about risk and subjective risk perception. If both organisations are on the same page regarding risk analysis and risk management, the chance of failing to recognise each other’s problems, and therefore overlooking a potentially harmful risk, is lowered significantly. That is at least what we have found when interviewing project managers.

To put this into practice… Communicate with your stakeholders and other cooperative partners. Try to understand why you think they are biased towards certain risks (if that is the case), or why they have a specific perception of risk. This of course works both ways.

Communication creates understanding and understanding means better cooperation.

2 Technical risk assessment

A well-known factor when working in risk management is the reliance on technical data and expert engineers, or the like, when making risk assessments. This could be called an industry bias, but that’s not important.

Relying on technical data and experts is not a problem if the project is purely technical. Unfortunately, projects seldom are, and this is where some issues arise.
One issue with relying on experts and technical data is the reduced ability to think outside the box. Again… we are biased to view risk based on our own knowledge and expertise. The human error part of the risk evaluation is therefore not part of the equation. You can have all the data in the world and make the most detailed risk description of a system or a specific task, but if you forget to factor the human element into this risk description, you are prone to failure.

You need to have some people on the risk assessment team who are not experts, who work in “the field” or at least know what it is like to work there. These people usually bring some creative thinking to the risk assessments, and thereby you gain a whole different understanding of the risk scenarios in which human factors are present. It is much like how an unmanned aircraft cannot, yet, be relied upon to transfer cargo or passengers: the technology is here and has been for some time, but human (pilot) intuition is so crucial if a problem arises that we do not yet trust these systems.

2.1 Diverse group of people

One possible solution to the technical risk assessment problem is the use of a diverse group of people when doing risk identification. A diverse group allows for 1) creativity from the creatives, the non-experts, and 2) a highly accurate assessment from the experts of whether the others are being too creative, or whether their risk scenarios are even possible from a technical standpoint. By doing this you gain common ground throughout the whole project, from project owners, project managers and engineers to the people in the field. Nobody feels left out.

2.1.1 In practice

In practice, one way this could work is with a statistical principle called the Law of Large Numbers: the more data points you have, the more accurate an average you will get.

This works with risk management as follows:

  1. You have a team of diverse people (as many as possible).
  2. They are presented with risks (you could also have them brainstorm risk scenarios themselves at the beginning).
  3. They are then asked to rate these risks on both likelihood and consequence, with a minimum and a maximum value (on a scale of 1-10 or 1-5; just be realistic).
  4. When all values are gathered, you can then calculate an average for every risk.

This is called the successive principle, and according to one project manager we interviewed the method works well. We suggest further reading on this principle, as we do not have hands-on experience with it ourselves. A minimal sketch of the averaging step is shown below.
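To make steps 3 and 4 concrete, here is a minimal sketch in Python of how the gathered values could be averaged. The risk names, participant roles, and the choice of using the midpoint of each min-max range are our own illustrative assumptions, not a prescribed part of the successive principle.

```python
# Minimal sketch of aggregating a team's risk estimates, assuming each
# participant gives a (min, max) range for likelihood and consequence
# on a 1-5 scale. Names and structure are illustrative, not a standard.

from statistics import mean

# Hypothetical input: one tuple per participant, per risk:
# (likelihood_min, likelihood_max, consequence_min, consequence_max)
estimates = {
    "server data loss during update": [
        (2, 4, 4, 5),   # engineer
        (1, 2, 3, 5),   # project manager
        (3, 5, 4, 5),   # field technician
    ],
    "supplier delivery delay": [
        (3, 4, 2, 3),
        (2, 3, 2, 4),
        (4, 5, 3, 3),
    ],
}

for risk, rows in estimates.items():
    # Use the midpoint of each participant's range, then average across
    # the group; more participants should smooth out individual bias.
    likelihood = mean((lo + hi) / 2 for lo, hi, _, _ in rows)
    consequence = mean((lo + hi) / 2 for _, _, lo, hi in rows)
    print(f"{risk}: likelihood ≈ {likelihood:.1f}, consequence ≈ {consequence:.1f}")
```

The Law of Large Numbers intuition from above is what makes this useful: with more participants, individual over- and underestimates tend to cancel out in the average.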

This method goes some way towards eliminating both risk bias and the negative effects of subjective risk perception. Of course, more work should be done to eliminate these factors completely, but you have to start somewhere.

Recap/conclusion

We talked about subjective risk perception theory and risk bias, and about how to handle the possible problems they cause, with communication and understanding and with the successive principle. We discussed technical risk assessments and their downsides, as well as a method to avoid those downsides, again using the successive principle, or simply by gathering a diverse group of people when doing risk identification and risk management.


We hope you found this rather long read useful. If you have any questions, feel free to comment on this post down below and we will answer as soon as possible.


Sources

1 – Risk perception theory can be found here.
2 – Further reading on the successive principle can be found here.

Where is my Bulldozer?!

From PMI.org: Have you seen my bulldozer? – Why integrating the execution of risk and quality processes are critical for a project! 

Case 

This case study – which is said to be true – is about a project manager hired by a mining company to build a 3-mile road to the mining site. Upon completing the first 1.3 miles, stage 1, the company wants to celebrate and “officially” open the road, not with scissors and a ribbon, but with a bulldozer and a ribbon.

The bulldozer drives through the ribbon, breaking ground for stage 2, but to the project manager’s surprise, she hears her colleagues yell “The dozer is sinking! The dozer is sinking!” Within six minutes of driving through the ribbon onto new ground, the dozer has sunk out of sight…

Now some questions have to be answered: 

  1. What happened to the bulldozer, and how do we retrieve it? 
  2. How did this happen?
  3. How could this have been avoided? 
  4. How can we mitigate the impact on the project? 

Now, according to the case study, these questions were answered by the project manager, but a valuable exercise in risk management is reflection and creativity when discussing both previous scenarios and possible risk scenarios. Therefore, the first exercise in this case study is to discuss these 4 questions. Take note of the answers generated, as they might be useful later (and might be just the same as the real case answers).

Now for the answers to questions 1 and 2.

1) What happened to the bulldozer, and how do we retrieve it? 

It drove onto an unidentified/unmarked muskeg pocket (bog/quicksand). The crust covering the pocket cracked under the weight of the dozer, and it sank into the pocket, completely out of sight. A few different methods were attempted to retrieve the dozer, which was 50% self-insured by the company:  

  1. Sonar was attempted, but could not pinpoint the location of the dozer given all the other solid objects in the muskeg pocket. 
  2. Drag lines using a concrete pylon were attempted, but they did not locate the dozer. 
  3. The company sought permission to drain the pocket, but the environmental agency denied permission. 

2) How did this happen? The project manager knew that this was addressed in the risk management plan and that the quality plan addressed the possibility as well. After questioning the lead geological engineer, the project manager learned that soil samples had in fact been taken every 10th of a mile per the project plan, but that the lead geological engineer had not actually performed the core test on the samples, as he had completed a flyover of the landscape at the initiation of the project in the company helicopter and believed the terrain to be stable. 

Now for questions 3 and 4.

3) How could this have been avoided? 4) How can we mitigate the impact on the project? 

There are no definitive answers to these questions, as they differ from company to company and should be discussed internally before a project. Our recommendation is to have one person from each individual team (the teams that are vital to the project) answer/react to this case study together with a risk manager or a facilitator who understands the risk management perspective of the case study.

By planning correctly and doing drills (preferably RoC Drills), this scenario could have been avoided. Furthermore, mitigation, i.e., planning and exercises, is much less expensive than responding to accidents, and a lot of stress can be avoided.

Rehearse before!