Intro
People, or at least most people, are not objective in their decision making. We rely on our intuition to make decisions easier for us. Emotion is one bias we rely on to make decisions easier; past experience is another. Intuition is not a bad thing; arguably, it has enabled us to survive in this world for a long time. But when it comes to making decisions about the safety of others, or assessing risk situations, relying on our intuition and the biases that follow is not always a good thing. Let me explain…
But first, a shoutout to the inspiration and source for this post: Marie Helweg-Larsen, a professor of psychology at Dickinson College. I saw Marie's talk and presentation at the annual risk management conference held by IDA, the Danish Society for Engineers. That talk inspired me to write this post, in which I give my perspective on the subjects Marie presented.
How do we assess risk?
… And why is it important to understand?
I assess risk in order to help people do their jobs as safely as possible. But that's just one way I assess risk. In my personal life I also make risk assessments, rather frequently actually. Those are not about doing my job safely; I have a desk job, so there is not much risk, at least not physical risk. No… I assess risks to my health, my distant future, my not-so-distant future, etc., because risk assessments matter when it comes to changing my mind, at least to some degree.
It is important to understand how people assess their risk if you want to change their mind about something. Thinking you are not at risk, based on your own assessment, makes you behave as if you are not at risk. If I then want to change your mind about your safety, I have to understand why you feel safe (not at risk), make my own risk assessment, present the results to you, and hopefully change your mind, at least to some degree.
This is difficult. Just look at the anti-vaccination and anti-mask movements in current times. Historically it has been difficult too… just look at seatbelts and when they were first introduced. Smoking is another great example, and one Marie uses in her arguments as well. It is hard to persuade people to change their ways and change the way they assess their risks. But why?
Why are we not good at assessing risk?
Marie argues that this comes down to at least five biases:
- Optimistic bias
- Confirmation bias
- Affect heuristic
- Base rate fallacy
- Overconfidence bias
Allow me to explain…
1. Optimistic bias - the rose-coloured glasses
People have a tendency to believe they are less at risk than other people doing the same thing or in the same scenario. If we use smokers as an example… all smokers who smoke 15-20 cigarettes a day are more or less equally at risk of various diseases later in life. But when asked to rate their own risk, they rate it as lower than that of other smokers. This is because we tend to focus more on the optimistic results or data when making decisions for ourselves than we do on the "negative" results. Why this is the case is beyond my understanding of the psychological sciences, so I suggest reading some of Marie's research (linked in the Sources).
2. Confirmation bias
We tend to search for information that supports our own view or opinion. This is perhaps the most well-known bias, not only when dealing with risk, but in general, and a lot of research has been done on this tendency. Researchers go to great lengths to avoid it when writing papers, but they are not always equally successful.
Now, biased information leads to biased interpretation. This means we can interpret some risks as more substantial than others when we have found more information on one than on another.
An example Marie gave in her talk was the claim that women are worse drivers than men. Men choose to only acknowledge when women drive badly and overlook when other men drive badly. This is the essence of confirmation bias: we overlook certain data or information in order to support our own opinion. Whether or not women are worse drivers than men, I will not discuss…
Marie argued that social media is a recent amplifier of this bias. Due to "the algorithm", people see more of the same content online within their social media groups, and they are not exposed to the other side as much. Another amplifier is individuals' tendency to be strongly persuaded by groups. I won't go deeper into this right now, as I am not familiar with the specifics.
3. Affect heuristic
A mental shortcut that relies on emotions.
Sometimes we rely on our emotions to make certain decisions easier. When facing a hard decision, it can be easier to rely on emotions than on data. If we don't have the mental energy or capacity to look at all the data, analyse it, and then make a decision based on facts and reliable information, it is much easier to rely on one's emotions. This works decently well in private, in one's personal life. But in a professional setting, where other people's health, and in the worst case their lives, may depend on your decision, you should not rely on your emotions. Instead you should rely on data and information gathered by experts or by others in a similar situation.
Worries and fears increase perceived risk, and vice versa.
4. Base rate fallacy
We rely a great deal on our previous experiences when assessing risk. Therefore a risk we have encountered more often is perceived as greater. Relying on previous experience is not a bad thing in itself; it is arguably the reason humans still exist on this planet. We learn from bad decisions, or most of us do anyway… It is what makes us able to adapt and survive.
We covered this, and why it is a problem when assessing risk, in our Subjective risk perception post.
But when dealing with risk, previous experience is both good and bad. You have experienced risky scenarios for a reason, and what that reason is, is the important question. If you have experienced software update failures and the resulting data corruption, you are probably cautious when updating now. But why you experienced the failure matters. Maybe you didn't prepare well enough (i.e., you slacked off when assessing the risks), or maybe you didn't make a backup, and now the consequence is perceived as A LOT greater than if you had a backup.
We should learn from our mistakes, to prevent risk scenarios in the future. But we should not base risk assessments on mistakes that could have been avoided and which we should have learned from.
5. Overconfidence bias
We tend to ignore experts and act as experts ourselves. The essence of this bias is the Dunning-Kruger effect, which many probably know: the tendency of people with low ability at a task to overestimate their own ability at that task. Whether or not many people do this in terms of risk assessment, I don't know. Marie argued that it is a problem, and I tend to believe her research. But I have no experience with this bias myself, yet.
I suggest reading about the Dunning-Kruger effect to learn more.
There is more…
Another reason people are not very good at risk assessments is mathematics and numbers. Most people understand math at a basic level: addition, subtraction, multiplication, and division. But not many people understand the math behind probability and statistics. This is another reason for relying on the biases mentioned above. We simply cannot always grasp the mathematics of risk assessments and the probability of risk scenarios. Of course this is not true of everyone… but it is true for most people who are not studying, or have not studied, math to some degree. I myself had to take a statistics course during my education to become a risk manager, and to be perfectly honest… I don't fully understand probability math either…
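To see why probability is so unintuitive, here is a minimal sketch of the base rate fallacy mentioned above. The scenario and numbers are made up purely for illustration: an inspection that is "99% reliable" can still be wrong most of the times it flags something, if the underlying problem is rare.

```python
# Minimal sketch of the base rate fallacy (illustrative, made-up numbers).
# Scenario: an inspection flags defective items, but defects are rare.

p_defect = 0.001            # base rate: 1 in 1,000 items is actually defective
p_flag_given_defect = 0.99  # the inspection catches 99% of real defects
p_flag_given_ok = 0.05      # ... but it also flags 5% of perfectly fine items

# Total probability of an item being flagged at all
p_flag = p_flag_given_defect * p_defect + p_flag_given_ok * (1 - p_defect)

# Bayes' rule: probability the item is really defective, given it was flagged
p_defect_given_flag = p_flag_given_defect * p_defect / p_flag

print(f"P(defect | flagged) = {p_defect_given_flag:.1%}")
# Roughly 1.9%: most flagged items are actually fine, because defects are so rare.
```

The point of the sketch is simply that the base rate (how rare the event really is) matters as much as the reliability of the signal, and that is exactly the part our intuition tends to skip.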
Now…
What can we do then?
Can we do anything to combat these tendencies? Or are we S**t out of luck?… Of course we can do something!
We can start by accepting that we are biased by nature. When we know our own biases, we are much more capable of dealing with them and of not being affected as much by them. This naturally starts with a lot of self-awareness and getting to know oneself. But once that is done, or at least in progress, you should start seeing results.
Another thing we can do is trust data and experts. If risk assessments are important to us, we should seek out valid information on the subject. Combine your experiences with data and statistics instead of just using one variable to make decisions.
Then we should dial down risk estimations. As mentioned, numbers are not very effective for most people. Therefore we should use more gist-based estimations, or figure out what works in our specific organisations or situations.
Last but not least, we should just do the right thing.
In conclusion
Bias has a tendency to make people do dumb things. We tend to focus too much on the optimistic data or results when making decisions for ourselves. We tend to search for information that supports our opinion. We tend to be ruled by our emotions. We don't believe experts, yet act as experts ourselves. And we overestimate our own abilities when doing tasks where we have little ability. To overcome these biases we have to gain self-awareness and acknowledge our biased views. We should believe experts and statistical data. We should dial down number-based risk estimations and use gist-based estimations to change people's minds. And we should always do the right thing.
That is all from me at this time. Feel free to comment on this post if you have questions or just want to express your opinion.
Sources:
Marie's research: https://helweglarsen.socialpsychology.org/publications
Dunning-Kruger effect: https://en.wikipedia.org/wiki/Dunning–Kruger_effect
Seatbelts: https://www.wpr.org/surprisingly-controversial-history-seat-belts