Back in week 46 we briefly went over how subjective risk perception can affect collaboration between organisations. As promised, we will delve further into why that is in this blog post. Let us give you a short recap…
Subjective risk perception (Paul Slovic, Baruch Fischhoff & Sarah Lichtenstein) can be defined as an individual's way of understanding or making sense of risks.
But individuals don't always have access to statistical data about risks. They therefore base their conclusions about a risk on other factors, such as:
- Availability – we rate a risk according to how easily we can recall information about it.
- Overconfidence – we see ourselves as “better” than others in situations where risk is present (e.g. driving a car).
- “It won’t happen to me” – we have a tendency to think things don’t happen to us.
This heuristic way of dealing with risk has its advantages and disadvantages. Heuristic thinking about risk often leads to bias, and bias can lead to inaccurate risk assessments. Therefore you should always be aware of risk bias and risk perception when working with risk management!
To gain a better understanding of this theory, we suggest reading the material linked at the end of this blog post.
1 Effects on cooperation
When two or more organisations are working together on a project, they have to work with a set of rules – call it common ground. Once common ground is established, everyone on the project knows what to do and what not to do (at least in theory). The issue arises when one organisation either has its own agenda or in some other way seeks to gain control of the common ground. By doing this (consciously or unconsciously), it skews the cooperation towards its own interests. This also happens when dealing with risk.
This is why risk bias is especially important to understand!
1.1 Risk bias
Risk bias is essentially risk management decisions made from a personal set of empirical data instead of a common or well-known set. For example: the owner of an IT project once experienced an error while updating a large fleet of PCs that shut down his servers and erased a lot of data. This project owner will probably be biased towards this specific risk whenever another update comes along. This is a rather harmless bias, of course, but the example translates to any other project, whether it is in the construction sector, the offshore sector, etc.
For the project manager, and all other decision makers on the project, this means they have to be very careful when communicating and managing risk. Otherwise a lot of resources will be spent mitigating the wrong risks, while a risk with very harmful potential might get overlooked!
1.2 How to handle risk bias
Quick disclaimer… We do not have explicit answers for every risk bias scenario. But in our experience, and from what we have gathered in interviews, one of the best methods is as follows.
Frequent, respectful and precise communication between the consulting organisation and the project organisation is key to healthy cooperation. This is especially true when talking about risk and subjective risk perception. If both organisations are on the same page regarding risk analysis and risk management, the chance of failing to recognise each other's problems, and therefore overlooking a potentially harmful risk, is lowered significantly. That is at least what we have found when interviewing project managers.
To put this into practice… Communicate with your stakeholders and other cooperating partners. Try to understand why they are biased towards a risk (if that is the case), or why they have a specific perception of a risk. This of course works both ways.
Communication creates understanding and understanding means better cooperation.
2 Technical risk assessment
A well-known factor in risk management is the reliance on technical data and expert engineers (or the like) when making risk assessments. This could be called an industry bias, but that is not important here.
Relying on technical data and experts is not a problem if the project is purely technical. Unfortunately, projects seldom are, and this is where some issues arise.
One issue with relying on experts and technical data is that it limits the ability to think outside the box. Again: we are biased to view risk based on our own knowledge and expertise. The human error part of the risk evaluation is therefore left out of the equation. You can have all the data in the world and write the most detailed risk description of a system or a specific task, but if you forget to factor the human element into that description, you are prone to failure.
You need some people on the risk assessment team who are not experts – people who work in “the field”, or at least know what it is like to work there. These people usually bring creative thinking to the risk assessments, and you thereby gain a whole other understanding of risk scenarios where human factors are present. Consider how an unmanned aircraft cannot yet be relied upon to transfer cargo or passengers: the technology has been here for some time, but human (pilot) intuition is so crucial when a problem arises that we do not yet trust these systems.
2.1 Diverse group of people
One possible solution to the technical risk assessment problem is to use a diverse group of people when doing risk identification. A diverse group allows for 1) creativity from the non-experts, and 2) a highly accurate check by the experts on whether the non-experts are too creative, or whether their risk scenarios are even possible from a technical standpoint. By doing this you gain common ground throughout the whole project, from project owners, project managers and engineers to the people in the field. Nobody feels left out.
2.1.1 In practice
In practice, one way this could work builds on a statistical result called the Law of Large Numbers: the more data points you have, the closer their average gets to the true underlying average.
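As a quick, hypothetical illustration of this (the simulation, numbers and function name below are our own invention, not from any interview), here is a small Python sketch showing that the average of many noisy ratings stays within the scale and stabilises as the group grows:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def average_rating(n_raters, true_mean=3.0, spread=2.0):
    """Simulate n_raters scoring one risk on a 1-5 scale and return their average.

    Each rater's score is the 'true' value plus personal noise (their bias),
    clamped to the scale.
    """
    ratings = [
        min(5, max(1, round(random.gauss(true_mean, spread))))
        for _ in range(n_raters)
    ]
    return sum(ratings) / len(ratings)

# With more raters, the group average drifts less from run to run.
for n in (3, 30, 300):
    print(n, round(average_rating(n), 2))
```

The point of the sketch is only the intuition: individual ratings are noisy, but averages over a large, diverse group are much more stable.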
This works with risk management as follows:
- You have a team of diverse people (as many as possible).
- They are presented with risks (you could also have them brainstorm risk scenarios themselves in the beginning).
- They are then asked to rate these risks on both likelihood and consequence, giving a minimum and a maximum value (on a 1-10 or 1-5 scale – just be realistic).
- When all values are gathered, you can compute an average for every risk.
This is called the successive principle. According to one project manager we interviewed, the method works well; we suggest further reading on this principle, as we do not have first-hand experience with it.
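To make the steps above concrete, here is a minimal sketch of the averaging step (the risk name, the ratings and the simple likelihood × consequence score are our own illustrative assumptions, not real project data and not the successive principle as formally defined):

```python
# Each rater gives (min, max) estimates for likelihood and consequence on a 1-5 scale.
# The risk name and numbers below are made up for illustration only.
ratings = {
    "server update failure": [
        {"likelihood": (2, 4), "consequence": (3, 5)},
        {"likelihood": (1, 3), "consequence": (4, 5)},
        {"likelihood": (2, 5), "consequence": (2, 4)},
    ],
}

def summarise(risk_ratings):
    """Take each rater's (min, max) interval midpoint, then average across raters."""
    def avg(dimension):
        midpoints = [(lo + hi) / 2
                     for lo, hi in (r[dimension] for r in risk_ratings)]
        return sum(midpoints) / len(midpoints)

    likelihood = avg("likelihood")
    consequence = avg("consequence")
    # A common simple risk score is likelihood times consequence.
    return likelihood, consequence, likelihood * consequence

for name, rr in ratings.items():
    lik, con, score = summarise(rr)
    print(f"{name}: likelihood={lik:.2f}, consequence={con:.2f}, score={score:.2f}")
```

In a real workshop you would of course also look at the spread between the min and max values, not just the midpoints – a wide spread signals disagreement worth discussing.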
This method goes some way towards eliminating both risk bias and the negative effects of subjective risk perception. Of course, more work is needed to eliminate these factors completely, but you have to start somewhere.
We talked about subjective risk perception theory and risk bias, and how to handle the problems they can cause – with communication and understanding, and with the successive principle. We also discussed technical risk assessments and their downsides, as well as a method to avoid those downsides, again using the successive principle or simply gathering a diverse group of people when doing risk identification and risk management.
We hope you found this rather long read useful. If you have any questions, feel free to comment on this post down below and we will answer as soon as possible.